Four years ago, Nate Silver, data analyst extraordinaire and editor-in-chief at fivethirtyeight.com, wrote The Signal and the Noise, a book subtitled Why So Many Predictions Fail — but Some Don’t. It explains why predictive analytics fail so often and, according to the book jacket, “what happens when Big Data meets human nature.” Silver went on to correctly predict every election result for the next four years. Until this past November.

The surprise victory of U.S. President-elect Donald Trump now joins the list of infamous predictive failures, right alongside 9/11 and the 2008 housing crisis, Silver said in San Francisco just after the election.

Delivering the evening keynote at FutureStack16, Silver talked to New Relic customers about the nexus between big data and prediction. There are now petabytes of data to sort through, and the pool of available data expands daily.

We are still in the infant stages of data analysis, Silver said, because we are now working with petabytes of data, and such vast amounts have not been available for very long. And it is very important to remember that data is not knowledge. It needs interpreting.

On election day, the predictions of a Trump victory ranged from the Princeton model’s 1 percent chance to FiveThirtyEight’s 29 percent chance. These are really different forecasts, Silver said, even though they all used the same data. And they all turned out to be wrong.


These are the same issues facing IT data analysts every day.

The more data you have, the more the complexity increases. If you have five variables, for example, there are ten possible two-way relationships between them.

Data is not interesting for its own sake, but for how it relates to everything else. It’s like a map: a coordinate by itself doesn’t tell you anything; its meaning comes from how it relates to everything around it. But as your data set grows from five variables to ten, the number of relationships more than quadruples, and the complexity only compounds from there.
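To make that growth concrete, here is a minimal Python sketch (our illustration, not from Silver’s talk) counting two-way relationships as the number of variables climbs:

```python
from math import comb

# Two-way relationships among n variables: n choose 2.
for n in (5, 10, 20):
    print(f"{n} variables -> {comb(n, 2)} two-way relationships")
# 5 variables -> 10 two-way relationships
# 10 variables -> 45 two-way relationships
# 20 variables -> 190 two-way relationships
```

Doubling the variables more than quadruples the pairs, and that is before you consider three-way or higher combinations.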

The widespread availability of data in some fields adds to this complexity. In economics, for example, the Federal Reserve publishes some 384,000 variables in real time, which yields roughly 73.7 billion two-way relationships you could test. And of course, once you start looking at multi-way relationships, the complexity explodes further still.
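The same arithmetic, applied to the Federal Reserve figure Silver cited (again, a sketch of ours for illustration):

```python
from math import comb

n = 384_000  # variables Silver says the Federal Reserve publishes
print(comb(n, 2))  # 73,727,808,000 -- roughly 73.7 billion two-way pairs

# Multi-way relationships grow far faster: roughly n**k / k! for k-way.
print(comb(n, 3))  # ~9.4e15 three-way combinations
```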

This results, Silver said, in a lot of false positive correlations: relationships that show up in the data but do not actually mean anything.
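A quick simulation shows why. Test enough unrelated variables against one another at the usual 5 percent significance level, and “significant” correlations appear from pure noise. This sketch (ours, using NumPy and SciPy on random data) is one way to see it:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
data = rng.normal(size=(100, 50))  # 50 unrelated variables, 100 samples each

false_positives, pairs = 0, 0
for i in range(50):
    for j in range(i + 1, 50):
        _, p = pearsonr(data[:, i], data[:, j])
        pairs += 1
        if p < 0.05:
            false_positives += 1

# 1,225 pairs at a 5 percent threshold: expect about 60 "significant"
# correlations even though every variable is pure noise.
print(f"{false_positives} of {pairs} pairs look significant")
```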

Back in the 1980s, there seemed to be a strong correlation (not causation, but correlation) between who won the Super Bowl and how the stock market did the following year. When an NFC team won — and the San Francisco 49ers were winning a lot in those days — the market would go up; in years an AFC team won, the stock market would have a bad year. That held until 2008, when all bets were off. The pattern was statistically significant, but there was no causality behind the correlation and no meaning in the connection.

“If you are finding a correlation, but not finding a cause behind it,” said Silver, “don’t take that bet.”

Data is increasing exponentially, so there is a lot of noise out there, and there always will be, along with false correlations. Silver told the engineers they need to apply common sense alongside their algorithms.


You can program algorithms to eliminate some of the false positives, but common sense or gut instinct still goes a long way, he said. Get 80 percent of the way there with your algorithms, then bring in common sense. If something doesn’t seem like it should be right, it probably isn’t.
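One standard way to program that filtering is a multiple-testing correction. The sketch below uses a simple Bonferroni adjustment; Silver did not name a specific method, so this is our illustration of the idea:

```python
def bonferroni_filter(p_values, alpha=0.05):
    """Keep only results whose p-value clears alpha divided by the
    number of tests run -- the Bonferroni-corrected threshold."""
    threshold = alpha / len(p_values)
    return [i for i, p in enumerate(p_values) if p < threshold]

# Three tests at alpha=0.05 must each clear p < ~0.0167 to survive.
print(bonferroni_filter([0.001, 0.04, 0.00001]))  # [0, 2]
```

Corrections like this handle the first 80 percent; the remaining judgment call, as Silver says, is still yours.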

When to Question Your Findings

If you’ve discovered an insight that none of your competitors has discovered, double-check your data, Silver advised. Remember that your competitors are almost as smart as you are. Sadly, oftentimes when you are out of consensus, you have a bug in your model, not a new feature.