Nate Silver is yet another example of data reinventing the world we live in.
The day before the presidential election, Silver’s FiveThirtyEight blog drove 20 percent of the traffic to the New York Times website, according to The New Republic. Some said the methods of this new-age political forecaster were bunk, but people certainly paid attention. And in the end, he was right, predicting the outcome of the presidential race in all 50 states using hard data rather than gut feel.
In 2008, he was nearly as successful, predicting 49 out of 50 states.
No doubt, some will continue to badmouth his methods. The 34-year-old has tested his model on only two presidential elections, and he reveals only so much about how the model works. What we really need is an open source version of Silver’s methods. As Zeynep Tufekci points out in her opinion piece on Silver, this would allow for peer review and eliminate much of the controversy around his predictions. It would also let many others benefit from his methods — not only in the political world but perhaps in other areas as well.
It’s understandable that Silver and The Times want to keep the methodology under wraps. Silver’s work is driving valuable traffic to The Times’ website, and if he reveals his methods, the site loses a competitive advantage. In the end, peer review isn’t all that important to The Times. But the peer review problem only gets bigger as publications start to imitate The Times, as they surely will. We’ll have all sorts of secret algorithms competing against each other — and no one will quite know whom to trust.