I am pleased to announce that FantasySCOTUS: Crowdsourcing a Prediction Market for the Supreme Court, co-authored with Adam Aft and Corey Carpenter, is on SSRN. This article analyzes the predictions made during the October 2009 Term on FantasySCOTUS.net. Based on this data, FantasySCOTUS accurately predicted a majority of the cases, and the top-ranked experts predicted over 75% of the cases correctly. Additionally, we compared our predictions to the path-breaking Supreme Court Forecasting Project, which developed a supercrunching algorithm to predict cases based on certain case characteristics; the FantasySCOTUS top three experts not only outperformed the Forecasting Project's experts, but also slightly outperformed the Project's model (75.7% compared with 75%). Finally, we assessed whether the Supreme Court prediction market merely mirrors the media, that is, whether people make predictions based on coverage of the cases in the news and blogosphere. Using an expansive search process that considered both old-school and new-school media, we found a correlation between media coverage and predictions.
This was a very fun paper to write, and it reveals great insights into how people perceive the Court.
Here is the abstract:
Every year the Supreme Court of the United States captivates the minds and curiosity of millions of Americans, yet the inner workings of the Court are not fully transparent. The Court, without explanation, decides only the cases it wishes. The Justices deliberate and assign authorship in private. They hear oral arguments and, without notice, issue an opinion months later. They sometimes offer enigmatic clues during oral arguments through their questions. Between arguments and the day the Court issues an opinion, the outcome of a case is essentially a mystery. Sometimes the outcome falls along predictable lines; other times it is a complete surprise.
Court-watchers frequently make predictions about the cases in articles, on blogs, and elsewhere. Individually, some may be right and some may be wrong, but in the aggregate, these predictions tend to be correct. Until recently, there was no way to pool this collective wisdom and generate accurate real-time predictions for all cases pending before the United States Supreme Court.
Now there is such a tool: FantasySCOTUS.net from the Harlan Institute, the Internet's premier Supreme Court Fantasy League and the first crowdsourced prediction market for jurisprudential speculation. During the October 2009 Supreme Court Term, its 5,000 members made over 11,000 predictions across all 81 cases decided. Based on this data, FantasySCOTUS accurately predicted a majority of the cases, and the top-ranked experts predicted over 75% of the cases correctly. With this combined knowledge, we now have a method to determine, with a degree of certainty, how the Justices will decide cases before they do. This essay explores the wisdom of the crowds in this prediction market and assesses the accuracy of FantasySCOTUS's predictions.
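As a rough illustration of the crowdsourcing mechanic (not the paper's actual scoring methodology), the market's signal for a given case can be read as a simple majority vote over its members' predictions. The Python sketch below uses entirely made-up votes:

    from collections import Counter

    # Hypothetical member predictions for a single pending case; each entry
    # is one member's predicted outcome. The values are invented for
    # illustration only.
    predictions = ["reverse", "reverse", "affirm", "reverse", "affirm"]

    # The crowd's prediction is the most common individual prediction.
    outcome, votes = Counter(predictions).most_common(1)[0]
    print(f"Crowd predicts {outcome} ({votes} of {len(predictions)} votes)")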
This essay also presents the first detailed analysis of, and comparison with, the path-breaking Supreme Court Forecasting Project, documented in a 2004 Columbia Law Review article. The Project developed a sophisticated algorithm that, using decision trees, predicted how the Justices would decide cases based on certain characteristics of a case, such as the circuit of origin, the type of case, and its political ideology. To test the power of their model, the organizers of the Forecasting Project assembled a cadre of Supreme Court experts, litigators, and academics to make predictions about the same cases.
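To give a concrete sense of the decision-tree technique the Project used, here is a minimal scikit-learn sketch. The feature encodings, training cases, and labels are all invented, so this illustrates the general approach rather than the Project's actual model:

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical encodings: each row represents one case as
    # [circuit of origin, case type, political ideology].
    # The numeric codes and affirm/reverse labels are invented.
    X_train = [
        [9, 0, 1],  # e.g., 9th Circuit, criminal, liberal decision below
        [5, 1, 0],
        [9, 1, 1],
        [2, 0, 0],
    ]
    y_train = ["reverse", "affirm", "reverse", "affirm"]

    # Fit a shallow decision tree, the same family of model as the Project's.
    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(X_train, y_train)

    # Predict the outcome of a new, hypothetical case.
    print(model.predict([[9, 0, 0]]))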
During the October 2002 Term, the Project's model predicted 75% of the cases correctly, which was more accurate than the Forecasting Project's experts, who predicted only 59.1% of the cases correctly. The FantasySCOTUS experts predicted 64.7% of the cases correctly, surpassing the Forecasting Project's experts, though the difference was not statistically significant. The Gold, Silver, and Bronze medalists in FantasySCOTUS scored staggering accuracy rates of 80%, 75%, and 72%, respectively (an average of 75.7%). The FantasySCOTUS top three experts not only outperformed the Forecasting Project's experts, but also slightly outperformed the Project's model (75.7% compared with 75%).
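For the arithmetic behind that 75.7% figure, it is simply the mean of the three medalists' rates:

    # Mean of the medalists' accuracy rates (80%, 75%, 72%).
    rates = [80, 75, 72]
    print(sum(rates) / len(rates))  # 75.666..., reported as 75.7%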
Finally, this essay assesses whether a Supreme Court prediction market merely mirrors the media, that is, whether people make predictions based on coverage of the cases in the news and blogosphere. To complete this assessment, we searched several sources for stories about the ten cases we considered: the ALLNEWS database on Westlaw, the Legal US News database on LexisNexis, and a custom Google search engine we programmed that sifts through the 2010 ABA Journal Blawg 100. Using this expansive search process, which considered both old-school and new-school media, we found a correlation between media coverage and predictions.
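The essay's correlation statistic is not specified here, but as a minimal sketch of the kind of check involved, assuming a Pearson correlation between per-case story counts and the number of predictions each case drew, with made-up figures for the ten cases:

    from scipy.stats import pearsonr

    # Invented per-case figures: stories found in the media searches, and
    # predictions each case drew on FantasySCOTUS.
    story_counts = [12, 45, 3, 30, 8, 22, 15, 5, 40, 18]
    prediction_counts = [150, 600, 40, 420, 90, 300, 210, 70, 520, 260]

    # Pearson's r measures the linear association between the two series.
    r, p_value = pearsonr(story_counts, prediction_counts)
    print(f"r = {r:.2f}, p = {p_value:.3f}")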
The inner workings of the Supreme Court of the United States are shrouded in secrecy. From the first Monday in October to the last week in June, the Justices operate behind the scenes to determine some of the most important issues in our society. Now FantasySCOTUS can provide accurate, real-time predictions of how the Court will decide these cases. The FantasySCOTUS crowdsourced prediction market provides unprecedented insight into the decision-making of the United States Supreme Court.