PredictWise’s Election Performance


We start with three different types of data …

Voter Intention Polling Data: Polling data has been the most prominent component of election forecasts for decades. From 1936 to about 2000, it was standard simply to display the raw results of individual voter intention polls as an implicit forecast of an election. By 2004, poll aggregation had become common on the internet (see Pollster.com). Although aggregated polls provide both stability and accuracy relative to individual poll results, as an implicit estimate of vote share they still succumb to two well-known poll-based biases, especially earlier in the cycle: polls show larger margins than the election results, and they have an anti-incumbency bias (i.e., early leads in polls fade towards Election Day, and incumbent-party candidates earn higher vote shares on Election Day than their poll numbers in the late summer and early fall suggest) (see James Campbell). In 2008 some websites finally began publishing aggregated and then debiased poll-based forecasts (see Nate Silver, 2008). Further, they shifted the outcome of interest from expected vote share to the probability of victory in the Electoral College or in senatorial elections. Yet raw, daily polls still dominate popular press coverage, and simple aggregation and debiasing of polls are only just starting to permeate the academic literature.

Fundamental Data: There is also a long history of econometric models that forecast elections with fundamental data. These models use a variety of economic and political indicators, such as past election results, incumbency, presidential approval ratings, economic indicators, ideological indicators, biographical information, policy indices, military situations, and facial features of the candidates. There are numerous examples of articles that forecast the national presidential vote share; there is a nine-page reference list in my paper with Patrick Hummel. However, there are few models that focus on Electoral College or senatorial elections; most simply forecast national vote shares. Further, models that include late-arriving or non-duplicable data dominate the literature and the press; these models cannot create forecasts until late in the cycle, if they can create forecasts before the election at all.
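
To give a concrete flavor of this class of models, here is a minimal sketch of a fundamental-data regression that fits the incumbent party’s vote share to a few early-arriving indicators with ordinary least squares. The variables, data values, and coefficients are purely illustrative; this is not the model from my paper with Patrick Hummel.

```python
import numpy as np

# Illustrative fundamental-data model: regress the incumbent party's
# two-party vote share on a few early-arriving indicators via OLS.
# Columns (all made up for the sketch): Q2 GDP growth, net presidential
# approval, an incumbent-running dummy.
X = np.array([
    [3.1,  12.0, 1.0],
    [0.4,  -8.0, 1.0],
    [2.2,   5.0, 0.0],
    [1.8,  -2.0, 0.0],
    [4.0,  20.0, 1.0],
])
y = np.array([54.7, 48.9, 51.2, 49.8, 56.1])  # incumbent-party vote share (%)

# Add an intercept and fit by least squares.
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Forecast a new race from its fundamentals (intercept, then indicators).
new_race = np.array([1.0, 2.5, 3.0, 1.0])
print(beta, new_race @ beta)
```

Because the inputs are available months before the election, a model like this can produce a forecast early in the cycle, which is precisely the appeal of the fundamental-data approach.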

Prediction Market Data: The modern history of prediction markets is not as long as that of the other two data sources. The Iowa Electronic Market launched the modern era of prediction markets in 1988 and introduced a winner-takes-all market in 1992. This type of market trades binary options that pay, for example, $10 if the chosen candidate wins and $0 otherwise. Thus, an investor who pays $6 for a “Democrat to Win” stock, and holds the stock through Election Day, earns $4 if the Democrat wins and loses $6 if the Democrat loses. In that scenario, if there are no transaction or opportunity costs, the investor should be willing to pay up to the $10 payoff times her estimated probability of the Democrat winning, so the price, expressed as a fraction of the payoff, reveals that probability. The market price is the value at which, if a marginal investor were willing to buy above it, other investors would sell the stock and drive the price back down to that market price (and vice versa if an investor were willing to sell below it); thus, the price is an aggregation of the subjective probability beliefs of all investors. Scholars have found that prediction market prices can create more accurate forecasts than poll-based forecasts in the last few cycles (see Berg et al. or my earlier paper) and in historical elections (see Paul Rhode and Koleman Strumpf). Like polls and fundamental data, prediction market prices also suffer from a bias, the favorite-longshot bias. Unfortunately, both the press and academia, if they acknowledge prediction markets at all, cite only raw prediction market prices as forecasts, failing to correct for this bias.
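
As a quick illustration of the arithmetic in the example above, the sketch below converts a winner-takes-all price into an implied probability and an expected profit. The $10 payoff and $6 price follow the text; everything else is assumed purely for illustration.

```python
# Implied probability from a winner-takes-all contract that pays $10 if
# the candidate wins and $0 otherwise (numbers follow the example above).
payoff = 10.0
price = 6.0

implied_prob = price / payoff      # 0.60
profit_if_win = payoff - price     # +$4
loss_if_lose = -price              # -$6

# A risk-neutral trader with belief p buys when p * payoff > price,
# i.e. when p > implied_prob; this is what ties the market price to the
# aggregated probability beliefs of the investors.
expected_profit = lambda p: p * payoff - price
print(implied_prob, profit_if_win, loss_if_lose, expected_profit(0.65))
```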

We ask what makes a good forecast …

One simple question motivates our method: what combination of these three key data types creates the most accurate, relevant, and timely forecasts (i.e., the most efficient and useful forecasts for the relevant stakeholders)? First, the answer is crucial for researchers studying electoral politics, or any other domain with forecasts, because accurate and granular forecasts allow them to connect shocks to the campaign with changes in the underlying likelihood of the relevant outcomes. Second, forecast accuracy is important for practitioners (i.e., campaigns or investors in campaigns) who want to make efficient choices when they spend time and money in the multi-billion-dollar industry of political campaigns.

Accuracy: There have been few meaningful attempts to combine these different data types into a single forecast, even though the literature is clear that combining data is generally very effective in increasing accuracy. There are a few exceptions, but most of those papers investigate only the national vote share and use simpler interpretations of the raw data. Overall, three related but largely non-intersecting academic literatures persist, despite their shared goal of accurately forecasting election outcomes.

Relevancy: There is little discussion about what is the most relevant forecast. Academic forecasts tend to estimate vote share for two key reasons: the academic literature focuses on incremental improvements over historical forecasts, and estimated vote share is the historical standard; further, observers frequently interpret raw polls as naïve estimates of vote share, making it the simplest rubric. Expected vote share is certainly still extremely important for campaign workers, especially broken down by targetable demographics, but the marketplace for the general population is very clear that it desires the probability of victory in the Electoral College. Further, state-by-state forecasts for the Electoral College not only offer a more compelling indicator for researchers and practitioners or investors, they also provide much more identification than forecasts of the national vote share.

Timeliness: There is little emphasis on the utility of a forecast at the time it is released; academia and the press judge forecasts either at the moment they are released or as if they had been released on the eve of the event. Yet both election researchers and practitioners benefit from early forecasts, when there are more resources left to allocate. And both benefit from timely forecasts, which provide a granular account of the election for researchers and are up to date when practitioners or investors need to make a decision.

We create a model that combines the three types of data and maximizes the three attributes of a good forecast …

First, we aggregate and then debias the raw voter intention polling data, using parameters that we calibrate separately by election type, days before the election, and the certainty of the raw data. The resulting forecast is the most accurate poll-based forecast readily available.
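
Here is a minimal sketch of what “aggregate then debias” can look like, assuming an exponential down-weighting of older polls, a shrinkage of the poll margin toward 50-50 that grows with days before the election, and a small incumbent-party adjustment. The function names, half-life, and coefficients are placeholders, not the parameters we actually calibrate by election type, days out, and certainty of the raw data.

```python
import numpy as np

def aggregate_polls(polls, ages_days, half_life=7.0):
    """Exponentially down-weight older polls and average the
    incumbent-party share (a simple stand-in for poll aggregation)."""
    polls = np.asarray(polls, dtype=float)
    w = 0.5 ** (np.asarray(ages_days, dtype=float) / half_life)
    return float(np.sum(w * polls) / np.sum(w))

def debias(poll_share, days_out, shrink_per_day=0.003, incumbent_bump=0.01):
    """Illustrative debiasing: early poll margins overstate the final
    margin, so shrink toward 50%, and add a small incumbent-party bump.
    The coefficients here are placeholders, not calibrated values."""
    shrink = min(0.9, shrink_per_day * days_out)   # more shrinkage earlier
    debiased = 0.5 + (poll_share - 0.5) * (1.0 - shrink)
    return debiased + incumbent_bump

snapshot = aggregate_polls([0.55, 0.53, 0.57], ages_days=[1, 4, 10])
print(snapshot, debias(snapshot, days_out=100))
```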

Second, we examine and clarify the transformation that debiases raw prediction market data, yielding an improved prediction market-based forecast.
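
One common way to express a favorite-longshot correction of this kind is a probit-style stretch that pushes prices near 1 closer to 1 and prices near 0 closer to 0. The sketch below uses that form with an illustrative coefficient; it is not necessarily the exact transformation calibrated in our model.

```python
from scipy.stats import norm

def debias_market_price(price, stretch=1.64):
    """Probit-style favorite-longshot correction: map the raw price through
    the inverse normal CDF, stretch it, and map back. The stretch
    coefficient is illustrative, not a calibrated value."""
    return float(norm.cdf(stretch * norm.ppf(price)))

for p in (0.10, 0.40, 0.60, 0.90):
    print(p, round(debias_market_price(p), 3))
```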

Third, we combine the three forecasts based on polling data, fundamental data (using my work with Patrick Hummel), and prediction market data. The weighting parameters of our model demonstrate and capitalize on the shifting strength of the different forecast types across the studied timeframe: 130 days out, the forecast averages the separate forecasts from all three data types, but the fundamental model’s unique information declines toward Election Day, when the forecast is an average of the poll-based and prediction market-based forecasts.
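
The following sketch shows one simple way to encode that weighting pattern, with the fundamental model’s weight declining linearly from one third at 130 days out to zero on Election Day. The linear schedule and the input probabilities are assumptions for illustration; the actual weights in our model are estimated, not fixed like this.

```python
def combine_forecasts(poll_prob, market_prob, fundamental_prob, days_out,
                      horizon=130):
    """Illustrative combination: equal weights at `horizon` days out; the
    fundamental weight decays linearly to zero by Election Day, leaving a
    simple average of the poll- and market-based forecasts."""
    w_fund = (days_out / horizon) / 3.0   # 1/3 at 130 days out, 0 on Election Day
    w_rest = (1.0 - w_fund) / 2.0         # split the remainder evenly
    return w_rest * poll_prob + w_rest * market_prob + w_fund * fundamental_prob

print(combine_forecasts(0.62, 0.70, 0.55, days_out=130))  # equal average of all three
print(combine_forecasts(0.62, 0.70, 0.55, days_out=0))    # average of polls and markets
```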

We see how we did in 2012 …

We do not like to dwell too much on single election cycles, as the correlation between the outcomes somewhat diminishes the explanatory power of even 84 (51 Electoral College and 33 senatorial) different outcomes. Yet PredictWise’s forecast does well in predicting the 2012 election; the chart below shows the errors every 4 hours for the last 130 days of the 2012 election. Unlike in the within-sample results from previous years, on which we built the model, it was not dominant at every point in the cycle, but it was the most consistent forecast. For a span of about 30 days early in the cycle when poll-based forecasts had a lower error than prediction market-based forecasts, PredictWise’s forecast was either below or near the poll-based forecast. From the end of the summer until the last month of the campaign, a span of about 45 days when prediction market-based forecasts had a lower error than polls, PredictWise’s forecast again held closely to the lowest errors. At any given moment from 130 days before the election to Election Day in 2012, PredictWise’s forecast was likely to have a lower error than either the purely poll-based or purely prediction market-based forecast.

Chart: Accuracy of probability-of-victory estimates for Electoral College and senatorial elections from fundamental data, voter intention poll, and prediction market-based forecasts, along with PredictWise, for 2012

There is no comparison with FiveThirtyEight in this post, because there is no comparison with FiveThirtyEight. I have compared PredictWise to three single-data-type forecasts created with voter intention polling, fundamental, and prediction market data; FiveThirtyEight is some unknown combination of polling and fundamental data. First, FiveThirtyEight did not post senatorial predictions until about Labor Day. Without predictions during the toughest part of the process, it is impossible to compare our forecasts. Second, FiveThirtyEight updated sparingly until the very end, so their forecasts were frequently stale. And, for the record, we had 50 of 51 Electoral College races correct on February 16, 2012 … forecast records on the eve of the election are not useful to the stakeholders and do not interest us.