DavidMRothschild on June 11, 2013 @ 10:49AM
We start with three different types of data …
Voter Intention Polling Data: Polling data has been the most prominent component of election forecasts for decades. From 1936 to about 2000, it was standard to simply display the raw results of individual voter intention polls as an implicit forecast of an election. By 2004, poll aggregation had become common on the internet (see Pollster.com). Although aggregated polls provide both stability and accuracy relative to individual poll results, as an implicit estimate of vote share they still succumb to two well-known poll-based biases, especially earlier in the cycle: polls show larger margins than the eventual election results, and they have an anti-incumbency bias (i.e., early leads in polls fade toward Election Day, and incumbent-party candidates earn higher vote shares on Election Day than their poll numbers in the late summer and early fall suggest) (see James Campbell). In 2008, some websites finally began publishing aggregated and then debiased poll-based forecasts (see Nate Silver, 2008). Further, they shifted the outcome from expected vote share to the probability of victory in the Electoral College or senatorial elections. Yet raw, daily polls still dominate popular press coverage, and simple aggregation and debiasing of polls is only just starting to permeate the academic literature.
Fundamental Data: There is also a long history of econometric models that forecast elections with fundamental data. These models use a variety of economic and political indicators, such as past election results, incumbency, presidential approval ratings, economic indicators, ideological indicators, biographical information, policy indices, military situations, and even the facial features of the candidates. There are numerous examples of articles that forecast the national presidential vote share; there is a nine-page reference list in my paper with Patrick Hummel. However, there are few models that focus on Electoral College or senatorial elections; most simply forecast national vote shares. Further, models that include late-arriving or non-duplicable data dominate the literature and press; these models cannot create forecasts until late in the cycle, if they can create forecasts before the election at all.
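To make the flavor of these models concrete, here is a minimal sketch of a fundamentals-style regression: incumbent-party vote share regressed on a few indicators, fit on past elections by ordinary least squares. The indicator choices and every number below are hypothetical stand-ins; this is not the specification from my paper with Patrick Hummel.

```python
import numpy as np

# Hypothetical training data: one row per past presidential election.
# Columns: Q2 GDP growth (%), net presidential approval, incumbent running (0/1).
X = np.array([
    [2.9,  10, 1],
    [0.8, -15, 0],
    [3.8,  20, 1],
    [1.2,  -5, 1],
    [2.2,   5, 0],
])
# Incumbent-party share of the two-party vote (%), also hypothetical.
y = np.array([53.4, 46.5, 54.7, 49.2, 51.1])

# Fit vote share on the fundamentals by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])   # add an intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Forecast a new election from indicators available months in advance.
x_new = np.array([1.0, 2.0, 3.0, 1.0])      # intercept, growth, approval, incumbency
print(f"Forecast incumbent-party vote share: {coef @ x_new:.1f}%")
```

The appeal of this class of models is exactly what the sketch shows: every input can be known months before Election Day, so a forecast exists long before polls stabilize.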
Prediction Market Data: The modern history of prediction markets is not as long as that of the other two data sources. The Iowa Electronic Market launched the modern era of prediction markets in 1988, introducing a winner-takes-all market in 1992. This type of market trades binary options which pay, for example, $10 if the chosen candidate wins and $0 otherwise. Thus, an investor who pays $6 for a “Democrat to Win” stock, and holds the stock through Election Day, earns $4 if the Democrat wins and loses $6 if the Democrat loses. In that scenario, if there are no transaction or opportunity costs, the investor should be willing to pay up to the price at which the price-to-payoff ratio equals her estimated probability of the Democrat winning the election. The market price is the value at which, if a marginal investor were willing to buy above it, investors would sell the stock and drive the price back down to that market price (and vice-versa if an investor were willing to sell below it); thus, the price is an aggregation of the subjective probability beliefs of all investors. Scholars have found that prediction market prices can create more accurate forecasts than poll-based forecasts in the last few cycles (see Berg et al. or my earlier paper) and in historical elections (see Paul Rhode and Koleman Strumpf). Like polls and fundamental data, prediction market prices also suffer a bias, the favorite-longshot bias. Unfortunately, both the press and academia, when they acknowledge prediction markets at all, cite only raw prediction market prices as forecasts, thus failing to correct for this bias.
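The arithmetic in that example reduces to a few lines of code (a toy illustration; real markets add fees, bid-ask spreads, and margin requirements):

```python
# Winner-takes-all contract: pays $10 if the candidate wins, $0 otherwise.
payoff, price = 10.00, 6.00

implied_prob = price / payoff   # 0.60 -- the market's aggregated belief
print(f"Implied probability: {implied_prob:.0%}")

# An investor whose subjective probability is p should buy whenever
# p > implied_prob (ignoring transaction and opportunity costs).
p = 0.65
expected_profit = p * (payoff - price) - (1 - p) * price
print(f"Expected profit at belief {p:.0%}: ${expected_profit:+.2f}")  # +$0.50
```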
We ask what makes a good forecast …
One simple question motivates our method: what combination of these three key data types creates the most accurate, relevant, and timely forecasts (i.e., the most efficient and useful forecasts for the relevant stakeholders)? First, the answer is crucial for researchers studying electoral politics, or any other domain with forecasts, because accurate and granular forecasts allow them to connect shocks to the campaign with changes in the underlying likelihood of the relevant outcomes. Second, forecast accuracy is important for practitioners (i.e., campaigns or investors in campaigns) who want to make efficient choices when they spend time and money in the multi-billion-dollar industry of political campaigns.
Accuracy: There have been few meaningful attempts to combine these different data types into a single forecast, even though the literature is clear that combining data is generally very effective in increasing accuracy. There are a few exceptions, but most papers investigate only the national vote share and use simpler interpretations of the raw data. Overall, three related but largely non-intersecting academic literatures persist, despite their shared goal of accurately forecasting election outcomes.
Relevancy: There is little discussion about what is the most relevant forecast. Academic forecasts tend to estimate vote share for two key reasons: first, the academic literature focuses on incremental improvements to historical forecasts, and estimated vote share is the historical standard; second, observers frequently interpret raw polls as naïve estimates of vote share, making vote share the simplest rubric. Expected vote share is certainly still extremely important for election workers, especially broken down by targetable demographics, but the marketplace for the general population is very clear that it desires the probability of victory in the Electoral College. Further, state-by-state forecasts for the Electoral College not only offer a more compelling indicator for researchers and practitioners or investors; they also provide much more identification than forecasts of the national vote share.
Timeliness: There is little emphasis on the utility of a forecast at the time it is released; academia and the press judge forecasts either at the moment they are released or as if they were released on the eve of the event. Yet both election researchers and practitioners benefit from early forecasts, when there are more resources left to allocate. And both benefit from timely forecasts, which provide a granular account of the election for researchers and are up-to-date when practitioners or investors need to make a decision.
We create a model that combines the three types of data and maximizes the three attributes of a good forecast …
First, we aggregate and then debias the raw voter intention polling data, using parameters that we calibrate separately by election type, days before the election, and the certainty of the raw data. The resulting forecast is the most accurate poll-based forecast readily available.
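A stylized version of this step, with a made-up shrinkage schedule standing in for our calibrated parameters: average the recent two-party poll shares, then pull the margin toward 50/50 by an amount that falls as Election Day approaches.

```python
import numpy as np

def poll_forecast(dem_shares, days_out, shrink_at_130=0.40):
    """Aggregate recent two-party poll shares, then debias the margin.

    The shrinkage schedule here is illustrative: early polls overstate
    margins, so the margin is pulled toward 50/50, less so as Election
    Day nears. The real parameters are calibrated separately by election
    type, days before the election, and the certainty of the raw data.
    """
    avg = np.mean(dem_shares)                        # simple poll aggregation
    shrink = shrink_at_130 * min(days_out, 130) / 130
    return 0.5 + (avg - 0.5) * (1 - shrink)          # debiased vote-share estimate

# A candidate averaging 56% in recent polls, 100 days out:
print(poll_forecast([0.57, 0.55, 0.56], days_out=100))  # ~0.542
```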
Second, we examine and clarify the transformation that debiases raw prediction market data, yielding an improved prediction market-based forecast.
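For illustration, one common shape for such a debiasing transformation stretches prices away from 50 percent on the probit scale, making favorites more likely and longshots less likely than their raw prices imply. The coefficient of 1.64 below is a placeholder value, not our fitted calibration.

```python
from scipy.stats import norm

def debias_market_price(price, alpha=1.64):
    """Correct favorite-longshot bias by stretching the price away from 0.5
    on the probit (inverse-normal) scale. alpha > 1 boosts favorites and
    dampens longshots; 1.64 is a placeholder, not a fitted value.
    """
    return norm.cdf(alpha * norm.ppf(price))

print(debias_market_price(0.60))  # a raw 60% price becomes roughly a 66% probability
print(debias_market_price(0.10))  # a raw 10% longshot falls to roughly 2%
```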
Third, we combine the three forecasts based on polling data, fundamental data (using my work with Patrick Hummel), and prediction market data. The weighting parameters for our model demonstrate and capitalize on the shifting strength of the different forecast types across the studied timeframe; at 130 days out, the forecast averages the separate forecasts from all three data types, but the weight on the fundamental model's unique information decreases until Election Day, when the forecast is an average of the poll- and prediction market-based forecasts.
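The shape of that weighting schedule, in stylized form: equal thirds at 130 days out, with the fundamental weight decaying to zero by Election Day. The linear decay is an illustrative assumption, not our fitted schedule.

```python
def combine(poll, market, fundamental, days_out, horizon=130):
    """Blend the three component forecasts with time-varying weights.

    At `horizon` days out all three get equal weight; the fundamental
    model's weight then decays to zero by Election Day (linear decay is
    an illustrative assumption), leaving an average of the poll- and
    prediction market-based forecasts.
    """
    w_fund = (min(days_out, horizon) / horizon) / 3
    w_rest = (1 - w_fund) / 2
    return w_rest * poll + w_rest * market + w_fund * fundamental

print(combine(0.70, 0.74, 0.60, days_out=130))  # 0.68 -- equal thirds
print(combine(0.70, 0.74, 0.60, days_out=0))    # 0.72 -- polls and markets only
```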
We see how we did in 2012 …
We do not like to dwell too much on single election cycles, as the correlation between the outcomes somewhat diminishes the explanatory power of even 84 (51 Electoral College and 33 senatorial) different outcomes. Yet, PredictWise's forecast does well in predicting the 2012 election; the below chart shows the errors every 4 hours for the last 130 days of the 2012 election. Unlike in the within-sample years on which we created the model, it was not dominant at every point in the cycle, but it was the most consistent forecast. For a span of about 30 days early in the cycle when poll-based forecasts had a lower error than prediction market-based forecasts, PredictWise's forecast was either below or near the poll-based forecast. From the end of the summer until the last month of the campaign, a span of about 45 days when the prediction market-based forecast had a lower error than polls, PredictWise's forecast again held closely to the lowest errors. At any given moment from 130 days before the election to Election Day in 2012, PredictWise's forecast was likely to have a lower error than either the purely poll-based or purely prediction market-based forecast.
Chart: Accuracy of probability-of-victory estimates for Electoral College and senatorial elections by fundamental data, voter intention polling, and prediction market-based forecasts, along with PredictWise, for 2012
There is no comparison with FiveThirtyEight in this post, because there is no comparison with FiveThirtyEight. I have compared PredictWise to three single-data forecasts created with voter intention polling, fundamental, and prediction market data; FiveThirtyEight is some unknown combination of polling and fundamental data. First, FiveThirtyEight did not post senatorial predictions until about Labor Day. Without predictions during the toughest part of the process, it is impossible to compare our forecasts. Second, FiveThirtyEight updated sparingly until the very end, so its forecasts were frequently stale. And, for the record, we had 50 of 51 Electoral College races correct on February 16, 2012 … forecast records on the eve of the election are not useful to the stakeholders and do not interest us.
DavidMRothschild on May 18, 2013 @ 10:54AM
May 18 at 6:05 PM ET: Halfway through the voting, only two viable countries are left: Denmark (89%) and Ukraine (7%). Of course, these were our initial first and second predictions for first place.
5:58 PM ET via Twitter: Calling it for Denmark with 17 of 39 countries voting! #ev2013
5:51 PM ET via Twitter: It is now officially a two-team race between Azerbaijan and Denmark at #EV2013!
4:29 PM ET via Twitter: Top worldwide trend on Twitter is #EV2013. Unsure what it means that #Eurovision2013 is the 6th top trend. Real-time forecast: ow.ly/lapuc
May 18 at 12:00 ET: On Monday, May 13, I cemented my pre-Eurovision 2013 predictions with Denmark at 41% likely to win. After a strong semi-final performance, and with the field shrinking from 39 to 26 competitors, my prediction on the eve of the final now has Denmark at 55% likely to win. Norway is the second most likely winner at 14%, followed by Russia and Ukraine at 5% each. You can follow the predictions live as they update during the competition tonight here:
DavidMRothschild on March 02, 2013 @ 7:10PM
I judge my predictions on four major attributes: relevancy, timeliness, accuracy, and cost-effectiveness. I am very proud of my 2013 Oscar predictions, because they excelled in all four attributes: they predicted all 24 categories (and all combinations of categories), moved in real-time, were very accurate, and built on a scalable and flexible prediction model.
Relevancy is the only one of my major attributes that relies on the subjective input of stakeholders rather than an objective measure; I relied on people with more domain-specific information than myself about what I should predict, and, after watching my first Oscar show from start to finish, it certainly felt that any relevant set of predictions should cover all 24 categories. Of the six major categories that fall into the standard set of predictions, only two, best supporting actor and actress, are scattered into the first 20 awards. The show does not get to the biggest four awards (best picture, director, actor, and actress) until well past 11:30 PM ET. If I were watching the Oscars casually with my family or friends, I would certainly want information on all 24 categories to sustain interest throughout the telecast. Further, predictions in all 24 categories are necessary to predict the total quantity of awards won by any given movie.
The real-time nature of my predictions proved extremely interesting in both quantifying and understanding the major trends of the awards season; further, predictions that are created just before Oscar day are not available to interested people during these earlier events. Both major trends that I illustrated the day before the Oscars played a big role on Oscar night: Argo's rise with the award show victories and Zero Dark Thirty's fall with the increased concern over its depiction of torture. A third trend is evident in both of the major categories where Django Unchained competed. The winner of best supporting actor, Christoph Waltz, moved from a small 10 percent likelihood of victory at the start of the season to 40 percent on Oscar day, a hair behind Lincoln's Tommy Lee Jones. And, taking advantage of Zero Dark Thirty's fall, Django Unchained came from behind to a commanding lead in the prediction for best original screenplay by Oscar day.
The first judge of accuracy is the error; my error is meaningfully smaller than the best comparisons. A simple way to calculate the error is to take the mean of the squared error for each nominee, where the error is (1 - probability of victory) for a winner and (0 - probability of victory) for a loser. A full set of predictions covers 122 nominees: 22 categories of 5, 1 category of 9, and 1 category of 3. My final predictions at 4 PM ET on Oscar day had an MSE of 0.067. One comparison is my earliest set of predictions, which had an MSE of 0.108; the error got smaller and smaller as the award shows and other information spread into my predictions. Nate Silver's FiveThirtyEight only predicted the big six categories, and Mr. Silver provided prediction points rather than probabilities. But, converting his predictions into probabilities by dividing each nominee's points by the sum of points in the category, he had an MSE of 0.075 for those six categories to my meaningfully smaller 0.056. A final comparison is with my only input that had all 24 categories: the Oscar day prediction of Betfair, the prediction market, has virtually the same error as mine. That is why I also consider calibration.
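The error computation itself is easy to replicate (the nominee probabilities below are hypothetical):

```python
def mse(predictions):
    """Mean squared error over (probability, won) pairs: the error is
    (1 - p) for a winner and (0 - p) for a loser, squared and averaged."""
    return sum((won - p) ** 2 for p, won in predictions) / len(predictions)

# One five-nominee category; probabilities sum to 1 and are hypothetical.
category = [(0.55, 1), (0.25, 0), (0.10, 0), (0.06, 0), (0.04, 0)]
print(round(mse(category), 3))  # 0.056 for this category
```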
The second judge of accuracy is the calibration; my calibration is very strong. The easiest way to check calibration is to chart the percentage of predictions that occur for bucketed groups of predictions (e.g., of all of my predictions around 20 percent, how many occur?). As you can see from the chart, when I made a prediction that was around 20 percent, around 20 percent of those predictions occurred. Admittedly, my gut was a little concerned with predictions like 99 percent for Life of Pi to win best visual effects or 97 percent for Les Miserables to win best sound mixing, but that is why I trust my data/models, not my gut. Betfair, which has nearly identical errors, is systematically underconfident; while my predictions dance around the magical 45-degree line (i.e., the perfect calibration line), 100 percent of Betfair prices that round to 50, 80, 90, and 100 occur, while prices that round to 70 occur 80 percent of the time.
Sources: Betfair, Intrade, Hollywood Stock Exchange
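The bucketing behind that chart is straightforward to reproduce: group the (probability, outcome) pairs by rounded probability, then compare each bucket's average prediction with its realized frequency. The data below are hypothetical.

```python
from collections import defaultdict

def calibration_table(predictions, bucket_width=0.1):
    """Group (probability, occurred) pairs into buckets and report, for
    each bucket, the mean prediction versus the realized frequency."""
    buckets = defaultdict(list)
    for p, occurred in predictions:
        buckets[round(p / bucket_width) * bucket_width].append((p, occurred))
    for b in sorted(buckets):
        pairs = buckets[b]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        print(f"bucket ~{b:.0%}: predicted {mean_p:.0%}, occurred {freq:.0%}")

# Hypothetical (probability, occurred) pairs:
calibration_table([(0.18, 0), (0.22, 1), (0.21, 0), (0.79, 1), (0.83, 1)])
```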
The third judge of accuracy has to do with the models themselves and is not borne out in any one set of outcomes: is the prediction model robust for the future, or over-fitted to the past and/or present? First, my models examine the historical data, but are carefully crafted using both in-sample and out-of-sample data to ensure they predict the future rather than describe the past. Second, I always calibrate and release my models without any data from the current set of events. Models released too close to an event frequently suffer from inadvertent "look-ahead bias," where the forecaster, knowing what the other forecasters and his/her gut are saying, inadvertently massages the model to provide the prediction they want. That is why I release my models to run at the start of any season, without ever checking what the current season's data will predict before they are released.
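In stylized form, the discipline looks like this: calibrate any free parameters on past seasons only, freeze them, and score the current season strictly out-of-sample. The one-parameter model and all data below are hypothetical stand-ins for the real pipeline.

```python
def fit_shrinkage(train_pairs):
    """Choose the margin-shrinkage (toward 50/50) that minimizes squared
    error on past seasons only -- a one-parameter stand-in for a model."""
    def sse(s):
        return sum((won - (0.5 + (p - 0.5) * (1 - s))) ** 2
                   for p, won in train_pairs)
    return min((i / 100 for i in range(101)), key=sse)

# Calibrate on earlier seasons (hypothetical data) and freeze the parameter
# before ever looking at the current season's data.
past = [(0.90, 1), (0.70, 1), (0.60, 0), (0.20, 0), (0.40, 1)]
shrink = fit_shrinkage(past)

# Score the held-out current season strictly out-of-sample.
current = [(0.80, 1), (0.30, 0)]
oos = sum((w - (0.5 + (p - 0.5) * (1 - shrink))) ** 2 for p, w in current) / len(current)
print(f"frozen shrinkage: {shrink:.2f}, out-of-sample MSE: {oos:.3f}")
```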
The cost-effective nature of my modeling is the key to predicting all 24 categories. Along with traditional fundamental data, my model relies primarily on easily scalable data, like prediction markets and user-generated experimental data. It is scalable and cost-effective models/data that will eventually allow us to make incredible quantities of predictions in a wide range of domains. Adding accuracy to an existing set of predictions, like best actor or expected national vote share in the presidential election, is fun and could be meaningful, but creating accurate, real-time predictions for ranges of questions that could not exist before is the real challenge and goal of my work.
This column is syndicated with the HuffingtonPost.
Relevant, real-time, accurate, and scalable: 2013 Oscar predictions are a win for predictive science
DavidMRothschild on February 25, 2013 @ 12:33AM
Predicting the Oscars, for me, is not about the Oscars per se, but about the science of predicting. The challenge was to make predictions in all 24 categories, when most predictions cover only 6. The challenge was to make predictions that move in real-time during the period between the nominations and the Oscars, when most predictions are static. The challenge was to make predictions that were accurate, not just in binary correctness, but in calibrated probabilities. The challenge was to make these predictions cost-effective, so that they could not only scale to 24 categories, but be useful for making predictions in varying domains. Prediction market data, including Betfair, Hollywood Stock Exchange, and Intrade, combined with some user-generated data from WiseQ, allowed me to meet all of these challenges.
I was able to produce predictions for all 24 categories, expanding down the list through film editing, sound mixing, etc. I showed how these predictions moved in real-time during the period between the Oscar nominations and the Oscars. For example, Argo zoomed upward in the best picture and adapted screenplay categories as Zero Dark Thirty plunged in best actress and original screenplay. I was very accurate, with 19 of 24 categories correct and the winners in the other 5 categories showing reasonably high probabilities. Prediction market data and experimental prediction games harnessed the wisdom of the crowds to allow me to scale easily to all 24 categories. These same data/models will allow me to easily expand to all sorts of domains in the near future.