DavidMRothschild on March 01, 2014 @ 3:02PM
I always emphasize four traits in any good indicator or prediction: accurate, relevant, timely, and scalable. Accurate means a small error and well calibrated. Relevant means that the values are the right values for the stakeholders. Timely means that the values start early and update regularly. Scalable means that the methods and platforms allow many different values to be generated with minimal marginal cost.
The Oscar predictions at PredictWise hit all four. First, we offer the probability of victory for all 24 categories. While some people offer binary outcomes and some people offer 6 or 8 or 9 categories, we do not settle for anything short of the right question for the right outcomes. Second, we are confident that our predictions are well calibrated, in that if we say 10 things are 90% likely to happen, 9 of them will happen. Right now we have about an 85% likelihood on the leader in all 24 categories. That means we expect between 20 and 21 of them to point towards the winner. Third, our predictions started just after the nominations and have moved with the results of the awards season and other outside events. Fourth, our scalability is what allows us to jump into the Oscars as easily as we will jump into March Madness and the World Cup this summer.
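The "20 to 21 of 24" arithmetic follows directly from calibration: the expected number of correct category leaders is just the sum of the leaders' probabilities. A minimal sketch (hypothetical round numbers, not our actual category probabilities):

```python
def expected_winners(leader_probs):
    """Expected number of category leaders that go on to win,
    assuming the stated probabilities are well calibrated."""
    return sum(leader_probs)

# 24 categories, each leader at roughly 85% (hypothetical round numbers)
probs = [0.85] * 24
print(round(expected_winners(probs), 1))  # 20.4, i.e., 20 to 21 winners
```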
So, if you are scoring at home, here are our predictions as of late on February 28, 2014. The predictions will change during the lead-up to the awards show and the show itself. I will be live blogging during the Oscars on this site and tweeting updates. Live values are here, here, and here:
DavidMRothschild on February 21, 2014 @ 10:13AM
Over the summer I got together with my research assistant Deepak Pathak and my colleague Miro Dudik to take a look at four different types of Oscar data: fundamentals, polling, prediction markets, and experts. While there are certainly meaningful things to learn from all of the data sources, properly translated prediction market prices were by far the best data source for creating continuously updating and accurate forecasts for all 24 Oscar categories.
Where did the other data go wrong?
Forecasts created with fundamentals (i.e., box office receipts, number of screens, release dates, other awards shows, etc.) are simply not that accurate across 24 categories. While this data is enticing, because it does so well in sports and politics, when you think a lot about it (like we did) you realize the huge problem: this data does not provide much identification between categories like song or makeup. If a movie does well at the box office, is it because people like the song or the makeup? Further, there needs to be a separate model for each one of these categories. While the same variables correlate for any football game or any of the Electoral College elections in a given year, the song and makeup categories all need different variables.
Polling has the potential to create both accurate and timely forecasts, but it requires incentives for frequent responses by high-information users to stay timely, and proper transformation of raw polls into forecasts to be accurate. By definition, real money prediction markets provide the incentives that polls generally lack.
Experts can create something similar to fundamental models, but it is not clear ex-ante which experts are going to work as most of the methodology is opaque.
That leaves prediction market data: the prices at which investors are willing to buy and sell contracts that will be worth either $1 or $0 depending on whether the outcome occurs. The price is highly suggestive of a probability, but theory, backed by empirical data, is needed to translate raw prediction market prices into probabilities.
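The exact translation is in the paper; as a rough illustration of the idea, prices get stretched away from 50 cents in probit space, which corrects the favorite-longshot bias by making favorites more likely and longshots less likely than their raw prices suggest. The coefficient below is purely illustrative, not a fitted value:

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal

def debias_price(price, theta=1.64):
    """Map a raw prediction-market price (0-1) to a probability by
    stretching it away from 0.5 in probit space; theta > 1 corrects
    a favorite-longshot bias. The coefficient is illustrative only."""
    return _N.cdf(theta * _N.inv_cdf(price))

# A favorite priced at $0.80 maps to a higher probability...
print(round(debias_price(0.80), 3))
# ...while a longshot priced at $0.20 maps to a lower one.
print(round(debias_price(0.20), 3))
```

A price of exactly $0.50 is left unchanged, which is the defining property of this family of transformations.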
The paper, which suggests a method derived from our review of these four data types, is currently under academic review, but you can see how its method does in 2014 here (and here and here); I launched the predictions shortly after the nominations and they have been updating every few minutes since then. Here are the current predictions for the eight big categories. They are scarily confident, with an average of 84%.
Best Picture: 12 Years a Slave is at 84%, which is a strong lead over Gravity at 14%.
Best Director: Gravity’s Alfonso Cuarón at 98% is dominating this race, with 12 Years a Slave’s Steve McQueen at just 2%.
Best Leading Actor: Dallas Buyers Club’s Matthew McConaughey at 90% is leading big over The Wolf of Wall Street’s Leonardo DiCaprio at 7%.
Best Leading Actress: Blue Jasmine’s Cate Blanchett at 98% is a massive favorite over American Hustle’s Amy Adams at 2%.
Best Supporting Actor: Dallas Buyers Club’s Jared Leto at 97% is also a strong favorite over Captain Phillips’s Barkhad Abdi at 2%.
Best Supporting Actress: In one of the tightest categories I have, 12 Years a Slave’s Lupita Nyong’o at 62% is leading American Hustle’s Jennifer Lawrence at 37%.
Best Adapted Screenplay: 12 Years a Slave at 88% is a strong favorite over Philomena at 7%.
Best Original Screenplay: In the only real toss-up of the top categories, I have Her at 53% to American Hustle at 42%.
As a disclaimer, I have only seen two of the movies mentioned in all of these categories, but I am not saying which ones!
DavidMRothschild on September 26, 2013 @ 9:01AM
We obsess about the aggregated prices that emerge from markets, whether it is oil, the Dow Jones, or the prediction market contract on who will be the next president of the United States. The price is a reflection of the subjective beliefs of individual traders, and we spend too little time considering the individual traders’ expectations, strategies, and motivations that combine to create that price. Rajiv Sethi of Barnard College and I were very lucky to examine a unique dataset this summer, which allowed us to learn more about how individual traders behave in markets; specifically, we examined trade-level data for all trades that occurred in the final two weeks of the 2012 election for either Obama or Romney to win on Intrade, the largest political prediction market in 2012. Our main academic finding is that traders are surprisingly one directional, almost always buying contracts favoring one of the two candidates. A secondary finding that has garnered popular attention is that one trader heavily influenced the price of these contracts, possibly for potential political gain, by investing nearly $4 million in Romney positions over two weeks.
The academic question our paper examines is the trading strategies and motivations of the traders. And our finding is that most traders are either performing arbitrage (i.e., buying and selling contracts that guarantee them a small return) or trading in one direction (i.e., only going long on Obama or Romney). The archetypal informational trader gets new information and goes long on whichever side it favors. What we see is that traders get new information and use it to keep trading for their chosen candidate. An example would be a situation where there was bad news for the Romney campaign. Obama traders would go long on Obama and push up the price for Obama, but after a little while, the Romney traders would push back when they thought the price had moved too far towards Obama. Everyone gets to keep going long on their candidate, but the new price still reflects the new information. Rajiv goes into this in more detail in an earlier blog post.
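The arbitrage idea is easy to see in miniature. In a two-candidate market, exactly one contract pays $1, so if the two ask prices sum to less than $1, buying both locks in a riskless profit. A minimal sketch with hypothetical quotes, not actual Intrade prices:

```python
def arbitrage_profit(ask_a, ask_b, payout=1.00):
    """Riskless profit per contract pair from buying both sides of a
    two-outcome market; exactly one of the contracts pays `payout`."""
    cost = ask_a + ask_b
    return payout - cost if cost < payout else 0.0

# Hypothetical quotes: Obama asked at $0.68, Romney asked at $0.29
print(round(arbitrage_profit(0.68, 0.29), 2))  # locks in $0.03 per pair
# If the asks sum to $1 or more, there is nothing to capture
print(arbitrage_profit(0.60, 0.50))  # 0.0
```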
But, you are probably not reading this article to learn about trading strategies, so I will pivot to the possible market manipulation.
Many observers noted in real-time that a peculiar wedge opened up between Intrade and another prediction market, Betfair. Intrade was consistently 5-10 percentage points more bullish on Romney (i.e., a contract that paid out $1.00 if Obama won could trade for $0.70 on Intrade and $0.75 or $0.80 on Betfair). Our new dataset shows that one trader was making that happen by providing massive amounts of liquidity to the Romney side of all trades (accounting for roughly 1/3 of all action favoring Romney) and creating “fire-walls” of several hundred thousand dollars to keep the Romney price static at times of high information flow (e.g., if the price for an Obama contract was $0.70 the trader would note on the order book the willingness to sell $100,000 or more of Obama to win at $0.70 so that Obama traders could keep buying at that $0.70 price indefinitely without the price going up). Note that we have no personal information on this trader, just the trader’s trades.
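The "fire-wall" mechanic can be sketched in a few lines: a single large limit sell order at one price absorbs incoming market buys, so the last trade price cannot rise above that level until the wall is exhausted. This is a stylized simulation with hypothetical numbers and deliberately simplified matching logic, not a model of Intrade's actual order book:

```python
def absorb_buys(wall_size, buy_orders, wall_price=0.70, jump=0.05):
    """One big limit sell order at wall_price fills incoming market
    buys; every fill prints at wall_price until the wall runs out,
    after which the price breaks higher (by an illustrative `jump`)."""
    remaining = wall_size
    last_price = wall_price
    for qty in buy_orders:
        if remaining <= 0:
            last_price = wall_price + jump  # wall gone; price breaks out
            break
        filled = min(qty, remaining)
        remaining -= filled
        last_price = wall_price  # every fill prints at the wall
    return last_price, remaining

# $100,000-contract wall at $0.70 absorbs a steady stream of buys
price, left = absorb_buys(100_000, [5_000] * 10)
print(price, left)  # price stays pinned at 0.70 with 50,000 still offered
```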
We provide three possible explanations for the trader’s strategy: (1) the trader was convinced that Romney was underpriced throughout the period and was expressing a price view, (2) the trader was hedging an exposure held elsewhere, or (3) the trader was attempting to distort prices in the market for some other purpose.
(1) Simply going long on Romney is unlikely because at any point the trader could have gone to Betfair and purchased the same contract for less money. There are many costs to utilizing Betfair: it trades in British pounds, it blocks U.S. IP addresses, etc. But a trader of this size could have overcome those costs for less than the trader lost by utilizing Intrade instead of Betfair.
(2) Hedging is a more plausible explanation for the trader’s behavior, but still unlikely. Earlier academic literature shows that some market indexes and the likelihood of the election outcome can be correlated, but we could not find similar patterns in 2012 (e.g., historically, sudden increases in the likelihood of a Democratic victory are adversely correlated with the S&P, despite historical correlations of stronger stock markets under Democratic presidencies). We cannot eliminate the possibility that the trader was hedging some more specialized securities, like specific energy contracts, that traders may have concluded would be more tightly impacted by the election outcome (e.g., the trader could have been going long on Romney to cover potential losses in renewable energy contracts should Romney have won the election).
(3) Distorting the price for some other purpose, possibly political, is the most likely motivation of the trader. Placing hundreds of thousands of dollars on the order book, precisely at times of a lot of new information, is a strategy that maximizes the impact of the investment, but not the return. One of the trader’s most active periods was between 7:30 PM and 9:00 PM ET on Election Day, when new information was arriving by the second. At that time the trader placed so many potential trades at about $0.70 per $1.00 contract for Obama that the trader was telling the market that s/he would match anyone who wanted to buy Obama at that price. This froze the price for the crucial 1.5 hours between the first major reports of election returns and the last swing state poll closings. Ultimately, the trader spent about $375,000 during that 1.5 hours and, as soon as the trader left the market at 9:00 PM, Obama shot up past $0.90 per $1.00 contract.
If this trader were attempting to manipulate the market, making it successful is a three-step process: (1) change the price, (2) convince people it is real, and (3) have people change behavior because of it.
(1) One trader was able to control the price of a liquid market with a massive wall of limit orders for two reasons. First, markets, by design, move with large quantities of money, regardless of whether it is 1,000 people with $100 or 1 person with $100,000. A sea of traders could move a stock price or Warren Buffett alone could move a stock price, as the market does not care about the motive or number of traders. Second, the government’s harassment of Intrade, and other online markets, made it difficult and risky to join and keep money in Intrade, which limited participants and readily available money. Thus, even if it was tempting to go long on Obama at $0.70 per $1.00 contract on Election Day, it is likely that few traders had the money sitting in the market necessary to buy up all of the contracts that the Romney trader was willing to sell. You would need to already be a trader and have tens or hundreds of thousands of dollars sitting idle in your account.
(2) This is the hardest step; it is not easy to convince people that the price level is real, but people who want evidence will appreciate any data source that validates them. Sites that present prediction market data frequently aggregated Intrade with other markets to ameliorate the concern that any one data source could be wrong. Overall, prediction market data was very successful in providing accurate and timely predictions of the 2012 election. Yet, if someone was looking for a reason to be hopeful about Romney, Intrade’s price provided a solid piece of data for them.
(3) If people are convinced it is real, the impact on the campaign will follow. There is a cascading effect to being a viable candidate: the more viable a candidate appears, the more money, volunteers, support, and turnout the candidate receives. Thus, the more viable the candidate appears, the more viable the candidate becomes. When it comes to Election Day, one piece of positive news may be the validation someone needs to stop off at the polling place on the way home from a long day at work.
If manipulation could be successful, it would be worth it. If a few million dollars could boost fundraising and morale, then it would be a good investment next to one more television advertisement in a flooded Ohio, Florida, or Virginia market. Roughly $28 million was spent on TV advertising in just one state, Ohio, in the last week of the election alone.
We cannot say for sure if this market was manipulated, but someone definitely shifted the price heavily towards Romney and maintained that price imbalance until 9 PM on Election Day, when the polls closed in the last swing state and the election was finally up to the vote counters. Yet, despite this trader’s efforts, most observers, even if they were following just prediction markets, still received a very accurate forecast of the election.
This column syndicates with the HuffingtonPost.
DavidMRothschild on September 04, 2013 @ 8:05AM
On August 4 I tweeted that “Smart money is on de Blasio edging out [Bill] Thompson” for the second spot in the runoff. I followed that up by noting that Christine Quinn’s trajectory was troubling; it is not a good sign for a runoff if you are heading in the wrong direction. Both statements proved prescient, as de Blasio was fourth in the polls on August 4 and is the current heavy favorite to be the next mayor of New York City. Meanwhile, Quinn’s downward trajectory may push her out of a potential runoff, or even eliminate the need for a runoff. But Twitter does leave a little too much to the imagination, so here are some more details on the New York City mayoral contest.
The election in New York City is, potentially, a three step process. First, both parties have a primary on September 10. Second, if no candidate receives over 40% of the vote in the primary, there is a runoff between the top two candidates on October 1. Third, there is an election between the Democratic and Republican candidate on Tuesday, November 5.
Through the end of July, Quinn and Anthony Weiner were trading spots at the top of the polls, with de Blasio and Thompson battling it out for third and fourth. The New York Times poll that was in the field from August 2-7 showed the completion of Weiner’s fall to nearly single digits, but Quinn still had a lead and there was a huge number of undecided voters; de Blasio was third with 14% and Thompson second with 16%. After that, August saw a string of six straight polls with de Blasio leading, finally blowing past 40% with the latest Quinnipiac poll that was in the field from August 28-September 1.
Meanwhile, Quinn continued to plateau through July with a string of polls showing her leading, but always in the 22% to 32% range and with no upward trajectory. For the frontrunner this was troubling, because she had a lot of time to make her case to the Democratic electorate. Since then she has shown a consistent downward trajectory, with the last three polls putting her below 20% and, crucially, below Thompson.
The smart money I was referring to on August 4 was the bookies, like Paddy Power, Stan James, and a few others; but it was not as easy as picking out the best odds. First, Quinn had the best odds of all, so she was still the favorite to get one of the two spots in the potential runoff. Second, de Blasio had slightly more favorable odds than Thompson. The polls showed them nearly tied, but the bettors favored de Blasio. Thus, the smart money had de Blasio pulling ahead of Thompson. Third, with Quinn still dominating the polls, the bookies had no reason to push Quinn below de Blasio and Thompson; it was still safe money to assume she had a higher probability than either de Blasio or Thompson. But did she have a higher probability than the ultimate winner of the "not-Quinn" fight between de Blasio and Thompson? In my reading of the data, no.
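Reading bookmaker odds takes one extra step: the raw odds overstate everyone's chances because of the bookmaker's margin (the overround), so implied probabilities have to be normalized before comparing candidates. A quick sketch with hypothetical decimal odds, not the actual Paddy Power or Stan James lines:

```python
def implied_probabilities(decimal_odds):
    """Convert decimal odds to probabilities, normalizing away the
    bookmaker's overround (raw inverse odds sum to more than 1)."""
    raw = {name: 1.0 / o for name, o in decimal_odds.items()}
    total = sum(raw.values())
    return {name: p / total for name, p in raw.items()}

# Hypothetical decimal odds for a runoff spot
odds = {"Quinn": 1.5, "de Blasio": 2.75, "Thompson": 3.0}
for name, p in implied_probabilities(odds).items():
    print(f"{name}: {p:.2f}")  # Quinn leads, de Blasio edges Thompson
```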
The likely Republican candidate, Joe Lhota, is trailing any Democratic candidate by a wide margin. After five terms of Republican mayors, the Big Apple looks poised to put a Democrat back in Gracie Mansion, and it is about 60-65% likely that Bill de Blasio will be the next mayor of NYC.
Then again, my friend Andrew Gelman of Columbia has a timely reminder for us in his blog: primary elections are hard to predict.
DavidMRothschild on June 11, 2013 @ 10:49AM
We start with three different types of data …
Voter Intention Polling Data: Polling data has been the most prominent component of election forecasts for decades. From 1936 to about 2000, it was standard to just display the raw data, the results of individual voter intention polls, as an implicit forecast of an election. By 2004 poll aggregation became common on the internet (see Pollster.com). Although aggregated polls provide both stability and accuracy relative to individual poll results, as an implicit estimated vote share they still succumb to two well-known poll-based biases, especially earlier in the cycle: polls demonstrate larger margins than the election results and they have an anti-incumbency bias (i.e., early leads in polls fade towards Election Day and incumbent party candidates have higher vote shares on Election Day than their poll values in the late summer into the early fall) (see James Campbell). In 2008 some websites finally began publishing versions of aggregated and then debiased poll-based forecasts (see Nate Silver, 2008). Further, they shifted the outcome to the probability of victory in the Electoral College or senatorial elections versus the expected vote shares. Yet, raw, daily polls still dominate popular press coverage and simple aggregation and debiasing of polls is just starting to permeate the academic literature.
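The aggregate-then-debias idea can be sketched in a few lines: average recent polls, then pull the leader's margin back toward a tie, more aggressively the further out from Election Day. This is a toy illustration of the concept only; the constants are made up, not the calibrated parameters of any published model, and it ignores the incumbency correction:

```python
def debiased_poll_forecast(polls, days_out, half_life=200.0):
    """polls: recent two-party vote shares for one candidate (0-1).
    Average them, then shrink the margin over 50% toward zero; the
    shrinkage grows with days before the election. The half_life
    constant is illustrative, not a fitted value."""
    avg = sum(polls) / len(polls)
    shrink = half_life / (half_life + days_out)  # 1.0 on Election Day
    return 0.5 + shrink * (avg - 0.5)

# A 54% polling average 100 days out gets pulled toward 50%...
print(round(debiased_poll_forecast([0.53, 0.55, 0.54], 100), 3))
# ...but is taken nearly at face value on Election Day.
print(round(debiased_poll_forecast([0.53, 0.55, 0.54], 0), 3))
```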
Fundamental Data: There is also a long history of econometric models that forecast elections with fundamental data. These models use a variety of economic and political indicators such as: past election results, incumbency, presidential approval ratings, economic indicators, ideological indicators, biographical information, policy indices, military situations, and facial features of the candidates. There are numerous examples of articles that forecast the national presidential vote share; there is a nine-page reference list in my paper with Patrick Hummel. However, there are few models that focus on Electoral College or senatorial elections; most simply forecast national vote shares. Further, models that include late arriving or non-duplicable data dominate the literature and press; these models cannot create forecasts until late in the cycle, if they can create forecasts before the election at all.
Prediction Market Data: The modern history of prediction markets is not as long as that of the other two data sources. The Iowa Electronic Market launched the modern era of prediction markets in 1988, introducing a winner-takes-all market in 1992. This type of market trades binary options which pay, for example, $10 if the chosen candidate wins and $0 otherwise. Thus, an investor who pays $6 for a “Democrat to Win” stock, and holds the stock through Election Day, earns $4 if the Democrat wins and loses $6 if the Democrat loses. In that scenario, if there are no transaction or opportunity costs, the investor should be willing to pay up to the price that equals her estimated probability of the Democrat winning the election. The market price is the value at which, if a marginal investor were willing to buy above it, investors would sell the stock and drive the price back down to that market price (and vice-versa if an investor were willing to sell below it); thus, the price is an aggregation of the subjective probability beliefs of all investors. Scholars have found that prediction market prices can create more accurate forecasts than polls-based forecasts in the last few cycles (see Berg et al. or my earlier paper) and in historical elections (see Paul Rhode and Koleman Strumpf). Like polls and fundamental data, prediction market prices also suffer from a bias, the favorite-longshot bias. Unfortunately, both the press and academia, if they acknowledge prediction markets at all, only cite raw prediction market prices as forecasts, thus failing to correct for these biases.
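The payoff arithmetic above, as a worked check using the $10/$6 contract from the text: an investor's expected profit is zero exactly when the price equals her subjective probability, which is why the price aggregates beliefs.

```python
def expected_profit(price, prob, payout=10.0):
    """Expected profit from buying one winner-takes-all contract at
    `price` when the buyer's subjective win probability is `prob`."""
    return prob * (payout - price) + (1 - prob) * (-price)

# The $6 "Democrat to Win" contract from the text:
print(expected_profit(6.0, 0.6))  # ~0: breaks even when prob = price/payout
print(expected_profit(6.0, 0.7))  # positive when she believes 70%
```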
We ask what makes a good forecast …
One simple question motivates our method: what combination of these three key data types creates the most accurate, relevant, and timely forecasts (i.e., the most efficient and useful forecasts for the relevant stakeholders)? First, the answer is crucial for researchers studying electoral politics, or any other domain with forecasts, because accurate and granular forecasts allow them to connect shocks to the campaign with changes in the underlying likelihood of the relevant outcomes. Second, forecast accuracy is important for practitioners (i.e., campaigns or investors in campaigns) who want to make efficient choices when they spend time and money in the multi-billion dollar industry of political campaigns.
Accuracy: There have been few meaningful attempts to combine these different data types into a single forecast, even though the literature is clear that combining data is generally very effective in increasing accuracy. There are a few exceptions, but most papers only investigate the national vote share and use simpler interpretations of the raw data. Overall, three related, but largely non-intersecting, academic literatures persist, despite their shared goal of accurately forecasting election outcomes.
Relevancy: There is little discussion about what is the most relevant forecast. Academic forecasts tend to estimate vote share for two key reasons: the academic literature focuses on incremental improvements on historical forecasts and estimated vote share is the historical standard, and observers frequently interpret raw polls as naïve estimations of vote share, making it the simplest rubric. Expected vote share is certainly still extremely important for election workers, especially broken down by targetable demographics, but the marketplace for the general population is very clear that it desires probability of victory in the Electoral College. Further, state-by-state forecasts for the Electoral College not only offer a more compelling indicator for researchers and practitioners or investors, they also provide much more identification than forecasts of the national vote share.
Timeliness: There is no emphasis on the utility of the forecast when it is released; forecasts are judged by academia and the press at the time they are released or they are judged as if they were released on the eve of the event. Yet, both election researchers and practitioners benefit from early forecasts, when there are more resources left to allocate. And, they both benefit from timely forecasts, which provide a granular account of the election for researchers and are up-to-date when the practitioners or investors need to make a decision.
We create a model that combines the three types of data and maximizes the three attributes of a good forecast …
First, we aggregate then debias the raw voter intention polling data, using parameters that we calibrate separately by: election type, days before the election, and the certainty of the raw data. The resulting forecast is the most accurate poll-based forecast readily available.
Second, we examine and clarify the transformation that debiases raw prediction market data, yielding an improved prediction market-based forecast.
Third, we combine the three forecasts based on polling data, fundamental data (using my work with Patrick Hummel), and prediction market data. The weighting parameters for our model demonstrate and capitalize on the shifting strength of the different forecast types across the studied timeframe; 130 days out, the forecast averages the separate forecasts from all three data types, but the fundamental model’s unique information decreases until Election Day, when the forecast is an average of the polling and prediction market-based forecasts.
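The combination step can be sketched as a time-varying weighted average. The linear weight schedule below is a hypothetical stand-in for the calibrated weights in the paper; it only reproduces the qualitative shape described above (equal weights 130 days out, fundamentals phased out by Election Day):

```python
def combined_forecast(poll, fundamental, market, days_out, horizon=130):
    """Blend three probability forecasts. At `horizon` days out all
    three get equal weight; the fundamental weight decays linearly
    to zero by Election Day, reallocated evenly to the poll- and
    market-based forecasts. Illustrative schedule, not fitted."""
    w_fund = (min(days_out, horizon) / horizon) / 3.0
    w_other = (1.0 - w_fund) / 2.0
    return w_fund * fundamental + w_other * (poll + market)

# 130 days out: a straight average of the three forecasts
print(round(combined_forecast(0.60, 0.54, 0.66, days_out=130), 3))  # 0.6
# Election Day: fundamentals drop out; average of poll and market
print(round(combined_forecast(0.60, 0.54, 0.66, days_out=0), 3))    # 0.63
```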
We see how we did in 2012 …
We do not like to dwell too much on single election cycles, as the correlation between the outcomes somewhat diminishes the explanatory power of even 84 (51 Electoral College and 33 senatorial) different outcomes. Yet, PredictWise’s forecast did well in predicting the 2012 election; the below chart shows the errors every 4 hours for the last 130 days of the 2012 election. Unlike in the within-sample from previous years, on which we created the model, it was not dominant at every point in the cycle, but it was the most consistent forecast. For a span of about 30 days early in the cycle, when poll-based forecasts had a lower error than prediction market-based forecasts, PredictWise’s forecast was either below or near the poll-based forecast. From towards the end of the summer until the last month of the campaign, a span of about 45 days when the prediction market-based forecast had a lower error than polls, PredictWise’s forecast again held closely to the lowest errors. At any given moment from 130 days before the election to Election Day in 2012, PredictWise’s forecast was likely to have a lower error than either the completely poll-based or completely prediction market-based forecast.
Accuracy of probability of victory estimates for Electoral College and senatorial elections by fundamental data, voter intention poll, and prediction markets-based forecasts, along with PredictWise for 2012
There is no comparison with FiveThirtyEight in this post, because there is no comparison with FiveThirtyEight. I have compared PredictWise to three single-data forecasts created with voter intention polling, fundamental, and prediction markets data. FiveThirtyEight is some unknown combination of polling and fundamental data. First, FiveThirtyEight did not post senatorial predictions until about Labor Day. Without predictions during the toughest part of the process, it is impossible to compare our forecasts. Second, FiveThirtyEight updated sparingly until the very end, so their forecasts were frequently stale. And, for the record, we had 50 of 51 Electoral College races correct on February 16, 2012 … forecast records on the eve of the election are not useful to the stakeholders and do not interest us.