Relevant, real-time, accurate, and scalable: 2013 Oscar predictions are a win for predictive science
DavidMRothschild on February 25, 2013 @ 12:33AM
Predicting the Oscars, for me, is not about the Oscars per se, but about the science of predicting. The challenge was to make predictions in all 24 categories, when most predictions cover only 6. The challenge was to make predictions that move in real time during the period between the nominations and the Oscars, when most predictions are static. The challenge was to make predictions that were accurate, not just in binary correctness, but in calibrated probabilities. The challenge was to make these predictions cost-effective, so that they could not only scale to 24 categories, but also be useful for making predictions in varying domains. Prediction market data, including Betfair, Hollywood Stock Exchange, and Intrade, combined with some user-generated data from WiseQ, allowed me to meet all of these challenges.
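A minimal sketch of the aggregation step described above, using made-up prices and a hypothetical longshot-bias correction. The markets, exponent `k`, and equal weighting are illustrative assumptions, not the actual model behind these predictions:

```python
# Hedged sketch: combine raw prediction-market prices into one set of
# calibrated category probabilities. All numbers here are illustrative.

def debias(price, k=1.5):
    """Hypothetical longshot-bias correction: push prices toward 0 or 1.
    The exponent k is a made-up tuning parameter, not a fitted value."""
    return price**k / (price**k + (1 - price)**k)

def combine(prices):
    """Debias each nominee's averaged market price, then renormalize
    so the probabilities within the category sum to one."""
    probs = {name: debias(p) for name, p in prices.items()}
    total = sum(probs.values())
    return {name: p / total for name, p in probs.items()}

# Example: invented average prices for three Best Picture nominees
raw = {"Argo": 0.80, "Lincoln": 0.12, "Life of Pi": 0.08}
print(combine(raw))
```

The renormalization step matters because debiased prices for the nominees in a category need not sum to one on their own.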
DavidMRothschild on February 23, 2013 @ 11:27AM
I created my Oscar predictions in real time because real-time movement is an important part of my basic research into predictions, not because I thought the Oscars would provide an interesting domain for movement; I was wrong about that. In category after category, significant movement in the likely winner provides a window into the power of certain events that occurred on the road to the Oscars. These events include regularly scheduled ones, such as awards shows, and idiosyncratic ones, such as prominent commentary on certain movies.
DavidMRothschild on February 22, 2013 @ 11:23AM
I am stunned at the confidence of my predictions for the Oscars, seen in real time here and here. Of the 24 categories in which the Academy of Motion Picture Arts and Sciences will present Oscars live this Sunday, the favorites in eight are 95 percent or more likely to win. Yet despite my concern that no nominee should be rated that likely to win, I have no choice but to stick to the data and models. My data and models have proven correct over and over, while hunches and gut checks are prone to failure.
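As a quick arithmetic check on that confidence: if each of the eight favorites independently wins with probability exactly 0.95 (both the independence and the exact figure are simplifying assumptions), the chance all eight win, and the expected number of upsets among them, work out as follows:

```python
# Back-of-the-envelope check, assuming eight independent favorites
# each at exactly 95 percent.
p_all_eight = 0.95 ** 8   # chance that every 95-percent favorite wins
expected_upsets = 8 * 0.05  # expected number of upsets among the eight
print(round(p_all_eight, 3), expected_upsets)  # 0.663 0.4
```

So even at these confidence levels, there is roughly a one-in-three chance of at least one upset among the eight near-locks.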
DavidMRothschild on February 08, 2013 @ 5:53PM
After addressing all 24 categories individually, it is an interesting and meaningful follow-up to consider how they interact. If Lincoln wins the Oscar for Best Adapted Screenplay, does that make Lincoln's likelihood of winning Best Picture increase, decrease, or is there no correlation?
A positive-correlation story assumes that voters like (or know) certain movies and will vote for them in multiple categories; thus, as movies win earlier categories, they become more likely to win later categories. For example, in the most extreme case, assume every voter is either an Argo fan or a Lincoln fan. Any voter who votes for Argo (Lincoln) for Best Adapted Screenplay will also vote for Argo (Lincoln) for Best Picture. Thus, if Argo (Lincoln) wins Best Adapted Screenplay, it becomes extremely likely to win Best Picture.
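The correlation mechanism above can be made concrete with a small conditioning exercise. The joint probabilities below are invented to illustrate the update, not real odds:

```python
# Illustrative joint distribution over (Screenplay winner, Picture winner).
# Some off-diagonal mass is included so the correlation is positive
# but not perfect; all numbers are made up.
joint = {
    ("Argo", "Argo"): 0.55,
    ("Argo", "Lincoln"): 0.10,
    ("Lincoln", "Argo"): 0.05,
    ("Lincoln", "Lincoln"): 0.30,
}

def p_picture_given_screenplay(picture, screenplay):
    """P(Picture = picture | Screenplay = screenplay) by conditioning."""
    p_s = sum(p for (s, _), p in joint.items() if s == screenplay)
    return joint.get((screenplay, picture), 0.0) / p_s

prior = sum(p for (_, m), p in joint.items() if m == "Argo")
posterior = p_picture_given_screenplay("Argo", "Argo")
print(round(prior, 2), round(posterior, 3))  # 0.6 0.846
```

Under these numbers, an Argo win for Best Adapted Screenplay moves its Best Picture probability from 60 percent to about 85 percent; in the fully correlated extreme, the posterior would be 1.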
DavidMRothschild on February 01, 2013 @ 10:47AM
I spent several weeks this winter immersed in spreadsheets full of historical Oscar data to explore methods of using fundamentals to predict Oscar winners. Fundamental models work really well in forecasting political elections, where significant categories of data include past election results, incumbency, presidential approval, ideology, economic indicators, and biographical data. Yet fundamental models are much less effective in forecasting awards shows, where the data would include categories such as studio inputs, box office success, subjective ratings, Oscar nominations, and biographical data. The reason is simple: prior to the other awards shows, there is a dearth of variables that distinguish individual award categories, as most available data is movie-specific.
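A minimal sketch of what such a fundamental model looks like for a single category. The features, weights, and bias are illustrative stand-ins, not values fitted on the historical data described above:

```python
# Hedged sketch of a fundamental model for one Oscar category.
# Weights and features are made up for illustration.
import math

def win_score(features, weights, bias=-2.0):
    """Logistic score built from movie-level fundamentals."""
    z = bias + sum(w * features[k] for k, w in weights.items())
    return 1 / (1 + math.exp(-z))

def category_probs(nominees, weights):
    """Normalize scores so probabilities within a category sum to one."""
    scores = {name: win_score(f, weights) for name, f in nominees.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

weights = {"total_nominations": 0.25, "critic_rating": 0.03}
nominees = {
    "Lincoln": {"total_nominations": 12, "critic_rating": 90},
    "Argo":    {"total_nominations": 7,  "critic_rating": 86},
}
print(category_probs(nominees, weights))
```

Note that every feature here is movie-level, so the same inputs drive the model's forecast in every category a movie is nominated in; that is exactly the dearth of category-specific variables described above.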