DavidMRothschild on March 02, 2013 @ 8:10PM
I judge my predictions on four major attributes: relevance, timeliness, accuracy, and cost-effectiveness. I am very proud of my 2013 Oscar predictions because they excelled on all four: they predicted all 24 categories (and all combinations of categories), moved in real-time, were very accurate, and were built on a scalable and flexible prediction model.
Relevant, real-time, accurate, and scalable: 2013 Oscar predictions are a win for predictive science
DavidMRothschild on February 25, 2013 @ 1:33AM
Predicting the Oscars, for me, is not about the Oscars per se, but about the science of predicting. The challenge was to make predictions in all 24 categories, when most forecasters cover only six. The challenge was to make predictions that moved in real-time during the period between the nominations and the Oscars, when most predictions are static. The challenge was to make predictions that were accurate, not just in binary correctness, but in calibrated probabilities. The challenge was to make these predictions cost-effective, so that they could not only scale to 24 categories, but be useful for making predictions in varying domains. Prediction market data, including Betfair, the Hollywood Stock Exchange, and Intrade, combined with some user-generated data from WiseQ, allowed me to meet all of these challenges.
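Accuracy in calibrated probabilities can be checked with a proper scoring rule. A minimal sketch using the Brier score follows; the probabilities and outcomes in the example are made-up illustrations, not the actual forecasts.

```python
def brier_score(forecasts):
    """Mean squared error between predicted probabilities and outcomes.

    forecasts: list of (predicted probability, outcome) pairs, where
    outcome is 1 if the nominee won and 0 otherwise. Lower is better;
    a constant 50 percent forecast always scores 0.25.
    """
    return sum((p - won) ** 2 for p, won in forecasts) / len(forecasts)

# Hypothetical example: three confident forecasts that hit, one upset.
example = [(0.95, 1), (0.90, 1), (0.80, 1), (0.70, 0)]
print(round(brier_score(example), 4))
```

A well-calibrated forecaster minimizes this score in expectation, which is why it rewards honest probabilities rather than overconfident ones.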
DavidMRothschild on February 24, 2013 @ 10:26PM
DavidMRothschild on February 24, 2013 @ 5:22PM
DavidMRothschild on February 23, 2013 @ 12:27PM
I created my Oscar predictions in real-time because real-time movement is an important part of my basic research into predictions, not because I thought the Oscars would provide an interesting domain for movement; on that count I was wrong. In category after category, significant movement in the likely winner provides a window into the power of certain events that occurred on the road to the Oscars. These events include regularly scheduled events, such as awards shows, and idiosyncratic events, such as prominent commentary on certain movies.
DavidMRothschild on February 22, 2013 @ 12:23PM
I am stunned at the confidence of my predictions for the Oscars, seen in real-time here and here. Of the 24 categories in which the Academy of Motion Picture Arts and Sciences will present Oscars live this Sunday, the favorites in eight are 95 percent or more likely to win their category. Yet despite my concern that no nominee should be that assured of victory, I have no choice but to stick to the data and models. My data and models have proven correct over and over, while hunches and gut checks are prone to failure.
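A quick back-of-the-envelope check shows why eight such favorites still leave plenty of room for an upset. Assuming each favorite is exactly 95 percent likely to win and the categories are independent (both simplifying assumptions for illustration), the chance that at least one of the eight loses is about one in three:

```python
# Eight favorites, each assumed exactly 95% likely to win, treated as
# independent (an illustrative simplification; 95% is a lower bound).
p_all_win = 0.95 ** 8
p_at_least_one_upset = 1 - p_all_win
print(round(p_all_win, 3), round(p_at_least_one_upset, 3))
```

So even near-certain individual forecasts imply a real chance of a surprise somewhere on the night.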
DavidMRothschild on February 08, 2013 @ 6:53PM
After addressing all 24 categories individually, it is an interesting and meaningful follow-up to consider how they interact. If Lincoln wins the Oscar for Best Adapted Screenplay, does that make Lincoln's likelihood of winning Best Picture increase, decrease, or is there no correlation?
A positive correlation story assumes that voters like (or know) certain movies and will vote for those movies in multiple categories; thus, as a movie wins earlier categories, it becomes more likely to win later ones. For example, in the most extreme situation, assume every voter is either an Argo fan or a Lincoln fan. Any voter who votes for Argo (Lincoln) for Best Adapted Screenplay will also vote for Argo (Lincoln) for Best Picture. Thus, if Argo (Lincoln) wins Best Adapted Screenplay, it becomes extremely likely to win Best Picture.
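The update implied by that story is a simple conditioning step on a joint distribution. The sketch below uses hypothetical joint probabilities, chosen only to illustrate the mechanics of a positive correlation, not my actual forecasts:

```python
# Hypothetical joint distribution over (Best Adapted Screenplay winner,
# Best Picture winner), restricted to Argo and Lincoln for simplicity.
# Positive correlation: probability mass concentrates on the diagonal.
joint = {
    ("Argo", "Argo"): 0.50,
    ("Argo", "Lincoln"): 0.05,
    ("Lincoln", "Argo"): 0.10,
    ("Lincoln", "Lincoln"): 0.35,
}

def p_picture(movie):
    """Unconditional probability that the movie wins Best Picture."""
    return sum(p for (_, pic), p in joint.items() if pic == movie)

def p_picture_given_screenplay(movie, screenplay_winner):
    """Best Picture probability after conditioning on the screenplay result."""
    p_screenplay = sum(p for (scr, _), p in joint.items() if scr == screenplay_winner)
    return joint.get((screenplay_winner, movie), 0.0) / p_screenplay

print(round(p_picture("Lincoln"), 3))                              # before
print(round(p_picture_given_screenplay("Lincoln", "Lincoln"), 3))  # after
```

With these numbers, Lincoln's Best Picture probability jumps from 0.40 to roughly 0.78 once it takes Best Adapted Screenplay, which is exactly the kind of live update a real-time model can make on Oscar night.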
DavidMRothschild on February 01, 2013 @ 11:47AM
I spent several weeks this winter immersed in spreadsheets full of historical Oscar data, exploring methods of using fundamentals to predict Oscar winners. Fundamental models work really well in forecasting political elections, where significant categories of data include: past election results, incumbency, presidential approval, ideology, economic indicators, and biographical data. Yet fundamental models are much less effective in forecasting awards shows, where they would include categories such as: studio inputs, box office success, subjective ratings, Oscar nominations, and biographical data. The reason is simple: prior to the other awards shows, there is a dearth of variables that distinguish the individual award categories, as most data is movie-specific.
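The problem can be made concrete with a minimal sketch. All feature names, numbers, and weights below are hypothetical: the point is that a nominee's movie-level fundamentals are identical in every category, so any fixed scoring function built on them assigns the same score in every race.

```python
import math

# Hypothetical movie-level fundamentals: (box office in $100M units,
# average critic rating 0-10, total Oscar nominations). Crucially,
# none of these vary from one award category to another.
fundamentals = {
    "Lincoln": (1.8, 8.1, 12),
    "Argo":    (1.3, 7.9, 7),
}

# Illustrative (made-up) weights for a logistic scoring function.
weights = (0.3, 0.5, 0.1)

def score(movie):
    z = sum(w * x for w, x in zip(weights, fundamentals[movie]))
    return 1 / (1 + math.exp(-z))  # logistic squash to (0, 1)

for category in ("Best Picture", "Best Adapted Screenplay"):
    for movie in fundamentals:
        print(category, movie, round(score(movie), 3))
# Each movie gets an identical score in both categories: without
# category-specific variables, the model cannot tell the races apart.
```

Prediction markets sidestep this, because traders price each category's contract separately even when the underlying movies are the same.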
DavidMRothschild on January 29, 2013 @ 12:35PM
The most exciting movie-to-movie competition in this year's Oscars is between Ben Affleck's Argo and Steven Spielberg's Lincoln, which will play out in three categories: Best Picture, Best Director, and Best Adapted Screenplay. At the beginning of this Oscar season, with the Oscar nominations on January 10, Lincoln held a big lead in all three categories, but everything that can move has shifted towards Argo since then.
DavidMRothschild on January 28, 2013 @ 10:39AM
Most of the discussion around the Oscars focuses on the six main categories, but there are 18 other categories with awards on Oscar night. Six categories honor the best picture in a certain class. Not surprisingly, my predictions tend to favor the better-known movies in those categories, including: Brave as animated feature, Searching for Sugar Man as documentary feature, and Amour as foreign language film. The other twelve categories focus on the key elements of major movies. While these predictions also favor mainstream movies, they are not necessarily the ones dominating the main categories: Life of Pi leads in three, Zero Dark Thirty in three, Anna Karenina in two, and Lincoln in only one.