DavidMRothschild on February 08, 2013 @ 6:53PM
After addressing all 24 categories individually, it is an interesting and meaningful follow-up to consider how they interact. If Lincoln wins the Oscar for Best Adapted Screenplay, does Lincoln's likelihood of winning Best Picture increase, decrease, or stay the same (i.e., is there no correlation)?
A positive correlation story assumes that voters like (or know) certain movies and will vote for those movies in multiple categories; thus, as movies win earlier categories, they are more likely to win later categories. For example, in the most extreme situation, assume voters are either Argo or Lincoln fans. Any voter who votes for Argo (Lincoln) for Best Adapted Screenplay will also vote for Argo (Lincoln) for Best Picture. Thus, if Argo (Lincoln) wins Best Adapted Screenplay, it becomes extremely likely to win Best Picture.
A negative correlation story assumes that voters want to spread around their accolades by giving different movies votes in different categories; thus, as movies win earlier categories, they are less likely to win later categories. For example, in the most extreme situation, assume voters like Argo and Lincoln and want them both to have victories. Any voter who votes for Argo (Lincoln) for Best Adapted Screenplay will vote for Lincoln (Argo) for Best Picture. Thus, if Argo (Lincoln) wins Best Adapted Screenplay, it becomes extremely likely that the other will win Best Picture.
A caveat is that it is very hard to coordinate which direction to split votes. If voters randomly split their votes, both categories would be very close. But there is an easy option for this year's Oscars, because the voters did not nominate Argo's Ben Affleck for Best Director. So, voters who wish to split the prominent Oscars would vote for Lincoln's Steven Spielberg for Best Director and Argo for Best Picture. Of course, a positive correlation story would doom Argo for Best Picture, because the same voters who kept Affleck from even being nominated for Best Director would not vote for Argo for Best Picture.
A total independence story assumes that voters look at each category independently and vote for the nominee they think deserves the award the most.
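To see how sharply these three stories diverge, here is a minimal Monte Carlo sketch of the stylized voter models above. It makes the loud simplifying assumptions of a pure two-film race and a 50/50 fan split; nothing here reflects actual Academy voting data.

```python
import random

def p_picture_given_screenplay(model, n_voters=501, n_trials=2000):
    """Estimate P(Argo wins Best Picture | Argo wins Best Adapted
    Screenplay) under three stylized voter models (hypothetical,
    for illustration only):
      'positive'    - each voter backs one film in both categories
      'negative'    - each voter splits, one film per category
      'independent' - each voter decides each category separately
    """
    won_screenplay = won_both = 0
    for _ in range(n_trials):
        screenplay = picture = 0  # net votes: Argo minus Lincoln
        for _ in range(n_voters):
            fan_of_argo = random.random() < 0.5
            if model == 'positive':
                s = p = 1 if fan_of_argo else -1
            elif model == 'negative':
                s = 1 if fan_of_argo else -1
                p = -s  # vote the other film for Best Picture
            else:  # 'independent'
                s = 1 if random.random() < 0.5 else -1
                p = 1 if random.random() < 0.5 else -1
            screenplay += s
            picture += p
        if screenplay > 0:          # Argo won Best Adapted Screenplay
            won_screenplay += 1
            if picture > 0:         # ...and Best Picture too
                won_both += 1
    return won_both / max(won_screenplay, 1)

for m in ('positive', 'negative', 'independent'):
    print(m, round(p_picture_given_screenplay(m), 2))
# positive -> 1.0, negative -> 0.0, independent -> ~0.5
```

In the extreme versions of the stories the answer is deterministic; the interesting empirical question is where real Oscar voting falls between these poles.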
In politics we assume massively high positive correlation, especially as the campaign enters the final months. National trends move states up or down as a pack, rather than states independently shifting between candidates.
But the answer is not as clear for the Oscars, where I do not have the historical data to answer this question with any significance; thus, I am opening the question up to you. We have created a game where you can vote on the likelihoods of the individual categories, groups of categories, and the total number of Oscars per movie. Please participate, as your answers are crucial to my research and writing on this topic!
This column syndicates with the HuffingtonPost.
DavidMRothschild on February 01, 2013 @ 11:47AM
I spent several weeks this winter immersed in spreadsheets full of historical Oscar data to explore methods of using fundamentals to predict Oscar winners. Fundamental models work really well in forecasting political elections, where significant categories of data include: past election results, incumbency, presidential approval, ideology, economic indicators, and biographical data. Yet, fundamental models are much less effective in forecasting awards shows, where they would include categories such as: studio inputs, box office success, subjective ratings, Oscar nominations, and biographical data. The reason is simple: prior to the other awards shows, there is a dearth of variables that properly identify individual award categories, as most data is just movie specific.
But there are two goals of fundamental models: forecasting and determining which variables have predictive power. While fundamental models do not make great forecasts for the Oscars relative to other data, including prediction markets, they can still provide insight into which variables we should follow.
All of the insights in this column concern the predictive power of variables, conditional on a movie getting an Oscar nomination, as of the time of the nomination. How well a movie does at the box office (especially after a few weeks), its popular ratings, and how many nominations it receives are all significant predictive variables.
Studio Inputs: This category includes variables like: budget, release date, genre, and when the movie goes to wide release. Some of these variables correlate strongly with whether a movie gets a nomination, but conditional on being a nominee, they are not predictive of the eventual winner. For example, movies released late in the year are more likely to get an Oscar nomination than movies released in the spring, but conditional on getting a nomination, they are no more likely to win.
Box Office Success: This category includes variables like: gross revenue, screens, average gross revenue per screen, these values in the first week of wide release and the first four weeks of wide release, and many other combinations. Between gross revenue and number of screens, there are some really interesting variables to consider. The analysis is further complicated by the staggered openings of many Oscar-nominated movies. After much investigation, the predictive power in this category turns out to be highly correlated with the change that happens over the first few weeks. A key inflection point appears to be between weeks four and five. For Best Picture I follow this variable closely: 2*(Week 5 Gross) - (Week 4 Gross).
From week four to week five, Argo went from $13.3 million to $9.0 million, while Lincoln went from $18.0 million to $12.4 million. Thus, by this rubric, Lincoln has a slightly healthier $6.8 million to Argo's $4.7 million, but this is not a significant difference.
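For concreteness, here is the arithmetic behind that comparison, using the weekly grosses quoted above; the helper function name is mine, not an established metric.

```python
# Momentum variable described above: 2*(week-5 gross) - (week-4 gross),
# in millions of dollars.
def box_office_momentum(gross_week4, gross_week5):
    return 2 * gross_week5 - gross_week4

print(round(box_office_momentum(13.3, 9.0), 1))   # Argo:    2*9.0  - 13.3 = 4.7
print(round(box_office_momentum(18.0, 12.4), 1))  # Lincoln: 2*12.4 - 18.0 = 6.8
```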
Subjective Ratings: This category includes variables like: popular and critical ratings, along with the MPAA rating. In the battle between popular and critical ratings, the people win! Popular ratings dwarf critical ratings in predictive power.
Interestingly, Lincoln and Argo are tied in critical ratings, but Argo leads Lincoln 93 to 86 in popular ratings.
Oscar Nominations: It is no surprise that Oscar voters value their own judgment: movies with more nominations tend to win more Oscars! The number of Oscar nominations a movie receives has significant and meaningful predictive power.
In this category, Lincoln dominates with 12 nominations to Argo's 7 nominations.
Biographical Data: This category includes variables like: age, previous nominations, previous wins, and lifetime wins. Nominations and wins certainly have predictive power in the four main acting categories: actor, actress, supporting actor, and supporting actress. For these categories, more nominations are a positive predictive sign. And while it is not the case in the main categories, in less well known categories repeated victories by the same people are more common and correlate significantly with victory.
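To make the overall idea concrete, here is a minimal sketch of a fundamentals-style model: a logistic regression of the win outcome on a few of the variables above, conditional on nomination. All training rows, feature values, and resulting coefficients below are illustrative placeholders, not the historical data or the model behind this column's predictions.

```python
# A minimal sketch of a fundamentals-style model, conditional on
# nomination. Training rows are illustrative placeholders, NOT the
# actual historical data behind this column.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per nominee: [box-office momentum ($M, 2*wk5 - wk4),
#                        popular rating (0-100), total nominations]
X_train = np.array([
    [5.1, 91, 10], [2.0, 78, 4], [7.3, 88, 12], [1.1, 70, 5],
    [3.9, 85, 8],  [0.8, 66, 3], [6.2, 90, 11], [2.5, 74, 6],
])
y_train = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = won Best Picture

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Momentum and popular ratings for Argo and Lincoln come from the
# post above; raw scores would then be renormalized across all of
# the nominees in the category.
nominees = {"Argo": [4.7, 93, 7], "Lincoln": [6.8, 86, 12]}
for name, features in nominees.items():
    score = model.predict_proba([features])[0, 1]
    print(f"{name}: raw win score {score:.2f}")
```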
This column syndicates with the HuffingtonPost.
DavidMRothschild on January 29, 2013 @ 12:35PM
The most exciting movie-to-movie competition in this year's Oscars is between Ben Affleck's Argo and Steven Spielberg's Lincoln, which will play out in three categories: Best Picture, Best Director, and Best Adapted Screenplay. At the beginning of this Oscar season, with the Oscar nominations on January 10, Lincoln held a big lead in all three categories, but everything that can move has shifted towards Argo since then.
While the Oscar voters nominated Steven Spielberg for his directing of Lincoln, they did not nominate Ben Affleck. Winning an Oscar is a two-stage process: first there is the nomination and then there is the main award, and Affleck's snub had two main impacts. First, the lack of competition made it much more likely that Spielberg wins Best Director (and impossible for Affleck to win!). Second, it is extremely rare for a movie to win Best Picture when its director did not receive a nomination. So, the nominations served as a key leading indicator, bolstering Lincoln's early position as the favorite for Best Picture.
Yet, the awards shows have been going Argo's way. First, Argo (Best Picture, Drama) and Ben Affleck (Best Director) won in head-to-head competitions at the Golden Globes. Second, Argo (Outstanding Ensemble in a Motion Picture) won at the SAG Awards. Lincoln was leading heavily for Best Picture coming out of the nominations, with nearly 90 percent likelihood of winning the Oscar; Argo was second, but with only 5 percent. After the Golden Globes, my predictions did not waver much; the main difference was Argo clearing out the rest of the field. But Argo's victory at the SAG Awards (along with the PGA award for producing) sent my numbers flying, with Argo now up to 55 percent and Lincoln down to 45 percent. Essentially, this is now a toss-up.
The earlier awards shows do not have a category for adapted screenplay, but I think it is an interesting proxy fight between Argo and Lincoln. My prediction currently still gives Tony Kushner's adaptation for Lincoln, based on Doris Kearns Goodwin's Team of Rivals, the edge at nearly 65 percent over Chris Terrio's adaptation for Argo at 35 percent.
This column syndicates with the HuffingtonPost.
DavidMRothschild on January 28, 2013 @ 10:39AM
Most of the discussions around the Oscars focus on the six main categories, but there are 18 other categories with awards on Oscar night. Six categories focus on the best picture in a certain class. Not surprisingly, my predictions tend to favor the better known movies in those categories, including: Brave as animated feature, Searching for Sugar Man as documentary feature, and Amour as foreign language film. The other twelve categories focus on the key elements of major movies. While these predictions also favor mainstream movies, they are not necessarily the ones dominating the main categories: Life of Pi leads in three, Zero Dark Thirty in three, Anna Karenina in two, and Lincoln in only one.
The six class-specific best picture categories offer a mix of well-known and more obscure movies. The most likely winners in the three feature-length film categories have all received mainstream coverage. In the animated feature category, Pixar's Brave, at just over 50 percent likelihood of winning the Oscar, is competing against Tim Burton and Disney's Frankenweenie at 30 percent. In the documentary feature category, Searching for Sugar Man, at 88 percent likelihood of victory, is competing against four lesser-known competitors. Searching for Sugar Man has grossed over $3 million already and peaked at 157 theaters, both of which are serious distribution numbers for a documentary. Amour, with the recognition of its nomination for Best Picture and its Golden Globe win for best foreign language film, is dominating the foreign language category at over 95 percent to win. The three short-film categories (animated, documentary, and live-action) are much harder to predict. But Open Heart, by HBO, is a strong favorite in the documentary category at about 75 percent.
Two fun categories are best original song and best original score. Adele's Skyfall, the theme song for the new Bond movie of the same name, is our heavy favorite to win best original song at over 85 percent. Its main competition is Suddenly, sung by Hugh Jackman and the only new song in Les Miserables. In a much tighter race, we have Life of Pi's original score at just over 60 percent to win best original score, over John Williams' Lincoln at about 30 percent likelihood.
Two tight and compelling categories are best original and best adapted screenplay. Best adapted screenplay is a close fight between Lincoln and Argo. Lincoln is my favorite at just over 65 percent, but Argo is a close second. Best original screenplay is a three-way fight between Zero Dark Thirty, Django Unchained, and Amour. Django Unchained is a narrow favorite at almost 40 percent to Zero Dark Thirty's 35 percent.
Eight additional categories focus on key elements of putting together a movie:
•Costume Design: I have Anna Karenina as the heavy favorite for costume design, at over 75 percent likelihood of victory. Les Miserables is the main competitor at just below 20 percent.
•Production Design: Anna Karenina, at 35 percent, holds a much slimmer lead over Les Miserables (25 percent), Lincoln (20 percent), and Life of Pi (15 percent) for production design.
•Visual Effects: Life of Pi is up over 80 percent for this Oscar with The Hobbit as a distant second.
•Makeup and Hairstyling: The Hobbit leads this category at just over 50 percent, trailed by Les Miserables at 35 percent.
•Film Editing: Zero Dark Thirty is up just over 50 percent relative to Argo at just under 40 percent.
•Cinematography: Life of Pi is the heavy favorite at nearly 95 percent.
•Sound Mixing: Les Miserables at nearly 80 percent to Skyfall at 15 percent.
•Sound Editing: Zero Dark Thirty at nearly 60 percent to Skyfall at 20 percent and Life of Pi at 15 percent.
This column syndicates with the HuffingtonPost.
DavidMRothschild on January 17, 2013 @ 5:14PM
Branching out from politics and economics, I have been examining Oscar predictions over the last few weeks. While I approach the science of predictions the same way for both political elections and the Oscars, there are some key differences. When I forecast politics I utilize four main sources of data: fundamental data (e.g., economic indicators, incumbency, etc.), prediction markets, polls, and user-generated data. Two of these sources, polls and fundamental data, are much less useful for the Oscars. This places greater strain on the other two sources: prediction markets and user-generated data.
Early in an election cycle I rely on the fundamental data to provide a baseline prediction for all of the elections. My model was very accurate for 2012, correctly predicting 50 of 51 Electoral College elections in mid-February. The same two candidates run in all 51 Electoral College races; thus, there is no state-by-state difference for some key fundamental categories: presidential approval, incumbency, and home state. But there is meaningful state-by-state identification for other key categories: past election results, economic indicators, and state-level ideology measures. This helps fundamental models provide extremely accurate early forecasts.
Fundamental data for movies do not have the same type of identification that they have in elections. In many of the 24 categories, the same set of movies is running: Lincoln (12), Life of Pi (11), etc. Yet, most of the fundamental data on movies are not category specific: studio input choices (budget, release date, genre), success with the general audience (gross revenue and screens by week), and ratings. There is person-level data available for some categories, but there is little objective data on the value or rating of any one person's role relative to the overall movie. This makes fundamental models for the Oscars very imprecise.
As the election cycle progresses, I incorporate polling and prediction market data into my forecasts; this data allows me to see how the forecasts adjust to the main events of the campaign. Polls collect the voting intentions of a random sample of a representative group of voters, and historical data allows me to project that polling data to Election Day. Prediction markets gather the expectations of a self-selected group of high-information users. My models phase out fundamental data sharply after Labor Day and rely almost exclusively on these two reliable sources of information.
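Here is a minimal sketch of that kind of blending: a forecast that shares weight between a fundamentals-based probability and a polls/markets-based probability, then phases the fundamentals out late in the cycle. The weight schedule is an illustrative assumption, not the actual weighting in my models.

```python
# Blend a fundamentals-based probability with a polls/markets-based
# probability, phasing fundamentals out late in the cycle.
# The schedule below is an illustrative assumption.
def blended_forecast(p_fundamentals, p_polls_markets, days_out,
                     phase_out_window=60):
    """Shift weight from fundamentals to polls/markets linearly
    inside the final `phase_out_window` days."""
    if days_out >= phase_out_window:
        w = 0.5                                # early-cycle weight
    else:
        w = 0.5 * days_out / phase_out_window  # sharp phase-out
    return w * p_fundamentals + (1 - w) * p_polls_markets

# e.g., fundamentals say 70 percent, polls/markets say 58 percent
for days in (120, 60, 30, 0):
    print(days, round(blended_forecast(0.70, 0.58, days), 3))
```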
There is no reliable polling of the voters in the Academy of Motion Picture Arts and Sciences. Rather than citizens who have reached the age of 18, the Oscars have a more select group of members. While political pollsters face perilously low response rates in conjunction with the uptick in cellphone-only households, any potential Oscar pollster would have to overcome much greater obstacles to reach this elusive collection of movie insiders.
Fortunately, the Oscars are an ideal use case for prediction markets. For big elections, the vast majority of the information in prediction markets is the latest polls. Prediction markets have the advantage of being able to digest late-breaking events, and they are especially useful earlier in the cycle and in situations where there is less polling. They are the most reliable data available, but they only offer a marginally better forecast than polling data in trustworthy hands. Yet, for the Oscars there is a lot of information about likely outcomes, but little of it is objective, digestible data. It is common knowledge that Daniel Day-Lewis shined in Lincoln, but it is hard to pin down a reliable, statistically significant data point that demonstrates his role in the movie's success. Dispersed information among dispersed, informed users makes this an ideal case where prediction markets can shine relative to other forecasting data. They did very well in forecasting the Oscars in both 2011 and 2012.
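As a toy illustration of reading such markets, here is a sketch that turns raw contract prices for one category into probabilities that sum to one. The prices are made-up examples, and real market data would also call for debiasing (e.g., for favorite-longshot bias) before normalization.

```python
# Turn raw prediction-market contract prices (cents on the dollar)
# into win probabilities for one category. Prices are made-up.
def prices_to_probabilities(prices):
    """Normalize contract prices so the implied probabilities
    across nominees sum to one."""
    total = sum(prices.values())
    return {name: p / total for name, p in prices.items()}

best_picture = {"Argo": 0.56, "Lincoln": 0.47, "Life of Pi": 0.04}
print(prices_to_probabilities(best_picture))
# -> Argo ~0.52, Lincoln ~0.44, Life of Pi ~0.04
```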
Experimental user data proved valuable during the 2012 election, but it was not necessary. In this column, I showed the value of the Xbox data we collected both during debates and in a daily panel. But its main use case is as a fun engagement device and for future research. I also showed off my prediction games that explored correlations between states, but that promise lies in the future.
Experimental user data will prove not only valuable, but necessary, in forecasting the Oscars. There are not going to be detailed prediction markets in many categories, so we are going to rely on our users to supplement those categories. Further, we are excited to learn more about the correlations between different categories from the insights our users provide. How does winning the Oscar for best actor correlate with winning the Oscar for best picture? Stay tuned for the launch of these games later this month.
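As a preview of the kind of quantity these games could estimate, here is a hypothetical sketch: given a user's probabilities for two category outcomes and for the joint event, we can back out the implied correlation between the two wins. All numbers are illustrative.

```python
# Implied Pearson correlation of two Bernoulli (win/lose) events
# from P(A), P(B), and P(A and B). Numbers are illustrative.
import math

def implied_correlation(p_a, p_b, p_ab):
    cov = p_ab - p_a * p_b
    return cov / math.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))

# e.g., a user says: P(Lincoln wins Best Actor) = 0.70,
# P(Lincoln wins Best Picture) = 0.45, P(both) = 0.40
print(round(implied_correlation(0.70, 0.45, 0.40), 2))  # ~0.37
```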
This column syndicates with the HuffingtonPost.