PredictWise Blog

My Confident Predictions for the Oscars (Syndicated on the Huffington Post)


I am stunned at the confidence of my predictions for the Oscars, seen in real-time here and here. Of the 24 categories that the Academy of Motion Picture Arts and Sciences will present Oscars for live this Sunday, the favorite in eight of them is 95 percent or more likely to win. Yet, despite my concern that no nominee should be that close to a sure thing, I have no choice but to stick to the data and models. My data and models have proven correct over and over, while hunches and gut checks are prone to failure.

I created and tested these models using historical data and then released them to run prior to the Oscar nominations; I do not make any tweaks to my models once they are live, because I do not want to inadvertently bias my results with considerations of the current predictions. The Oscar predictions rely mainly on de-biased, aggregated prediction markets. This method has proven not just accurate; it has the added benefits of updating in real time and of being so scalable that I can provide predictions in all 24 categories. Further, I incorporate user-generated data to help determine the correlations between categories, within movies, that I use to create predictions of the number of Oscars for each movie.
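For readers curious what "de-biased" can mean in practice, here is a minimal sketch of one standard favorite-longshot correction for prediction-market prices; the specific transformation, the 1.64 parameter, and the sample price are illustrative assumptions, not the exact method behind these forecasts.

```python
from scipy.stats import norm

def debias(price, theta=1.64):
    """Map a raw market price (0 to 1) to a calibrated win probability.

    A common favorite-longshot correction: stretch the price on the probit
    scale so favorites get pushed toward 1 and longshots toward 0. The
    theta value here is illustrative, not the production parameter.
    """
    return norm.cdf(theta * norm.ppf(price))

# Hypothetical example: a strong favorite trading at 90 cents on the dollar.
print(f"raw price 0.90 -> de-biased probability {debias(0.90):.2f}")
```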

The biggest errors in my 2012 election forecasting were painfully obvious to me, even as I published my forecasts in mid-February, but I stuck to them. The errors came from state-by-state predictions of vote share for Massachusetts and Utah. Despite Rick Santorum dominating the nation's polling for the Republican nomination, I was extremely confident of a Mitt Romney nomination. Our model demanded the home state of the Republican nominee, and we provided Romney's official home state of Massachusetts. "Everyone" knew that he would get a home-state bump in Utah, where he has religious roots and was instrumental in saving the 2002 Winter Games, and not in Massachusetts. But there is no objective data for swapping out the official home state. Making arbitrary model/data changes is bad science and costly; I design my models to be easily scalable to new questions and categories of questions, and I do not want to manually review each individual prediction for extra data. So, I proudly overestimated Romney's vote share in Massachusetts (although I still had him losing!) and underestimated his vote share in Utah (although I still had him winning!). While that particular hunch was correct, over time science is much more reliable than hunches. So, I am sticking to my increasingly confident predictions going into Oscar night.

Further, while the eight strong predictions are very salient, the average prediction is at its exact historical level. The average likelihood of victory for the favorite nominee across the 24 categories is 75 percent. In five categories the favorite nominee is not even 50 percent likely to win! Thus, if my model is properly calibrated, I should expect to get only about 18 of the 24 categories correct (i.e., 75 percent of the categories). This is the exact same average likelihood that the market-based forecasts provided in 2011 and 2012, and in both years, 3 of every 4 categories landed correctly.
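The 18-of-24 figure is just the sum of the favorites' win probabilities. A quick sketch with made-up per-category numbers, chosen to roughly match the pattern above rather than the actual forecasts:

```python
# Hypothetical win probabilities for the favorite in each of the 24 categories:
# eight near-locks, five genuine toss-ups, and the rest in between. These are
# illustrative values, not the actual forecasts; they average to about 0.75.
favorite_probs = [0.95] * 8 + [0.75] * 11 + [0.44] * 5

expected_correct = sum(favorite_probs)  # expected number of favorites that win
print(f"average favorite probability: {expected_correct / len(favorite_probs):.2f}")
print(f"expected categories called correctly: {expected_correct:.1f} of {len(favorite_probs)}")
```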

I leave you with a question: is there any particular likelihood, in any of the 24 categories, against which you would place a large wager? What looks too high and what looks too low to you? I invite you to go on the WiseQ Oscar Game and prove me wrong and yourself right!

This column is syndicated on the Huffington Post.

After addressing all 24 categories individually, it is an interesting and meaningful follow-up to consider how they interact. If Lincoln wins the Oscar for Best Adapted Screenplay, does that make Lincoln's likelihood of winning Best Picture increase, decrease, or is there no correlation?

A positive correlation story assumes that voters like (or know) certain movies and will vote for those movies in multiple categories; thus, as movies win earlier categories, they are more likely to win later categories. For example, in the most extreme situation, assume voters are either Argo or Lincoln fans. Any voter that votes for Argo (Lincoln) for Best Adapted Screenplay will also vote for Argo (Lincoln) for Best Picture. Thus, if Argo (Lincoln) wins Best Adapted Screenplay it becomes extremely likely to win Best Picture.

A negative correlation story assumes that voters want to spread around their accolades by giving different movies votes in different categories; thus, as movies win earlier categories, they are less likely to win later categories. For example, in the most extreme situation, assume voters like Argo and Lincoln and want them both to have victories. Any voter that votes for Argo (Lincoln) for Best Adapted Screenplay will vote for Lincoln (Argo) for Best Picture. Thus, if Argo (Lincoln) wins Best Adapted Screenplay it becomes extremely likely the other will win Best Picture.

A caveat is that it is very hard to coordinate which direction to split votes. If voters randomly split their vote, both categories would be very close. But, we have an easy option for this year's Oscars, because the voters did not nominate Argo's Ben Affleck for Best Director. So, voters who wish to split the prominent Oscars would vote Lincoln's Steven Spielberg for Best Director and Argo for Best Picture. Of course, a positive correlation story would doom Argo for Best Picture, because the same voters that kept Affleck from even being nominated for Best Director would not vote for Argo for Best Picture.

A total independence story assumes that voters look at each category independently and vote for the nominee they think deserves the award the most.
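To make the three stories concrete, here is a small Monte Carlo sketch of the two-movie example above; the 50/50 split of voter preferences, the electorate size, and the simple plurality rule are illustrative assumptions, not a model of the actual Academy vote.

```python
import random

def p_picture_given_screenplay(mode, n_voters=101, n_trials=10000):
    """Estimate P(Argo wins Best Picture | Argo wins Best Adapted Screenplay)
    under three stylized voter behaviors."""
    joint = screenplay_wins = 0
    for _ in range(n_trials):
        # Each voter is an Argo fan or a Lincoln fan with equal probability.
        argo_fans = sum(random.random() < 0.5 for _ in range(n_voters))
        argo_screenplay = argo_fans > n_voters / 2
        if mode == "positive":       # fans vote their movie in both categories
            argo_picture = argo_screenplay
        elif mode == "negative":     # voters deliberately split the two awards
            argo_picture = not argo_screenplay
        else:                        # independent: an unrelated second vote
            argo_picture = random.random() < 0.5
        if argo_screenplay:
            screenplay_wins += 1
            joint += argo_picture
    return joint / screenplay_wins

for mode in ("positive", "negative", "independent"):
    print(f"{mode:>11}: {p_picture_given_screenplay(mode):.2f}")
```

Under the extreme positive story the conditional probability goes to 1, under the extreme negative story it goes to 0, and under independence it stays near the unconditional 50 percent.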

In politics we assume massively high positive correlation, especially as the campaign enters the final months. National trends move states upward or downward as a pack, rather than states shifting independently between candidates.

But, the answer is not as clear for the Oscars, where I do not have the historical data to answer this question with any significance; thus, I am opening the question up to you. We have created a game where you can vote on the likelihoods of the individual categories, groups of categories, and the overall number of Oscars per movie. Please participate, as your answers are crucial to my research and writing on this topic!

This column is syndicated on the Huffington Post.

What is predictive of the Oscars? (Syndicated on the Huffington Post)


I spent several weeks this winter immersed in spreadsheets full of historical Oscar data, exploring methods of using fundamentals to predict Oscar winners. Fundamental models work really well in forecasting political elections, where significant categories of data include: past election results, incumbency, presidential approval, ideology, economic indicators, and biographical data. Yet fundamental models are much less effective in forecasting awards shows, where they would include categories such as: studio inputs, box office success, subjective ratings, Oscar nominations, and biographical data. The reason is simple: prior to the other awards shows, there is a dearth of variables that properly identify individual award categories, as most data is just movie-specific.

But, there are two goals of fundamental models: forecasting and determining which variables have predictive power. While fundamental models do not make great forecasts for the Oscars relative to other data including prediction markets, they can still provide insight into which variables we should follow.

All of the insights in this column are into the predictive power of variables, conditional on a movie getting a nomination for an Oscar, at the time of the nomination. How well a movie does at the box office, especially after a few weeks, the popular ratings, and how many nominations the movie receives are all significant predictive variables.
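As an illustration of how one might test that kind of claim, here is a hedged sketch of a simple fundamentals regression on historical nominees; the file name, column names, and feature set are hypothetical stand-ins, not the actual dataset or model specification.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical per-nominee dataset: one row per Oscar nominee, with a binary
# "won" outcome and a few fundamentals measured at nomination time.
nominees = pd.read_csv("oscar_nominees_history.csv")  # illustrative file name
features = ["popular_rating", "nomination_count", "late_box_office_change"]

model = LogisticRegression(max_iter=1000)
model.fit(nominees[features], nominees["won"])

# The sign and size of each coefficient hint at which variables carry
# predictive power, conditional on being nominated.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```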

Studio Inputs: This category includes variables like: budget, release date, genre, and when the movie goes to wide release. Some of these variables correlate strongly with whether a movie gets a nomination, but conditional on being a nominee, they are not predictive of the eventual winner. For example, movies released late in the year are more likely to get a nomination for an Oscar, relative to movies released in the spring, but conditional on getting a nomination, they are no more likely to win the Oscar.

Box Office Success: This category includes variables like: gross revenue, screens, average gross revenue per screen, these values in the first week of wide release and over the first four weeks of wide release, and many other combinations. Between gross revenue and number of screens there are some really interesting variables to consider here. This is further complicated by the staggered opening of many Oscar-nominated movies. After much investigation, the predictive power in this category is highly correlated with the change that happens over the first few weeks. A key inflection point appears to be between weeks four and five. For Best Picture I follow this variable closely: 2*Gross Week 5 - Gross Week 4.

From week four to week five, Argo went from $13.3 million to $9.0 million, while Lincoln went from $18.0 million to $12.4 million. Thus, by this rubric, Lincoln has a slightly healthier $6.8 million to Argo's $4.7 million, but this is not a significant difference.
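For concreteness, here is the rubric applied to the grosses quoted above (figures in millions of dollars):

```python
def week5_momentum(gross_week4, gross_week5):
    """Best Picture box-office rubric from above: 2 * week-5 gross - week-4 gross."""
    return 2 * gross_week5 - gross_week4

# Weekly grosses in millions of dollars, as quoted in the text.
print(f"Argo:    {week5_momentum(13.3, 9.0):.1f}")   # 4.7
print(f"Lincoln: {week5_momentum(18.0, 12.4):.1f}")  # 6.8
```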

Subjective Ratings: This category includes variables like: popular and critical ratings, along with the MPAA rating. In the battle between popular and critical ratings, the people win! Popular ratings dwarf critical ratings in predictive power.

Interestingly, Lincoln and Argo are tied in critical ratings, but Argo leads Lincoln 93 to 86 in popular ratings.

Oscar Nominations: It is no surprise that Oscar voters value their own judgment, and movies with more nominations tend to do well on Oscar night! There is significant and meaningful predictive power in the number of Oscar nominations a movie receives.

In this category, Lincoln dominates with 12 nominations to Argo's 7 nominations.

Biographical Data: This category includes variables like: age, previous nominations, previous wins, and lifetime wins. Nominations and wins certainly have predictive power in the four main acting categories: actor, actress, supporting actor, and supporting actress. For these categories, more nominations are a positive predictive sign. While not the case in the main categories, in the less well known categories repeated victories by the same people are more common and correlate significantly with victory.

This column is syndicated on the Huffington Post.

Argo versus Lincoln (Syndicated on the Huffington Post)


The most exciting movie-to-movie competition in this year's Oscars is between Ben Affleck's Argo and Steven Spielberg's Lincoln, which will play out in three categories: Best Picture, Best Director, and Best Adapted Screenplay. At the beginning of this Oscar season, with the Oscar nominations on January 10, Lincoln held a big lead in all three categories, but everything that can move has shifted toward Argo since then.

While the Oscar voters nominated Steven Spielberg for his directing of Lincoln, they did not nominate Ben Affleck. Winning an Oscar is a two-stage process: first there is the nomination and then there is the main award, and Affleck's snub had two main impacts. First, the lack of competition made it much more likely that Spielberg wins Best Director (and impossible for Affleck to win!). Second, it is extremely rare for a movie to win Best Picture when its director does not receive a nomination. So, the nominations serve as a key leading indicator for Best Picture, bolstering Lincoln's early position as the favorite.

Yet, the awards shows have been going Argo's way. First, Argo (Best Picture in a drama) and Ben Affleck (Best Director) won in head-to-head competitions at the Golden Globes. Second, Argo (Outstanding Ensemble in a Motion Picture) won at the SAG Awards. Lincoln was leading heavily for Best Picture coming out of the nominations, with nearly a 90 percent likelihood of winning the Oscar; Argo was second, but with only 5 percent. After the Golden Globes, my predictions did not waver much; the main difference was Argo clearing out the rest of the field. But Argo's victory at the SAG Awards (along with the PGA award for producing) sent my numbers flying, with Argo now up to 55 percent and Lincoln down to 45 percent. Essentially, this is now a toss-up.

The earlier awards shows do not have a category for adapted screenplay, but I think it is an interesting proxy fight between Argo and Lincoln. My prediction currently still gives Tony Kushner's Lincoln screenplay, based on Doris Kearns Goodwin's Team of Rivals, the edge at nearly 65 percent over Chris Terrio's adaptation of Argo at 35 percent.

This column is syndicated on the Huffington Post.

Most of the discussions around the Oscars focus on the six main categories, but there are 18 other categories with awards on Oscar night. Six categories focus on the best picture in a certain class. Not surprisingly, my predictions tend to favor the better-known movies in those categories, including: Brave as animated feature, Searching for Sugar Man as documentary feature, and Amour as foreign language film. The other twelve categories focus on the key elements of major movies. While these predictions also favor mainstream movies, they are not necessarily the ones dominating the main categories: Life of Pi leads in three, Zero Dark Thirty in three, Anna Karenina in two, and Lincoln in only one.

The six class-specific best picture categories offer a mix of well-known and more obscure movies. The most likely winners in all three feature-length film categories have received mainstream coverage. In the animated feature category, Pixar's Brave, at just over 50 percent likelihood of winning the Oscar, is competing against Tim Burton and Disney's Frankenweenie at 30 percent. In the documentary feature category, Searching for Sugar Man, at 88 percent likely for victory, is competing against four lesser-known competitors. Searching for Sugar Man has grossed over $3 million already and peaked at 157 theaters, both of which are serious distribution numbers for a documentary. Amour, with the recognition of its nomination for Best Picture and its Golden Globe win for best foreign language film, is dominating the foreign language category at over 95 percent to win. The three short categories (animated, documentary, and live-action) are much harder to predict. But Open Heart, by HBO, is a strong favorite in the documentary short category at about 75 percent.

Two fun categories are best original song and best original score. Adele's Skyfall, the theme song for the new Bond movie of the same name, is our heavy favorite to win best original song at over 85 percent. Its main competition is Suddenly, sung by Hugh Jackman and the only new song in Les Miserables. In a much tighter race, we have Life of Pi's original score at just over 60 percent to win best original score, over John Williams' Lincoln at about 30 percent.

Two tight and compelling categories are best original and best adapted screenplay. Best adapted screenplay is a close fight between Lincoln and Argo: Lincoln is my favorite at just over 65 percent, with Argo a close second. Best original screenplay is a three-way fight between Zero Dark Thirty, Django Unchained, and Amour. Django Unchained is a narrow favorite at almost 40 percent to Zero Dark Thirty's 35 percent.

Eight additional categories focus on key elements of putting together a movie:
•Costume Design: I have Anna Karenina as the heavy favorite for costume design, at over 75 percent likelihood of victory. Les Miserables is the main competitor at just below 20 percent.
•Production Design: Anna Karenina, at 35 percent, holds a much slimmer lead over Les Miserables (25 percent), Lincoln (20 percent), and Life of Pi (15 percent) for production design.
•Visual Effects: Life of Pi is up over 80 percent for this Oscar with The Hobbit as a distant second.
•Makeup and Hairstyling: The Hobbit leads this category at just over 50 percent, trailed by Les Miserables at 35 percent.
•Film Editing: Zero Dark Thirty is up just over 50 percent relative to Argo at just under 40 percent.
•Cinematography: Life of Pi is the heavy favorite at nearly 95 percent.
•Sound Mixing: Les Miserables at nearly 80 percent to Skyfall at 15 percent.
•Sound Editing: Zero Dark Thirty at nearly 60 percent to Skyfall at 20 percent and Life of Pi at 15 percent.
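One way to turn per-category probabilities like those above into a "how many Oscars per movie" number is simply to add them up. Here is a rough sketch using only the figures quoted in this post; the values are approximate and cover just a subset of categories, so the totals are illustrative rather than a full 24-category forecast.

```python
from collections import defaultdict

# Approximate per-category win probabilities, rounded from the figures above;
# only the eight craft categories listed in this post are included.
category_probs = {
    "Costume Design":         {"Anna Karenina": 0.75, "Les Miserables": 0.20},
    "Production Design":      {"Anna Karenina": 0.35, "Les Miserables": 0.25,
                               "Lincoln": 0.20, "Life of Pi": 0.15},
    "Visual Effects":         {"Life of Pi": 0.80},
    "Makeup and Hairstyling": {"The Hobbit": 0.50, "Les Miserables": 0.35},
    "Film Editing":           {"Zero Dark Thirty": 0.50, "Argo": 0.40},
    "Cinematography":         {"Life of Pi": 0.95},
    "Sound Mixing":           {"Les Miserables": 0.80, "Skyfall": 0.15},
    "Sound Editing":          {"Zero Dark Thirty": 0.60, "Skyfall": 0.20, "Life of Pi": 0.15},
}

expected_wins = defaultdict(float)
for probs in category_probs.values():
    for movie, p in probs.items():
        expected_wins[movie] += p  # expected wins = sum of win probabilities

for movie, ev in sorted(expected_wins.items(), key=lambda kv: -kv[1]):
    print(f"{movie}: {ev:.2f} expected Oscars from these eight categories")
```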

This column is syndicated on the Huffington Post.