PredictWise Blog

Oscar Predictions on the Move! (Syndicated on the Huffington Post)

I created my Oscar predictions in real-time because real-time movement is an important part of my basic research into predictions, not because I thought the Oscars would provide an interesting domain for movement. I was wrong. In category after category, significant movement in the likely winner provides a window into the power of certain events that occurred on the road to the Oscars. These events include regularly scheduled events, such as awards shows, and idiosyncratic events, such as prominent commentary on certain movies.

Every prediction I make is in real-time for two reasons. First, real-time predictions provide the most up-to-date prediction for the end user whenever that user needs it. For example, it is easy to see with economic or financial predictions that knowing the likely outcome is an important part of major decisions that happen continuously. Movement is a good thing in predictions, because it demonstrates that predictions are absorbing new information that affects the outcome we are predicting. Second, real-time predictions provide a granular track record for exploring when and why movements occur (i.e., what things actually impact the final outcome). Granular predictions allow me to judge the value of a debate, a big advertising buy, a vice-presidential choice, or an awards show, something that cannot be isolated with less frequent indicators.
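
To make the idea of a granular track record concrete, here is a minimal sketch, not PredictWise's actual pipeline, of how a time-stamped prediction series lets you attribute movement to a single event: compare the average probability in a short window before the event with the average in a short window after it. The timestamps, probabilities, and window size below are illustrative.

```python
from datetime import datetime, timedelta

def event_effect(series, event_time, window=timedelta(hours=24)):
    """series: list of (datetime, probability) pairs, sorted by time.

    Returns the change in the average probability from the window just
    before the event to the window just after it (None if data is sparse).
    """
    before = [p for t, p in series if event_time - window <= t < event_time]
    after = [p for t, p in series if event_time < t <= event_time + window]
    if not before or not after:
        return None
    return sum(after) / len(after) - sum(before) / len(before)

# Illustrative series: a nominee's win probability around an awards show.
series = [
    (datetime(2013, 1, 26, 12), 0.35),
    (datetime(2013, 1, 27, 12), 0.38),
    (datetime(2013, 1, 28, 12), 0.55),  # the morning after the show
]
print(event_effect(series, datetime(2013, 1, 27, 20)))  # ~ +0.17
```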

The most obvious movement has been in the best picture category, where Lincoln's original lead has collapsed as award show after award show favored Argo. Shortly after the nominations were released, Argo was a distant second to Lincoln, just 8 percent likely to win. Those award-show wins have since brought Argo to 93 percent.

This theme carried into the adapted screenplay category, where a commanding lead by Lincoln is now a tight proxy fight with Argo. Our data demonstrate a strong positive correlation between the outcomes of these two categories. Lincoln started off with a smaller lead here, at 70 percent likely to win best adapted screenplay, and the change has not been as dramatic, with Argo now leading slightly at 57 percent.

[Figure: movement in the PredictWise Oscar predictions, February 23, 2013]

Sources: Betfair, Hollywood Stock Exchange, Intrade, WiseQ (detailed at PredictWise.com)

Zero Dark Thirty's likelihood has fallen in nearly every one of its strongest categories, including best actress and original screenplay. The implication is that the increased scrutiny of Zero Dark Thirty's depiction of torture will hurt it with the voters. Just after the nominations were released, Zero Dark Thirty's Jessica Chastain was a viable 28 percent to win best actress, but that has plummeted to 5 percent in the last few weeks. Similarly, Zero Dark Thirty was 65 percent likely to win for best original screenplay, with Django Unchained and Amour distant second and third at about 17 and 13 percent likelihood. Today we have Django Unchained leading with 47 percent and Zero Dark Thirty nearly tied with Amour around 25 percent.

By the time Oscar night concludes, we will have a much richer understanding of the value of awards shows and the cost of negative publicity.

If you think you are a better prognosticator than I, please play the new WiseQ Oscars Game and show me how smart you are!

This column syndicates with the HuffingtonPost.

My Confident Predictions for the Oscars (Syndicated on the Huffington Post)

I am stunned at the confidence of my predictions for the Oscars, seen in real-time here and here. Of the 24 categories in which the Academy of Motion Picture Arts and Sciences will present Oscars live this Sunday, the favorite in eight of them is 95 percent or more likely to win. Yet, despite my concern that no nominee should be that heavy a favorite, I have no choice but to stick to the data and models. My data and models have proven correct over and over, while hunches and gut checks are prone to failure.

I created and tested these models using historical data and then released them to run prior to the Oscar nominations; I do not make any tweaks to my models once they are live, because I do not want to inadvertently bias my results with considerations of the current predictions. The Oscar predictions rely mainly on de-biased, aggregated prediction markets. This method has proven not just accurate, but has the added benefits of updating in real-time and of being scalable enough that I can provide predictions in all 24 categories. Further, I incorporate user-generated data to help determine the correlations between categories within a movie, which I use to create predictions on the number of Oscars for each movie.
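
For readers who want a feel for what "de-biased, aggregated prediction markets" means mechanically, here is a minimal sketch under loose assumptions; the functional form of the correction, the exponent, the equal weighting of sources, and the prices below are all illustrative and are not the actual PredictWise model.

```python
def debias(price, alpha=1.5):
    """Push raw market prices away from 0.5 to offset favorite-longshot bias.

    The functional form and the exponent are illustrative assumptions.
    """
    return price**alpha / (price**alpha + (1 - price)**alpha)

def aggregate(category_prices):
    """category_prices: {nominee: [price from each market source]}.

    De-bias each source's price, average the sources with equal weights,
    and renormalize so the category's probabilities sum to one.
    """
    raw = {nominee: sum(debias(p) for p in prices) / len(prices)
           for nominee, prices in category_prices.items()}
    total = sum(raw.values())
    return {nominee: value / total for nominee, value in raw.items()}

# Illustrative Best Picture prices from three hypothetical sources.
best_picture = {
    "Argo":    [0.90, 0.94, 0.93],
    "Lincoln": [0.08, 0.05, 0.06],
}
print(aggregate(best_picture))
```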

The biggest errors in my 2012 election forecasting were painfully obvious to me even as I published my forecasts in mid-February, but I stuck with them. The errors came from the state-by-state predictions of vote share in Massachusetts and Utah. Despite Rick Santorum dominating the nation's polling for the Republican nomination at the time, I was extremely confident of a Mitt Romney nomination. Our model demanded the home state of the Republican nominee, and we provided Romney's official home state of Massachusetts. "Everyone" knew that he would get a home-state bump in Utah, where he has religious roots and was instrumental in saving the 2002 Winter Games, and not in Massachusetts. But there is no objective data for swapping out the official home state. Making arbitrary model or data changes is bad science and costly; I design my models to be easily scalable to new questions and categories of questions, and I do not want to manually review each individual prediction for extra data. So, I proudly overestimated Romney's vote share in Massachusetts (although I still had him losing!) and underestimated his vote share in Utah (although I still had him winning!). While that particular hunch was correct, over time science is much more reliable than hunches. So, I am sticking to my increasingly confident predictions going into Oscar night and feeling good about them.

Further, while the eight strong predictions are very salient, the average prediction is at its exact historical level. The average likelihood of victory for the favorite nominee across the 24 categories is 75 percent. In five categories the favorite nominee is not even 50 percent likely to win! Thus, if my model is properly calibrated, I should expect to get only about 18 of the 24 categories correct (i.e., 75 percent of the categories). This is the same average likelihood that the market-based forecasts provided in 2011 and 2012, and in both years roughly 3 of every 4 categories landed correctly.
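
The arithmetic behind that expectation is simple: if the favorites' probabilities are well calibrated, the expected number of correct calls is just their sum. A quick sketch, using 24 hypothetical per-category probabilities chosen only to roughly match the 75 percent average described above:

```python
# Hypothetical per-category favorite probabilities (not the actual forecasts):
# eight near-locks, five genuine toss-ups, and eleven in between.
favorite_probs = [0.96] * 8 + [0.45] * 5 + [0.73] * 11

expected_correct = sum(favorite_probs)                  # ~ 18 of 24
average_prob = expected_correct / len(favorite_probs)   # ~ 0.75
print(f"expected correct: {expected_correct:.1f} of {len(favorite_probs)}, "
      f"average favorite probability: {average_prob:.2f}")
```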

I leave you with a question: is there any particular likelihood, in any of the 24 categories, against which you would place a large wager? What looks too high and what looks too low to you? I invite you to go on the WiseQ Oscar Game and prove me wrong and yourself right!

This column syndicates with the HuffingtonPost.

After addressing all 24 categories individually, it is an interesting and meaningful follow-up to consider how they interact. If Lincoln wins the Oscar for Best Adapted Screenplay, does that make Lincoln's likelihood of winning Best Picture increase, decrease, or is there no correlation?

A positive correlation story assumes that voters like (or know) certain movies and will vote for those movies in multiple categories; thus, as movies win earlier categories, they are more likely to win later categories. For example, in the most extreme situation, assume voters are either Argo or Lincoln fans. Any voter that votes for Argo (Lincoln) for Best Adapted Screenplay will also vote for Argo (Lincoln) for Best Picture. Thus, if Argo (Lincoln) wins Best Adapted Screenplay it becomes extremely likely to win Best Picture.

A negative correlation story assumes that voters want to spread around their accolades by giving different movies votes in different categories; thus, as movies win earlier categories, they are less likely to win later categories. For example, in the most extreme situation, assume voters like Argo and Lincoln and want them both to have victories. Any voter that votes for Argo (Lincoln) for Best Adapted Screenplay will vote for Lincoln (Argo) for Best Picture. Thus, if Argo (Lincoln) wins Best Adapted Screenplay it becomes extremely likely the other will win Best Picture.

A caveat is that it is very hard to coordinate which direction to split votes. If voters randomly split their vote, both categories would be very close. But, we have an easy option for this year's Oscars, because the voters did not nominate Argo's Ben Affleck for Best Director. So, voters who wish to split the prominent Oscars would vote Lincoln's Steven Spielberg for Best Director and Argo for Best Picture. Of course, a positive correlation story would doom Argo for Best Picture, because the same voters that kept Affleck from even being nominated for Best Director would not vote for Argo for Best Picture.

A total independence story assumes that voters look at each category independently and vote for the nominee they think deserves the award the most.
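
To see how sharply these three stories differ, here is a minimal sketch that simulates ballots under each rule and reports how often the same movie wins both Adapted Screenplay and Best Picture. The voter behavior and vote shares are invented for illustration; this is not based on real ballot data.

```python
import random

def simulate(story, n_voters=1_001, n_elections=200):
    """Fraction of simulated elections in which one movie sweeps both categories."""
    sweeps = 0
    for _ in range(n_elections):
        screenplay_votes, picture_votes = [], []
        for _ in range(n_voters):
            fav = random.choice(["Argo", "Lincoln"])      # each voter's preferred movie
            other = "Lincoln" if fav == "Argo" else "Argo"
            screenplay_votes.append(fav)
            if story == "positive":       # vote the favorite in both categories
                picture_votes.append(fav)
            elif story == "negative":     # spread the accolades across categories
                picture_votes.append(other)
            else:                         # independence: a fresh judgment per category
                picture_votes.append(random.choice(["Argo", "Lincoln"]))
        screenplay_winner = max(set(screenplay_votes), key=screenplay_votes.count)
        picture_winner = max(set(picture_votes), key=picture_votes.count)
        sweeps += screenplay_winner == picture_winner
    return sweeps / n_elections

for story in ["positive", "negative", "independence"]:
    print(story, simulate(story))   # ~1.0, ~0.0, ~0.5 respectively
```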

In politics we assume massively high positive correlation, especially as the campaign enters the final months. National trends move states upward or downward as a pack, rather than states shifting independently between candidates.

But, the answer is not as clear for the Oscars, where I do not have the historical data to answer this question with any significance; thus, I am opening the question up to you. We have created a game where you can vote on the likelihoods of the individual categories, groups of categories, and the overall number of Oscars per movie. Please participate, as your answers are crucial to my research and writing on this topic!

This column syndicates with the HuffingtonPost.

What is predictive of the Oscars? (Syndicated on the Huffington Post)

I spent several weeks this winter immersed in spreadsheets full of historical Oscar data to explore methods of using fundamentals to predict Oscar winners. Fundamental models work really well in forecasting political elections, where significant categories of data include past election results, incumbency, presidential approval, ideology, economic indicators, and biographical data. Yet, fundamental models are much less effective in forecasting awards shows, where the data would include categories such as studio inputs, box office success, subjective ratings, Oscar nominations, and biographical data. The reason is simple: prior to the other awards shows, there is a dearth of variables that distinguish individual award categories, as most data is just movie-specific.

But, there are two goals of fundamental models: forecasting and determining which variables have predictive power. While fundamental models do not make great forecasts for the Oscars relative to other data including prediction markets, they can still provide insight into which variables we should follow.

All of the insights in this column concern the predictive power of variables at the time of the nomination, conditional on a movie receiving an Oscar nomination. How well a movie does at the box office (especially after a few weeks), its popular ratings, and how many nominations it receives are all significant predictive variables.

Studio Inputs: This category includes variables like budget, release date, genre, and when the movie goes to wide release. Some of these variables correlate strongly with whether a movie gets a nomination, but conditional on being a nominee, they are not predictive of the eventual winner. For example, movies released late in the year are more likely to get a nomination for an Oscar, relative to movies released in the spring, but conditional on getting a nomination, they are no more likely to win the Oscar.

Box Office Success: This category includes variables like gross revenue, screens, average gross revenue per screen, these values over the first week and the first four weeks of wide release, and many other combinations. Between gross revenue and number of screens there are some really interesting variables to consider here, and the picture is further complicated by the staggered openings of many Oscar-nominated movies. After much investigation, the predictive power in this category is concentrated in the change that happens over the first few weeks, and a key inflection point appears to be between weeks four and five. For Best Picture I follow this variable closely: 2 x (gross in week 5) - (gross in week 4).

From week four to week five, Argo went from $13.3 million to $9.0 million, while Lincoln went from $18.0 million to $12.4 million. Thus, on this rubric, Lincoln has a slightly healthier $6.8 million to Argo's $4.7 million, but this is not a significant difference.
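
For clarity, here is the rubric written out as a tiny calculation; the dollar figures (in millions) are the ones quoted above.

```python
def momentum(gross_week4, gross_week5):
    """The week-four-to-week-five rubric: 2 * (week 5 gross) - (week 4 gross)."""
    return 2 * gross_week5 - gross_week4

print(f"Argo:    {momentum(13.3, 9.0):.1f}")    # 4.7
print(f"Lincoln: {momentum(18.0, 12.4):.1f}")   # 6.8
```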

Subjective Ratings: This category includes variables like popular and critical ratings, along with the MPAA rating. In the battle between popular and critical ratings the people win! Popular ratings dwarf critical ratings in predictive power.

Interestingly, Lincoln and Argo are tied in critical ratings, but Argo leads Lincoln 93 to 86 in popular ratings.

Oscar Nominations: It is no surprise that the Oscar voters value their own judgment and movies with more nominations tend to do well in winning Oscars! There is significant and meaningful predictive power in the number of Oscar nominations a movie receives.

In this category, Lincoln dominates with 12 nominations to Argo's 7 nominations.

Biographical Data: This category includes variables like age, previous nominations, previous wins, and lifetime wins. Nominations and wins certainly have predictive power in the four main acting categories: actor, actress, supporting actor, and supporting actress. For these categories, more nominations is a positive predictive sign. Unlike in the main categories, in the less well-known categories repeated victories by the same people are more common and correlate significantly with victory.

This column syndicates with the HuffingtonPost.

Argo versus Lincoln (Syndicated on the Huffington Post)

The most exciting movie-to-movie competition in this year's Oscars is between Ben Affleck's Argo and Steven Spielberg's Lincoln, which will play out in three categories: Best Picture, Best Director, and Best Adapted Screenplay. At the beginning of this Oscar season, with the Oscar nominations on January 10, Lincoln held a big lead in all three categories, but everything that can move has shifted toward Argo since then.

While the Oscar voters nominated Steven Spielberg for his directing of Lincoln, they did not nominate Ben Affleck. Winning an Oscar is a two-stage process: first there is the nomination and then there is the main award, and Affleck's snub had two main impacts. First, removing the competition made it much more likely that Spielberg wins Best Director (and made it impossible for Affleck to win!). Second, it is extremely rare for a movie to win Best Picture when its director does not receive a nomination. So, the director nominations serve as a key leading indicator for Best Picture, bolstering Lincoln's early position as the favorite.

Yet, the awards shows have been going Argo's way. First, Argo (Best Picture, Drama) and Ben Affleck (Best Director) won in head-to-head competitions at the Golden Globes. Second, Argo (Outstanding Ensemble in a Motion Picture) won at the SAG Awards. Lincoln was leading heavily for Best Picture coming out of the nominations, with nearly 90 percent likelihood of winning the Oscar; Argo was second, but with only 5 percent. After the Golden Globes, my predictions did not waver much; the main difference was Argo clearing out the rest of the field. But Argo's victory at the SAG Awards (along with the PGA award for producing) sent my numbers flying, with Argo up to 55 percent and Lincoln down to 45 percent. Essentially, this is now a toss-up.

The earlier awards shows do not have a category for adapted screenplay, but I think it is an interesting proxy fight between Argo and Lincoln. My prediction currently still gives Tony Kushner's adaptation for Lincoln, based on Doris Kearns Goodwin's Team of Rivals, the edge at nearly 65 percent over Chris Terrio's adaptation for Argo at 35 percent.

This column syndicates with the HuffingtonPost.