PredictWise Blog

Eurovision 2013


May 18 at 6:36 ET: Our predictions, once again, came out very strongly. Looking at our initial predictions on Monday, before the first of the semi-finals, we had the most likely winners as: Denmark (41%), Ukraine (11%), Norway (11%), Russia (6%), and Azerbaijan (5%). The final order was: Denmark (predicted winner), Azerbaijan (5th most likely), Ukraine (2nd most likely), Norway (3rd most likely), and Russia (4th most likely). In other words, our 1, 2, 3, 4, and 5 finished 1, 3, 4, 5, and 2 ... This is definitely another triumph for our domain-independent prediction technology. More to come tomorrow ...

May 18 at 6:05 ET: Halfway through the voting and only two viable countries are left: Denmark (89%) and Ukraine (7%). Of course, these were our initial first and second predictions for the win.

5:58 PM ET via Twitter: Calling it for Denmark with 17 of 39 countries voting! #ev2013

5:51 PM ET via Twitter: It is now officially a 2-team race between Azerbaijan and Denmark at #EV2013!

May 18 at 5:15 ET: The songs have been sung and the voting has begun! I am amazed at how little impact the actual singing had on the predictions. With no major miscues by the favorites and tons of pre-event publicity over their songs, you would expect the impact of the performances on the predictions to be dulled a little, but not this much. That being said, with the singing over, we have the following current predictions to win: Denmark at 56%, Norway at 13%, Ukraine at 10%, Russia at 5%, and Greece at 3%.


4:59 PM ET via Twitter: Almost done with the song portion of #ev2013: still have Denmark as the favorite and Spain as my most likely last: ow.ly/larbE

4:29 PM ET via Twitter: Top worldwide trend on Twitter is #EV2013. Not sure what it means that the 6th top trend is #Eurovision2013. Real-time forecast: ow.ly/lapuc
 
May 18 at 12:00 ET: On Monday, May 13, I cemented my pre-Eurovision 2013 predictions with Denmark at 41% likely to win. After a strong semi-final performance, and the field shrinking from 39 to 26 competitors, my prediction on the eve of the final now has Denmark at 55% likely to win. Norway is the second most likely winner at 14%, followed by Russia and Ukraine at 5%. You can follow predictions live as they update during the competition tonight here:

Judging the Oscar Predictions (Syndicated on the Huffington Post)


I judge my predictions on four major attributes: relevancy, timeliness, accuracy, and cost-effectiveness. I am very proud of my 2013 Oscar predictions, because they excelled in all four attributes: they predicted all 24 categories (and all combinations of categories), moved in real-time, were very accurate, and were built on a scalable and flexible prediction model.

Relevancy is the only one of my major attributes that relies on the subjective input of stakeholders, rather than an objective measure; I relied on people with more domain-specific information than I have about what I should predict and, after watching my first Oscar show from start to finish, it certainly felt that any relevant set of predictions should have all 24 categories. Of the six major categories that fall into the standard set of predictions, only two, best supporting actor and actress, are scattered into the first 20 awards. The show does not get to the biggest four awards (best picture, director, actor, and actress) until well past 11:30 PM ET. If I were watching the Oscars casually with my family or friends, I would certainly want information on all 24 categories to sustain interest throughout the telecast. Further, predictions in all 24 categories are necessary to predict the total quantity of awards won by any given movie.

The real-time nature of my predictions proved extremely interesting in both quantifying and understanding the major trends of the awards season; further, predictions that are created just before Oscar day are not available to interested people during these earlier events. Both major trends that I illustrated the day before the Oscars played a big role on Oscar night: Argo's rise with the award show victories and Zero Dark Thirty's fall with the increased concern over its depiction of torture. A third trend is evident in both of the major categories in which Django Unchained competed. The winner of best supporting actor, Christoph Waltz, moved from a small 10 percent likelihood of victory at the start of the season to 40 percent on Oscar day, a hair behind Lincoln's Tommy Lee Jones. And, taking advantage of Zero Dark Thirty's fall, Django Unchained came from behind to a commanding lead in the prediction for best original screenplay by Oscar day.

The first judge of accuracy is the error; my error is meaningfully smaller than the best comparisons. A simple way to calculate the error is to take the mean of the squared error for each nominee, where the error is (1 - probability of victory) for a winner and (0 - probability of victory) for a loser. A full set of predictions covers 122 nominees, with 22 categories of 5, 1 category of 9, and 1 category of 3. My final predictions at 4 PM ET on Oscar day had an MSE of 0.067. One comparison is my earliest set of predictions, which had an MSE of 0.108; the error got smaller and smaller as the award shows and other information spread into my predictions. Nate Silver's FiveThirtyEight only predicted the big six categories, and Mr. Silver provided prediction points, rather than probabilities. But, converting his predictions into probabilities by dividing each nominee's points by the sum of points in the category, he had an MSE of 0.075 for those six categories to my meaningfully smaller 0.056. A final comparison is with my only input that had all 24 categories; the Oscar day prediction of Betfair, the prediction market, has virtually the same error as mine, which is why I also consider calibration.
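To make the error calculation concrete, here is a minimal sketch in Python of the mean squared error described above, along with the points-to-probabilities conversion used for the FiveThirtyEight comparison. The probabilities, winner, and function names below are illustrative placeholders, not my actual 2013 inputs.

def mean_squared_error(predictions):
    """predictions: list of (probability_of_victory, won) pairs, one per nominee."""
    errors = [((1.0 if won else 0.0) - prob) ** 2 for prob, won in predictions]
    return sum(errors) / len(errors)

def points_to_probabilities(points_by_nominee):
    """Convert per-nominee points into probabilities by dividing each
    nominee's points by the category total."""
    total = sum(points_by_nominee.values())
    return {nominee: pts / total for nominee, pts in points_by_nominee.items()}

# One hypothetical five-nominee category where a 60 percent favorite wins:
category = [(0.60, True), (0.20, False), (0.10, False), (0.06, False), (0.04, False)]
print(round(mean_squared_error(category), 3))  # 0.043

A full-set calculation simply pools the (probability, outcome) pairs for all 122 nominees before averaging.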

The second judge of accuracy is the calibration; my calibration is very strong. The easiest way to check calibration is to chart the percentage of predictions that occur for bucketed groups of predictions (e.g., for all of my predictions around 20 percent, how many occur?). As you can see from the chart, when I made a prediction that was around 20 percent, around 20 percent of those predictions occurred. Admittedly, my gut was a little concerned with predictions like 99 percent for Life of Pi to win best visual effects or 97 percent for Les Miserables to win best sound mixing, but that is why I trust my data/models, not my gut. Betfair, which has nearly identical errors, is systematically underconfident; while my predictions dance around the magical 45 degree line (i.e., the perfect calibration line), 100 percent of Betfair prices that round to 50, 80, 90, and 100 occur, while prices that round to 70 occur 80 percent of the time.

[Chart: calibration of the 2013 Oscar predictions]

Sources: Betfair, Intrade, Hollywood Stock Exchange
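The bucketing behind the calibration chart can be reproduced in a few lines; this is a minimal sketch assuming the same (probability, won) pairs as in the error example above, with an illustrative bucket width of 10 percentage points.

from collections import defaultdict

def calibration_table(predictions, bucket_width=0.10):
    """Group (probability, won) pairs into probability buckets and report
    the share of predictions in each bucket that actually occurred."""
    buckets = defaultdict(list)
    for prob, won in predictions:
        # Round each probability to the nearest bucket center (0.0, 0.1, 0.2, ...).
        center = round(round(prob / bucket_width) * bucket_width, 2)
        buckets[center].append(1.0 if won else 0.0)
    return {center: sum(outcomes) / len(outcomes)
            for center, outcomes in sorted(buckets.items())}

# Perfect calibration puts every bucket's observed frequency on the 45 degree
# line: predictions near 20 percent should occur about 20 percent of the time.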

The third judge of accuracy has to do with the models themselves and is not borne out in any one set of outcomes: is the prediction model robust for the future, or over-fitted to the past and/or present? First, my models examine the historical data, but are carefully crafted using both in-sample and out-of-sample data to ensure they predict the future, rather than describe the past. Second, I always calibrate and release my models without any data from the current set of events. Models released too close to an event frequently suffer from inadvertent "look-ahead bias," where the forecaster, knowing what the other forecasters and his/her gut are saying, inadvertently massages the model to provide the prediction they want. That is why I release my models to run at the start of any season without ever checking what the current season's data will predict.
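As a rough illustration of the in-sample/out-of-sample discipline described above (the fit_model and score functions are placeholders, not the actual PredictWise model), the key point is that parameters are frozen on past seasons before any current-season data is seen.

def train_and_validate(historical_seasons, fit_model, score):
    # Hold the most recent historical season out of the fitting entirely.
    in_sample, out_of_sample = historical_seasons[:-1], historical_seasons[-1]
    model = fit_model(in_sample)               # parameters chosen on past seasons only
    return model, score(model, out_of_sample)  # judged on data the fit never saw

# The frozen model is then released before the current season's data exists,
# which is what guards against inadvertent look-ahead bias.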

The cost-effective nature of my modeling is the key to predicting all 24 categories. Along with traditional fundamental data, my model relies primarily on easily scalable data, like prediction markets and user-generated experimental data. It is scalable and cost-effective models/data that will eventually allow us to make incredible quantities of predictions in a wide range of domains. Adding accuracy to an existing set of predictions, like best actor or expected national vote share in the presidential election, is fun and could be meaningful, but creating accurate, real-time predictions for ranges of questions that could not exist before is the real challenge and goal of my work.

This column is syndicated on the Huffington Post.

Predicting the Oscars for me is not about the Oscars per se, but the science of predicting. The challenge was to make predictions in all 24 categories, when most predictions cover only 6. The challenge was to make predictions that move in real-time during the period between the nominations and the Oscars, when most predictions are static. The challenge was to make predictions that were accurate, not just in binary correctness, but in calibrated probabilities. The challenge was to make these predictions cost-effective, so that they could not only scale to 24 categories, but be useful in making predictions in varying domains. Prediction market data, including Betfair, Hollywood Stock Exchange, and Intrade, combined with some user-generated data from WiseQ, allowed me to meet all of these challenges.

I was able to produce predictions for all 24 categories, expanding down the list through film editing, sound mixing, etc. I showed how these predictions moved in real-time during the period between the Oscar nominations and the Oscars. For example, Argo zoomed upward in the best picture and adapted screenplay categories as Zero Dark Thirty plunged in best actress and original screenplay. I was very accurate, with 19 of 24 categories correct and the winners in the other 5 categories showing reasonably high probabilities. Prediction market data and experimental prediction games harnessed the wisdom of the crowds to allow me to scale easily to all 24 categories. These same data/models will allow me to easily expand to all sorts of domains in the near future.

[Chart: 2013 Oscar predictions with winners]

2013 Oscar Predictions with Winners!


[Chart: 2013 Oscar predictions with winners]

2013 Oscar Predictions


[Chart: 2013 Oscar predictions]