DavidMRothschild on March 19, 2015 @ 9:17AM
Below I compare the prediction market-based predictions on PredictWise with FiveThirtyEight’s statistical model and the New York Times Upshot’s innovative pari-mutuel betting game. First and foremost, the three methods are extremely similar: in just 10 of 32 first-round games do any two of the three models differ by more than 10 percentage points.
1) PredictWise and the Upshot both have 8 seed Oregon as a slight favorite, but FiveThirtyEight has them at just 41%. PredictWise and FiveThirtyEight have 11 seed Texas as a slight favorite, but the Upshot has them at 42%.
2) All three methods have 10 seed Ohio State favored over VCU; along with Texas, those are the only two double-digit seeds favored. 11 seed Ole Miss is 40% to win and 10 seed Davidson is 43%; both have strong upset potential.
DavidMRothschild on March 19, 2015 @ 9:04AM
Here are the pre-tournament market-based predictions for the 2015 tournament.
These data are driven by a mix of Betfair (prediction market) and bookie data.
Step 1: Construct prices from the back, lay, and last-transaction odds in the Betfair order book or, when those are not available, from the lowest odds to buy from a major bookie. For Betfair data we take the average of the cheapest cost to buy a marginal share and the highest price to sell a marginal share, unless that differential is too large or does not exist.
Step 2: Correct for historical bias and increased uncertainty in constructed prices near $0 or $1. We raise all of the constructed prices to a pre-set power that depends on the domain.
Step 3: Normalize so that any mutually exclusive set of outcomes sums to 100%.
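The three steps above can be sketched in a few lines of Python. The spread threshold (`max_spread`) and the bias-correction exponent (`alpha`) are illustrative placeholders of my own, not PredictWise's actual parameters:

```python
# Sketch of the three-step price construction: mid-price, debias, normalize.
# max_spread and alpha are assumed values for illustration only.

def constructed_price(back, lay, last, max_spread=0.10):
    """Step 1: average the best back and lay prices when both exist and
    the spread is reasonable; otherwise fall back to the last trade."""
    if back is not None and lay is not None and abs(lay - back) <= max_spread:
        return (back + lay) / 2.0
    return last

def debias(price, alpha=1.3):
    """Step 2: raise the constructed price to a pre-set power to correct
    for historical bias near $0 and $1 (alpha is a placeholder)."""
    return price ** alpha

def normalize(prices):
    """Step 3: rescale a mutually exclusive set of outcomes to sum to 100%."""
    total = sum(prices)
    return [p / total for p in prices]
```

With `alpha > 1`, prices near zero shrink relative to prices near one, which is what a longshot-bias correction should do; the normalization step then restores a 100% total across the outcome set.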
DavidMRothschild on March 17, 2015 @ 3:25PM
Here are the market-based odds for the pre-tourney games!
Oscar Night 2015 - Why I am stoked PredictWise got 20 of 24 (not 24 of 24) Oscar predictions “right”
DavidMRothschild on February 25, 2015 @ 2:14PM
Our Oscar predictions have been 19 for 24, 21 for 24, and 20 for 24 over the last three years, in the binary outcome space (i.e., the most likely candidate won the Oscar). Of the 12 “misses,” 11 went to the second most likely candidate and one to the third most likely. But our predictions are probabilities for a reason: if we only cared about which candidate was the most likely, and not how likely, we would not bother calibrating the difference!
What we are most proud of is the calibration of the Oscar predictions. In the 72 categories (24 per year) we have forecasted in the last three years, the average forecast for the leading candidate was 82%. Thus, on average, we expected to “win” a category 82% of the time and “lose” a category 18% of the time. That works out to 0.82 × 72 ≈ 59 “wins” and 0.18 × 72 ≈ 13 “losses” in expectation. Our 60 “wins” is pretty well calibrated!
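The expected-wins arithmetic above is easy to verify in a couple of lines (a trivial sanity check, nothing more):

```python
# Check the expected "wins" and "losses" implied by an average
# leader forecast of 82% over 72 categories, versus 60 observed wins.
categories = 72
avg_leader_forecast = 0.82

expected_wins = avg_leader_forecast * categories          # 59.04
expected_losses = (1 - avg_leader_forecast) * categories  # 12.96
observed_wins = 19 + 21 + 20                              # 60
```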
A better way to think about calibration is to look at all 365 predictions we have made in the last three years. Of course, the predictions are not independent of each other (only one candidate can win in any category/year combination), but with roughly five candidates per category (up to 9 in Picture and only 3 in Makeup and Hairstyling) it is reasonable to use all predictions in testing calibration. On the x-axis we round each prediction into one of six buckets, and on the y-axis we plot the percentage of predictions in that bucket that actually occurred.
In an ideally calibrated set of predictions, the points would all lie on the 45-degree line: if the average prediction in a bucket is 20%, those outcomes should occur 20% of the time. All three years are extremely well calibrated.
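The bucketing procedure described above can be sketched as follows. The function name and the toy data are mine; PredictWise's actual tooling may differ:

```python
# Minimal sketch of a calibration check: bucket forecasts, then compare
# the average forecast in each bucket to the observed win frequency.

def calibration_buckets(predictions, outcomes, n_buckets=6):
    """Group (forecast, outcome) pairs into equal-width buckets and
    return (avg forecast, observed frequency, count) per non-empty bucket."""
    buckets = [[] for _ in range(n_buckets)]
    for p, won in zip(predictions, outcomes):
        i = min(int(p * n_buckets), n_buckets - 1)  # clamp p == 1.0
        buckets[i].append((p, won))
    results = []
    for b in buckets:
        if b:
            avg_p = sum(p for p, _ in b) / len(b)
            freq = sum(w for _, w in b) / len(b)
            results.append((avg_p, freq, len(b)))
    return results
```

Plotting each bucket's average forecast against its observed frequency gives exactly the 45-degree-line diagnostic described above: a perfectly calibrated forecaster lands every bucket on the diagonal.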
The final Oscar predictions are prediction market-based; the model is in this paper.