DavidMRothschild on November 04, 2014 @ 1:06PM
1:30 AM ET: This is what under-performance versus the polls looks like:
10:33 PM ET:
10:13 PM ET: We have the Senate at 95% for the Republicans, but that almost feels generous to the Democrats.
8:07 PM ET: The Democrats continue to over-perform the traditional polling in the exit polling (with the exception of VA, which is tight). And, they are all but certain to capture both of the key must-win states of NH and NC. Georgia is tighter than expected as well. This is going to be a long night, but definitely better for the Dems than expected.
7:34 PM ET: We just upgraded NC to 95% in light of the strong exit polls for the incumbent Hagan. VA is too tight to call right now, which is bad news for the Democrats. But, both GA elections are too close to call as well, which is bad news for the Republicans. Overall, the Democrats are slightly over-performing in the exit polls relative to the traditional polling. But, it is too early to say if the results are going to be biased in either direction this election cycle. Grab a snack, because the roller coaster starts at 8 PM ET!
6:36 PM ET: Polls have closed in part of Kentucky, and the early returns have GOP incumbent McConnell well ahead of his 2008 results in almost every location. He went into tonight at 100% and will win.
3:20 PM ET:
1:30 PM ET:
1:15 PM ET: The table below will update all night tonight and all commentary by me will be in this spot. So, please check back for live updates starting just before 7 PM ET.
Link to House page ...
Note: Prob is "probability of victory" and EV is "expected vote share"
Congress - Likelihood of Party Control - After 2014 Election
2014 Senate - Likelihood of Victory for Democratic Candidate - Election Night
2014 Governor - Likelihood of Victory for Democratic Candidate - Election Night
DavidMRothschild on November 03, 2014 @ 11:29PM
Five weeks ago I launched a new website, called Microsoft Prediction Lab, with a few friends, including Miro Dudik and David Pennock. The website consolidates research into both non-representative polling and prediction games. I have spent years understanding how various kinds of raw data (polling, prediction markets, and social media and other online data) can be transformed into indicators of present interest and sentiment, as well as predictions, for various populations, and then how decision makers allocate resources using the low-latency, quantifiable market intelligence that we produce. Microsoft Prediction Lab allows us to continuously innovate not only on the path from raw data to analytics to consumption, but also on the collection of the data itself. Please take a look at this post on the mission of Microsoft Prediction Lab.
First, we would like to thank the thousands of active users who made this first game such an interesting and meaningful experience!
Microsoft Prediction Lab has a market for all 507 elections in the midterms: 36 senatorial, 36 gubernatorial, and 435 House. In each of these markets users can buy and sell contracts on the possible outcomes of each election. For example, in New Hampshire there are two possible outcomes: Democratic candidate Jeanne Shaheen and Republican candidate Scott Brown. A prediction on Shaheen would return 112 points for every 100 wagered, while a prediction on Brown would return 467 points for the same 100 points! If someone thought that Brown was undervalued, that 467 points was a good return on a 100-point wager, s/he should predict Brown; if s/he thought Shaheen was undervalued, s/he should buy Shaheen. As people predict Brown, the return on Brown goes down, and vice versa.
The return that an investment settles on in a market is closely tied to the probability of the outcome. We show the translation of the price into a probability alongside the return on each prediction.
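As a rough sketch of that translation: a wager of 100 points that pays out P points if correct implies a win probability of about 100/P. This ignores any fees or spread, which is an assumption here since the post does not detail the exact market mechanism. Using the New Hampshire numbers above:

```python
def implied_probability(payout_per_100: float) -> float:
    """A prediction that returns `payout_per_100` points per 100 wagered
    implies a win probability of roughly 100 / payout_per_100."""
    return 100.0 / payout_per_100

shaheen = implied_probability(112)  # ~0.89
brown = implied_probability(467)    # ~0.21

print(f"Shaheen: {shaheen:.0%}, Brown: {brown:.0%}")
# Shaheen: 89%, Brown: 21%
```

Note that the two implied probabilities sum to slightly more than 100%; that gap is the market's effective spread.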
Further, the markets moved quite a bit over the last few weeks; in fact, there was movement in most of the 507 markets. The New Hampshire market drew predictions from 85 people, while other markets drew well into the triple digits. To test how efficient that movement was, and whether the crowd was supplying information, we captured the probabilities in all 507 races at about midnight on Election Eve.
These probabilities represent the probability of victory for the party just before 12:00 AM on Election Day. We look forward to checking back later this week to see how Microsoft Prediction Lab did ... and, for those of you playing the game, you have until 9 PM ET to keep the predictions coming!
Microsoft Prediction Lab - Final Probabilities
DavidMRothschild on November 03, 2014 @ 7:26PM
The Democrats have a decent chance at having a great night in the governor’s races. There are 10 elections that I am following closely:
2 Tough, but possible pickups for the Democrats:
Wisconsin: this is potentially a huge pick-up for the Democrats against a likely 2016 Republican presidential contender. Scott Walker is a conservative Republican leader in a solid blue state.
Maine: Paul LePage is a second unapologetic conservative Republican in a solid blue state. He is also in a tight spot.
3 Likely pickups for the Democrats:
Florida: neither the Democratic former governor Crist nor the current Republican governor Scott is pulling ahead in this nail-biter.
Alaska: independent Bill Walker (running with a Democratic lieutenant governor) is looking very strong against the Republican incumbent.
Kansas: Paul Davis is pulling ahead of conservative Republican Sam Brownback in this incredibly red state.
3 Likely holds for the Democrats:
Colorado, Illinois, and Connecticut: all three of these have looked tight, but are likely to remain blue governors in blue states.
1 Likely hold for the Republicans:
Michigan: this is a generally blue state, but there is a Republican incumbent heading towards a likely hold.
1 Likely loss for the Democrats:
Massachusetts: the only likely bad mark on the election for the Democrats is Martha Coakley, who is poised to lose to the moderate Republican Charlie Baker. But as a moderate Republican in a state with a long tradition of moderate Republican governors*, this is not a major ideological shift. (*Note: I am referring to Governor Romney, a moderate, who should not be confused with Republican nominee for president Romney, who was not a moderate!)
DavidMRothschild on November 03, 2014 @ 11:12AM
The Republicans are about 85% likely to take control of the U.S. Senate in January 2015. This is going to happen because the Democrats are going to win Blue States and the Republicans are going to win Red States. And, the Republicans are likely going to win two crucial Purple States (Iowa and Colorado). This election is neither a wave nor a disaster for either party, but pretty much what should be expected. The most likely outcome is the Democrats controlling 47-48 seats to the Republicans' 52-53.
The Democratic path to victory is very simple: they need to capture both New Hampshire and North Carolina, which are likely, and then three of the five additional states in play. The runoff system makes it very unlikely they will win in Georgia and Louisiana. And, I do not think Kansas's Orman is going to make himself the swing vote in a 49-seat Democratic Senate. Why would he do that if he can switch back in 2017, with seniority, when the Democrats recapture the Senate?
There is a possibility of a systematic polling bias against the Democrats (i.e., the Democrats will over-perform their polls on Election Day). Nate Silver seems to think I am wrong (although his post is very long and does not address me directly, so it is hard to say for sure). He believes that the bias in 2012 is a historical anomaly. His argument is that states tend to break towards their fundamentals (i.e., Democratic states' polls are biased against Democrats and Republican states' polls are biased against Republicans). The problem is that he runs a regression with data from 1998 to 2012 and includes presidential polling along with senatorial polling. First, in 1997 the response rate for random-digit dialing was still 36%, versus under 9% in 2012. While I applaud the use of added observations, including the period before these technical issues appeared for pollsters is a mistake that will smooth out any treatment effects. Second, presidential polling is going to swamp senatorial polling, not only in size, but in stability and accuracy; presidential elections have more stable voter turnout.
Knowing there was an issue in 2012, I have no doubt that pollsters will try to correct for any bias, but I believe it is likely that there will still be some bias against Democrats. First, with both coverage and non-response error shifting quickly, it is not easy to correct for what happened two or four years ago. Baseline data used for corrections is already obsolete (i.e., reliable lists of cell-phone-only versus landline-plus-cell-phone households are not updated fast enough!), and simply correcting for the error of four years ago is also obsolete (i.e., any correlations from four years ago are already wrong). This group of pollsters may under- or over-correct, but it is hard to see how they would be systematically unbiased. Further, they are technically very conservative people, so under-correcting is just more likely. Second, many pollsters do not have the expertise to make these corrections and will not bother. This group of pollsters is likely to favor Republicans.
Note: I had a very interesting early morning phone call today with Sam Wang of Princeton, so I expect he may touch on some of the same topics about bias today as well.
Any potential systematic polling bias against the Democrats will only get them Colorado and possibly Iowa, putting them at 49 seats. Colorado is the most likely because, with its high population movement and large Hispanic population, it is especially susceptible to bias. High population movement (young people with cell phones) leads to coverage error that favors Republicans; the Hispanic population leads to non-response error, as Democratic-leaning Hispanics are less likely than Republican-leaning Hispanics to answer polls conducted in English. Iowa is the next most likely, as a Democratic-leaning purple state that shares some of the same attributes. It is less likely that Alaska or Georgia will be heavily hit by bias, as both states are much less susceptible. And Kansas does not have a Democratic challenger!
DavidMRothschild on November 02, 2014 @ 1:10PM
Written with Sharad Goel and Houshmand Shirani-Mehr
Election forecasts, whether on Huffington Post's Pollster, the New York Times' Upshot, FiveThirtyEight, or PredictWise, typically report a margin of error of 3 percentage points. That means that 95% of the time the election outcome should lie within that interval. We find, however, that the true error is actually much larger, and moreover, that polls historically understate support for Democratic candidates.
To estimate the true margin of error, we looked at all polls for senatorial races in 2012 that were published on the two major poll-aggregation sites (Huffington Post's Pollster and Real Clear Politics). Then, using the standard formula, we computed their theoretical margins of error. Finally, we simply plotted the percentage of polls where the outcome of the election actually fell within the stated confidence range.
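The standard formula in question is the 95% margin of error for a simple random sample, z * sqrt(p(1-p)/n). A minimal sketch of the coverage check, using hypothetical poll numbers rather than our actual data:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Theoretical 95% margin of error for a simple random sample of size n;
    pollsters typically report the worst case, p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

def coverage(polls, outcome: float) -> float:
    """Fraction of (poll_share, sample_size) pairs whose theoretical 95%
    interval contains the actual election outcome."""
    hits = sum(1 for share, n in polls
               if abs(share - outcome) <= margin_of_error(n))
    return hits / len(polls)

# Hypothetical example: three polls of a race the Democrat won with 52%.
polls = [(0.46, 1000), (0.50, 1000), (0.53, 600)]
print(f"MOE for n=1000: {margin_of_error(1000):.1%}")  # about 3.1%
print(f"Coverage: {coverage(polls, 0.52):.0%}")        # 67%, not 95%
```

If the intervals were honest, coverage across many polls would come out near 95%; the analysis above finds it is much lower.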
Note: Data are from Huffington Post’s Pollster and Real Clear Politics. The thick horizontal line, a little below 70%, represents overall percent of outcomes in the reported 95% confidence ranges.
For polls conducted right before Election Day, the actual election outcome falls within a poll’s stated 95% confidence interval about 75% of the time. That means that whereas the polls’ margin of error says they should capture 95% of outcomes, they in fact capture only 75%. In other words, the reported margins of error are far too optimistic.
Why are the reported confidence intervals too narrow? First, polls only measure attitudes at the time they were conducted, not on Election Day, and the standard error estimates neglect to account for this. (To be fair, the pollsters typically add the disclaimer that results reflect the likely outcome of a hypothetical election held on the day of the poll.) But, close to Election Day, there is probably little real change in support, and the reported confidence intervals are still too small. This discrepancy is attributable to polling companies reporting only one of four major sources of error, as we describe below.
Sampling Error: This is the one source of error that pollsters do report, and it captures the error associated with only measuring opinions in a random sample of the population, as opposed to among all voters.
Coverage Error: Pollsters aim to contact each likely voter with equal probability, and deviations from this result in coverage error. This was relatively easy in the world of ubiquitous landline phones (remember those?), but with the rise of cell phones and the internet it is not so easy to determine how to mix polling methods so that any given likely voter is contacted. This problem is getting worse each year as landline penetration decreases. Coverage error is exacerbated by shifting modes of voting, such as voting by mail or early voting, which complicate the traditional screens used to determine who is likely to vote.
Non-Response Error: After identifying a random set of likely voters, pollsters still need them to actually answer the polling questions. If those who are willing to be interviewed systematically differ from those who are not, this introduces yet another source of error: non-response error. This problem is also getting worse each year, as people are increasingly reluctant to answer cell-phone calls from unknown numbers or to take ten minutes out of a busy day to answer a poll.
Survey Error: The exact wording of the questions, the order of the questions, the tone of the interviewer, and numerous other survey design factors all affect the result, leading to still another error source.
As Nate Cohn outlined in the New York Times on Thursday, the latter three error sources are more likely to undercount Democrats than Republicans. For example, Democrats are more likely than Republicans to have a cell phone with an area code that does not match where they currently live (like all three of the authors of this article), which in turn results in coverage error, since such individuals cannot be included in state-level polls. Cohn notes that among cell-phone-only adults, people whose area code does not match where they live lean Democratic by 14 points, whereas those whose area code matches lean Democratic by 8 points. For an example of non-response and survey error, Cohn notes that Hispanics who are uncomfortable taking a poll in English are more likely to vote Democratic than demographically similar Hispanics.
Thus, we expect the actual polling errors to be larger than the stated errors, and moreover, we expect polling results to favor the Republicans. This pattern is strikingly apparent when we plot the observed differences between poll predictions and actual election outcomes for the 2012 Senate races. Positive numbers indicate the poll skewed in favor of the Republicans. Alongside the observed differences, we plot the theoretical distribution of poll results if sampling error were the only factor.
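One way to construct that sampling-error-only benchmark is simulation: draw repeated polls from a race with known true support and record the poll-minus-outcome errors. This sketch uses hypothetical numbers (true support of 50%, samples of 1,000), not our actual data:

```python
import random
import statistics

random.seed(0)

def simulate_sampling_errors(true_share: float, n: int, trials: int = 5000):
    """Poll-minus-outcome errors when sampling error is the only source:
    each simulated poll is n independent draws at the true support level."""
    return [sum(random.random() < true_share for _ in range(n)) / n - true_share
            for _ in range(trials)]

errors = simulate_sampling_errors(true_share=0.50, n=1000)
print(f"mean error: {statistics.mean(errors):+.4f}")      # near zero, no skew
print(f"std of errors: {statistics.stdev(errors):.3f}")   # about 0.016
```

If the observed 2012 errors are wider than this distribution and centered to the Republican side of zero, that is the signature of the extra error sources described above.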
Note: Data are from Huffington Post’s Pollster and Real Clear Politics.
The observed distribution clearly skews toward the Republican candidates. Further, the observed distribution is wider than the theoretical one, in large part because the polls are conducted over several weeks prior to the election, while the theoretical distribution does not take into account how much candidate support varies over the course of the campaign.
How much do these overly optimistic forecasts matter? First, the theoretical 3-percentage-point margin of error is already substantial, and puts nearly every competitive race within that range. Second, when you add in the unaccounted-for errors, election outcomes in contested races are simply far less certain; and coverage and non-response errors will likely only get worse each cycle. Third, while aggregating a bunch of polls for each election reduces the variance, it does not eliminate the bias, so these overconfident predictions pose a problem for aggregate forecasts as well. In short, those fancy models that show probability of victory are only as good as their ingredients, and if the polls are wrong, the poll aggregations will be wrong as well.
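The third point can be illustrated with a quick simulation (hypothetical numbers, not our data): if every poll shares the same 2-point bias, averaging 20 of them shrinks the noise, but the average still misses the true outcome by those 2 points.

```python
import random
import statistics

random.seed(1)

true_share = 0.52   # actual Democratic vote share (hypothetical)
bias = -0.02        # every poll understates the Democrat by 2 points

def one_poll(n: int = 800) -> float:
    """One simulated poll: n draws at the true share shifted by the shared bias."""
    biased_share = true_share + bias
    return sum(random.random() < biased_share for _ in range(n)) / n

single = one_poll()
average = statistics.mean(one_poll() for _ in range(20))
print(f"single poll: {single:.3f}, average of 20: {average:.3f}")
# The average is far less noisy, but both center on 0.50, not 0.52.
```

Averaging attacks sampling error, which is independent across polls; it cannot touch an error that every poll shares.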
Sharad Goel is an Assistant Professor at Stanford University
Houshmand Shirani-Mehr is a graduate student at Stanford University