2012 campaign

Obama Campaign Says It Was 42 Percent More Accurate Than Nate Silver


Volunteers making phone calls for the Obama campaign at its Chicago field office the day before the 2012 election

Photograph by Daniel Acker/Bloomberg

On Tuesday, Gallup shared the results of its study on why its presidential polls were so badly flawed. Today, I thought I’d share some internal Obama-campaign data porn (drummed up during the reporting for this piece on Google’s Eric Schmidt going into business with ex-Obama staffers) about which forecasters performed best. After the election, the hyper-competitive members of Obama’s analytics team undertook a study to see how their own polls and predictions stacked up against those by Nate Silver, Talking Points Memo, RealClearPolitics, Pollster.com and some others, including Mitt Romney’s internal polls*. The results are pretty interesting.

The Obama folks make two big claims: that they were more accurate—and more accurate earlier in the race—than their competitors. To back this up, they agreed to share, for the first time, the results of their own nightly forecasting model, code-named “Golden,” that was based on 62,000 simulations of the November election and distributed each day at 2 p.m. to David Axelrod, Jim Messina, and the rest of the campaign’s brain trust. The “Golden Report” I’ve included below gives an inside-the-cockpit glimpse of how the race appeared at the state, national, and electoral college levels, as well as the true confidence level inside Obama’s Chicago headquarters, at a critical moment in the race—just after Obama flubbed the first debate.

First, here’s a slide from Obama’s internal report, showing the average error and partisan bias of some of the best-performing forecasters in what the campaign calls “Tier 1 Battleground states.” The Obama team, the study notes, “proved to be more than twice as accurate as RealClearPolitics’ forecast, twice as accurate as Pollster.com’s forecast, 58% more accurate than TalkingPointsMemo’s forecast & 42% more accurate than Nate Silver’s forecast.” What stands out is the systemic bias among public polls toward the Republican candidate—you might even say the polls were skewed—since all these forecasts were drawn from public polls:
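A quick aside on what those two numbers mean: "error" is how far a forecast's projected margin missed the actual result, averaged across states without regard to direction, while "bias" keeps the sign, so a consistent lean toward one party doesn't wash out. Here's a minimal sketch of the arithmetic in Python, using invented margins rather than any forecaster's actual figures:

```python
# Sketch of the two metrics in the slide: "error" is the absolute miss,
# "bias" is the signed miss. All margins below are invented for
# illustration; they are not the campaign's or any forecaster's numbers.

# State -> (forecast Obama margin, actual Obama margin), in points
battlegrounds = {
    "OH": (2.0, 3.0),
    "FL": (-0.5, 0.9),
    "VA": (1.5, 3.9),
    "CO": (1.0, 5.4),
}

misses = [forecast - actual for forecast, actual in battlegrounds.values()]

# Average error: mean absolute miss, direction ignored
avg_error = sum(abs(m) for m in misses) / len(misses)

# Bias: mean signed miss; negative means the forecasts systematically
# understated Obama's margin, i.e., leaned Republican
bias = sum(misses) / len(misses)

print(f"Average error: {avg_error:.2f} points")
print(f"Bias: {bias:+.2f} points")
```

A negative bias across the board is the "skew" the campaign's slide is pointing at: forecast after forecast undershooting Obama's actual margin.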

Here’s an additional slide measuring the campaign’s polling against some poorly performing competitors (Gallup is not included, but you can see how it performed against Obama’s team here):

More intriguing to me is the Golden Report because having an accurate view of the race, particularly early on and at the state level, is so important for a campaign. Among other things, it determines where staff and money are allotted, and it can also influence the candidate’s message and schedule—recall Romney’s sudden, late, and ultimately futile forays into Pennsylvania and Minnesota. What’s more, as Elizabeth Wilner points out in her column this week for the Cook Political Report, public polls tend to converge in the final weeks of a race—probably because, many experts suspect, pollsters whose results are outliers start cooking their numbers to “magically fall in line with the majority of other polls” just before the election. Being demonstrably accurate early in a race ought to earn you extra points.

By that standard, the Oct. 9, 2012, Golden Report looks pretty darn good. Here’s the introduction:

THE GOLDEN REPORT

***CONFIDENTIAL***

10.09.2012

Using current polling and analytics modeling, we simulated the November election 62,000 times. Based on this analysis, we have a 76.0% chance of winning the electoral college on election day with an average of 310.6 electoral votes. The map below displays our chance of winning each state. We are currently leading Mitt Romney with 51.8% 2-way support nationally.
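The campaign hasn’t released Golden’s internals, but the arithmetic in that paragraph is a standard Monte Carlo election simulation: draw a winner in each state, sum the electoral votes, repeat tens of thousands of times, and tally. A minimal sketch, with invented inputs standing in for the model’s own (the 75.4 percent for Ohio matches the figure in the map below):

```python
import random

# Minimal Monte Carlo sketch of the electoral-college math described above.
# The safe-state total and win probabilities are invented placeholders, not
# Golden's outputs, so the printed numbers won't match the report's.

N_SIMS = 62_000      # same simulation count as the report
SAFE_OBAMA_EV = 257  # electoral votes assumed safely Democratic (placeholder)

# Battleground state -> (electoral votes, assumed Obama win probability)
battlegrounds = {
    "OH": (18, 0.754),  # matches the Ohio figure cited below
    "FL": (29, 0.50),   # invented
    "VA": (13, 0.60),   # invented
    "CO": (9, 0.65),    # invented
    "IA": (6, 0.70),    # invented
}

wins = 0
total_ev = 0
for _ in range(N_SIMS):
    ev = SAFE_OBAMA_EV
    for votes, p in battlegrounds.values():
        if random.random() < p:  # independent draw per state (a simplification)
            ev += votes
    total_ev += ev
    if ev >= 270:  # electoral votes needed to win
        wins += 1

print(f"Chance of winning: {wins / N_SIMS:.1%}")
print(f"Average electoral votes: {total_ev / N_SIMS:.1f}")
```

One important simplification: the sketch draws each state independently, while real models, presumably Golden included, correlate state outcomes so that a national polling miss moves them together. But the two headline numbers in the report, the 76.0 percent win probability and the 310.6 average electoral votes, are exactly the summaries this kind of loop produces.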

And here’s a map from the report that shows exactly where the analytics team thought the race stood. It shows the percentage likelihood that Obama would win any given state. (So, for example, the campaign believed it had a 75.4 percent likelihood of winning Ohio.) Obama ended up carrying every state where he was above 50 percent, finishing with 332 electoral votes and 51.07 percent of the popular vote to Romney’s 47.21 percent:

Finally, here’s a chart from the report showing the “Battleground Tracker States”—how they saw the head-to-head race (“Two-Way Support”) and the likelihood of winning each state.

Returning to the Obama campaign’s post-election study of forecasts, here is what it concludes about its own performance: “‘Golden’ successfully predicted POTUS’s vote share to within half a percentage point in 6 of the top 10 battleground states: IA, NH, CO, FL, NV & WI. It predicted POTUS’s vote share to within less than a single percentage point in VA, NC & PA. In short, we predicted POTUS’s vote share within one percent in 9 of the top 10 battleground states. Even in the 10th state, OH—our biggest ‘miss,’ if you can call it that—we predicted POTUS’s vote share to within less than two percentage points.”

It was obvious on Election Night that the Obama campaign had had a much clearer view of the race than Romney—and many of the public polls, too. Now we can see exactly how much clearer it was.

* What the campaign relied on here is, at best, incomplete: Romney polls from select states that were given to The New Republic’s Noam Scheiber and that appeared in this piece (“The Internal Polls That Made Mitt Romney Think He’d Win”).

Green is senior national correspondent for Bloomberg Businessweek in Washington. Follow him on Twitter @JoshuaGreen.
