2012 Election

The Dark Art of Political Polling


Early voters wait in line to vote in the presidential election on the first day of early voting at a polling station set up at the City of Miami City Hall on Oct. 27, 2012 in Miami, FL

Photograph by Joe Raedle/Getty Images

How could a Gallup Organization survey published a week before the election show Mitt Romney up by 5 percentage points, while a CBS/New York Times poll from the same period put him 1 point behind President Obama? Even professional poll watchers don’t know.

New technologies such as e-mail blasts have made it possible to field polls cheaply—and to publicize them on the Internet, bypassing traditional gatekeepers in the mainstream media. With hundreds of polls to choose from, candidates draw attention to those that support their case. The public is growing suspicious of a snow job, if you believe an Oct. 2 poll about polls by Public Policy Polling. It found that 42 percent of respondents say pollsters are manipulating results to show Obama ahead. “The. Polls. Have. Stopped. Making. Any. Sense,” Nate Silver, who runs the New York Times’ FiveThirtyEight poll blog, tweeted in September.

A well-done poll uses statistical science to produce something a little like magic. The key is giving every person in the target population an equal chance of being contacted. The classic method is known as “random digit dialing,” which sprays out calls to both listed and unlisted phone numbers. By interviewing a sample of just 1,000 or so people, a pollster can come very close to divining the opinions of hundreds of millions.
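The "magic" is the arithmetic of random sampling: for a simple random sample, the margin of error depends on the sample size, not the size of the population. A minimal sketch of the standard formula (this is textbook statistics, not any particular pollster's proprietary method):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.
    p=0.5 is the worst case, and the usual convention for reporting."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of ~1,000 yields roughly a 3-point margin of error,
# whether the population is 10 million or 300 million.
print(round(margin_of_error(1000) * 100, 1))  # ≈ 3.1 percentage points
```

Note that quadrupling the sample to 4,000 only halves the margin, which is why most national polls stop near 1,000 respondents.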

But there are far more ways to get it wrong than right. Take “tracking” polls. Unlike one-shot polls, they last for months. Each day new people are interviewed, and results are published for a rolling average of the previous three, five, or seven days. These polls get top billing from the media just before an election, but they have a little-noted flaw. Because each wave of polling takes only a day, there’s no opportunity to call back people who don’t answer the first time. That means raw results are skewed toward the kind of people who sit by the phone. “A one-day poll is going to get you a lot of old women,” says Cliff Zukin, a Rutgers University polling expert.
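The rolling-average mechanics of a tracking poll can be sketched in a few lines (the daily numbers below are hypothetical, purely for illustration):

```python
def rolling_average(daily_results, window=3):
    """Trailing mean over the last `window` days of daily poll results,
    as a tracking poll would publish each evening."""
    return [
        sum(daily_results[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(daily_results))
    ]

daily = [47.0, 49.0, 48.0, 51.0, 50.0]  # hypothetical daily support, percent
print(rolling_average(daily))  # [48.0, 49.33..., 49.66...]
```

Each published figure mixes three one-day waves, so any skew in who answers on a given day is carried forward for the length of the window.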

Pollsters correct for this by giving less weight to replies from quick-to-answer types and more to replies from people in hard-to-reach groups. But the reweighting itself is imprecise, so a tracking poll’s actual margin of error can be a percentage point or two higher than what’s reported as its “sampling error,” typically 3 to 4 percentage points. Paul Lavrakas, president of the American Association for Public Opinion Research, calls this “almost like a dirty little secret.”
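The reweighting described above can be illustrated with a toy post-stratification example. The group labels and shares here are invented for illustration; real pollsters weight on many variables at once, often with more elaborate schemes such as raking:

```python
from collections import Counter

def weighted_support(responses, population_shares, candidate):
    """responses: list of (group, choice) pairs. Each respondent gets
    weight = population share of their group / sample share of their group,
    so over-represented groups count less and under-represented ones more."""
    n = len(responses)
    sample_shares = {g: c / n for g, c in Counter(g for g, _ in responses).items()}
    weights = [population_shares[g] / sample_shares[g] for g, _ in responses]
    support = sum(w for w, (_, c) in zip(weights, responses) if c == candidate)
    return support / sum(weights)

# Hypothetical one-day sample: 7 of 10 respondents are "older women,"
# though that group is only 20% of the electorate.
sample = [("older_women", "A")] * 5 + [("older_women", "B")] * 2 \
       + [("others", "A")] * 1 + [("others", "B")] * 2
shares = {"older_women": 0.2, "others": 0.8}
print(round(weighted_support(sample, shares, "A"), 2))  # 0.41, vs. 0.60 raw
```

The raw sample shows candidate A at 60 percent, but the weighted estimate is about 41 percent, because the three hard-to-reach respondents each count for far more than one easy-to-reach one. That amplification is exactly why the reweighting adds its own imprecision.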

Another secret: Americans routinely lie to pollsters when asked if they’ll vote—saying yes when the real answer is heck no. The trick is to ask a set of questions that ferrets out respondents’ true intentions. In an Oct. 18 blog post, FiveThirtyEight’s Silver contended that Gallup may have gotten its formula wrong, overestimating the likelihood of Romney’s supporters going to vote. Frank Newport, Gallup’s editor-in-chief, says that can’t be ruled out. “If our model is not as accurate as we’d like, we’ll definitely reevaluate it,” he says. RealClearPolitics’ average of eight national polls (including Gallup’s) showed Romney with just a 0.8 percentage-point lead through Oct. 28, compared with Gallup’s 5 percentage points.

Any poll that doesn’t include cell phones is highly suspect because cell-only households are more likely to vote Democratic than landline-only households, says the American Association for Public Opinion Research. Robo-dialing polls, the cheapest kind of phone poll, are always landline-only, because federal law prohibits robo-dialing of mobile numbers. Correcting for that political bias by reweighting results is imperfect. Polls conducted online raise a red flag because of disparities in Web usage. YouGov, for one, works hard to get a clean random sample. But some Internet-based firms use e-mail lists that are unrepresentative, counting on weighting to fix errors. Pollsters generally disclose their methodologies, but journalists often fail to report them, treating all polls as if they’re equally valid.

The proliferation of polls has added to the public’s disillusionment, which only makes the pollsters’ job harder. People don’t answer calls from unfamiliar numbers or hang up when they hear a pollster on the line. So pollsters have to try about 10,000 households just to complete 1,000 interviews, more than three times as many as 15 years ago. “It’s harder every day,” says Ann Selzer, president of Selzer & Co., which polls for Bloomberg News.

A simple rule is to beware of polls whose results are far outside the mainstream. Journalists (though not all of us) play them up because they’re surprising, but outliers are the least likely to be accurate. Polls that crunch averages are a decent guide, because errors tend to cancel each other out. Ultimately, though, no poll is anything more than a snapshot of a single moment. The only real way to figure out who’ll win: wait till Election Day.
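Why averaging helps: if the polls' errors were statistically independent, averaging k of them would shrink the sampling error by a factor of the square root of k. A rough sketch of that arithmetic (in practice polls share methods and field dates, so their errors are only partly independent and the real gain is smaller):

```python
import math

def moe(n, z=1.96):
    """Single-poll 95% margin of error at p = 0.5."""
    return z * math.sqrt(0.25 / n)

single = moe(1000)                    # one poll of 1,000: ~3.1 points
averaged = single / math.sqrt(8)      # e.g., eight such polls averaged
print(round(single * 100, 1), round(averaged * 100, 1))  # 3.1 1.1
```

That square-root shrinkage is why a multi-poll average like RealClearPolitics' is usually a steadier guide than any single survey, including an outlier.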

The bottom line: To get a reliable sample of 1,000 people, pollsters need to call 10,000, more than three times as many as 15 years ago.

Coy is Bloomberg Businessweek's economics editor. His Twitter handle is @petercoy.
