By Craig Charney | Campaigns & Elections | July 9, 2007
I’m a pollster—and I never believe the polls! At least, not until I’ve gone over them myself to make sure I can believe them. Every poll—like any other text—has to be read critically. This mantra is not theoretical. Badly worded questions, poorly structured samples and misread polls have led to countless nasty surprises and election defeats. Here are 10 real problems from real surveys—some infamous, some not—to show what you need to be alert to in reading polling results, whether the results come from your own pollster or a published poll.
“Have you stopped beating your wife?” It’s the classic leading question: yes or no, there’s no answer that gets you off the hook. But lots of polls have asked similar questions—and many still do. In 1937, Gallup asked, “Would you vote for a woman for president if she were qualified in every other way?” (So being female is a disqualification?) Just last year, Social Weather Stations, a leading polling outfit in the Philippines, asked whether President Gloria Macapagal Arroyo’s opponents “should start helping to improve the country and stop too much politics.” I’ll bet few said “no” to that one.
When you ask more than one thing at a time, or offer nonexclusive choices, respondents don’t know what their answers really mean—and neither do the questioners. For example, a recent Fortune 500 company staff survey asked, “Training and career planning are available to me, true or false?” What if training is but career planning isn’t? And this February, an AP-Ipsos poll asked, “Do you think the war in Iraq is a worthy cause or a hopeless cause?” What answer do you give if you think it is both worthy and hopeless?
Double negatives in questions create answers with double meanings. Take the Roper poll for the American Jewish Committee in 1992. It asked, “Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?” The results? A shock: 34 percent seemed to say the Holocaust may not have happened. In response to public outcry, the question was reworked in 1994, asking if people were certain the Holocaust had happened. Asked that way, only 1 percent said it was possible the genocide never took place and 8 percent were unsure.
The first question asked often influences a person’s answer to the next. In a widely cited South African survey for a German foundation, black South Africans were asked, “There is an organization in the African areas called Inkatha. Do you know about it?” The next question was, “Thinking of African leaders in South Africa, whom do you admire most?” Guess what? Inkatha leader Gatsha Buthelezi topped the poll with 44 percent—above Nelson Mandela. Would it be a shock to learn that the lead researcher was a Buthelezi fan?
Every poll has a margin of error (plus/minus so many percent), which you ignore at your peril, particularly with small numbers or differences. Before the 1994 South African election that brought Mandela to power, for example, two successive polls in a newspaper in the Eastern Cape province found that support for a rival’s party more than tripled, rising from 2 percent to 7 percent. What the reporters who headlined the story didn’t know: With the sample of 200 voters in each poll, the error margin was +/-7 percent—making the change meaningless.
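If you want to check this sort of claim yourself, the arithmetic is simple. Here is a minimal sketch in Python of the standard 95 percent margin-of-error formula for a simple random sample; the sample size is the Eastern Cape poll’s, but the formula is generic:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95 percent margin of error for a proportion from a simple
    # random sample; p=0.5 is the worst case (widest margin).
    return z * math.sqrt(p * (1 - p) / n)

# Each Eastern Cape poll sampled 200 voters.
moe = margin_of_error(200)
print(f"Margin of error: +/-{moe:.1%}")  # +/-6.9%, roughly 7 points

# A move from 2 percent to 7 percent sits well inside +/-7 points,
# so the "tripling" was statistical noise, not news.
```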
You always have to look at the data, not just what pollsters say about it. In a talk at Columbia University this year, John Mearsheimer, author of a controversial paper on the influence of the Israel lobby, cited a Washington Post poll as saying that the majority of Americans (53 percent) held Israel responsible for last year’s Lebanon war. The actual poll had just 7 percent blaming Israel, 39 percent Hezbollah and 46 percent both—so far more blamed Hezbollah, alone or in combination, than Israel.
But besides bias, the desire for headlines also can lead to misreading data. The New York Times reported on Jan. 16, 2007 that based on an analysis of the government’s American Community Survey, for the first time “more American women are living without a husband than with one.” Four weeks later, the paper’s ombudsman sheepishly wrote, “The new majority materialized only because the Times chose to use survey data that counted, as spouseless women, teenagers 15 through 17.”
Polls only represent the people who respond to them. So to get an accurate sample, everyone needs an equal chance to be interviewed. This is another way of saying: random sampling is a must. In history’s most famous polling disaster, the 1936 Literary Digest poll predicted Republican Alf Landon would beat Democrat Franklin Roosevelt, 57 percent to 43 percent. The Digest was sure it was right, because its sample numbered 2 million: its own subscribers, auto owners and telephone users. In fact, Roosevelt clobbered Landon 61 percent to 37 percent, the biggest landslide in U.S. history, and Literary Digest folded out of embarrassment. Despite the sample’s huge size, this was a textbook case of a biased sample: all of its component groups were heavily Republican.
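To see why sheer size can’t rescue a biased sampling frame, consider a toy simulation, sketched here in Python. The 61-39 split is the real result; the reachability rates are invented purely for illustration:

```python
import random

random.seed(1936)

# A population that splits 61-39 for Roosevelt.
population = (["Roosevelt"] * 61 + ["Landon"] * 39) * 10_000

def in_frame(vote):
    # Hypothetical reachability: Landon voters are assumed about
    # twice as likely to appear on subscriber/auto/phone lists.
    return random.random() < (0.5 if vote == "Landon" else 0.24)

frame = [v for v in population if in_frame(v)]
sample = random.sample(frame, 100_000)  # a huge sample

landon = sample.count("Landon") / len(sample)
print(f"Predicted Landon share: {landon:.0%}")  # ~57%, despite n = 100,000
```

No amount of extra interviews fixes this; the error is baked into who can be reached at all.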
Some professional focus group respondents will say anything to get their $75 or $100. Often the recruiter tells them what to say. (Sometimes you hear them coaching each other in the waiting room: “Tell them you always vote …”) The results were evident in a focus group where the recruiter reported that everyone was a registered Democratic primary voter. The group seemed a bit odd to the clients, who checked the voter rolls and discovered half the participants weren’t registered and most of the rest didn’t vote in primaries. (The distressed pollster? Me.)
The only ways to avoid this problem are to recruit political focus groups from voter lists or to check facility-database recruits against the voter rolls. It’s more expensive and the facilities hate it—but as Ronald Reagan said of dealing with the Russians, you have to verify, not just trust!
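For the curious, here is a minimal sketch of what that verification step might look like, assuming CSV exports of the recruit list and the voter file. The file names, field names and the name-plus-birthdate matching key are all hypothetical; a real voter-file match needs fuzzier logic for nicknames, moves and typos:

```python
import csv

def key(row):
    # Match on normalized last name plus date of birth (hypothetical key).
    return (row["last_name"].strip().lower(), row["dob"])

with open("voter_file.csv", newline="") as f:
    voters = {key(r): r for r in csv.DictReader(f)}

with open("recruits.csv", newline="") as f:
    for r in csv.DictReader(f):
        match = voters.get(key(r))
        if match is None:
            print(f"NOT REGISTERED: {r['first_name']} {r['last_name']}")
        elif match["voted_last_primary"] != "Y":
            print(f"NO PRIMARY HISTORY: {r['first_name']} {r['last_name']}")
```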
It’s amazing how often people forget that not everybody actually votes, and that certain groups (often, but not always, Democratic-leaning ones) vote less often. For instance, two weeks before the 2004 election, the CNN/USA Today/Gallup generic congressional poll had Democrats ahead of Republicans, 49 percent to 47 percent. When the poll restricted the electorate to likely voters, Republicans were up, 50 percent to 47 percent—and they prevailed on Election Day, too.
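A toy example, with invented respondents and a hypothetical screen, shows how the same interviews can flip once turnout is taken into account:

```python
respondents = [
    # (vote_choice, past_votes_out_of_4, says_certain_to_vote)
    ("D", 0, False), ("D", 1, False), ("D", 4, True), ("D", 2, True),
    ("R", 3, True), ("R", 4, True), ("R", 2, True), ("D", 0, False),
]

def likely_voter(past_votes, certain):
    # Hypothetical screen: voted in 2+ of the last 4 elections
    # and says they are certain to vote this time.
    return past_votes >= 2 and certain

def topline(rows):
    d = sum(1 for choice, *_ in rows if choice == "D")
    return f"D {d} vs. R {len(rows) - d}"

print("All adults:   ", topline(respondents))  # D leads, 5 to 3
screened = [r for r in respondents if likely_voter(r[1], r[2])]
print("Likely voters:", topline(screened))     # R leads, 3 to 2
```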
The first member of a group—based on ethnicity, race or gender—who is a serious contender for office often mobilizes a new wave of supporters. The race then redefines the universe for polling, too. When Fernando Ferrer ran for New York City mayor in 2001, he pushed the Latino share of the Democratic primary electorate to 24 percent. His campaign deliberately worked the turnout surge, registering new Latino voters and calling Latinos who voted in presidential elections but not primaries. This wreaked havoc with pollsters’ models based on previous Latino turnout.
Turnout modeling will be a big issue in the 2008 presidential race. Political commentator Dick Morris has estimated that 5 million new single women voters will come out for Sen. Hillary Clinton, D-N.Y. With the possibility of the first woman, African American or Latino at the top of the ticket, next year’s election could generate first-of-a-kind surprises for pollsters—and the rest of the political class.
Craig Charney is president of Charney Research, a New York polling firm.