Political polls have become a part of our political vocabulary; during the current election season, you're more likely to hear a discussion of a candidate's 16-point lead than of his health care proposals or foreign policy stance.
Yet for all their currency, there is still something of a distrust of polls, as if it must take magic for a small number of people to give you an accurate representation of how the whole country will vote.
Herbert Smith is a political science professor at Western Maryland College who has been conducting political polls for Channel 11 (WBAL-TV) since 1983. He also helps direct a number of other surveys on non-political issues for the University of Baltimore's Schaefer Center.
Q: How can so few people accurately represent the views of so many?
A: The basic answer is that it's like seeing how a pot of soup tastes by taking a spoonful. It's like a physician figuring out your blood chemistry from the few cc's of blood they draw from you.
It's called sampling. If, in a political poll, a pollster follows the mechanics of sampling so that each individual who is likely to vote has an equal chance of being selected for the sample of 400, 600 or 1,200 or however many, then that sample, within the 95 percent confidence limits, will reflect the reality of public opinion.
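Those "95 percent confidence limits" have a standard formula behind them. As a minimal sketch (the function name and the worst-case assumption p = 0.5 are mine, not the professor's), the margin of error for a proportion in a simple random sample of size n is roughly 1.96 times the standard error:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a sampled proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# The sample sizes the professor mentions:
for n in (400, 600, 1200):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
# n=400: +/- 4.9 points
# n=600: +/- 4.0 points
# n=1200: +/- 2.8 points
```

This is why quadrupling the sample from 400 to 1,200 respondents only shaves the margin of error from about 5 points to about 3: precision improves with the square root of the sample size.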
Q: Where do these samples come from? Do you just pick them out of the phone book?
A: The vast majority of pollsters buy their samples. There are firms that generate lists of random telephone numbers screened for residential households. Another firm offers each state's registered voters, randomized and listed by street address with voting histories. You then ask your basic screening question to identify the probable voters: Are you likely to vote in the election? That's why you sometimes see results listed both for all respondents and for likely voters.
From those lists, you set up a model of a normal election so that if a certain percentage of voters is from a certain area, you have that area properly represented. Then you keep running tabs in terms of age, race [and] sex as you're doing the questioning.
That way, you can weight your results if it's necessary. If your running quotas show that you only have 10 percent response from voters 18 to 24 years old and they should represent 20 percent of the sample, then you weight that 10 percent to represent 20 [percent]. You do the same thing with minority voters, that sort of thing.
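The weighting the professor describes can be sketched in a few lines. Each group's weight is simply its share of the target electorate divided by its share of the sample; the group names and shares below are illustrative assumptions, not figures from the interview:

```python
def group_weights(sample_shares, population_shares):
    """Weight per respondent in each group: population share / sample share."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

# e.g. voters 18 to 24 came in at 10 percent of the sample
# but should represent 20 percent of the electorate.
weights = group_weights(
    {"18-24": 0.10, "25+": 0.90},
    {"18-24": 0.20, "25+": 0.80},
)
# Each 18-24 respondent now counts as 2.0; each 25+ respondent as ~0.89.
```

In effect, each under-represented respondent is counted twice, and the over-represented group is scaled down so the weighted sample matches the model of a normal election.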
Q: Is this really an exact science?
A: The science of public opinion polling is very well established. There are standard cookbooks for how to do it. The problem comes in measuring attitudes and non-attitudes, the intensity that beliefs are held, what's a genuine opinion and what's something else.
Q: So what happens when a poll goes bad?
A: It's that 5 percent exception that proves the rule. Sometimes you get a bad sample. You got something wrong in your teaspoon of soup, the one jalapeno pepper in the pot. That's why it's good we have multiple polls.
For instance, I don't think, though everyone still reports it, that Dukakis had a 17-point lead over Bush after the Democratic convention in '88. There were three or four other polls out there that showed his lead at 10 to 12 percent, which was probably accurate. To the political reporters, 17 points made the biggest news so that's what everyone wrote and talked about. But when you see one poll spiking out there, something is probably wrong with it. That can happen.
Q: Doesn't the distrust some people have of polls go back to some famous gaffes, the Literary Digest poll that showed Alf Landon beating Franklin Roosevelt in 1936 and all those polls that gave Dewey a big lead over Truman in '48?
A: There are sound mechanical reasons those polls were wrong. Literary Digest had a sample of 3 million, but they took it from telephone lists and motor vehicle registrations. At that time, that represented two-thirds of the country. The one-third of the country they left out went for Roosevelt something like 80 percent to 20 percent for Landon.
In '48, the electorate was volatile. There was a late-developing surge to Truman, and all the polls except Roper, I believe, quit polling three weeks before the election when Dewey's lead looked stable.
Q: How important is the phrasing of questions?
A: It's not that important on something like a presidential race. The questions are all standardized. In fact, polling is the one place where plagiarism is encouraged. The books tell you how to write the questions -- "If the election were held today . . .", "Is the country headed in the right direction, wrong direction," that sort of stuff -- and encourage you to copy.