Monday, Apr. 28, 1980

How Not to Read the Polls

By Daniel Yankelovich

The writer is president of Yankelovich, Skelly & White Inc., the New York-based public opinion research firm that since 1972 has conducted polls on various subjects for this magazine.

In the 1976 presidential election, the catch phrase was "voter apathy." Journalists and politicians cited the public opinion polls to "prove" a mass defection from the electoral process. But while millions of voters stayed away from the voting booths, apathy was not the phenomenon at all. Voters were angry, frustrated and irritated at what they felt was the futility of their participation in elections. Something was in the air, but it was not voter indifference.

This year the catch phrase is "volatile." First the public favors an unannounced Kennedy candidacy 2 to 1 over President Carter's; months later that has changed to Carter 2 to 1 over Kennedy. Then, in New York and Connecticut, Kennedy beats Carter. The President's management of the American hostage situation in Iran was at first a major plus in his approval ratings; then it became a minus. Once again the public opinion polls are cited, and "volatility" is said to be the explanation. But once again the catch phrase is misleading.

Certainly there have been wide swings in public opinion in this 1980 presidential campaign. And just as certainly there will be more swings. But volatile, according to the dictionary, describes one who is "lighthearted," "fickle" or "capricious" and whose views are "transitory or fleeting." Applied to the current mood of the American public, these terms are laughably inaccurate. One can describe the American electorate in 1980 as troubled or conflict-ridden or agitated about what many regard as the unsatisfactory choices that confront them. But this is hardly a fickle or transitory state of mind. And it certainly is not lighthearted.

In 1944 Sociologist Paul Lazarsfeld demonstrated in his studies of voting behavior that conflict breeds delay in a voter's making up his or her mind. What we are witnessing now in the ups and downs of public opinion poll data is irresolution bred by strong conflict. Its presence means that many voters are going to wait until the last minute to decide. Every electoral race in this campaign, from the primaries to the main bout, is likely to be a cliffhanger, with the opinion polls unable to predict the outcome much in advance of the event.

Ordinarily the public is neither as conflict-ridden nor as relentlessly bombarded with daily polls as it is now. Though the contemporary scene makes for confusion and instability, there is one consolation: the confusion presents a unique opportunity to gain an insight indispensable to all who rely upon poll data. It highlights a missing element in the relationship of opinion polls to the public whose views they register.

The insight comes to light when we address the question "In what respects are the findings of public opinion polls really accurate?" Our numbers-conscious era makes it appear that survey percentages are like most other numbers we rely on for accurate representations of reality. When the morning news reports that the temperature is 90° and the humidity stands at 80%, experience tells us what the numbers mean as we dress for the outdoors. Ninety degrees is 90°F (or 32°C), and while additional interpretations offered by chatty forecasters may prove diverting, we certainly have all the information we need when the numbers are straight. But a disconcerting truth is that a number presented in an opinion poll report is not accurate in the same sense that a number in a weather report is. While survey percentages rarely "lie," no single poll number will by itself give the information that the poll reader needs to know; something vital is missing.

Perhaps the inflation-riddled U.S. economy presents us with a more useful model for understanding what information we require from a public opinion poll. What, after all, is a "constant dollar," and what do we need to know about it? Is it not enough to say that in the past five years the average worker's after-tax income has risen 69% to know that we are staring into the face of unmatched prosperity and good fortune? Obviously it is not, because we also know that between the push of an inflated Consumer Price Index and the pull of deflating dollars, the same average worker has actually suffered a 4% net loss in buying power after his earnings are adjusted for inflation.

Without the adjustment for inflation (i.e., the conversion of income into constant dollars), one would erroneously infer from the 69% increase in earnings that workers had significantly improved their lot, while in reality they have fallen behind. A public opinion poll number can mislead the reader in precisely the same fashion. Technically, a survey finding may be correct (just as the 69% increase in after-tax earnings is technically correct), but it is also radically incomplete in terms of the information it conveys.
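The arithmetic behind that adjustment is worth making explicit. As a back-of-the-envelope reconstruction (the inflation figure below is implied by the article's two numbers, assuming both describe the same five-year span, and is not an official Consumer Price Index reading), a 69% rise in nominal earnings alongside a 4% loss in real buying power fixes the cumulative inflation i over the period:

\[
\frac{1 + 0.69}{1 + i} = 1 - 0.04
\qquad\Longrightarrow\qquad
1 + i = \frac{1.69}{0.96} \approx 1.76
\]

In other words, prices ended roughly 76% higher: each dollar of earnings grew to $1.69, but $1.69 buys only about 1.69/1.76, or 96%, of what a single dollar bought at the start. The 69% "raise" and the 4% loss are two descriptions of the same paycheck.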

What is the missing piece of information, the equivalent of "adjusted for inflation," that we need in order to understand opinion polls? First, it is decidedly not the cautionary words we read about sampling error. In an effort to inform readers, most pollsters and the news media report the range of error that can be attributed to sampling, generally plus or minus 3% to 5%. Readers consistently misinterpret the meaning of this "warning label" -- and understandably so. Most people will take it to mean that if 61% of the public is reported to support a program of national health insurance and the sampling error is said to be 3%, then the boundaries of endorsement of national health insurance are no wider than between a 58% majority and a 64% majority.

What is really meant by "sampling error," however, is much narrower. The phrase specifies only the range of probable inaccuracy in the survey caused by projecting the poll figures from a random sample onto the population at large. It says nothing about errors that might be caused by a sloppily worded question or a biased one or a single question that evokes complex feelings. Example: "Are you satisfied with your job?"

Most important of all, warning labels about sampling error say nothing about whether or not the public is conflict-ridden or has given a subject much thought. This is the most serious source of opinion poll misinterpretation. Knowing a respondent's state of mind when he answers a poll question is as important to poll results as adjustments for inflation are to interpreting today's economy, and can only be determined by questions that go beyond sentiments of approval or disapproval into matters of personal knowledge, commitment and unresolved conflict.
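Where does a warning label like "plus or minus 3%" come from in the first place? The standard statistics-textbook formula, offered here as an aside rather than as part of any particular poll report, gives the 95% confidence margin of error for a proportion p estimated from n randomly sampled respondents as

\[
\text{margin of error} \approx 1.96\,\sqrt{\frac{p(1-p)}{n}}
\]

With n = 1,000 respondents and p = 0.5 (the worst case), the formula yields about plus or minus 3.1 percentage points, which is why figures near 3% appear so often. Notice what the formula counts: only the luck of the random draw. Nothing in it measures a loaded question or an unsettled mind, and no enlargement of the sample will cure either.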

If, for example, a respondent feels that the President is "too soft on Iran," it would be helpful to know what information he has about the actions the President has already taken, how much risk he feels the U.S. should take with the lives of the hostages, how he appraises the potential Soviet threat to Iran, and how he feels about preserving America's honor (including what that means to him).

Consider the Panama Canal Treaties and the debate they stimulated in late 1977. Sensitive politicians all, the Senators turned to public opinion polls for a reading of national sentiment and were confronted with what seemed to be clear public opposition to the treaties. Closer inspection, however, revealed that the almost 2-to-1 margin of disapproval of the treaties came from those who said that they were not familiar with the terms of the accords.

In light of these findings, the President authorized Ambassadors Sol Linowitz and Ellsworth Bunker to head a cast of diplomats and military advisers on an extensive public education campaign. Thanks to surveys that followed these efforts, the Senate realized that many of the original public attitudes toward the treaties had not been settled judgments but rather off-the-cuff reactions to questions about a perceived American foreign policy "loss." Even today, three years later, an ardent core of opposition to the treaties remains. Still, when the Senate finally acted, it had been shown the true dimensions of public opposition, not an exaggerated and misleading picture.

On complex issues like the canal treaties, pollsters and policymakers need to know whether the views of the public are firm and settled or soft and tentative. On such questions people sometimes say what they mean, but often they do not. They almost never lie but frequently they have not made up their minds. Those who say "I don't know" are only a small part of the respondents who are likely to change their minds once they devote more thought to a question or acquire new information or resolve their conflicting emotions.

On almost every key public issue--electing a President, responding to the Soviets in Afghanistan, reacting to anti-inflation initiatives--there is a process, part deliberative and part intuitive, by which people eventually settle their views. Any single opinion poll will catch the American public, as in a snapshot, at a fixed point in the process. But which point is it? Percentages alone do not reveal whether people are at the vague beginning, the turbulent middle or the conclusive end of the process of making up their minds.

Even in the most obvious uses of polling, in pre-election surveys, a 2-to-1 lead in January and a 2-to-1 lead in late October are clearly two different numbers, because in the interim the public has passed through most stages of the making-up-one's-mind process. In examining poll reports, the public needs to ask: "Is this poll focusing on questions to which people have given serious thought, or is it culling top-of-the-head responses -- and how do we know which is which?"

In a political campaign it does not much matter if opinion polls fail to report every change in the public's state of mind. We can wait, as we should, for the final poll, conducted at the ballot box. That is the only one that counts, and the truth of public sentiment will be revealed when the tabulations are finished. But on crucial questions of national policy, particularly in foreign affairs, there is no target Election Day by which voters must make up their minds on military policy, national security and crucial alliances. When the public reads the results of the latest surveys, it may think it is getting the full picture of American opinion. But it is only getting half the story, and the missing half may be the critical one.

How do pollsters find out, on specific issues, whether and to what extent people have worked through the process of making up their minds? This is neither simple nor cheap, but the techniques lie within the current state of the polling art, even if the methods still need refinement. Increasingly, government leaders in the U.S. and in foreign capitals study polls. As polls grow in significance, so does the need for information about the public's state of mind to supplement the simple percentages.

The task that lies ahead, in an era ever more sensitive to public opinion, is to put policymakers in touch not only with the true feelings of the electorate, but with the levels of intensity and conviction that those feelings represent. On complex questions of national and international importance, citizens who approve or disapprove of a particular position do so not only "strongly, partly or not at all"; their response is also tempered by emotion, expectation, circumstance, economics or moral fervor. Those who take the public pulse and those who report the findings are more duty-bound than ever to provide the interpretive framework that gives the numbers their meaning.

--Daniel Yankelovich