Poll Problems — the Wisdom of Crowds or Pluralistic Ignorance

November 6, 2022
Posted by Jay Livingston

In the last few elections, the pre-election polls have gotten it really wrong. Partly that’s because cell phones and low response rates have made sampling difficult. But it also might be that pollsters are not asking the right question. Maybe the usual question — “Who are you going to vote for?” — is not the best way to predict election results.

The most recent episode of NPR’s Planet Money explored this question and in the end tried a less direct approach that some polls are now using. They went to the Marist College poll and got the directors to insert two questions into their polling on local House of Representatives races. The questions were:

  • Who do you think will win?
  • Think of all the people in your life, your friends, your family, your coworkers. Who are they going to vote for?

On the direct question, “Who will you vote for?”, the split between Republicans and Democrats was roughly even. But these two new questions showed Republicans way ahead. On “Who will win?” the Republicans were up 10 points among registered voters and 14 points among the “definitely will vote” respondents. On the friends-and-family question, the corresponding numbers were Republicans +12 and +16.

Planet Money sees these results as an example of “the wisdom of crowds” — the idea that the best estimate comes not from the experts but from the collective judgment of everyone with an opinion on the matter at hand. The idea goes back to Galton at the Fair – statistician Francis Galton at the Plymouth (UK) farmers’ fair in 1906.

At the fair, Galton noticed people submitting their guesses on the weight of an ox. Galton the statistician kept track of all the guesses — some 800 in all — and computed the group mean. Galton the eugenicist assumed that the guesses of the ignorant would detract from the overall accuracy, while the guesses of farmers and butchers would be closer. The mean of the group was 1197 pounds; the ox’s weight, 1198 pounds. The group did better than the most expert individual.
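The statistical logic of the ox story is easy to see in a small simulation. This is just an illustrative sketch, not Galton’s actual data: it assumes individual guesses scatter around the true weight with Gaussian error (the error size here is an arbitrary assumption), and shows that averaging 800 such guesses lands far closer to the truth than a typical individual does.

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198  # pounds -- the ox's actual weight in Galton's account
N_GUESSES = 800     # roughly the number of entries Galton tallied

# Each guess = true weight + individual error. The 75-lb standard
# deviation is an assumption made for illustration, not historical data.
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(N_GUESSES)]

crowd_mean = statistics.mean(guesses)
crowd_error = abs(crowd_mean - TRUE_WEIGHT)
typical_individual_error = statistics.mean(
    abs(g - TRUE_WEIGHT) for g in guesses
)

print(f"crowd mean: {crowd_mean:.0f} lb (off by {crowd_error:.1f} lb)")
print(f"typical individual: off by {typical_individual_error:.1f} lb")
```

With independent errors, the crowd’s error shrinks roughly with the square root of the number of guessers, which is why the group mean can beat even the experts — provided the errors really are independent and not all leaning the same way, which is exactly what the pluralistic-ignorance argument below calls into question.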

That’s from one of the many blogposts I have done on the topic (here). I’ve looked at predictions in the Super Bowl, the Oscars, and securities trading. In some cases, notably the speculation that led to the financial crisis of 2008, the crowd has not been wise.

Planet Money thinks that the crowd — the people being polled — is wiser than the experts doing the polling and analysis and that Republicans are going to win big.

But there are two other ideas from social science that can also explain the discrepancy between the responses to the questions.
  1. Pluralistic ignorance. This is the cognitive error where people mistakenly think they are in the minority. For example, college students may think that just about everyone else on campus is having great sex and having it frequently when in fact most of their fellow students are in the same unfulfilled boat that they are.

  2. Social desirability. When asked questions in a survey, people avoid answers they think will make them look bad. Ask “How many books have you read this year?” and you’ll probably get an overcount.
If Republicans — the politicians, the right-wing media, Trump, the MAGA hats, et al. — are making the most noise and generally dominating the political discourse, supporting the Democrats may just seem wrong, or at least not the sort of thing you want to bring up. If Democrats then are keeping their preferences to themselves, even they will perceive Republicans as the dominant party, and that’s what they’ll tell the Marist pollster on the phone asking who’s going to win. They may also think that most others in their social world are going Red.

It’s complicated. The people you call, even the few who don’t hang up, might give answers that are inaccurate — about what others think and even about what they themselves think. That may always have been true, but in what Planet Money calls “the Golden Age of polling,” roughly from the seventies to 2014, pollsters could make the necessary adjustments. Since then, poll problems have been sort of like Covid — you manage to solve one, and then a new variant comes along.


Andrew Gelman said...

This is wack cos nobody thought Republicans were +12 or +16. This makes me wonder who those friends and family were. It would be interesting to see a breakdown of these responses given respondent characteristics such as geography, age, and party ID.

Jay Livingston said...

Maybe nobody was predicting a +12 difference. I admit I didn’t pay close attention to the pre-election news. But that was because I too thought the Republicans would do well. It wasn’t just on Fox that there was talk of a Red Wave. The general reaction to the closeness of the election seemed to be surprise.

The Marist sample was “over a thousand.” I’d guess it wasn’t very much over. Given the sample and the questions, I would think that the outcome we should look at is the total number of votes won by each party, not the number of seats won. I don’t know what those numbers are. But in any case, there’s a big error in people’s perceptions of others’ preferences.