Surveys and Confirmation Bias

November 10, 2011
Posted by Jay Livingston

When he taught research methods as a grad student, Michael Schwartz gave his students this assignment: “Create a survey to show . . .” and then told them the conclusion he wanted the survey to support.  The next week, he’d give them the same assignment, but with the desired conclusion reversed.

A year and a half ago, I criticized (here) a much-publicized study by Dan Klein and Zeljka Buturovic.  “This survey,” I said, “wasn’t designed to discover what people think. It was designed to prove a political point,” and that point was that liberal ideology blinds people to economic facts.

I was reminded of Mike’s assignment when I read Klein’s recent article at The Atlantic.  In a bit of academic fairness that’s probably all too rare, Klein went on to create a survey designed to see if conservative ideology has a similar effect.

Klein hoped that his conservative and libertarian allies would not so readily agree with politically friendly economic ideas that were nevertheless unsound. But the conservatives in the new survey turned out to be “equally stupid,” doing no better than the liberals in the earlier survey.

Klein also expected some nasty nyah-nyahing from his liberal critics.  But no: “The reaction to the new paper was quieter than I expected.”   In fact, one of those liberal critics, Matt Yglesias, offered an observation that Klein used as his takeaway from the two surveys: “there’s a lot of confirmation bias out there.”

Yes, but confirmation bias is not just something that affects people who respond to surveys.  As Mike’s assignment makes clear, we also need to be wary of confirmation bias on the part of those who create the surveys. There is the further problem I mentioned in my earlier post:  a one-shot survey is inherently ambiguous. We can’t be sure just what the respondents really hear when they are asked the question. 

My own takeaway, besides admiration for Klein’s honesty, is that when you design your research as a statement (proving some point), you don’t learn nearly as much as when you design it as a genuine question.

2 comments:

  1. Great post, thanks for sharing. The problem with surveys and statistics is that they can be constructed to make a point rather than to report numbers and opinions objectively and accurately.

  2. Avoiding bias in a survey or study is the best way to keep the results accurate and free from the influence of external factors. The best way for research to be regarded as authentic is to have it accredited by a trusted accreditation firm.
