
Psychology (!!!) or Sociology (zzz)

February 8, 2012
Posted by Jay Livingston

News media have to come up with provocative headlines and ledes, even when they’re reporting on academic papers.  And even when the reasonable reaction would be “Well, duh,” rather than a gasp in 72-point caps.  But if that’s the route you want to go, it usually helps to think psychologically rather than sociologically.

Here’s a headline from Forbes
Facebook More Addictive Than Cigarettes, Study Says
And the Annenberg School Website started their story with this.
Cigarettes and alcohol may not be the most addicting drugs on the market, according to a recent study.
A team from the University of Chicago's business school has suggested everyone's suspicion: social networking is addictive. So addictive that constantly using sites like Facebook and Twitter may be a harder vice to kick than smoking and drinking.  [emphasis added]
The study in question is “Getting Beeped With the Hand In The Cookie Jar: Sampling Desire, Conflict, and Self-Control in Everyday Life” by Wilhelm Hofmann, Kathleen D. Vohs, and Roy F. Baumeister, presented at a recent Society for Personality and Social Psychology conference.  They had subjects (N=205) wear beepers and report on their desires. 
I found out about it in a Society Pages research round-up (here).
A study of 205 adults found that their desires for sleep and sex were the strongest, but the desire for media and work were the hardest to resist. Surprisingly, participants expressed relatively weak levels of desire for tobacco and alcohol. This implies that it is more difficult to resist checking Facebook or e-mail than smoking a cigarette, taking a nap, or satiating sexual desires.
Of course it’s more difficult.   But the difficulty has almost nothing to do with the power of the internal desire and everything to do with the external situation, as The Society Pages (a sociology front organization) should well know.  In a classroom, a restaurant, a church, on the street, in an elevator – just about anywhere – you can quietly glance down at your smartphone and check your e-mail or Facebook page.  But to indulge in smoking, sleeping, and “satiating sexual desires,” you have to be willing to violate some serious norms and even laws.

It’s not about which desires are difficult to resist.  It’s about which desires are easy to indulge.  The study tells us not about the strength of psychological desires but the strength of social norms.  You can whip out your Blackberry, and nobody blinks.  But people might react more strongly if you whipped out, you know, your Marlboros. 

The more accurate headline might be
Checking Twitter at Starbucks OK, Having Sex There, Not So Much, Study Finds
But that headline is not going to get nearly as much attention.

Doing the Math

February 7, 2012
Posted by Jay Livingston

My students sometimes have trouble with math, even what I think is simple math.  Percentage differences, for example.  I blame it on the local schools.  Once I explain it, I think most of them catch on. 

Stephen Moore is not from New Jersey.  His high school diploma is from the highly regarded New Trier, he has a master’s degree in economics from George Mason, and he writes frequently about economics.  A couple of days ago he wrote in the Wall Street Journal (here) about how much better it is to work for the government than for private employers.*
Federal workers on balance still receive much better benefits and pay packages than comparable private sector workers, the Congressional Budget Office reports. The report says that on average the compensation paid to federal workers is nearly 50% higher than in the private sector, though even that figure understates the premium paid to federal bureaucrats.

CBO found that federal salaries were slightly higher (2%) on average, while benefits -- including health insurance, retirement and paid vacation -- are much more generous (48% higher) than what same-skilled private sector workers get.
It’s not clear how Moore arrived at that 50% number.  Maybe he added the 2% and the 48%. 

Let’s assume that the ratio of salary to benefits is 3 to 1.  A worker in the private sector who makes $100,000 in salary would get $33,000 worth of benefits. The government worker would get 2% more in salary and 48% more in benefits.


              Private      Gov't
Salary        100,000    102,000
Benefits       33,000     49,500
Total         133,000    151,500

If total compensation for private-sector workers is $133,000, and if government workers were getting 50% more than that, their total compensation would be about $200,000. But the percentage difference between the $151.5K and the $133K is nowhere near 50%.  The government worker’s pay package is about 14% higher.
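For anyone who wants to check the arithmetic, here is a sketch in Python using the figures above (the $33,000 benefits number is the post’s assumption, and the 2% and 48% premiums are the CBO’s):

```python
# Sanity-checking Stephen Moore's "nearly 50%" figure, using the
# $33,000 benefits assumption from the table and the CBO's premiums.
private_salary = 100_000
private_benefits = 33_000

govt_salary = private_salary * 1.02      # 2% salary premium
govt_benefits = private_benefits * 1.48  # 48% benefits premium

private_total = private_salary + private_benefits
govt_total = govt_salary + govt_benefits

premium = (govt_total - private_total) / private_total
print(f"Combined premium: {premium:.1%}")  # about 13-14%, nowhere near 50%
```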

I think I could explain this so my students would understand it.  But then again, they don’t write columns for the Wall Street Journal.

----------------------------
* The WSJ gives the article the title “Still Club Fed.” The more accurate title would be “Government Jobs Are Good Jobs.” Of course, the latter takes the perspective of people looking for work, a viewpoint that doesn’t get much consideration at the WSJ.

Applied Probability

 February 6, 2012
Posted by Jay Livingston

Long-odds prop bets are sucker bets.  The odds that bookmakers offer are nowhere near the true probability.  But expected values matter only if you’re playing a large number of times, which is what the house is doing.  The bettor is betting just once, and 50-to-1 odds sound like a lot.

Take yesterday’s game. The odds that the first points of the game would be the Giants scoring a safety were 50-1.  That’s what the bookies offered.

But what is the true probability?  In the previous NFL season, there were 2077 scores, not counting point-after-touchdown.  Here is the breakdown (I found the data here).

  • Touchdowns: 1270
  • Field Goals: 794
  • Safeties: 13
The odds against the first score being a safety by either team are 2064 to 13, or about 160 to 1.  The odds against the first score being a safety by a specified side are double that.  Even if that specified side is the Giants and their defense is twice as good as the Patriots’ defense, that still leaves odds of at least 200 to 1.  The Las Vegas books were offering only 50-1, one-fourth of the correct odds.  So the expected return on a $1000 bet is about $250 – a $750 loss.   What a ripoff.
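The expected-value arithmetic looks like this (a sketch assuming, as above, true odds of about 200-1 and the books’ 50-1 payout):

```python
# Expected return on a $1,000 bet paid at 50-1 when the true odds
# are assumed to be about 200-1, i.e. a win chance of 1/201.
stake = 1_000
payout_odds = 50        # what the books were offering
p_win = 1 / 201         # implied by 200-1 true odds

# A win returns the stake plus 50 times the stake; a loss returns nothing.
expected_return = p_win * stake * (payout_odds + 1)
expected_loss = stake - expected_return
print(f"Expected return: ${expected_return:,.0f}")  # about $254
print(f"Expected loss:   ${expected_loss:,.0f}")    # about $746
```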

Of course, not everyone feels duped.



Somewhere, someone is walking around with an “I ♥ Brady” t-shirt. 

HT: My colleague Faye Glass, though she tells me this picture is all over the Internet.

Do You Hear What I Hear? Maybe Not.

December 18, 2011
Posted by Jay Livingston

As I’ve said before (here), the question the researcher asks is not always the question people hear.   That’s especially true when the question is about probabilities.

Here, for example, is the ending of a fictional vignette from a recent study in the Journal of Personality and Social Psychology.
Richard found a wallet on the sidewalk. Nobody was looking, so he took all of the money out of the wallet. He then threw the wallet in a trash can.
Is it more probable that Richard is
    a. a teacher
    b. a teacher and a rapist
Since the category “a teacher” necessarily includes teacher/rapists as well, the correct answer is “a.” But many people choose “b.”  The study used this “conjunction fallacy”* to probe for prejudices by switching out the rapist for various other categories.  Some subjects were asked about atheist/teachers, others about Muslim/teachers, and so on.  The finding:
A description of a criminally untrustworthy individual was seen as comparably representative of atheists and rapists but not representative of Christians, Muslims, Jewish people, feminists, or homosexuals.
Andrew Gelman, a usually mild-mannered reporter on things methodological, had a post on this with the subject line, “This one is so dumb it makes me want to barf.”
What’s really disturbing about the study is that many people thought it was “more probable” that the dude is a rapist than that he is a Christian! Talk about the base-rate fallacy.
Maybe it would settle Andrew’s stomach to remember that the question the researchers asked was almost certainly not the question people heard.   What the researchers pretend to be asking is this:
Of all thieves, which are there more of – teachers or rapist/teachers? 
After all, that is indeed the literal meaning.  But it’s pretty obvious that the question people are answering is something different:
Which group has a higher proportion of thieves among them – all teachers or the subset rapist/teachers?
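With some made-up base rates, a few lines of Python can show how the two readings pull in opposite directions (every number here is invented purely for illustration):

```python
# Two readings of "more probable: (a) a teacher, or (b) a teacher and
# a rapist?"  Every base rate below is invented for illustration.
teachers = 30_000            # hypothetical number of teachers
rapist_teachers = 30         # hypothetical subset who are also rapists

theft_rate_teachers = 0.01          # hypothetical theft rate, all teachers
theft_rate_rapist_teachers = 0.50   # hypothetical theft rate, the subset

# Literal reading: of all thieves, are there more teachers or
# rapist/teachers?  The superset always wins, so "a" is correct.
thieving_teachers = teachers * theft_rate_teachers
thieving_rapist_teachers = rapist_teachers * theft_rate_rapist_teachers
print(thieving_teachers >= thieving_rapist_teachers)     # True: answer "a"

# Heard reading: which group has the higher proportion of thieves?
print(theft_rate_rapist_teachers > theft_rate_teachers)  # True: answer "b"
```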
The researchers say they weren’t at all interested in demonstrating the conjunction fallacy.  They were just using it to uncover the distrust people feel towards atheists.  What they found was that when it comes to dishonesty, people (specifically, 75 female and 30 male undergrads at the University of British Columbia) rank atheists at about the same level as rapists.

But why resort to such roundabout tricks?  Why not ask the question directly?**
Who is more likely to steal a wallet when nobody is looking?
    a.  an atheist
    b. a rapist
    c.  neither; they are equally larcenous
Or:
On a seven-point scale, rank each of the following on how likely they would be to steal a wallet when nobody is looking:
  •     an atheist: 1   2   3   4   5   6   7
  •     a Christian: 1   2   3   4   5   6   7
  •     a rapist: 1   2   3   4   5   6   7
  •     etc. 
Instead, they asked questions that they knew would confuse nearly anyone not fluent in the language of statistics and probability.  I wonder what would happen if in their “who do you distrust” study they had included a category for experimental social psychologists.***

---------------
* Daniel Kahneman and Amos Tversky pretty much invented the conjunction fallacy thirty years ago with their “Linda problem,” and Kahneman discusses it in his recent book Thinking, Fast and Slow.  To get the right answer, you have to ignore intuition and make your thinking very, very slow.  Even then, people with no background in statistics and logic may still get it wrong.

** The authors’ presentation of their results is also designed to frustrate the ordinary reader. Each condition (rapist/teacher, atheist/teacher, homosexual/teacher, etc.) had 26 (or in one case 27) subjects.  The payoff was the number of errors in each group.  But the authors don’t say what that number was.  They give the chi-square, the odds ratios, the p’s and the b’s.  But they don’t tell us how many of the 26 subjects thought that the wallet snatcher was more likely to be an atheist/teacher or a Christian/teacher than to be merely a teacher.

*** The JPSP is one of the most respected journals in the field, maybe the most respected, influential, and frequently cited, as I pointed out here.

Surveys and Confirmation Bias

November 10, 2011
Posted by Jay Livingston

When he taught research methods as a grad student, Michael Schwartz gave his students this assignment: “Create a survey to show . . .” and he would tell them the conclusion he wanted the survey to support.  The next week, he’d give them the same assignment but with the desired conclusion the opposite of the first one.

A year and a half ago, I criticized (here) a much publicized study by Dan Klein and Zeljka Buturovic.  “This survey,” I said, “wasn’t designed to discover what people think. It was designed to prove a political point,” and that point was that liberal ideology blinds people to economic facts. 

I was reminded of Mike’s assignment when I read Klein’s recent article at The Atlantic.  In a bit of academic fairness that’s probably all too rare, Klein went on to create a survey designed to see if conservative ideology has a similar effect.

Klein hoped that his conservative and libertarian allies would not so readily agree with politically friendly economic ideas that were nevertheless unsound. But conservatives in the new survey proved “equally stupid,” doing no better than the liberals in the earlier survey.

Klein also expected some nasty nyah-nyahing from his liberal critics.  But no, “The reaction to the new paper was quieter than I expected.”   In fact, one of those liberal critics, Matt Yglesias, offered an observation that Klein used as his takeaway from the two surveys: “there’s a lot of confirmation bias out there.” 

Yes, but confirmation bias is not just something that affects people who respond to surveys.  As Mike’s assignment makes clear, we also need to be wary of confirmation bias on the part of those who create the surveys. There is the further problem I mentioned in my earlier post:  a one-shot survey is inherently ambiguous. We can’t be sure just what the respondents really hear when they are asked the question. 

My own takeaway, besides admiration for Klein’s honesty, is that when you design your research as a statement (proving some point), you don’t learn nearly as much as when you design it as a genuine question.

Lying With Statistics, and Really Lying With Statistics

November 4, 2011
Posted by Jay Livingston

“The #1 way to lie with statistics is . . . to just lie!” says Andrew Gelman, who a) knows much about statistics and b) is very good at spotting statistical dishonesty.

But maybe there’s a difference between lying with statistics and just plain making stuff up.

I’ve commented before about social psychologists’ affinity for Candid-Camera deception, but this Dutch practitioner goes way beyond that.  [The Telegraph has the story.] 


The committee set up to investigate Prof Stapel said after its preliminary investigation it had found "several dozen publications in which use was made of fictitious data" . . .
[Stapel’s] paper that linked thoughts of eating meat with anti-social behaviour was met with scorn and disbelief when it was publicised in August; it took several doctoral candidates Stapel was mentoring to unmask him. . . .

the three graduate students grew suspicious of the data Prof Stapel had supplied them without allowing them to participate in the actual research. When they ran statistical tests on it themselves they found it too perfect to be true and went to the university's dean with their suspicions.
What’s truly unsettling is to think that maybe he’s not the only one.

Abstract Preferences and Real Choices

November 3, 2011
Posted by Jay Livingston
Cross-posted at Sociological Images

We’ve known for a long time that surveys are often very bad at predicting behavior.  To take the example that Malcolm Gladwell uses, if you ask Americans what kind of coffee they want, most will say “a dark, rich, hearty roast.”  But what they actually prefer to drink is “milky, weak coffee.”

Something that sounds good in the abstract turns out to be different from the stuff you actually have to drink. 

Election polls usually have better luck, since indicating your choice on a voting machine isn’t all that different from speaking that choice to a pollster.  But political preference polls, too, can run into that abstract-vs.-actual problem.

Real Clear Politics recently printed some poll results that were anything but real clear.  RCP looked at polls matching Obama against the various Republican candidates.  In every case, if you use the average results of the different polls, Obama comes out on top. But in polls that matched Obama against “a Republican,” the Republican wins.


The graph shows only the average of the polls.  RCP also provides the results of the various polls (CNN, Rasmussen, ABC, etc.). 

Apparently, the best strategy for the GOP is to nominate a candidate but not tell anyone who it is.

If Your Survey Doesn’t Find What You Want It to Find . . .

October 19, 2011
Posted by Jay Livingston (Cross-posted at Sociological Images)


. . . say that it did.

Doug Schoen is a pollster who wants the Democrats to distance themselves from the Occupy Wall Street protesters.   (Schoen is Mayor Bloomberg’s pollster.  He has also worked for Bill Clinton.)  In The Wall Street Journal yesterday (here),  he reported on a survey done by a researcher at his firm.  She interviewed 200 of the protesters in Zucotti Park.

Here is Schoen’s overall take:
What binds a large majority of the protesters together—regardless of age, socioeconomic status or education—is a deep commitment to left-wing policies: opposition to free-market capitalism and support for radical redistribution of wealth, intense regulation of the private sector, and protectionist policies to keep American jobs from going overseas.
I suppose it’s nitpicking to point out that the survey did not ask about SES or education.  Even if it had, breaking the 200 respondents down into these categories would give numbers too small for comparison. 

More to the point, that “large majority” opposed to free-market capitalism is 4% – eight of the people interviewed.  Another eight said they wanted “radical redistribution of wealth.”  So at most, 16 people, 8%, mentioned these goals.  (The full results of the survey are available here.)
What would you like to see the Occupy Wall Street movement achieve? {Open Ended}
35% Influence the Democratic Party the way the Tea Party has influenced the GOP
4% Radical redistribution of wealth
5% Overhaul of tax system: replace income tax with flat tax
7% Direct Democracy
9% Engage & mobilize Progressives
9% Promote a national conversation
11% Break the two-party duopoly
4% Dissolution of our representative democracy/capitalist system
4% Single payer health care
4% Pull out of Afghanistan immediately
8% Not sure
Schoen’s distortion reminded me of this photo that I took on Saturday (it was our semi-annual Sociology New York Walk, and Zucotti Park was our first stop).



The big poster in the foreground, the one that captures your attention, is radical militance – the waif from the “Les Mis” poster turned revolutionary.  But the specific points on the sign at the right are conventional liberal policies – the policies of the current Administration.*

There are other ways to misinterpret survey results.  Here is Schoen in the WSJ:
Sixty-five percent say that government has a moral responsibility to guarantee all citizens access to affordable health care, a college education, and a secure retirement—no matter the cost.
Here is the actual question:
Do you agree or disagree with the following statement: Government has a moral responsibility to guarantee healthcare, college education, and a secure retirement for all.
“No matter the cost” is not in the question.  As careful survey researchers know, even slight changes in wording can affect responses.  And including or omitting “no matter the cost” is hardly a slight change.

As evidence for the extreme radicalism of the protesters, Schoen says,
By a large margin (77%-22%), they support raising taxes on the wealthiest Americans.
Schoen doesn’t bother to mention that this isn’t much different from what you’d find outside Zucotti Park.  Recent polls by Pew and Gallup find support for increased taxes on the wealthy ($250,000 or more) at 67%.  (Given the small sample size of the Zucotti poll, the 77% figure may be within the margin of error of that 67%.)  Gallup also finds that majorities of two-thirds or more think that banks, large corporations, and lobbyists have too much power. 
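For what it’s worth, the margin-of-error arithmetic is easy to sketch (this assumes the Zucotti poll behaved like a simple random sample, which is itself a generous assumption):

```python
import math

# Rough 95% margin of error for a proportion from a simple random sample.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.77, 200            # Schoen's 77% on a 200-person sample
moe = margin_of_error(p, n)
print(f"77% plus or minus {moe:.1%}")                  # roughly +/- 5.8%
print(f"interval: ({p - moe:.1%}, {p + moe:.1%})")
```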
Thus Occupy Wall Street is a group of engaged progressives who are disillusioned with the capitalist system and have a distinct activist orientation. . . . .Half (52%) have participated in a political movement before.
That means that nearly half the protesters were never politically active until Occupy Wall Street inspired them.

Reading Schoen, you get the impression that these are hard-core activists, old hands at political demonstrations, with Phil Ochs on their iPods and a well-thumbed copy of “The Manifesto” in their pockets.  In fact, the protesters were mostly young people with not much political experience who wanted to work within the system (i.e., with the Democratic party) to achieve fairly conventional goals, like keeping the financial industry from driving the economy into a ditch again.

And according to a recent Time survey, more than half of America views them favorably.

------------------------------
* There were other signs with other messages.  In fact, sign-making seemed to be one of the major activities in Zucotti Park.  Some of them, like these, did not seem designed to get much play in the media. 

Chart Art - FBI-Style

September 17, 2011
Posted by Jay Livingston
(Cross-posted at Sociological Images.)

The FBI is teaching its counter-terrorism agents that Islam is an inherently violent religion – and that so are its followers.  Not just the extremists and radicals, but the mainstream. 
There may not be a ‘radical’ threat as much as it is simply a normal assertion of the orthodox ideology. . . .The strategic themes animating these Islamic values are not fringe; they are main stream.
Wired  got hold of the training materials.  The Times has more today, including a section of the report that describes Muhammad as “a cult leader for a small inner circle.” (How small? Twelve perhaps?)  He also “employed torture to extract information.”*

An FBI PowerPoint slide has a graph with the data to support its assertions.


The graph clearly shows that followers of the Torah and the Bible have gotten progressively less violent since 1400 BC, while followers of the Koran flatline starting around 620 AD and remain just as violent as ever.

Unfortunately, the creators of the chart do not say how they operationalized “violent” and “non-violent.”  But since the title of the presentation is “Militancy Considerations,” it might have something to do with military, para-military, and quasi-military violence.  When it comes to quantities of death, destruction, and injury, these overwhelm other types of violence. 

I must confess that my knowledge of history is sadly wanting, and I was educated before liberals imposed all this global, multicultural nonsense on schools, so I know nothing about wars that might have happened among Muslims during the period in question.  What I was taught was that the really big wars, the important wars, the wars that killed the most people, were mostly affairs among followers of the Bible.  Some of these were so big that they were called “World Wars” even though followers of the Qur’an had very low levels of participation.  Some of these wars lasted quite a long time – thirty years, a hundred years.  I was also taught that in the important violence that did involve Muslims – i.e., the Crusades** – it was the followers of the Bible who were doing most of the killing. 

Perhaps those with more knowledge of Muslim militant violence can provide the data.


-----------------------------

* To be fair, the FBI seems to have been innocent of any of the torture that took place during the Bush years.  That was all done by the military and the CIA – and by the non-Christian governments to which the Bush administration outsourced the work. 

** Followers of the Bible crusading to “take back our city” from a Muslim-led regime may have familiar overtones.

Home Team Advantage

September 14, 2011
Posted by Jay Livingston
If you’re looking for an example of the Lake Wobegon effect (“all the children are above average”), you can’t do much better than this one.  It’s almost literal.


The survey didn’t ask about the children.  It asked about schools – schools in general and your local school.  As with “Congress / my Congressional rep,” people rated America’s schools as only so-so.  Barely a fifth of respondents gave America’s schools an above-average grade.  But when people rated their own local schools, 46% gave B’s and A’s.  The effect was even stronger among the affluent (upper tenth of the income distribution for their state) and among teachers.

The findings about the affluent are no surprise, nor are their perceptions skewed.  Schools in wealthy neighborhoods really are above average.  What’s surprising is that only 47% of the wealthy gave their local schools an above-average grade. 

The teachers, though, are presumably a representative sample, yet 64% of their schools are above average.  I can think of two explanations for the generosity of the grades they assign their own schools:
  • Self-enhancement.  Teachers have a personal stake in the rating of schools generally.  They have an even larger stake in the rating of their own school.
  • Familiarity.  We feel more comfortable with the familiar.  (On crime, people feel safer in their own neighborhoods, even the people who live in high-crime neighborhoods.)  So we rate familiar things more charitably.  For teachers, schools are something they’re very familiar with, especially their local schools.
[Research by Howell, Peterson, and West reported here.
HT: Jonathan Robinson at The Monkey Cage]

Bought Sex?

July 20, 2011
Posted by Jay Livingston

Did you buy sex last year?

You probably said no, even if you’re a man. But wait. First look at “The John Next Door,” an article currently up at Newsweek (subhead: “The men who buy sex are your neighbors and colleagues”). It features a study by Melissa Farley called “Comparing Sex Buyers With Men Who Don’t Buy Sex.”
No one even knows what proportion of the male population does it; estimates range from 16 percent to 80 percent.
Actually, a considerably lower estimate comes from the GSS.
PAIDSEX – Had sex for pay last year: “If you had other partners, please indicate all categories that apply to them. d. Person you paid or paid you for sex.”
Here are the results since the GSS started asking this question. (Click on the graph for a larger view.)
Not 16-80%, but somewhere around 5%.

Not to get too Clintonian, but it seems to depend on what the meaning of “sex” is. The GSS respondents probably thought that paying for sex meant paying someone to have sex. Farley’s definition was somewhat broader.
Buying sex is so pervasive that Farley’s team had a shockingly difficult time locating men who really don’t do it. The use of pornography, phone sex, lap dances, and other services has become so widespread that the researchers were forced to loosen their definition in order to assemble a 100-person control group.
So if you bought a copy of Playboy, you paid for sex. And if you looked at it twice last month, you are disqualified from the control of “men who don’t buy sex.”
“We had big, big trouble finding nonusers,” Farley says. “We finally had to settle on a definition of non-sex-buyers as men who have not been to a strip club more than two times in the past year, have not purchased a lap dance, have not used pornography more than one time in the last month, and have not purchased phone sex or the services of a sex worker, escort, erotic masseuse, or prostitute.”
I don’t have Farley’s data. If the control group of nonusers was 100, I assume that the user group n was the same – not really large enough for estimating the prevalence of the different forms of buying sex. How many had paid a prostitute, how many had looked at porn twice in a month? Some people probably think that there’s a meaningful distinction between those two. The implication of much of the Newsweek article is that they are all “sex buyers” and that they therefore share the same ugly attitudes towards women.

SAT, GPA, and Bias

July 8, 2011
Posted by Jay Livingston

(Cross-posted at Sociological Images)


Is the SAT biased? If so, against whom is it biased?

It has long been part of the leftist creed that the SAT and other standardized tests are biased against the culturally disadvantaged – racial minorities, the poor, et al. Those kids may be just as academically capable as more privileged kids, but the tests don’t show it.

But maybe SATs are biased against privileged kids. That’s the implication in a blog post by Greg Mankiw. Mankiw is not a liberal. In the Bush-Cheney first term, he was the head of the Council of Economic Advisors. He is also a Harvard professor and the author of a best-selling economics textbook. Back in May he had a blog post called “A Regression I’d Like to See.” If tests are biased in the way liberals say they are, says Mankiw, let’s regress GPA on SAT scores and family income. The correlation with family income should be negative.
a lower-income student should do better in college, holding reported SAT score constant, because he managed to get that SAT score without all those extra benefits.
In fact, the regression had been done, and Mankiw added this update:
Todd Stinebrickner, an economist at The University of Western Ontario, emails me this comment: “Regardless, within the income groups we examine, students from higher income backgrounds have significantly higher grades throughout college conditional on college entrance exam . . . scores.” [Mankiw added the boldface for emphasis.]

What this means is that if you are a college admissions officer trying to identify the students who will do best in college, as measured by grades, you would give positive rather than negative weight on family income.
Not to give positive weight to income, therefore, is bias against those with higher incomes.

To see what Mankiw means, look at some made-up data on two groups. To keep things civil, I’m just going to call them Group One and Group Two. (You might imagine them as White and Black, Richer and Poorer, or whatever your preferred categories of injustice are. I’m sticking with One and Two.) Following Mankiw, we regress GPA on SAT scores. That is, we use SAT scores as our predictor and we measure how well they predict students’ performance in college (their GPA).

(Click on the image for a larger, clearer view)

In both groups, the higher the SAT, the higher the GPA. As the regression line shows, the test is a good predictor of performance. But you can also see that the Group One students are higher on both. If we put the two groups together we get this.

Just as Mankiw says, if you’re a college admissions director and you want the students who do best, at any level of SAT score, you should give preference to Group One. For example, look at all the students who scored 500 on the SAT (i.e., holding SAT constant at 500). The Group One kids got better grades than did the Group Two kids. So just using the SATs, without taking the Group factor (e.g., income) into account, biases things against Group One. The Group One students can complain: “the SAT underestimates our abilities, so the SAT is biased against us.”

Case closed? Not yet. I hesitate to go up against an academic superstar like Mankiw, and I don’t want to insult him (I’ll leave that to Paul Krugman). But there are two ways to regress the data. So there’s another regression, maybe one that Mankiw does not want to see.

What happens if we take the same data and regress SAT scores on GPA? Now GPA is our predictor variable. In effect, we’re using it as an indicator of how smart the student really is, the same way we used the SAT in the first graph.
Let’s hold GPA constant at 3.0. The Group One students at that GPA have, on average, higher SAT scores. So the Group Two students can legitimately say, “We’re just as smart as the Group One kids; we have the same GPA. But the SAT gives the impression that we’re less smart. So the SAT is biased against us.”

So where are we?
  • The test makers say that it’s a good test - it predicts who will do well in college.
  • The Group One students say the test is biased against them.
  • The Group Two students say the test is biased against them.
And they all are right.
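For readers who want to replicate the two regressions, here is a sketch with simulated data (all the parameters are invented, in the spirit of the made-up data above):

```python
import random

# Simulated data: two groups, GPA rising with SAT inside each group,
# Group One shifted up on both.  All parameters are invented.
random.seed(0)

def make_group(sat_mean, gpa_mean, n=200):
    pts = []
    for _ in range(n):
        sat = random.gauss(sat_mean, 80)
        gpa = gpa_mean + 0.004 * (sat - sat_mean) + random.gauss(0, 0.3)
        pts.append((sat, gpa))
    return pts

group_one = make_group(sat_mean=550, gpa_mean=3.2)
group_two = make_group(sat_mean=450, gpa_mean=2.6)

def predict(data, x, y, at):
    """Least-squares regression of column y on column x, evaluated at `at`."""
    xs = [row[x] for row in data]
    ys = [row[y] for row in data]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return my + slope * (at - mx)

# Regress GPA on SAT: holding SAT at 500, Group One gets the better GPA,
# so the SAT "underestimates" Group One.
print(predict(group_one, 0, 1, 500) > predict(group_two, 0, 1, 500))

# Regress SAT on GPA: holding GPA at 3.0, Group One shows the higher SAT,
# so the SAT makes Group Two look less smart than their grades say they are.
print(predict(group_one, 1, 0, 3.0) > predict(group_two, 1, 0, 3.0))
```

Both comparisons come out in favor of Group One, which is the whole point: each group has a legitimate bias complaint, depending on which way you run the regression.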


Huge hat tip to my brother, S.A. Livingston. He told me of this idea (it dates back to a paper from the 1970s by Nancy Cole) and provided the made-up data to illustrate it. He also suggested these lines from Gilbert and Sullivan:
And you'll allow, as I expect
That they are right to so object
And I am right, and you are right
And everything is quite correct.




Overcoming Social Desirability Bias – He’s Got a Little List

April 19, 2011
Posted by Jay Livingston

As some day it may happen that a survey must be done, you need a little list, a quick five-item list – for sex or race or crime or things quite non-PC but fun, where pollsters all have missed, despite what they insist. There’s the guy who says he’d vote for blacks if they are qualified; he’d vote for women too, but are we sure he hasn’t lied? “How many partners have you had?” Or “Did you ever stray?” With things like this you can’t always believe what people say. You tell them it’s anonymous, but still their doubts persist, and so your methodology can use this little twist.

It’s called the List Experiment (also the Unmatched Count Technique). It’s been around for a few years, though I confess I wasn’t aware of it until I came across this recent Monkey Cage post by John Sides that linked to another post from the presidential year of 2008. Most surveys then were finding that fewer than 10% of the electorate were unwilling to vote for a woman (Hillary was not mentioned by name). But skeptical researchers (Matthew Streb et al., here gated), instead of asking the question directly, split the sample in half. They asked one half

How many of the following things make you angry or upset?
  • The way gasoline prices keep going up.
  • Professional athletes getting million dollar-plus salaries.
  • Requiring seat belts to be used when driving.
  • Large corporations polluting the environment.
Respondents were told not to say which ones pissed them off, merely how many. Researchers calculated the average number of items people found irritating. The second half got the same list but with one addition:
  • A woman serving as president.
If the other surveys are correct, adding this one item should increase the mean count by no more than 0.10 – at most one voter in ten with one more thing to be angry about. As it turned out, the difference in means implied that 26% of the electorate would be upset or angry about a woman president, considerably more than the 6% in the GSS sample who said they wouldn’t vote for a woman.
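The arithmetic behind the estimate is nothing more than a difference of means. Here’s a toy version with invented counts (not the Streb et al. data): the treatment group saw the five-item list, the control group the four-item list, and the gap between the two averages estimates the share upset by the added item.

```python
# How many list items each respondent said made them angry or upset.
# Invented numbers for illustration only.
control   = [1, 2, 0, 3, 2, 1, 2, 1, 3, 2]   # saw the 4-item list
treatment = [2, 2, 1, 3, 2, 2, 2, 1, 3, 2]   # same list + "a woman president"

mean_control = sum(control) / len(control)       # 1.7 items
mean_treatment = sum(treatment) / len(treatment) # 2.0 items

# Nobody names the sensitive item, but the difference in means
# estimates the proportion of respondents it angered.
estimate = mean_treatment - mean_control
print(f"Estimated share upset by the added item: {estimate:.0%}")
```

With these invented numbers the estimate is 30%; in the Streb et al. data the same subtraction produced the 26% figure.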

The technique reminds me of a mentalist act: “Look at this list, sir, and while my back is turned tell me how many of those things you have done. Don’t tell me which ones, just the total number. Now I want you to concentrate very hard . . . .” But I can certainly see its usefulness as a way to check for social desirability bias.

What’s Wrong With (Percentages in) Mississippi

April 10, 2011
Posted by Jay Livingston

A Public Policy Polling survey asked Mississippi Republicans about their opinion on interracial marriage. It also asked how they felt about various politicians. The report concludes, “Tells you something about the kinds of folks who like each of those candidates.”

Not quite.

What’s been getting the most attention is the finding that Mississippi Republicans think interracial marriage should be illegal. Not all Mississippi Republicans. Just 46% of them (40% think it should be legal).* Does their position on intermarriage tell us anything about who they might like as a candidate? Does a Klansman wear a sheet?

(Click on the chart for a larger view.)

It’s no surprise that Sarah Palin is much preferred to Romney. But as PPP points out, racial attitudes figure differently depending on the candidate. When you go from racists to nonracists,** Palin’s favorable/unfavorable ratio takes a hit. But Romney’s gets a boost.

But does this tell us something about “the kinds of folks who like each of those candidates”? The trouble is that the statement percentages on the dependent variable, implicitly comparing Romney supporters with Palin supporters. But the percentages PPP actually gives compare racists with nonracists.** The statement implies that candidate preferences tell us about racial attitudes. But what the data show is that racial attitudes tell us about candidate preferences. The two are not the same. From the data PPP gives, we don’t actually know what percent of Palin supporters favor laws against intermarriage. Ditto for Romney supporters.
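The two directions of percentaging give genuinely different numbers. Here’s a sketch with hypothetical cell counts (PPP did not publish this crosstab, so these figures are mine, chosen only to show the asymmetry):

```python
# Hypothetical crosstab counts -- NOT PPP's data.
#                       favors Palin   favors Romney
racist_palin,    racist_romney    = 90, 30
nonracist_palin, nonracist_romney = 60, 60

# Percentaging on attitude (what PPP's numbers allow):
# among racists, what share favors Palin?
p_palin_given_racist = racist_palin / (racist_palin + racist_romney)

# Percentaging on candidate (what the quoted claim implies):
# among Palin supporters, what share is racist?
p_racist_given_palin = racist_palin / (racist_palin + nonracist_palin)

print(f"P(favors Palin | racist) = {p_palin_given_racist:.0%}")  # 75%
print(f"P(racist | favors Palin) = {p_racist_given_palin:.0%}")  # 60%
```

Same cells, different denominators, different percentages – which is why you can’t read “the kinds of folks who like each candidate” off percentages computed within attitude groups.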

In any case, neither Palin nor Romney is the top choice of Mississippi Republicans (especially the racists), who may be thinking racially but are acting locally and going with their own governor first and the former governor of neighboring Arkansas second.


* The sample was only 400. But the results aren’t too different from what the GSS has found. The most recent GSS I could find that included RACMAR was from 2002. In the “East South Central” region, the percent favoring laws against interracial marriage was 36%. So among Republicans, it might have been ten points higher.

**I realize that neither of these terms “racist” and “nonracist” is necessarily accurate. I use them as shorthand for, respectively, “people who think interracial marriage should be illegal” and “people who think interracial marriage should be legal.”

Graphing Ideas about Marriage (Me vs. USA Today)

February 3, 2011
Posted by Jay Livingston

As someone with the visual aptitude of gravel, I shouldn’t be edging into Flâneuse territory. But when I saw this graph in USA Today this morning, I was frustrated.

(Click on the image for a larger view.)
Responses, by age group, when asked if they want to marry:
SOURCES: Match.com/MarketTools survey of 5,199 men and women who either have never been married or are widowed, divorced or separated.

I found it hard to make comparisons from one age group to another. In the online edition, the layout was better – all in a row – and the addition of even a single color helped. (Odd that USA Today, the newspaper that led the way in using color, gave its print readers the graph in only black-and-white, or more accurately gray-and-gray.)

(Click on the image for a larger view.)

I thought I’d try my own hand with my rudimentary knowledge of Excel.

(Click on the image for a larger view.)

What do you think?

The Law of Ungraspably Large Numbers

December 23, 2010
Posted by Jay Livingston

Been here long?

Gallup regularly asks this question:
Which of the following statements comes closest to your views on the origin and development of human beings --
  1. Human beings have developed over millions of years from less advanced forms of life, but God guided this process,
  2. Human beings have developed over millions of years from less advanced forms of life, but God had no part in this process
  3. God created human beings pretty much in their present form at one time within the last 10,000 years or so?
Here are the results:

(Click on the graph for a larger view.)

For better or worse, Godless evolutionism has been rising steadily if slowly for the past decade – 16%, and counting. And “only” 40% of us Americans, down from 47%, believe that humans are johnnies-come-lately. Scientific fact is making some headway. But a lot of people still believe in something that’s just not true.

Andrew Gelman explains it in psycho-economic terms. The “belief in young-earth creationism . . . is costless.” What you hear from religion contradicts what you hear from science class in school. The cost (“discomfort” in Andrew’s terms) of rejecting one belief outweighs the cost of rejecting the other. That’s probably true, and it helps explain the popularity of the have-it-both-ways choice – evolution guided by God.

I think there’s something else – the law of ungraspably large numbers. For example, I know how far it is to California (3000 miles), and I even think I know how far it is to the moon (240,000 miles – and I’m not looking this up on the Internet; if I’m wrong, I’ll let my ignorance stand since that’s partly the point I’m trying to make). But once you get past that – how far is it to the sun or to Jupiter or to Betelgeuse? – you could tell me any number up in the millions or more – a number so wrong as to make any astronomer chuckle – and I’d think it sounded reasonable.

Those big numbers and the differences between them are meaningful only to people who are familiar with them. They are so large that they lie outside the realm of everyday human experience. The same holds for distances in time. Ten thousand years – that seems like a long, long time ago, long enough for any species to have been around. But “millions of years” is like those millions or hundreds of millions of miles – ungraspably large.

Since the number is outside the realm of human experience, it doesn’t make sense that humans or anything resembling them or even this familiar planet could have existed that long ago.

I suspect that it’s this same law of ungraspably large numbers that allows politicians to posture as doing something about “the huge deficit” by attacking a wasteful government program that costs $3 million. If I spend a few thousand dollars for something, that’s a big ticket item, so three million sounds like a lot. Millions and billions both translate to the same thing: “a lot of money” just as distances in millions of miles and billions of miles are both “a long way away.” The difference between them is hard to grasp.*
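The arithmetic, once you actually do it, is deflating. In the sketch below, the $1.3 trillion figure is my own stand-in for “the huge deficit” (roughly the headline annual figure around the time of this post); the point survives any number in that neighborhood.

```python
program = 3_000_000          # one "wasteful" $3 million program
deficit = 1_300_000_000_000  # hypothetical annual deficit, ~$1.3 trillion

share = program / deficit            # what canceling the program covers
programs_needed = deficit / program  # programs to cancel to close the gap

print(f"Cutting the program covers {share:.5%} of the deficit")
print(f"Equivalent programs needed to close it: {programs_needed:,.0f}")
```

Canceling the program covers about two ten-thousandths of one percent of the deficit; you’d need to find more than 400,000 such programs to close it. “A lot of money” and “a lot of money” turn out to differ by a factor too large to feel.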

*How many such programs would the government have to cancel to cover the revenue losses we just signed on for by extending the tax cuts on incomes over $250,000? And if you think those tax cuts for the rich will pay for themselves or increase revenue, there’s a lovely piece of 1883 pontine architecture I’d like to show you for possible purchase.

Methods Fraud - Right and Left

June 30, 2010

Two links:

1. Fox News used a really, really deceptive graph to make job loss data look even worse than it really is. Media Matters has the story.

2. Research 2000, a polling firm, may have been faking its data. Kos, who has been relying on their polls, has a long post detailing the tell-tale signs – things people would do if they were trying to make their polls appear to follow random sampling. (Makes me feel a bit more confident of my own criticism of a Research 2000 poll.)
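One generic version of such a tell-tale sign – much simpler than the analysis in the Kos report, and offered here only as an illustration of the idea – is that fabricated numbers are often too smooth. Real random samples bounce around by an amount binomial sampling theory predicts; a faker nudging last week’s number up or down a point produces far less variance than honest sampling noise would.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 600       # respondents per weekly poll (hypothetical)
p = 0.50      # true approval share
weeks = 60

# Genuine polls: binomial sampling noise, SE = sqrt(p(1-p)/n) ~ 2 points
genuine = rng.binomial(n, p, size=weeks) / n

# A "too smooth" fabricated series: tiny wiggles around 50%
faked = p + rng.uniform(-0.005, 0.005, size=weeks)

expected_se = np.sqrt(p * (1 - p) / n)
print(f"expected sampling SD: {expected_se:.3f}")
print(f"genuine series SD:    {np.std(genuine):.3f}")
print(f"faked series SD:      {np.std(faked):.3f}")
```

The genuine series scatters at about the theoretical two points; the fabricated one is suspiciously quiet. The actual Research 2000 analysis looked for patterns like this (among others) that real random sampling essentially never produces.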

UPDATE, July 1: I had thought that the Kos/Research 2000 story was just for those interested in technical matters (sampling, data distributions) and maybe political blogs. But both the Times and the WaPo, and perhaps other newspapers, have stories about it today.

The Ecological Fallacy

May 10, 2010
Posted by Jay Livingston

The ecological fallacy is alive and well. Ross Douthat, the New York Times’s other conservative (the one that isn’t David Brooks), breathes life into it in his op-ed today on Red Families v. Blue Families, the new book by Naomi Cahn and June Carbone.

First Douthat gives props to the “blue family” model:
couples with college and (especially) graduate degrees tend to cohabit early and marry late, delaying childbirth and raising smaller families than their parents, while enjoying low divorce rates and bearing relatively few children out of wedlock.
Then there’s the “red family” for whom the stable, two-parent family is more a hope than a reality:
early marriages coexist with frequent divorces, and the out-of-wedlock birth rate keeps inching upward.
Blue looks good – good for the couples, good for the kids, good for society. But Douthat finds a moral thorn among the blue roses – abortion.
The teen pregnancy rate in blue Connecticut, for instance, is roughly identical to the teen pregnancy rate in red Montana. But in Connecticut, those pregnancies are half as likely to be carried to term.

So it isn’t just contraception that delays childbearing in liberal states, and it isn’t just a foolish devotion to abstinence education that leads to teen births and hasty marriages in conservative America. It’s also a matter of how plausible an option abortion seems, both morally and practically, depending on who and where you are.
Douthat is channeling Balzac: Behind every great fortune lies a great crime. Behind every more modest fortune – say, enough to live in Danbury if not Greenwich – is a more modest crime, i.e., an abortion or two.

But here’s the fallacy: Douthat makes it appear that the Connecticut residents who are getting those abortions are the same “couples with college and (especially) graduate degrees” we met in the paragraph on blue families. The illogic goes like this:
Blue states with higher levels of income and education also have higher levels of abortion than do Red states.
Therefore more Blue chip people have more abortions than do Red necks.
No, no, no (I hear myself repeating to my students). You cannot assume that a correlation at the state level also exists at the individual level. Just because wealthier states have higher rates of abortion, you cannot assume that wealthier individuals have higher rates of abortion. To make that assumption is to commit the ecological fallacy.

In fact, the Connecticut women who are getting abortions may also be relatively poor and uneducated. The difference is that abortion may give them access to further education or employment – not a graduate degree and a 6-figure job, but something better than what they could expect were they in Alabama. Or Montana.
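Made-up numbers show how the two levels can point in opposite directions. In the sketch below (my invented figures, not Cahn and Carbone’s data), poorer women have the higher abortion rate within each state, yet the richer state has the higher overall rate.

```python
# Invented numbers: per state, two income groups as (women, abortions).
# Connecticut is the "richer" state (more women in the rich group).
connecticut = {"poor": (200, 60), "rich": (800, 80)}
montana     = {"poor": (600, 90), "rich": (400, 20)}

def rate(group):
    women, abortions = group
    return abortions / women

def overall(state):
    women = sum(n for n, _ in state.values())
    abortions = sum(a for _, a in state.values())
    return abortions / women

# Individual level: in BOTH states, poor women have the higher rate
for name, state in [("Connecticut", connecticut), ("Montana", montana)]:
    print(name, f"poor {rate(state['poor']):.0%}", f"rich {rate(state['rich']):.0%}")

# State level: richer Connecticut nonetheless has the higher overall rate
print(f"Connecticut overall: {overall(connecticut):.0%}")  # 14%
print(f"Montana overall:     {overall(montana):.0%}")      # 11%
```

At the state level, wealth and abortion go together; at the individual level, the relationship runs the other way. Reading the first correlation as if it were the second is the ecological fallacy in one screenful.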

The Uses and Abuses of Surveys

May 10, 2010
Posted by Jay Livingston

Ask a silly question, you get a silly answer. Ask a politically loaded question, you get a political answer – even if the literal meaning of your question seems to be asking about matters of fact and not opinion.

Here are eight questions from a Zogby poll. Respondents were given a Likert scale from Strongly Agree to Strongly Disagree, but the authors treat answers as either correct or incorrect according to basic economic principles.
1. Restrictions on housing development make housing less affordable.
2. Mandatory licensing of professional services increases the prices of those services.
3. Overall, the standard of living is higher today than it was 30 years ago.
4. Rent control leads to housing shortages.
5. A company with the largest market share is a monopoly.
6. Third-world workers working for American companies overseas are being exploited.
7. Free trade leads to unemployment.
8. Minimum wage laws raise unemployment.
Respondents were also asked to classify themselves on a political spectrum – Progressive, Liberal, Moderate, Conservative, Very Conservative, Libertarian.
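The scoring works something like the sketch below. This is not the authors’ actual code, and the keyed “correct” answers are my paraphrase of their scoring: each statement has an economically “enlightened” side, and only an answer on the opposite side of the key counts as incorrect (a “not sure” counts as neither).

```python
# The "enlightened" answer for a few of the statements, per the
# authors' stated economic principles (my paraphrase).
KEY = {
    "Rent control leads to housing shortages": "agree",
    "Free trade leads to unemployment": "disagree",
    "Minimum wage laws raise unemployment": "agree",
}

# Only answers on the wrong side of the key are scored incorrect.
WRONG_SIDE = {
    "agree": ("disagree", "strongly disagree"),
    "disagree": ("agree", "strongly agree"),
}

def count_incorrect(responses):
    """responses maps each statement to a Likert answer."""
    return sum(1 for statement, answer in responses.items()
               if answer in WRONG_SIDE[KEY[statement]])

respondent = {
    "Rent control leads to housing shortages": "strongly disagree",  # incorrect
    "Free trade leads to unemployment": "not sure",                  # not counted
    "Minimum wage laws raise unemployment": "agree",                 # correct
}
print(count_incorrect(respondent))  # 1
```

The point of the sketch is how much the method throws away: a five-point scale of agreement collapses into right/wrong by fiat of the key – which is exactly where the arguments below come in.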

This survey wasn’t designed to discover what people think. It was designed to prove a political point: “The Further Left You Are the Less You Know About Economics.” That’s the title of a post about it at Volokh Conspiracy. A paper by Zeljka Buturovic and Dan Klein, who designed the survey, gives the results.

(Click on the image for a view large enough to actually read)

The results were similar for the other questions.

To be sure, the liberals’ view of economic cause-effect relationships reflects the way they would like the world to be rather than the way the world actually is. But the bias of the poll is obvious. As monkeyesq says in his comment at Volokh,
1. Pick 8 liberal positions that have a questionable economic basis;
2. Ask people whether they “agree” or “disagree” with the statements;
3. Find that liberals are more likely to support liberal positions;
4. Claim that liberals don’t understand economics.
There’s an even larger problem here – a problem that affects not just polls that have an obvious ax to grind,* but a basic problem of all survey research: the question the survey asks may not be the question the respondent hears or answers.

These eight questions have a literal meaning. As Todd Zywicki, who wrote the Volokh post, says, “Note that the questions here are not whether the benefits of these policies might outweigh the costs, but the basic economic effects of these policies.”

True, the questions do not ask about costs and benefits, although I don’t think that the survey included an explicit caveat like the one Zywicki adds after the fact. Still, we have to wonder about how people really heard these questions.

“Mandatory licensing of professional services increases the prices of those services” – Agree or Disagree? Maybe some people hear a different question, a question about policy implications: “Would you like cheaper, but unlicensed, doctors?”

“A company with the largest market share is a monopoly.” Maybe what the person hears is: “Can companies with a large market share – though less than the share required for a monopoly (100%?) – still exercise monopolistic powers?”

As for the “exploitation” of third-world workers, the word may have a precise economic definition (e.g., it’s exploitation only if the worker has no choice) – I don’t know. But even if such an economic definition exists, to most people the word evokes moral judgment, not economics.

The other items also have flaws, as some of the comments at Volokh (now 200 and counting) point out. (I confess that I’m still puzzled by the responses to Standard of Living. Nearly a third of all the respondents think that the standard of living today is no better than it was 30 years ago – 55% on the left, 12% on the right, 21% of libertarians.)

The survey may tell us that “epistemic closure” is a disease that can infect the left as well as the right. But it also tells us to be cautious about interpreting survey questions literally. Even innocuous questions may mean different things to survey respondents. Until a question has been tested several times, we can’t be sure what respondents hear when they are asked that question.

*A Kos poll that set out to show that quite a few Republicans were extremist nuts suffers from a similar problem. I blogged it here.