
Methodology in the News

April 20, 2012
Posted by Jay Livingston

1. “Survey Research Can Save Your Life,” says Joshua Tucker at the Monkey Cage. He links to this NBC news story about a woman who went into diabetic shock while on the phone with a student pollster working for Marist.  He sensed something was wrong and told his supervisor.  She spoke to the woman and then called 911.  (The news story does not identify the student working the phone survey, only the supervisor.  Nor does it say whether the woman approved or disapproved of Mayor Bloomberg.)

2.  The New York Times this week reported on a RAND study that found no relation between obesity and “food deserts.”  The study used a large national sample; it’s undoubtedly comprehensive.  The problem is that if you are using a national sample of schools or supermarkets or stores or whatever, two units that fall into the same category on your coding sheet might look vastly different if you went there and looked at them up close.

Peter Moskos at Cop in the Hood took a closer look at the RAND study reported in the Times. RAND relied on a pre-existing classification of businesses: the prefix code 445 indicates a grocery store. Peter, an ethnographer at heart, has his doubts:
New York is filled with bodega “grocery stores” (probably coded 445120) that don't sell groceries. You think this matters? It does. And the study even acknowledges as much, before simply plowing on like it doesn't. A cigarette and lottery seller behind bullet-proof glass is not a purveyor of fine foodstuffs, and if your data doesn't make that distinction, you need to do more than list it as a “limitation.” You need to stop and start over.

3.  NPR’s “Morning Edition” had a story (here) on death penalty research, specifically on the question of deterrence.  A National Research Council panel headed by Daniel Nagin of Carnegie Mellon University reviewed all the studies and concluded that they were inconclusive, mostly for methodological reasons.  For example, most deterrence studies looked at the death penalty in isolation rather than comparing it with other specified punishments.

Another methodological problem not mentioned in the brief NPR story is that the number of executions may be too small to provide meaningful findings.  For that we’d need a much larger number of cases.  So this is one time when, at least if you are pro-life, an inadequate sample size isn’t all bad.
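
A quick simulation makes the small-numbers point concrete. This is a minimal sketch with made-up numbers, not anyone’s actual deterrence model: suppose each execution really did prevent a few homicides in a state averaging several hundred homicides a year, and ask whether the difference could be seen through ordinary year-to-year noise.

```python
import numpy as np

# Made-up numbers: suppose each execution really prevented 3 homicides
# in a state averaging 500 homicides a year.
rng = np.random.default_rng(0)
base, effect_per_execution = 500, 3

for executions in (0, 5):
    rate = base - effect_per_execution * executions
    homicides = rng.poisson(rate, 10_000)  # 10,000 simulated years
    print(executions, round(homicides.mean()), round(homicides.std()))

# The 15-homicide shift from 5 executions is smaller than a single
# year's random fluctuation (sd around 22), so comparisons based on a
# handful of executions are badly underpowered.
```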

The Wall Street Journal Or Your Lying Eyes

March 13, 2012
Posted by Jay Livingston

This graph tracks the share of income going to the top 1% in seven countries.  It’s from a paper by two Swedish economists, Jesper Roine and Daniel Waldenström (pdf here).

(Click on the graph for a larger view.)

The trend was towards greater equality up to 1980 – the share of the 1% was shrinking.  Since then, the 1% have increased their share of the income pie in all seven countries.  But the graph seems to show important differences, especially in recent decades.  Here is a cropped version of the graph showing the years 1980-2004.  I have added straight lines connecting the 1980 and 2004 points for Sweden and for the US.


Both changes are increases, but are they the same or are they different?  The answer is crucial.  The US and Sweden have different economic policies.  If the changes are no different between countries, then inequality is just one of those inevitable things that’s happening no matter what governments do.  But if the growth of inequality in the US is much greater than in Sweden, maybe government policy can in fact mitigate the trend towards inequality.

The Swedish 1% share went from a little under 5% to about 7.5%.  In the US, the 1% share increased from about 7% to 16%.* You might see those increases as very similar.

In fact, Allan Meltzer in the Wall Street Journal takes precisely that view.  He stretches out the graph to de-emphasize the vertical differences, and adds a title implying that all countries are “together” in this shift of income to the top 1%.


He adds this explanation:
As the . . . chart . . . shows, the share of income for the top 1% in these seven countries generally follows the same trend line. That means domestic policy can’t be the principal reason for the current spread between high earners and others. Since the 1980s, that spread has increased in nearly all seven countries. The U.S. and Sweden, countries with very different systems of redistribution, along with the U.K. and Canada show the largest increase in the share of income for the top 1%. [emphasis added]
If your pay went from $5 an hour to $7.50 an hour while your co-worker’s went from $7 to $16, you might think that your co-worker had gotten a substantially heftier raise.  But if so, that’s because you’re not the Wall Street Journal.  

Meltzer’s main point in the article is that we should not raise taxes on the very wealthy.  However, as Bruce Bartlett points out (here), if the rich are getting just as rich in high-tax countries like Sweden and the Netherlands as they are in low-tax countries like the US, we may as well raise taxes on them. They’ll be doing just as well as their Swedish and Dutch counterparts, and the nation will have more revenue to put towards Medicare, education, deficit-reduction, etc.

But Meltzer is wrong.  Sweden and the Netherlands are very different from the US.  As the graph shows, the income share of the 1% in the US is twice that of the 1% in Sweden and 3 times that of the 1% in the Netherlands.  And it has risen more rapidly.  Yet Meltzer claims that inequality trends are similar everywhere. 
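
The arithmetic is easy enough to check. Here is a minimal sketch using the approximate shares read off the Roine-Waldenström graph:

```python
# Approximate top-1% income shares read off the graph (1980 and 2004).
shares = {"Sweden": (5.0, 7.5), "US": (7.0, 16.0)}

for country, (start, end) in shares.items():
    print(f"{country}: +{end - start:.1f} points, grew {end / start:.2f}x")

# Sweden: +2.5 points, grew 1.50x
# US:     +9.0 points, grew 2.29x -- hardly "the same trend line"
```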

So who are you going to believe - the Wall Street Journal or your lying eyes?

 -------------- 
* Big hat tip to Andrew Perrin at Scatterplot.  Several economics blogs have also looked at the Meltzer article. 

UPDATE March 16: Gwen Sharp at Sociological Images posted this link to a database of income data from various countries.  You can create your own graphs of income shares.

Deep Change in the Deep South?

March 12, 2012
Posted by Jay Livingston

The polling news today is that very few Republicans in Alabama and Mississippi (14% and 12%, respectively) think that President Obama is a Christian.  Three times as many think he’s a Muslim. (A pdf of the entire survey is here.)

The poll also finds that only about one in four Republicans in those states believe in evolution.  Five times that many flatly reject evolution, with about 10% “not sure.” 


The results I found most curious were the opinions on interracial marriage.  In Alabama, 21% thought it should be illegal and 67% thought it should be legal; in Mississippi, 29% illegal, 54% legal.  None of the news stories I looked at noted that when the same pollsters (Public Policy Polling) asked the same question of Mississippi Republicans less than a year ago, the results were very different.  A plurality thought it should be illegal.  (My post on that poll is here.)


The margin of error is 4% (N = 600), so the 15-point swing supposedly reflects a real change.  But I’m skeptical.  What could account for such a large change if not sampling variation?  Did the GOP organize mass screenings of “The Help” and shame some of their number into allowing that maybe Loving v. Virginia wasn’t a mistake after all? Did the Heidi Klum - Seal breakup make it OK?   I can’t come up with even a dubiously speculative explanation.
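
For what it’s worth, the reported margin of error checks out. A minimal sketch, assuming a simple random sample:

```python
import math

# 95% margin of error for a proportion near 0.5 with N = 600,
# assuming simple random sampling.
N, p = 600, 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / N)
print(f"{moe:.1%}")  # 4.0%
```

The swing is nearly four times that, which is what makes it so hard to explain away as sampling variation.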

Psychology (!!!) or Sociology (zzz)

February 8, 2012
Posted by Jay Livingston

News media have to come up with provocative headlines and ledes, even when they’re reporting on academic papers.  And even when the reasonable reaction would be “Well, duh,” rather than a gasp in 72-point caps.  But if that’s the route you want to go, it usually helps to think psychologically rather than sociologically.

Here’s a headline from Forbes:
Facebook More Addictive Than Cigarettes, Study Says
And the Annenberg School website started its story with this:
Cigarettes and alcohol may not be the most addicting drugs on the market, according to a recent study.
A team from the University of Chicago's business school has suggested everyone's suspicion: social networking is addictive. So addictive that constantly using sites like Facebook and Twitter may be a harder vice to kick than smoking and drinking.  [emphasis added]
The study in question is “Getting Beeped With the Hand In The Cookie Jar: Sampling Desire, Conflict, and Self-Control in Everyday Life” by Wilhelm Hofmann, Kathleen D. Vohs, and Roy F. Baumeister, presented at a recent Society for Personality and Social Psychology conference.  They had subjects (N=205) wear beepers and report on their desires. 
I found out about it in a Society Pages research round-up (here).
A study of 205 adults found that their desires for sleep and sex were the strongest, but the desire for media and work were the hardest to resist. Surprisingly, participants expressed relatively weak levels of desire for tobacco and alcohol. This implies that it is more difficult to resist checking Facebook or e-mail than smoking a cigarette, taking a nap, or satiating sexual desires.
Of course it’s more difficult.   But the difficulty has almost nothing to do with the power of the internal desire and everything to do with the external situation, as The Society Pages (a sociology front organization) should well know.  In a classroom, a restaurant, a church, on the street, in an elevator – just about anywhere – you can quietly glance down at your smartphone and check your e-mail or Facebook page.  But to indulge in smoking, sleeping, and “satiating sexual desires,” you have to be willing to violate some serious norms and even laws.

It’s not about which desires are difficult to resist.  It’s about which desires are easy to indulge.  The study tells us not about the strength of psychological desires but the strength of social norms.  You can whip out your Blackberry, and nobody blinks.  But people might react more strongly if you whipped out, you know, your Marlboros. 

The more accurate headline might be
Checking Twitter at Starbucks OK, Having Sex There, Not So Much, Study Finds
But that headline is not going to get nearly as much attention.

Doing the Math

February 7, 2012
Posted by Jay Livingston

My students sometimes have trouble with math, even what I think is simple math.  Percentage differences, for example.  I blame it on the local schools.  Once I explain it, I think most of them catch on. 

Stephen Moore is not from New Jersey.  His high school diploma is from the highly regarded New Trier; he has an economics master’s degree from George Mason, and he writes frequently about economics.  A couple of days ago he wrote in the Wall Street Journal (here) about how much better it was to work for the government than for private employers.*
Federal workers on balance still receive much better benefits and pay packages than comparable private sector workers, the Congressional Budget Office reports. The report says that on average the compensation paid to federal workers is nearly 50% higher than in the private sector, though even that figure understates the premium paid to federal bureaucrats.

CBO found that federal salaries were slightly higher (2%) on average, while benefits -- including health insurance, retirement and paid vacation -- are much more generous (48% higher) than what same-skilled private sector workers get.
It’s not clear how Moore arrived at that 50% number.  Maybe he added the 2% and the 48%. 

Let’s assume that the ratio of salary to benefits is 3 to 1.  A worker in the private sector who makes $100,000 in salary would get $33,000 worth of benefits. The government worker would get 2% more in salary and 48% more in benefits.


            Private      Gov't
Salary      100,000    102,000
Benefits     33,000     49,500
Total       133,000    151,500

If total compensation for private-sector workers is $133,000, and if government workers were getting 50% more than that, their total compensation would be about $200,000.  But the percentage difference between the $151,500 and the $133,000 is nowhere near 50%.  The government worker pay package is 14% higher.
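
Here is the same arithmetic spelled out, using the 3-to-1 salary-to-benefits assumption from above:

```python
# Worked version of the table above: 3-to-1 salary-to-benefits ratio,
# government salaries 2% higher, benefits about 48% higher
# (rounded, as in the table, to $49,500).
private_salary, private_benefits = 100_000, 33_000
govt_salary = round(private_salary * 1.02)  # 102,000
govt_benefits = 49_500

private_total = private_salary + private_benefits  # 133,000
govt_total = govt_salary + govt_benefits           # 151,500

print(f"{govt_total / private_total - 1:.0%}")  # 14%, nowhere near 50%
```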

I think I could explain this so my students would understand it.  But then again, they don’t write columns for the Wall Street Journal.

----------------------------
* The WSJ gives the article the title “Still Club Fed.” The more accurate title would be “Government Jobs Are Good Jobs.” Of course, the latter takes the perspective of people looking for work, a viewpoint that doesn’t get much consideration at the WSJ.

Applied Probability

 February 6, 2012
Posted by Jay Livingston

Long-odds prop bets are sucker bets.  The odds that bookmakers offer are nowhere near the true odds.  But expected values matter only if you’re playing a large number of times, which is what the house is doing.  The bettor is betting just once, and 50-to-1 sounds like a lot.

Take yesterday’s game. The odds that the first points of the game would be the Giants scoring a safety were 50-1.  That’s what the bookies offered.

But what is the true probability?  In the previous NFL season, there were 2077 scores, not counting point-after-touchdown.  Here is the breakdown (I found the data here).

  • Touchdowns   1270
  • Field Goals    794
  • Safeties        13
The odds against the first score being a safety by either team are 2064 to 13, or about 160 to 1.  The odds against the first score being a safety by a specified team are double that.  Even if that specified team is the Giants and their defense is twice as good as the Patriots’ defense, the true odds are still at least 200 to 1.  The Las Vegas books were offering only 50-1, one-fourth of the correct odds.  So the expected return on a $1,000 bet is about $250 – a $750 loss.   What a ripoff.
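
To spell out the expected-value calculation (a sketch, taking 200-to-1 as the true odds against):

```python
# Expected return on a $1,000 bet at the bookies' 50-1 payout,
# if the true odds against are 200-1 (win probability 1/201).
stake = 1_000
p_win = 1 / 201
return_if_win = 50 * stake + stake  # winnings plus the returned stake

expected_return = p_win * return_if_win
print(f"${expected_return:.0f}")  # about $254 back on $1,000 -- a $750 loss
```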

Of course, not everyone feels duped.



Somewhere, someone is walking around with an “I ♥ Brady” t-shirt.

HT: My colleague Faye Glass, though she tells me this picture is all over the Internet.

Do You Hear What I Hear? Maybe Not.

December 18, 2011
Posted by Jay Livingston

As I’ve said before (here), the question the researcher asks is not always the question people hear.   That’s especially true when the question is about probabilities.

Here, for example, is the ending of a fictional vignette from a recent study in the Journal of Personality and Social Psychology.
Richard found a wallet on the sidewalk. Nobody was looking, so he took all of the money out of the wallet. He then threw the wallet in a trash can.
Is it more probable that Richard is
    a. a teacher
    b. a teacher and a rapist
Since the category “a teacher” necessarily includes teacher/rapists as well, the correct answer is “a.” But many people choose “b.”  The study used this “conjunction fallacy”* to probe for prejudices by switching out the rapist for various other categories.  Some subjects were asked about atheist/teachers, others about Muslim/teachers, and so on.  The finding:
A description of a criminally untrustworthy individual was seen as comparably representative of atheists and rapists but not representative of Christians, Muslims, Jewish people, feminists, or homosexuals.
Andrew Gelman, a usually mild-mannered reporter on things methodological, had a post on this with the subject line, “This one is so dumb it makes me want to barf.”
What’s really disturbing about the study is that many people thought it was “more probable” that the dude is a rapist than that he is a Christian! Talk about the base-rate fallacy.
Maybe it would settle Andrew’s stomach to remember that the question the researchers asked was almost certainly not the question people heard.   What the researchers pretend to be asking is this:
Of all thieves, which are there more of – teachers or rapist/teachers? 
After all, that is indeed the literal meaning.  But it’s pretty obvious that the question people are answering is something different:
Which group has a higher proportion of thieves among them – all teachers or the subset rapist/teachers?
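
A toy computation, with made-up numbers, keeps the two readings apart:

```python
# Made-up numbers: many teachers, very few rapist/teachers, but a much
# higher rate of wallet-stealing among the rapist/teachers.
teachers = 10_000
rapist_teachers = 10                 # a subset of the teachers
thief_rate_teachers = 0.02
thief_rate_rapist_teachers = 0.50

# The literal question: among wallet-thieves, are there more teachers
# or more rapist/teachers? "a" is necessarily correct.
print(teachers * thief_rate_teachers)                # 200 thieving teachers
print(rapist_teachers * thief_rate_rapist_teachers)  # 5 thieving rapist/teachers

# The question people hear: which group has the higher *rate* of thieves?
print(thief_rate_rapist_teachers > thief_rate_teachers)  # True -- hence "b"
```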
The researchers say they weren’t at all interested in demonstrating the conjunction fallacy.  They were just using it to uncover the distrust people feel towards atheists.  What they found was that when it comes to dishonesty, people (specifically, 75 female and 30 male undergrads at the University of British Columbia) rank atheists at about the same level as rapists.

But why resort to such roundabout tricks?  Why not ask the question directly?**
Who is more likely to steal a wallet when nobody is looking?
    a.  an atheist
    b. a rapist
    c.  neither; they are equally larcenous
Or:
On a seven-point scale, rank each of the following on how likely they would be to steal a wallet when nobody is looking:
  •     an atheist: 1   2   3   4   5   6   7
  •     a Christian: 1   2   3   4   5   6   7
  •     a rapist: 1   2   3   4   5   6   7
  •     etc. 
Instead, they asked questions that they knew would confuse nearly anyone not fluent in the language of statistics and probability.  I wonder what would happen if in their “who do you distrust” study they had included a category for experimental social psychologists.***

---------------
* Daniel Kahneman and Amos Tversky pretty much invented the conjunction fallacy thirty years ago with their “Linda problem,” and Kahneman discusses it in his recent book Thinking, Fast and Slow.  To get the right answer, you have to ignore intuition and make your thinking very, very slow.  Even then, people with no background in statistics and logic may still get it wrong.

** The authors’ presentation of their results is also designed to frustrate the ordinary reader. Each condition (rapist/teacher, atheist/teacher, homosexual/teacher, etc.) had 26 (or in one case 27) subjects.  The payoff was the number of errors in each group.  But the authors don’t say what that number was.  They give the chi-square, the odds ratios, the p’s and the b’s.  But they don’t tell us how many of the 26 subjects thought that the wallet snatcher was more likely to be an atheist/teacher or a Christian/teacher than to be merely a teacher.

*** The JPSP is one of the most respected journals in the field, maybe the most respected, influential, and frequently cited, as I pointed out here.

Surveys and Confirmation Bias

November 10, 2011
Posted by Jay Livingston

When he taught research methods as a grad student, Michael Schwartz gave his students this assignment: “Create a survey to show . . .” and he would tell them the conclusion he wanted the survey to support.  The next week, he’d give them the same assignment but with the desired conclusion the opposite of the first one.

A year and a half ago, I criticized (here) a much publicized study by Dan Klein and Zeljka Buturovic.  This survey, I said, “wasn’t designed to discover what people think. It was designed to prove a political point,” and that point was that liberal ideology blinds people to economic facts.

I was reminded of Mike’s assignment when I read Klein’s recent article at The Atlantic.  In a bit of academic fairness that’s probably all too rare, Klein went on to create a survey designed to see if conservative ideology has a similar effect.

Klein hoped that his conservative and libertarian allies would not so readily agree with politically friendly economic ideas that were nevertheless unsound. But the conservatives in the new survey turned out to be just as “stupid” as the liberals in the earlier survey.

Klein also expected some nasty nyah-nyahing from his liberal critics.  But no, “The reaction to the new paper was quieter than I expected.”   In fact, one of those liberal critics, Matt Yglesias, offered an observation that Klein used as his takeaway from the two surveys: “there’s a lot of confirmation bias out there.” 

Yes, but confirmation bias is not just something that affects people who respond to surveys.  As Mike’s assignment makes clear, we also need to be wary of confirmation bias on the part of those who create the surveys. There is the further problem I mentioned in my earlier post:  a one-shot survey is inherently ambiguous. We can’t be sure just what the respondents really hear when they are asked the question. 

My own takeaway, besides admiration for Klein’s honesty, is that when you design your research as a statement (proving some point), you don’t learn nearly as much as when you design it as a genuine question.

Lying With Statistics, and Really Lying With Statistics

November 4, 2011
Posted by Jay Livingston

“The #1 way to lie with statistics is . . . to just lie!” says Andrew Gelman, who a) knows much about statistics and b) is very good at spotting statistical dishonesty.

But maybe there’s a difference between lying with statistics and just plain making stuff up.

I’ve commented before about social psychologists’ affinity for Candid-Camera deception, but this Dutch practitioner goes way beyond that.  [The Telegraph has the story.]


The committee set up to investigate Prof Stapel said after its preliminary investigation it had found "several dozen publications in which use was made of fictitious data" . . .
[While Stapel’s] paper that linked thoughts of eating meat with anti-social behaviour was met with scorn and disbelief when it was publicised in August, it took several doctoral candidates Stapel was mentoring to unmask him. . . .

the three graduate students grew suspicious of the data Prof Stapel had supplied them without allowing them to participate in the actual research. When they ran statistical tests on it themselves they found it too perfect to be true and went to the university's dean with their suspicions.
What’s truly unsettling is to think that maybe he’s not the only one.

Abstract Preferences and Real Choices

November 3, 2011
Posted by Jay Livingston
Cross-posted at Sociological Images

We’ve known for a long time that surveys are often very bad at predicting behavior.  To take the example that Malcolm Gladwell uses, if you ask Americans what kind of coffee they want, most will say “a dark, rich, hearty roast.”  But what they actually prefer to drink is “milky, weak coffee.”

Something that sounds good in the abstract turns out to be different from the stuff you actually have to drink. 

Election polls usually have better luck since indicating your choice to a voting machine isn’t all that different from speaking that choice to a pollster.  But political preference polls, too, can run into that abstract-vs.-actual problem.

Real Clear Politics recently printed some poll results that were anything but real clear.  RCP looked at polls matching Obama against the various Republican candidates.  In every case, if you use the average results of the different polls, Obama comes out on top. But in polls that matched Obama against “a Republican,” the Republican wins.


The graph shows only the average of the polls.  RCP also provides the results of the various polls (CNN, Rasmussen, ABC, etc.).

Apparently, the best strategy for the GOP is to nominate a candidate but not tell anyone who it is.

If Your Survey Doesn’t Find What You Want It to Find . . .

October 19, 2011
Posted by Jay Livingston
(Cross-posted at Sociological Images)


. . . say that it did.

Doug Schoen is a pollster who wants the Democrats to distance themselves from the Occupy Wall Street protesters.   (Schoen is Mayor Bloomberg’s pollster.  He has also worked for Bill Clinton.)  In The Wall Street Journal yesterday (here), he reported on a survey done by a researcher at his firm.  She interviewed 200 of the protesters in Zuccotti Park.

Here is Schoen’s overall take:
What binds a large majority of the protesters together—regardless of age, socioeconomic status or education—is a deep commitment to left-wing policies: opposition to free-market capitalism and support for radical redistribution of wealth, intense regulation of the private sector, and protectionist policies to keep American jobs from going overseas.
I suppose it’s nitpicking to point out that the survey did not ask about SES or education.  Even if it had, breaking the 200 respondents down into these categories would give numbers too small for comparison. 

More to the point, that “large majority” opposed to free-market capitalism is 4% – eight of the people interviewed.  Another eight said they wanted “radical redistribution of wealth.”  So at most, 16 people, 8%, mentioned these goals.  (The full results of the survey are available here.)
What would you like to see the Occupy Wall Street movement achieve? {Open Ended}
  • 35% Influence the Democratic Party the way the Tea Party has influenced the GOP
  • 4% Radical redistribution of wealth
  • 5% Overhaul of tax system: replace income tax with flat tax
  • 7% Direct Democracy
  • 9% Engage & mobilize Progressives
  • 9% Promote a national conversation
  • 11% Break the two-party duopoly
  • 4% Dissolution of our representative democracy/capitalist system
  • 4% Single payer health care
  • 4% Pull out of Afghanistan immediately
  • 8% Not sure
Schoen’s distortion reminded me of this photo that I took on Saturday (it was our semi-annual Sociology New York Walk, and Zuccotti Park was our first stop).



The big poster in the foreground, the one that captures your attention, is radical militance – the waif from the “Les Mis” poster turned revolutionary.  But the specific points on the sign at the right are conventional liberal policies – the policies of the current Administration.*

There are other ways to misinterpret survey results.  Here is Schoen in the WSJ:
Sixty-five percent say that government has a moral responsibility to guarantee all citizens access to affordable health care, a college education, and a secure retirement—no matter the cost.
Here is the actual question:
Do you agree or disagree with the following statement: Government has a moral responsibility to guarantee healthcare, college education, and a secure retirement for all.
“No matter the cost” is not in the question.  As careful survey researchers know, even slight changes in wording can affect responses.  And including or omitting “no matter the cost” is hardly a slight change.

As evidence for the extreme radicalism of the protestors, Schoen says,
By a large margin (77%-22%), they support raising taxes on the wealthiest Americans,
Schoen doesn’t bother to mention that this isn’t much different from what you’d find outside Zuccotti Park.  Recent polls by Pew and Gallup find support for increased taxes on the wealthy ($250,000 or more) at 67%.  (Given the small sample size of the Zuccotti poll, the difference between 77% and 67% may be within the margin of error.)  Gallup also finds that majorities of two-thirds or more think that banks, large corporations, and lobbyists have too much power.
Thus Occupy Wall Street is a group of engaged progressives who are disillusioned with the capitalist system and have a distinct activist orientation. . . . .Half (52%) have participated in a political movement before.
That means that half the protesters were never politically active until Occupy Wall Street inspired them.

Reading Schoen, you get the impression that these are hard-core activists, old hands at political demonstrations, with Phil Ochs on their iPods and a well-thumbed copy of “The Manifesto” in their pockets.  In fact, the protesters were mostly young people with not much political experience who wanted to work within the system (i.e., with the Democratic party) to achieve fairly conventional goals, like keeping the financial industry from driving the economy into a ditch again.

And according to a recent Time survey, more than half of America views them favorably.

------------------------------
* There were other signs with other messages.  In fact, sign-making seemed to be one of the major activities in Zuccotti Park.  Some of them, like these, did not seem designed to get much play in the media.

Chart Art - FBI-Style

September 17, 2011
Posted by Jay Livingston
(Cross-posted at Sociological Images.)

The FBI is teaching its counter-terrorism agents that Islam is an inherently violent religion.  So, it teaches, are the followers of Islam.  Not just the extremists and radicals, but the mainstream.
There may not be a ‘radical’ threat as much as it is simply a normal assertion of the orthodox ideology. . . .The strategic themes animating these Islamic values are not fringe; they are main stream.
Wired  got hold of the training materials.  The Times has more today, including a section of the report that describes Muhammad as “a cult leader for a small inner circle.” (How small? Twelve perhaps?)  He also “employed torture to extract information.”*

An FBI PowerPoint slide has a graph with the data to support its assertions.


The graph clearly shows that followers of the Torah and the Bible have gotten progressively less violent since 1400 BC, while followers of the Koran flatline starting around 620 AD and remain just as violent as ever.

Unfortunately, the creators of the chart do not say how they operationalized “violent” and “non-violent.”  But since the title of the presentation is “Militancy Considerations,” it might have something to do with military, para-military, and quasi-military violence.  When it comes to quantities of death, destruction, and injury, these overwhelm other types of violence. 

I must confess that my knowledge of history is sadly wanting, and I was educated before liberals imposed all this global, multicultural nonsense on schools, so I know nothing about wars that might have happened among Muslims during the period in question.  What I was taught was that the really big wars, the important wars, the wars that killed the most people, were mostly affairs among followers of the Bible.  Some of these were so big that they were called “World Wars” even though followers of the Qur’an had very low levels of participation.  Some of these wars lasted quite a long time – thirty years, a hundred years.  I was also taught that in the important violence that did involve Muslims – i.e., the Crusades** – it was the followers of the Bible who were doing most of the killing. 

Perhaps those with more knowledge of Muslim militant violence can provide the data.


-----------------------------

* To be fair, the FBI seems to have been innocent of any of the torture that took place during the Bush years.  That was all done by the military and the CIA – and by the non-Christian governments to which the Bush administration outsourced the work. 

** Followers of the Bible crusading to “take back our city” from a Muslim-led regime may have familiar overtones.

Home Team Advantage

September 14, 2011
Posted by Jay Livingston
If you’re looking for an example of the Lake Wobegon effect (“all the children are above average”), you can’t do much better than this one.  It’s almost literal.


The survey didn’t ask about the children.  It asked about schools – schools in general and your local school.  As with “Congress / my Congressional rep,” people rated America’s schools as only so-so.  Barely a fifth of respondents gave America’s schools an above-average grade.  But when people rated their own local schools, 46% gave B’s and A’s.  The effect was even stronger among the affluent (upper tenth of the income distribution for their state) and among teachers.

The findings about the affluent are no surprise, nor are their perceptions skewed.  Schools in wealthy neighborhoods really are above average.  What’s surprising is that only 47% of the wealthy gave their local schools an above-average grade. 

The teachers, though, are presumably a representative sample, yet 64% of their schools are above average.  I can think of two explanations for the generosity of the grades they assign their own schools:
  • Self-enhancement.  Teachers have a personal stake in the rating of schools generally.  They have an even larger stake in the rating of their own school.
  • Familiarity.  We feel more comfortable with the familiar.  (On crime, people feel safer in their own neighborhoods, even the people who live in high-crime neighborhoods.)  So we rate familiar things more charitably.  For teachers, schools are something they’re very familiar with, especially their local schools.
[Research by Howell, Peterson, and West reported here.
HT: Jonathan Robinson at The Monkey Cage]

Bought Sex?

July 20, 2011
Posted by Jay Livingston

Did you buy sex last year?

You probably said no, even if you’re a man. But wait. First look at “The John Next Door,” an article currently up at Newsweek (subhead: “The men who buy sex are your neighbors and colleagues”). It features a study by Melissa Farley called “Comparing Sex Buyers With Men Who Don’t Buy Sex.”
No one even knows what proportion of the male population does it; estimates range from 16 percent to 80 percent.
Actually, a considerably lower estimate comes from the GSS.
PAIDSEX (Had sex for pay last year): “If you had other partners, please indicate all categories that apply to them. d. Person you paid or paid you for sex.”
Here are the results since the GSS started asking this question.
(Click on the graph for a larger view.)
Not 16-80%, but somewhere around 5%.

Not to get too Clintonian, but it seems to depend on what the meaning of “sex” is. The GSS respondents probably thought that paying for sex meant paying someone to have sex. Farley’s definition was somewhat broader.
Buying sex is so pervasive that Farley’s team had a shockingly difficult time locating men who really don’t do it. The use of pornography, phone sex, lap dances, and other services has become so widespread that the researchers were forced to loosen their definition in order to assemble a 100-person control group.
So if you bought a copy of Playboy, you paid for sex. And if you looked at it twice last month, you are disqualified from the control of “men who don’t buy sex.”
“We had big, big trouble finding nonusers,” Farley says. “We finally had to settle on a definition of non-sex-buyers as men who have not been to a strip club more than two times in the past year, have not purchased a lap dance, have not used pornography more than one time in the last month, and have not purchased phone sex or the services of a sex worker, escort, erotic masseuse, or prostitute.”
I don’t have Farley’s data. If the control group of nonusers was 100, I assume that the user group n was the same – not really large enough for estimating the prevalence of the different forms of buying sex. How many had paid a prostitute, how many had looked at porn twice in a month? Some people probably think that there’s a meaningful distinction between those two. The implication of much of the Newsweek article is that they are all “sex buyers” and that they therefore share the same ugly attitudes towards women.

SAT, GPA, and Bias

July 8, 2011
Posted by Jay Livingston

(Cross-posted at Sociological Images)


Is the SAT biased? If so, against whom is it biased?

It has long been part of the leftist creed that the SAT and other standardized tests are biased against the culturally disadvantaged - racial minorities, the poor, et al. Those kids may be just as academically capable as more privileged kids, but the tests don’t show it.

But maybe SATs are biased against privileged kids. That’s the implication in a blog post by Greg Mankiw. Mankiw is not a liberal. In the Bush-Cheney first term, he was the head of the Council of Economic Advisors. He is also a Harvard professor and the author of a best-selling economics textbook. Back in May he had a blog post called “A Regression I’d Like to See.” If tests are biased in the way liberals say they are, says Mankiw, let’s regress GPA on SAT scores and family income. The correlation with family income should be negative.
a lower-income student should do better in college, holding reported SAT score constant, because he managed to get that SAT score without all those extra benefits.
In fact, the regression had been done, and Mankiw added this update:
Todd Stinebrickner, an economist at The University of Western Ontario, emails me this comment: “Regardless, within the income groups we examine, students from higher income backgrounds have significantly higher grades throughout college conditional on college entrance exam . . . scores. [Mankiw added the boldface for emphasis.]

What this means is that if you are a college admissions officer trying to identify the students who will do best in college, as measured by grades, you would give positive rather than negative weight on family income.
Not to give positive weight to income, therefore, is bias against those with higher incomes.

To see what Mankiw means, look at some made-up data on two groups. To keep things civil, I’m just going to call them Group One and Group Two. (You might imagine them as White and Black, Richer and Poorer, or whatever your preferred categories of injustice are. I’m sticking with One and Two.) Following Mankiw, we regress GPA on SAT scores. That is, we use SAT scores as our predictor and we measure how well they predict students’ performance in college (their GPA).

(Click on the image for a larger, clearer view)

In both groups, the higher the SAT, the higher the GPA. As the regression line shows, the test is a good predictor of performance. But you can also see that the Group One students are higher on both. If we put the two groups together we get this.

Just as Mankiw says, if you’re a college admissions director and you want the students who do best, at any level of SAT score, you should give preference to Group One. For example, look at all the students who scored 500 on the SAT (i.e., holding SAT constant at 500). The Group One kids got better grades than did the Group Two kids. So just using the SATs, without taking the Group factor (e.g., income) into account, biases things against Group One. The Group One students can complain: “the SAT underestimates our abilities, so the SAT is biased against us.”

Case closed? Not yet. I hesitate to go up against an academic superstar like Mankiw, and I don’t want to insult him (I’ll leave that to Paul Krugman). But there are two ways to regress the data. So there’s another regression, maybe one that Mankiw does not want to see.

What happens if we take the same data and regress SAT scores on GPA? Now GPA is our predictor variable. In effect, we’re using it as an indicator of how smart the student really is, the same way we used the SAT in the first graph.
Let’s hold GPA constant at 3.0. The Group One students at that GPA have, on average, higher SAT scores. So the Group Two students can legitimately say, “We’re just as smart as the Group One kids; we have the same GPA. But the SAT gives the impression that we’re less smart. So the SAT is biased against us.”

So where are we?
  • The test makers say that it’s a good test - it predicts who will do well in college.
  • The Group One students say the test is biased against them.
  • The Group Two students say the test is biased against them.
And they all are right.
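
The made-up data aren’t reproduced here, but it’s easy to generate data with the same structure and run both regressions. A minimal sketch, with all parameters invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Group One sits higher on underlying ability; SAT and GPA are both
# noisy measures of that ability. All numbers are invented.
ability = {1: rng.normal(60, 10, n), 2: rng.normal(40, 10, n)}
sat = {g: 400 + 4 * a + rng.normal(0, 40, n) for g, a in ability.items()}
gpa = {g: 1.0 + 0.04 * a + rng.normal(0, 0.4, n) for g, a in ability.items()}

def predict(x, y, x0):
    """Least-squares prediction of y at x = x0."""
    slope, intercept = np.polyfit(x, y, 1)
    return intercept + slope * x0

# Regression 1: GPA on SAT, holding SAT constant at 500.
for g in (1, 2):
    print(f"Group {g}, predicted GPA at SAT 500: {predict(sat[g], gpa[g], 500):.2f}")

# Regression 2: SAT on GPA, holding GPA constant at 3.0.
for g in (1, 2):
    print(f"Group {g}, predicted SAT at GPA 3.0: {predict(gpa[g], sat[g], 3.0):.0f}")
```

At the same SAT score, Group One shows the higher predicted GPA; at the same GPA, Group One shows the higher predicted SAT. Each group has a legitimate grievance, and both regressions are correct.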


Huge hat tip to my brother, S.A. Livingston. He told me of this idea (it dates back to a paper from the 1970s by Nancy Cole) and provided the made-up data to illustrate it. He also suggested these lines from Gilbert and Sullivan:
And you'll allow, as I expect
That they are right to so object
And I am right, and you are right
And everything is quite correct.




Overcoming Social Desirability Bias – He’s Got a Little List

April 19, 2011
Posted by Jay Livingston

As some day it may happen that a survey must be done, you need a little list, a quick five-item list – for sex or race or crime or things quite non-PC but fun, where pollsters all have missed, despite what they insist. There’s the guy who says he’d vote for blacks if they are qualified; he’d vote for women too, but are we sure he hasn’t lied? “How many partners have you had?” Or “Did you ever stray?” With things like this you can’t always believe what people say. You tell them it’s anonymous, but still their doubts persist, and so your methodology can use this little twist.

It’s called the List Experiment (also the Unmatched Count Technique). It’s been around for a few years, though I confess I wasn’t aware of it until I came across this recent Monkey Cage post by John Sides that linked to another post from the presidential year of 2008. Most surveys then were finding that fewer than 10% of the electorate were unwilling to vote for a woman (Hillary was not mentioned by name). But skeptical researchers (Matthew Streb et al., here gated), instead of asking the question directly, split the sample in half. They asked one half

How many of the following things make you angry or upset?
  • The way gasoline prices keep going up.
  • Professional athletes getting million dollar-plus salaries.
  • Requiring seat belts to be used when driving.
  • Large corporations polluting the environment.
Respondents were told not to say which ones pissed them off, merely how many. Researchers calculated the average number of items people found irritating. The second half got the same list but with one addition:
  • A woman serving as president.
If the other surveys are correct, adding this one item should increase the mean count by no more than 0.10 items, since the difference between the two halves’ means estimates the proportion of people angered by the added item. As it turned out, 26% of the electorate would be upset or angry about a woman president, considerably more than the 6% in the GSS sample who said they wouldn’t vote for a woman.
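
Here is that difference-in-means estimate in miniature, with simulated responses (the 26% is plugged in; everything else is made up):

```python
import numpy as np

# Sketch of the list-experiment estimator on simulated data.
rng = np.random.default_rng(0)
n = 5_000

control = rng.integers(0, 5, n)    # anger count over the 4 baseline items
treatment = rng.integers(0, 5, n)  # same 4 items...
treatment = treatment + (rng.random(n) < 0.26)  # ...plus the sensitive item

# The mean difference estimates the share angered by the added item --
# no individual ever reveals which items angered them.
print(f"{treatment.mean() - control.mean():.1%}")  # roughly 26%, plus noise
```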

The technique reminds me of a mentalist act: “Look at this list, sir, and while my back is turned tell me how many of those things you have done. Don’t tell me which ones, just the total number. Now I want you to concentrate very hard . . . .” But I can certainly see its usefulness as a way to check for social desirability bias.

What’s Wrong With (Percentages in) Mississippi

April 10, 2011
Posted by Jay Livingston

A Public Policy Polling survey asked Mississippi Republicans about their opinion on interracial marriage. It also asked how they felt about various politicians. The report concludes, “Tells you something about the kinds of folks who like each of those candidates.”

Not quite.

What’s been getting the most attention is the finding that Mississippi Republicans think interracial marriage should be illegal. Not all Mississippi Republicans. Just 46% of them (40% think it should be legal).* Does their position on intermarriage tell us anything about who they might like as a candidate? Does a Klansman wear a sheet?

(Click on the chart for a larger view.)

It’s no surprise that Sarah Palin is much preferred to Romney. But as PPP points out, racial attitudes figure differently depending on the candidate. When you go from racists to nonracists,** Palin’s favorable/unfavorable ratio takes a hit. But Romney’s gets a boost.

But does this tell us something about “the kinds of folks who like each of those candidates”? The trouble is that the statement percentages on the dependent variable, implicitly comparing Romney supporters with Palin supporters. But the percentages actually given by PPP compare racists with nonracists.** The statement implies that candidate preferences tell us about racial attitudes. But what the data show is that racial attitudes tell us about candidate preferences. The two are not the same. From the data PPP gives, we don’t actually know what percent of Palin supporters favor laws against intermarriage. Ditto for Romney supporters.
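
A toy table shows why the direction of percentaging matters. The counts here are made up; PPP published nothing like them:

```python
# Made-up counts by racial attitude and candidate preference.
racist_palin, racist_romney = 180, 60
nonracist_palin, nonracist_romney = 120, 140

# What PPP reports: preference *within* attitude groups.
print(racist_palin / (racist_palin + racist_romney))    # P(Palin | racist) = 0.75

# What the "kinds of folks" claim needs: attitudes *within* supporter groups.
print(racist_palin / (racist_palin + nonracist_palin))  # P(racist | Palin) = 0.60
```

Same table, two different percentages, and only the second describes “the kinds of folks who like” a candidate.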

In any case, neither Palin nor Romney is the top choice of Mississippi Republicans (especially the racists), who may be thinking racially but are acting locally and going with their own governor first and the former governor of neighboring Arkansas second.


* The sample was only 400. But the results aren’t too different from what the GSS has found. The most recent GSS I could find that included RACMAR was from 2002. In the “East South Central” region, the percent favoring laws against interracial marriage was 36%. So among Republicans, it might have been ten points higher.

**I realize that neither of these terms “racist” and “nonracist” is necessarily accurate. I use them as shorthand for, respectively, “people who think interracial marriage should be illegal” and “people who think interracial marriage should be legal.”

Graphing Ideas about Marriage (Me vs. USA Today)

February 3, 2011
Posted by Jay Livingston

As someone with the visual aptitude of gravel, I shouldn’t be edging into Flâneuse territory. But when I saw this graph in USA Today this morning, I was frustrated.

(Click on the image for a larger view.)
Responses, by age group, when asked if they want to marry:
SOURCES: Match.com/MarketTools survey of 5,199 men and women who either have never been married or are widowed, divorced or separated.

I found it hard to make comparisons from one age group to another. In the online edition, the layout was better – all in a row – and the addition of even a single color helped. (Odd that USA Today, the newspaper that led the way in using color, gave its print readers the graph in only black-and-white, or more accurately gray-and-gray.)

(Click on the image for a larger view.)

I thought I’d try my own hand with my rudimentary knowledge of Excel.

(Click on the image for a larger view.)

What do you think?

The Law of Ungraspably Large Numbers

December 23, 2010
Posted by Jay Livingston

Been here long?

Gallup regularly asks this question:
Which of the following statements comes closest to your views on the origin and development of human beings --
  1. Human beings have developed over millions of years from less advanced forms of life, but God guided this process,
  2. Human beings have developed over millions of years from less advanced forms of life, but God had no part in this process
  3. God created human beings pretty much in their present form at one time within the last 10,000 years or so?
Here are the results:

(Click on the graph for a larger view.)

For better or worse, Godless evolutionism has been rising steadily if slowly for the past decade – 16%, and counting. And “only” 40% of us Americans, down from 47%, believe that humans are johnnies-come-lately. Scientific fact is making some headway. But a lot of people still believe in something that’s just not true.

Andrew Gelman explains it in psycho-economic terms. The “belief in young-earth creationism . . . is costless.” What you hear from religion contradicts what you hear from science class in school. The cost (“discomfort” in Andrew’s terms) of rejecting one belief outweighs the cost of rejecting the other. That’s probably true, and it helps explain the popularity of the have-it-both-ways choice – evolution guided by God.

I think there’s something else – the law of ungraspably large numbers. For example, I know how far it is to California (3000 miles), and I even think I know how far it is to the moon (240,000 miles – and I’m not looking this up on the Internet; if I’m wrong, I’ll let my ignorance stand since that’s partly the point I’m trying to make). But once you get past that – how far is it to the sun or to Jupiter or to Betelgeuse? – you could tell me any number up in the millions or more – a number so wrong as to make any astronomer chuckle – and I’d think it sounded reasonable.

Those big numbers and the differences between them are meaningful only to people who are familiar with them. They are so large that they lie outside the realm of everyday human experience. The same holds for distances in time. Ten thousand years – that seems like a long, long time ago, long enough for any species to have been around. But “millions of years” is like those millions or hundreds of millions of miles – ungraspably large.

Since the number is outside the realm of human experience, it doesn’t make sense that humans or anything resembling them or even this familiar planet could have existed that long ago.

I suspect that it’s this same law of ungraspably large numbers that allows politicians to posture as doing something about “the huge deficit” by attacking a wasteful government program that costs $3 million. If I spend a few thousand dollars for something, that’s a big ticket item, so three million sounds like a lot. Millions and billions both translate to the same thing: “a lot of money” just as distances in millions of miles and billions of miles are both “a long way away.” The difference between them is hard to grasp.*

*How many such programs would the government have to cancel to cover the revenue losses we just signed on for by extending the tax cuts on incomes over $250,000? And if you think those tax cuts for the rich will pay for themselves or increase revenue, there’s a lovely piece of 1883 pontine architecture I’d like to show you for possible purchase.

Methods Fraud - Right and Left

June 30, 2010
Posted by Jay Livingston

Two links:

1. Fox News used a really, really deceptive graph to make job-loss numbers look even worse than they really are. Media Matters has the story.

2. Research 2000, a polling firm, may have been faking its data. Kos, who has been relying on their polls, has a long post detailing the tell-tale signs – things people would do if they were trying to make their polls appear to follow random sampling. (Makes me feel a bit more confident of my own criticism of a Research 2000 poll.)
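
One generic version of such a check (not necessarily the tests Kos ran) looks at variance: honest polls of a few hundred respondents should bounce around by roughly binomial sampling error, and a series that sits suspiciously still is a red flag. A minimal sketch with made-up numbers:

```python
import numpy as np

# Compare the week-to-week variability of simulated honest polls with a
# suspiciously smooth, fabricated-looking series. All numbers made up.
rng = np.random.default_rng(0)
n, p, weeks = 600, 0.52, 30

honest = rng.binomial(n, p, weeks) / n        # real sampling noise
too_smooth = p + rng.normal(0, 0.002, weeks)  # barely moves at all

print(f"expected sd: {np.sqrt(p * (1 - p) / n):.3f}")  # about 0.020
print(f"honest sd:   {honest.std():.3f}")              # close to expected
print(f"smooth sd:   {too_smooth.std():.3f}")          # far too small
```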

UPDATE, July 1: I had thought that the Kos/Research 2000 story was just for those interested in technical matters (sampling, data distributions) and maybe political blogs. But both the Times and WaPo, and perhaps other newspapers, have stories about it today.