Showing posts with label Methods. Show all posts

Abstract Preferences and Real Choices

November 3, 2011
Posted by Jay Livingston
Cross-posted at Sociological Images

We’ve known for a long time that surveys are often very bad at predicting behavior.  To take the example that Malcolm Gladwell uses, if you ask Americans what kind of coffee they want, most will say “a dark, rich, hearty roast.”  But what they actually prefer to drink is “milky, weak coffee.”

Something that sounds good in the abstract turns out to be different from the stuff you actually have to drink. 

Election polls usually have better luck, since indicating your choice on a voting machine isn’t all that different from speaking that choice to a pollster.  But political preference polls, too, can run into the abstract-vs.-actual problem.

Real Clear Politics recently printed some poll results that were anything but real clear.  RCP looked at polls matching Obama against the various Republican candidates.  In every case, if you use the average results of the different polls, Obama comes out on top. But in polls that matched Obama against “a Republican,” the Republican wins.


The graph shows only the average of the polls.  RCP also provides the results of the various polls (CNN, Rasmussen, ABC, etc.).

Apparently, the best strategy for the GOP is to nominate a candidate but not tell anyone who it is.

If Your Survey Doesn’t Find What You Want It to Find . . .

October 19, 2011
Posted by Jay Livingston
(Cross-posted at Sociological Images)


. . . say that it did.

Doug Schoen is a pollster who wants the Democrats to distance themselves from the Occupy Wall Street protesters.  (Schoen is Mayor Bloomberg’s pollster.  He has also worked for Bill Clinton.)  In The Wall Street Journal yesterday (here), he reported on a survey done by a researcher at his firm.  She interviewed 200 of the protesters in Zuccotti Park.

Here is Schoen’s overall take:
What binds a large majority of the protesters together—regardless of age, socioeconomic status or education—is a deep commitment to left-wing policies: opposition to free-market capitalism and support for radical redistribution of wealth, intense regulation of the private sector, and protectionist policies to keep American jobs from going overseas.
I suppose it’s nitpicking to point out that the survey did not ask about SES or education.  Even if it had, breaking the 200 respondents down into these categories would give numbers too small for comparison. 

More to the point, that “large majority” opposed to free-market capitalism is 4% – eight of the people interviewed.  Another eight said they wanted “radical redistribution of wealth.”  So at most, 16 people, 8%, mentioned these goals.  (The full results of the survey are available here.)
What would you like to see the Occupy Wall Street movement achieve? {Open Ended}
  • 35% Influence the Democratic Party the way the Tea Party has influenced the GOP
  • 4% Radical redistribution of wealth
  • 5% Overhaul of tax system: replace income tax with flat tax
  • 7% Direct Democracy
  • 9% Engage & mobilize Progressives
  • 9% Promote a national conversation
  • 11% Break the two-party duopoly
  • 4% Dissolution of our representative democracy/capitalist system
  • 4% Single payer health care
  • 4% Pull out of Afghanistan immediately
  • 8% Not sure
Schoen’s distortion reminded me of this photo that I took on Saturday (it was our semi-annual Sociology New York Walk, and Zuccotti Park was our first stop).



The big poster in the foreground, the one that captures your attention, is radical militance – the waif from the “Les Mis” poster turned revolutionary.  But the specific points on the sign at the right are conventional liberal policies – the policies of the current Administration.*

There are other ways to misinterpret survey results.  Here is Schoen in the WSJ:
Sixty-five percent say that government has a moral responsibility to guarantee all citizens access to affordable health care, a college education, and a secure retirement—no matter the cost.
Here is the actual question:
Do you agree or disagree with the following statement: Government has a moral responsibility to guarantee healthcare, college education, and a secure retirement for all.
“No matter the cost” is not in the question.  As careful survey researchers know, even slight changes in wording can affect responses.  And including or omitting “no matter the cost” is hardly a slight change.

As evidence for the extreme radicalism of the protestors, Schoen says,
By a large margin (77%-22%), they support raising taxes on the wealthiest Americans,
Schoen doesn’t bother to mention that this isn’t much different from what you’d find outside Zuccotti Park.  Recent polls by Pew and Gallup find support for increased taxes on the wealthy ($250,000 or more) at 67%.  (Given the small sample size of the Zuccotti poll, 67% may be within the margin of error.)  Gallup also finds that majorities of two-thirds or more think that banks, large corporations, and lobbyists have too much power. 
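The margin-of-error aside can be checked with a quick calculation.  This is just a sketch using the standard formula for a 95% confidence interval around a sample proportion; the 77% figure and the 200-person sample are from the post.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Zuccotti sample: 77% support raising taxes on the wealthy, n = 200
moe = margin_of_error(0.77, 200)
print(f"77% +/- {moe:.1%}")  # about +/- 5.8 points
```

Small samples widen the interval quickly: quadrupling n only halves the margin.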
Thus Occupy Wall Street is a group of engaged progressives who are disillusioned with the capitalist system and have a distinct activist orientation. . . . .Half (52%) have participated in a political movement before.
That means that half the protesters were never politically active until Occupy Wall Street inspired them.

Reading Schoen, you get the impression that these are hard-core activists, old hands at political demonstrations, with Phil Ochs on their iPods and a well-thumbed copy of “The Manifesto” in their pockets.  In fact, the protesters were mostly young people with not much political experience who wanted to work within the system (i.e., with the Democratic party) to achieve fairly conventional goals, like keeping the financial industry from driving the economy into a ditch again.

And according to a recent Time survey, more than half of America views them favorably.

------------------------------
* There were other signs with other messages.  In fact, sign-making seemed to be one of the major activities in Zuccotti Park.  Some of them, like these, did not seem designed to get much play in the media. 

Chart Art - FBI-Style

September 17, 2011
Posted by Jay Livingston
(Cross-posted at Sociological Images.)

The FBI is teaching its counter-terrorism agents that Islam is an inherently violent religion.  So are the followers of Islam.  Not just the extremists and radicals, but the mainstream. 
There may not be a ‘radical’ threat as much as it is simply a normal assertion of the orthodox ideology. . . .The strategic themes animating these Islamic values are not fringe; they are main stream.
Wired  got hold of the training materials.  The Times has more today, including a section of the report that describes Muhammad as “a cult leader for a small inner circle.” (How small? Twelve perhaps?)  He also “employed torture to extract information.”*

An FBI PowerPoint slide has a graph with the data to support its assertions.


The graph clearly shows that followers of the Torah and the Bible have gotten progressively less violent since 1400 BC, while followers of the Koran flatline starting around 620 AD and remain just as violent as ever.

Unfortunately, the creators of the chart do not say how they operationalized “violent” and “non-violent.”  But since the title of the presentation is “Militancy Considerations,” it might have something to do with military, para-military, and quasi-military violence.  When it comes to quantities of death, destruction, and injury, these overwhelm other types of violence. 

I must confess that my knowledge of history is sadly wanting, and I was educated before liberals imposed all this global, multicultural nonsense on schools, so I know nothing about wars that might have happened among Muslims during the period in question.  What I was taught was that the really big wars, the important wars, the wars that killed the most people, were mostly affairs among followers of the Bible.  Some of these were so big that they were called “World Wars” even though followers of the Qur’an had very low levels of participation.  Some of these wars lasted quite a long time – thirty years, a hundred years.  I was also taught that in the important violence that did involve Muslims – i.e., the Crusades** – it was the followers of the Bible who were doing most of the killing. 

Perhaps those with more knowledge of Muslim militant violence can provide the data.


-----------------------------

* To be fair, the FBI seems to have been innocent of any of the torture that took place during the Bush years.  That was all done by the military and the CIA – and by the non-Christian governments to which the Bush administration outsourced the work. 

** Followers of the Bible crusading to “take back our city” from a Muslim-led regime may have familiar overtones.

Home Team Advantage

September 14, 2011
Posted by Jay Livingston
If you’re looking for an example of the Lake Wobegon effect (“all the children are above average”), you can’t do much better than this one.  It’s almost literal.


The survey didn’t ask about the children.  It asked about schools – schools in general and your local school.  As with “Congress / my Congressional rep,” people rated America’s schools as only so-so.  Barely a fifth of respondents gave America’s schools an above-average grade.  But when people rated their own local schools, 46% gave B’s and A’s.  The effect was even stronger among the affluent (upper tenth of the income distribution for their state) and among teachers.

The findings about the affluent are no surprise, nor are their perceptions skewed.  Schools in wealthy neighborhoods really are above average.  What’s surprising is that only 47% of the wealthy gave their local schools an above-average grade. 

The teachers, though, are presumably a representative sample, yet 64% of their schools are above average.  I can think of two explanations for the generosity of the grades they assign their own schools:
  • Self-enhancement.  Teachers have a personal stake in the rating of schools generally.  They have an even larger stake in the rating of their own school.
  • Familiarity.  We feel more comfortable with the familiar.  (On crime, people feel safer in their own neighborhoods, even the people who live in high-crime neighborhoods.)  So we rate familiar things more charitably.  For teachers, schools are something they’re very familiar with, especially their local schools.
[Research by Howell, Peterson, and West reported here.
HT: Jonathan Robinson at The Monkey Cage]

Bought Sex?

July 20, 2011
Posted by Jay Livingston

Did you buy sex last year?

You probably said no, even if you’re a man. But wait. First look at “The John Next Door,” an article currently up at Newsweek (subhead: “The men who buy sex are your neighbors and colleagues”). It features a study by Melissa Farley called “Comparing Sex Buyers With Men Who Don’t Buy Sex.”
No one even knows what proportion of the male population does it; estimates range from 16 percent to 80 percent.
Actually, a considerably lower estimate comes from the GSS.
PAIDSEX – Had sex for pay last year: “If you had other partners, please indicate all categories that apply to them. d. Person you paid or paid you for sex.”
Here are the results since the GSS started asking this question.
(Click on the graph for a larger view.)
Not 16-80%, but somewhere around 5%.

Not to get too Clintonian, but it seems to depend on what the meaning of “sex” is. The GSS respondents probably thought that paying for sex meant paying someone to have sex. Farley’s definition was somewhat broader.
Buying sex is so pervasive that Farley’s team had a shockingly difficult time locating men who really don’t do it. The use of pornography, phone sex, lap dances, and other services has become so widespread that the researchers were forced to loosen their definition in order to assemble a 100-person control group.
So if you bought a copy of Playboy, you paid for sex. And if you looked at it twice last month, you are disqualified from the control group of “men who don’t buy sex.”
“We had big, big trouble finding nonusers,” Farley says. “We finally had to settle on a definition of non-sex-buyers as men who have not been to a strip club more than two times in the past year, have not purchased a lap dance, have not used pornography more than one time in the last month, and have not purchased phone sex or the services of a sex worker, escort, erotic masseuse, or prostitute.”
I don’t have Farley’s data. If the control group of nonusers was 100, I assume that the user group n was the same – not really large enough for estimating the prevalence of the different forms of buying sex. How many had paid a prostitute, how many had looked at porn twice in a month? Some people probably think that there’s a meaningful distinction between those two. The implication of much of the Newsweek article is that they are all “sex buyers” and that they therefore share the same ugly attitudes towards women.

SAT, GPA, and Bias

July 8, 2011
Posted by Jay Livingston

(Cross-posted at Sociological Images)


Is the SAT biased? If so, against whom is it biased?

It has long been part of the leftist creed that the SAT and other standardized tests are biased against the culturally disadvantaged – racial minorities, the poor, et al. Those kids may be just as academically capable as more privileged kids, but the tests don’t show it.

But maybe SATs are biased against privileged kids. That’s the implication in a blog post by Greg Mankiw. Mankiw is not a liberal. In the Bush-Cheney first term, he was the head of the Council of Economic Advisers. He is also a Harvard professor and the author of a best-selling economics textbook. Back in May he had a blog post called “A Regression I’d Like to See.” If tests are biased in the way liberals say they are, says Mankiw, let’s regress GPA on SAT scores and family income. The correlation with family income should be negative.
a lower-income student should do better in college, holding reported SAT score constant, because he managed to get that SAT score without all those extra benefits.
In fact, the regression had been done, and Mankiw added this update:
Todd Stinebrickner, an economist at The University of Western Ontario, emails me this comment: “Regardless, within the income groups we examine, students from higher income backgrounds have significantly higher grades throughout college conditional on college entrance exam . . . scores.” [Mankiw added the boldface for emphasis.]

What this means is that if you are a college admissions officer trying to identify the students who will do best in college, as measured by grades, you would give positive rather than negative weight to family income.
Not to give positive weight to income, therefore, is bias against those with higher incomes.

To see what Mankiw means, look at some made-up data on two groups. To keep things civil, I’m just going to call them Group One and Group Two. (You might imagine them as White and Black, Richer and Poorer, or whatever your preferred categories of injustice are. I’m sticking with One and Two.) Following Mankiw, we regress GPA on SAT scores. That is, we use SAT scores as our predictor and we measure how well they predict students’ performance in college (their GPA).

(Click on the image for a larger, clearer view)

In both groups, the higher the SAT, the higher the GPA. As the regression line shows, the test is a good predictor of performance. But you can also see that the Group One students are higher on both. If we put the two groups together we get this.

Just as Mankiw says, if you’re a college admissions director and you want the students who do best, at any level of SAT score, you should give preference to Group One. For example, look at all the students who scored 500 on the SAT (i.e., holding SAT constant at 500). The Group One kids got better grades than did the Group Two kids. So just using the SATs, without taking the Group factor (e.g., income) into account, biases things against Group One. The Group One students can complain: “the SAT underestimates our abilities, so the SAT is biased against us.”

Case closed? Not yet. I hesitate to go up against an academic superstar like Mankiw, and I don’t want to insult him (I’ll leave that to Paul Krugman). But there are two ways to regress the data. So there’s another regression, maybe one that Mankiw does not want to see.

What happens if we take the same data and regress SAT scores on GPA? Now GPA is our predictor variable. In effect, we’re using it as an indicator of how smart the student really is, the same way we used the SAT in the first graph.
Let’s hold GPA constant at 3.0. The Group One students at that GPA have, on average, higher SAT scores. So the Group Two students can legitimately say, “We’re just as smart as the Group One kids; we have the same GPA. But the SAT gives the impression that we’re less smart. So the SAT is biased against us.”
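The made-up data behind these graphs can be mimicked with a small simulation (a sketch with invented numbers, not the data Livingston used): treat SAT and GPA as two noisy measures of the same underlying ability, give Group One the higher mean, and then hold each measure constant in turn.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # large groups, so the conditional means are stable

def simulate(mu):
    """SAT and GPA as independently noisy measures of one underlying ability."""
    ability = rng.normal(mu, 1.0, n)
    sat = 500 + 100 * (ability + rng.normal(0, 1.0, n))  # SAT-like scale
    gpa = 3.0 + 0.5 * (ability + rng.normal(0, 1.0, n))  # GPA-like scale
    return sat, gpa

sat1, gpa1 = simulate(+0.5)  # Group One: higher mean ability
sat2, gpa2 = simulate(-0.5)  # Group Two: lower mean ability

# Hold SAT constant near 500: Group One gets the better grades.
near500 = lambda s: (s > 490) & (s < 510)
print(gpa1[near500(sat1)].mean(), gpa2[near500(sat2)].mean())

# Hold GPA constant near 3.0: Group One has the higher SAT scores.
near3 = lambda g: (g > 2.95) & (g < 3.05)
print(sat1[near3(gpa1)].mean(), sat2[near3(gpa2)].mean())
```

Both complaints fall out of the same data: conditioning on either noisy measure pulls each group’s estimate back toward its own group mean, so each direction of regression favors a different group.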

So where are we?
  • The test makers say that it’s a good test - it predicts who will do well in college.
  • The Group One students say the test is biased against them.
  • The Group Two students say the test is biased against them.
And they all are right.


Huge hat tip to my brother, S.A. Livingston. He told me of this idea (it dates back to a paper from the 1970s by Nancy Cole) and provided the made-up data to illustrate it. He also suggested these lines from Gilbert and Sullivan:
And you'll allow, as I expect
That they are right to so object
And I am right, and you are right
And everything is quite correct.




Overcoming Social Desirability Bias – He’s Got a Little List

April 19, 2011
Posted by Jay Livingston

As some day it may happen that a survey must be done, you need a little list, a quick five-item list – for sex or race or crime or things quite non-PC but fun, where pollsters all have missed, despite what they insist. There’s the guy who says he’d vote for blacks if they are qualified; he’d vote for women too, but are we sure he hasn’t lied? “How many partners have you had?” Or “Did you ever stray?” With things like this you can’t always believe what people say. You tell them it’s anonymous, but still their doubts persist, and so your methodology can use this little twist.

It’s called the List Experiment (also the Unmatched Count Technique). It’s been around for a few years, though I confess I wasn’t aware of it until I came across this recent Monkey Cage post by John Sides that linked to another post from the presidential year of 2008. Most surveys then were finding that fewer than 10% of the electorate were unwilling to vote for a woman (Hillary was not mentioned by name). But skeptical researchers (Matthew Streb et al., here gated), instead of asking the question directly, split the sample in half. They asked one half

How many of the following things make you angry or upset?
  • The way gasoline prices keep going up.
  • Professional athletes getting million dollar-plus salaries.
  • Requiring seat belts to be used when driving.
  • Large corporations polluting the environment.
Respondents were told not to say which ones pissed them off, merely how many. Researchers calculated the average number of items people found irritating. The second half got the same list but with one addition:
  • A woman serving as president.
If the other surveys are correct, adding this one item should increase the mean count by no more than 0.10 items. As it turned out, the difference in means implied that 26% of the electorate would be upset or angry about a woman president, considerably more than the 6% in the GSS sample who said they wouldn’t vote for a woman.

The technique reminds me of a mentalist act: “Look at this list, sir, and while my back is turned tell me how many of those things you have done. Don’t tell me which ones, just the total number. Now I want you to concentrate very hard . . . .” But I can certainly see its usefulness as a way to check for social desirability bias.
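The arithmetic behind the list experiment is nothing more than a difference in mean counts between the two half-samples. Here is a sketch with hypothetical responses (not the Streb et al. data):

```python
import statistics

# Hypothetical "how many items upset you?" answers from each half-sample.
control = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]    # saw the four-item list
treatment = [3, 3, 1, 4, 3, 3, 2, 2, 3, 2]  # saw the list plus the sensitive item

# Nobody reveals which items upset them, but the difference in means
# estimates the share upset by the added (sensitive) item.
estimate = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated share upset by the sensitive item: {estimate:.0%}")  # 30%
```

The anonymity is what matters: the sensitive answer is never observable for any individual, only in the aggregate.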

What’s Wrong With (Percentages in) Mississippi

April 10, 2011
Posted by Jay Livingston

A Public Policy Polling survey asked Mississippi Republicans about their opinion on interracial marriage. It also asked how they felt about various politicians. The report concludes, “Tells you something about the kinds of folks who like each of those candidates.”

Not quite.

What’s been getting the most attention is the finding that Mississippi Republicans think interracial marriage should be illegal. Not all Mississippi Republicans. Just 46% of them (40% think it should be legal).* Does their position on intermarriage tell us anything about who they might like as a candidate? Does a Klansman wear a sheet?

(Click on the chart for a larger view.)

It’s no surprise that Sarah Palin is much preferred to Romney. But as PPP points out, racial attitudes figure differently depending on the candidate. When you go from racists to nonracists,** Palin’s favorable/unfavorable ratio takes a hit, but Romney’s gets a boost.

But does this tell us something about “the kinds of folks who like each of those candidates”? The trouble is that the statement percentages on the dependent variable, implicitly comparing Romney supporters with Palin supporters. But the percentages actually given by PPP compare racists with nonracists.** The statement implies that candidate preferences tell us about racial attitudes. But what the data show is that racial attitudes tell us about candidate preferences. The two are not the same. From the data PPP gives, we don’t actually know what percent of Palin supporters favor laws against intermarriage. Ditto for Romney supporters.
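The difference between the two directions of percentaging is easy to see in a made-up cross-tab (hypothetical counts, not PPP’s data):

```python
# Hypothetical counts: racial attitude (rows) by candidate preference (columns).
table = {
    ("illegal", "Palin"): 120, ("illegal", "Romney"): 40,
    ("legal", "Palin"): 60, ("legal", "Romney"): 80,
}

def pct(event, given):
    """P(event | given): percentage computed within the `given` group."""
    num = sum(v for k, v in table.items() if event in k and given in k)
    den = sum(v for k, v in table.items() if given in k)
    return num / den

# Percentaging on attitudes answers "whom do the racists prefer?" ...
print(f"P(Palin | illegal) = {pct('Palin', 'illegal'):.0%}")  # 75%
# ... which is not the same as "what share of Palin supporters are racists?"
print(f"P(illegal | Palin) = {pct('illegal', 'Palin'):.0%}")  # 67%
```

Same table, two different questions: percentaging within the wrong variable answers a question the data weren’t tabulated for.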

In any case, neither Palin nor Romney is the top choice of Mississippi Republicans (especially the racists), who may be thinking racially but are acting locally and going with their own governor first and the former governor of neighboring Arkansas second.


* The sample was only 400. But the results aren’t too different from what the GSS has found. The most recent GSS I could find that included RACMAR was from 2002. In the “East South Central” region, the percent favoring laws against interracial marriage was 36%. So among Republicans, it might have been ten points higher.

**I realize that neither of these terms “racist” and “nonracist” is necessarily accurate. I use them as shorthand for, respectively, “people who think interracial marriage should be illegal” and “people who think interracial marriage should be legal.”

Graphing Ideas about Marriage (Me vs. USA Today)

February 3, 2011
Posted by Jay Livingston

As someone with the visual aptitude of gravel, I shouldn’t be edging into Flâneuse territory. But when I saw this graph in USA Today this morning, I was frustrated.

(Click on the image for a larger view.)
Responses, by age group, when asked if they want to marry:
SOURCES: Match.com/MarketTools survey of 5,199 men and women who either have never been married or are widowed, divorced or separated.

I found it hard to make comparisons from one age group to another. In the online edition, the layout was better – all in a row – and the addition of even a single color helped. (Odd that USA Today, the newspaper that led the way in using color, gave its print readers the graph in only black-and-white, or more accurately gray-and-gray.)

(Click on the image for a larger view.)

I thought I’d try my own hand with my rudimentary knowledge of Excel.

(Click on the image for a larger view.)

What do you think?

The Law of Ungraspably Large Numbers

December 23, 2010
Posted by Jay Livingston

Been here long?

Gallup regularly asks this question:
Which of the following statements comes closest to your views on the origin and development of human beings --
  1. Human beings have developed over millions of years from less advanced forms of life, but God guided this process,
  2. Human beings have developed over millions of years from less advanced forms of life, but God had no part in this process
  3. God created human beings pretty much in their present form at one time within the last 10,000 years or so?
Here are the results:

(Click on the graph for a larger view.)

For better or worse, Godless evolutionism has been rising steadily if slowly for the past decade – 16%, and counting. And “only” 40% of us Americans, down from 47%, believe that humans are johnnies-come-lately. Scientific fact is making some headway. But a lot of people still believe in something that’s just not true.

Andrew Gelman explains it in psycho-economic terms. The “belief in young-earth creationism . . . is costless.” What you hear from religion contradicts what you hear from science class in school. The cost (“discomfort” in Andrew’s terms) of rejecting one belief outweighs the cost of rejecting the other. That’s probably true, and it helps explain the popularity of the have-it-both-ways choice – evolution guided by God.

I think there’s something else – the law of ungraspably large numbers. For example, I know how far it is to California (3000 miles), and I even think I know how far it is to the moon (240,000 miles – and I’m not looking this up on the Internet; if I’m wrong, I’ll let my ignorance stand since that’s partly the point I’m trying to make). But once you get past that – how far is it to the sun or to Jupiter or to Betelgeuse? – you could tell me any number up in the millions or more – a number so wrong as to make any astronomer chuckle – and I’d think it sounded reasonable.

Those big numbers and the differences between them are meaningful only to people who are familiar with them. They are so large that they lie outside the realm of everyday human experience. The same holds for distances in time. Ten thousand years – that seems like a long, long time ago, long enough for any species to have been around. But “millions of years” is like those millions or hundreds of millions of miles – ungraspably large.

Since the number is outside the realm of human experience, it doesn’t make sense that humans or anything resembling them or even this familiar planet could have existed that long ago.

I suspect that it’s this same law of ungraspably large numbers that allows politicians to posture as doing something about “the huge deficit” by attacking a wasteful government program that costs $3 million. If I spend a few thousand dollars for something, that’s a big ticket item, so three million sounds like a lot. Millions and billions both translate to the same thing: “a lot of money” just as distances in millions of miles and billions of miles are both “a long way away.” The difference between them is hard to grasp.*

*How many such programs would the government have to cancel to cover the revenue losses we just signed on for by extending the tax cuts on incomes over $250,000? And if you think those tax cuts for the rich will pay for themselves or increase revenue, there’s a lovely piece of 1883 pontine architecture I’d like to show you for possible purchase.

Methods Fraud - Right and Left

June 30, 2010

Two links:

1. Fox News used a really, really deceptive graph to make job-loss data look even worse than they really are. Media Matters has the story.

2. Research 2000, a polling firm, may have been faking its data. Kos, who has been relying on their polls, has a long post detailing the tell-tale signs – things people would do if they were trying to make their polls appear to follow random sampling. (Makes me feel a bit more confident of my own criticism of a Research 2000 poll.)

UPDATE, July 1: I had thought that the Kos/Research 2000 story was just for those interested in technical matters (sampling, data distributions) and maybe political blogs. But both the Times and WaPo, and perhaps other newspapers, have stories about it today.

The Ecological Fallacy

May 10, 2010
Posted by Jay Livingston

The ecological fallacy is alive and well. Ross Douthat, the New York Times’s other conservative (the one that isn’t David Brooks), breathes life into it in his op-ed today on Red Families v. Blue Families, the new book by Naomi Cahn and June Carbone.

First Douthat gives props to the “blue family” model:
couples with college and (especially) graduate degrees tend to cohabit early and marry late, delaying childbirth and raising smaller families than their parents, while enjoying low divorce rates and bearing relatively few children out of wedlock.
Then there’s the “red family” for whom the stable, two-parent family is more a hope than a reality:
early marriages coexist with frequent divorces, and the out-of-wedlock birth rate keeps inching upward.
Blue looks good – good for the couples, good for the kids, good for society. But Douthat finds a moral thorn among the blue roses – abortion.
The teen pregnancy rate in blue Connecticut, for instance, is roughly identical to the teen pregnancy rate in red Montana. But in Connecticut, those pregnancies are half as likely to be carried to term.

So it isn’t just contraception that delays childbearing in liberal states, and it isn’t just a foolish devotion to abstinence education that leads to teen births and hasty marriages in conservative America. It’s also a matter of how plausible an option abortion seems, both morally and practically, depending on who and where you are.
Douthat is channeling Balzac: Behind every great fortune lies a great crime. Behind every more modest fortune – say, enough to live in Danbury if not Greenwich – is a more modest crime, i.e., an abortion or two.

But here’s the fallacy: Douthat makes it appear that the Connecticut residents who are getting those abortions are the same “couples with college and (especially) graduate degrees” we met in the paragraph on blue families. The illogic goes like this:
Blue states with higher levels of income and education also have higher levels of abortion than do Red states.
Therefore more Blue chip people have more abortions than do Red necks.
No, no, no (I hear myself repeating to my students). You cannot assume that a correlation at the state level also exists at the individual level. Just because wealthier states have higher rates of abortion, you cannot assume that wealthier individuals have higher rates of abortion. To make that assumption is to commit the ecological fallacy.

In fact, the Connecticut women who are getting abortions may also be relatively poor and uneducated. The difference is that abortion may give them access to further education or employment – not a graduate degree and a 6-figure job, but something better than what they could expect were they in Alabama. Or Montana.
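The fallacy is easy to reproduce in a toy simulation (made-up numbers, not Cahn and Carbone’s data): give the richer state easier access to abortion, but make abortion more common among the relatively poor within each state.

```python
import numpy as np

rng = np.random.default_rng(1)

def state(mean_income, base_rate, n=100_000):
    """Within a state, poorer individuals are more likely to have had an
    abortion; states differ in overall access (base_rate)."""
    income = rng.normal(mean_income, 10, n)  # income in $1,000s
    p = np.clip(base_rate - 0.004 * (income - mean_income), 0, 1)
    abortion = rng.random(n) < p
    return income, abortion

# Richer "blue" state with easier access; poorer "red" state with less.
inc_ct, ab_ct = state(mean_income=80, base_rate=0.30)  # a "Connecticut"
inc_mt, ab_mt = state(mean_income=50, base_rate=0.10)  # a "Montana"

# State level: the richer state has the higher abortion rate ...
print(ab_ct.mean(), ab_mt.mean())

# ... yet within each state, those who had abortions are the poorer ones.
for inc, ab in [(inc_ct, ab_ct), (inc_mt, ab_mt)]:
    print(inc[ab].mean() < inc[~ab].mean())  # True in both states
```

At the state level, income and abortion move together; at the individual level the correlation runs the other way – exactly the reversal the ecological fallacy hides.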

The Uses and Abuses of Surveys

May 10, 2010
Posted by Jay Livingston

Ask a silly question, you get a silly answer. Ask a politically loaded question, you get a political answer – even if the literal meaning of your question seems to be asking about matters of fact and not opinion.

Here are eight questions from a Zogby poll. Respondents were given a Likert scale from Strongly Agree to Strongly Disagree, but the authors treat answers as either correct or incorrect according to basic economic principles.
1. Restrictions on housing development make housing less affordable.
2. Mandatory licensing of professional services increases the prices of those services.
3. Overall, the standard of living is higher today than it was 30 years ago.
4. Rent control leads to housing shortages.
5. A company with the largest market share is a monopoly.
6. Third-world workers working for American companies overseas are being exploited.
7. Free trade leads to unemployment.
8. Minimum wage laws raise unemployment.
Respondents were also asked to classify themselves on a political spectrum – Progressive, Liberal, Moderate, Conservative, Very Conservative, Libertarian.

This survey wasn’t designed to discover what people think. It was designed to prove a political point: “The Further Left You Are the Less You Know About Economics.” That’s the title of a post about it at Volokh Conspiracy. A paper by Zeljka Buturovic and Dan Klein, who designed the survey, gives the results.

(Click on the image for a view large enough to actually read)

The results were similar for the other questions.

To be sure, the liberals’ view of economic cause-and-effect relationships reflects the way they would like the world to be rather than the way the world actually is. But the bias of the poll is obvious. As monkeyesq says in his comment at Volokh,
1. Pick 8 liberal positions that have a questionable economic basis;
2. Ask people whether they “agree” or “disagree” with the statements;
3. Find that liberals are more likely to support liberal positions;
4. Claim that liberals don’t understand economics.
There’s an even larger problem here – a problem that affects not just polls that have an obvious ax to grind,* but a basic problem of all survey research: the question the survey asks may not be the question the respondent hears or answers.

These eight questions have a literal meaning. As Todd Zywicki, who wrote the Volokh post, says, “Note that the questions here are not whether the benefits of these policies might outweigh the costs, but the basic economic effects of these policies.”

True, the questions do not ask about costs and benefits, although I don’t think that the survey included an explicit caveat like the one Zywicki adds after the fact. Still, we have to wonder how people really heard these questions.

“Mandatory licensing of professional services increases the prices of those services” – Agree or Disagree? Maybe some people hear a different question, a question about policy implications: “Would you like cheaper, but unlicensed, doctors?”

“A company with the largest market share is a monopoly.” Maybe what the person hears is: “Can companies with large market share – though less than the share required for it to be a monopoly (100%?) – still exercise monopolistic powers?”

As for the “exploitation” of third-world workers, the word may have a precise economic definition (e.g., it’s exploitation only if the worker has no choice) – I don’t know. But even if such an economic definition exists, to most people the word evokes moral judgment, not economics.

The other items also have flaws, as some of the comments at Volokh (now 200 and counting) point out. (I confess that I’m still puzzled by the responses to Standard of Living. Nearly a third of all respondents think that the standard of living today is no better than it was 30 years ago – 55% on the left, 12% on the right, 21% of libertarians.)

The survey may tell us that “epistemic closure” is a disease that can infect the left as well as the right. But it also tells us to be cautious about interpreting survey questions literally. Even innocuous questions may mean different things to survey respondents. Until a question has been tested several times, we can’t be sure what respondents hear when they are asked that question.

*A Kos poll that set out to show that quite a few Republicans were extremist nuts suffers from a similar problem. I blogged it here.

Meanness and Means

April 2, 2010
Posted by Jay Livingston

On March 27, the Times ran an op-ed by David Elkind, “Playtime is Over,” about the causes of bullying:

it seems clear that there is a link among the rise of television and computer games, the decline in peer-to-peer socialization and the increase of bullying in our schools.
I was skeptical. Had there really been an increase in bullying? Elkind offered no evidence. He cited numbers for current years (school absences attributable to bullying), but he had no comparable data for the pre-computer or pre-TV eras. Maybe he was giving a persuasive explanation for something that didn’t exist.

I sent the Times a letter expressing my doubts. They didn’t publish it. Elkind is, after all, a distinguished psychologist, author of many books on child development. As if to prove the point, three days later, the big bullying story broke. An Irish girl in South Hadley, Massachusetts, committed suicide after having been bullied by several other girls in her high school. The nastiness had included Facebook postings and text messages.

I guess Elkind was right, and I was wrong. Bullying has really exploded out of control in the electronic age.

But today the op-ed page features “The Myth of Mean Girls,” by Mike Males and Meda Chesney-Lind. They look at all the available systematic evidence on nastiness by teenagers – crime data (arrests and victimizations), surveys on school safety, the Monitoring the Future survey, and the CDC’s Youth Risk Behavior Surveillance. They all show the same trend:
This mythical wave of girls’ violence and meanness is, in the end, contradicted by reams of evidence from almost every available and reliable source.
Worse, say the authors, the myth has had unfortunate consequences:

. . . more punitive treatment of girls, including arrests and incarceration for lesser offenses like minor assaults that were treated informally in the past, as well as alarmist calls for restrictions on their Internet use.*
This is not to say that bullying is O.K. and nothing to worry about. Mean girls exist. It’s just that the current generation has fewer of them than did their parents’ generation. Should we focus on the mean or on the average? On average, the kids are not just all right; they’re nicer. Funny that nobody is offering explanations of how the Internet and cell phones might have contributed to this decline in meanness.

*For a recent example, see my post about criminal charges brought against young teenage girls for “sexting,” even though the pictures showed no naughty bits.


UPDATE: At Salon.com, Sady Doyle argues that Chesney-Lind and Males looked at the wrong data.

Unfortunately, cruelty between girls can't really be measured with the hard crime statistics on which Males and Lind's argument relies. . . . Bullying between teenage girls expresses itself as physical fighting less often than it does as relational aggression, a soft and social warfare often conducted between girls who seem to be friends. You can't measure rumors, passive-aggressive remarks, alienation and shaming with statistics.
She has a point. While most of the evidence Males and Chesney-Lind cite is not “hard crime statistics,” it does focus on overt violence. But Doyle is wrong that you can’t measure “relational aggression.” If something exists, you can measure it. The problem is that your measure might not be valid enough to be of use.

If Doyle is right, if nonphysical bullying hasn’t been measured, that doesn’t mean that Males and Chesney-Lind are wrong and that bullying has in fact increased. It means that we just don’t know. We do know that physical violence has decreased. So here are the possibilities.

  1. Physical and nonphysical aggression are inversely related. Girls have substituted nonphysical aggression for physical aggression – social bullying has increased.
  2. Less serious forms of aggression usually track with more serious forms (nationwide, the change in assault rates runs parallel to the change in murder rates). So we can use rates of physical aggression as a proxy for rates of bullying – social bullying has decreased.
  3. Physical and nonphysical aggression are completely unrelated, caused by different factors and found in different places – the change in social bullying is anybody’s guess.

How Much is Three Percent?

March 11, 2010
Posted by Jay Livingston

The Freakonomics blog today assures us that emergency room overutilization is a “myth.” All that talk about the uninsured doing what George W. Bush suggested and using the emergency rooms as primary care, that’s just baseless scare tactics. Citing a Slate article, they give the data:
E.R. care represents less than 3 percent of healthcare spending, only 12 percent of E.R. visits are non-urgent, and the majority of E.R. patients are insured U.S. citizens, not uninsured, illegal immigrants.
That “majority” might be 99.9% or it might be 50.1%. It turns out that the uninsured account for about 20% of E.R. visits.

My trouble is that I never know if those percents are a lot or a little. Take that 3% of spending. I’m not an economist, and although I haven't done the math, I figure that 3% of $2.3 trillion might still be a significant chunk of change. So just to make sure that 3% was in fact a pittance, a part of the “emergency room myth,” I looked for other Freakonomics articles with a similar number.

  • foreclosure rates began a steady rise from 1.7 percent in 2005 to 2.8 percent in 2007. [Three percent of healthcare spending is a little; 2.8% of mortgages is a lot.]
  • I was surprised at how high the fees were. . . . Even on big-ticket items like airline tickets, the credit-card company collects nearly 3 percent. [Three percent of healthcare spending is a little; 3% of an airline ticket is a lot.]
  • The homeownership rate in the U.S. increased by 3 percentage points over the past decade — a clear break from the two previous decades of stagnation. [Three percent of healthcare spending is a little; 3% of homeownership is a lot.]

You get the idea. Maybe whether 3% is a lot or a little depends on its political use. I don’t follow the Freakonomics political views closely, but I’m guessing that they don’t like Hugo Chavez down in Venezuela.
opposition voters [those who opposed Chavez] experienced a 5 percent drop in earnings and a 1.5 percent drop in employment rates after their names were released. The authors also conclude that the retaliatory measures may have cost Venezuela up to 3 percent of G.D.P. due to misallocation of workers across jobs.
Chavez “may have” cost his country a whopping 3% of GDP, i.e., $9.4 billion (or possibly less; note that “up to”). E.R. visits cost the US only a negligible 3% of healthcare spending. And the uninsured are only one-fifth of that, a mere $14 billion.
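The back-of-the-envelope arithmetic behind those dollar figures is easy to check. The spending total and the percentages below are the ones quoted in the post, rounded; they are illustrative, not independent health-economics data:

```python
# Rough check of the dollar figures implied by the post's percentages.
# All inputs are the post's rounded numbers, not precise data.

us_health_spending = 2.3e12      # ~$2.3 trillion annual US healthcare spending
er_share = 0.03                  # E.R. care: "less than 3 percent" of spending
uninsured_share_of_er = 0.20     # uninsured: ~20% of E.R. visits

er_spending = er_share * us_health_spending            # ~$69 billion
uninsured_er = uninsured_share_of_er * er_spending     # ~$14 billion

print(f"E.R. spending: ${er_spending / 1e9:.0f} billion")
print(f"Uninsured share of that: ${uninsured_er / 1e9:.0f} billion")
```

Seen in dollars rather than percents, the “negligible” 3% is roughly seven times the GDP hit attributed to Chavez.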

Whether 3% is a lot or a little seems to depend on your politics and what the issue is.

Unions too are bad, at least for business.
a successful unionization vote significantly decreases the market value of the company even absent changes in organizational performance. Lee and Mas run a policy simulation and conclude that, “ … a policy-induced doubling of unionization would lead to a 4.3 percent decrease in the equity value of all firms at risk of unionization.”
For a paltry 100% increase in the number of workers getting the benefits of unionization, companies would suffer an overwhelming 4.3% decrease in equity value.

Now about those 20 people in front of you in line at the emergency room. Only four of them (20%) are there because they don’t have insurance. They are part of what Freakonomics calls a “rosier picture.” I wonder if Freakonomics maybe has one or two posts where 20% is a pretty big amount, something to worry about, instead of being the equivalent of a bunch of roses in the hospital.

Cooking the Books - A Second Look

February 19, 2010
Posted by Jay Livingston

Do the police undercount crime?

The graph I cribbed from Rick Rosenfeld in yesterday’s post showed a remarkable similarity between victimization surveys and official crime statistics. In 2000, for example, the rate of reported burglaries according to the NCVS was nearly identical to the UCR rate. Both were about 4.4 per 1,000.

Yet in the recent Eterno-Silverman study, police commanders, responding anonymously, said that crime statistics were suppressed. And Josh in his comment yesterday refers to Peter Moskos’s “let me count the ways” description of how the police keep crimes off the books. (See Moskos’s own take on the study at his website.)

The problem is that the graph I presented was somewhat misleading. The NCVS and UCR rates of burglary do not measure exactly the same thing. It’s not exactly oranges and apples; more like oranges and tangerines.

1. The NCVS data are for the New York metro area, so we have to use similar UCR data even though the rap about fudging the stats is only about the NYPD. There’s no way to get around that problem.

2. More crucially, the NCVS counts only residential burglaries; the UCR number includes both commercial and residential burglaries. Nationwide, about 2/3 of all UCR burglaries are residential. Using that figure for the New York area, we get a UCR rate for residential burglaries of only 3.0 per 1,000 population, about one-third less than we would expect from the estimate of the number of residential burglaries that victims say they reported. Here’s an amended graph. I’ve added a line for residential burglaries that uses the simple 2/3 formula.

(Click on the graph for a larger view.)

The rate of residential burglaries that victims say that they report is usually one-and-a-half to two times greater than the rate of residential burglaries officially “known to the police.” For the year 2000, the NCVS rate of 4.4 per 1,000 population works out to 40,000 reported residential burglaries. If 2/3 of UCR burglaries are residential, only about 27,500 residential burglaries made it onto the police books.

Does that mean that the police canned 12,500 reported burglaries? Probably not. There may be other explanations for some of the discrepancy. But the data do provide some support for those who are skeptical of the precision of the police numbers.
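The arithmetic behind that gap can be laid out in a few lines. The implied metro population and the 2/3-residential share are the post’s own approximations, so the outputs are rough, not official counts:

```python
# Rough reconciliation of the NCVS and UCR burglary figures in the post.
# Inputs are the post's approximations; the implied metro population is
# backed out from "4.4 per 1,000 works out to 40,000" (not a census figure).

metro_pop = 9_090_000        # implied by 40,000 / (4.4 per 1,000)
ncvs_rate = 4.4 / 1000       # residential burglaries victims say they reported
ucr_res_rate = 3.0 / 1000    # UCR total rate scaled by the 2/3 residential share

ncvs_reported = ncvs_rate * metro_pop     # ~40,000 reported to police
on_the_books = ucr_res_rate * metro_pop   # ~27,300 (the post rounds to 27,500)
gap = ncvs_reported - on_the_books        # ~12,700 reported but never recorded

print(round(ncvs_reported), round(on_the_books), round(gap))
```

With the post’s rounding, the gap comes out to the 12,500 figure quoted above; the exact number depends on the population estimate you plug in.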

Cooking the Crime Books?

February 18, 2010
Posted by Jay Livingston

“Crimes known to the police” is the official count of Crime in the United States – the annual report published by the FBI, which compiles data from local police departments. It’s also known as the Uniform Crime Reports (UCR).

Many years ago, a friend of mine found that his car had been broken into and wanted to report the crime to the police. He went to the local precinct, and when the desk sergeant finally acknowledged him, he said, “Someone broke into my car and stole my stuff.”

“So what do you want me to do?” said the sergeant.

That was one larceny that never became “known to the police,” at least not on the books of the 20th precinct.

The problem of uncounted crime has been around a long time. In the late 1940s, New York’s burglary rate grew by 1300% in a single year, a huge increase but entirely attributable to changes in bookkeeping. Word had gone out that burglaries should no longer be routinely assigned to “Detective Can.”

In the 1980s, Chicago’s robbery rate rose after the FBI threatened the city that it wouldn’t include their data because the numbers were so suspect. Atlanta kept its numbers artificially low prior to the Olympics. This week, the Dallas police chief is under attack for the way his department reports crimes.

Now two criminologists, John Eterno and Eli Silverman, are claiming that New York’s crime data have been fudged consistently for the last 15 years, and they point to CompStat as the culprit (NY Times article here.) CompStat is the system that William Bratton brought to New York when he became police commissioner in 1994. It required commanders to report every week on statistics and patterns of crime in their areas.

Eterno and Silverman gave anonymous surveys to retired precinct commanders. Under pressure to appear effective in the war on crime, precinct commanders might stretch the facts. The value of a theft might be creatively investigated to keep the total under the $1,000 threshold between a misdemeanor and the felony known as “grand larceny.” Felonies look worse on your statistical report.

A purse snatch might get recorded as a theft instead of a robbery because robberies fall into the broader category of “violent” crimes. Or victims, like my friend in the old days, might be persuaded not to bother reporting the crime.

In an op-ed in the Times yesterday, Bratton vigorously defended the NYPD numbers. He provided no data, but he could have.

Since 1973, the US has had an alternate count of crime, the National Crime Victimization Survey. Most of the data are for the US, but Rick Rosenfeld and Janet Lauritsen were able to get three-year averages for New York City, and they have looked at the data for burglary.

(Click on the graph for a larger view.)


The graph shows the rates (in percent) of
  • people who told the NCVS they had been victims of a burglary
  • people who say they reported the burglary to the police
  • the official rate of burglaries “known to the police”
The numbers are not precisely comparable (the NCVS rate may be based on households rather than population, and the UCR rate includes commercial burglaries as well as residential). But the data in the graph do not support the idea that CompStat increased the fudging of burglary statistics. If it had, then starting in 1994, we should see a widening gap between the NCVS line and the UCR line, with the UCR line moving downward much more. But if anything, it’s the Victimization line that descends more steeply.

In the decade following CompStat, both sources of data show a 68% decrease in burglary. So if commanders were cooking the books, they weren't including burglary in the recipe.

What Was the Question?

February 5, 2010
Posted by Jay Livingston

Survey questions may seem straightforward, but especially if the poll is a one-off, with questions that haven’t been used in other polls, you can’t always be sure how the respondents interpret them.

The Kos/Research 2000 poll of Republicans has been getting some notice, and no wonder. At first glance, it seems to show that one of our two major political parties is home to quite a few people who are not fully in touch with reality, especially when Obama is in view.

Do you believe Barack Obama is a racist who hates White people?
Yes 31
No 36
Not Sure 33


Do you believe Barack Obama wants the terrorists to win?
Yes 24
No 43
Not Sure 33


Should Barack Obama be impeached, or not?
Yes 39
No 32
Not Sure 29


I’m not sure what the results mean. Self-identified Republicans are about 25% of the electorate.* If one-third of them hold views that are “ludicrous” (Kos’s term), that’s still only 8% of the voters.

But what about non-ludicrous Republicans? Suppose you were a mainstream conservative and Research 2000 phoned you. To find out, I put some of the questions to a Republican I know – non-ludicrous (he reads the Wall Street Journal; he doesn’t watch Glenn Beck).

Do you believe Sarah Palin is more qualified to be President than Barack Obama? (In the survey, 53% said, “yes.”)

Such a loaded question! I think she's nuts and he's sane – but in principle, she's right and he's wrong about most issues.


Do you believe Barack Obama wants the terrorists to win?

They don't WANT terrorists to win – no – but they don't care as much about the battle as most Americans do.

He might have said Yes to the interviewer just because he thought a Yes was more in line with the spirit of the question than with its actual wording. Or he might have refused to answer (and possibly have been put in the “Not sure” category?)

So the questions are more ambiguous than they seem, even on close reading.

Should public school students be taught that the book of Genesis in the Bible explains how God created the world?
Seventy-seven per cent of the sample said, “Yes.” And Kos, who commissioned the poll in connection with his book – to be called American Taliban – will see that result as rabid pro-creationism and anti-science. But re-read the actual question. Here’s what my sane Republican had to say:

This one's easy:
Absolutely yes. “public school students should be taught” a lot of important facts about our culture and civilization – that the Greeks invaded Ilium and destroyed Troy, that Confucius was the inspiration for a great religion, that Thomas A. Edison invented the electric light bulb, that Darwin in his Origin of the Species explained how animals change according to the process of natural selection, and “that the book of Genesis in the Bible explains how God created the world.” Why the hell not teach that fact? Who could say no to that?

Who indeed? Not me.

-----------------------
* The poll may have oversampled the fringe (see Emily Swanson at Pollster), but those folks at the fringe are more likely to be active at the local level, so it’s possible they’ll swing some weight at the national level too. Their preferred candidate is, of course, Sarah Palin. So while political scientists think the poll may be exaggerating the far right (see Joshua Tucker’s excellent critique at The Monkey Cage), the Palinistas are hailing the poll as spot on.

Correlation and Cause - Feeding and Breeding

January 25, 2010
Posted by Jay Livingston

Andre Bauer’s idea that poor people are like stray animals is what will get most of the attention, as I suppose it should. Bauer* is running for governor of the enlightened state of South Carolina, where Appalachian Trail hiker Mark Sanford is still in that office.** Bauer is Lt. Gov., and here’s what he said à propos programs for free and reduced-price lunches in the public schools.
My grandmother was not a highly educated woman, but she told me as a small child to quit feeding stray animals. You know why? Because they breed. You're facilitating the problem if you give an animal or a person ample food supply. They will reproduce, especially ones that don't think too much further than that. And so what you've got to do is you've got to curtail that type of behavior. They don't know any better,
Bauer stands by his analogy and says he was quoted out of context. Right.

Obviously, Bauer did not take Sociology of Poverty. Of less importance politically is that he also skipped the methods course. Apparently, he has some data – a bar graph – but he mistakes correlation for cause.
I can show you a bar graph where free and reduced lunch has the worst test scores in the state of South Carolina. You show me the school that has the highest free and reduced lunch, and I'll show you the worst test scores, folks. It's there, period.
I suppose that it is somehow possible that providing food for impoverished kids makes them dumb. Maybe electing people to office in the Palmetto State has a similar effect.

*Lt. Gov. Andre Bauer is not to be confused with Andre Braugher, the excellent actor who played Detective Pembleton on “Homicide” (the forerunner to “The Wire”) and is currently in “Men of a Certain Age.” Pictures below. You figure out which Andre is which.



** What’s up with The Palmetto State and its public servants? Lt. Gov. Bauer is incautious not just in his campaign speeches. He also tends to get stopped for speeding, and he once crash-landed a small plane. (CSM article here.) Then, besides Sanford and Bauer, there’s the former chair of the SC Board of Education, who home schooled her kids, believes that “intelligent design” and “abstinence only” should be taught in the schools, and resigned only when it was revealed that she also publishes online porn (oops, I mean erotic fiction.) The story and links to her very NSFW prose are here. I guess she just wanted to put the palm back in palmetto.