No Blame in My Game

May 23, 2010
Posted by Jay Livingston

The Texas school board approved those guidelines. They rehabilitated Thomas Jefferson – apparently he did well at the re-education center – but I fear that sociology did not fare so well. According to an earlier report,
In the field of sociology, another conservative member, Barbara Cargill, won passage of an amendment requiring the teaching of “the importance of personal responsibility for life choices” in a section on teenage suicide, dating violence, sexuality, drug use and eating disorders.
“The topic of sociology tends to blame society for everything,” Ms. Cargill said.
It’s not just Texas. Ms. Cargill’s comment took me back to a brief encounter during my first tour on jury duty in New York – nearly two weeks of boredom and frustration. I was eager and curious to serve on a jury, but I could never make it past voir dire. The prosecutors rejected me every time. (In principle, you are not told which side rejected you, but it was not hard for me to guess.)

One afternoon when I came back from lunch, I went to the men’s room, and there at the sink was the district attorney who had kept me off a jury that morning. “Why did you toss me off your case?” I asked, trying to sound innocent. “Are you kidding?” he said. “A sociologist? You people don’t think anyone’s responsible for what they do.”

At the time, I didn't know what to say, and the conversation ended there. But what I should have said is this: You are confusing two separate questions. It's one thing to understand the social forces that may have made a person more likely to commit a crime. It is quite another matter to absolve that person of legal blame for the crime. For a lawyer or a juror (even a juror who is also a sociologist), the question of guilt is paramount. But for the sociologist who is thinking about crime as a social problem, the issue of individual guilt or innocence is much less important.

I might have even offered this analogy from the tort side: if there’s a traffic accident, and one driver is suing another, we have to figure out who was to blame. But if we notice that several accidents have occurred at this same location, we also talk about “a dangerous intersection.” We’re not “blaming” the intersection. We are just saying that if you want to reduce the number of accidents, you can exhort drivers to be more careful, and you can punish those who smash into other cars. But you’ll have more success if you put in a stoplight.

Call It Please "Research"

May 21, 2010
Posted by Jay Livingston

The Internet seems to be a mostly copyright-free zone. Norms about using other people’s material are still evolving, and even when there’s consensus about the norms, who can enforce them?

Two weeks ago, in a post about political rumors, I took some information put together by another blogger, J.L. Bell, who had gotten it from Snopes. I turned Bell’s numbers into a simple bar graph, checked Snopes myself, and added a few comments. I linked to Bell’s blog. (I figured everyone knows Snopes, so I didn’t bother with a link.)

About ten days later, Lisa at Sociological Images posted my graph along with a few sentences of mine, a few of Bell’s, and a few brief comments of her own. She gave credit where due and provided the links.

Now a political blogger, Digby, has pretty much copied Lisa’s post wholesale (including the links to me and Bell).

She has deleted one or two brief sentences and added one of her own. Other than that, it’s Lisa’s post. No link to Sociological Images, no hat tip, nothing.

See for yourself. Lisa’s post is here; Digby’s is here.

Digby is usually more careful. Her posts often consist mostly of quotes from other sources with her own brief comments interposed and links to the original sources. It’s the kind of borrowing and linking that many bloggers do. Bell borrowed from Snopes, I borrowed from Bell, Lisa borrowed from me.

But what Digby did with Lisa’s post goes beyond borrowing.

It Could Happen To You

May 20, 2010
Posted by Jay Livingston

Conservatives discount or ignore the importance of social forces. Even those observers who like to think of themselves as closer to the center, like David Brooks, emphasize “character.” As you turn the dial further to the right, you hear more and more about “individual responsibility.”
Parents need to communicate basic principles about character development, honor and individual responsibility. Young people need to know that they are not victims of their hormones . . . .
That’s from the website of Concerned Women for America, a right-wing Christian group. (If you can’t guess their agenda, see their core issues here.)

One of their allies in Congress is Mark Souder, a Republican from Indiana and a strong family-values guy. But now he’s resigning after news leaked out that he’d been having an affair with a woman on his staff. (A video of her interviewing him about his pro-abstinence views has the Internet ironists LOL in Schadenfreude.)

For conservatives, when a friend like Souder goes astray, the old responsibility rap sounds discordant, and they have to change the playlist. Penny Spence is the head of CWFA, and here’s her take on the Souder affair:
I am deeply saddened by the news of Congressman Mark Souder’s fall into the temptation of an affair. . . . If Mark Souder is capable of sexual misconduct, it could happen to anyone.
Right. The affair was not something Souder did. It “happened to” him. That seems a bit passive even by my standards. But then Spence gets downright sociological.
The frat house environment on Capitol Hill does nothing to encourage accountability. Most Members do not live with their families while they are working in D.C. during the week and have even ditched common rules of etiquette that even major corporations follow such as office doors with windows or careful examination of employee/boss interaction.
Her keen attention to situational forces does not extend to suggestions for structural changes that might discourage adultery. Instead, she merely encourages lawmakers in DC “to guard their hearts and reputations and to live by higher standards.”

To me, the interesting question is not how a solid, family-values Christian could fall into temptation. As Spence says, anyone can slip and fall.* But apparently this affair had been going on for years. How did Souder manage it? Did he change his ideas to accommodate his behavior – ideas not just about adultery but about himself – and what was that process like? What is the “moral career” of the adulterer?

*The subject line of this post is a reference to the great Burke-Van Heusen standard. If you are among the few who saw Woody Allen’s “Anything Else,” you heard this brief version by Diana Krall. Listen to the lyrics, for they reflect what Spence probably has in mind: “Hide your heart from sight; lock your dreams at night; it could happen to you.” Miles recorded it with his 1956 quintet. My favorite version is by Keith Jarrett in the 1996 Tokyo concert.

Scouting for Titles

May 19, 2010
Posted by Jay Livingston

Tyler Cowen asks his readers to guess the best-selling book of all time (“And I mean the best-selling real book, not linked to either religion or communism?”)

Here is his answer:
Charles Dickens, A Tale of Two Cities, two hundred million copies.
Next in line is Baden-Powell's Boy Scouts book and then Lord of the Rings.
Note that Tyler tactfully refers to Baden-Powell’s 1908 classic in the generic rather than by its actual title, which is so much more delightfully ambiguous: Scouting for Boys.



(Wikipedia has the full list.)

Blowback

May 17, 2010
Posted by Jay Livingston

“Unintended consequences.” The term comes from sociology – it was coined by Robert Merton 75 years ago – but it has of late become a favorite weapon of free-market economists and other conservative and libertarian types. They use it for bashing liberals and their government programs designed to limit harm and promote the general welfare.

Conservative policies too may suffer from a similar effect. Ann Coulter famously announced, in the days following the 9/11 attacks, “We should invade their countries, kill their leaders and convert them to Christianity.” The National Review fired her for her intemperate comments, but the Bush Administration, in its policies, seemed to take her proposals to heart.* Well, two out of three ain’t bad. Invading countries and killing leaders proved not to be too difficult. But that third one:
Since the U.S. invasion, Iraq's Christians have been mostly driven out of the country by violence directed against them for their religion. . . Relentless waves of bombings, assassinations, kidnappings, extortions and rapes have triggered a mass exodus of Christians from Iraq over the past seven years. Since 2003, over half of the estimated 1.5 million . . . have fled
So writes Nina Shea at the Washington Post’s On Faith page. Ms. Shea is identified as “director, Hudson Institute's Center for Religious Freedom.” The Hudson Institute is a neo-conservative think tank that, although Ms. Shea doesn’t mention it, strongly supported the invasion of Iraq. Surely the good conservatives there did not intend that their favored policy should wind up limiting religious freedom, certainly not the religious freedom of Christians. But that’s what happened.


* The Bushies even adopted Coulter’s view that it didn’t really much matter who “they” was. Any Muslim state would do. “This is no time to be precious about locating the exact individuals directly involved in this particular terrorist attack. Those responsible include anyone anywhere in the world who smiled in response to the annihilation of patriots . . .”

Frisks and Risks

May 13, 2010
Posted by Jay Livingston

In New York, a city of roughly 8 million people, the police stopped and frisked over half a million people* last year. (The Times story is here.)

“We are saving lives, and we are preventing crime,” said the NYPD spokesman. If you don’t believe him, just look at this pie chart of the weapons that the searches turned up.

(Click on the graph for a larger view, but you still won’t be able to see that line for guns.)

Guns are the thin red line, so thin that it’s all but invisible – with 762 guns out of 570,000 stops, it’s hard to make the line look any thicker. (A post last year had a pie chart showing the proportion of stops that led to any official action, a larger slice of the pie than the slivers representing guns and other weapons.)
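The arithmetic behind that sliver is simple enough to check. A quick back-of-the-envelope calculation, using the 762 and 570,000 figures from the Times story:

```python
# Gun yield of NYC stop-and-frisks in 2009 (figures from the Times story)
stops = 570_000   # total stop-and-frisk encounters
guns = 762        # guns recovered

gun_rate = guns / stops
print(f"Guns turned up in {gun_rate:.2%} of stops")  # about 0.13%
```

Roughly one stop in 750 produced a gun – which is why the red slice is invisible in any honest pie chart.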

We have known the number of stop-and-frisks only since 2003.** In the next two years, the number doubled. In 2009 police made nearly three times as many stops as they had in 2003. Has this dramatic increase taken a bite out of crime? Let’s ask the experts.
Heather Mac Donald, a research fellow at the Manhattan Institute who has spoken to police officials about the tactic, said there was no question it had an effect on crime.
Ms. Mac Donald is apparently the Times’s go-to conservative on crime issues,*** and she must know. After all, as the Times says, she has spoken with police officials. I don’t have any contacts among the NYPD brass, so I had to look at the available crime statistics. I chose murder. It’s the crime where statistics are the most accurate. It’s also the crime most likely to be reduced by the cops taking guns away from bad guys on the street. So I expected a sharp decrease in the years following 2003.


Hmm. The trend is downward, not dramatic but gradual, and it seems to be a continuation of a trend that started before the big increase in stop-and-frisks. There’s also that rise in murders in 2006, when the number of stop-and-frisks also increased by about 25%, roughly from 400,000 to 500,000. (For a line graph showing the rise in the number of stops, see the Times article.)

My analysis is just quick-and-dirty. To draw a credible conclusion, you’d have to take several other variables into account. A good multivariate model might find that the effect of stop-and-frisk was greater than it appears – maybe Ms. Mac Donald knows of such studies and even mentioned them to the reporter, and he just left them out of his story. Or maybe those high-quality studies, if they exist, found no effect. But just looking at the basic data on the two variables – stops and murder – makes it hard to say that “there was no question” it had an effect on crime. But Ms. Mac Donald said it anyway.

* Or rather they made 570,000 stop-and-frisk searches. Since the usual suspects may have had more than one such encounter, we don’t know how many individuals were stopped. But we do know that 490,000 of them were black or Hispanic, 53,000 were white. Those numbers, while they do not reflect the population of New Yorkers, may reflect the population of street criminals.

** The police agreed to make the data public as part of the settlement of a lawsuit. Four cops stopped a man and wound up firing 41 bullets at him, killing him. They thought he had a gun. In fact, he was unarmed and innocent of any crime. Needless to say, the victim was black, and now the cops have to keep records of stop-and-frisks, including the race of the stop-and-friskee.

***
This post cites her view of the salutary effects of harsh drug laws, a view she supports with evidence comparable to that mentioned in the current article.

The Ecological Fallacy

May 10, 2010
Posted by Jay Livingston

The ecological fallacy is alive and well. Ross Douthat, the New York Times’s other conservative (the one that isn’t David Brooks), breathes life into it in his op-ed today on Red Families v. Blue Families, the new book by Naomi Cahn and June Carbone.

First Douthat gives props to the “blue family” model:
couples with college and (especially) graduate degrees tend to cohabit early and marry late, delaying childbirth and raising smaller families than their parents, while enjoying low divorce rates and bearing relatively few children out of wedlock.
Then there’s the “red family” for whom the stable, two-parent family is more a hope than a reality:
early marriages coexist with frequent divorces, and the out-of-wedlock birth rate keeps inching upward.
Blue looks good – good for the couples, good for the kids, good for society. But Douthat finds a moral thorn among the blue roses – abortion.
The teen pregnancy rate in blue Connecticut, for instance, is roughly identical to the teen pregnancy rate in red Montana. But in Connecticut, those pregnancies are half as likely to be carried to term.

So it isn’t just contraception that delays childbearing in liberal states, and it isn’t just a foolish devotion to abstinence education that leads to teen births and hasty marriages in conservative America. It’s also a matter of how plausible an option abortion seems, both morally and practically, depending on who and where you are.
Douthat is channeling Balzac: Behind every great fortune lies a great crime. Behind every more modest fortune – say, enough to live in Danbury if not Greenwich – is a more modest crime, i.e., an abortion or two.

But here’s the fallacy: Douthat makes it appear that the Connecticut residents who are getting those abortions are the same “couples with college and (especially) graduate degrees” we met in the paragraph on blue families. The illogic goes like this:
Blue states with higher levels of income and education also have higher levels of abortion than do Red states.
Therefore more Blue chip people have more abortions than do Red necks.
No, no, no (I hear myself repeating to my students). You cannot assume that a correlation at the state level also exists at the individual level. Just because wealthier states have higher rates of abortion, you cannot assume that wealthier individuals have higher rates of abortion. To make that assumption is to commit the ecological fallacy.

In fact, the Connecticut women who are getting abortions may also be relatively poor and uneducated. The difference is that abortion may give them access to further education or employment – not a graduate degree and a 6-figure job, but something better than what they could expect were they in Alabama. Or Montana.
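The fallacy is easy to demonstrate with made-up numbers. In the sketch below (all counts hypothetical, and the state names invented, chosen purely for illustration), the blue state has the higher overall abortion rate, yet within both states the rate is higher among poor women than among wealthy ones – so the state-level pattern tells you nothing about which individuals are getting abortions:

```python
# Hypothetical counts, purely to illustrate the ecological fallacy.
# Each group maps to (abortions, women).
states = {
    "Blueland": {"wealthy": (2, 100), "poor": (40, 100)},
    "Redland":  {"wealthy": (2, 100), "poor": (10, 100)},
}

for name, groups in states.items():
    total_a = sum(a for a, _ in groups.values())
    total_n = sum(n for _, n in groups.values())
    parts = ", ".join(f"{g}: {a/n:.0%}" for g, (a, n) in groups.items())
    print(f"{name}: overall {total_a/total_n:.0%} ({parts})")
```

Blueland’s higher state-level rate (21% vs. 6%) comes entirely from its poor women; wealthy women have the same low rate (2%) in both states. Inferring from the state-level comparison that wealthy individuals have more abortions would be exactly the ecological fallacy.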

The Uses and Abuses of Surveys

May 10, 2010
Posted by Jay Livingston

Ask a silly question, you get a silly answer. Ask a politically loaded question, you get a political answer – even if the literal meaning of your question seems to be asking about matters of fact and not opinion.

Here are eight questions from a Zogby poll. Respondents were given a Likert scale from Strongly Agree to Strongly Disagree, but the authors treat answers as either correct or incorrect according to basic economic principles.
1. Restrictions on housing development make housing less affordable.
2. Mandatory licensing of professional services increases the prices of those services.
3. Overall, the standard of living is higher today than it was 30 years ago.
4. Rent control leads to housing shortages.
5. A company with the largest market share is a monopoly.
6. Third-world workers working for American companies overseas are being exploited.
7. Free trade leads to unemployment.
8. Minimum wage laws raise unemployment.
Respondents were also asked to classify themselves on a political spectrum – Progressive, Liberal, Moderate, Conservative, Very Conservative, Libertarian.

This survey wasn’t designed to discover what people think. It was designed to prove a political point: “The Further Left You Are the Less You Know About Economics.” That’s the title of a post about it at Volokh Conspiracy. A paper by Zeljka Buturovic and Dan Klein, who designed the survey, gives the results.


The results were similar for the other questions.

To be sure, the liberals’ view of economic cause-effect relationships reflects the way they would like the world to be rather than the way the world actually is. But the bias of the poll is obvious. As monkeyesq says in his comment at Volokh,
1. Pick 8 liberal positions that have a questionable economic basis;
2. Ask people whether they “agree” or “disagree” with the statements;
3. Find that liberals are more likely to support liberal positions;
4. Claim that liberals don’t understand economics.
There’s an even larger problem here – a problem that affects not just polls that have an obvious ax to grind,* but a basic problem of all survey research: the question the survey asks may not be the question the respondent hears or answers.

These eight questions have a literal meaning. As Todd Zywicki, who wrote the Volokh post, says, “Note that the questions here are not whether the benefits of these policies might outweigh the costs, but the basic economic effects of these policies.”

True, the questions do not ask about costs and benefits, although I don’t think that the survey included an explicit caveat like the one Zywicki adds after the fact. Still, we have to wonder about how people really heard these questions.

“Mandatory licensing of professional services increases the prices of those services” – Agree or Disagree? Maybe some people hear a different question, a question about policy implications: “Would you like cheaper, but unlicensed, doctors?”

“A company with the largest market share is a monopoly.” Maybe what the person hears is: “Can companies with large market share – though less than the share required for a monopoly (100%?) – still exercise monopolistic powers?”

As for the “exploitation” of third-world workers, the word may have a precise economic definition (e.g., it’s exploitation only if the worker has no choice) – I don’t know. But even if such an economic definition exists, to most people the word evokes moral judgment, not economics.

The other items also have flaws, as some of the comments at Volokh (now 200 and counting) point out. (I confess that I’m still puzzled by the responses to Standard of Living. Nearly a third of all the respondents think that the standard of living today is no better than it was 30 years ago – 55% on the left, 12% on the right, 21% of libertarians.)

The survey may tell us that “epistemic closure” is a disease that can infect the left as well as the right. But it also tells us to be cautious about interpreting survey questions literally. Even innocuous questions may mean different things to survey respondents. Until a question has been tested several times, we can’t be sure what respondents hear when they are asked that question.

*A Kos poll that set out to show that quite a few Republicans were extremist nuts suffers from a similar problem. I blogged it here.

How Genetics Works (borrowed post)

May 8, 2010
Posted by Jay Livingston

Flaneuse at Graphic Sociology reposted this from some place. I couldn't resist reposting it as well.


I know nothing about the photo. It reminds me of Elliott Erwitt, though I’m sure it’s not.
It looks too old to have been photoshopped.

Lies, Damn Lies, and Statistics About Lies

May 7, 2010
Posted by Jay Livingston

We all know that Barack Obama is a Muslim who was born in Kenya and that his campaign was funded by Hugo Chavez.

Yes, I know, presidents going back to Washington have been the subject of rumors. But it seems that with the arrival of Obama, presidential rumors have become something of a growth industry (maybe our only growth industry these days).

J.L. Bell blogs mostly about comic books and fantasy literature at his Oz and Ends. But two weeks ago he posted some numbers about presidential rumors on the Internet.* For his data, he went where most of us would go – Snopes.com. Here’s what he found.


In less than two years, Obama rumor-mongers have had nearly twice the output that their Bush counterparts managed in eight years – 87 to 47. And while the Bush rumors split almost evenly true-false, false Obama rumors dwarfed the true ones.** The false rumors about Obama outnumbered the total number of rumors about Bush. And while the lies about Obama are almost all negative, some of the false rumors about Bush are quite flattering, along the lines of the George Washington cherry tree rumor – like the rumor that had Bush paying for the funeral of a boy who had drowned near the Crawford ranch.

Is there really a right-left difference? If the “epistemic closure” hypothesis is accurate – if conservatives, even the chattering intellectuals, live and write in a bubble that keeps out any realities that might conflict with their ideology – then conservatives of all sorts might also welcome into the bubble even the most preposterous and unfounded rumors.

Surely, there must be a sociology of rumor. What are the demographic correlates, if any? What are the conditions under which rumors are more likely to arise and spread? I would imagine that lack of trust is important. The less we trust others who are outside our relatively small circle, and the less we interact with them, the more likely we will be to rely on rumor.

Trust, at least trust in government, has been decreasing generally, but conservatives, when they are not in control of the government, are especially mistrusting. Under any circumstances, false beliefs are frustratingly resistant to facts. It probably doesn’t improve conservatives’ grasp of reality when they have a major TV network giving airtime to these rumors and when their leaders tell them about death panels.


* His post was picked up by Salon.com and then by Brendan Nyhan, who has had some interesting posts on the epistemic closure discussion.

** The other categories were “mixed,” “undetermined,” and “unclassifiable.” In the graph, I collapsed the latter two categories into “other.”

Messenger NAEP - or Charles Murray Channeling the Left

May 5, 2010
Posted by Jay Livingston

How fitting in these weeks of final exams to be reminded that tests are irrelevant – at least if you don’t like the results. They are the messenger who brings bad news. But will killing the messenger solve the problem?

I remember the good old days when academics railed at standardized tests like the SAT. If the tests showed group differences – between black and white, male and female – that just showed that the tests were biased and should not be used.

Now, Charles Murray, the man on the right that everyone on the left loves to hate,* has joined the anti-test chorus and, on the Times op-ed page today, he’s singing lead. Murray, like most conservatives, is a supporter of charter schools. He’s badmouthing standardized tests because they show that charter schools do no better than traditional public schools at educating students, especially the kinds of kids who are most in need of effective schools.

The volume of studies on school “choice” – comparing charter schools and voucher programs to traditional public schools – is now large. Most commonly, these find no difference in the progress of students in charters and those in publics. A few charters do better; a greater number underperform their public counterparts.

Rather than dismissing the research as merely wrong or turning up the volume on the few studies favorable to charters and ignoring the rest, as do some conservatives I know, Murray goes radical. What these studies really show, he says, is that tests don’t matter.
Why not instead finally acknowledge that standardized test scores are a terrible way to decide whether one school is better than another?
I’m not going to search through Murray’s oeuvre to see how many times he has cited test scores as meaningful or where he stood on No Child Left Behind. But I am going to guess that if the results had been different, if research showed that charters consistently outperformed publics on standardized tests, Murray would be putting up billboards praising NAEP and the rest.

But Murray goes way beyond the idea that tests are irrelevant. He says that when it comes to teaching kids to read and do math, the schools themselves are irrelevant.
Cognitive ability, personality and motivation come mostly from home. What happens in the classroom can have some effect, but smart and motivated children will tend to learn to read and do math even with poor instruction, while not-so-smart or unmotivated children will often have trouble with those subjects despite excellent instruction. If test scores in reading and math are the measure, a good school just doesn’t have that much room to prove it is better than a lesser school.
Murray is deliberately ignoring one inconvenient fact: that some teachers and some schools consistently do a better job of teaching hard-to-teach populations. But instead of wanting more effort to figure out just what those teachers and schools do so that others can also do more of it, Murray denies that they have any meaningful effect.

What Murray mostly wants is not good education, though he would probably not oppose such improvement. What he wants is charter schools. Since test scores that measure learning don’t support charters, Murray goes back to another song from the old left fake book – School as Ideology. The left used to complain – maybe it still does – that what schools really did was not so much teach subjects but indoctrinate kids into the dominant capitalist ideology so as to turn out a compliant labor force.

Right on, says Murray. But the dominant ideology, in his view, is now liberal and therefore to be avoided. Charter schools “would give parents a choice radically different from the progressive curriculum used in the county’s** other public schools.” Charters are a way for conservative parents to keep their kids out of the hands of the liberals – a sort of home-school away from home.

He has a point. If I lived in Texas, I might want the option of sending my kid to a state-funded charter school that included in the curriculum some of the arcane figures from America’s history that will now be excluded – people like Thomas Jefferson. Such choice would be welcome to middle-class parents like me and Murray, who are not much worried about our kids learning to read. But a choice of ideology is probably not high on the list of concerns of the parents whose kids are in schools where most of the students are years behind in reading and math.


* The mention of Murray’s name has been, for at least fifteen years now, a Pavlovian ringing of the bell curve, guaranteed to set lefties to frothing at the mouth. I actually used to admire Murray, at least for his writing. When a Marxist colleague asked me to review a manuscript she had written, a rather tendentious book on violence, I told her that she should try to write more like Murray – to present radical and probably offensive ideas in language that makes them seem calm and reasonable.

** Murray is referring to a specific charter proposal in the Maryland county where he lives, but for the argument he is making, he might just as easily have said country instead of county.

The Adoption Option

May 1, 2010
Posted by Jay Livingston

I’m on my way to another baby shower today. It’s a celebration, but as Lisa at Sociological Images pointed out a couple of weeks ago, how you view a pregnancy depends on where you are in the society. Lisa was responding to the recent PSA video by Bristol Palin (see the video and Lisa’s comments here.) Lisa’s take is that while the ad is telling teens to be cautious about sex, it also makes the point that the consequences of teen pregnancy are much harsher for girls who have little financial or social capital.

For me, the ad was a reminder of how different my own world is from the world of the intended audience of that ad. One obvious difference is abortion. For the cosmopolitan, educated, relatively well-off women I know, abortion is always an option. Not so for the Bristol Palins.

But there’s a cultural difference regarding adoption too.

Alice Eve Cohen’s memoir, what I thought I knew, gives a personal, poignant example. It’s a very complicated story, for as the title implies, everything that the medical experts tell her about her own fertility turns out to be wrong. She is told she can never conceive because she is a DES daughter, but in her forties, she becomes pregnant. Then she is told that the baby will have severe physical and mental defects, but she does not know this definitely until late in the pregnancy. A late-term abortion would be risky.

“I think adoption is the right path,” she writes, but her husband, sisters, and friends all disagree.
In this liberal, Upper West Side community, where abortion is accepted as a woman’s inalienable right, giving up a baby for adoption is inconceivable. . . . Where I live, I’d be more harshly judged for giving up my baby for adoption than for having an abortion.
[Full disclosure: where she lives is three short blocks from where I live, and we’ve known each other for 17 years.]

The debate about abortion – pro-life vs. pro-choice – may have something to do with religious beliefs. But, at least for those most deeply involved, as Kristin Luker pointed out a quarter-century ago in Abortion and the Politics of Motherhood, the debate has a strong subtext: clashing ideas about the position of women in society. Should a woman be more honored for success in her role in the family or her success in the world of work and career?

Luker’s explanation may be less useful for understanding the culturally different views of adoption. Adoption is not so much about the role of women; it’s more about the role of babies and children. For some women, babies are a gift from God, and the gifts just seem to keep on coming, even to those who are unmarried and who took abstinence pledges (see my earlier post on this here.) There’s no shame in sharing with people who have not been similarly gifted.

But here on the Upper West Side, the more typical woman’s plight is not so much that she didn’t want to get pregnant but did. It’s that she wanted to get pregnant but couldn’t. For these women, babies are rare and precious. You’d no more give one away than you would (forgive an extreme analogy) give away a winning lottery ticket.

It's Your Decision

April 26, 2010
Posted by Jay Livingston

The sitcoms of my youth often featured a plot line that might be called “It’s Your Decision.” Typically, a teenager, though sometimes a younger child, faces some moral dilemma:
  • go out with the handsome football player even though it means breaking a previous commitment to your best friend?
  • help your friend keep his drug use a secret or betray him and let grown-ups know about it?
  • take the offer of a wonderful summer in a villa on the Riviera with a rich friend or stay home and help out the family, who really need you right now?
Inevitably, at about five minutes before the end of the half hour, the child finally discusses the dilemma with the parent. And inevitably, after a thoughtful and sympathetic analysis of the problem, Cosby or Roseanne or whoever says, “I can’t tell you what to do. It’s your decision.”

Despite this disclaimer, the parent’s desires are quite obvious, and the child, no surprise here, always winds up making the right decision. (And, this being American TV, after the child makes the right decision, some deus ex machina Hollywood ending removes the cost of the decision. It turns out that her friend had always wanted her to go out with the football player anyway and has brought him along; the rich friend invites the whole family to the Riviera, etc.)

I was reminded of this plot-line when I started reading Claude Fischer’s new blog, Made in America,* a companion to his new book of the same name. In his most recent post, he takes on the question of American individualism. The post is a response to a right-wing song of praise to American exceptionalism by conservatives Richard Lowry and Ramesh Ponnuru writing in the National Review. America, they say, is “freer, more individualistic, more democratic, and more open and dynamic than any other nation on earth.”

Is it? Fischer provides some data from international surveys showing that when you compare the US to some of these other nations, we’re not always the most individualistic. Given a choice between individual conscience on the one hand and the law on the other, Americans are nowhere near the top in choosing individual conscience. In other hypothetical conflicts – individual vs. country, individual vs. marital norms, individual vs. conventional morality – several European countries are more on the side of the individual than are Americans. In fact, in all these cases, fewer than half of us put the individual ahead of the group. (See all of Claude’s graphs here.)

(Click on the chart for a larger view.)

How can we keep insisting that we are individualistic – the most individualistic nation on earth?

Fischer offers some ways to explain the apparent contradiction, and the one he seems to favor is that America combines individualism with “voluntarism.”
Voluntarism is about being part of a community, but belonging voluntarily. Americans have long held that people can and should join or leave groups – families, congregations, clubs, townships, and so on – of their individual free will.
But once you voluntarily join a group, you must play by its rules. “Love it or leave it.”

I’ve commented elsewhere in this blog that a substantial part of the right wing in the US applies this ethic only to Democrats, not to themselves. When conservatives are in power, they see dissent as disloyalty (“Love it or leave it”), but when Obama is president, dissent (i.e., their dissent) becomes the higher patriotism.

Fischer’s formulation resolves the apparent contradiction between our professed love of liberty and our conformity, but only if you recognize that the voluntarism isn't really all that voluntary. As de Tocqueville noted 170 years ago:
I know no country in which there is so little true independence of mind and freedom of discussion as in America. In any constitutional state in Europe every sort of religious and political theory may be advocated and propagated abroad. . . . .In America the majority raises very formidable barriers to the liberty of opinion: within these barriers an author may write whatever he pleases, but he will repent it if he ever step beyond them. (DIA, Vol. 2, Ch. XV.)
O.K. I know de Tocqueville seems like a long way from Cosby. But here’s my point. Other cultures – especially stable, aristocratic cultures – allow the individual a wide range of opinions and behaviors. They can tolerate eccentricity because the group is certain that the individual will never defect. He will always be a member of his class or his family. But in America, affiliation is voluntary and therefore not necessarily permanent. Because of our ideology of individual freedom, we must worry that the person who does not love the group might well leave it. So we must be on guard against anything that hints at deviation from the group's norms.

The more efficient solution is socialization – to instill in people both the desire to conform and the idea that their choices are their own and not influenced by the group. That's basically the message of Cosby and the other sitcom parents.**

As David McClelland*** put it years ago, the American is saying to himself, “I want to freely choose to do what others expect me to do.”

It’s your decision.

Yeah, right.
-----------------------------

* Hat tip: Chris Uggen.

** Not all sitcom parents are like Cosby. We also have the sitcom tradition of dad-as-buffoon, a TV line that extends back from Homer Simpson to Chester Riley. It probably has roots in radio and even earlier media.

*** I’m pretty sure that Claude Fischer began his graduate studies in Harvard’s Social Relations department when McClelland was on the faculty there, but I somehow doubt that their paths crossed very often.

Big Pink

April 24, 2010
Posted by Jay Livingston

Palermo is cleaning AC Milan’s clock, 2-0 in the first half. I don’t usually watch soccer – in fact, I’ve never seen either of these teams play before – but when I’m grading papers, it helps to have a sport that you don’t really have to watch, a sport where scoring is rare.

It wasn’t Palermo’s goals or outstanding saves that caught my attention. It was their uniforms.


If there were to be a new NFL franchise, what are the chances that its uniforms would be pink? Or a new NBA or MLB franchise? Maybe you have to be from Sicily to get away with this (“You got a problem with pink?”). More likely, pink doesn’t have the meaning there it has here.

In the US today, pink is the color of girls and of preppy guys who buy their shirts at Brooks Bros. It is definitely not the color of jocks. (I think that there are some evol-psych types who argue that seeing pink as feminine is part of human nature.) But pink hasn’t always had that meaning even in the US. Take a look at some of the pink posts by Lisa and Gwen at Sociological Images (here and here, for example).

Hail to Thee, Gay Spirit

April 23, 2010
Posted by Jay Livingston

I know it’s unfashionable in the sociology of culture to look to the mass media for signs of the zeitgeist (spirit of the times). But could it be mere coincidence that on the very same day (today) that the New York Times crossword uses the word “gaydar” for the first time (definition: “Sense of orientation”)

(Click on the image for a larger view.)

we also get the news that Riverdale High – where Archie, Betty, Jughead, Veronica and the rest have been attending in blissful heteronormativity all these years – will soon have an openly gay student?

Up Against the Wall Street Banks

April 22, 2010
Posted by Jay Livingston

President Obama could have stayed in Washington today to make his pitch for financial regulation of banks. But he decided to make the trip up to New York, and I think I know why. It’s all about “Wall Street.”

Words matter, as Bill Clinton recently said, alluding to an earlier speech by Barack Obama. Words matter to survey researchers too, not just to Democratic presidents. Ask people about the rights of “gays and lesbians,” you get one answer. Ask about “homosexuals,” you get less support. (See my earlier post here.)

Now it’s the banks. Should they be regulated? If you ask about “large banks and financial institutions,” the difference between Favor and Oppose is negligible* – three percentage points. But if you ask about regulating “Wall Street Banks” the difference jumps to 14 points – 50% in favor, 36% opposed. (From a recent Gallup poll.)

The choice of words matters, but it matters more to some people than others. Mostly Republicans. When you break down the Gallup data by political affiliation, the results look like this.

(Click on the chart for a larger view.)

For Democrats and Independents, the choice of words makes little difference.** A big bank is a big bank regardless of its address and regardless of what you call it. But not for Republicans. They are generally against regulation, of course. Only 22% of Republicans want to regulate “large banks,” and 70% oppose such regulation. But ask them about “Wall Street banks,” and regulation begins to look better. Thirty-five percent are in favor. And opposition drops to barely half (53%).

I leave it to readers to speculate as to why Republicans find that large banks are so much more in need of regulation when those banks are located in New York’s Wall Street district. But no wonder that the world’s largest banking group, HSBC, has been promoting itself, in the US at least, not as a multinational giant but as a repository of local values.


* That’s within the 95 percent margin of error of 4 points.
** The differences are within the margin of error, which is 5 points in the split-sample questions.
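For readers who wonder where those footnoted margins come from, here is a quick sketch using the standard formula for a proportion’s 95 percent margin of error, 1.96 × √(p(1−p)/n). The sample sizes below are hypothetical round numbers chosen to reproduce margins of roughly 4 and 5 points; Gallup’s actual split-sample sizes are not given in the post.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion, in percentage points.
    Uses p = 0.5, the worst case, which gives the widest margin."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Assumed sample sizes, not Gallup's real ones:
print(round(margin_of_error(600), 1))   # about 4 points (a full half-sample)
print(round(margin_of_error(380), 1))   # about 5 points (a party subgroup)
```

The general point: the smaller the subgroup, the wider the margin, which is why the party breakdowns carry a 5-point margin while the full split samples carry 4.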

Who Do You Trust?

April 20, 2010
Posted by Jay Livingston

Last August, I said here that the TeaBaggers and other angry Republicans didn’t really believe in democracy. They don’t seem to accept the idea that the person who gets the most votes gets to be president. To them, Obama is an illegitimate usurper. He’s not the “real” president. And the people who voted for him are not “the real America.”

The recent Pew report on trust in government provides some support for this idea. The report also shows that these sentiments didn’t arise just with the Obama presidency. Generally, Democrats are more accepting of Republican administrations than are Republicans of Democrats.

The Pew survey did not ask about “legitimacy.” Instead, the key question is about “trust.” It asks whether the government in Washington can be trusted to do the right thing. The graph below shows the percent who answered either “always” or “most of the time.”


The trend is downward, and Democrats and Independents are generally not far apart in their levels of trust. Republicans are subject to greater mood swings.

Pew suggests that the low level of trust will translate into a loss of seats for incumbents come November. Political scientists, like John Sides at The Monkey Cage, suspect that if the economy bounces back, people will become more trusting of incumbent Democrats, or at least trust them enough to vote for them.
If the economy is the key, the Democrats may not have to fix the government to mitigate their losses in 2010, simply preside over an improving economy.
But when I look at the graph, I see Republicans whose trust of Republicans and distrust of Democrats seem impervious to economic winds. Look at the two previous presidencies. The Clinton era was, by any economic measure, a period of great economic prosperity. The Bush years, by contrast, started with a recession, continued with flat or slightly declining income for all but the top earners, and ended with the worst economic disaster since the Depression. Yet Republicans were twice as trusting of George W. Bush as they were of Bill Clinton.

April Showers / Finishing the Hat

April 19, 2010
Posted by Jay Livingston

Religion, says Durkheim, is all about group solidarity. Religious rituals both reflect and create this sentiment of unity and group feeling. The central ritual symbols, notably the group totem and objects imbued with its spirit, are really representations of the group. These objects are of the group, created by the group, and for the group – the group and not its individual members.

I don’t usually think of my world as particularly totemistic or even very religious – certainly not compared with the spiritually charged world inhabited by the members of the clans Durkheim was thinking about, with their churinga and other sacred objects. But I was at a baby shower yesterday, and the day before that, my wife went to a bridal shower. And both of these featured the Ceremony of the Hat.

This is a rite practiced by females in North America, particularly those of European descent, when they gather to celebrate one of their number who is in a state of transition – from single to married, from childlessness to motherhood. OK, no need to go all Horace Miner Nacirema here; most people know the drill. As the woman being honored unwraps her gifts, someone gathers the discarded ribbons and threads them into a paper plate or in some other way creates a hat, which the honoree then models.


(Click on the image for a larger view. Want to see more examples?
Search for “bridal shower hat” at Google Images.)

No doubt, showers have a very rational, utilitarian component. The bride-to-be or mother-to-be gets a lot of stuff that she’ll need in her new role. The online registry has rationalized the process even further, aligning demand and supply. No surprises. Gift-giving has become predictable, controlled, calculable (“number desired,” “number received”), and efficient.

So what’s up with the hat? I didn’t ask, but if I had, the explanation would surely have been along the lines of “Oh, it’s just silly, it’s just for fun.” But Durkheim, lurking in the far corner of the party room, sees something else. The shower is not just a party for the future bride or mom; it’s a ritual, and as such it is for the group itself. These people come together from their disparate daily lives, and at least temporarily they are united into something that transcends any individual.

The hat symbolizes the group – woven together from each person’s ribbon into a single unified and extraordinary object. If you’re at a shower and you have your camera, you might take a picture of the linens or lingerie, the porta-crib or Pat the Bunny. Or you might not. But you always take a picture of the hat.

Tax Day Post - Taxes On Parade

April 15, 2010
Posted by Jay Livingston

Greg Mankiw opened his copy of Parade on Sunday, and he didn’t much like what he saw. It was the “Annual Salary Survey,” and – surprise, surprise – readers saw a lot more rich celebs than they would have seen just by walking around their neighborhoods. Yes, Parade was guilty – it “oversampled” the rich and the famous.
about 14 percent of the people in Parade's sample earn more than $1 million a year. In the real world, the actual percentage is about 0.2 percent.

Even worse than Parade’s methodology was its pernicious effect.
There is a common perception in some circles that we can solve all our fiscal problems if only we were willing to tax the rich some more. Yet, in reality, there are not enough rich for this to work. By presenting such a skewed cross-section of incomes, Parade inadvertently feeds an all-too-common misperception.
Now Greg Mankiw is a respected (and rich) economist, and I’m sure he doesn’t go making statements that can’t be supported by evidence. But this one seems awfully vague. These unidentified “circles” – what are they, and how large are they? Just how common is this “all-too-common misperception”?

I also wonder how much power Parade has over public perceptions. Mankiw notes that Parade has a circulation of 32 million – all those folks who, just like Greg himself, find it folded into their Sunday newspaper along with the coupons for Pop Tarts and Febreze. Do we really know what impact Walter Scott and Marilyn and the rest have had in shaping the American consciousness? (Surely someone has done this research. I just wish Greg had linked to it.)

Justin Wolfers at Freakonomics has the more important criticism: when you are deciding whom to tax, the important variable is not numbers of people but numbers of dollars. So maybe the “misperception” is not really amiss.
Families earning more than $1 million probably do represent close to 14 percent of total income, and maybe more. By arguing that only 0.2 percent of families are this rich, Mankiw risks distracting his readers from the fact that increasing the taxes paid by the rich can be a big part of the solution to our fiscal woes.

Visualizing TV Viewers - Sports and Politics

April 14, 2010
Posted by Jay Livingston

How do you turn data into a good graph? Of course you could ask flâneuse. But suppose you wanted to do it yourself.

Here are the results of a study on preferences in TV sports and in politics – 218,000 interviews conducted over a 13-month period. I’m not sure what the questions were that determined the Democratic and Republican index. The other variables, “Likelihood of voting” and being “very interested” in watching the sport on TV, are fairly straightforward.

The data in the table are sorted on the politics column (R-minus-D Index). PGA golf has the most Republican audience, WNBA the most Democratic.

(Click on the image for a larger view.)

How would you graph the data?

Here’s one possibility, found at dqydj (which stands for “Don’t quit your day job,” but you knew that already, didn’t you?).


(Click on the image for a slightly larger view.)

Blue bars represent political leaning – the difference between the GOP and Democratic indices. Green bars show likelihood of voting. Sports are listed on the x-axis.

I prefer this one, found here.

(Click on the image for a larger view.)

For more on creating visualizations, go to Many Eyes, which has a ton of data sets to play around with.

(Hat tip: Andrew Gelman)

Do I Call You “Doctor” or “Professor”?

April 11, 2010
Posted by Jay Livingston

Fabio had a great post a few days ago at orgtheory about how authority structures in a hospital can be hazardous to your health. Even fatal. In the incident Fabio recounts, an anaesthesiologist could see that the patient was having a bad reaction to the surgeon’s latex gloves. But the surgeon refused to switch to non-latex gloves. Surgeons outrank anaesthesiologists, and if the anaesthesiologist had not resorted to extreme measures (threatening to disrupt the surgery in order to call the hospital administrators), the patient would have died.

My first reaction on reading this was: What an incredible asshole the surgeon must be. But Fabio’s point is that the problem is organizational not personal.* The authority structure of the hospital creates an institutionalized arrogance among doctors. The hazardous result is that information doesn’t flow upward from those “on the ground.” (Fabio’s post is here. The comments are also well worth reading.)

Then Fabio asks:
On a deeper level, what sort of organization would allow people to develop such toxic relationships?
And his speculative answer is:
. . . .a combination of high professional autonomy and a garbage can structure. Hospitals, as far as I can tell, aren’t organizations that make one product with a centrally controlled assembly line. Instead, they are a place were “problems” (patients) drift from place to place (ICU, regular, OR, etc) where they might be “solved” (stop showing symptoms) by some random assortment of people who have limited attention (the physicians, nurses, and surgeons). Each physician isn’t in charge of a patient, they do specific procedures and pass the problem along to other people.
Hmmm. Something about that description sounded familiar. So I tried a few “global replace” edits, and it came out like this:
. . . a combination of high professional autonomy and a garbage can structure. Universities are not organizations that make one product with a centrally controlled assembly line. Instead, they are a place were “students” drift from course to course where they might be “taught” by some random assortment of people who have limited attention. Each professor isn’t in charge of a student; they teach specific courses and pass the student along to other people.
I am not saying that universities are necessarily like this. But they do have the potential to resemble what Fabio calls a “toxic culture.”

* The book this anecdote comes from is Safe Patients, Smart Hospitals by Peter Pronovost. Note – not Safe Patients, Good Doctors. The problem lies in the institutional arrangements, not the individuals who work in the institution.

In Da Household

April 9, 2010

Posted by Jay Livingston

In my classes about inequality, I often use income data, and I don’t usually think too much about what “income” is. Much of the data is on “household income.” “Median household income” presumably reflects the economic well-being of the typical person in that category. It’s often used to compare different groups or regions or trends over time. In the last Presidential campaign, Democrats often pointed out that in the Bush years, while the rich had gotten much richer, real median household income had fallen.

Since then, the economy has gotten worse. But it’s possible that household income may rise. The problem is in the denominator of the fraction.

Imagine two siblings, each with a home, each with an income of $100,000. So their average personal income and their average household income are both $100,000. Suppose that one of them loses his job, his house is foreclosed, and he moves in with his sister’s family. Eventually he finds a job paying $50,000 – not enough for him to move out.

Their average personal income in this family has decreased – it’s now $75,000. But since they now have only one household, their average household income has increased to $150,000.
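The arithmetic above can be sketched in a few lines of Python; the dollar figures simply restate the two-siblings example.

```python
def mean(values):
    return sum(values) / len(values)

# Before: two people, two households, $100,000 each.
personal_before = [100_000, 100_000]
households_before = [[100_000], [100_000]]

# After: the brother loses his home, moves in with his sister,
# and eventually finds a $50,000 job.
personal_after = [100_000, 50_000]
households_after = [[100_000, 50_000]]  # one combined household

avg_personal_after = mean(personal_after)                         # 75,000
avg_household_after = mean([sum(h) for h in households_after])    # 150,000

print(avg_personal_after)    # people are worse off on average
print(avg_household_after)   # yet "household income" has gone up
```

The divergence comes entirely from the denominator: total income fell from $200,000 to $150,000, but the number of households fell faster, from two to one.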

I don’t know how economists deal with this sort of thing. Surely they must have some way of adjusting for it so that we have a fuller picture of what’s happening with the typical American. And one of the things that is happening is that people are doubling up in their houses.

The number of households is shrinking. Many owners who lose their homes are not moving to rentals. Despite the three million foreclosures, rental vacancy rates – nearly 11% – are higher than they’ve been in at least 50 years. Instead, people are moving in with someone else. The number of people per household is increasing.

The trend started in the latter part of the Bush era. Between 2005 and 2008, 1.2 million households disappeared (study commissioned by the Mortgage Bankers Association reported at CNN). So the decline in household income was even worse than the Democrats were making it out to be. In 2009, the trend no doubt continued.

It’s something to keep in mind when we look at data on household incomes.

Playing Games with Names

April 7, 2010
Posted by Jay Livingston

The research findings on names, alluded to in yesterday’s post, seem absurd at first glance. Do people really make important life decisions – choices about where to live and what career to follow – because their “implicit egotism” makes a place or profession more attractive if it has echoes of their own names?

Even failure, these psychologists claim, can mesh with this egotism. A study of baseball players across 90 years found that players whose names began with a K were slightly more likely to strike out. The authors (Nelson and Simmons), in a press release, put it this way:
Even Karl ‘Koley’ Kolseth would find a strikeout aversive, but he might find it a little less aversive than players who do not share his initials, and therefore he might avoid striking out less enthusiastically.
The difference is small – 18.8% vs. 17.2% – but statistically significant. Still, I wonder about the studies that the authors didn’t report on. Did they see if those K hitters also had slightly higher rates of hitting home runs? Home runs – four-baggers, blasts, clouts, slams, moonshots – are not brought to you by the letter K. But I think that there may be a correlation between HRs and Ks. If so, those K batters (Kiner, Kingman, Kaline, Killebrew, Kluszewski) are not the guys for whom striking out is ego-syntonic. They’re the muscle boys who swing for the fences. Sometimes they connect, but they also tend to strike out more often.* Perhaps that slight difference in both statistics is a matter of ethnicity rather than egotism.
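How can a gap of only 1.6 percentage points be statistically significant? With enough at-bats, even tiny differences clear the bar. Here is a rough sketch of a two-proportion z-test; the sample sizes are made-up stand-ins for illustration, not the actual figures from the Nelson and Simmons study.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled estimate for the standard error."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 18.8% strikeouts over 50,000 K-player at-bats vs.
# 17.2% over 500,000 at-bats for everyone else.
z = two_proportion_z(0.188, 50_000, 0.172, 500_000)
print(z > 1.96)  # well past the conventional 5% significance cutoff
```

With ninety years of baseball records, almost any real difference, however trivial in size, will register as “significant” – which is why the substantive questions (home runs, pitchers, ethnicity) matter more than the p-value.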

Something else – the article (or at least the press release about it) reports only on batters. There’s no mention of pitchers throwing strikeouts. Did the authors check to see if a K-hurler was more likely to outfan the rest of the alphabet? Or maybe they did, and just didn’t bother to report the results.*

Then there’s Georgia and Florence. One of the studies reported in the article by Pelham et al. finds that the number of women named Georgia living in Georgia is well above what we would expect; ditto for Florences in Florida.

Is this a Peach State effect or a Southern effect? I grew up in Pennsylvania, and I went to school in Massachusetts. I don’t recall meeting any girls or women named Georgia. And I haven’t encountered any among all the students I’ve taught over the years in New Jersey. I think Georgia is more popular in the South, and my guess is that you’d find the name over-represented not just in Georgia but in South Carolina and Alabama too. As far as I know, the researchers didn’t run that analysis.

Florence is also a southern name, at least in Florida. I’d bet a lot of money that those Florences are not evenly distributed throughout the Sunshine State. You probably won’t find too many of them in Tallahassee or Tampa or Orlando. You have to go farther south, say to Miami. Again, my guess is ethnicity, not egotism.

I’m reminded of the old Carnac joke. Carnac was a character Johnny Carson did on the Tonight Show – the mystic who could divine the answers to questions before he had even seen them. He would say the answer, then open the envelope and read the question that was inside. This one is from 1989, when the S&L crisis was at its peak.


I still remember this one after all these years:

The answer: Venice, Rome, and Florence.
The question: Name two Italian cities and the president of Hadassah.




* TheSocioBlog’s first ever post, inspired by a joke from Kieran Healy’s blog, was about negative results.

I Could Have Been a Sailor

April 6, 2010
Posted by Jay Livingston

My colleague Arnie Korotkin, who, as The Gadfly, blogs about local New Jersey matters, sent me this from today’s Star-Ledger:

N.J. sees rise in vasectomies amid difficult economy
By Kathleen O'Brien/The Star-Ledger
April 06, 2010, 6:30AM

What caught my attention was the doctor’s name. I speak as someone who has heard the same “joke” about my name ever since I was old enough to understand what people were saying. Sometimes just the “I presume,” sometimes with a self-satisfied “heh-heh,” sometimes with an apologetic, “I guess you hear that a lot.”

This poor guy must get tired of the same joke. But he did choose that specialty.

There’s a whole cottage industry in psychology correlating people’s names with their biographies. The idea – which goes by the name of “implicit egotism” – is that people are fond of their own names and that this liking can influence life decisions. Dennis is more likely to become a dentist; George becomes a geoscientist and relocates to Georgia; Laura’s a lawyer. Florence moves to Florida. And Dr. Eric Seaman . . . well, you get the idea.

For more on this, see my earlier blog post on the GPAs of students whose names begin with A and B compared with the C and D students.

The studies are published in respectable psych journals, complete with statistics and references (author, year) in parentheses and academic prose:
Although a high level of exposure to the letters that occur in one’s own name probably plays a role in the development of the name letter effect (see Zajonc, 1968), it seems unlikely that the name letter effect is determined exclusively by mere exposure (Nuttin, 1987).
Even so, these studies get covered in the popular press. And when they do, the probability that the headline will be “What’s In a Name?” approaches 1.0.

If you caught the allusion in the subject line of this post, give yourself five bonus points. It’s a song by Peter Allen; you can see his video of it on YouTube. For a better version, listen here.

Meanness and Means

April 2, 2010
Posted by Jay Livingston

On March 27, the Times ran an op-ed by David Elkind, “Playtime is Over,” about the causes of bullying:

it seems clear that there is a link among the rise of television and computer games, the decline in peer-to-peer socialization and the increase of bullying in our schools.
I was skeptical. Had there really been an increase in bullying? Elkind offered no evidence. He cited numbers for current years (school absences attributable to bullying), but he had no comparable data for the pre-computer or pre-TV eras. Maybe he was giving a persuasive explanation for something that didn’t exist.

I sent the Times a letter expressing my doubts. They didn’t publish it. Elkind is, after all, a distinguished psychologist, author of many books on child development. As if to prove the point, three days later, the big bullying story broke. An Irish girl in South Hadley, Massachusetts, committed suicide after having been bullied by several other girls in her high school. The nastiness had included Facebook postings and text messages.

I guess Elkind was right, and I was wrong. Bullying has really exploded out of control in the electronic age.

But today the op-ed page features “The Myth of Mean Girls,” by Mike Males and Meda Chesney-Lind. They look at all the available systematic evidence on nastiness by teenagers – crime data (arrests and victimizations), surveys on school safety, the Monitoring the Future survey, and the CDC’s Youth Risk Behavior Surveillance. They all show the same trend:
This mythical wave of girls’ violence and meanness is, in the end, contradicted by reams of evidence from almost every available and reliable source.
Worse, say the authors, the myth has had unfortunate consequences:

. . . more punitive treatment of girls, including arrests and incarceration for lesser offenses like minor assaults that were treated informally in the past, as well as alarmist calls for restrictions on their Internet use.*
This is not to say that bullying is O.K. and nothing to worry about. Mean girls exist. It’s just that the current generation has fewer of them than did their parents’ generation. Should we focus on the mean or on the average? On average, the kids are not just all right; they’re nicer. Funny that nobody is offering explanations of how the Internet and cell phones might have contributed to this decline in meanness.

*For a recent example, see my post about criminal charges brought against young teenage girls for “sexting,” even though the pictures showed no naughty bits.


UPDATE: At Salon.com, Sady Doyle argues that Lind and Males looked at the wrong data.

Unfortunately, cruelty between girls can't really be measured with the hard crime statistics on which Males and Lind's argument relies. . . . Bullying between teenage girls expresses itself as physical fighting less often than it does as relational aggression, a soft and social warfare often conducted between girls who seem to be friends. You can't measure rumors, passive-aggressive remarks, alienation and shaming with statistics.
She has a point. While most of the evidence Males and Lind cite is not “hard crime statistics,” it does focus on overt violence. But Doyle is wrong that you can’t measure “relational aggression.” If something exists, you can measure it. The problem is that your measure might not be valid enough to be of use.

If Doyle is right, if nonphysical bullying hasn’t been measured, that doesn’t mean that Males and Lind are wrong and that bullying has in fact increased. It means that we just don’t know. We do know that physical violence has decreased. So here are the possibilities.

  1. Physical and nonphysical aggression are inversely related. Girls have substituted nonphysical aggression for physical aggression – social bullying has increased.
  2. Less serious forms of aggression usually track with more serious forms (nationwide, the change in assault rates runs parallel to the change in murder rates). So we can use rates of physical aggression as a proxy for rates of bullying – social bullying has decreased.
  3. Physical and nonphysical aggression are completely unrelated, caused by different factors and found in different places – the change in social bullying is anybody’s guess.

You Got Truffles, My Friends

April 2, 2010
Posted by Jay Livingston

I first tasted truffles at Vivarois, a three-star restaurant in Paris. The first course was a pâté chaud de bécasses, warm woodcock pâté – roughly equal-sized chunks of woodcock and truffle under a golden pastry crust. The second course was a ragoût de homard – the literal translation, lobster stew, hardly does it justice – medallions of lobster and similarly sized slices of truffle in some delicate pink sauce with tarragon and cognac.

Truffles show up frequently in the dishes at these fancy restaurants. But why?

The Veblen answer is simple – conspicuous consumption (Thorstein Veblen, The Theory of the Leisure Class, 1899). Rich people spend their money conspicuously – in a manner that lets others know they are rich.

Truffles are expensive. They grow underground, usually near oak trees. They cannot be cultivated. The supply is limited to what nature provides (and what the truffle pigs and dogs can find), so the price remains high, very high. Good fresh truffles go for about $500 a pound.

Rich people like truffles, goes the Veblen line, because eating truffles announces to the world (or at least to yourself and those at your table) that you are rich. That’s also why rich people like lobster. Lobster meat runs upwards of $40 a pound.


A more Bourdieu-esque taste of the truffled lobster stew would discern not so much the display of crude financial capital as of cultural capital. In the world of three-star restaurants, food is art. Eating is certainly not the practical matter of allaying hunger and gaining nutrition. As Bourdieu says, it’s about form, about the “aestheticization of practice,” and about the “opposition between the easy and the difficult.” You’re not just eating a good meal; you’re appreciating difficult art, an appreciation possible only for those with sufficient cultural capital.

If only we could do an experiment – vastly increase the supply of truffles and lower the price. It may not happen right away, but an article in the Times holds out some hope. Biologists have decoded the truffle genome and discovered that truffles are sexual.
The precious fungi had long been thought to lead an asexual existence, but Dr. Martin and his colleagues have found that they have two sexes, or mating types.
How much longer before they will be forced to mate in captivity? When that happens, I suspect they will cease to be essentials for the hautest of haute cuisines. The restaurants that serve the rich will move on to some other rare and expensive foodstuff.

That’s what happened with lobsters in the US, though in reverse. Now lobster is a delicacy. But in the early days of the republic, lobsters were plentiful. Consequently they were poor people’s food.
In Massachusetts, some of the servants . . . had it put into their contracts that they would not be forced to eat lobster more than three times a week. (Maine source here.)
Truffles in abundance would no longer be special. The appreciation of dishes cooked with truffles would be open to all; it would be easy, not difficult; it would no longer mark the difference between a gentleman and a bum (with a capital B and that rhymes with T . . .).

Truffles in fact are very much like garlic – a strong and distinctive flavor that can be added to just about any dish except desserts. But foodies don’t go all rapturous just because the chef has blessed some dish with generous amounts of garlic.