It's Your Decision

April 26, 2010
Posted by Jay Livingston

The sitcoms of my youth often featured a plot line that might be called “It’s Your Decision.” Typically, a teenager, though sometimes a younger child, faces some moral dilemma:
  • go out with the handsome football player even though it means breaking a previous commitment to your best friend?
  • help your friend keep his drug use a secret or betray him and let grown-ups know about it?
  • take the offer of a wonderful summer in a villa on the Riviera with a rich friend or stay home and help out the family, who really need you right now?
Inevitably, at about five minutes before the end of the half hour, the child finally discusses the dilemma with the parent. And inevitably, after a thoughtful and sympathetic analysis of the problem, Cosby or Roseanne or whoever says, “I can’t tell you what to do. It’s your decision.”

Despite this disclaimer, the parent’s desires are quite obvious, and the child, no surprise here, always winds up making the right decision. (And, this being American TV, after the child makes the right decision, some deus ex machina Hollywood ending removes the cost of the decision. It turns out that her friend had always wanted her to go out with the football player anyway and has brought him along; the rich friend invites the whole family to the Riviera, etc.)

I was reminded of this plot-line when I started reading Claude Fischer’s new blog, Made in America,* a companion to his new book of the same name. In his most recent post, he takes on the question of American individualism. The post is a response to a right-wing song of praise to American exceptionalism by conservatives Richard Lowry and Ramesh Ponnuru writing in the National Review. America, they say, is “freer, more individualistic, more democratic, and more open and dynamic than any other nation on earth.”

Is it? Fischer provides some data from international surveys showing that when you compare the US to some of these other nations, we’re not always the most individualistic. Given a choice between individual conscience on the one hand and the law on the other, Americans are nowhere near the top in choosing individual conscience. In other hypothetical conflicts – individual vs. country, individual vs. marital norms, individual vs. conventional morality – several European countries are more on the side of the individual than are Americans. In fact, in all these cases, fewer than half of us put the individual ahead of the group. (See all of Claude’s graphs here.)

(Click on the chart for a larger view.)

How can we keep insisting that we are individualistic – the most individualistic nation on earth?

Fischer offers some ways to explain the apparent contradiction, and the one he seems to favor is that America combines individualism with “voluntarism.”
Voluntarism is about being part of a community, but belonging voluntarily. Americans have long held that people can and should join or leave groups – families, congregations, clubs, townships, and so on – of their individual free will.
But once you voluntarily join a group, you must play by its rules. “Love it or leave it.”

I’ve commented elsewhere in this blog that a substantial part of the right wing in the US applies this ethic only to Democrats, not to themselves. When conservatives are in power, they see dissent as disloyalty (“Love it or leave it”), but when Obama is president, dissent (i.e., their dissent) becomes the higher patriotism.

Fischer’s formulation resolves the apparent contradiction between our professed love of liberty and our conformity, but only if you recognize that the voluntarism isn’t really all that voluntary. As de Tocqueville noted 170 years ago:
I know no country in which there is so little true independence of mind and freedom of discussion as in America. In any constitutional state in Europe every sort of religious and political theory may be advocated and propagated abroad. . . . In America the majority raises very formidable barriers to the liberty of opinion: within these barriers an author may write whatever he pleases, but he will repent it if he ever step beyond them. (DIA, Vol. 1, Ch. XV.)
O.K. I know de Tocqueville seems like a long way from Cosby. But here’s my point. Other cultures – especially stable, aristocracy-like cultures – allow the individual a wide range of opinions and behaviors. They can tolerate eccentricity because the group is certain that the individual will never defect. He will always be a member of his class or his family. But in America, affiliation is voluntary and therefore not necessarily permanent. Because of our ideology of individual freedom, we must worry that the person who does not love the group might well leave it. So we must be on guard against anything that hints at deviation from the group's norms.

The more efficient solution is socialization – to instill in people both the desire to conform and the idea that their choices are their own and not influenced by the group. That's basically the message of Cosby and the other sitcom parents.**

As David McClelland*** put it years ago, the American is saying to himself, “I want to freely choose to do what others expect me to do.”

It’s your decision.

Yeah, right.
-----------------------------

* Hat tip: Chris Uggen.

** Not all sitcom parents are like Cosby. We also have the sitcom tradition of dad-as-buffoon, a TV line that extends back from Homer Simpson to Chester Riley. It probably has roots in radio and even earlier media.

*** I’m pretty sure that Claude Fischer began his graduate studies in Harvard’s Social Relations department when McClelland was on the faculty there, but I somehow doubt that their paths crossed very often.

Big Pink

April 24, 2010
Posted by Jay Livingston

Palermo is cleaning AC Milan’s clock, 2-0 in the first half. I don’t usually watch soccer – in fact, I’ve never seen either of these teams play before – but when I’m grading papers, it helps to have a sport that you don’t really have to watch, a sport where scoring is rare.

It wasn’t Palermo’s goals or outstanding saves that caught my attention. It was their uniforms.


If there were to be a new NFL franchise, what are the chances that its uniforms would be pink? Or a new NBA or MLB franchise? Maybe you have to be from Sicily to get away with this (“You got a problem with pink?”). More likely, pink doesn’t have the meaning there it has here.

In the US today, pink is the color of girls and of preppy guys who buy their shirts at Brooks Bros. It is definitely not the color of jocks. (I think that there are some evol-psych types who argue that seeing pink as feminine is part of human nature.) But pink hasn’t always had that meaning even in the US. Take a look at some of the pink posts by Lisa and Gwen at Sociological Images (here and here, for example).

Hail to Thee, Gay Spirit

April 23, 2010
Posted by Jay Livingston

I know it’s unfashionable in the sociology of culture to look to the mass media for signs of the zeitgeist (spirit of the times). But could it be mere coincidence that on the very same day (today) that the New York Times crossword uses the word “gaydar” for the first time (definition: “Sense of orientation”)

(Click on the image for a larger view.)

we also get the news that Riverdale High – which Archie, Betty, Jughead, Veronica, and the rest have been attending in blissful heteronormativity all these years – will soon have an openly gay student?

Up Against the Wall Street Banks

April 22, 2010
Posted by Jay Livingston

President Obama could have stayed in Washington today to make his pitch for financial regulation of banks. But he decided to make the trip up to New York, and I think I know why. It’s all about “Wall Street.”

Words matter, as Bill Clinton recently said, alluding to an earlier speech by Barack Obama. Words matter to survey researchers too, not just to Democratic presidents. Ask people about the rights of “gays and lesbians,” you get one answer. Ask about “homosexuals,” you get less support. (See my earlier post here.)

Now it’s the banks. Should they be regulated? If you ask about “large banks and financial institutions,” the difference between Favor and Oppose is negligible* – three percentage points. But if you ask about regulating “Wall Street Banks” the difference jumps to 14 points – 50% in favor, 36% opposed. (From a recent Gallup poll.)

The choice of words matters, but it matters more to some people than others. Mostly Republicans. When you break down the Gallup data by political affiliation, the results look like this.

(Click on the chart for a larger view.)

For Democrats and Independents, the choice of words makes little difference.** A big bank is a big bank regardless of its address and regardless of what you call it. But not for Republicans. They are generally against regulation, of course. Only 22% of Republicans want to regulate “large banks,” and 70% oppose such regulation. But ask them about “Wall Street banks,” and regulation begins to look better. Thirty-five percent are in favor. And opposition drops to barely half (53%).

I leave it to readers to speculate as to why Republicans find that large banks are so much more in need of regulation when those banks are located in New York’s Wall Street district. But no wonder that the world’s largest banking group, HSBC, has been promoting itself, in the US at least, not as a multinational giant but as a repository of local values.


* That’s within the poll’s margin of error of 4 points (at the 95% confidence level).
** The differences are within the margin of error, which is 5 points in the split-sample questions.
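
Where do margins of error like that come from? Here is a back-of-the-envelope sketch in Python. The half-sample sizes below are my own assumptions (the poll’s exact n’s aren’t given here), and the formula is just the standard 95% margin of error for a sample proportion; the point is only that a few hundred respondents per question wording puts the margin in the 4-to-5-point range.

import math

def moe_95(p, n):
    # approximate 95% margin of error for a sample proportion p with n respondents
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (400, 500, 600):  # hypothetical half-sample sizes in a split-sample question
    print(n, round(100 * moe_95(0.5, n), 1))  # worst case, p = 0.5: about 4.9, 4.4, 4.0 points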

Who Do You Trust?

April 20, 2010
Posted by Jay Livingston

Last August, I said here that the TeaBaggers and other angry Republicans didn’t really believe in democracy. They don’t seem to accept the idea that the person who gets the most votes gets to be president. To them, Obama is an illegitimate usurper. He’s not the “real” president. And the people who voted for him are not “the real America.”

The recent Pew report on trust in government provides some support for this idea. The report also shows that these sentiments didn’t arise just with the Obama presidency. Generally, Democrats are more accepting of Republican administrations than are Republicans of Democrats.
The Pew survey did not ask about “legitimacy.” Instead, the key question is about “trust.” It asks whether the government in Washington can be trusted to do the right thing. The graph below shows the percent who answered either “always” or “most of the time.”


The trend is downward, and Democrats and Independents are generally not far apart in their levels of trust. Republicans are subject to greater mood swings.

Pew suggests that the low level of trust will translate into a loss of seats for incumbents come November. Political scientists, like John Sides at The Monkey Cage, suspect that if the economy bounces back, people will become more trusting of incumbent Democrats, or at least trust them enough to vote for them.
If the economy is the key, the Democrats may not have to fix the government to mitigate their losses in 2010, simply preside over an improving economy.
But when I look at the graph, I see Republicans whose trust of Republicans and distrust of Democrats seem impervious to economic winds. Look at the two previous presidencies. The Clinton era was, by any economic measure, a period of great economic prosperity. The Bush years, by contrast, started with a recession, continued with flat or slightly declining income for all but the top earners, and ended with the worst economic disaster since the Depression. Yet Republicans were twice as trusting of George W. Bush as they were of Bill Clinton.

April Showers / Finishing the Hat

April 19, 2010
Posted by Jay Livingston

Religion, says Durkheim, is all about group solidarity. Religious rituals both reflect and create this sentiment of unity and group feeling. The central ritual symbols, notably the group totem and objects imbued with its spirit, are really representations of the group. These objects are of the group, created by the group, and for the group – the group and not its individual members.

I don’t usually think of my world as particularly totemistic or even very religious – certainly not compared with the spiritually charged world inhabited by the members of the clans Durkheim was thinking about, with their churinga and other sacred objects. But I was at a baby shower yesterday, and the day before that, my wife went to a bridal shower. And both of these featured the Ceremony of the Hat.

This is a rite practiced by females in North America, particularly those of European descent, when they gather to celebrate one of their number who is in a state of transition – from single to married, from childlessness to motherhood. OK, no need to go all Horace Miner Nacirema here; most people know the drill. As the woman being honored unwraps her gifts, someone gathers the discarded ribbons and threads them into a paper plate or in some other way creates a hat, which the honoree then models.


(Click on the image for a larger view. Want to see more examples?
Search for “bridal shower hat” at Google Images.)

No doubt, showers have a very rational, utilitarian component. The bride-to-be or mother-to-be gets a lot of stuff that she’ll need in her new role. The online registry has rationalized the process even further, aligning demand and supply. No surprises. Gift-giving has become predictable, controlled, calculable (“number desired,” “number received”), and efficient.

So what’s up with the hat? I didn’t ask, but if I had, the explanation would surely have been along the lines of “Oh, it’s just silly, it’s just for fun.” But Durkheim, lurking in the far corner of the party room, sees something else. The shower is not just a party for the future bride or mom; it’s a ritual, and as such it is for the group itself. These people come together from their disparate daily lives, and at least temporarily they are united into something that transcends any individual.

The hat symbolizes the group – woven together from each person’s ribbon into a single unified and extraordinary object. If you’re at a shower and you have your camera, you might take a picture of the linens or lingerie, the porta-crib or Pat the Bunny. Or you might not. But you always take a picture of the hat.

Tax Day Post - Taxes On Parade

April 15, 2010
Posted by Jay Livingston

Greg Mankiw opened his copy of Parade on Sunday, and he didn’t much like what he saw. It was the “Annual Salary Survey,” and – surprise, surprise – readers saw a lot more rich celebs than they would have seen just by walking around their neighborhoods. Yes, Parade was guilty – it “oversampled” the rich and the famous.
about 14 percent of the people in Parade's sample earn more than $1 million a year. In the real world, the actual percentage is about 0.2 percent.

Even worse than Parade’s methodology was its pernicious effect.
There is a common perception in some circles that we can solve all our fiscal problems if only we were willing to tax the rich some more. Yet, in reality, there are not enough rich for this to work. By presenting such a skewed cross-section of incomes, Parade inadvertently feeds an all-too-common misperception.
Now Greg Mankiw is a respected (and rich) economist, and I’m sure he doesn’t go making statements that can’t be supported by evidence. But this one seems awfully vague. These unidentified “circles” – what are they, and how large are they? Just how common is this “all-too-common misperception”?

I also wonder how much power Parade has over public perceptions. Mankiw notes that Parade has a circulation of 32 million – all those folks who, just like Greg himself, find it folded into their Sunday newspaper along with the coupons for Pop Tarts and Febreze. Do we really know what impact Walter Scott and Marilyn and the rest have had in shaping the American consciousness? (Surely someone has done this research. I just wish Greg had linked to it.)

Justin Wolfers at Freakonomics has the more important criticism: when you are deciding who to tax, the important variable is not numbers of people but numbers of dollars. So maybe the “misperception” is not really amiss.
Families earning more than $1 million probably do represent close to 14 percent of total income, and maybe more. By arguing that only 0.2 percent of families are this rich, Mankiw risks distracting his readers from the fact that increasing the taxes paid by the rich can be a big part of the solution to our fiscal woes.

Visualizing TV Viewers - Sports and Politics

April 14, 2010
Posted by Jay Livingston

How do you turn data into a good graph? Of course you could ask flâneuse. But suppose you wanted to do it yourself.

Here are the results of a study on preferences in TV sports and in politics – 218,000 interviews conducted over a 13-month period. I’m not sure what the questions were that determined the Democratic and Republican index. The other variables, “Likelihood of voting” and being “very interested” in watching the sport on TV, are fairly straightforward.

The data in the table are sorted on the politics column (R-minus-D Index). PGA golf has the most Republican audience, WNBA the most Democratic.

(Click on the image for a larger view.)

How would you graph the data?

Here’s one possibility, found at dqydj (which stands for “Don’t quit your day job,” but you knew that already, didn’t you?).


(Click on the image for a slightly larger view.)

Blue bars represent political leaning – the difference between the GOP and Democratic indices. Green bars show likelihood of voting. Sports are listed on the x-axis.

I prefer this one, found here.

(Click on the image for a larger view.)

For more on creating visualizations, go to Many Eyes, which has a ton of data sets to play around with.
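
And if you wanted to roll your own version of the dqydj-style chart, a minimal matplotlib sketch is below. The numbers are hypothetical placeholders, not the figures from the table; the only point is the layout – paired bars per sport, one for political leaning, one for likelihood of voting.

import matplotlib.pyplot as plt
import numpy as np

sports = ["PGA golf", "NFL", "MLB", "NBA", "WNBA"]   # a few of the sports in the table
lean   = [30, 10, 5, -15, -25]                       # hypothetical R-minus-D index values
voting = [85, 70, 72, 60, 55]                        # hypothetical likelihood-of-voting scores

x = np.arange(len(sports))
width = 0.4

fig, ax = plt.subplots()
ax.bar(x - width / 2, lean, width, color="tab:blue", label="R-minus-D index")
ax.bar(x + width / 2, voting, width, color="tab:green", label="Likelihood of voting")
ax.axhline(0, linewidth=0.8, color="black")          # zero line separates the two leanings
ax.set_xticks(x)
ax.set_xticklabels(sports, rotation=45, ha="right")
ax.legend()
plt.tight_layout()
plt.show()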

(Hat tip: Andrew Gelman)

Do I Call You “Doctor” or “Professor”?

April 11, 2010
Posted by Jay Livingston

Fabio had a great post a few days ago at orgtheory about how authority structures in a hospital can be hazardous to your health. Even fatal. In the incident Fabio recounts, an anaesthesiologist could see that the patient was having a bad reaction to the surgeon’s latex gloves. But the surgeon refused to switch to non-latex gloves. Surgeons outrank anaesthesiologists, and if the anaesthesiologist had not resorted to extreme measures (threatening to disrupt the surgery in order to call the hospital administrators), the patient would have died.

My first reaction on reading this was: What an incredible asshole the surgeon must be. But Fabio’s point is that the problem is organizational, not personal.* The authority structure of the hospital creates an institutionalized arrogance among doctors. The hazardous result is that information doesn’t flow upward from those “on the ground.” (Fabio’s post is here. The comments are also well worth reading.)

Then Fabio asks:
On a deeper level, what sort of organization would allow people to develop such toxic relationships?
And his speculative answer is:
. . . .a combination of high professional autonomy and a garbage can structure. Hospitals, as far as I can tell, aren’t organizations that make one product with a centrally controlled assembly line. Instead, they are a place were “problems” (patients) drift from place to place (ICU, regular, OR, etc) where they might be “solved” (stop showing symptoms) by some random assortment of people who have limited attention (the physicians, nurses, and surgeons). Each physician isn’t in charge of a patient, they do specific procedures and pass the problem along to other people.
Hmmm. Something about that description sounded familiar. So I tried a few “global replace” edits, and it came out like this:
. . . a combination of high professional autonomy and a garbage can structure. Universities are not organizations that make one product with a centrally controlled assembly line. Instead, they are a place where “students” drift from course to course where they might be “taught” by some random assortment of people who have limited attention. Each professor isn’t in charge of a student; they teach specific courses and pass the student along to other people.
I am not saying that universities are necessarily like this. But they do have the potential to resemble what Fabio calls a “toxic culture.”

* The book this anecdote comes from is Safe Patients, Smart Hospitals by Peter Pronovost. Note – not Safe Patients, Good Doctors. The problem lies in the institutional arrangements, not the individuals who work in the institution.

In Da Household

April 9, 2010
Posted by Jay Livingston

In my classes about inequality, I often use income data, and I don’t usually think too much about what “income” is. Much of the data is on “household income.” “Median household income” presumably reflects the economic well-being of the typical person in that category. It’s often used to compare different groups or regions or trends over time. In the last Presidential campaign, Democrats often pointed out that in the Bush years, while the rich had gotten much richer, real median household income had fallen.

Since then, the economy has gotten worse. But it’s possible that household income may rise. The problem is in the denominator of the fraction.

Imagine two siblings, each with a home, each with an income of $100,000. So their average personal income and their average household income are both $100,000. Suppose that one of them loses his job, his house is foreclosed, and he moves in with his sister’s family. Eventually he finds a job paying $50,000 – not enough for him to move out.

Their average personal income in this family has decreased – it’s now $75,000. But since they now have only one household, their average household income has increased to $150,000.
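
The arithmetic is worth spelling out – a quick sketch of the two-sibling example, using the numbers from the paragraph above:

# Before doubling up: two households, one $100,000 earner in each.
before_people     = [100_000, 100_000]
before_households = [100_000, 100_000]

# After doubling up: the same two earners, but one now makes $50,000
# and lives in his sister's house, so there is only one household.
after_people     = [100_000, 50_000]
after_households = [100_000 + 50_000]

print(sum(before_people) / len(before_people))          # 100000.0
print(sum(after_people) / len(after_people))            #  75000.0  -> personal income falls
print(sum(before_households) / len(before_households))  # 100000.0
print(sum(after_households) / len(after_households))    # 150000.0  -> household income rises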

I don’t know how economists deal with this sort of thing. Surely they must have some way of adjusting for it so that we have a fuller picture of what’s happening with the typical American. And one of the things that is happening is that people are doubling up in their houses.

The number of households is shrinking. Many owners who lose their homes are not moving to rentals. Despite the three million foreclosures, rental vacancy rates – nearly 11% – are higher than they’ve been in at least 50 years. Instead, people are moving in with someone else. The number of people per household is increasing.

The trend started in the latter part of the Bush era. Between 2005 and 2008, 1.2 million households disappeared (study commissioned by the Mortgage Bankers Association reported at CNN). So the decline in household income was even worse than the Democrats were making it out to be. In 2009, the trend no doubt continued.

It’s something to keep in mind when we look at data on household incomes.

Playing Games with Names

April 7, 2010
Posted by Jay Livingston

The research findings on names, alluded to in yesterday’s post, seem absurd at first glance. Do people really make important life decisions – choices about where to live and what career to follow – because their “implicit egotism” makes a place or profession more attractive if it has echoes of their own names?

Even failure, these psychologists claim, can mesh with this egotism. A study of baseball players across 90 years found that players whose names began with a K were slightly more likely to strike out. The authors (Nelson and Simmons), in a press release, put it this way:
Even Karl ‘Koley’ Kolseth would find a strikeout aversive, but he might find it a little less aversive than players who do not share his initials, and therefore he might avoid striking out less enthusiastically.
The difference is small – 18.8% vs. 17.2% – but statistically significant. Still, I wonder about the studies that the authors didn’t report on. Did they see if those K hitters also had slightly higher rates of hitting home runs? Home runs – four-baggers, blasts, clouts, slams, moonshots – are not brought to you by the letter K. But I think that there may be a correlation between HRs and Ks. If so, those K batters (Kiner, Kingman, Kaline, Killebrew, Kluszewski) are not the guys for whom striking out is ego-syntonic. They’re the muscle boys who swing for the fences. Sometimes they connect, but they also tend to strike out more often.* Perhaps that slight difference in both statistics is a matter of ethnicity rather than egotism.

Something else – the article (or at least the press release about it) reports only on batters. There’s no mention of pitchers throwing strikeouts. Did the authors check to see if a K-hurler was more likely to outfan the rest of the alphabet? Or maybe they did, and just didn’t bother to report the results.*

Then there’s Georgia and Florence. One of the studies reported in the article by Pelham et al. finds that the number of women named Georgia living in Georgia is well above what we would expect; ditto for Florences in Florida.

Is this a Peach State effect or a Southern effect? I grew up in Pennsylvania, and I went to school in Massachusetts. I don’t recall meeting any girls or women named Georgia. And I haven’t encountered any among all the students I’ve taught over the years in New Jersey. I think Georgia is more popular in the South, and my guess is that you’d find the name over-represented not just in Georgia but in South Carolina and Alabama too. As far as I know, the researchers didn’t run that analysis.

Florence is also a southern name, at least in Florida. I’d bet a lot of money that those Florences are not evenly distributed throughout the Sunshine State. You probably won’t find too many of them in Tallahassee or Tampa or Orlando. You have to go farther south, say to Miami. Again, my guess is ethnicity, not egotism.

I’m reminded of the old Carnac joke. Carnac was a character Johnny Carson did on the Tonight Show – the mystic who could divine the answers to questions before he had even seen them. He would say the answer, then open the envelope and read the question that was inside. This one is from 1989, when the S&L crisis was at its peak.


I still remember this one after all these years:

The answer: Venice, Rome, and Florence.
The question: Name two Italian cities and the president of Hadassah.




* TheSocioBlog’s first ever post, inspired by a joke from Kieran Healy’s blog, was about negative results.

I Could Have Been a Sailor

April 6, 2010
Posted by Jay Livingston

My colleague Arnie Korotkin, who, as The Gadfly, blogs about local New Jersey matters, sent me this from today’s Star-Ledger:

N.J. sees rise in vasectomies amid difficult economy
By Kathleen O'Brien/The Star-Ledger
April 06, 2010, 6:30AM

What caught my attention was the doctor’s name. I speak as someone who has heard the same “joke” about my name ever since I was old enough to understand what people were saying. Sometimes just the “I presume,” sometimes with a self-satisfied “heh-heh,” sometimes with an apologetic, “I guess you hear that a lot.”

This poor guy must get tired of the same joke. But he did choose that specialty.

There’s a whole cottage industry in psychology correlating people’s names with their biographies. The idea – which goes by the name of “implicit egotism” – is that people are fond of their own names and that this liking can influence life decisions. Dennis is more likely to become a dentist; George becomes a geoscientist and relocates to Georgia; Laura’s a lawyer. Florence moves to Florida. And Dr. Eric Seaman . . . well, you get the idea.

For more on this, see my earlier blog post on the GPAs of students whose names begin with A and B compared with the C and D students.

The studies are published in respectable psych journals, complete with statistics and references (author, year) in parentheses and academic prose:
Although a high level of exposure to the letters that occur in one’s own name probably plays a role in the development of the name letter effect (see Zajonc, 1968), it seems unlikely that the name letter effect is determined exclusively by mere exposure (Nuttin, 1987).
Even so, these studies get covered in the popular press. And when they do, the probability that the headline will be “What’s In a Name?” approaches 1.0.

If you caught the allusion in the subject line of this post, give yourself five bonus points. It’s a song by Peter Allen; you can see his video of it on YouTube. For a better version, listen here.

Meanness and Means

April 2, 2010
Posted by Jay Livingston

On March 27, the Times ran an op-ed by David Elkind, “Playtime is Over,” about the causes of bullying:

it seems clear that there is a link among the rise of television and computer games, the decline in peer-to-peer socialization and the increase of bullying in our schools.
I was skeptical. Had there really been an increase in bullying? Elkind offered no evidence. He cited numbers for current years (school absences attributable to bullying), but he had no comparable data for the pre-computer or pre-TV eras. Maybe he was giving a persuasive explanation for something that didn’t exist.

I sent the Times a letter expressing my doubts. They didn’t publish it. Elkind is, after all, a distinguished psychologist, author of many books on child development. As if to prove the point, three days later, the big bullying story broke. An Irish girl in South Hadley, Massachusetts, committed suicide after having been bullied by several other girls in her high school. The nastiness had included Facebook postings and text messages.

I guess Elkind was right, and I was wrong. Bullying has really exploded out of control in the electronic age.

But today the op-ed page features “The Myth of Mean Girls,” by Mike Males and Meda Chesney-Lind. They look at all the available systematic evidence on nastiness by teenagers – crime data (arrests and victimizations), surveys on school safety, the Monitoring the Future survey, and the CDC’s Youth Risk Behavior Surveillance. They all show the same trend:
This mythical wave of girls’ violence and meanness is, in the end, contradicted by reams of evidence from almost every available and reliable source.
Worse, say the authors, the myth has had unfortunate consequences:

. . . more punitive treatment of girls, including arrests and incarceration for lesser offenses like minor assaults that were treated informally in the past, as well as alarmist calls for restrictions on their Internet use.*
This is not to say that bullying is O.K. and nothing to worry about. Mean girls exist. It’s just that the current generation has fewer of them than did their parents’ generation. Should we focus on the mean or on the average? On average, the kids are not just all right; they’re nicer. Funny that nobody is offering explanations of how the Internet and cell phones might have contributed to this decline in meanness.

*For a recent example, see my post about criminal charges brought against young teenage girls for “sexting,” even though the pictures showed no naughty bits.


UPDATE: At Salon.com, Sady Doyle argues that Lind and Males looked at the wrong data.

Unfortunately, cruelty between girls can't really be measured with the hard crime statistics on which Males and Lind's argument relies. . . . Bullying between teenage girls expresses itself as physical fighting less often than it does as relational aggression, a soft and social warfare often conducted between girls who seem to be friends. You can't measure rumors, passive-aggressive remarks, alienation and shaming with statistics.
She has a point. While most of the evidence Males and Lind cite is not “hard crime statistics,” it does focus on overt violence. But Doyle is wrong that you can’t measure “relational aggression.” If something exists, you can measure it. The problem is that your measure might not be valid enough to be of use.

If Doyle is right, if nonphysical bullying hasn’t been measured, that doesn’t mean that Males and Lind are wrong and that bullying has in fact increased. It means that we just don’t know. We do know that physical violence has decreased. So here are the possibilities.

  1. Physical and nonphysical aggression are inversely related. Girls have substituted nonphysical aggression for physical aggression – social bullying has increased.
  2. Less serious forms of aggression usually track with more serious forms (nationwide, the change in assault rates runs parallel to the change in murder rates). So we can use rates of physical aggression as a proxy for rates of bullying – social bullying has decreased.
  3. Physical and nonphysical aggression are completely unrelated, caused by different factors and found in different places – the change in social bullying is anybody’s guess.

You Got Truffles, My Friends

April 2, 2010
Posted by Jay Livingston

I first tasted truffles at Vivarois, a three-star restaurant in Paris. The first course was a pâté chaud de bécasses, warm woodcock pâté – roughly equal-sized chunks of woodcock and truffle under a golden pastry crust. The second course was a ragoût d’homard – the literal translation, lobster stew, hardly does it justice – medallions of lobster and similarly sized slices of truffle in some delicate pink sauce with tarragon and cognac.

Truffles show up frequently in the dishes at these fancy restaurants. But why?

The Veblen answer is simple – conspicuous consumption. Thorstein Veblen, Theory of the Leisure Class, 1899. Rich people spend their money conspicuously – in a manner so that others will know they are rich.

Truffles are expensive. They grow underground, usually near oak trees. They cannot be cultivated. The supply is limited to what nature provides (and what the truffle pigs and dogs can find), so the price remains high, very high. Good fresh truffles go for about $500 a pound.

Rich people like truffles, goes the Veblen line, because eating truffles announces to the world (or at least to yourself and those at your table) that you are rich. That’s also why rich people like lobster. Lobster meat runs upwards of $40 a pound.


A more Bourdieu-esque taste of the truffled lobster stew would discern not so much the display of crude financial capital as of cultural capital. In the world of three-star restaurants, food is art. Eating is certainly not the practical matter of allaying hunger and gaining nutrition. As Bourdieu says, it’s about form, about the “aestheticization of practice,” and about the “opposition between the easy and the difficult.” You’re not just eating a good meal; you’re appreciating difficult art, an appreciation possible only for those with sufficient cultural capital.

If only we could do an experiment – vastly increase the supply of truffles and lower the price. It may not happen right away, but an article in the Times holds out some hope. Biologists have decoded the truffle genome and discovered that truffles are sexual.
The precious fungi had long been thought to lead an asexual existence, but Dr. Martin and his colleagues have found that they have two sexes, or mating types.
How much longer before they will be forced to mate in captivity? When that happens, I suspect they will cease to be essentials for the hautest of haute cuisines. The restaurants that serve the rich will move on to some other rare and expensive foodstuff.

That’s what happened with lobsters in the US, though in reverse. Now lobster is a delicacy. But in the early days of the republic, lobsters were plentiful. Consequently they were poor people’s food.
In Massachusetts, some of the servants . . . had it put into their contracts that they would not be forced to eat lobster more than three times a week. (Maine source here.)
Truffles in abundance would no longer be special. The appreciation of dishes cooked with truffles would be open to all, it would be easy, not difficult; it would no longer mark the difference between a gentleman and a bum (with a capital B and that rhymes with T . . .).

Truffles in fact are very much like garlic – a strong and distinctive flavor that can be added to just about any dish except desserts. But foodies don’t go all rapturous just because the chef has blessed some dish with generous amounts of garlic.