In Da Household

April 9, 2010

Posted by Jay Livingston

In my classes about inequality, I often use income data, and I don’t usually think too much about what “income” is. Much of the data is on “household income.” “Median household income” presumably reflects the economic well-being of the typical person in that category. It’s often used to compare different groups or regions or trends over time. In the last Presidential campaign, Democrats often pointed out that in the Bush years, while the rich had gotten much richer, real median household income had fallen.

Since then, the economy has gotten worse. But median household income may nevertheless rise. The problem is in the denominator of the fraction.

Imagine two siblings, each with a home, each with an income of $100,000. So their average personal income and their average household income are both $100,000. Suppose that one of them loses his job, his house is foreclosed on, and he moves in with his sister’s family. Eventually he finds a job paying $50,000 – not enough for him to move out.

Their average personal income in this family has decreased – it’s now $75,000. But since they now have only one household, their average household income has increased to $150,000.
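
Here is that arithmetic as a quick sketch, using just the numbers from the example:

```python
# The same arithmetic as above: when the brother doubles up with his sister,
# the number of households (the denominator) drops from two to one, so mean
# household income rises even though mean personal income falls.

def mean(values):
    return sum(values) / len(values)

# Before: two earners of $100,000, each in a household of their own
personal_before  = [100_000, 100_000]
household_before = [[100_000], [100_000]]

# After: the brother now earns $50,000 and lives in his sister's household
personal_after  = [100_000, 50_000]
household_after = [[100_000, 50_000]]

print(mean(personal_before))                      # 100000.0
print(mean([sum(h) for h in household_before]))   # 100000.0
print(mean(personal_after))                       #  75000.0
print(mean([sum(h) for h in household_after]))    # 150000.0
```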

I don’t know how economists deal with this sort of thing. Surely they must have some way of adjusting for it so that we have a fuller picture of what’s happening with the typical American. And one of the things that is happening is that people are doubling up in their houses.

The number of households is shrinking. Many owners who lose their homes are not moving to rentals. Despite the three million foreclosures, rental vacancy rates – nearly 11% – are higher than they’ve been in at least 50 years. Instead, people are moving in with someone else. The number of people per household is increasing.

The trend started in the latter part of the Bush era. Between 2005 and 2008, 1.2 million households disappeared (a study commissioned by the Mortgage Bankers Association, reported at CNN). So the decline in household income was even worse than the Democrats were making it out to be. The trend no doubt continued in 2009.

It’s something to keep in mind when we look at data on household incomes.

Playing Games with Names

April 7, 2010
Posted by Jay Livingston

The research findings on names, alluded to in yesterday’s post, seem absurd at first glance. Do people really make important life decisions – choices about where to live and what career to follow – because their “implicit egotism” makes a place or profession more attractive if it has echoes of their own names?

Even failure, these psychologists claim, can mesh with this egotism. A study of baseball players across 90 years found that players whose names began with a K were slightly more likely to strike out. The authors (Nelson and Simmons), in a press release, put it this way:
Even Karl ‘Koley’ Kolseth would find a strikeout aversive, but he might find it a little less aversive than players who do not share his initials, and therefore he might avoid striking out less enthusiastically.
The difference is small – 18.8% vs. 17.2% – but statistically significant. Still, I wonder about the studies that the authors didn’t report on. Did they see if those K hitters also had slightly higher rates of hitting home runs? Home runs – four-baggers, blasts, clouts, slams, moonshots – are not brought to you by the letter K. But I think that there may be a correlation between HRs and Ks. If so, those K batters (Kiner, Kingman, Kaline, Killebrew, Kluszewski) are not the guys for whom striking out is ego-syntonic. They’re the muscle boys who swing for the fences. Sometimes they connect, but they also tend to strike out more often.* Perhaps that slight difference in both statistics is a matter of ethnicity rather than egotism.
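
If I were running the check myself, it would look something like the sketch below, written in Python with made-up numbers standing in for the real 90 years of batting records:

```python
# A rough sketch of the check described above, on entirely hypothetical data.
# Given per-batter career totals, compare strikeout rates for K-initial
# batters vs. everyone else, then see whether strikeout rates and home-run
# rates move together across batters.

from statistics import correlation  # Python 3.10+

# Hypothetical batters: (first initial, at-bats, strikeouts, home runs)
batters = [
    ("K", 6000, 1150, 310),
    ("K", 5200,  980, 260),
    ("A", 7000,  700, 150),
    ("B", 6400,  800, 190),
    ("C", 5800, 1050, 280),
]

def rate(count, at_bats):
    return count / at_bats

k_rates     = [rate(k, ab) for ini, ab, k, hr in batters if ini == "K"]
other_rates = [rate(k, ab) for ini, ab, k, hr in batters if ini != "K"]
print(sum(k_rates) / len(k_rates), sum(other_rates) / len(other_rates))

# Do strikeout rates and home-run rates track each other across batters?
ks  = [rate(k, ab)  for ini, ab, k, hr in batters]
hrs = [rate(hr, ab) for ini, ab, k, hr in batters]
print(correlation(ks, hrs))
```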

Something else – the article (or at least the press release about it) reports only on batters. There’s no mention of pitchers throwing strikeouts. Did the authors check to see if a K-hurler was more likely to outfan the rest of the alphabet? Or maybe they did, and just didn’t bother to report the results.*

Then there’s Georgia and Florence. One of the studies reported in the article by Pelham et al. finds that the number of women named Georgia living in Georgia is well above what we would expect; ditto for Florences in Florida.

Is this a Peach State effect or a Southern effect? I grew up in Pennsylvania, and I went to school in Massachusetts. I don’t recall meeting any girls or women named Georgia. And I haven’t encountered any among all the students I’ve taught over the years in New Jersey. I think Georgia is more popular in the South, and my guess is that you’d find the name over-represented not just in Georgia but in South Carolina and Alabama too. As far as I know, the researchers didn’t run that analysis.
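
The analysis I have in mind is simple enough. Here is a rough sketch, with invented counts standing in for real Census name data: compute how common the name is in each state relative to its national share.

```python
# A sketch of the comparison suggested above, on invented counts: is "Georgia"
# over-represented only in Georgia, or across the South generally?
# The index is (share of a state's women named Georgia) divided by the
# national share, so 1.0 means "about what you'd expect."

national_georgias = 40_000
national_women    = 150_000_000

# Hypothetical state counts: (women named Georgia, all women)
states = {
    "Georgia":        (3_000, 4_900_000),
    "South Carolina": (1_500, 2_300_000),
    "Alabama":        (1_600, 2_400_000),
    "Pennsylvania":   (  800, 6_400_000),
    "Massachusetts":  (  400, 3_300_000),
}

national_share = national_georgias / national_women
for state, (georgias, women) in states.items():
    index = (georgias / women) / national_share
    print(f"{state}: {index:.2f}")
```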

Florence is also a southern name, at least in Florida. I’d bet a lot of money that those Florences are not evenly distributed throughout the Sunshine State. You probably won’t find too many of them in Tallahassee or Tampa or Orlando. You have to go farther south, say to Miami. Again, my guess is ethnicity, not egotism.

I’m reminded of the old Carnac joke. Carnac was a character Johnny Carson did on the Tonight Show – the mystic who could divine the answers to questions before he had even seen them. He would say the answer, then open the envelope and read the question that was inside. This one is from 1989, when the S&L crisis was at its peak.

I still remember this one after all these years:

The answer: Venice, Rome, and Florence.
The question: Name two Italian cities and the president of Hadassah.

* TheSocioBlog’s first ever post, inspired by a joke from Kieran Healy’s blog, was about negative results.

I Could Have Been a Sailor

April 6, 2010
Posted by Jay Livingston

My colleague Arnie Korotkin, who, as The Gadfly, blogs about local New Jersey matters, sent me this from today’s Star-Ledger:

N.J. sees rise in vasectomies amid difficult economy
By Kathleen O'Brien/The Star-Ledger
April 06, 2010, 6:30AM

What caught my attention was the doctor’s name. I speak as someone who has heard the same “joke” about my name ever since I was old enough to understand what people were saying. Sometimes just the “I presume,” sometimes with a self-satisfied “heh-heh,” sometimes with an apologetic, “I guess you hear that a lot.”

This poor guy must get tired of the same joke. But he did choose that specialty.

There’s a whole cottage industry in psychology correlating people’s names with their biographies. The idea – which goes by the name of “implicit egotism” – is that people are fond of their own names and that this liking can influence life decisions. Dennis is more likely to become a dentist; George becomes a geoscientist and relocates to Georgia; Laura’s a lawyer. Florence moves to Florida. And Dr. Eric Seaman . . . well, you get the idea.

For more on this, see my earlier blog post on the GPAs of students whose names begin with A and B compared with the C and D students.

The studies are published in respectable psych journals, complete with statistics, parenthetical references (author, year), and academic prose:
Although a high level of exposure to the letters that occur in one’s own name probably plays a role in the development of the name letter effect (see Zajonc, 1968), it seems unlikely that the name letter effect is determined exclusively by mere exposure (Nuttin, 1987).
Even so, these studies get covered in the popular press. And when they do, the probability that the headline will be “What’s In a Name?” approaches 1.0.

If you caught the allusion in the subject line of this post, give yourself five bonus points. It’s a song by Peter Allen; you can see his video of it on YouTube. For a better version, listen here.

Meanness and Means

April 2, 2010
Posted by Jay Livingston

On March 27, the Times ran an op-ed by David Elkind, “Playtime is Over,” about the causes of bullying:

it seems clear that there is a link among the rise of television and computer games, the decline in peer-to-peer socialization and the increase of bullying in our schools.
I was skeptical. Had there really been an increase in bullying? Elkind offered no evidence. He cited numbers for recent years (school absences attributable to bullying), but he had no comparable data for the pre-computer or pre-TV eras. Maybe he was giving a persuasive explanation for something that didn’t exist.

I sent the Times a letter expressing my doubts. They didn’t publish it. Elkind is, after all, a distinguished psychologist and the author of many books on child development. As if to prove the point, three days later, the big bullying story broke. An Irish girl in South Hadley, Massachusetts, committed suicide after having been bullied by several other girls in her high school. The nastiness had included Facebook postings and text messages.

I guess Elkind was right, and I was wrong. Bullying has really exploded out of control in the electronic age.

But today the op-ed page features “The Myth of Mean Girls,” by Mike Males and Meda Chesney-Lind. They look at all the available systematic evidence on nastiness by teenagers – crime data (arrests and victimizations), surveys on school safety, the Monitoring the Future survey, and the CDC’s Youth Risk Behavior Surveillance. They all show the same trend:
This mythical wave of girls’ violence and meanness is, in the end, contradicted by reams of evidence from almost every available and reliable source.
Worse, say the authors, the myth has had unfortunate consequences:

. . . more punitive treatment of girls, including arrests and incarceration for lesser offenses like minor assaults that were treated informally in the past, as well as alarmist calls for restrictions on their Internet use.*
This is not to say that bullying is O.K. and nothing to worry about. Mean girls exist. It’s just that the current generation has fewer of them than did their parents’ generation. Should we focus on the mean or on the average? On average, the kids are not just all right; they’re nicer. Funny that nobody is offering explanations of how the Internet and cell phones might have contributed to this decline in meanness.

*For a recent example, see my post about criminal charges brought against young teenage girls for “sexting,” even though the pictures showed no naughty bits.


UPDATE: At Salon.com, Sady Doyle argues that Lind and Males looked at the wrong data.

Unfortunately, cruelty between girls can't really be measured with the hard crime statistics on which Males and Lind's argument relies. . . . Bullying between teenage girls expresses itself as physical fighting less often than it does as relational aggression, a soft and social warfare often conducted between girls who seem to be friends. You can't measure rumors, passive-aggressive remarks, alienation and shaming with statistics.
She has a point. While most of the evidence Males and Lind cite is not “hard crime statistics,” it does focus on overt violence. But Doyle is wrong that you can’t measure “relational aggression.” If something exists, you can measure it. The problem is that your measure might not be valid enough to be of use.

If Doyle is right, if nonphysical bullying hasn’t been measured, that doesn’t mean that Males and Lind are wrong and that bullying has in fact increased. It means that we just don’t know. We do know that physical violence has decreased. So here are the possibilities.

  1. Physical and nonphysical aggression are inversely related. Girls have substituted nonphysical aggression for physical aggression – social bullying has increased.
  2. Less serious forms of aggression usually track with more serious forms (nationwide, the change in assault rates runs parallel to the change in murder rates). So we can use rates of physical aggression as a proxy for rates of bullying – social bullying has decreased.
  3. Physical and nonphysical aggression are completely unrelated, caused by different factors and found in different places – the change in social bullying is anybody’s guess.