Cool Tool

November 21, 2006

Posted by Jay Livingston

Google Trends has information about the number of Google searches by time and place. If you go to http://www.google.com/trends and enter "turkey," you'll see a graph that looks like this (I've limited it to the US).

Not too surprising. The second line, below the search line, is the trend line for news stories mentioning the word. Of course, you can't be sure whether the newswriters and googlers were curious about recipes or about vacations in Istanbul.

I plugged in "Durkheim" and got this.

Not much interest in Durkheim during the summer. But comes the new semester, I guess I'm not the only one starting with social facts and suicide. Interesting that the sharp differences of 2004 and 2005 aren't repeated in 2006. Could it mean that sociology enrollments are down? Or that more students took sociology in the summer?
(Or it could be an artifact of sampling. Google does not use the total of all searches but selects a sample, though they won't tell you how they arrive at that sample.)

The results also show the top cities in the search— those with the highest percentage of searches for your keyword relative to the total of all searches from that city. Cambridge, MA came in first for Durkheim. But the city with the highest percentage of searches on "sociology" is Piscataway. Somebody help me out here. What's up with Piscataway and sociology?
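
For what it's worth, here is a minimal sketch of how that "normalized" ranking works: rank each city by the keyword's share of that city's total searches rather than by raw search volume. The numbers below are invented purely for illustration — Google doesn't publish its underlying counts.

```python
# A minimal sketch of "normalized" city rankings. The search counts here are
# made up for illustration; Google does not publish its underlying numbers.
searches = {
    # city: (searches containing the keyword, total searches from that city)
    "Piscataway, NJ": (450, 90_000),
    "New York, NY": (9_000, 12_000_000),
    "Los Angeles, CA": (7_000, 11_000_000),
}

# Rank by the keyword's share of each city's searches, not by raw volume.
ranked = sorted(searches.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for city, (hits, total) in ranked:
    print(f"{city}: {hits / total:.3%} of that city's searches")
```

With numbers like these, a small town with a big university can easily outrank New York or Los Angeles, even though its raw number of searches is tiny.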


Top cities (normalized)

1. Piscataway, NJ, USA
2. Madison, WI, USA
3. Cambridge, MA, USA
4. Columbus, OH, USA
5. Baltimore, MD, USA
6. Honolulu, HI, USA
7. Raleigh, NC, USA
8. Philadelphia, PA, USA
9. New York, NY, USA
10. Los Angeles, CA, USA


Borat, Milgram, Goffman

November 18, 2006
Posted by Jay Livingston
I showed the Milgram film in class last week—the film Stanley Milgram made of his famous experiments on “obedience to authority.” In the experiment, subjects are asked to deliver very painful and even apparently fatal shocks to a person in another room. When we discussed the ethics of the experiment, I drew an analogy to the Borat film, especially the amount of deception.

Both Borat (i.e., Sacha Baron Cohen in character) and Milgram lie about who the people involved really are and about what’s really happening. Borat is not really a Eurasian journalist making a documentary; in Milgram’s experiment, the “learner” supposedly receiving the shocks is not really a volunteer, and the experiment isn’t about learning. Both Borat and Milgram lie to their subjects about the true purpose of the project. It is not about the things taking place around the subject (a dinner party, a comedy coaching session, or a learning experiment); what it’s really about, and what the camera is zooming in on, is the reactions of the subjects themselves.

The two projects are similar not just in their ethically questionable methods but in their results. What both movies show is the power of social norms, the unwritten rules of everyday politeness.
Borat and Milgram can get away with their outrageous questions, requests, or behavior because people are just too polite to tell them that they are way out of line.

The rules of everyday politeness also require that both people in an interaction must agree as to when it ends. (Try breaking off a conversation with someone who wants to continue. It’s not so easy.) So once Borat’s victims have committed themselves to the interaction, which always starts out being normal enough, they can’t figure out how to end it even when Borat’s behavior goes far beyond the bounds of good taste. The humor, like that of the old TV show “Candid Camera,” depends on people continuing to try to be polite even when circumstances would seem to call for confrontation and even when that politeness makes them increasingly uncomfortable.

The same goes for Milgram’s subjects. The experiment starts off quite normally— no howls of pain for the low-voltage shocks— and the subjects become committed to their place in the situation. The norm against breaking up the interaction kicks in. One subject shown at length in the film says to the experimenter, “I don’t mean to be rude, Sir, but . . . .” To us watching the film, it seems ridiculous that he’s apparently less affected by the extreme pain, injury or death of someone in the next room than he is by the possibility of being rude to the experimenter a few feet away. But that’s because we don’t realize the power of the norms in the immediate situation.

The other unwritten rule that enables Milgram and Borat (and Ali G and “Candid Camera”) to take things so far is this: don’t question what someone says he is, at least not
without very, very strong information to the contrary. (This insight is the basis for one of the classic books in sociology, The Presentation of Self in Everyday Life, by Erving Goffman.)

Borat presents himself as a very naive Eurasian journalist trying to learn about America. To act towards him as though he were an uncouth fool — even though he’s behaving like one — would be an insult. Milgram says in effect that this is a learning experiment. To discontinue the experiment would be saying in effect, “You’re not really the psychology researcher you say you are. You don’t know how to run an experiment.” Yes, some people discontinue the experiment, and no doubt some people didn’t go along with Borat (though of course they get edited out of the film). But even those brave people must still overcome the pull of very strong norms.

As in other scams, the set-up is crucial. For the game to work, Borat (like Ali G and Milgram) must first get the other person to commit himself to the interaction and to accept Borat for what he claims to be. For the scammer, going in cold may be risky, as Sacha Baron Cohen found out two weeks ago. After doing Saturday Night Live as Borat, he went out in New York still in character with fellow Brit Hugh Laurie. They were on the street in the Village when Cohen, with no set-up, approached a stranger and reportedly said, “I like your clothings. Are nice! Please, may I buying? I want to have sex with it...your clothings...very much.”
The guy began punching Cohen and didn’t stop till Laurie came and pulled him off (making a House call, I guess).


Political Football

November 14, 2006
Posted by Jay Livingston


Thinking back on the Democratic sweep of a week ago, I now realize that I should have seen it coming last year during football season. It was the year of the Steelers.
I don’t mean anything silly, like the idea that the Superbowl forecasts the stock market— if the NFC team wins, buy; if the AFC wins, sell. It’s worked about three-quarters of the time, but if it’s anything more than coincidence, it’s mostly because the NFC has won more often than the AFC, and the stock market has gone up more often than down.

But the link between the Steelers and the election may be real. It wasn’t that the Steelers won the Superbowl; it was that somehow along the way, they had become “America’s Team.”

That title used to belong to the Dallas Cowboys. I imagine that some PR person for the Cowboys dreamed up the phrase, but it was true in a way. The Cowboys weren’t really America’s team so much as they were what we might now call the Red States’ Team. Through a wide swath of the South and West, people rooted for the Cowboys, mostly because football fans had no other good pro team to root for, maybe no team at all.

Today, fans in places like Arizona, North Carolina, and Tennessee have local teams. Not so in the 1960s and 70s. And the teams that did make their home in the South and West were in the AFC. On Sunday, NBC would broadcast the local AFC team (Broncos, Dolphins). But the CBS affiliate would be broadcasting the NFC, and usually it was the Cowboys.

So the people who listened to Country & Western on the radio watched the Cowboys on TV. Rooting for Dallas was easy in those days. The Cowboys were good. They went to the Superbowl four times in the 1970s, winning twice. Beyond the won-lost record, they had an image, a brand. The Cowboys represented the individualist strain in
American culture. The Cowboys were Texas, the land of big thinking, big opportunity, and every man for himself. They were rugged, independent, a football version of the Marlboro man. And just as Americans bought Marlboro cigarettes, America also bought a lot of Cowboys jerseys and other paraphernalia. For a while, the Cowboys alone accounted for 30% of all NFL merchandise sales.

As the red states got more NFL teams, the Cowboys’ position as “America’s Team” started to fade. There were teams closer to home to root for, and the Cowboys’ performance in the past few years hasn’t exactly been the kind that makes distant fans remain loyal.

The Steeler brand is something else entirely. If the Cowboys were the team of the Sun Belt, the Steelers are the team of the Rust Belt. Pittsburgh produces very little steel these days. The economy of the region is dominated by medical complexes. That and unemployment. But the team is still called the Steelers, not the Medics, and it still represents the values of an industrial past. Steelworkers are working class wage earners bringing home a paycheck. Their families depend on the New Deal kind of government they pay taxes to or the union they are part of to help protect them from the uncertainties of life — sudden turns of fortune like layoffs at work and serious illness at home. These people stress the public and collective over the private and individual. Remember, the Steelers’ powerful running back Jerome Bettis was not called the SUV or the Pick-up Truck; he was public transportation, The Bus.

Is there a parallel in the election? We all know that people were voting against Republican policies in Iraq and against Republican sleaze. But Democrats weren’t just non-Republicans. Many of the Democrats who won ran as economic populists. They support policies that benefit ordinary people and perhaps cut into the profits of corporations. One of the first things the new Democratic Congress will do is pass an increase in the minimum wage. They will also try to change the new Medicare law to allow the government to negotiate with drug companies to get lower prices, something forbidden under the Republicans’ Medicare bill.

In 2005, the Steelers became America’s team. They won the Superbowl. But more tellingly, Americans, voting with their wallets, bought more Steeler merchandise than that of any other team. Nine months later, Americans voted for a congressional majority that could easily be wearing black and gold under their red, white, and blue.

(An ironic footnote: The election did feature one actual Steeler. Lynn Swann, the great receiver for the great Steeler teams of the seventies, ran for governor of Pennsylvania as a Republican. He lost badly.)




They Blew It

November 13, 2006

Posted by Jay Livingston
“Victory has a hundred fathers, but defeat is an orphan.” It’s not clear who originally said this. JFK used it after the disastrous Bay of Pigs invasion. I’m surprised this quote hasn’t turned up again now that even the Bush administration is all but admitting that Iraq is pretty much what The Daily Show has been labeling it all along — a mess (“Mess-o-potamia”). Or worse.
The question is no longer how to achieve “victory”— after the election, that word has disappeared quickly from official talk— but which policy will give the least bad results.
Proponents of the war—the neoconservatives who, from in and out of government, pushed hard for the invasion— are starting to sing the chorus of “Don’t Blame Me.” In the latest issue of Vanity Fair, several neo-con biggies who have since left government insist that the invasion was a great idea. The trouble is that those incompetents in the Bush administration, including the president, botched the way that idea was put into action.
Here’s Richard Perle, a member of something called the Defense Policy Board, who pushed long and hard for the invasion: “I'm getting damn tired of being described as an architect of the war. I was in favor of bringing down Saddam. Nobody said, ‘Go design the campaign to do that.’ I had no responsibility for that.”
What about the millions of Americans who have supported the war, who talked about victory, who voted for Bush in 2000 and 2004? Will they similarly be able to distance themselves from their earlier enthusiasms and blame everything on the people they elected?
When I was in graduate school I remember hearing about a study on pronouns. The researchers called students at a large university, one of those places where football was very important, and asked them about the game. When the team won, students usually used “we.” When the team lost, the students used “they.” The perfect example came from one student after a disappointing loss: “We were winning up until the fourth quarter; then they blew it.”

"I Want to Be a Part of It. . . "

November 9, 2006
Posted by Jay Livingston
“Who voted?” I asked in class today. One student. And this was in New Jersey where the race for Senate looked to be close enough that your vote might have made a difference. One. The others were too busy.

I voted. I live in New York, where none of the races was going to be close. I knew my vote didn’t mean a thing. But I voted. I wonder why. Not out of civic duty or a belief that my vote will influence policy or any of those other reasons you learn in school.

Why do I vote, I asked myself. Then I remembered that “why” is the wrong question. Start with the other “reporter’s” questions – who, what, where, when, how. Get good answers to those, and you’ll be much closer to answering why.

What do I do when I vote; where and how do I do it?

I live in New York City. In my precinct, you vote in an old building in a drab room with dull lighting and a coffee-stained linoleum floor. Usually, people are waiting in line, most of them people you’ve never seen, but you chat and joke with them. The voting booths and machines are the old kind with a curtain —an old piece of canvas that, if you thought about it or looked at it closely, you wouldn’t want to spend too much time touching. Inside the booth is the machine. You push the big lever to the right, then you flip down the little levers beside the candidates’ names, then you pull the lever back to the left, and that’s it.

Every time I do it, I think – and sometimes I make this comment to the person next to me in line– that these are probably the same machines people voted on to elect LaGuardia mayor in 1934.

As I was thinking about this now, I realized that I felt good about this whole scene. I liked it. I liked the dirty floors, I liked standing there with these strangers. I liked it because even though we were strangers, even though we might be voting for different people (not really all that likely in my precinct), we were all there together as New Yorkers. I liked thinking that I was connected with New Yorkers and New York elections going back to Fiorello (who, by the way, was dead long before I ever set foot in the city). It’s the sense of being part of something that I want to be part of.

I was talking about this with a friend, and he had the same reaction. He said that when he votes, it always takes him back to the first time he voted. It was the Oregon Democratic primary in 1968. He voted for Bobby Kennedy against Hubert Humphrey and Eugene McCarthy. Kennedy didn’t win in Oregon, McCarthy did. But a couple of weeks later, Kennedy went to California, and on the night that he won that primary, he was assassinated. My friend’s point is that his vote then connected him with an event of historical importance. And now when he votes, he still feels he’s connecting to history.

I think that’s why I vote and why my students don’t. Older people feel more of a connection to history. I know I feel that connection much more now than when I was in my twenties.

But the larger point is that voting is not a rational act, or at least not completely and not always. It’s not a logical means towards some specific goal (like putting the people you like in office). It’s more about how you feel. If you don’t feel connected to the dominant institutions and the history of the country, come election day there will be something else
you feel emotionally closer to, and you’ll probably be “too busy” to vote.

The Old Rugged Cross Pressure

November 6, 2006
Posted by Jay Livingston
I don’t know if Paul Lazarsfeld (1901-1976) invented survey research and applied sociology, but he was certainly one of the most important figures in those fields. Everyone who does voter surveys today owes Lazarsfeld, big time. As we go into tomorrow’s election, I keep wondering about the Republican “base,” the Christian conservatives or conservative Christians, the “values voters” who have provided the Republicans not just votes but much of the campaign work force. And I keep remembering Lazarsfeld’s concept of “cross pressures,” something he developed back in the 1940s.

Lazarsfeld thought you could make a pretty good prediction about how someone would vote if you knew about certain demographic markers — income, occupation, religion, urban or small town, etc. Often, these characteristics tended to cluster, especially in the 1940s with the dominance of the Roosevelt coalition. But what about the person who belonged to groups that pulled in different ways — the small-town Protestant (Republican pressure) who had a blue-collar union job (Democratic pressure)? Lazarsfeld’s answer was that these voters tend to make up their minds later in the campaign, and sometimes they resolve their conflict by just not voting at all.
The conflict for the conservative base today is not so much between group affiliations or demographic categories but between image and reality. They have supported the war in Iraq, but more and more the reality in Iraq makes the war seem to have been a bad idea. They support the President and they support the military. But they may also hear that many generals and the military newspapers want Rumsfeld to resign while Bush wants Rumsfeld to stay. These conservatives are against sex outside of marriage, especially when it involves minors or homosexuals or both. But each week seems to bring some new scandal about homosexuality or infidelity, and the perpetrators and their protectors are Republicans.

Obviously, the Republican leadership is worried about these pressures and about the response that Lazarsfeld would predict
— staying home on election day. From the top of the party on down, GOP professionals are trying to make sure that their traditional voters come out. It’s not about converting Democrats or persuading the Independents and undecided. It’s about making sure that the hard core keep the faith, that they do not give in to cross pressures and just avoid the voting booth.

The election is no longer about issues; it’s about turnout. And that’s what a lot of people — the politicos, the network analysts, me — are going to be looking at tomorrow.

Cheating the Executioner

November 5, 2006
Posted by Jay Livingston

Saddam Hussein was sentenced to death today. No doubt, he will be under close watch to make sure that he does not kill himself.

It’s called “cheating the executioner.” It's a phrase you hear when a murderer shoots himself just as the cops are closing in on him. Or when a prisoner on death row dies of some disease while his case is still pending. It cropped up in the news two weeks ago when a death-row prisoner in Texas, Michael Johnson, committed suicide the day before he was to be executed. He cut his own throat and used the blood to write “I didn’t do it” on his cell wall.

The headline in the Washington Times (the online version at least) was “Death Row Inmate Cheats Executioner,” and some other papers had similar wording. That headline, along with the reported detail that death-row inmates are checked on every fifteen minutes, tells us a lot about the real reasons for the death penalty, and they are not the ones usually given.

One rationale for the death penalty is that it saves innocent lives. Supposedly, it deters other potential murderers. Or it “incapacitates” the executed murderer so that he can’t kill again. In reality, there’s not a lot of convincing data to support the idea that executions have any impact on murder rates. But evidence is not really relevant because most death-penalty supporters base their opinions not on the practical effects of executions but on principles of justice and morality: a person who commits a horrible crime does not “deserve” to live. It’s a matter of right and wrong, and regardless of the impact on future murders, it would be wrong to let him live.

If the criminal’s death were the central issue, as it is in these three rationales, it wouldn’t matter how he died; he would still have been removed from society. So we are not looking at a simple rational process. The irrationality is clear in the standard death-row procedure of the 15-minute suicide watch. If the guards had caught Mr. Johnson in time, the best medical help would have been called and no effort spared to save his life. Then, weeks or months later, when he had recovered, the state would kill him.

Why does the state go to such extraordinary lengths —checking every fifteen minutes— to make sure that some condemned man doesn’t pull a fast one and kill himself? Why, when death comes by suicide or cancer rather than execution, do some people feel “cheated”? What were they cheated of?

The answer is clear. The death-row suicide deprives us of only one thing: the chance to inflict the punishment ourselves via our representative the executioner. The importance of the execution is not the effect it has on the criminal — that effect is the same regardless of the cause of death — but its effect on society, on those who carry out the execution. It allows them to dramatize that they and their morality are in control. It draws a clear line, with “us” on the good side and the criminal on the other.

This is the logic behind President Bush’s characterizing the 9/11 bombers as “cowards.” It was not only that they killed unsuspecting civilians. They also cheated us of the privilege of trying and executing them, of showing them who was boss and who was right. The trial, sentence, and execution would have drawn that line between us and them, between good and evil, a line which the president and many other Americans desperately wanted to draw. No doubt, many Iraqis— and Americans— will feel the same way about Saddam.

By executing the criminal, the “good” people confirm their own virtue. Any other form of death cheats them of this occasion to feel good about themselves and secure in their morality.

Can We Talk?

November 1, 2006
Posted by Jay Livingston


The news today is that North Korea has agreed to sit down in talks about their nuclear bomb. North Korean leader Kim Jong-il (son of former leader Kim Il Sung) had previously demanded that the US talk with North Korea one-to-one, but US leader George W. Bush (son of former leader George Bush) had refused. Lil’ Bush refused direct talks and insisted that four other countries had to be there. Lil’ Kim eventually caved, probably because China was threatening to cut off his oil.

North Korea isn’t the only country we won’t talk to directly. Syria, Iran, maybe others. As with North Korea, if we’re going to communicate with them at all, we need other countries as intermediaries to relay the messages.

When I was a kid, I would sometimes have a dispute with one of my brothers, and we’d get so angry, we’d refuse to talk to each other. At the dinner table, I’d say something like, “Tell Skip that if he doesn’t give back my racer, I’m not going to tell him where I hid his airplane.” My mother would dutifully turn to her right and repeat the message, as though my brother hadn’t been right there to hear it. Then she’d do the same with his answer. You see similar scenes in sitcoms and movies. Maybe it happened in your family too.

In real life, at least in my house, it never lasted long. Everyone would see how stupid it was, how impossible to sustain, and usually we’d wind up dissolving in laughter at how ridiculous we were.

I imagine our ambassador turning to the Chinese representative and saying, “You tell North Korea that we aren’t going to give it any food unless they stop making bombs.” China turns to North Korea, just as my mother turned to my brother, and repeats the same message. Then North Korea says to China, “Yeah, well you tell the US . . . .” and so on. That’s pretty much what these countries have been doing anyway, though without actually sitting down in the same room.

When people insist on this “I’m not talking to him” charade, we call it childish and silly. When nations do it, we call it foreign policy.

(Full disclosure: I think I may be borrowing — i.e., stealing— this observation from something I heard Philip Slater say many years ago.)

Halloween

October 30, 2006
Posted by Jay Livingston
They were lined up down the street to get into Ricky’s this afternoon, all the last-minute costume buyers. Costumes are bought nowadays. Almost nobody has a homemade costume, even kids. In more and more areas of our lives, we are now consumers where we used to be producers. Fewer meals cooked at home, more eaten in restaurants or bought in a store and microwaved. Nobody has the time, buying is so much more convenient, and besides, the people who specialize in making these things make them better than we can. My neighborhood grocery store sells pumpkins already painted with faces. You don’t even have to carve your own.

The odd thing is that even though the costumes are better, they’re not as much fun. I’d rather open my door and see kids in costumes their parents patched together from odd clothes and stuff they had lying around the house. A professional ninja or princess costume doesn’t just substitute cash for creativity, it depersonalizes; anybody with the $34.99 can have the same costume (made in China). And many do. Stores have sold out of the popular costumes.

I like to think of holiday celebrations as islands of community, where things are personal, created and controlled by the group of people involved. But Halloween (and perhaps other holidays) is becoming standardized, controlled by the costume industry. It has become McDonaldized.
So it’s not just the witches and vampires that come out at Halloween; you can also see social trends and themes, like McDonaldization. Parental protectiveness is another. Back in the day (my day at least), kids went out trick-or-treating, and parents stayed home. Now, trick-or-treaters making their rounds have parents following along lest some stranger kidnap their child. And at the end of the night parents inspect the kids’ haul for suspicious looking treats. We’ve all heard the stories about razor blades in apples, LSD on decals, poison in candy.

Twenty years ago, sociologist Joel Best investigated all the reported incidents of “Halloween sadism” that he could find in the press, and he concluded that Halloween sadism is best seen as an urban legend. He’s updated his research and found nothing to change his original view. Yet parents still go out of their way to guard against unknown, sinister evildoers who would harm their kids. No doubt, many of these are the same parents who buy their kids a skateboard or put a swimming pool in the back yard.

Mendacity

October 27, 2006
Posted by Jay Livingston
“Mendacity,” says Big Daddy in “Cat on a Hot Tin Roof.” I was channel surfing tonight and watched some of the 1958 film version on TCM.

The few Tennessee Williams plays I’ve seen all follow the same pattern: the principal characters, usually a family, are all hiding important facts about themselves; they have agreed not to see the obvious truths and to let one another live these lies. Then something happens — an outsider not in on the game arrives or some event blows someone’s cover — and the whole fabric starts to unravel. Pathetically or viciously, they begin to reveal one another’s secrets, and the characters must face what they had tried so long to avoid. Big Daddy’s cancer, Blanche’s promiscuous past, what really happened on that fateful day long ago, etc. The plays leave you wondering how these characters will go on with their shattered lives now that they no longer have the fictions— the lies and mendacity— which kept them afloat for so long.

The real-life play of the White House and Iraq seems to be following a similar dramatic arc. A majority of Americans have long since concluded that the war was a terrible mistake, a mistake based at best on faulty intelligence and at worst on outright mendacity. Now, the Administration itself can no longer maintain the false facade. Generals have been giving grim assessments, and these have made it into the news. Even the president admits that things are going badly, that we can no longer “stay the course,” and that some kind of change is required. If the Democrats win control of Congress, they will be in a position to investigate and reveal even further unpleasant truths that the Republicans have suppressed. The folks in Washington have begun to resemble the characters in a Tennessee Williams play.

The troubling difference is that when the play is over, you leave the theater, and you don’t really have to worry about what will become of these characters. They have no existence beyond the end of the last act. But while the voters may ring down the curtain on the characters who brought us this war, the disaster that they created in Iraq will remain.

Maybe the Tennessee Williams-style plot is typical of American culture, maybe not. But many observers have noted our characteristically American preference for not thinking so much about the past but rather looking optimistically to the future. We also tend to view the world as a story, and we don’t like difficult or unhappy endings.
So this election or the next one in 2008 will be The End of our drama. The show is over and we can head for the exit. Troops will be withdrawn, and the media will lose interest in what happens in some strange and complicated foreign land. But in real life, the war will have consequences far into the future — for our economy, for our position in the world. The trouble is that Americans in 2010 or later may not be able to see the connection between those problems and the events of 2003-2006. The Iraq war? That play closed a long time ago.

Who's to Blame, Who's in Control, Who Is the Accused?

October 26, 2006
Posted by Jay Livingston
An Islamic priest in Sydney, Australia, Sheik Al Hilaly, had some controversial things to say about rape recently.
If you take out uncovered meat and place it outside on the street, or in the garden or in the park, or in the backyard without a cover, and the cats come and eat it ... whose fault is it, the cats or the uncovered meat. The uncovered meat is the problem. If she was in her room, in her home, in her hijab, no problem would have occurred.
There’s a history to the story. In 2000, several Muslim men were convicted for a series of gang rapes of white women. They received very harsh sentences. Whites were angered by the rapes; some Muslims, like Sheik Al Hilaly, were angered by the sentences:

She is the one who takes her clothes off, cuts them short, acts flirtatious, puts on makeup, shows off, and goes on the streets acting silly. She is the one wearing a short dress, lifting it up, lowering it down, then a look, then a smile, then a word, then a greeting, then a word, then a date, then a meeting, then a crime, then Long Bay Jail, then comes a merciless judge who gives you 65 years.
The story reminded me of the 1983 gang rape in New Bedford, Massachusetts — the incident that became the basis for the 1988 film “The Accused,” with Jodie Foster as the victim— and not just because it raises the question of who is being accused. In New Bedford, as in Sydney, the rapists were from an ethnic minority. They were first-generation Portuguese. But the Portuguese are a fairly large ethnic minority in that area, and many turned out at organized marches in support of the rapists.

One woman was quoted in the paper as saying, “I am Portuguese and proud of it. I’m also a woman, but you don’t see me getting raped. If you throw a dog a bone, he’s gonna take it — if you walk around naked, men are just going to go for you.” Nearly identical to the Sheik’s cat analogy 23 years later. And a Catholic priest, foreshadowing today's Muslim priest, said, “The girl is to blame. She led them on.”

There’s much to be said about the element of ethnic relations—dominant culture and minority group— but it’s the “blame the victim” idea that interests me. To put the blame and responsibility on the women you have to assume that men just cannot control themselves. They act purely on the basis of instinct, like animals.

This view of men may seem to contradict the image of Muslim men who are so controlled that they keep a strict diet, avoiding pork and alcohol, pray five times a day on cue, and willingly live under other constraints that we would find intolerably oppressive. But the contradiction is only apparent. It’s not a question of the presence or absence of control but where that control is located — inside the individual or outside in the situation and the group. In stable, traditional societies, life’s situations are predictable and under control. People can rely on the others around them to keep their impulses from leading to dangerous actions. But in modern, mobile individualized society, we don’t have the comfort of these external restraints. We have had to develop an elaborate set of internal controls that will keep us in check regardless of the situation.

For people with less internalized controls, it must seem incredible that people can live in a world filled with so much sexual stimulation and opportunity and yet not take action. So they fight sexuality wherever it becomes publicly visible, as when John Ashcroft, US attorney general in the first years of the Bush administration, had a cloth draped over the exposed breast of a statue in the Justice Department. The slightest hint of sex might cause men to lose control.

But the conservatives, Muslim and Christian, are fighting a losing, rear-guard action. They are right that sexual mores are becoming more liberal. There’s more sex in the media, especially with the Internet. Clothes are more revealing than ever. (Two summers ago, the Times had an article on back-to-school shopping by mothers and their teenage daughters in the liberal New York suburbs. The fashions preferred by the girls ran towards what their mothers referred to as “hookerwear.”)

The change, as with so many other fashions, seems to have filtered downward through the social class system, starting near the top. Co-ed dormitories, for example, appeared first at elite schools in the early 1970s; now it’s hard to find a campus that doesn’t have them. And along with the liberalization, the reliance on internal controls seems to have followed the same paths of diffusion through the society. How else to explain the decline in rape over the last fifteen years?

In the short run, the sexual conservatives may win a few rounds, passing a law here or there. But younger people do not share their attitudes, and over time, the rigid external sexual controls will become sort of like Buicks, heavy and bulky mechanisms possessed by fewer and fewer people all getting grayer and grayer.


Starbucks as the Habermasian Public Sphere?

October 23, 2006
Posted by: Yasemin Besen

This weekend the New York Times had an interesting article on the new marketing of the Starbucks coffee chain as a coffeehouse. The stores are marketed not as uniform, standardized assembly lines of coffee, but as local, community-based cultural spaces where books, movies, and music are discussed. Recently, they have sponsored “salon events,” where community-based, not-so-mainstream authors, musicians, and artists read their books, sang, and talked while sipping coffee (I can’t deny enjoying Jonathan Lethem’s book reading). This new positioning of Starbucks as a local, cultural space where intellectual, rational conversation takes place reminds me of Habermas.

A student of the Frankfurt School and a critic of capitalism and its discontents, Jurgen Habermas developed the concept of the public sphere in The Structural Transformation of the Public Sphere (1962), where he explores the role of individuals in the practice of democracy and social change. He defines the public sphere as "made up of private people gathered together as a public and articulating the needs of society with the state." People gathering in pockets of society have the power, through dialogue and discussion, to critique the discontents of capitalism and to voice their opinions. This rational, critical discourse is the very essence of democracy. The public sphere, in the Habermasian sense, started to emerge in the 18th century with voluntary associations, literary groups and organizations, and, most importantly, coffeehouses.

Now we see a very different form of the coffeehouse: a national chain of “coffeehouses” that markets the public sphere as a consumption item to be enjoyed with a tall, skim, no-foam latte. I wonder whether Habermas could have predicted that the public sphere of rational discussion would itself be commodified, packaged, and sold along with the music, the books, and the coffee that make the public sphere possible.

My Ethnocentrism

October 22, 2006
Posted by Jay Livingston

What you’re reading on your screen right now probably got there via a broadband connection. Nobody uses phone modems anymore; everybody’s got DSL or cable broadband, right? And the US is way ahead of other countries on this score, right?

Wrong. A former chair of the FCC, William Kennard, noted in a Times op-ed piece yesterday, “Since 2000, the United States has slipped from second to 19th in the world in broadband penetration, with Slovenia threatening to push us into 20th.”

I must admit that I was surprised . . . not this time, since I'd heard this before. But I was surprised when I first read about this a few months ago, when the US had just fallen out of the top ten. I had just assumed that the US had more technology than any other country.

I guess the lesson here is that even social scientists can fall prey to ethnocentrism — the “we’re number one” mentality. (When Bush, in one of the 2004 Presidential debates, said, “America’s health care system is the envy of the world,” nobody challenged him on it.) Or was I falling victim to the sampling error of personal experience? After all, I have high-speed access —at home, at work, at Starbuck’s— and so does everybody I know.

Or maybe my assumption was that Capitalism, the Market, the Invisible Hand, would work best; companies smelling profits would all be intent on bringing the latest technology to as many people as possible.

So why has the US fallen so far down on the list from its earlier high rank? Kennard gives us a pretty good clue: “Studies by the federal government conclude that our rural and low-income areas trail urban and high-income areas in the rate of broadband use. Indeed, this year the Government Accountability Office found that 42 percent of households have either no computer or a computer with no Internet connection.”

Most of the countries that have higher percentages of their populations with broadband are more urbanized than is the US. Iceland, probably not the country that pops into your mind when you think of high-tech and the Internet, ranks third. But it’s over 90% urban. Sweden, Belgium, the UK, and others — all are more urbanized than the US. And broadband providers can reach more people when those people live closer together in cities and not on farms.

But income also matters. Canada’s percent urban is the same as that of the US, but it’s in the top ten on broadband. The same is true for Norway and Japan. But in these countries, the people at the lower ends of the income distribution are not as far away from middle and upper incomes as are the poor in the US. On income inequality, we're number one.

Maybe I was only half right about US capitalism (and capitalism generally). Yes, it’s a very good system for producing more and better stuff. But when it comes to distributing that stuff, the invisible hand deals the good cards to the players with the large chip stacks and is content to ignore others.

Narrowband is all right for text, but that's not where the Internet is going. For things like music and video, the road hogs of the information highway, you need broadband. So when you’re creating that video to upload to YouTube, you might think about adding a soundtrack in Slovenian.

Asking About Housework

October 20, 2006
Posted by Jay Livingston
The working mother — how does she find the time? Did being a worker mean that she would spend less time being a mom? A new study by Suzanne Bianchi finds that contrary to expectations some years back, work did not reduce the time mothers spent with their kids. In fact, parents — both moms and dads— are spending more time with their kids than parents did in previous generations. What’s been cut back is housework. (NPR did a longish (16 minute) report on the study — an interview with one of the authors, calls from listeners— which you can find here.)
There’s much to be said and blogged about Bianchi’s findings, but I want to make one small methodological observation, something I’ve mentioned to students. Some questions have a built-in “social desirability” bias. Suppose you want to know about reading habits. It’s socially desirable to have read more books (at least I hope it still is), so if you ask “How many books do you read a year?” or “How many books did you read last year?” you’re likely to get something higher than the actual number. Instead, you ask, “Did you read a book last week?” A person who never reads a book might be reluctant to say that he hadn’t read a single book last year. But there’s no social stigma attached to not having read a book last week.
The same thing goes for housework and parenting. Ask me how many hours I spend on housework and childcare each week, and even though, as a good friend of social research, I’d try to be accurate, I’d probably be accurate on the good side. So, as the Times reports, “Using a standard set of questions, professional interviewers asked parents to chronicle all their activities on the day before the interview.” (The study notes that we dads are doing more of both than fathers of only a few years ago.)
(More later. Right now, I have to put the wash in the drier, start making dinner, and help my son with his homework.)

The New York Walk

October 16, 2006

Posted by Jay Livingston

The New York Walk last Saturday. Here we are, some of us, after lunch, sitting on a bench in Tompkins Square Park in the East Village.

(Left to right: Yasemin, George, Tracy, Marisella, Nila, Laura, Jay)


Some of the people on that bench have been doing this walk for thirty years; others were first-timers. That difference became a theme running through my mind for much of the afternoon, for I realized that a lot of the time, even when our eyes were focused on the same sights, we weren’t really seeing the same thing.

My first inkling of this came soon after we left Port Authority. Our first stop was a flea market on a closed-to-traffic block of 39th St. One of the booths was selling old magazines, and two of our younger walkers and I were leafing through them looking at ads and photos from the 1950s and 60s. For the twenty-year-olds, this was history. Even though these girls pointed out the style elements that had come back into fashion, they might as well have been looking at pictures of Marie Antoinette. But for me it was memory, not history. I knew those clothes, those news stories, those celebrities. They were part of my biography. C. Wright Mills says that sociology is about the intersection of history and biography, and here it all was on W. 39th St.

The same was true of New York geography. If you see Times Square for the first time, it’s very impressive— the buildings, the lights, the tourists. But I wasn’t seeing just those things. I was also remembering what it used to look like ten or twenty or thirty years ago. I wished that I could have shown these kids pictures so that they too could understand the transformation and think about how it had happened.

Sometimes the historical juxtapositions are there for all to see. The New York Public Library at 42nd and Fifth, one hundred years old—broad steps and stone lions outside, and inside the feeling of something vast and solid. You can’t help feeling that you’ve walked into another century. In the reading room upstairs (what library today would have such high ceilings on an upper floor?), on the long, oak tables, you don’t see many books, just flat-screen computer monitors.

Grand Central, too, is nearly a hundred years old. The main room looks and feels largely unchanged, but downstairs, the old walls house a food court with several interesting fast-food restaurants.

(Officially it’s called the “dining concourse” so as to distinguish it from the food court at a suburban mall.) The most fascinating thing for sociology walkers seemed to be the “whispering gallery.”

From Grand Central, we took a very crowded subway down to Astor Place. Tower Records nearby is going out of business. The name says it all: they are a victim of technological change. The change from records (LPs) to CDs was minor; you still have to buy some physical object in a store. But with CDs giving way to MP3 downloads online, it's game over.

The East Village is a study in urban transformation. Here are Laura and Nila at Pommes Frites, a hole-in-the-wall on Second Ave at 8th St. that sells nothing but French fries. Lots of them. As much as 1000 pounds a day.

It’s Nila’s first time here, but Laura grew up just a couple of blocks away long before gentrification came to the East Village. Back in the day, Pommes Frites would have been unthinkable here. And the trendy and pricey restaurants you see now at every corner down here — fuggedaboudit.


Death, Statistics, and Politics

October 13, 2006

Posted by Jay Livingston


It’s hard to wrap your mind around large numbers, especially when they refer to things you’re not familiar with. I remember when my son was learning about dinosaurs in kindergarten (when did dinosaurs become such an important part of the early curriculum anyway, and why?). I couldn’t really grasp the difference between “165 million years ago” and “65 million years ago,” even though a difference of 100 million years is a long time and even though it made quite a difference for the dinosaurs— a difference between being dominant and being extinct.

A couple of days ago, the British journal The Lancet published a study estimating that 600,000 Iraqis had died from violence since the U.S. invasion three and a half years ago. That works out to 470 deaths a day, every day. The confidence interval was broad, so the low-end estimate was “only” 420,000, a number which still sounds incredibly large. (That confidence interval also means that the upper limit of 790,000 was as likely as the low-end figure.)
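
Just to make the per-day arithmetic explicit (this is my own back-of-the-envelope calculation, not anything from the study itself):

```python
# Convert the study's totals into deaths per day over the roughly 3.5 years
# since the invasion. Rough arithmetic only; the study reports totals, not rates.
days = 3.5 * 365

for label, estimate in [("central", 600_000), ("low end", 420_000), ("high end", 790_000)]:
    print(f"{label}: about {estimate / days:.0f} deaths per day")
# central: about 470, low end: about 329, high end: about 618
```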

Obviously there are political implications in the Iraqi death rate. Arguments in favor of the war would seem a bit weaker if the blessings of liberty which the US invasion brought to Iraq also included far more violent death than Iraqis suffered under Saddam. Here’s President Bush at a press conference the same day.

QUESTION: A group of American and Iraqi health officials today released a report saying that 655,000 Iraqis have died since the Iraq war. That figure is 20 times the figure that you cited in December at 30,000. Do you care to amend or update your figure? And do you consider this a credible report?

BUSH: No, I don’t consider it a credible report. Neither does General Casey and neither do Iraqi officials. . . . But this report is one -- they put it out before. It was pretty well -- the methodology is pretty well discredited.

QUESTION: So the figure's 30,000, Mr. President? Do you stand by your figure, 30,000?

BUSH: I, you know, I stand by the figure a lot of innocent people have lost their life.

Bush is, of course, wrong. The methodology (“cluster sampling”) has not been discredited. Even so, many people find it hard to imagine the 470-a-day figure. Yes, the news carries reports of violent death every day, but the numbers are smaller. Today’s news reports eleven killed at a satellite TV station. Sometimes the numbers are as high as 50. But 470 every day —is that plausible?

Yes. The news rarely reports on the killings outside of Baghdad, and it rarely reports on isolated killings of smaller numbers of people. Baghdad (about 100 violent deaths a day) is the largest city, and it holds 10% of the Iraqi population. But there are many, many other cities; some of them even make the news reports occasionally — Fallujah, Baquba, Ramadi. Blogger Juan Cole notes that the authorities in Basra admitted last May that people there were being assassinated at the rate of one an hour, 24 a day. And none of those deaths was reported in the US news (or any other Western press).
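
Here is one rough way to see why 470 a day isn’t implausible. The assumption that the rest of the country is only half as violent per capita as Baghdad is mine, purely for illustration; the Lancet team used cluster sampling, not this kind of extrapolation.

```python
# Back-of-the-envelope plausibility check, not the study's method.
baghdad_deaths_per_day = 100   # roughly what the news reports for Baghdad
baghdad_pop_share = 0.10       # Baghdad holds about 10% of Iraq's population

# Suppose (my assumption, for illustration) the other 90% of the country
# suffers violence at only half Baghdad's per-capita rate.
rest_per_day = (baghdad_deaths_per_day / baghdad_pop_share) * 0.90 * 0.5

print(baghdad_deaths_per_day + rest_per_day)   # 550.0 -- already above 470
```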

You can’t use news stories to arrive at statistical estimates. That’s why you need science-based techniques like cluster sampling. The results may at first seem hard to imagine —science always has a hard time when it comes up against “common sense”— but it’s also hard to imagine such a thing as “165 million years ago.” Which may be part of the reason that most Americans don't believe in evolution either.

AK-47s -- "Not uncommon"







October 9, 2006

Posted by Jay Livingston


Another school shooting, this one in Joplin, Missouri. The shooter was a thirteen-year-old Columbine wannabe. Fortunately, the gun jammed after the first shot, which was fired at the ceiling, so the only damage was a burst water pipe.
Living in the Northeast, I sometimes forget what the rest of the country is like. What really struck me was not that it was another school shooting— like most other good Americans, I’ve gotten used to these. It was these sentences from the AP story:

The student left, and officers arrested him behind a nearby building. Police described his weapon as a Mac-90, a replica of an AK-47 assault rifle. . . . .
Jones said the gun belonged to the boy's parents. Farmer said it is not uncommon for people in the area to own assault weapons.
Not uncommon! Lots of folks have them. I wonder what they use these military assault rifles for.

It’s also interesting that these latest school shootings have not brought any calls for stricter gun laws. Polls consistently show a majority of Americans in favor of stricter gun controls. But gun laws, especially in the last 15-20 years, have basically been written by the NRA, even though these policies reflect a minority view. Most of the country has come to accept these laws as inevitable and therefore not worth talking about. In the latest spate of shootings, very few, if any, of the news reports asked the NRA to defend its position. It’s as though the issue, for better or worse, has been settled and that the availability of very deadly weapons is sort of like the weather, a condition nobody can do anything about.

The CBS news website has a nice interactive map — click on a state and see a brief description of its rules on gun licenses, sales, etc., and its rate of firearms deaths. New York and New Jersey have about 5 gun deaths for every 100,000 residents. States in the West and South, with more guns and fewer gun laws, have gun death rates double or in some cases triple that.

http://www.cbsnews.com/stories/2000/04/17/eveningnews/main185126.shtml

Small Town Life

Posted by: Yasemin Besen
It was no "art": I found Friday Night Lights pretty formulaic. However, what I liked about the show was the depiction of small town life. The show takes place in a small Texas town, where high school football is the main activity: not professional, not college: high school football.
Football is a central activity that creates social cohesion in a small community. It's also the source of pride and town identity in a deprived town. Football is the way to relate to others in town, but it's also the way out of it. In small towns, where economic opportunities are limited, the only way for social mobility is through football. While not many benefit from it, it's the ideal that keeps many from questioning the existing economic structure. Focusing on themes like social cohesion, inequality, small town life, rather than following a formula, would have made it much more interesting.

Friday Night Lite

October 4, 2006
Posted by Jay Livingston
I watched the first episode of “Friday Night Lights,” the new NBC show about high school football in a Texas town. The New York Times critic had given it a rave review, repeating words like “great” and “art.”
 
Art it may have been (the Times critic was especially impressed by the show’s sound editing, an artistic touch that viewers like me aren’t likely to notice, and not the sort of thing to make us want to tune in next week). But great? It was about football, about teenagers playing a football game. As such, it played on one long-standing idea in American movies and TV: all moral questions, all questions of character, can be settled in a contest. Typically, the story sets out some difficulties for the hero —conflicts with the society, conflicts with some other person or organization, conflicts within himself. It all leads up to some climactic confrontation. Usually the hero wins, occasionally he loses. But the outcome doesn’t matter so much as the nobility of the fight, for win or lose, the hero has fought, and that seems to resolve all issues. The classic example is the old Western with its quick-draw shootout, which resolves issues like economic conflicts between ranchers and farmers over land use. But sports and games figure prominently, especially one-on-one contests like boxing. Rocky is the obvious example, but there are lots of other fight films, and many of them have this same quality: the match seems to melt all problems no matter how complicated, no matter how seemingly unrelated to the match itself—problems between a man and a woman, a son and father, friend and friend.

My own list includes movies about everything from airplane dogfights to chess. Some are classics (“The Hustler,” which ends in a pool match, or “On the Waterfront,” which ends in a fistfight between a worker and a union boss), and many are best forgotten (“The Cincinnati Kid,” which ends in a poker game, or “The Karate Kid” and many, many, others).
More recently — and I guess this will be true of “Friday Night Lights”— the hero is not so much an individual as a team, as in all those “coach” movies. But the assumption is the same: getting ready for the big game and then playing it leads to triumph over all internal or external obstacles in life.
Last night’s episode of “Friday Night Lights” clutched at one other American cliche— the Hollywood Ending. The team is down by ten points with three minutes left; their star quarterback is taken off the field on a stretcher, possibly paralyzed for life with a spinal injury; the substitute quarterback muffs play after play. At this point, I turned to my son, who was watching too, and said, “If they win this game . . .” Guess what. No, you don’t even have to guess. You know. You’ve seen so many American movies that you know what happens.
Again I am reminded of what the Iranian immigrant in “The House of Sand and Fog” says (see the Sept. 27 entry in this blog) — Americans always wanting the sweet taste.

The Magic of Plagiarism; the Plagiarism of Magic

September 30, 2006 
Posted by Jay Livingston
You can’t copyright a joke, you can’t copyright a magic trick. So what do you do when another performer steals your stuff?
Eric Walton is a magician. He’s doing a show, “Esoterica,” at a theater down on E. 15th. But some of his act resembles a show another magician, Ricky Jay [photo on the right], did a few years back. The Times ran a story (here) on the controversy.

Jules Fisher, the Tony Award-winning lighting designer, who is also an amateur magician . . . .sent an e-mail message to Mr. Walton saying the presentation of the Knight's Tour “so closely approaches its inspiration as to border on plagiarism.”
“Does performing an existing effect, or variation thereof, confer upon the performer of it ownership of that effect, or the exclusive and perpetual right to all subsequent interpretations of it?” Mr. Walton asked in his message. “On this point you and I are obviously in disagreement.”
This was of some interest to me because many years ago, I was hanging around with magicians, thinking that there might be something interesting and sociological there. In survey research, you start with an idea, then you get data to support it. But I was doing ethnographic research, where you often begin with the “data,” usually a group of people in some setting, not sure exactly what you’re looking for but with the sense that, as I once heard William H. Whyte say, “If I look at something long enough, eventually I’ll see something nobody else has seen.”

I wasn’t as successful as Whyte. I never did figure out a framework for my observations with the magicians. I don’t even know where my fieldnotes are now. But on the topic of plagiarism, I do remember this: When magicians talk among themselves, when they demonstrate tricks for one another, they are unusually scrupulous about giving credit where it’s due. Much like academics, they footnote everything. They’ll say things like, “The routine combines Gene Finnell’s Free Cut Principle with the plot from Dai Vernon’s Aces.” They are especially careful to footnote the specific “moves” (sleights) that they use in a trick. “This is an extension of a coin change by Dr. E. Roberts in Bobo” (Bobo being the author of a classic book on coin magic).
The problem is that you can’t do this kind of footnoting in a performance. In the first place, it comes close to disclosing secrets of how the trick is done. But more important, the audience doesn’t care. They want to be entertained, not informed. What’s important to magicians — authorship, originality— is not important to the audience. I remember once seeing a street magician in New York who had taken most of his act from another street magician I’d seen a couple of years earlier but who had since moved on. He finished his little seven-minute show and passed the hat for donations from the small sidewalk crowd. The crowd was pleased. I was not really a magician but I was in the know, and I resented his stealing the other guy’s act. I had the feeling that real magicians would too. As the crowd dispersed and the magician turned back to arrange his props for the next show, I approached him and mentioned something about the other magician. “Oh yeah,” he exclaimed, “he’s my idol. I’ve patterned my whole act after his.” And for some reason, I felt that made it okay. I think other magicians hearing this would have had the same reaction. They might not have admired him; they might have looked down on his lack of originality. But his footnoting would have legitimized his act.
Eric Walton cannot get up on stage and say, “A lot of what I’m going to do tonight I took from Ricky Jay.” I suppose he could mention Jay in the notes in the Playbill. And he did, in a way. It turns out that Eric Walton had given an interview to a website where he said that Ricky Jay had been a source of inspiration to him. However, after others noted the similarity of the shows, Walton asked the website to remove that quote.
If students plagiarize papers, they can be given an F for the paper or even the course. They can even be tossed out of school. If writers plagiarize, they can be sued for real money. But if a performer steals someone else’s act, he is subject only to informal social control.

Hug a Thug

Posted by Jay Livingston

September 28, 2006


Yesterday’s Wall Street Journal had an article on a strategy used successfully against drug dealers in High Point, North Carolina, and presumably elsewhere. Much of the strategy was familiar — undercover buys, videotapes, informants. The new twist was that the police did all they could to keep the dealers from going to prison.

The police sent out word to dealers who had more or less taken over one particular area, turning it into a drug market: come to a meeting. The police got to know the dealers’ mothers, grandmothers, and others who might be influential and asked them to pressure the dealers to come. A dozen were invited, with a promise of no arrests that night; nine showed up. In room one, clergy and community leaders talked to the dealers about all the harm they were doing the community. The dealers seemed bored. Then they moved to room two, where law enforcement people showed them all the evidence they had, right up to arrest warrants filled out completely save for a judge’s signature.

The West End street drug market closed "overnight" and hasn't reopened in more than two years, says Chief Fealy, who was "shocked" at the success. High Point police say they have since shut down the city's two other major street drug markets, using the same strategy.
Not prosecuting people and not housing them in prison for years and years is obviously a lot cheaper than doing so. Yet something about the strategy, despite its success and savings, doesn’t quite sit right with some people. “Hug-a-thug,” some called it, and an Indiana prosecutor quoted in the article said, “Why not slam 'em from the beginning and forget this foolishness?”

The trouble with the program is that it doesn’t perform the symbolic function of clearly marking moral boundaries, and for some reason, that function is very important to a lot of Americans. Our typical way of thinking about a problem is to label it as evil and then declare war on it. The war on drugs is a good example. If you are fighting a war on some absolute evil, you can’t compromise, and you have to be punitive, even if your strategy, at least in terms of rational goal-attainment, is costly and ineffective. At least it makes you feel morally righteous. The moralistic orientation also explains why the war on drugs was (and continues to be) big on enforcement and light on treatment, despite much research showing that treatment is far more cost-effective. If someone is a “thug,” he should be punished, not hugged. To do otherwise would threaten our own moral purity.

Framing a policy as a war against evil does one more important thing. It justifies any means. If what you are fighting is an absolute evil, then it’s all right to violate the usual rules. As many others have pointed out, judges have been very willing to allow police and prosecutors in drug cases to do things that in previous criminal cases would have been unconstitutional if not unthinkable. I recall an article called “This Is Your Bill of Rights on Drugs” detailing some of these judicially approved violations of constitutional rights. (Obviously the war in Iraq and the War on Terror fall into the war-on-evil category.)

Over fifty years ago, sociologist Robin Williams listed a “moralistic” orientation as one of the characteristics of American culture. Basing policy on principles of moral purity may make us feel righteous, but we may be doing so at the cost of actually getting something done. But cultures are not monolithic, and in addition to our American moralism, we also have a long history of pragmatism, which in some cases may be our salvation. At least it saved the West End of High Point, NC.

Covers and Culture

Posted by Jay Livingston

September 27, 2006


Newsweek’s website shows the covers for their international editions. One of these things is not like the others. Can you tell me which one is not like the others? And why?



Is this just a “one-off” (as the British say), a unique occurrence? OK, here are the covers from the previous week (thanks to sociologist-blogger Kieran Healy).


Does Newsweek’s choice tell us something about American culture — that we prefer “lifestyle stories” to real news, especially when the news is bad? Annie Leibovitz, a very successful photographer who does portraits of the famous, here with her children; but spare us “losing Afghanistan.” Successful women — young, pretty, and smiling — but not China or Russia.
I’m reminded of a line from “The House of Sand and Fog.” Ben Kingsley as Behrani, an Iranian immigrant, a man who has worked hard and lived according to principle to achieve some moderate success, speaks to his son: “Americans, they do not deserve what they have. They have the eyes of small children who are forever looking for the next source of distraction, entertainment, sweet taste in the mouth.”

Small Worlds

September 26, 2006
Posted by Jay Livingston

Six degrees of separation. Stanley Milgram (yes, he of the obedience experiments) didn’t coin the phrase, but he may have been the one to come up with the concept. The “small-world” hypothesis, he called it.

I was reminded of it twice yesterday.
1. Last night I went to a memorial service for someone. I had barely known him, but our kids have been in the same schools since kindergarten. Even after this man and his wife split up and he had left, my family remained friendly with his wife and son. But at the service, I saw people I knew not from the schools but from completely different contexts. Here was a couple I knew only because they were friends of a mutual friend, and we’d see them at her house. Another woman I knew asked me almost accusingly, “What are you doing here?” She had known the deceased professionally (they were musicians) but knew me only because our kids had been in the same school. Two different networks leading to the same place. Small world.
2. I think the explanation for the six degrees is that you go from small people to big people and then back down. People fairly far down in some organization will have some connection to those higher up in that organization. Those higher-ups know higher-ups in other organizations, who will in turn have a connection to those lower down.
Example. My wife has a good friend whose husband is high up in some do-gooder federal agency or program (something about the arts perhaps). Yesterday, the friend phoned to say that she’d been at some benefit in Washington at the Kennedy Center focused on this agency. So while her husband was on stage in some official capacity, she was seated in the Presidential box with just a few people, including Laura Bush. For some reason, the President himself decided to pop in, and he chatted in his friendly way with the people there. Politicians are glad-handers— Bush is a good example — they like people, they remember them.
So now I know someone who “knows” Bush. From there it wouldn’t be too difficult to connect me with anyone in the country, maybe anyone on the planet. (We’re talking about “knowing” and “connections,” not liking or agreeing with.) And if you are acquainted with me, then you yourself are only three degrees from Bush.
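To make the arithmetic concrete, here is a toy sketch (the names and links below are invented, not anyone’s actual acquaintances): treat “knowing someone” as a link in a network and count the links on the shortest chain. One well-connected person in the middle is enough to collapse the distance between complete strangers.

```python
# A toy acquaintance network (entirely made up), with "degrees" counted by
# breadth-first search over the shortest chain of links.
from collections import deque

links = {
    "you": ["me"],
    "me": ["you", "my wife's friend"],
    "my wife's friend": ["me", "Bush"],           # she sat in the box; he chatted with her
    "Bush": ["my wife's friend", "a governor"],
    "a governor": ["Bush", "a county official"],
    "a county official": ["a governor", "a stranger upstate"],
}

def degrees(start, target):
    """Number of links on the shortest chain of acquaintance from start to target."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, d = queue.popleft()
        if person == target:
            return d
        for other in links.get(person, []):
            if other not in seen:
                seen.add(other)
                queue.append((other, d + 1))
    return None  # no chain at all

print(degrees("me", "Bush"))                     # 2
print(degrees("you", "Bush"))                    # 3
print(degrees("you", "a stranger upstate"))      # 6 -- six degrees, through one hub
```

The point of the toy graph is only that the chain runs up to one well-connected person and back down; every other link in it is a perfectly ordinary one.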

Negative Results

September 20, 2006

Posted by Jay Livingston
A man gets thrown into a jail cell with a long-term occupant and then begins a series of attempts to escape, each by some different method. He fails every time, getting captured and thrown back in the cell. The older prisoner looks at him silently after each failure. Finally, after six or seven attempts, the man loses his patience with the old prisoner and says “Well, couldn’t you help me a little?” “Oh,” says the old guy, “I’ve tried all the ways you thought of—they don’t work.” “Well why the hell didn’t you tell me?!” shouts the man. “Who reports negative results?” says the old prisoner.
Thanks to sociologist and blogger Kieran Healy (http://www.kieranhealy.org/blog/).
I hadn’t heard the joke before, but I’ve certainly heard of the bias towards positive results. Back when I was in graduate school, one of my professors, an experimental social psychologist, proposed that journals evaluate papers solely on the basis of research design. Researchers would submit all the preliminary stuff, including the design but not the results. Then, if the journal accepted the paper, it would be published regardless of whether the results showed the intended effects.
Healy used the joke in connection with an article on biases in political science journals. Articles that just make the p < .05 level are much more likely to be published than are those that just miss it. I’m not sure if political science and other disciplines (like sociology) that rely on survey data could use the same strategy of deciding on publication before seeing the data. That strategy may be more applicable to experiments than to surveys.
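To see why that cutoff matters, here is a minimal simulation (not from Healy’s article; the effect size and sample sizes are made-up numbers) of what happens when only the studies that clear p < .05 make it into print.

```python
# A minimal, hypothetical simulation of publication bias: many small studies of
# one modest true effect, with only the "significant" ones getting published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2        # assumed true difference between groups, in SD units
n_per_group = 30         # assumed (small) sample size per group
n_studies = 10_000

all_effects, published_effects = [], []
for _ in range(n_studies):
    a = rng.normal(true_effect, 1, n_per_group)   # treatment group
    b = rng.normal(0.0, 1, n_per_group)           # control group
    t, p = stats.ttest_ind(a, b)
    est = a.mean() - b.mean()
    all_effects.append(est)
    if p < .05:                                    # clearing the cutoff gets you into print
        published_effects.append(est)

print(f"true effect:                 {true_effect:.2f}")
print(f"mean effect, all studies:    {np.mean(all_effects):.2f}")
print(f"mean effect, published only: {np.mean(published_effects):.2f}")
```

With a small true effect and small samples, most studies miss the cutoff, and the ones that clear it do so mainly because they happened to overestimate the effect, so the “published” average comes out well above the true value.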
I find it interesting, even ironic, that my social psych professor who proposed this soon became very well known for his own experimental study, whose results were widely discussed even outside academia. But statisticians reviewing the data claimed that he had used the wrong statistical analyses in order to make his results look significant. His idea might be right, the critics said — in fact, they probably hoped it was right — but the numbers in the study didn’t prove it. The professor and others claimed that the numbers did support the idea and defended their analyses. Clearly, it was a case that needed replication studies, lots of them. I’m not sure what attempts to replicate the study have been made, nor what the results have been. But I am fairly certain that researchers who attempted to replicate and got negative results had a harder time getting published than did those who got positive results.
This professor also had our class do a replication of one of his experiments. It didn’t work. In fact, the strongest correlation was with a variable that by design was randomized. There were two versions of some test, A and B, distributed randomly to seats in a room. We wanted to see if the two versions of the test produced different results. People came in, sat down, and did the test. But it turned out that the strongest correlation was between sex and test version. That is, the A version wound up being taken mostly by girls, the B version by boys, just by accident of where they chose to sit. No other difference between the two versions was nearly so strong. It made me a bit skeptical about the whole enterprise.
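That kind of accident is easy to reproduce. Here is a quick, hypothetical simulation (the class size of forty and the cutoff for a “strong-looking” correlation are arbitrary choices) of how often purely random seating produces a sizable association between sex and test version.

```python
# A made-up simulation of the classroom accident: with a smallish class and
# purely random distribution of test versions, the correlation between sex and
# version can easily look like the strongest relationship in the data.
import numpy as np

rng = np.random.default_rng(1)
class_size = 40
n_classes = 10_000
big_imbalances = 0

for _ in range(n_classes):
    sex = rng.integers(0, 2, class_size)        # 0 = boy, 1 = girl
    version = rng.integers(0, 2, class_size)    # test A or B, handed out at random by seat
    r = np.corrcoef(sex, version)[0, 1]         # phi coefficient for two 0/1 variables
    if abs(r) > 0.25:                           # an association strong enough to catch the eye
        big_imbalances += 1

print(f"classes with |r| > .25 between sex and version: "
      f"{big_imbalances / n_classes:.1%}")
```

In runs like this it comes out at roughly one class in ten, which is often enough for chance alone to produce the kind of pattern described above.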