Charlie Haden (1937-2014)

July 12, 2014
Posted by Jay Livingston

At age 22, Charlie Haden was the bassist in the original Ornette Coleman quartet. He had already been playing for a couple of years with bebop pianist Hampton Hawes. Ornette played music that, at the time (1959), was considered so far out that many listeners dismissed it as noise. (“They play ‘Some of These Days’ in five different keys simultaneously.”) Ornette became even freer, moving even further from the basic changes, and Charlie followed along.

Haden was also a very melodic bass player. That’s especially clear in his duo work with guitarists like Pat Metheny and Egberto Gismonti and with pianists Keith Jarrett, Hank Jones, and Kenny Barron (“Night and the City” is one of my favorite albums). He remained rooted in bebop, notably as leader of Quartet West (with Ernie Watts, the man responsible for my giving up saxophone).

He had polio as a child in Iowa, and in recent years suffered from post-polio syndrome.

Here is a brief video made at the time Charlie recorded the duo album with Keith Jarrett, who does much of the talking here.

Needs (One More Time)

July 10, 2014
Posted by Jay Livingston

Before I read Benjamin Schmidt’s post in the Atlantic (here) about anachronistic language in “Mad Men,” I had never noticed how today we use “need to” where earlier generations would have said “ought to” or “should.” Now, each “need to” jumps out at me from the screen.*  Here is today’s example.


Why not: “Even more proof health care records should go digital”?

In a post a year ago (here), I speculated that the change was part of a more general shift away from the language of morality and towards the language of individual psychology, from what is good for society to what is good for the self. But now “need to” has become almost an exact synonym for “should.” Just as with “issue” replacing “problem”** – another substitution flowing from the brook of psychobabble – the therapy-based origins of “need to” are an unheard undertone. Few people reading that headline today will get even a subliminal image of a bureaucratic archive having needs or of health care records going digital so as to bring themselves one Maslow need-level closer to self-actualization.
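
(The shift is easy enough to eyeball in the Google Books corpus. Below is a minimal sketch that queries the Ngram Viewer’s JSON endpoint – an unofficial, undocumented interface, so the URL and parameters here are assumptions that could break at any time.)

    import json
    import urllib.parse
    import urllib.request

    # Unofficial, undocumented endpoint behind the Google Ngram Viewer;
    # treat the URL and its parameters as assumptions, not a stable API.
    params = urllib.parse.urlencode({
        "content": "need to,ought to",
        "year_start": 1900,
        "year_end": 2019,
        "corpus": "en-2019",   # assumed corpus name
        "smoothing": 3,
    })
    url = "https://books.google.com/ngrams/json?" + params

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)   # list of {"ngram": ..., "timeseries": [...]}

    for series in data:
        # timeseries holds one relative frequency per year
        print(series["ngram"], "frequency in 2019:", series["timeseries"][-1])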

It looks like “need to” and “issue” will stick around for a while. Other terms currently in use may have a shorter life. In the future (or as we now say, going forward), “because + noun” will probably go the way of “my bad.” Because fashion. And by me, its demise will be just groovy. I wonder if language scholars have some way of predicting these life-spans. Are there certain kinds of words or phrases that practically announce themselves as mayflies?

Oh well, at the end of the day, the bottom line is that it is what it is.

-------------------
* As Nabokov says at the end of Speak, Memory, “. . . something in a scrambled picture — Find What the Sailor Has Hidden — that the finder cannot unsee once it has been seen.”

** In 1970, Jim Lovell would not have said, “Houston, we have an issue.” But if a 2014 remake of “Apollo 13” had that line, and if the original weren’t so well known, most people wouldn’t notice.

Replication and Bullshit

July 9, 2014
Posted by Jay Livingston

A bet is a tax on bullshit, says Marginal Revolution’s Alex Tabarrok (here). So is replication.

Here’s one of my favorite examples of both – the cold-open scene from “The Hustler” (1961). Charlie, in effect, proposes the replication test: unless Eddie can make the shot again, he’ll write it off as random variation – luck.



It’s a great three minutes of film, but to spare you the time, here’s the relevant exchange.

         CHARLIE
    You ought to take up crap shooting. Talk about luck!

         EDDIE
    Luck! Whaddya mean, luck?

         CHARLIE
    You know what I mean. You couldn’t make that shot again in a million years.

         EDDIE
    I couldn’t, huh? Okay. Go ahead. Set ’em up the way they were before.

         CHARLIE
    Why?

         EDDIE
    Go ahead. Set ’em up the way they were before. Bet ya twenty bucks. Make that shot just the way I made it before.

         CHARLIE
    Nobody can make that shot and you know it. Not even a lucky lush.


After some by-play and betting and a deliberate miss, Eddie (aka Fast Eddie) replicates the effect, and we segue to the opening credits,* confident that the results are indeed not random variation but a true indicator of Eddie’s skill.
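
That confidence has a simple Bayesian logic. Here’s a minimal sketch with made-up numbers – the 1-in-100 “lucky lush” and 9-in-10 “skilled player” shot probabilities are mine, purely for illustration:

    # Made-up numbers: a lucky lush sinks this shot 1 time in 100, a skilled
    # player 9 times in 10, and before the bet we're 50/50 on which Eddie is.
    def posterior_skill(successes, p_luck=0.01, p_skill=0.9, prior_skill=0.5):
        """P(skilled | that many consecutive successes), by Bayes' rule."""
        w_skill = p_skill ** successes * prior_skill
        w_luck = p_luck ** successes * (1 - prior_skill)
        return w_skill / (w_skill + w_luck)

    for n in range(3):
        print(f"after {n} repeat(s): P(skilled) = {posterior_skill(n):.4f}")
    # after 0 repeat(s): P(skilled) = 0.5000
    # after 1 repeat(s): P(skilled) = 0.9890
    # after 2 repeat(s): P(skilled) = 0.9999

One make is cheap. Each repeat multiplies the odds against the luck hypothesis, which is why Charlie should never have taken the bet.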

But now Jason Mitchell, a psychologist at Harvard, has published a long throw-down against replication. (The essay is here.) Psychologists shouldn’t try to replicate others’ experiments, he says. And if they do replicate and find no effect, the results shouldn’t be published.  Experiments are delicate mechanisms, and you have to do everything just right. The failure to replicate results means only that someone messed up.

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way.  Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.


L. J. Zigerell, in a comment at Scatterplot, thinks that Mitchell may have gotten it switched around. Zigerell begins by quoting Mitchell:

“When an experiment succeeds, we can celebrate that the phenomenon survived these all-too-frequent shortcomings.”

But, actually, when an experiment succeeds, we can only wallow in uncertainty about whether a phenomenon exists, or whether a phenomenon appears to exist only because a researcher invented the data, because the research report revealed a non-representative selection of results, because the research design biased results away from the null, or because the researcher performed the experiment in a context in which the effect size for some reason appeared much larger than the true effect size.

It would probably be more accurate to say that replication is not so much a tax on bullshit as a tax on those other factors Zigerell mentions. But he left out one other possibility: that the experimenter hadn’t taken all the relevant variables into account.  The best-known of these unincluded variables is the experimenter himself or herself, even in this post-Rosenthal world. But Zigerell’s comment reminded me of my own experience in an experimental psych lab. A full description is here, but in brief, here’s what happened. The experimenters claimed that a monkey watching the face of another monkey on a small black-and-white TV monitor could read the other monkey’s facial expressions.  Their publications made no mention of something that should have been clear to anyone in the lab: that the monkey was responding to the shrieks and pounding of the other monkey – auditory signals that could be clearly heard even though the monkeys were in different rooms.

Imagine another researcher trying to replicate the experiment. She puts the monkeys in rooms where they cannot hear each other, and what they have is a failure to communicate. Should a journal publish her results? Should she have even tried to replicate in the first place?  In response, here are Mitchell’s general principles:


    •    Failed replications do not provide meaningful information if they closely follow original methodology.
    •    Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output.
    •    The field of social psychology can be improved, but not by the publication of negative findings.
    •    Authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues.


Mitchell makes research sound like a zero-sum game, with “mean-spirited” replicators out to win some easy money from a “lucky lush.” But often, the attempt to replicate is not motivated by skepticism and envy. Just the opposite. You hear about some finding, and you want to see where the underlying idea might lead.** So as a first step, to see if you’ve got it right, you try to imitate the original research. And if you fail to get similar results, you usually question your own methods.

My guess is that the arrogance Mitchell attributes to the replicators is more common among those who have gotten positive findings.  How often do they reflect on their experiments and wonder if it might have been luck or some other element not in their model?

----
* Those credits can be seen here – with the correct aspect ratio and a saxophone on the soundtrack that has to be Phil Woods. 

** (Update, July 10) DrugMonkey, a bio-medical research scientist, says something similar:
Trying to replicate another paper's effects is a compliment! Failing to do so is not an attack on the authors’ “integrity.” It is how science advances.  

Don’t Explain

July 3, 2014
Posted by Jay Livingston

Adam Kramer, one of the authors of the notorious Facebook study, has defended this research. Bad idea. Even when an explanation is done well, it’s not as good as a simple apology. And Kramer does not do it well. (His full post is here.)

OK so. A lot of people have asked me about my and Jamie and Jeff's recent study published in PNAS, and I wanted to give a brief public explanation.

“OK so.” That’s the way we begin explanations these days. It implies that this is a continuation of a conversation. Combined with the first-names-only reference to co-authors, it suggests that we’re all old friends here – me, you, Jamie, Jeff – picking up where we left off.

The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product.

“We care.” This will persuade approximately nobody. Do you believe that Facebook researchers care about you? Does anyone believe that?

Regarding methodology, our research sought to investigate the above claim by very minimally deprioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012).

See, we inconvenienced only a handful of people – a teensy tiny 0.04%. Compare that with the actual publication, where the first words you see, in a box above the abstract, are these: 
We show, via a massive (N = 689,003) experiment on Facebook . . . [emphasis added]
The experiment involved editing posts that people saw. For some FB users, the researchers filtered out posts with negative words; other users saw fewer positive posts.
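
In concrete terms, the manipulation (as the paper describes it) amounts to something like the sketch below – a cartoon, with a toy word list standing in for the LIWC lexicon the researchers used, and nothing resembling Facebook’s actual code:

    import random

    NEGATIVE = {"sad", "hate", "awful"}   # toy lexicon; the study used LIWC

    def filtered_feed(posts, p_omit=0.5):
        """Cartoon of the manipulation: probabilistically drop posts that
        contain a negative word. Hypothetical logic, not Facebook's code."""
        kept = []
        for post in posts:
            words = set(post.lower().split())
            if words & NEGATIVE and random.random() < p_omit:
                continue                  # deprioritized on this feed load
            kept.append(post)
        return kept

    print(filtered_feed(["I hate Mondays", "Great day at the beach"]))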

Nobody's posts were “hidden,” they just didn’t show up on some loads of Feed. Those posts were always visible on friends’ timelines, and could have shown up on subsequent News Feed loads.

“Not hidden, they just didn’t show up.” I’m not a sophisticated Facebook user, so I don’t catch the distinction here. Anyway, all you had to do was guess which of your friends had posted things that didn’t show up and then go to their timelines. Simple.

Kramer then turns to the findings.

at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it

That’s true. At the end of the day, the bottom line – well, it is what it is. But you might not have realized how minuscule the effect was if you had read only the title of the article:
Experimental evidence of massive-scale emotional contagion through social networks  [emphasis added]
On Monday, it was massive. By Thursday, it was minimal.
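
Both adjectives can be true at once. With nearly 700,000 subjects, even a trivially small effect clears the bar of statistical significance, as this back-of-the-envelope sketch shows (the effect sizes are illustrative guesses, not the paper’s numbers):

    import math

    # Two-group comparison with the study's N (the d values below are
    # hypothetical, chosen only to show what this sample size can "detect").
    n_per_group = 689_003 // 2
    se = math.sqrt(2 / n_per_group)  # std. error of a mean difference, SD = 1

    for d in (0.001, 0.02):          # effect sizes in standard-deviation units
        z = d / se
        print(f"d = {d}: z = {z:.2f}")
    # d = 0.001: z = 0.42  (still noise)
    # d = 0.02:  z = 8.30  (p vanishingly small at this N)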

Finally comes a paragraph with the hint of an apology.

The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone.

I might have been more willing to believe this “Provide a better service” idea, but Kramer lost me at “We care.” Worse, Kramer follows it with “our goal was never to upset.” Well, duh. A drunk driver’s goal is to drive from the bar to his home. It’s never his goal to smash into other cars. Then comes the classic non-apology: it’s your fault.

I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

This isn’t much different from, “If people were offended . . .” implying that if people were less hypersensitive and more intelligent, there would be no problem. If only we had described the research in such a way that you morons realized what we were doing, you wouldn’t have gotten upset. Kramer doesn’t get it.

Here’s why I’m pissed off about this study.
  • First, I resent Facebook because of its power over us. It’s essentially a monopoly. I’m on it because everyone I know is on it. We are dependent on it.
  • Second, because it’s a monopoly, we have to trust it, and this experiment shows that Facebook is not trustworthy. It’s sneaky. People had the same reaction a couple of years ago when it was revealed that even after you logged out of Facebook, it continued to monitor your Internet activity.
  • Third, Facebook is using its power to interfere with what I say to my friends and they to me. I had assumed that if I posted something, my friends saw it.
  • Fourth, Facebook is manipulating my emotions. It matters little that they weren’t very good at it . . . this time. Yes, advertisers manipulate, but they don’t do so by screwing around with communications between me and my friends.
  • Fifth, sixth, seventh . . . I’m sure people can identify many other things in this study that exemplify the distasteful things Facebook does on a larger scale. But for now, it’s the only game in town.
And one more objection to Kramer’s justification. It is so tone-deaf, so oblivious to the likely reactions of people both to the research and the explanation, that it furthers the stereotype of the data-crunching nerd – a whiz with an algorithm but possessed of no interpersonal intelligence.

--------------
Earlier posts on apologies are here and here.

The title of this post is borrowed from a Billie Holiday song, which begins, “Hush now, don’t explain.” Kramer should have listened to Lady Day.

UPDATE, July 4
At Vox, Nilay Patel says many of these same things. “What we’re mad about is the idea of Facebook having so much power we don’t understand — a power that feels completely unchecked when it’s described as ‘manipulating our emotions.’” Patel is much better informed about how Facebook works than I am. He understands how Facebook decides which 20% of the posts in your newsfeed to allow through and which 80% (!) to delete. Patel also explains why my Facebook feed has so many of those Buzzfeed things like “18 Celebrities Who Are Lactose Intolerant.”