Image or Brand

April 22, 2016
Posted by Jay Livingston

One word in today’s headlines seemed like a throwback to an earlier era: image.



It was in the 1960s that politicians, their handlers, and the people who write about them discovered image. The word carries the cynical implication that voters, like shoppers, respond to the surface image rather than the substance – the picture on the box rather than what’s inside. A presidential campaign was based on the same thing as an advertising campaign – image. You sold a candidate the same way you sold cigarettes, at least according to the title and book jacket of Joe McGinniss’s book.

(That pack of Nixon’s should have carried the Surgeon General’s warning.)
 
Then, sometime around 1980, image began to fade. In its place we now have brand. I went to Google N-grams and looked at the ratio of image to brand in both the corporate and the political realm. The pattern in the two realms is nearly identical.



The ratio rises steeply from 1960 to 1980 – lots more talk about image, no increase in brand. Then the trend reverses. Sightings of image were still rising, but nowhere near as rapidly as brand, which doubled from 1980 to 2000 in politics and quadrupled in the corporate world.
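
If you want to reproduce the comparison, here is a minimal sketch of the tally. It assumes you have exported yearly frequencies by hand from the Ngram Viewer into a CSV; the file name and column names (corporate_image, corporate_brand, political_image, political_brand) are my own, purely for illustration.

    # Sketch: compare "image" talk to "brand" talk over time.
    # Assumes a hand-built CSV of yearly Ngram frequencies with hypothetical
    # columns: year, corporate_image, corporate_brand, political_image, political_brand.
    import pandas as pd

    df = pd.read_csv("ngram_frequencies.csv")

    # Ratio of image-talk to brand-talk in each realm, year by year.
    df["corporate_ratio"] = df["corporate_image"] / df["corporate_brand"]
    df["political_ratio"] = df["political_image"] / df["political_brand"]

    print(df[["year", "corporate_ratio", "political_ratio"]].to_string(index=False))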

Image sounds too deceptive and manipulative; you can change it quickly according to the needs of the moment. Brand implies permanence and substance (not to mention Marlboro-man-like rugged independence and integrity). No wonder people in the biz prefer brand.

Decades ago, when my son was in grade school, I met another parent who worked in the general area of public relations. On seeing him at the next school function a few weeks later, I said, “Oh right, you work in corporate image-mongering.” I thought I said it jokingly, but he seemed offended. He was, I quickly learned, a brand consultant. Image bad; brand good.

In later communications, he also said that a company’s attempt to brand itself as something it’s not will inevitably fail. The same thing supposedly goes for politics:

“One thing you learn very quickly in political consulting is the fruitlessness of trying to get a candidate to change who he or she fundamentally is at their core,” said Republican strategist Whit Ayres, who did polling for Rubio’s presidential campaign before he dropped out of the race. “So, is the snide, insulting, misogynistic guy we’ve seen really who Donald Trump is? Or is it the disciplined, respectful, unifying Trump we saw for seven minutes after the New York primary?”

These consultants are saying what another Republican said a century and a half ago: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

This seems to argue that political image-mongers have to be honest about who their candidate really is. But there’s another way of reading Lincoln’s famous line: You only need to fool half the people every four years.

Miles Ahead

April 21, 2016
Posted by Jay Livingston

Don Cheadle didn’t want to do a biopic, and how can you blame him? As a person, Miles was not an attractive or sympathetic character. What he cared about was himself – his music, his painting, his drugs. He treated the women in his life as though they were possessions, like his stylish clothes and cars, except that as far as we know he did not beat his suits or Ferraris. Besides, Cheadle probably wanted to avoid the biopic cliches – the significant childhood scene where the boy’s talent first becomes apparent, the early struggles, the success (fast montage of club posters with the hero’s name rising in the billing), the downfall, the redemption.

“I was born, I moved to New York, met some cats, made some music, did some dope, made some more music, then you showed up at my house.” Miles delivers this line, along with a right hook to the face, to a journalist who shows up at his door wanting to do a story. It’s as though Cheadle is saying to the audience, “You want biography? Here’s your biography.”

But the movie avoids only some of the biopic cliches. It keeps others. That journalist who wants to track down the “real” person, for example, is a familiar movie device (“Citizen Kane”). At least the movie doesn’t end with him rolling a sheet of paper into his typewriter and tapping out “The Real Miles” or some such.

Instead of a biopic, Cheadle gives us a completely made-up story, complete with guns and high-speed car chases – not exactly what comes to mind when you hear the name Miles Davis. The trailer, as usual, gives you the plot such as it is.



It’s another venerable plot line – the artist preserving his art from the  vultures who want it only because they can turn it into filthy lucre. The art in this case is that tape that Miles recorded privately and keeps locked in a drawer. (There may have been such a tape, but on it Miles plays organ, and from all reports, not very well. And nobody stole it.)

There’s even a song-origins scene hokey enough to be in a 1940s songwriting team “and then we wrote” movie. Miles, at home watching his wife Frances Taylor dance, picks up his horn and starts to play a melody that sounds like the children’s song “Put Your Little Foot.” Jazzers will get the reference: that melody turned up as “Fran-Dance” on the 1958 album “Jazz Track.”

Music was the best and most important thing about Miles, so the big disappointment (for me at least) is that so little of the film is about the making of music. That music plays only a secondary role is logical given that Cheadle chose to set the film in the late 1970s, when Miles stopped playing and disappeared from public view for five years. So it’s mostly in the flashbacks that we hear Miles’s music. Cheadle apparently learned to play trumpet, and he fingers accurately to Miles’s recorded solos from well-known albums like “Kind of Blue.” But these snatches rarely last more than about 15 seconds. There is one music-making scene: Miles and Gil Evans discuss some details in the arrangement of “Gone” during the recording session for “Porgy and Bess.”

Still, Cheadle carries the film. He captures an essential part of Miles’s character – the absolute confidence and the apparent indifference to what anyone else thinks. This is the Miles who, even in the 50s when jazz was struggling to be respectable, walked off the set during his sidemen’s solos and literally turned his back on his audience, and who later, to the dismay of many, turned his back on bebop for electric and rock (there’s a parallel here with Dylan and his audience). That’s the Miles we see. I just wish that we got to hear more of his music.

Who’s Smarter?

April 17, 2016
Posted by Jay Livingston

I was not optimistic when I tried a version of the old Quizmaster-Contestant-Observer game* that illustrates the fundamental attribution error. This is the error that occurs when we explain someone else’s behavior mostly in terms of their personal traits and ignore the effects of the situation. My attempts to replicate well-known effects usually fail. My students and I are just bad replicators, I guess. But this time it worked.

Here’s the set-up: I asked students to come to class with seven trivia questions – challenging but not impossible, the kind where, even if you can’t think of the answer, once someone tells you, you say, “Oh, of course.” I gave them some samples and added, “This is not a graded assignment. You can get help from your friends. You can use the internet (I did).”

In class, I divided them into groups of three and told them to choose** roles. They could be the Questioner, the Contestant, or the Observer. Then the Questioner asked the seven questions, the Contestant tried to answer and was told if the answer was right or wrong, the Observer observed. They then returned to their regular seats and got this form.

We assume that most Contestants will get a few wrong. We also can be sure that the Questioner knows all the answers. But what are we to conclude from that? That the Questioner knows more than the Contestant? Of course not. It’s easy to ace the quiz if you’re the one who makes up the questions. If the roles had been reversed, if the Contestant had been the one to ask the questions she thought up, she would be the one with more answers.

But we just cannot resist the temptation to draw conclusions about the persons themselves and to ignore the advantages and disadvantages of the positions. The chart shows how participants in the different roles rated the Questioners and Contestants.



As attribution theory predicts, Observers were quick to make judgments about the relative knowledge of the Questioner and Contestant. They ignored the role differences and concluded that if the Questioner knew more answers, that’s who must know more trivia.

Attribution theory also says that when we look at our own behavior, we are more likely to make “situational attributions.” Accordingly, Questioners and Contestants may have taken the constraints of role into consideration. In any case, they did not see so large a difference, though even they could not escape the conclusion that the Questioner was better informed. Contestants were impressed by the knowledge that the Questioner supposedly possessed, even though they knew that the Questioner could have gotten the questions from the Internet.
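
For anyone who wants to run the same tally with their own class, here is a minimal sketch of how a chart like the one above could be produced. It assumes the post-game forms are typed into a CSV; the file name and column names (rater_role, rating_of_questioner, rating_of_contestant) are hypothetical stand-ins, not the ones on my form.

    # Sketch: average the post-game knowledge ratings by the rater's role.
    # Assumes a hand-entered CSV with hypothetical columns:
    #   rater_role (Questioner / Contestant / Observer),
    #   rating_of_questioner, rating_of_contestant (numeric scale).
    import pandas as pd

    ratings = pd.read_csv("quiz_ratings.csv")

    # Mean rating of the Questioner and of the Contestant,
    # broken out by the role of the person doing the rating.
    summary = ratings.groupby("rater_role")[
        ["rating_of_questioner", "rating_of_contestant"]
    ].mean().round(2)

    print(summary)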

Sometimes the fundamental attribution error extends beyond what might be suggested by the specific tasks – in this case, knowledge in a trivia game – and gets to more permanent qualities. We see someone trip and stumble. Why? Because he’s clumsy, we say. We have assigned him a more or less permanent trait on the basis of one brief event. And similarly, if we see a rigged trivia game where the person who wrote the questions knows all the answers . . .




---------------------------------------

* See Ross, Amabile, and Steinmetz, 1977 (here).

** The roles should be assigned randomly. That was not the only way in which my experimental design was deeply flawed, and I report the results here not because they are convincing, but merely because I was so pleased that in at least one way they turned out the way the theory says they should.


Screwed

April 13, 2016
Posted by Jay Livingston

Does anyone remember what Charles Murray said about Black political choices in his 1984 book Losing Ground – the part where he says that African Americans had been “screwed”?

Call it “Jesse Jackson-ism” – the willingness of Blacks to support demagogues like Jackson and Louis Farrakhan. It goes along with a general attitude of resignation and alienation. These are expressions of a lot of legitimate grievances that Blacks have with the ruling class in this country. Those grievances include  the cultural disdain that the White ruling class has for Blacks. Those grievances include the nature of the labor market for Blacks – the loss of manufacturing jobs, the relegation to the least secure and lowest paying sectors, and, as has been shown in study after study about hiring and promotion, employers’ preference for Whites. Basically, it’s all the ways in which, if you’re Black and working class, you’ve been screwed.

Of course you don’t remember that passage. I made it up. I based it on what Murray actually did say recently about Whites who support Trump:

Trumpism is the expression by the White working class of a lot of legitimate grievances that it has with the ruling class – everything from the cultural disdain that the elite holds the working class in to the loss of all kinds of manufacturing jobs, the importation of low-skilled labor – all the ways in which, if you’re a member of the working class, you have, over the last thirty, forty years, been screwed. [from a walk-and-talk interview with Paul Solman on PBS].

What Murray actually did say in 1984 about Blacks was that while “discouragement” might explain the alienation, unemployment, and decreasing labor force participation of rural populations, “it is not possible to use discouragement as an explanation for the long-term trend [in Black labor force participation].”

The problem was not in the kinds of jobs available to working-class Blacks.

The problem with this new form of unemployment was . . . that young black males – or young poor males . . . moved in and out of the labor force at precisely that point in their lives when it was most important that they acquire skills, work habits, and a work record. [p. 82.]

In Murray’s view, everything in the US was fine. The trouble was not that people had been screwed by forces they had no control over. The trouble was that these Black guys turned their backs and refused to seize opportunities – skills, work habits, a work record.

Murray’s divining rod for finding dysfunction used to point to poor people themselves. Now, it hovers over more abstract sources – the culture, the economy. Some see this change as evidence of Murray’s racism – one kind of explanation for Black poverty, another for Whites. But there are geographic differences – urban, non-urban – and maybe the economy is different in important ways than it was thirty years ago.

Not all of Murray’s conservative brethren shift their attention to these broader forces to explain Trumpism. For readers who might be getting nostalgic for “It’s their own damn fault” – the idea that poor people and their culture are to blame for poverty and its attendant miseries – I close with an excerpt from Kevin Williamson’s recent fire-and-brimstone sermon in The National Review:

The truth about these dysfunctional, downscale communities is that they deserve to die. Economically, they are negative assets. Morally, they are indefensible. Forget all your cheap theatrical Bruce Springsteen crap. Forget your sanctimony about struggling Rust Belt factory towns and your conspiracy theories about the wily Orientals stealing our jobs. . . .  The white American underclass is in thrall to a vicious, selfish culture whose main products are misery and used heroin needles. Donald Trump’s speeches make them feel good. So does OxyContin.