Bloggiversary

September 19, 2017
Posted by Jay Livingston

After eleven years, I’m finally getting paid for this gig. The publisher of the leading intro sociology textbook e-mailed me asking for permission to use one of my blogposts in the next edition (17e) of the book. I asked if they would cross my palm with silver. It turns out they would. I did the math, and my average income from the blog now works out to 60¢ a week. Not bad.

The 2017 turning of the blog year lines up closely with the Jewish high holidays – a time for reflection and repentance. So many blogposts worthy of the latter. But here are a handful I’d post again.
   
1.    Trump did not actually shoot someone in the middle of Fifth Avenue. But if he had, his supporters might well have rethought their position on homicide. Witness how conservative Christians changed their views on the importance of a politician’s private life.    
One Question Where Trump Turned Conservatives More Liberal

2.    In Italian, there is no word for “bedtime.”  Bedtime – Construct or Cruelty

3.    “Their early stuff was way better.” The dilemma of music groups – repeat or change?  Chasing the Dragon

4.    Asking who’s happier – liberals or conservatives – without looking at who’s in power is like asking the same question about Red Sox fans and Yankee fans without checking the standings in the AL East. Political Baseball – Whose Fans Are Happy?

5.    “It’s your decision,” say the sitcom parents. Yeah, right. “black-ish” – Voluntary Conformism                  

Matching Evidence With Ideas — an Arthur Brooks Fail

September 16, 2017
Posted by Jay Livingston

Linking ideas and evidence – that’s the most important thing I try to teach students. What kind of evidence can we get to see if some idea or assertion is valid? I’m glad Arthur Brooks isn’t in my class. I would have to conclude that I failed at my task.

In his op-ed in the Times yesterday (“Don’t Shun Conservative Professors,” here), Brooks writes about the plight of conservative faculty members. In the academic world, dominated by leftish ideologies, they are second-class citizens.

Generally, these professors fear they have little hope for advancement to leadership roles. Research backs up this fear, suggesting that intellectual conformity is still a key driver of personal success in academic communities. In a study published in 2012 in the Journal of Experimental Social Psychology, researchers asked students to evaluate candidates vying to represent them with the faculty. In some cases, the candidate identified him- or herself as a “typical student at this college”; other subjects were given a candidate who was “a relatively untypical student at this college.” Even though both pledged to represent the students faithfully, in the same language, the untypical student consistently received significantly less support.

Really Arthur? That’s the best you got? One experiment on one campus? That’s your evidence?

And what is it evidence of? Brooks claims that conservative professors get shabby treatment at the hands of their liberal colleagues. The reason is that academic success depends on “intellectual conformity,” and conservatives’ political views do not conform to those of their more numerous and more powerful liberal colleagues.

Does this one study Brooks cites explore the attitudes and actions of faculty? No. Does it measure the rewards faculty receive? No. Does it use variables related to conservatism and liberalism? No. Does it even measure “intellectual conformity”? No.

It asks students who they would prefer to represent them as a student-faculty liaison. And here’s the stunning conclusion of that experiment: students prefer the hypothetical “prototypical” student rep over the “non-prototypical” rep. The typicality did not include political views. The typical student did not say, “I’m politically liberal”; the untypical student did not say, “I’m conservative.”*

Preferring a student-faculty liaison who is typical rather than untypical – is that, as Brooks would have it, “intellectual conformity”?  Even so, it was only among the students who said they “felt certain right now” that the “typical” candidate won. Students who said they were “uncertain” were as likely to choose the untypical candidate as the typical one.

It may well be that conservative academics are victims of discrimination. But the evidence Brooks offers isn’t just unconvincing. It’s an embarrassment. I certainly hope that other conservative intellectuals do a better job of matching their evidence to their ideas.
-----------------
* Here are the candidates’ statements shown to the student participants in the experiment:

“As a relatively typical student at this college, I feel as though I represent
the interests, values, and opinions of the students very well. I will fit in with the culture and climate of the college because I also share these same interests, values, and opinions. As a student of this college, I have deep ties in the community so I also want the college to make decisions that are the best for the students…”

“As a relatively untypical student at this college, I will try to represent the interests, values, and opinions of the students. While I do not share the same interests, values, and opinions, I will do my best to fit in with the culture and climate of the college. Although I may not have deep ties in the community, I will still try to make decisions that are the best for the students …”

The article is “Leadership under uncertainty: When leaders who are non-prototypical group members can gain support,” by David E. Rast III, Amber M. Gaffney, Michael A. Hogg, Richard J. Crisp. JESP 48.3 (May 2012).

Dreamers and the Trump Base

September 15, 2017
Posted by Jay Livingston

People whose life is in politics develop a firm ideology. Ordinary voters have no such need for consistency.

“Word of Deal Bewilders and Angers Trump’s Base,” says a subhead in today’s New York Times about DACA.  The deal in question was Trump’s agreement with his new friends Chuck and Nancy to let the Dreamers keep dreaming for at least another half year. Over on the right, the loud voices are getting shrill. The Times story quotes people like Ann Coulter (“At this point, who DOESN’T want Trump impeached?”), Rep. Steve King, and some talk-radio conservatives. 

But the people who voted for Trump are more loyal to him. Also more ideologically flexible. It’s Trump the person they want, not any particular policy. On some matters, their ardor for Trump has led them to change their long-held views. Russia is no longer a terrible villain. A politician’s private peccadilloes now mean little for his performance in office. Obamacare isn’t so terrible after all.

Given Trump’s campaign rhetoric and the “Build the Wall” chant, you might expect his supporters to be more adamant on immigration. But even before Trump’s change of heart on DACA, his base was soft on Dreamers, though the polls on this are not consistent. A YouGov poll taken September 3-5 asked Trump voters:

Do you favor or oppose DACA, Deferred Action for Childhood Arrivals, which is a policy that grants temporary legal status to “dreamers,” otherwise law-abiding children and young adults who were brought into the United States at a very young age by parents who were illegal immigrants?



The DACA glass is half empty. A third of the Trumpistas are firm opponents. But on the other side a third of the Trump voters actually support DACA, and the rest aren’t sure.

A Morning Consult Poll for Politico taken a few days earlier found Trump voters to be still more accepting of Dreamers and even non-dreamers.
“As you may know, Dreamers are young people who were brought to the United States illegally when they were children, often with their parents. Which of the following do you think is the best way to handle Dreamers?” The poll also asked about the best way to handle “immigrants currently living in the United States illegally.” The choices were:
  • They should be allowed to stay and become citizens if they meet certain requirements 
  • They should be allowed to stay and become legal residents, but NOT citizens, if they meet certain requirements
  • They should be removed or deported from the United States
  • Don’t Know / No Opinion           

Two-thirds of Trump voters wanted to allow the Dreamers to stay. Slightly more than half were OK with granting residence (22%) or even citizenship (33%) to all immigrants now living in the US illegally.

When Trump took office, his net approval was +4 (45% Approve, 41% Disapprove). Since then, he has managed to drive that figure to –14 (39% Approve, 55% Disapprove). His recent change on DACA may have cost him cred with Coulter and other people deeply involved in politics. But it seems unlikely that his support with the public at large or even his base will fall much further.

Algorithms and False Positives

September 13, 2017
Posted by Jay Livingston

Can face-recognition software tell if you’re gay?

Here’s the headline from The Guardian a week ago.


Yilun Wang and Michal Kosinski at Stanford’s School of Business have written an article showing that artificial intelligence – machines that can learn from their experiences – can develop algorithms to distinguish the gay from the straight. Kosinski goes farther. According to Business Insider,
He predicts that self-learning algorithms with human characteristics will also be able to identify:
  • a person’s political beliefs
  • whether they have high IQs
  • whether they are predisposed to criminal behaviour
When I read that last line, something clicked. I remembered that a while ago I had blogged about an Israeli company, Faception, that claimed its face recognition software could pick out the faces of terrorists, professional poker players, and other types. It all reminded me of Cesare Lombroso, the Italian criminologist. Nearly 150 years ago, Lombroso claimed that criminals could be distinguished by the shape of their skulls, ears, noses, chins, etc. (That blog post, complete with pictures from Lombroso’s book, is here.) So I was not surprised to learn that Kosinski had worked with Faception.

For a thorough (3000 word) critique of the Wang-Kosinski paper, see Greggor Mattson’s post at Scatterplot. The part I want to emphasize here is the problem of False Positives.

Wang-Kosinski tested their algorithm by showing a series of paired pictures from a dating site. In each pair, one person was gay, the other straight. The task was to guess which was which. The machine’s accuracy was roughly 80% – much better than guessing randomly and better than the guesses made by actual humans, who got about 60% right. (These are the numbers for photos of men only. The machine and humans were not as good at spotting lesbians. In my hypothetical example that follows, assume that all the photos are of men.)

But does that mean that the face-recognition algorithm can spot the gay person? The trouble with Wang-Kosinski’s gaydar test was that it created a world where half the population was gay. For each trial, people or machine saw one gay person and one straight.

Let’s suppose that the machine had an accuracy rate of 90%. Let’s also present the machine with a 50-50 world – a sample of 100, half gay, half straight. Looking at the 50 gays, the machine will guess correctly on 45. These are “True Positives.” But it will also classify 5 of the gay people as not-gay. These are the False Negatives.

It will have the same ratio of true and false for the not-gay population. It will correctly identify 45 of the not-gays (True Negatives), but it will guess incorrectly that 5 of these straight people are gay (False Positive).
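The arithmetic can be sketched in a few lines of Python. (The function name and the 90% figure are just this post’s hypothetical, not anything from the Wang-Kosinski paper.)

```python
def confusion(n, base_rate, accuracy):
    """Confusion-matrix counts for a classifier that is right
    `accuracy` of the time on both groups."""
    positives = n * base_rate        # actually gay
    negatives = n - positives        # actually straight
    tp = positives * accuracy        # gay, flagged as gay (True Positives)
    fn = positives - tp              # gay, missed (False Negatives)
    tn = negatives * accuracy        # straight, cleared (True Negatives)
    fp = negatives - tn              # straight, flagged as gay (False Positives)
    return tp, fn, tn, fp

# A 50-50 world of 100 people, 90% accuracy:
confusion(100, 0.5, 0.9)   # → (45.0, 5.0, 45.0, 5.0)
```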


It looks pretty good. But how well will this work in the real world, where the gay-straight ratio is nowhere near 50-50? Just what that ratio is depends on definitions. But to make the math easier, I’m going to use 5% as my estimate. In a sample of 1000, only 50 will be gay. The other 950 will be straight.

Again, let’s give the machine an accuracy rate of 90%. For the 50 gays, it will again have 45 True Positives and 5 False Negatives. But what about the 950 not-gays? It will be correct 90% of the time, identifying 855 of them as not-gay (True Negatives). But it will also guess incorrectly that 10% are gay. That’s 95 False Positives.


The number of False Positives is more than double the number of True Positives. The overall accuracy may be 90%, but when it comes to picking out gays, the machine is wrong far more often than it’s right.
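The same point in code: the share of “flagged gay” guesses that are actually correct (what statisticians call the positive predictive value) collapses when the base rate drops, even though accuracy stays at 90%. Again, the numbers are just this post’s hypothetical.

```python
def ppv(base_rate, accuracy):
    """Positive predictive value: of those flagged positive,
    what fraction really are positive?"""
    tp = base_rate * accuracy              # true positive rate in the population
    fp = (1 - base_rate) * (1 - accuracy)  # false positive rate in the population
    return tp / (tp + fp)

print(round(ppv(0.50, 0.9), 2))  # 0.9  -- the 50-50 test world
print(round(ppv(0.05, 0.9), 2))  # 0.32 -- 45 true positives vs. 95 false ones
```

With a 5% base rate, two out of three “gay” guesses are wrong – exactly the 45-versus-95 split described above.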

The rarer the thing that you’re trying to predict, the greater the ratio of False Positives to True Positives. And those False Positives can have bad consequences. In medicine, a false positive diagnosis can lead to unnecessary treatment that is physically and psychologically damaging. As for politics and policy, think of the consequences if the government goes full Lombroso and uses algorithms for predicting “predisposition to criminal behavior.”