Jose L. Duarte

Give me your racists, sexists, and homophobes

2/18/2015

2 Comments

 
Marquette's decision to strip Professor John McAdams of tenure is a disgrace and may influence my long-term plans with respect to the intellectually craven environment that is American academia.

A key pivot point for this story is that a student in a philosophy class was concerned about the tolerance for dissent in that class on the issue of gay marriage. He spoke with the instructor, a graduate TA. She ultimately told him that "homophobic" views would not be tolerated, and added the obligatory pre-emptive censure of "racist" and "sexist" views. (Professor McAdams criticized the teacher's handling of this episode in a blog post, and also criticized prior hyper-PC behavior by the Assistant Dean and the Philosophy Department Chair, which is what I believe actually did him in.)

The student had not expressed homophobic views and seemed articulate. His only specific comment seemed to be the idea that there is empirical evidence of bad outcomes for children of gay couples.

I was struck by the teacher's robotic injunctions against the recurrent and seemingly empty phoneme strings: racism, sexism, and homophobia.

We can't have any of that stuff, cause it's bad m'kay?

I'd like to make this clear. I want to hear racist, sexist, and homophobic ideas. I think you should want to hear them too. I mean it, and I mean it at two levels of analysis.

Level 1: I most definitely want to hear ideas that academic leftists label as racist, sexist, and homophobic.

I want this because experience tells us that the ideologies and theories academic leftists apply in tagging discourse with those labels will be unsatisfying to many scholars on counts of rigor, merit, and framing assumptions. Similarly, we know that the vast majority of the earth's population will not consider those discourses and ideas to be racist, sexist, or homophobic, which might suggest fertile ground.

So we most definitely need to hear, discuss, test, and debate those ideas, given that their prohibition by the current ideological spasms of American academia should be of no interest to us. We should brush away this unscholarly screaming without even slowing down.

Level 2 breaks into near-isomorphic variants. For my purposes here I'm basically aggregating the three variants.

Level 2a: I want to hear ideas that I would consider racist, sexist, or homophobic.
Level 2b: I want to hear ideas that most people who are not American academics would consider racist, sexist, or homophobic.
Level 2c: I want to hear ideas that are in fact racist, sexist, or homophobic.


Why?

First, I have no reason to suppose – to decide in advance – that nothing can be learned, no insights gained, by hearing and perhaps engaging or interrogating robustly racist, sexist, and homophobic ideas. I'm surprised people would just assume that nothing can be gained. This seems unlikely to me, and I would place the burden on those people to prove that nothing can be gained. The null here, or the default intellectual/scholar mode, is the mode of engagement, exposure, listening, considering, weighing, arguing, etc.

Second, I have no reason to suppose – to decide in advance – that there will be no merit or wisdom in canons of racist, sexist, and homophobic thought. That would be strange to me, seems unlikely given what we know about the scope and texture of human discourse, what we know about history, about human psychology, about the marginalization of ideas and peoples, and so forth. I wouldn't be shocked to find a diamond in the rough, a speck of gold in the prospector's pan. I imagine these wisps or chunks of truth or wisdom will be orthogonal to the broader racist, sexist, or homophobic systems to which they belong, but I don't want to pin too much on that assumption. I want to go in clean.

Third, I freely embrace a notion that would perhaps not be controversial in brighter eras. I think it's possible that I'm wrong. I think it's possible that I'm wrong about lots of things. I think it's possible that I'm wrong in the beliefs in which I'm most certain. It surprises me that today's scholars don't seem to imagine a universe where they are substantially wrong.

This also means you might be wrong, any of us might be wrong. I might be wrong, in some sense, about racism, or some subset of it, or some other dimension of it that I can't foresee. What would that look like? Well, I'm never going to be a racist in the sense of malevolence or hate toward humans because of their race. It's not realistic that I could change that much. It's like imagining I was raised by a different set of parents – I wouldn't be me anymore.

There are levels of analysis with respect to racism, sexism, and homophobia that are philosophical. That's where philosophers go to work. Then there are levels of analysis that are empirical, things we can measure, things we can go find out and come back with the answer. That's where scientists go to work. There may be things racists, sexists, and homophobes believe that are simply true as an empirical matter.

For example, there might be stable innate differences in IQ between various racial-ethnic groups. Here I tend to bring the same cautious attitude I invoke above – I think it's nuts for anyone to be settling on the answer to that issue right now, especially racists. It's way too early. Give it another fifty years at least. There are a host of complex cultural and environmental issues that may be in play, including ones we don't know about. There are many known unknowns and unknown unknowns here, and I really don't like jumping the gun. There's no reason to assume that any question we have can be readily answered by some dude in a lab coat in the era in which we live. That's just not going to work out.

So while I don't think this can be settled in the near future, I am open to the idea that there are innate differences and that we will know this in 2060 or 2090. These differences may even be unfavorable to my group, Mexicans, Native Americans, whatever we'd call the brown or the genetic substrate in my case. I do not assume that these sorts of differences matter, or will matter, or that people have to care about them, or that it is rational or ethical to employ heuristics based on them in day-to-day life. Also, I don't assume there will be such differences. I have no idea. Reality is a very complicated place. They might be faint. Who knows, but what to do with that kind of reality is a philosophical question. We also have to remember variance and how that works, get people trained up to not focus on mean differences.

It's the same with the outcomes for children of gay couples. I doubt there's much of an effect there, nor do I assume anyone has to care about such an effect, but I wouldn't want to shut down that conversation. The student in this case was wrong in saying such children "do a lot worse in life." I've not seen large effects, not the last time I checked. The teacher was wildly incompetent in responding to the student's arguments, shifting the issue to single people having kids or adopting or something. That was so lazy and invalid. It's also not a fruitful path because we know that the number of children born out of wedlock has exploded, and we know that there are indeed very real consequences for those children. We know a lot about that.

Social Science Qualifier: From experience, I know people don't come in with the assumptions about probabilistic truths that social scientists take for granted. When I say "those children", I am speaking of statistical effects that rest on mean differences and differences in variance detected by inferential statistical methods. Call it averages. I am not saying anything about you. I'm not saying anything about your family, your background, your parent(s), or your friends. Any given single-parent household might be the best, most loving environment possible in our civilization. Any given single-parent household might send kid after kid to Harvard. There is plenty of room for variance. It's the aggregate reality that is at issue here, the net effect. You can assume that it's a bigger problem for lower income contexts, rather than Murphy Brown affluent professional single mother situations. There do appear to be father-specific benefits and all sorts of interactions, but this is always about aggregates, not any one context.

What we do with statistics about children of gay couples or children born out of wedlock is a whole different journey than the empirical journey we just shared. This is philosopher-hat business. You get to decide whether or how to use these empirical findings, how to situate them in a broader context or a political platform. Let many flowers bloom. We need many voices. I for one am not going to stop supporting gay marriage if it turns out that children of gay parents have 4-point lower average SAT scores than children of straight Catholics. See what I mean? An effect doesn't imply a political position. I'm not a utilitarian, certainly not a knee-jerk rationalistic drowning in data utilitarian. (I do think we have a real problem with children born out of wedlock. The conservatives were simply right in their intuitions about how that would impact children and family life. You should give them props on that one.)

Back to Marquette, the bastards. I'm worried about scholarship in our time. Do people have no sense of history? Do they have no sense of the grand sweep and our place in it? Do they have no sense of the enormous range and complexity of fruitful inquiry and scholarship? Are they really that small that their ideology reduces to empty phoneme strings and incantations about racism and all the other baddies? Do they not understand that some or all of their ideological tenets can be disputed by capable, rational, and benevolent thinkers? Do they really not consider the possibility that they may have gotten something wrong? What the hell are these people doing in a university?

Do they not realize that modern American academia is a culture, and that their culture is going to yield a different set of experiences and insights than people who come from other cultures, or even twenty miles west? Why would their culture be better than all the others, with a complete package of The Truth over the Iowa farmers, the Montana ranchers, the Brooklyn ballers, the Phoenix suburbanites, or the church choir?

I hate cowardice, and these are cowards. Cowardice will have some implications for the quality of scholarship and how much longer we can keep the lights on. What bastards. Give Professor McAdams his damn chair back.


Significance

2/15/2015

3 Comments

 
I was struck by this quote from a Forbes piece on the secondhand smoke research.

"there’s no such thing as borderline statistical significance. It’s either significant or it’s not."

It's attributed to a journalist named Christopher Snowdon (I don't know who that is.)

It's false, and I think it's important for us to convey a clearer message to the public about what statistical significance is.

tl;dr: It's a business decision, and by the way, how many fingers do you have? (thumbs included...)

Significance is not a binary or discrete property of a scientific finding. Our convention in social science, and I think in lots of biomedical fields, is the .05 threshold. I'll return to this.

The statistical significance of an effect is expressed as a p-value: the probability of drawing a random sample whose measured characteristics depart from the null at least as much as our sample's do, if the null hypothesis is true. Note that this is not the same thing as saying the likelihood of our research hypothesis being true is 1 – p, or 95% or greater given our standard .05 threshold. Significance is often mis-explained as the inverse likelihood of our hypothesis being true. That's not what it means. And there are other assumptions, particularly regarding normal distributions, that will impact the meaning of all of this.

By measured characteristics, I mean a sample that looks like our sample in the study. So if we conduct a longitudinal study with a large sample of women and track them on variables like lung cancer and passive smoking, we end up with X% having lung cancer, Y% having lived in a home with a smoker, Z% who have lung cancer and lived with a smoker, and variance on other variables like length of time a person has lived with a smoker, age, race, lifestyle, etc.

The null hypothesis is that passive smoking does not cause lung cancer in nonsmokers – that there is no relationship between these variables (that would be one of several hypotheses in the actual study referenced by Forbes, because they also tracked smokers.)

So significance here means the probability of drawing a random sample from the population that shows group differences at least as large as the ones we see in our sample, assuming there is in fact no link between passive smoking and lung cancer rates (that the null hypothesis is true.)
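That definition can be made concrete with a short simulation. This is just an illustrative sketch with made-up numbers (a 2% lung cancer base rate, 1,000 women per group) – not the actual study's data. We simulate the null hypothesis directly: both groups share the same rate, and we count how often sampling noise alone produces a difference at least as large as some observed one.

```python
import random

def simulated_p_value(n_per_group, base_rate, observed_diff,
                      sims=1000, seed=1):
    """Monte Carlo p-value: assume the null is true (both groups share
    base_rate), draw many random samples, and count how often chance
    alone yields a group difference >= the observed difference."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        a = sum(rng.random() < base_rate for _ in range(n_per_group))
        b = sum(rng.random() < base_rate for _ in range(n_per_group))
        if abs(a - b) / n_per_group >= observed_diff:
            hits += 1
    return hits / sims

# Hypothetical numbers: 2% base rate, 1,000 women per group.
print(simulated_p_value(1000, 0.02, 0.000))  # no observed difference
print(simulated_p_value(1000, 0.02, 0.015))  # a 1.5-point difference
```

With no observed difference, every random sample drawn under the null shows a difference at least that large, so the p-value is 1.0. A 1.5-percentage-point gap, by contrast, is rare under the null, so the p-value lands well below 0.05.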

We can see a few things here. Given typical sample sizes, if 2.00% of women who lived with a smoker get lung cancer and 2.00% of women who never lived with a smoker get lung cancer, there won't be a significant effect. The core reason is that this is exactly what we'd expect to see if the null hypothesis is true. If there's no actual link between these variables in the population, it's likely that we'd draw a random sample that looked like ours – a sample with no differences between the groups. This likelihood goes up as the sample size goes up. In this kind of scenario, your p-value will be very high – at or near 1.0 when the observed difference is exactly zero. The particular value doesn't matter – what matters is that it's well above our threshold of 0.05 (and more importantly, that there is no difference between groups, no effect.)

If we do see differences between groups in our sample, the p-value will be lower, because if the null hypothesis is true, we wouldn't expect to see such differences.

How low that p-value goes will depend on the size of the difference between these groups (the effect size) and the sample size. As the effect size goes up, the p-value goes down because it becomes less and less likely that we'd see such differences in a random sample if the null hypothesis is true.

If the sample size goes up, the p-value goes down, because a larger sample size reduces the likelihood of random sampling error. It's like flipping a coin over and over. If you flip it only three times, you could easily get three straight tails, but as you keep flipping you'll get to a more or less even split of heads and tails.
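The coin-flip intuition is easy to check with a few lines of Python (a toy sketch, not anything from the studies discussed here): a fair coin flipped a handful of times can look badly lopsided, but as the flips accumulate, the heads fraction settles near one half – which is why larger samples leave less room for sampling error.

```python
import random

rng = random.Random(0)

# Flip a fair coin n times and report the fraction of heads.
# Small n can wander far from 0.5; large n hugs it.
for n in (3, 30, 300, 30_000):
    heads = sum(rng.random() < 0.5 for _ in range(n))
    print(f"{n:>6} flips: {heads / n:.3f} heads")
```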

As I said, our threshold is 0.05, meaning a 5% or less chance that we would draw a sample like ours if the null hypothesis were true.

Why .05? At the margins, it's arbitrary. What's the point of a threshold? The point is to have some standard that reduces Type 1 error – detecting an effect that is not real. At the same time, we want to be able to talk about effects and to report findings that are likely to be real.

Scientists could have settled on .10 or .04 or any of a number of values. Like I said, at the margins it's arbitrary. The specific choice of .05 is partly due to the fact that you probably have ten fingers. You might remember I asked you to count them. A lot of our choices of thresholds and rules of thumb are driven by the fact that we use a base-ten number system. Five is half of ten, and so we tend to settle on values that are multiples of five or ten. There was almost no chance we would've chosen .04 or .06. Those numbers don't satisfy us the way fives and tens do. (If humans had eight fingers instead of ten, we might very well have chosen .04.)

As you can infer from above, significance is a continuum. We could have a p = 0.08 situation and that effect could easily be a true effect. In fact, an effect with p = 0.30 could be a true effect. But especially in that case, when there's a 30% chance of drawing a sample like ours even if the null is true, we don't want to report that as significant. Effects with 0.06 or 0.09 p-values, though, are often reported, and should be. We report them as something like "this was significant at p = 0.06" or "this was marginally significant at p = 0.09". Note that we're still using the word significant. We can use that word given any p-value, as long as we include the p-value.

That's why Snowdon is wrong. The choice of 0.05 is a business decision that achieves a good tradeoff in our levels of Type 1 error vs Type 2 error (failing to detect a true effect.) But there's nothing natural or inherently meaningful about p = 0.05. It's not a value derived from nature, like Planck's constant. It wasn't discovered. There's no "significance" in nature. Like I said, it's a business decision.


On prestige of universities vs. graduate programs

2/14/2015

0 Comments

 

I wanted to clarify something based on a comment by a supporter on the discrimination post.

I think this clarification is useful because from my own experience, the general public doesn't know that the prestige of a university has virtually nothing to do with the prestige of any given graduate program at that university. The commenter on that post was defending my credibility as a scholar or smart person based on the fact that I was accepted to UC – Berkeley's social psychology program. I think this is a common heuristic, and I want to make sure ASU gets its due.

When it comes to PhD and other graduate programs, it's only the prestige and quality of the particular program that matters.

The prestige and selectivity of the university as a whole is much less important to scientists and prospective graduate students. The public prestige of universities is largely driven by their selectivity at the undergraduate level, average incoming SAT scores, etc.

Within the field of social psychology, getting into Berkeley's program is hard because they have a good program, not because they're Berkeley.

However, it's not harder than getting into Arizona State's program. The ASU program is elite. Getting into UNC - Chapel Hill was probably slightly easier than getting into ASU's program at that time, even though UNC is a public ivy and much more selective than ASU at the undergraduate level. In any case, there aren't big differences in the selectivity of these programs.


ASU's program is well-known for its focus on evolutionary psychology, with researchers like Doug Kenrick, Steve Neuberg, Lani Shiota, and others drawing from that framework. There's a lot of cultural psychology work too with researchers like Adam Cohen and Virginia Kwan, who also employ evolutionary approaches. For decades, ASU was home to The Master, Robert Cialdini, the leading influence and persuasion researcher in the world, in whose lab I worked as an undergraduate research assistant. He's emeritus now, but we know he's here writing his book when we see a 1965-ish Ford Mustang parked outside.

Graduate students tend to choose programs based on the overall prestige of the program along with their perceived fit with the program. Additionally, depending on the structure of the program, a graduate student might be applying to work with a particular researcher/advisor more than applying to the program per se. Some programs are mentorship programs, meaning the Jedi/medieval model of master and apprentice, while others are more "programmatic" programs, though I don't think it's a clean line or a formal decision in many cases. In any case, the decision to choose a particular program might be driven solely by one researcher/adviser who happens to be in that program.

Just wanted to make sure ASU got its props. It's a top program. ASU gets mocked a lot in general media, 30 Rock, etc. based on its rep as a party school, but that has no bearing on any of the graduate programs, many of which will be world class. I've been very lucky in that the quantitative psychology programs at both UNC and ASU are among the best in the world. (We get all our statistics training from the quantitative faculty.) One ranking has them stacked at #10 and #11, though I don't trust rankings of PhD programs too much. Paraphrasing von Hayek, all knowledge is local. The knowledge of which programs are best is often held only by people in the respective fields, not easily captured by US News and whomever.





Comment on Verheggen et al.; climate consensus research

2/13/2015

18 Comments

 

Here's my recent Comment on Verheggen et al. in Environmental Science & Technology. 

Their core argument in their Reply is that all the non-climate scientists who were invited to participate in this survey because of their climate-related papers – all the psychologists, sociologists, Marxists, economists, palladium experts, etc. – would deny having done any climate-related work, and would thus not be a significant portion of the responses to the survey. This is nonsensical and a waste of everyone's time.

Background: The authors don't know who is in their data, due to inadequacies in their design. That's the key fact that frames this issue.

1) We don't know who is in this data.

2) We know vast numbers of non-climate scientists were invited.

3) We don't know the actual results of this study. That is, we don't know the results that are strictly based on qualified experts – climate scientists. They should just retract, or the journal should. We might need to deploy arms-reach retraction buttons to grease the wheels for pro-science decisions, for reasons I explain below.

The larger issue here is that there is likely nothing happening in science right now that is of lower quality than climate consensus research. It's a disaster. Much of the research doesn't meet anyone's standards for science. It's pre-scientific. Many of these studies are politically motivated junk that we couldn't possibly draw any inferences from.

The junk studies report the highest consensus figures. They're inflating the consensus, probably distorting people's estimates of certainty, given the nature of human minds and how "97%" might be processed. (There is no 97%. That was a scam, as I'll explain below.)

This is a terrible distraction, because it casts doubt on the whole premise of a climate consensus, and it risks enormous harm to the reputation and standing of science in the public mind. Skeptics see this garbage and find it satisfying and convenient for their prior commitments to climate skepticism. They may not pay attention to the valid studies that show a consensus, albeit a smaller one. It's unreasonable to expect the general public to wade through all the junk and find the good stuff.

There are valid studies. There is almost certainly a consensus in climate science with respect to human-caused warming. It's not as though anyone has done a survey and reported a 40% consensus or anything like that. There are no disconfirming findings, to my knowledge. Every survey reports a consensus well north of 50%.

The junk studies started with Oreskes (2004) and mimic her methods. That study is not a study. It's a one-page paper whose methods section is a paragraph or two, that offers no detail or validation of its methods. We don't even know how these subjective ratings of abstracts were conducted, or by whom. I'm slowly getting my head around the idea that she did them all herself, apparently. That's very confusing, for someone to basically say, hey I read 900 abstracts, decided what they mean, and none of them disagree with human-caused warming. I'm not a climate scientist or anything, but here's my one-page paper. (Oreskes is a staunch environmentalist activist who in a recent book projects a collapse of civilization because of our anti-environment ways. Also note the incredible fallacy in demanding to see explicit disagreement with a proposition or hypothesis, and treating a lack of explicit disagreement as positive support for the hypothesis. There are several problems embedded in that bizarre supposition.)

Subjective rating is a social science method. No social scientist would ever submit the results of a subjective rating study where he alone did all the ratings and had an ideological conflict of interest with respect to the outcome.

This is all a horrible joke. Let me pause and note that I have never been more confused than I have been over the past year by all the fraud and all these bizarre junk studies. It's disorienting. This can't be what science is. Science is this precious, wondrous thing. It's arguably the best thing humans do. Political ideology is eating science alive. This collapse of integrity, the incredibly bold acts of fraud and scientific authorities' attempts to protect that fraud, the apparent lack of serious peer review and of even minimal methodological standards, this is all a disaster. Science can't be this. Politics is just killing us right now. Politics is acid on science. It always has been. But I think our era is more political than many other eras. I think the influence of political ideology in academia is at a historic peak.

The Oreskes method includes searching climate science literature with plain English phrases like "global warming". All the ideologically driven junk studies that copied her method likewise searched on "global warming" or "global climate change". That was it. They take the results of these searches and include them all in their mystical counting rituals. This is what Verheggen et al. did.

In fact, Verheggen et al. did not even bother to uncheck the Social Science and Arts & Humanities boxes in their search. (This is the Web of Science index.)

I'm sorry, I know this is very negative, but this honestly isn't even undergraduate-level work. You could get a kid to do this. This is so awful that it's a disgrace that any of these studies were published in 21st-century scientific journals. Politics is the reason they are published. Politics is just devastating us. It's turning science into a scam.

These people had no idea how to search scientific literature. This is confusing. Did they have no training? They think climate scientists are going to commonly say "global warming" in their titles or something. This is bizarre. 21st-century scientific fields have their own technical terminology. We could never search scientific literature with casual English phrases. Climate scientists don't talk like that. They'll be talking about aerosol spectra, ENSO, and seasonal variation in CFCs.

The search these people did has extreme asymmetries in its results. Searches on casual English phrases like "global warming" will capture lots of non-climate scientists who would use less technical language – most especially activists and people who are framing their non-climate science work around warming.

The search was never validated. Searching scientific literature is a well-documented discipline, most notably as a core feature of meta-analyses. It's not a casual thing. You have to test your search. A basic way to test it would be to compare the results to known sets of climate science papers, for example from climate scientists' CVs. The Cook 97% scam conducted this same search. Look at how many papers they included by James Hansen, compared to Richard Lindzen. That study was based on counting the papers. Those people conducted a bizarre casual English search, got thousands of papers, had environmental activists who had profound conflicts of interest subjectively rate the abstracts, à la Oreskes, and then counted the papers, sorting them into their rating categories. They simply... counted... the papers. They thus arbitrarily gave some scientists over a dozen votes and other scientists zero votes. They treated these votes, these papers, as quanta of consensus.

That is something I hope we never have to deal with again. The sheer dimwittedness of this stuff scares the hell out of me. I'm not sure that we can have a civilization if people can do that and plant a 97% meme all over the world. If people can do what they did, and get the President of the United States to tout their "study", holy cow. This seems incredibly dangerous. This is like watering your crops with Gatorade. We can't do too much stuff like this and expect to have a reliable food supply, technology, hospitals, smartphones, a stable civilization. We're not going to have nice things.

Anyway, for good consensus research:

Bray and von Storch are outstanding. Their studies are real, and bear the customary marks of scientific effort.

The AMS studies are excellent, and also feature valid scientific methods. You can read their latest study, from July 2014, here.

Note that the American Association for the Advancement of Science (AAAS) only cited the junk studies in their What We Know missive. That was incredible. (I'm going to use the word incredible a lot this year.) They cherry-picked the studies that gave them the inflated, false 97% figure they apparently wanted. I couldn't believe it. They completely ignored virtually the entire body of research on the consensus. They ignored the two sources above, which are of massively higher quality than the bizarre "studies" they cited, and much more up to date. They ignore every sub-97% study. Instead, they cited Oreskes, the one-pager from 2004, as one of their three sources, along with Cook et al's bizarre counting study (which is also a fraud case, with three separate fraud acts, three different lies about their methods...unbelievable.) I doubt the AAAS membership are aware of this, or would have paid much attention to it. I assume someone's going to have to retract that report if they want to be considered a scientific body in the future. The report is incredibly misleading. It's simply lying to the public, inflating the consensus by citing ten-year-old one-page studies that no scientific body could ever cite. That's unconscionable, and again this behavior is killing us. AAAS is sabotaging the reputation and trustworthiness of science as a whole. I would not assume that these scandals, these politically-driven frauds, will have no impact on the public's respect for science. They know we're lying to them. Someday we might really need them to believe us.

AAAS is not some two-bit organization known for scams. They're much more serious than IOP. They're an august scientific body. When I think of august scientific bodies, I think of AAAS and the National Academy of Sciences. We're running out of august and honest scientific bodies. They're falling like dominoes to political ideology and fraud. We need to have a home, a place to go for truth, integrity, and sober science. We need a place where the average applied IQ is somewhat north of 80.


We can't run out of scientific bodies. If science has no home, no reliable non-fraudulent, non-political institutions, I think that could seriously weaken our civilization. We're nearing a point where science will be broadly associated with fraud that particularly serves left-wing political agendas. We're nearing a point where the rational knower would be well-advised to ignore what contemporary scientists say, because employing such a heuristic would lead to the most accurate set of beliefs about the world, that one's ratio of true to false beliefs would be maximized by ignoring scientists. That scenario is entirely possible as an epistemic reality given our current course. Another ten or twenty years of this, and we're there. The "deniers" could end up being the rational knowers, the pro-science among us.


Note that there are some very large cases emerging right now, where the scientific consensus was wrong. There's a massive new study in Annals of Oncology by Wang et al. that reports that second-hand smoke does not significantly increase lung cancer risk (in women.) A review in PLOS One by Yang et al. finds no link between second-hand smoke and breast cancer. The second-hand smoking case is emerging as something that might never have been well-researched. That whole issue seemed political, where people were using the political process to coerce property owners (bars, restaurants, etc.) into enacting their preferred comfort and lifestyle settings, and using the machinery of the state to enforce various prejudices against smokers, who are now corralled into holding pens hundreds of feet from building entrances.


And the Washington Post reports that the US government is poised to withdraw its warnings about cholesterol. On both second-hand smoke and cholesterol, we were sold a consensus, sold "Science says X"-style propositions. Such propositions are generally barbaric: they take no account of where science stands in our era (an arbitrary era), of whether the methods employed in that arbitrary era are capable of giving us a workable grasp of reality on any arbitrary issue Y, or of the fact that our scientifically sourced grasp of reality is mediated and shaped by political processes, the media, and the ideological and financial biases of scientists. Scientists are wrong all the time. We have to be. In a sense, it's almost part of the job, bias notwithstanding. We need to be smarter about how we understand science.

In any case, there is indeed a consensus in climate science. It's probably in the 80s though, maybe as low as the 60s for some questions. It's not very meaningful to speak of "the" consensus, since there are a number of different propositions one could pose to climate scientists, and the two sources above do a great job of posing a range of relevant propositions. What you do with that consensus is up to you. There will be all sorts of philosophical, ethical, and political factors that people will reasonably apply, and there will be a host of different perspectives along those dimensions. But I don't think climate change skepticism per se is justified. I would love to be wrong.

There are new arguments and insights in my Comment below, so I encourage you to read it. To my knowledge, no one has previously made the argument about mitigation studies, for example.



I was denied admission to a PhD program because of my perceived political views: reflections of a sellout; how diversity would strengthen social science (Updated)

2/10/2015


 Some of my mentors advised me to take down the post from July 22, 2014 about discrimination because it would hurt me in the job market – that hiring committees would discriminate against me (again) for my perceived political views, or would discriminate against me for documenting past discrimination.

So I took it down last week. I now think that was a terrible mistake. People were looking for the post to cite in journal articles and e-mailed me to ask where it was. That sparked more reflection on this issue. This post is the result.

I think I sold out, and I will not do it again. I've reposted the discrimination story below. If modern American academia is so intellectually emaciated that it would discriminate against someone for having criticized something Jimmy Carter said seven years ago, well... I don't have a closing clause for that sentence.

This is ridiculous in part because there's really nothing to see here. There are no raging politics here. There's not much ideology at play. I'm not a conservative (not that there's anything wrong with that), or a theist (not that there's anything wrong with that). This could be an issue only in an incredibly tribal ideological environment, and I think most academic departments are operating at a higher level than that (i.e. I don't think discrimination will cripple my career.)

Discrimination against dissenting voices harms the field, and is morally repugnant. Let me offer some reasons why.

Our upcoming paper in Behavioral and Brain Sciences documents the impact political bias has had on the validity of some social science research (only some.) Of all the points we make in the paper, I care most about the practices that undermine scientific validity (and much of my focus was on that section). I care about the validity issues far, far more than I care about the political demographics of the field, how those demographics have changed over time, issues of hostile climate, or discrimination. Validity is everything. Science is centrally about method – valid methods.

I'm appalled at the idea that I could be excluded from the field for writing about my past discrimination, or for pointing out scientific scams, or for writing what I'm writing now. We need more diverse voices in social science. We need them for functional scientific reasons. Ideally, politics should have no place, and if there's anything I would change about the paper, it's that we focused on politics as the only level of analysis. We spoke of political diversity. I would have preferred to focus on intellectual diversity – to also get at deeper levels of analysis where the field is homogeneous in some of its assumptions about human nature, the inferences that can be drawn from certain inferential statistics, and the particular brand of empiricism we operate with. We voted on most things, and I lost that vote. The paper is outstanding, far better than anything I could have produced without my collaborators, so I don't care too much about that omission. My point is just that this is deeper than politics, that my focus is more at the level of methodological validity, not simple political bias.

In the present era, political ideology is remarkably influential in academia. And it's one ideology in particular, which raises several challenges. Ideological assumptions are baked into research questions and measures in ways that I doubt the researchers are even aware of in many cases. Given the starting conditions of ideological homogeneity, this is inevitable if we assume human researchers with normal human minds. We need people in the field who can call out bias, and who can articulate why a method is invalid. We need people who will call out cases where human beings have been harmed by research that falsely linked them to damaging beliefs because they happened to be conservatives. A vigorous science needs a wide range of vigorous voices. Mexicans have some utility here. And we benefit from researchers who investigate prejudice toward overachieving minorities (some of my empirical work) instead of looking exclusively at prejudice toward poorer minorities (if you think poverty is ipso facto interesting and exclusively important, you're operating with some ideological assumptions, not a descriptive framework.)

Examples of the bias and its consequences

1. We've got researchers asking participants if hard work tends to pay off in the long run and labeling it "rationalization of inequality" if they say yes (Napier & Jost, 2008). In that same paper, they also measured simple attitudes toward various forms of inequality, and treated those attitudes as, again, "rationalization of inequality". By pure fiat, by a wave of the hand, they converted an established attitudes measure into a measure of rationalization of those same attitudes, with no attempt to measure rationalization. So a non-leftist attitude is rationalization, by definition apparently. That's incredible.

2. We've also got researchers asking participants if they agree with the analogy "The earth is like a crowded spaceship with limited room and resources", and calling it "denial of environmental realities" if they say no (Feygina, Jost, & Goldsmith, 2010). In other words, we've got people treating ideological canards as descriptive realities. Even analogies are being confused for facts. Analogies. The inability to distinguish between 1) ideological tenets, value judgements, and favored analogies, and 2) descriptive, observable reality is an epistemic and scientific collapse. The importance of that distinction can't be overstated. Scientists have to be able to tell the difference. Social science is especially vulnerable to this conflation, so social scientists should be most alert to it. This collapse might be limited to a couple of labs, but if we don't address it and eliminate it, our standing as a scientific field will be at risk.

3. We've got people reporting that Americans would prefer a flatter income distribution, when in reality they asked participants to imagine a universe where their own incomes would be randomly determined – i.e. a casino universe (Norton & Ariely, 2011).

Actually, I was wrong about the Norton and Ariely paper. They also asked participants for their preferences in a non-Rawlsian, real-world context, so I've deleted the remainder of this section. I apologize to Michael Norton and Dan Ariely for the error. It was inexplicable and inexcusable. (Joe Duarte -- October 20, 2015.)

4. We've got people measuring a purported fundamental personality trait of Openness to Experience by asking participants "I see myself as someone who..."

... is ingenious, a deep thinker.

... values artistic, esthetic experiences.

... is inventive.

... is sophisticated in art, music, and literature.

... likes to reflect, play with ideas.


You've got to be kidding. These items are obviously grounded in – and biased in favor of – academia. This core personality trait of "openness" is measuring intellectualism and urban sophistication. These items are invalid on their face, and should not have lasted this long.

How are people in rural communities going to show up on this scale? How about people in developing countries? How would they express their openness to experience? Where do we give them a voice? They don't have opera houses, symphonies, and gallery openings with which to express their "sophistication" in art, music, and literature. They're structurally excluded and marginalized here. The items are not situated at the level of analysis necessary for a valid underlying human personality construct that is commensurable across cultures and backgrounds. We're not even speaking their language. I guarantee that many people in rural communities would be embarrassed to say that they are "ingenious" or "sophisticated". It would be unseemly to them, narcissistic and snobby. They might never use words like esthetic or inventive, not because they're stupid, but because they live in a different world and don't necessarily have use for the same terminology that contemporary intellectuals use.

This is deeply offensive. We're denying these people a voice. I grew up in a rural copper mining town with a population of 2,000. I know these people. I am one of these people. The bias of these items should be obvious to anyone, but it will be most obvious to people who never lived within 100 miles of an opera house. If openness is a real personality trait (I doubt it), commensurable across cultures, then we might ask them:

If they enjoy learning new things from their kids.

If they enjoy looking at the stars at night (FYI, seeing the stars well outside of a city is a radically different, much more powerful experience).

If they enjoy being in the woods.

If they enjoy the tranquility of being on a boat at the lake.

If they enjoy figuring out how something works and repairing it, physical things, like car engines, transmissions, or TVs.

If they enjoy the uplifting experience of church.

If they enjoy reading.

Note also that rural communities are likely to be conservative, and urban sophisticates are likely to be liberal or libertarian, so we've rigged a systematic political bias against conservatives showing up on our "openness" measure. The fact that conservatives score lower on openness is widely reported and savored by politically biased and incurious science writers and sloppy scientists. Take another look at the items. That's what "openness" is. That's what conservatives are scoring lower on. It's urban intellectualism and perhaps narcissism. Given its profound cultural bias, we have no justification for calling it openness.


5. Duke political scientist Evan Charney makes some powerful points in his upcoming commentary on our BBS paper. He reports that the longer version of the five-factor personality measure – the NEO PI-R – includes the following items in its Openness to Experience scale:

I believe that we should look to our religious authorities for decisions on moral issues.

I believe the new morality of permissiveness is no morality at all.

I believe that the different ideas of right and wrong that people in other societies have may be right for them.

In what sense do any of these items bear on a personality construct we would call openness to experience? "Openness to experience" here is isomorphic with liberalism. These are political and philosophical positions. The first item is a measure of traditional religious faith. The second measures endorsement of moral subjectivism, which has become popular in academia. The third measures endorsement of cultural relativism, which has become popular in academia. This is pure politics. As we saw above with the other studies, liberal ideological tenets are being smuggled into the measures like jail cake. Openness to experience is simply being defined as one's agreement with the ideological framework favored by contemporary American academic culture. This is nonsense. There's no science here.

Why would they?

There's been lots of noise lately about how conservatives don't respect science or the intellectual sphere. Why would they? They know we're politically biased. They know that today's academic and scientific communities are dominated by liberals and leftists. They're not morons. Anyone who has lived a significant number of adult years is going to have common sense intuitions and wisdom about the nature of human bias and human frailty – intuitions that are fully validated by decades of flagship social psychology research on motivated reasoning and assorted cognitive biases. They know we're biased. They know that academia doesn't like them, doesn't respect them or their values. Why in the hell would they trust us? Why would they trust us given that we measure openness – and loudly report their paucity of it – based on self-reported "sophistication"? Vanity is a vice to them. In what universe, given what priors, would it be rational for members of one subculture to flatly believe everything scientists claim when they know that science is dominated by a subculture that despises them?

Conservative distrust of academia is trivially easy to justify given what we know about bias – and what they know about bias. Let me remind you of examples 1, 2, 3, 4, and 5 above – that is what we do. Our bias isn't conjecture – it's simply a fact (and to be fair, we should expect any homogeneous ideological or intellectual community to be biased, and for some of those biases to be deeply embedded, nonconscious, or implicit, given basic facts about human nature – I don't assume that liberalism is itself to blame). Given all of this, it makes perfect sense that the most highly educated conservatives are the most distrustful of academia (see Kahan). It seems eminently rational and wise. They would have to be dimwitted to trust us given the basic facts here.

Taxonomy

The above examples are profound errors. They undermine the validity of the studies, and they are pervasive. It's important that these errors be caught and eliminated. They're too large, too severe, for a modern science. Our errors need to be much smaller. The field as it stands doesn't catch these errors, and these studies are published in top journals. No one has attended to these issues, or developed the underlying epistemological framework that social science may need to prevent them. We lack a taxonomy with which we could better classify and understand these validity issues. We don't even have labels for these phenomena, and that cripples us. Human cognition is heavy on categorization and conceptualization (and those are often isomorphic, in that having a concept for something serves to categorize it, like how the concept "flower" bundles together all past, present, and future flowers as a category of thing, and distinguishes flowers from trees, pine cones, and hot dogs.) Humans have a hard time grappling with things for which they have no ready concept or label (see Lisa Feldman Barrett's wonderful work on how having a concept for an emotion – a word for it – shapes whether and how we experience that emotion).
I think my methodological work here is valuable to social science, especially the upcoming systematic integration of these issues into an epistemological taxonomy, the new statistical methods, etc.

Moving on...

6. And of course we've got people linking climate skepticism to belief that the moon landings were a hoax when only 3 out of 1145 mysterious blog survey participants took those positions (Lewandowsky, Oberauer, & Gignac, 2012). This can't happen. It harms innocent people. We simply can't go around harming innocent people, certainly not our participants. Free market endorsement was linked to rejection of the HIV-AIDS link when 95% of free market endorsers agreed with the HIV-AIDS link. We might linger on the impact such a planted link might have on people's dating prospects and romantic lives. These false and incredibly damaging links were widely reported in the media. Not just by hacks like Chris Mooney, but by The New York Times and Scientific American.
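The arithmetic here is worth seeing directly. Below is a toy simulation (synthetic data constructed for illustration, not the actual study's dataset) of how 3 extreme respondents out of 1145 can, by themselves, generate a "statistically significant" correlation between two survey items that almost nobody endorses:

```python
# Hypothetical illustration: 1142 respondents reject a moon-hoax item
# (1 on a 1-4 scale) and hold assorted views on a climate item, while
# 3 respondents endorse both items at 4. The numbers are invented.
import math

def pearson_r(xs, ys):
    """Pearson correlation: sum of deviation products over the product
    of deviation norms (the n factors cancel)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 1142 "ordinary" respondents with mixed climate views, plus 3 dual endorsers.
climate = [1 + i % 4 for i in range(1142)] + [4, 4, 4]
moon    = [1] * 1142                       + [4, 4, 4]

r = pearson_r(climate, moon)
n = len(climate)
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)  # t statistic for H0: r = 0

print(f"r = {r:.3f}, t = {t:.2f}")  # r ≈ 0.069, t ≈ 2.32, i.e. p < .05
```

Drop those three rows and the "link" does not merely weaken: the moon item is left with zero variance, so there is literally nothing to correlate.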

No one looked at the data, except for climate skeptics, and they were ignored. No one in the field called out the scam. Quite the contrary – the Association for Psychological Science proudly reported the false findings in its news magazine, and fabricated new ones. They just made up findings that were never reported in the paper, and which are completely false, such as that endorsement of free markets predicted belief in the MLK assassination conspiracy and the moon hoax nonsense. There were no such links in the data. In fact, free market endorsement was negatively correlated with belief in the MLK conspiracy, and uncorrelated with the moon business. APS just made it up. They committed incredible, incomprehensible fraud. They've also refused to correct it, which any random small-town newspaper would do. I've never heard of a scientific body fabricating findings whole cloth. It's deeply disorienting, makes me want to double-check my sensory perception and connection with reality. How could APS fabricate? Who fabricates? Someone needs to either purge APS of those fraudsters, or purge the S and revisit their legal classification as a 501(c)(3). (More on the APS fraud next week.)

Apolitically, we've got researchers misusing statistics, from linear correlation to mediation to SEM, to make inferences that do not follow from those methods. Correlations are often converted into likelihoods, by both scientists and science writers. For example, if X is positively correlated with Y, people are taking this to mean that if a person is high on X, they're likely to be high on Y. This is wildly incorrect. Most researchers know this, but we're not doing anything to correct those who don't and science writers who convert our correlations into likelihoods, like Adam Corner. We've got lots of significant correlations that are driven by variance on one side of a scale, which will impact the inferences we can make from them. In general, natural human languages (e.g. English) aren't very good at describing probabilistic truths, which are the only kinds of truths we report. This is a deeply interesting issue to me, and I plan to do a lot of work there.
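The correlation-to-likelihood error is easy to quantify. A minimal sketch with synthetic data: even at r = 0.3, a respectable effect size in social psychology, a person above the median on X has only about a 60% chance of being above the median on Y, barely better than a coin flip:

```python
# Synthetic demonstration (not drawn from any cited study): a correlation
# of r = 0.3 does not license "high on X means likely high on Y".
import math
import random

random.seed(0)
r = 0.3
n = 100_000

# Draw (x, y) from a bivariate standard normal with correlation r.
xs, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    y = r * x + math.sqrt(1 - r * r) * random.gauss(0, 1)
    xs.append(x)
    ys.append(y)

# Among people above the median on X, what fraction are above the median on Y?
y_given_high_x = [y for x, y in zip(xs, ys) if x > 0]
p_high_y_given_high_x = sum(1 for y in y_given_high_x if y > 0) / len(y_given_high_x)

# Bivariate-normal theory: P(Y > 0 | X > 0) = 1/2 + arcsin(r)/pi ≈ 0.597.
print(f"P(high Y | high X) ≈ {p_high_y_given_high_x:.3f}")
```

The closed-form value comes from the standard orthant probability for a bivariate normal; the simulation just makes the point concrete without any distributional hand-waving.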

Conclusion

We need more researchers and scholars to identify these issues. We need to invite them in, not keep them out. We need more social psychologists who grew up in rural communities. We need people who are willing and able to advance our epistemological framework, and thus our science. As a field, we need to understand that the breadth and depth of fruitful scholarly inquiry might extend far beyond any present-day political ideology. And really, we do need more Mexicans, more African-Americans, and perhaps most of all more Native Americans. I know of two Mexican social psychology faculty in America. Two isn't enough. There are too many obvious vulnerabilities in doing behavioral science from the perspective of one culture.

I was deeply disturbed at the discrimination I've experienced in part because the loud commitment to racial-ethnic diversity was instantly sacrificed in favor of prejudice against a perceived conservative. It made the field's commitment to bringing in people of color and first-generation Americans seem less than serious, a subordinate and symbolic proxy for ideological conformity. An empirical field needs to put ideology in its proper place, which is no place at all. And my ethnicity is perhaps not unrelated to my willingness to speak out. Embracing diversity means embracing cultural differences, and those differences will manifest at multiple levels of analysis, in all sorts of domains. I'm not sure people think through what diversity means on the ground. The passivity of the field with respect to calling out scams and fraud might well be related to the dominant racial culture of the field, or a subculture, and people of different races and cultures might have different comfort levels with confrontation, directness, and so forth. Or not. It's an empirical question. No hay peor lucha que la que no se hace ("there is no worse fight than the one never fought").

In any case, discriminating against me is discriminating against a person who brings a different racial-ethnic cultural background, one that the field severely lacks and has committed to including. Discriminating against me is discriminating against a person who brings a rural background, another underrepresented perspective in the field. It's discriminating against the son of immigrants, another dimension of inclusion marked by the field. It's discriminating against a person for whom English was a second language. It's discriminating against someone who brings a different intellectual framework, who won't have the same assumptions – both explicit and implicit – that prevail in our field.

Every single one of these dimensions of inclusion and diversity has functional, scientific, empirical value to the field. Those underrepresented dimensions will lead to new hypotheses, new discoveries, and more reliable identification of invalid and biased work. I trust that the six substantive examples I gave, and the various other reflections, were clearly articulated and satisfied the standards of intellectual rigor and insight that would be expected of a member of the academy. I trust that we agree that the practices I pointed out were worth pointing out. I trust that it's clear that a decision to discriminate against me is a decision to have a less diverse and less vibrant social science, and that such discrimination would vividly contradict the field's explicit and resounding commitments to diversity and inclusion. If it's going to happen again, if any scholar is inclined to ignore the merit in my contributions out of resentment, tribalism, or partisan political antipathy, be sure to clean your mirrors.

We were born in an arbitrary place and time – this one. There were never any guarantees. The state of social science is free to vary along those dimensions of place and time. There is no reason to assume that an early 21st-century social science dominated by a particular political ideology and the residue of postmodernism is the Platonic height of behavioral science. There's work to be done, and I'd very much appreciate the opportunity to do some of it.


The original post from July, 2014

I was denied admission to a graduate program because of my political views

Actually, it may have been more my perceived views than my actual views.

Now that the BBS paper is posted, I'll tell this story briefly (we cut it from the paper.) Some social psychologists have been skeptical of the idea that people in the field suffer discrimination due to their dissent from left-wing politics, even after many social psychologists explicitly said they would discriminate (against conservatives in particular.) Some skeptics have demanded systematic evidence of actual discrimination, which is a bit cheeky of them, since we know that such evidence is almost theoretically impossible to collect (especially in an academic field, given its career structure and stark power imbalances.) But I can offer one account.

I applied to several PhD programs in Social Psychology, and was accepted by Berkeley, Arizona State, and UNC - Chapel Hill.

At another program, the faculty had apparently seen my blog (an old blog that I canned later that year). Among posts about my recent marathon experience, I had posted about the mass resignation of all fourteen Jewish members of the board of advisors of the Carter Center, former President Jimmy Carter's nonprofit. They resigned because Carter's new book seemed to suggest that Palestinian terrorist bombings of Israeli civilian targets were justified until a Palestinian state was established, or a particular type of peace accord was accepted by Israel.

In my post, I supported the board members and criticized Carter's apparent tolerance of terrorism¹. On a phone interview, a faculty member from the social psychology program directly asked me about this blog post (and no others). She also asked if I "really" felt that way about Jimmy Carter. She also openly stated that all of the faculty in the program had a problem with my post, except for her (it would've been 4 - 6 other professors), and that they all opposed my entry into the program. From her questions, I got the impression that my politics needed to be clarified and vetted before final decisions were made. They subsequently denied me admission, with no further interaction or visits. (If it matters, this program was somewhat less selective and prestigious than the programs that accepted me.)

That was an extremely awkward phone call. I was blindsided, was not at all prepared to talk about politics or my precise feelings toward Jimmy Carter. It's the kind of thing that could not happen in a normal professional environment, and would give HR people nightmares if it did. Nothing like this ever happened before my entry into academia. That she was willing to openly discuss the fact that the faculty opposed me because of my apparent political views, and was willing to actively probe my political views, speaks volumes. The academic climate with respect to political/intellectual diversity is much like the Mad Men universe with respect to women – blind, clueless.

During the call, I got the impression that they thought / were worried that I was a conservative. The horror. I'm a secular libertarian, but many academic ideologues don't make such distinctions. They're not very aware of the intellectual landscape², know little of the enormous volume of space in that landscape outside of the modern leftist framework, and they collapse it into binary us/them boxes. You're either with them, or you're with Sarah Palin / Glenn Beck. (FYI, I know very little about Glenn Beck and his intellectual crimes – I just know they hate him.) Note that it was siding with a bunch of liberals who served on the board of the Carter Center that got me in trouble, along with direct criticism of Carter. Remarkably narrow straits...

1. I oppose murder, and mass murder, and I think one of the absolute tragedies of our era is how casually we justify mass killings, or ignore them, when they're distant from our daily lives – and how mass killings are so easily legitimized by politics. Killing a person is an enormous thing – there's nothing more enormous, and nothing more irreversible. The amount of suffering in the world staggers me, and can paralyze me if I let it – especially all the killing. If we target an alleged terrorist in some hardscrabble village in Yemen with a drone strike, and we also kill his kids, or his neighbor, or the mailman... that's an absolute catastrophe, a rupture in the universe. TV stations should cut to the news of the catastrophe – it should be as big as a space shuttle blowing up. We should know their names, and we should be in their debt, their families' debt. But there's so much of it that we don't know their names, we don't pay our debt, and I feel terrible going back to coding data in SPSS. Steven Pinker is right that violence is down worldwide, but 1 is a towering number – if you're just a kid, your whole life ahead of you, you should never be blown up while eating lunch with your family, so that we can get Israel to the bargaining table. So yeah, I was very disturbed by Carter's statements.

2. It's also worth noting that one's intellectual or philosophical framework need not give rise to a political identity. I'm not comfortable with politics being used as the primary sorting variable in the intellectual sphere. People don't have to care about politics, and separately, they don't need to have an easily labeled political identity that neatly fits the political terrain of our era. Many scholars and scientists do not fit into such schemas. (Politics as a sphere is central to the modern left in part because their ideology includes explicit and specific claims about the force of politics and power structures – life is largely about politics in that frame. Most of the forces in people's lives are attributed to politics, privilege, discrimination, and so forth, and there isn't a whole lot going on in human affairs that isn't political, at least not that they talk about.)



