I think I've been negligent in not making the actual climate science consensus known to people. While working on one of my projects, it occurred to me that no one has made this information readily available, at least not to my knowledge. I'll save the details for the journal article, but I think a quick snapshot of the consensus might be helpful.
First, there is no 97%. There is not a single survey of climate scientists that reports 97% agreement with the proposition that most of the observed warming was caused by human activity. There are three recent, high-quality studies that surveyed climate scientists on this question or its semantic equivalent. The scholars who performed this work are qualified, competent survey researchers, and in one case the study was published in an esteemed international journal of survey research. Here are the results:
The researchers, respectively:

Farnsworth, S. J., & Lichter, S. R. (2012). The structure of scientific opinion on climate change. International Journal of Public Opinion Research, 24(1), 93–103.

Bray, D., & von Storch, H. (2014). A survey of the perceptions of climate scientists 2013. Helmholtz-Zentrum Geesthacht, Geesthacht.

Stenhouse, N., et al. (2014). Meteorologists' views about global warming: A survey of American Meteorological Society professional members. Bulletin of the American Meteorological Society, 95(7), 1029–1040.

NOTE: I am not aware of any other studies published in the last five years that satisfy the requirements here – a direct survey of climate scientists on the key attribution question. Other studies are either not direct surveys, do not restrict their samples to climate scientists, or do not ask whether humans are responsible for most of the warming. (There are earlier versions of the Bray and von Storch and Stenhouse et al. studies – I report the most recent surveys.) For example, Doran and Zimmerman, Anderegg et al., and Verheggen et al. do not satisfy the criteria. For more on the Verheggen study, see my paper in Environmental Science & Technology.

* It may be more accurate to say 78–81%, since Farnsworth and Lichter ask a more ambiguous question – whether "human-induced greenhouse warming is now occurring", not explicitly whether most of the warming is human-caused. The explicit surveys give us 78–81%.

The 97% meme is a scam that will likely be a formal topic of study for future historians and other scholars. It arose chiefly from a fraudulent and invalid study – Cook et al. (2013). This was not a survey of climate scientists, but rather a study in which a team of activists read academic paper abstracts and decided what they meant.
Setting aside for a moment the fraud that was later revealed, the study was based on a faulty search of the broad academic literature using casual English terms like "global warming", which missed many climate science papers but swept in many non-climate-science papers that merely mentioned climate change – social science papers, surveys of the general public, surveys of cooking stove use, the economics of a carbon tax, and scientific papers from non-climate fields that studied impacts and mitigation. The team seemed to have no idea how to search the scientific literature, and seemed unfamiliar with meta-analysis techniques.** The team of activists wanted to deliver a high consensus figure to advance their political cause – an impossible conflict of interest. The paper includes repeated lies about their methods, and there are no valid findings from the study. No estimate of a consensus can be computed from their data by any method known to science.

The journal editor who published and promoted the paper is Obama advisor and campaign donor Daniel Kammen, which created a massive conflict of interest when the fraud was disclosed to him and the journal – Obama had already famously cited and promoted the false finding, as it served his policy priorities. Kammen and the journal have done nothing to manage that conflict of interest, and have yet to retract the paper.

Both scientists and the media have almost exclusively cited the junk studies conducted by unqualified political actors like Cook and Oreskes, rather than those by qualified researchers. The junk studies generated the higher estimates, which is probably why they were cited more. The valid scientific studies, performed by trained researchers, have largely been ignored. This hints at a larger problem, may be an example of something like Gresham's Law, and will be more thoroughly explored in the peer-reviewed literature.

Tips for being a good science consumer and science writer
When you see an estimate of the climate science consensus:
Writers and outlets like Chris Mooney, Scientific American, DeSmogBlog, ClimateWire, and the misnamed ScienceBlogs site are not alert to fraud and junk science if it promotes their political agenda. Channeling von Clausewitz, for those people science is just politics by other means. They'll cite and promote this stuff, and they won't cite actual scientific research – they've not reported the professional surveys above. None of them has yet retracted or corrected their promotion of the Cook fraud. When media and science writers start reporting the Cook fraud, outlets like Mooney and SciAm will probably be the very last to acknowledge it, if ever. Our civilization is not in good shape in terms of how we manage the effects of politics on science, but we'll get better.

** In an earlier version of this post, I said that the Cook researchers were not scientists (I meant to say climate scientists.) Dana Nuccitelli, second author of the Cook paper, objected to that claim. With this study, there are two groups of people who might be termed the researchers – the raters who conducted the study, and the authors of the paper, groups that only partially overlap. The raters were not generally scientists, including in their ranks a luggage entrepreneur and a blogger for whom English is a second language (Jokimaki, who displayed notable scientific promise on the fraud-revealing rater forum.) There are some scientists among the authors, e.g. Sarah Green, but not climate scientists. I've removed that clause. Only climate scientists would be qualified to interpret climate science abstracts, and even then they wouldn't understand some of them, and would not be blind to the work of colleagues and rivals. This vague, subjective rating method is not promising.

Addendum: After I first posted, Georgia Tech climate scientist Judith Curry argued that the relevant figure for the Stenhouse et al. study is much less than 78%.
For the self-identified professional field category "Meteorology and Atmospheric Science", the consensus is 61%. The 78% figure I cited is from the professional category "Climate Science." I automatically chose the highest estimate to be conservative in my report. To laypeople, or even scientific outsiders, the difference between atmospheric science and climate science is unclear, but it's quite common for similar-sounding terminology to carry major distinctions among researchers. Curry's intuition is that the "climate science" people likely work on climate impacts, and that she would have chosen "atmospheric science" to classify herself had she participated, even though she is a climate scientist in the common use of the term. She further reports that the atmospheric scientists are the experts on attribution, and that their agreement therefore carries more weight than that of the self-identified climate scientists in this study.

This raises an important issue – who are the real experts, and how do we identify them? You might think that anyone who is a climate/atmospheric scientist is an expert on the human contribution to global warming, but I suspect that most climate and atmospheric scientists would disagree. It's 2015, and science is very specialized. I'm not sure there are more than 200 experts on climate change attribution, or specifically atmospheric warming attribution. There might not even be 100.

For now, I'm leaving the 78% figure, for at least one reason. In their discussion, Stenhouse et al. report that they asked respondents about warming over the last 150 years. Six respondents e-mailed them to say that their answers would have been different had the question asked only about the last 50 years. They're not explicit, but I take the implication to be that those answers would have shifted toward more attribution to, or greater confidence in, human forcing.
I don't know how many other respondents' answers were shaped by the wording of that question, so for now I go with the highest estimate, 78% from the "climate science" category rather than the 61% from "meteorology and atmospheric science."
Over at EverydayFeminism, Andrew Hernann wrote an article titled "Why the Idea That Islam Promotes Intolerance of the LGBTQIA+ Community Is a Lie".
At no point in the article does he present any evidence, even though this is an easy issue to resolve with an evidence-based approach. In the world, there is a resource called data. In particular, there is this thing we often do called a poll or survey. In Spanish we call it an encuesta or un estudio. Pew is very good at encuestas. They report that it is quite difficult to find a Muslim country where even 10% of people think that homosexuality is morally acceptable. Gallup is also exquisitely good at encuestas. In a survey of British Muslims, they were unable to find a single person who thought that homosexual acts were morally acceptable. Not one man. Not one woman.

The news is better in the US. I've seen 40-ish percent support for gay marriage among American Muslims – lower than most other religious groups, but not nearly as stark as in most Muslim communities around the world. I don't know what the numbers are for the moral acceptability of homosexuality, which is the more important question. America has far fewer Muslims than Europe, and they're often not included in polls as a category the way Catholics would be. American Muslims tend to be less radical in general than European Muslims, and much less so than Muslims in Muslim societies.

At Salon, Chris Stedman approvingly quotes John Corvino: “There’s no doubt that there’s a great deal of religion-based bigotry against LGBT people, although it’s hardly limited to Islam. The Hebrew Scriptures also prescribe the death penalty for some homosexual conduct, but you don’t typically see people using this to inflame anti-Semitic or anti-Christian sentiment,” said John Corvino. “To single out Muslims in this way is both unhelpful and unfair.”

The issue isn't what the ancient books say – it's whether people embrace those edicts today. Jews are not executing gays, no matter what their book says. This is why no one is mad at Jews for executing gays. These are very simple observations. Muslims are executing gays.
This whole issue is about behavior and attitudes, not source material. It's about what is happening to human beings right now. Ideology is making people pluck their own eyes out. It's crazier to say that Islam is not intolerant of gays than it is to say that evolution isn't true or that the earth isn't warming, because the evidence is much simpler and more accessible than the workings of natural selection, punctuated equilibrium, and pooled climate anomaly data. All you have to do is look at a poll.

The EverydayFeminism writer thought this was an argument:

Narrowly citing the Qur’an (Islam’s holy text) and various Hadiths (teachings and accounts of the Prophet Muhammad), some Muslims argue that Islam is cissexist, requires patriarchy, and forbids homosexuality. However, many other Muslims maintain that Islam demands compassion, acceptance, and love. Arguing that an omniscient God created humanity — including the vast diversity within it — they insist that we should not discriminate against one another. As such, Islam does not promote intolerance. People say that Islam promotes intolerance. Unfortunately, this happens in other religions, too. Some Christians, for example, have used the Bible to advance cissexism, patriarchy, and homophobia.

(Emphasis in the original.)

He thinks this issue is resolved by saying "many Muslims" and "some Christians." He thinks he can wave away Islamic intolerance by simply saying that people say it's intolerant, ergo, finis. All of this will be resolved by simply finding out what Muslims think about homosexuality. That is the answer, because that is the question. Reality must be the arbiter. If we're going to ignore reality to the extent we see above, all bets are off.

Why am I upset? Mostly because people are dead. Iran's government murdered over 400 homosexuals last year alone (that figure is for the first half of 2014, so the full-year total might have reached 800 or more.)
These are not small numbers, and Iran is only one of the eight Muslim countries where homosexuality is punishable by death. Muslim countries are the only countries where being an atheist is punishable by death – thirteen countries in all. The world is a serious place. People get killed because of ideologies and religions. It's also the kind of place where moral rebuke can be very powerful, the kind of place where, if more people added their voices of rebuke to Muslim nations' persecution and murder of homosexuals, there might be more homosexuals alive today as a result. The world is definitely the kind of place where that kind of impact can arise from a moral roar. It's happened before. Outrage matters. Rebuke matters. Moral judgment matters. Sacking up matters.

No thinker should ever be afraid to criticize a religion, ideology, or philosophy, no matter how its subscribers are "racialized." No one should be sanctified simply because they are brown. I love people. I wish happiness and achievement and struggle and love for them. This is only possible if they are alive. I enjoy reading about the fallen (lately Yonatan Netanyahu), but I generally prefer that people not be killed. People are so, so precious, including gay people.

Addendum: Any discussion of group differences, certainly any criticism of Islam, leaves me with a nagging feeling that I need to articulate a broader framework. The account below is my first attempt.

In the world we inhabit, whenever we matter-of-factly discuss group differences where one group comes out unfavorably compared to other groups, many people in that group will be offended. Being officially "offended" by data is a popular mode of response, a way of being, in American academia and among those who have been educated in American universities (it's somewhat less common in other countries, even Western ones.)
People operating from this ideological framework tend to use the adjective offensive as though it's an objective property of an idea – they might say "Joe's claim that Muslims tend to be anti-gay is offensive" the same way they'd say "This bike is blue" or "This window is cracked." They treat it as an undisputed objective property rather than a subjective appraisal highly dependent on the offendee's ideology or philosophy.

That aside, I think it's reasonable for members of a group to be, at the very least, uneasy when they read or hear unfavorable comparisons of their group. I think it's especially reasonable for people to be uneasy if theirs is a minority group in the applicable context, as Muslims are in the US and in the West generally. My gut intuition is that when people rail against Muslims on a regular basis, there's a decent chance that they're racists – I'm not sure that they're likely to be racists, but I'd bet that it's a greater-than-baseline probability. A 30% chance that someone is a racist is more than enough for a minority to feel unsafe.

I know what it's like to feel unsafe. I've been jumped by racists, even as an adult (and in Telluride, Colorado of all places.) I know how racism – or even the prospect of it – can make us sweep the room. I think it's common for racial minorities in America to want to know if they can count on someone when it matters, if they can count on a person to stand against racism when it surfaces. When someone criticizes a group, it's natural to be uncertain as to whether their criticism is bounded and intellectually sincere, or whether it's an outlet for racism, a dog whistle.

If you're Muslim, and you're not sure where I'm coming from given my heavy criticism of Muslim anti-gay attitudes, let me open my hands and tell you where I stand and what you can expect from me. Call it a social contract, a Joe Code: If you're Muslim, it's all good in the hood. I've got your back.
If I meet someone and he introduces himself by saying "Hi, I'm Mohammed, and I'm Muslim", I don't think less of him as a person. We could totally roll. We could be close friends. A Muslim identity is not a problem for me. I won't worry that you're a terrorist, I won't assume anything about your views, and I'd love to have you as a student.

1. I will defend you against racism: verbally, physically, and politically.

If I witness someone issue a racist slur toward a Muslim, I will be much angrier than I was writing this post. I will intervene immediately and castigate the racist, to put it mildly. I will physically confront them – I'll gladly risk getting beat up in most contexts. If I witness a racist physical assault on a Muslim (e.g. a racist or anti-Muslim slur accompanied by an assault), in almost every context that I can imagine, I'll stop it. I'll tear them apart. Admittedly, it's unlikely that I'll ever witness a racist assault on a Muslim in the course of a normal life. But if I do, the Muslim will instantly be two deep. (Re: the possibility that my predictions of my actions in dangerous situations are delusional or vainglorious, I'm pretty sure all of my close friends would testify that it is extremely likely that I would indeed act as described here. This could of course be another layer of delusion. For now, you'll have to take my word for it – I've got your back.)

2. I will defend your rights, and the rights of Muslims in the Middle East.

I will always back your right to worship, and your freedom of speech. It does not and will not bother me when you speak Arabic or Farsi in line at the supermarket. I like other cultures, and I don't feel threatened by you. I'm against American drone strikes as currently implemented, and I think Obama should have had to go on national television to explain why he killed a 16-year-old American boy named Abdulrahman Anwar al-Awlaki in a drone strike in Yemen.

3. I hold adults accountable for their beliefs.
If you're at least, oh, 21 years old, I will hold you accountable for what you believe and what you advocate. I hold pretty much all adults accountable for what they believe and advocate. I'm an atheist, but not a New Atheist, so I don't perseverate on the fact that religions tend to be full of literal falsehoods. Becoming a social psychologist has made me more tolerant of religion, more focused on the needs it satisfies. Of course, if you're a fundamentalist Muslim (or Christian), I know from experience that my detached recognition of the needs religion satisfies will likely be unsatisfying for you – you'll need me to believe, and I don't believe.

If you believe that homosexuality is immoral, or that women should obey their husbands, we might have a problem. The problem will arise – for me – because of the beliefs, not because you're Muslim. Now, it might be accurate in many cases to say that you hold those beliefs because you're Muslim, and most Muslims do indeed appear to hold those beliefs. However, since "most" does not mean all, and in fact means less than all, I don't worry about your beliefs until I actually know them.

Another way to put this: when I meet Muslims, it's not critical that I implement a fast heuristic based on the probability that they think homosexuality is immoral or women inferior. When I meet people, I'm not recruiting for a special Avengers team of gay rights activists. I can take my time getting to know them, and that's typically what I do. (This touches on why, while many race-based group stereotypes are accurate, they're rarely useful in real-world contexts. We usually have much higher-quality, higher-resolution information. For example, instead of leaning on a group's mean SAT math score, you can just look at a job applicant's SAT math score, or their math coursework...) I don't grant immunity to people because of their religion or race. I treat adults like grown-ups.
The fact that someone's sexist attitudes are grounded in a religion doesn't do anything to redeem those attitudes for me. In fact, if your justification for your views is that they are embedded in a collection of war stories written by desert barbarians over a thousand years ago, I think your justification is extraordinarily weak, and expecting an automatically elevated level of respect for such foundations is so bold as to be rude.

I understand what it's like to be a minority in America, what it's like to be brown 24-7. I know there is racism. I know that many people will be predisposed to judge you harshly because you are brown, and because you are Muslim. None of this changes how I'll evaluate you if you hold anti-gay or sexist attitudes. As an adult, I expect you to be able to handle racism and hold defensible views at the same time. I don't think it's especially burdensome to stop and think about how someone might be born gay, how two gay men might find love in each other, or to meditate on the moral beauty and force of love. If you think gays are disgusting, your brown will not turn my frown upside-down.

Ultimately, I think adults are responsible for the religions they subscribe to, if any. I think it's reasonable to expect grown-ups to disavow their religions if they're sufficiently destructive or unethical. I'm not saying that Islam is in this category. That's up to you. Some practice Islam while distancing themselves from the canonical anti-gay and sexist doctrine. If you're an adult, I expect you to work something out – to distance yourself from such doctrines, or to renounce Islam. (I was an altar boy as a kid, and was a devout Christian who prayed every night right up until I read Broca's Brain by Carl Sagan at age 19 – at a certain point in the book, I instantly became an atheist. It was epiphanic.) Or you could have a very good argument, philosophically, for viewing homosexuality as immoral or women as inferior.
I've not yet encountered a quality argument, so I'm skeptical (quoting a dusty book is not an argument to me.) I can tell you in advance that I'm not going to respect your anti-gay and sexist views.

4. Group differences are legitimate topics of discussion for me.

You can expect me to freely discuss group differences. As a social scientist, I have little patience for ideologically-driven data denial. Reality is what it is, and scientists are supposed to traffic in it. Sometimes I might discuss Muslim differences, as I did in this post, especially stark differences like the ones we saw here. Feel free to discuss Mexican-American differences around me all you want. I'm not offended by reality, and I don't tend to see dark motives in people noticing the Mexican-American dropout rate or whatever. I'm very principled and idealistic about freedom of thought and conscience. I think it's critically important that we be free to encounter and document reality, including group differences – even those that are unflattering for my own group(s).

In closing, I've got your back when it comes to racism – racism as consensually defined. I'll take punches for you. I'll go out of my way to vote to protect your rights. If you redefine racism to include frank disagreement with the tenets of a religion popular with non-white peoples, well, then I'm probably a racist in your eyes. That framework saddens me.

I'm giving a talk at the International Society of Political Psychology conference on July 4 in San Diego. From the program:
Sa4.8 Two Views of Political Bias in Social Science Research
Room: Gaslamp 3
Section: New Theoretical and Methodological Developments

Liberal bias or status bias? Studying psychologists as a social group. *Michal Bilewicz, University of Warsaw

How ideological assumptions are embedded in research in ways that undermine validity. *José L Duarte, Arizona State University

Session Organizer: José L Duarte, Arizona State University

I was fascinated by my colleague and mentor Jon Haidt's analysis of the rate of certainty words in Sam Harris' books, and the reaction it sparked.
I recently came upon Daniel Miessler's post, where he defends Harris thusly:

Haidt makes a major mistake here by thinking (and saying) that because individuals are bad at judging their own objectivity it must mean that using science to gauge happiness and suffering is a fools errand. The error should be obvious: Science isn’t based on individuals making judgements. It’s based on evidence that has been validated objectively by many. Harris isn’t proposing that he, or any other individual, sits down and divines a solution to complex world problems using his logic Ouija board. He’s saying that science should be used to show how policy changes affect human happiness and suffering. Big difference.

This gets at some things that have been coming up a lot lately. I see very similar arguments from social scientists who deny political bias in the field. Ultimately, I think we need to do a better job of explaining what we mean by bias, what bias looks like, the different kinds of bias that can arise, and so on. But let's focus on the Miessler passage.

He thinks bias is inevitably avoided or expunged by scientists "objectively" validating evidence. This could only be true if we knew everything there was to know about bias and how to catch it. Another way of putting this: it could only be true if we knew all the forms of bias we were vulnerable to, and how to catch them. We do not have any such knowledge. Our knowledge and understanding of bias is a domain of discovery – scientific discovery. We haven't sorted it all out yet. In fact, I think it would be more accurate to say that we're just getting started. I'd venture that there are biases in science, whether social science or other sciences, that we won't know about for fifty years. We know more about bias today – and about different forms of bias – than we knew in 1960. In 2060, we'll know even more. So it's not possible for scientists to confidently say "we're not biased."
Some scientists and some fields will be more justified in worrying less about bias, but social science is the most vulnerable. Social science is mostly made of words. We can string together some words, have people respond to those words, give the variable a label, correlate it with responses to other words we've strung together, give that a label, and make sweeping declarations. We can give awful-sounding labels to our variables, loaded with ideological charge, like "Social Dominance Orientation", and we can say that huge swaths of society are high in this "Social Dominance Orientation", even if they aren't. We do that all the time. We routinely link groups to beliefs they do not in fact hold (most conservatives do not endorse SDO, and researchers routinely conceal this.) This is a huge scientific and ethical problem. That kind of bias leads us to say things that are false, so bias can be very costly. This kind of bias hasn't been rooted out yet. We can say it will be, sticking to a "science is self-correcting" mantra, but for that mantra to be valid, science needs to be self-correcting at a reasonable rate of speed, and it isn't.

Note that it is social science that will provide our knowledge of well-being and happiness – the kind of knowledge Harris anticipates. We can add neuroscience and biomedical research to the mix, but that won't change much. There could be all sorts of biases in our measures of well-being and happiness. We could be stuck with a profound error, an error rooted in bias, and not know it for decades. This isn't necessarily a dealbreaker for positive psychology and the study of well-being. I'm sympathetic to Harris' project, and I've always admired his work. I think he has criticized positive psychology for perhaps similar reasons. I'm nervous about his confidence in brain scans. I wouldn't talk about brain scans as a definitive measure of well-being without qualifying it as a distant-future possibility.
I would expect the first generation or two of neuroimaging to lead us into all sorts of errors of method and inference.

Another important point: bias will not be caught if everyone has the same biases, or even if a sufficiently large majority of a field has the same biases. Those kinds of biases are only caught by history. Bias will definitely not be caught if, in response to claims of bias, scientists simply say "science is self-correcting." That's a good recipe for non-self-correction, for reinforcing bias. Although in such cases I think perhaps we need to more clearly communicate what bias is, or what kinds of biases we're talking about.

In social science, many leftists acknowledge the political bias of the field, but a minority simply respond by declaring that the field is not biased. I have yet to see anyone engage the examples of biased research my colleagues and I have offered, or that I have offered separately. There's nothing but silence in response to the substantive examples. I'm not sure what's going on there, but I think in some cases they have no schema at all for social science being politically biased. They don't know what that would look like, have no account of that kind of bias as a category of bias, and they also tend not to see leftist ideology as an ideology – only the other side is ideological. In these cases they'll try to argue that reality simply has a leftist bias. It hasn't yet occurred to them that when a field is biased, people are expected to make exactly that argument. They haven't lingered on the fact that their perception of reality being left-friendly is compatible with two realities: reality having a leftist bias, and the field having a leftist bias. Nor have they meditated on how to go about finding out which it is.
I'm always dumbfounded that any social scientist would not understand that political ideology can profoundly shape and mediate the "reality" we see, and that being in a field dominated by fellow leftists could have a profound impact on their construal of said reality. That's an elementary observation, one that is intuitive to lots of carpenters, nurses, and baristas. We of all people have to understand it.

I know a few academics who think conservatives are inherently malevolent. That type of cartoon universe is probably immovable – we'll just have to keep walking and focus on people whose minds can be engaged. It's unfortunate that academia has become so amenable to cartoon universes – it's incompatible with good scholarship. Scholars of that quality shouldn't be paid to be scholars – they're not good enough to be in academia, but we have lots of them. I think the poor quality of so much of modern scholarship outside of the physical sciences is a big problem, and one that will impose serious costs on our society for a long time.

In any case, bias comes in many forms, and we don't know about all of them. In the meantime, I think we need to do a better job of explaining what we mean by bias. We need a full account of political bias, its nature and operation. We don't have that yet.

Regarding Harris and Haidt, I think their views are closer than is assumed. I think they differ more in style than in substance. For example, I think Harris would agree with lots of things Jon says here. The difference in style comes down to the fact that Harris is a hardass about the obviousness of certain kinds of truths, and Jon is the opposite. I also think it's extremely important to always know whether people are talking about morality descriptively or prescriptively. Harris' project is ultimately a unification of both, but I've known lots of intellectuals who seem incapable of thinking of morality prescriptively.
Jon's good at distinguishing the two, but people following these debates sometimes forget what's what. Even though Sam's a hardass, I think he deserves a lot of respect for sharing his journey. He's as open and transparent as he can possibly be. He shares his struggles, his exasperation, his inner processes, like no other scholar I've seen. He takes us on his journey, his dialogues and debates (see his recent encounter with Chomsky). I feel for him sometimes. His exasperation and pain are quite evident as he grapples with bizarre arguments and unconscionable misrepresentations of his views. It will be interesting to see where he is in ten years, what ground he's covered. I often wonder about where I'll be, intellectually, in ten years, and Sam sparks the same curiosity in me.

We're looking for examples of politically-biased research in social science. The fields we're most interested in are political psychology, social psychology, developmental psychology, and economics, but any example from social science would be appreciated. Please send them to [email protected].
A canonical form of bias would be cases where the researchers embed their political ideologies into their research and papers. There are some examples in this paper, and more here. Most uses of scales like Right-Wing Authoritarianism (RWA) and Social Dominance Orientation (SDO) would likely be good examples, especially when they're linked to conservatism without disclosure of where conservatives place on the scales (as far as we know, most conservatives don't endorse the items, don't score above the midpoint on SDO or RWA.) We'd love to learn about any other caricature scales, with properties similar to RWA and SDO – that is, scales with extreme or cartoonish items that participants rarely endorse. Additionally, any scale that has the following characteristics would be worth sending along:
In the above case, assuming general population samples, it's mathematically likely that most conservatives will not endorse the relevant views. We are especially interested in those cases, where conservatives are linked to views or traits that most of them do not hold or possess. If there are any caricature scales that are written from a conservative perspective, we'd love to have them as well. We assume the examples of bias in most social sciences will be cases of left-wing bias, given the fact that leftists dominate most of these fields, but we're interested in any kind of political bias. (We wouldn't be surprised, for example, to find conservative or libertarian biases in some economics work.) The key feature is the presence of ideological tenets and value judgments in the research itself, either explicit or implicit, in labels, items, and so forth. In my comment on Piercarlo Valdesolo's excellent Scientific American guest column on our paper, I wrote up some points that deserve a separate post. I'll expand on these issues in journal publications.
Some people come into social psychology with a political agenda – their research and careers are driven by left-wing ideology. A common pattern is to investigate how non-leftists can exist, why they believe what they do, what's wrong with them, and how we can change them. This leads to pathologizing conservatives, and lately, non-environmentalists. It sometimes looks like dissonance reduction on the part of researchers. If you believe your ideology is true, but look out upon the world and see that large numbers of people don't embrace it, it can be frustrating (I've been there as an occasional pro-immigration activist). You have a list of issues you think must be urgently addressed by society, yet society is not addressing them, perhaps doesn't even see them as problems to begin with. This can create a lot of dissonance – why don't people see what we see or think as we think? One way to resolve that dissonance is to assume that there must be something wrong with those people, that there must be "causes" behind their positions other than simple disagreement, much less any wisdom on their part. So the next step is to inventory the uncharitable reasons why people don't embrace your ideology, the ideology you just know is true and noble. Jost's system justification theory is a good example of searching for an explanation for the non-universal appeal of leftist ideology. There's a heavy effort to find out how people can possibly justify "the system" or the "status quo", as contemporary leftists put it. The framing is often something like: Obviously it's an unjust capitalist system, so why aren't people revolting? How can the poor support a system that disadvantages them? These questions and framings are loaded with a number of ideological assumptions, and it's noteworthy that the field has not policed this ideological bias and allows such ideological content in its peer-reviewed papers. Assumptions include:
Beyond system justification, a lot of other research focuses on why people aren't leftist in their outlook, how they can "tolerate" or "rationalize" income inequality, why they don't care about the things leftists care about, whether they are "pro-environmental" and how to make them more "pro-environmental". Environmentalism is a rather new political ideology, and possibly a religion or a substitute for traditional religion, and it's alarming that social psychologists are promoting it and trying to convert people to it. Embracing new, abstract, and somewhat ambiguous values like "nature" and "the environment" is just assumed to be equivalent to rationality or something. Environmentalist values are contested by scholars all over the place (though not so vigorously within academia), but the field seems unaware of this, and unaware of their status as values, as ideological tenets, as opposed to descriptive beliefs about the world. Biased measures and scales are a common outcome of this kind of bias. We have a number of caricature scales, like Right-Wing Authoritarianism and Social Dominance Orientation, full of cartoonish items that conservatives do not actually endorse. But we report correlations that ride on variance between people at the floor on the cartoon items (leftists and libertarians) and people at or just below the midpoint (conservatives). That is, our correlations don't actually reflect a reality of agreement with the items (in data I've seen, only single-digit percentages of participants score above the midpoint on these scales, and this is largely unreported in papers using these measures.) We have measures of "free market views" that are loaded with proprietary leftist terminology like "social justice", a conception of justice specific to the left and which has no currency anywhere else. A core issue in all this is that we have complete hegemony over the words. 
We basically force participants to respond to questions of our choosing, made of our words, resting on our (often ideological) assumptions. Participants have no voice other than the voice we allow them. This is a huge validity problem, and a focus of some of my upcoming work. There's no equivalent conservative or libertarian bias in the field, probably because there are virtually no conservatives or libertarians in the field, and if they framed their research around similar ideological agendas it would be an easy catch for a leftist field. For example, if a conservative researcher framed his research by talking about the "staggering" number of abortions, he'd be run out of town. What's more, we often see researchers declare outright that their motivation is to advance their ideology, to spark political action, and so forth. I think it's impossible to argue that the field is not biased when researchers declare themselves to be political activists and that their research is an outlet for said activism. Again, the Jost Lab is the canonical example (but not the only one) – it would be hard to distinguish it from a left-wing political action committee or lobby, given the declared intentions and ideology of its researchers. You can always find explicit activism there. For example, Sharareh Noorbaloochi's lab page reads, in its entirety: Sharareh’s research focuses on behavioral and neural bases of moral-political attitudes and behaviors. She is currently studying the root causes of moral exceptionalism in the context of foreign policy disputes. Specifically, she investigates how political ideology, moral orientation, and system justification motivation shape moral exceptionalism and intends to use the findings to develop wise interventions aimed at alleviating this barrier to global justice. 
I take it that "moral exceptionalism" is the position that it's possible for one party in a dispute to have greater moral standing than another party, for example that the United States or South Korea might have greater moral standing than North Korea, a country ruled by a cult dictator, full of forced labor camps and people so poor that they are several inches shorter than their South Korean brethren. Whether moral exceptionalism is defensible or not is a philosophical position. This researcher has not only ruled on the issue, but has decided that it is a "barrier to global justice." Her research is geared around developing "wise" interventions to alleviate this "barrier." She also says she wishes to study the "root causes" of moral exceptionalism. What if the root cause is people believing that it's wise? What if the root cause is simply disagreement with the researcher? Why is a mundane philosophical position assumed to have "neural bases"? Why isn't she investigating the "root causes" of disagreement with moral exceptionalism? What's the root cause of that? What are the neural bases of wanting to increase taxes? What's the root cause of being anti-business? What's really behind being a Methodist? This researcher has already decided that holding a particular position that she disfavors has a certain class of "causes", including behavioral and neural bases. She has pre-emptively shrunk reality, the reality that she will allow herself to see. She will not find that the root cause is wisdom, merit, or sincere, reasonable variance on philosophical matters and values. Rather, she is extremely likely to find what she is looking for, if she's allowed the use of invalid and biased measures like Right-Wing Authoritarianism.
If people who endorse moral exceptionalism tend to register a 3 on a 1 - 7 scale of cartoon items (mild disagreement), while people who reject moral exceptionalism cluster around 1 or 2, we can expect to see "Right-Wing Authoritarianism Predicts Moral Exceptionalism". (In fact, such a finding could easily be reported even if almost no one endorsed moral exceptionalism.) (I won't even linger on the ocean of ideology likely resting underneath the researcher's use of "global justice.") Science requires us to be more sober than this. We can't go in having decided already what kinds of causes must be in force. The above example is pervasive in social psychology – the recurrent attempt to attach "causes" to various non-leftist philosophical or political positions. Notably, contemporary leftist thought seems to attach nefarious "motives" to disagreement with leftism. It's a good protective immune system for an ideology to have, to pre-emptively marginalize and de-legitimize dissent as corrupt or ignorant and thus deter one's members from closely examining alternative schools. I assume leftist ideology is not the only ideology with such an immune system regarding dissent – it might be a recurrent feature of ideologies as such. It would make sense. In any case, a valid social science needs to immunize itself from this sort of ideological embedding. Regarding the Jost Lab in particular, I think it would be difficult to characterize its activities and methods as valid social science. I'm not sure how we would craft a credible defense of the above example of political bias – what sort of argument would redeem it as not biased after all. In his Edge piece in response to Jon Haidt's SPSP talk on political bias, Jost disputed that the field is biased, saying: "This is because we, as a research community, take seriously the institutionalization of methodological safeguards against experimenter effects and other forms of bias." 
The field has no safeguards against political bias, at least none that are branded as such. No such safeguards have scoured the Jost Lab of its deep political bias, as the above example illustrates. As far as I'm aware, Jost has not instituted any efforts to strip his lab of such biases, or to otherwise reduce them. His Edge piece takes leftist ideology for granted, instead of treating it as one ideology among a broad constellation of alternative frameworks. For example, he treats "social justice" and "environmental sustainability" as descriptive categories of research, along with mental health and intergroup relations, where "the best scientific minds have found liberal ideas closer to the mark." These are not descriptive categories, but rather are leftist values. They are also highly abstract concepts that entail assumptions that not all scholars will grant – in other words, they are question-begging. It seems to be in the nature of ideology to convert ideological tenets and value judgments into descriptive facts/concepts in the mind of the ideologue. Here I don't think Jost necessarily sees his ideology as an ideology, or understands how contingent and optional concepts like "social justice" and "environmental sustainability" are. If the Jost Lab can't get their heads around the fact that leftist ideology is an ideology, and aren't able to zoom back from their ideological framework and see the border between descriptive facts and values, they'll struggle to conduct valid scientific research in political psychology. If that's the case, they should shut down the "lab", perhaps reconstitute it as a left-wing political lobby or think-tank independent of NYU. If they're not prepared to shut down, they'll probably want to consider major bias-corrective measures. One measure would be to actively recruit non-leftist researchers, since all their current and former researchers appear to be staunch leftists, even self-proclaimed activists.
A conservative or libertarian (or two) in the Jost Lab would make it much more difficult for biased research to come out of it, especially if they were included on every paper. In this case, I think such inclusion would be required. Lastly and similarly, I was also stunned to see SPSP diversity travel award recipients – most of whom were not underrepresented minorities – start their bios with statements like "I am an activist." Not a scientist. An activist. We know what kind of activist they will be, what ideology they will be trying to advance. And it was depressing to see virtually all the minorities focus their research on prejudice and stereotyping against their own minority groups. It's like as soon as a minority steps into the field, they go into their assigned corner and conduct ideologically-biased and approved mesearch on prejudice. This marginalizes the very few minorities we have, and somewhat weakens the benefits of diversity, since they're not attending to core social psychology research and the cultural biases therein. Research psychologist Piercarlo Valdesolo wrote a nice guest column on our BBS paper over at Scientific American.
Scientific American doesn't let me comment on their website, rejecting it as spam, even on a new account. This has been true for months – I'm not sure what's triggering the spam flag. Maybe length. I've pasted a greatly expanded version of my comment on the Valdesolo column below: ------------------------------------------------- Joe Duarte here. Thanks Piercarlo for the generous coverage. I agree that pitting competing biases against each other is not the right approach, and I regret the line from our paper where we say "but we can diversify the field to the point where individual viewpoint biases begin to cancel each other out." That doesn't quite capture what we mean. Mostly we mean peer review. If politically biased researchers knew that a conservative, libertarian, or even an alert liberal were likely to review their paper, I think it would change things. That said, I don't think scientists need to have political identities, and I think it might be better if fewer of us did. I regret that we even need to discuss the political ideologies of researchers in a scientific field, but the biases in social psychology are quite evident and force our hand. And in a world like ours, certainly in an environment like contemporary academia, researchers who don't have notable political identities will still make all sorts of implicit assumptions that shape and frame their research questions, and those assumptions might be traced to a political ideology. Ideological bias seems to be a special kind of bias. It runs deep and rides on fierce tribal mechanisms. We can see it in some of the comments here – people don't seem to understand the nature of political bias in social science, and how profoundly it can affect what people think is true, what people consider "facts." For example, one commenter said: "Social psychology research shows that political liberals tend to be more open-minded and less enamoured of authority..." That's a perfect example. 
When people consider whether social psychology is biased, they really ought to zoom back and think about how constructs are created and labeled, how ideological and cultural biases can be embedded in measures. In this case, the measures we use for the variables the commenter invoked are themselves deeply biased, and I think, simply invalid. The "openness to experience" scale asks people whether they consider themselves "sophisticated in the arts, music, and literature", "inventive", and whether they "like to play with ideas." On its face, this is a roundabout way of asking whether someone is an academic, perhaps a measure of urban sophistication. It's culturally biased, and many laypeople would see this right away. It's unclear how people from rural communities are supposed to show up on this scale. They'd likely be embarrassed to call themselves "sophisticated", seeing it as arrogant, and I wouldn't be surprised if that item correlates with narcissism. Rural people probably score low on "openness" because we don't give them any items they could relate to, on which a valid openness trait could be measured in their case. (And we do see that people in developing countries score lower, and some researchers have questioned the scale's validity as a result of the psychometrics there.) To the extent the measure marginalizes rural communities, this could partly or fully explain conservatives' lower scores. We'll need more research with rural samples to find out, but I would never use that scale in a sincere effort to measure openness. (The effect on conservative scores in extant research could be explained by college student participants who are from rural communities, small towns, and even small cities with few outlets for "sophistication" in the arts – I never see this background info reported.) The popular Right-Wing Authoritarianism scale is invalid. It's a caricature scale teeming with cartoonish items meant to make conservatives look bad.
One item reads "Our country desperately needs a mighty leader who will do what has to be done to destroy the radical new ways and sinfulness that are ruining us." This is not serious. Notably, conservatives don't actually endorse RWA on average, nor do they tend to endorse the Social Dominance Orientation (SDO) scale, another invalid but popular caricature scale. The reported effects, the links between conservatism and these awful-sounding views, are based on linear correlations that ride on the fact that liberals are at the floor and conservatives are at, or slightly below, the midpoint. They're not endorsing the items, which means they don't agree with them, yet these invalid correlations are reported as though conservatives are "high" on these constructs, implying positive agreement. Here we've also got an issue of how we do statistics, how we misuse linear correlation, often ignoring actual placement on scales and the substantive meaning therein. The fact that correlation, and all our GLM methods, are based on deviation from sample means, rather than substantive placement, endorsement, agreement, disagreement, etc., is a huge vulnerability for us. I explore these issues in another paper. These scales have been in use for decades. If there were more than a couple of conservatives or libertarians in the field, these scales would likely have been exposed as biased and invalid back in the 1990s. It's also possible for leftists to catch these issues. Part of the issue here is that we have an incomplete account of methodological validity. We don't learn about these issues in graduate school. The nature of a caricature scale is a new insight, and the limits of the validity of correlation tests on scale measures are not well explored. With advances in our understanding of validity, even biased researchers can catch these issues. There is also some cool research adding texture to things like openness. See Brandt et al.'s new paper.
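To make the statistical point concrete, here is a minimal sketch in Python with made-up numbers (not data from any real survey): a Pearson correlation between self-reported conservatism and a caricature-scale score can be very large even though every single participant scores below the scale midpoint – that is, even though no one endorses the items.

```python
# Toy demonstration with hypothetical numbers: correlation rides on
# deviation from the sample mean, not on substantive endorsement.

def pearson_r(xs, ys):
    """Standard Pearson correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-7 conservatism self-ratings and 1-7 "RWA-style" scores.
# Liberals cluster near the floor (~1.5); conservatives cluster near 3,
# i.e., mild DISAGREEMENT. Everyone is below the midpoint of 4.
conservatism = [1, 1, 2, 2, 2, 3, 5, 6, 6, 7, 7, 7]
rwa = [1.2, 1.5, 1.4, 1.8, 1.6, 2.0, 2.8, 3.0, 3.2, 2.9, 3.4, 3.1]

r = pearson_r(conservatism, rwa)
print(f"Pearson r = {r:.2f}")            # large positive correlation
print(all(score < 4 for score in rwa))   # True: no one endorses the items
```

With numbers like these, the correlation comes out strongly positive, and a paper could report that conservatism "predicts" the construct, even though every participant disagreed with the items. Reporting the distribution of raw scale placements alongside the correlation would expose this immediately.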
Another issue I see in the comments and in the reaction by some people in the field is a striking lack of knowledge of conservative, libertarian or other non-leftist thought. This is a predictable product of the insular effects of homogeneity. Ideology is powerful in shrinking reality, shrinking the intellectual landscape. Some people are assuming that their contemporary American academic leftist ideology is a set of descriptive truths, but they don't seem to have much justification for such beliefs or awareness of competing schools. They often take left-wing value judgments for granted, like the idea that income inequality is unjust, and never consider that people might genuinely, fundamentally, and benevolently disagree with this. Lots of people don't find any reason to care about variance in incomes as a morally relevant dimension. As sociologist Chris Martin put it in a guest post on Jon Haidt's blog: For instance, liberals often talk about inequality as a synonym for unfairness. They then describe conservatives as tolerant of inequality. However, inequality (in itself) may simply not be salient for people who aren’t liberals. It’s not that these people don’t care about fairness, but rather that they don’t think that inequality of outcomes necessarily implies unfairness. People (and groups) may differ in how hard they work, or in how valuable their contributions are in the current economy. (See John Tamny for an articulate example of a pro-income-inequality position. Note that whether someone opposes or favors income inequality isn't the key issue – people might not see any reason to care about this dimension to begin with, as Martin notes.) Now, these are philosophical issues, and as such they have no apparent relevance to an empirical field like social psychology. However, social psychologists often frame their research around ideological positions, and it can undermine the validity of our research. We give several examples in the paper. 
I was alarmed to see an SPSP symposium this year framed with this statement: "Economic inequality is at historic highs. The wealthiest 1% own 40% of the nation’s wealth. This staggering inequality raises the question..." That's remarkably biased, with the "staggering" bit and the collectivist premise behind "the nation's" wealth. It's disturbing that social psychologists are comfortable issuing ideological proclamations in their research, in their official business so to speak. I think it's fine to have a political identity, to believe that leftist ideology is correct, but ideological positions have no place in our work. Some number of people are coming into social psychology with a political agenda – their research and careers are driven by left-wing ideology. A very common pattern is to seek to find out why non-leftists believe what they do, what's wrong with them, and how we can change them. This leads to pathologizing conservatives, and lately, non-environmentalists. It sometimes looks like dissonance reduction on the part of researchers. If you believe your ideology is true, but look out upon the world and see that large numbers of people disagree with it, well, there must be something wrong with those people. So the next step is to inventory the reasons why people don't embrace your ideology, the ideology you just know is true and noble. System justification theory is the canonical example. There's a heavy effort to find out how people can possibly justify "the system" or the "status quo", as contemporary leftists put it. The framing is often something like: Obviously it's an unjust capitalist system, so why aren't people revolting? How can the poor support a system that disadvantages them? These questions are loaded with a number of ideological assumptions, and it's noteworthy that the field has not policed this ideological bias and allows such ideological content in its peer-reviewed papers.
The assumptions include: that our "system" does in fact "disadvantage" the poor; that the poor (the left nth of the bell curve) in America are victims of injustice; perhaps that poverty (again, the always present left side of the always present bell curve) is inherently unjust and/or morally relevant; that the poor would be better off or their interests better served in some other kind of system (socialism? communism? a 20% chunkier welfare state, a la Denmark?); perhaps implicitly that the rest of society would not be worse off in that system; that our current system should be seen principally in terms of its capitalist or market elements and not by the abundant socialistic and regulatory elements that have emerged since the 1930s; that any harms or suboptimal outcomes are caused by the capitalistic or market elements and not by the socialistic, tax, or regulatory elements; that some sort of materialist, collectivist utilitarianism, perhaps a Marxist variety, is how we should evaluate societies; and that social psychologists should base their research around such philosophical-ideological premises. Beyond system justification, a lot of research focuses on why people aren't leftist in their outlook, how they can "tolerate" or "rationalize" income inequality, why they don't care about the things leftists care about, whether they are "pro-environmental" and how to make them more "pro-environmental". Environmentalism is a rather new political ideology, and possibly a religion or a substitute for traditional religion, and it's alarming that social psychologists are promoting it and trying to convert people to it. Embracing new, abstract, and somewhat ambiguous values like "nature" and "the environment" is just assumed to be equivalent to rationality or something. 
Environmentalist values are contested by scholars all over the place (though not so vigorously within academia), but the field seems unaware of this, and unaware of their status as values, as ideological tenets, as opposed to descriptive beliefs about the world. There's no equivalent conservative or libertarian bias in the field, probably because there are virtually no conservatives or libertarians in the field, and if they framed their research around similar ideological agendas it would be an easy catch for a leftist field. For example, if a conservative researcher framed his research by talking about the "staggering" number of abortions, he'd be run out of town. I was also stunned to see SPSP diversity travel award recipients – most of whom were not underrepresented minorities – start their bios with statements like "I am an activist." Not a scientist. An activist. We know what kind of activist they will be, what ideology they will be trying to advance. And it was depressing to see virtually all the minorities focus their research on prejudice and stereotyping against their own minority groups. It's like as soon as a minority steps into the field, they go into their corner and do the ideologically-biased and approved mesearch on prejudice. This marginalizes the very few minorities we have, and somewhat weakens the benefits of diversity, since they're not attending to core social psychology research and the cultural biases therein. So people should be cautious in taking findings at face value. In a field where the left dominates, it would be strange to simply accept findings of conservative flaws -- it's worth digging deeper. To a large extent, social science is made of words. The semantic and lexical flexibility researchers enjoy, and how political and cultural biases can exploit this flexibility, pose large risks to the validity of research. This reality is underexposed. 
Some of what we see in social psychology is equivalent to what would happen if we gave people in the humanities some stats software. They would treat their extremely high-level, abstract, and ideologically-loaded concepts as descriptive realities and correlate them with each other. Over the next decade or two, I think we'll make substantial methodological progress and avoid a lot of these issues.
José L. Duarte – Social Psychology, Scientific Validity, and Research Methods.