Our new BBS paper on intellectual-political diversity in research psychology is the culmination of a project that started, for me, with this essay. I posted it on the SPSP Discussion listserv on March 10, 2011. It takes on the issue of political bias in social psychology research in a way I've wanted to for years. Although my research focuses on emotions, I'm very interested in broader methodological issues and philosophy of science, and will make substantial contributions to those areas over the course of my career. I've been heartened by the overwhelmingly positive response to this post from researchers all over the world (not what I expected) – 30+ positive, even glowing responses (it's time to come out of the dark, guys), and not a single rebuke. Some have asked for permission to use it in graduate methods courses, or to distribute it beyond its original audience. Feel free.
I have mixed feelings about calling out some of these researchers, and I've had deeply mixed feelings, in retrospect, about calling out a graduate student. I think the integrity of social science is important enough to defend even if doing so hurts people's feelings, and that academia has become a place where feelings matter a bit too much in relation to facts, science, asking questions, etc., but I hope all my future rebukes are directed at aged professors. What motivated me: Some of these research practices ultimately strip the research of its scientific standing, and that's a major problem – non-science in a scientific field. Not everything is science. Not even if scientists do it. Calling something science doesn't do anything to make it science. Publishing something in a scientific journal doesn't make it science. Lots of math and statistics don't make something science. Science is a particular kind of thing, a particular kind of method, and it's important that science survive the ideological tribalism of today's academic monoculture.
Some will answer that they just "follow the data", but they don't understand that following the data buys you nothing if the data are invalid. If you measure sunshine and call it oppression, nothing you do from that point matters. It doesn't matter if you use robust regression. It doesn't matter if you use bootstrapping for your mediation model instead of Sobel. It doesn't matter how reliable your measure of sunshine/oppression is. None of that matters. Your study was over a long time ago – when you measured sunshine and called it oppression, when you decided that "oppression" was an observable, descriptive variable (like below, where they asked people whether hard work pays off and called it "rationalization of inequality"). HINT: When you pull an ideological tenet out of your Marxist theory text, you'll never be able to measure it – there's no way to measure whether people who disagree with your ideology are exhibiting "false consciousness", their beliefs unconsciously shaped by "capitalist hegemony". Social science shouldn't be a vehicle for revealing the pathologies that keep people from embracing your obviously true ideology. People who use it for that purpose tend to be stunningly ignorant of the intellectual landscape – they don't know enough to do a good job of accounting for the reasons behind people's views, since they only know their own ideology.
Our time, the time we happen to be alive, the time we encounter academia, is arbitrary. There's no reason to assume our time is special, any more than there's any reason to assume our culture is special. Whatever people in academia happen to be saying during this time, our time... we can't just assume that it's smart or true. There's no reason to assume anyone will take structuralism or postmodernism seriously a hundred years from now, or a thousand years from now. The prevailing schools of thought in the humanities during our time don't have to be good – it's entirely possible that they're terrible. This is quite normal. If we randomly chose a decade, even a century, between 500 BC and today, it would likely be a time when the humanities weren't very good, when philosophy wasn't very good. It's rare that scholars are good or enduring, for reasons I don't fully understand. All the more reason not to mindlessly push whatever ideology you were taught in undergrad – it's very telling that all these biased social scientists doing invalid work are marching to the same ideology, and it's the ideology they were taught in undergrad, the ideology popular in American academia right now. There's absolutely no good reason why that ideology should weigh more than any other perspective supplied by the last 2500 years, and ideology should never be embedded in scientific research.
Here's the essay:
--------------------------
I've followed with interest the controversy stemming from Jon Haidt's address at SPSP. One issue that has not been discussed is how the political biases of the field have severely undermined some of the research. I propose that we have a serious problem. Most research in social psychology does not touch on politics and has no obvious political implications. However, some of the research in sub-fields like political psychology and attitudes has deviated sharply from valid scientific methods. Researchers sometimes embed ideological assumptions into their hypotheses, constructs, and measures, in ways that make their studies invalid or even meaningless. Regrettably, I can't properly make my point without evaluating the work of noted social psychologists. I'm willing to do so here, and in future settings, because a) I think this is a serious problem for the field – these biases may ultimately weaken our very standing as a science, and b) these practices have gone unchecked for years, and a frank and open consideration of them is long overdue.
My first example of the phenomenon is the Napier and Jost (2008) Psych Science article "Why Are Conservatives Happier Than Liberals?"
In this article, the authors want to show that conservatives are happier than liberals because they "rationalize inequality" (by which they mean economic or financial inequality, such as unequal incomes). This is already an unanswerable research question. Why? To rationalize is to explain away an uncomfortable reality, often by making excuses for it. It is dissonance reduction. Thus, a basic precondition for conservatives to rationalize economic inequality is that economic inequality be uncomfortable for them. However, economic inequality is particularly uncomfortable only for leftists. Conservative ideology does not feature economic inequality as an injustice or a problem to be solved. (Libertarians are also largely unconcerned about it.) Therefore it's logically impossible for conservatives to rationalize it, since they aren't particularly bothered by it. (Jost's own data confirm that conservatives are relatively unconcerned about economic inequality.) A research program centered on conservative "rationalization" of something that only liberals care deeply about has no apparent way forward.
So how did the authors conduct the research? In Study 2, they operationalized the rationalization of inequality with a one-item measure: 1 (hard work doesn’t generally bring success—it’s more a matter of luck) to 10 (in the long run, hard work usually brings a better life). High ratings on this item were cast as rationalization of inequality. In other words, the authors took endorsement of the efficacy of hard work and called it rationalization, then plugged it in as a mediator between conservatism and happiness (note that this belief about the efficacy of hard work is a constituent conservative belief – we might find that other conservative beliefs work just as well as "mediators" here). There was no attempt (in either study) to capture or measure any actual process of rationalization – they simply applied the label to conservatives for endorsing this standard conservative view on hard work. (It may be worth noting here that hard work actually does pay off, as I assume anyone who has mentored graduate students can attest – this is observationally self-evident and supported by massive amounts of data. So people are being labeled as rationalizers for simply endorsing an obviously true statement.)
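To make the design concrete, here is a minimal sketch, in Python on simulated data, of the kind of mediation analysis this setup implies (here with a percentile bootstrap of the indirect effect rather than a Sobel test). The variable names, effect sizes, and sample size are hypothetical stand-ins of my own, not Napier and Jost's actual data or analysis.

    # Sketch of a percentile-bootstrap test of an indirect effect (the "bootstrap
    # instead of Sobel" approach). All data are simulated; the variable names are
    # hypothetical stand-ins, not the actual Napier and Jost (2008) measures.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    conservatism = rng.normal(size=n)                           # X: self-reported ideology
    hard_work_item = 0.5 * conservatism + rng.normal(size=n)    # M: the 1-10 hard-work rating
    happiness = 0.3 * hard_work_item + 0.1 * conservatism + rng.normal(size=n)  # Y

    def indirect_effect(x, m, y):
        # a = slope of M regressed on X; b = slope of Y on M, controlling for X
        a = np.polyfit(x, m, 1)[0]
        design = np.column_stack([np.ones_like(x), x, m])
        b = np.linalg.lstsq(design, y, rcond=None)[0][2]
        return a * b                                             # the "mediation" estimate

    boot = []
    for _ in range(2000):                                        # resample cases with replacement
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(conservatism[idx], hard_work_item[idx], happiness[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {indirect_effect(conservatism, hard_work_item, happiness):.3f}, "
          f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")

Nothing in this pipeline knows or cares whether the column labeled M measures "rationalization of inequality" or mere agreement that hard work pays off; the validity question is settled, or not, before a single line of it runs.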
Since no process of rationalization was exposed in Studies 1 or 2, and since it makes no sense that people who don't have a serious problem with economic inequality could be accused of rationalizing it, the article's results are essentially meaningless. The data don't tell us anything related to the hypotheses. This is what I mean by a lack of validity – the data do not represent the construct, and given the nature of this construct, it's unlikely that any data could. This research is a scientific non-sequitur: From (1) Conservatives are happier than liberals, and (2) Conservatives believe that hard work pays off, we conclude (3) Conservatives are happier than liberals because they rationalize inequality. Our only way out is if we treat the following statement as an objective fact: Economic inequality is unjust. If we treat this ideological claim as a fact, as a description of reality, then we might assume that all people are motivated to rationalize such economic inequality as exists in their communities, and proceed from there**. But of course we cannot take this ideological claim as fact. It's a philosophical position held by one particular political ideology, and many people would disagree with it. Social scientists are in no position to ratify the truth or falsity of such philosophical positions.
In other work, Jost uses words like legitimize or justify, in addition to rationalize. The question might be something like "how do conservatives legitimize the status quo system?" All of these verbs are ideologically loaded, and the questions that rest on them are not answerable by social science. To ask why anyone legitimizes the status quo is to presume that the status quo is unjust and thus requires legitimization, rationalization, or justification. This is fully an ideological/philosophical assumption, and has no place, nor any real utility, in framing scientific research.
Here are some analogous research questions: Why do liberals legitimize gay marriage? Are liberals less happy than conservatives because they rationalize abortion? These are exactly the same sorts of questions, and fully as invalid as the above. They presume that gay marriage is wrong and needs to be legitimized, or that abortion is wrong and must therefore be an object of rationalization. But of course, liberals don't grant that gay marriage or abortion are wrong, so there is nothing for them to legitimize or rationalize. A research program thus framed would have nowhere to go. If a scientist presented research framed by these conservatively-biased, loaded questions, we would immediately recognize it as scientifically invalid. But framed from a leftist perspective, such loaded questions have escaped scrutiny.
The field should discard ideologically-loaded constructs like these – constructs that have no scientific meaning because they rest on ideological assumptions, rather than observable facts.
A second example of how research is framed in biased ways: If we look at the Jost lab website (http://www.psych.nyu.edu/jostlab/), we find that many of the researchers frame their research around leftist ideological assumptions. To take just one example, Irina Feygina describes her research as focused on "the effect of motivation to justify the socioeconomic system on denial of environmental problems, such as ecological destruction and global warming, and resistance to implementing imperative pro-environmental changes to the status quo."
This is alarming. Social psychologists know what the imperative "pro-environmental" changes are? How? When did we discover the correct human values and ideals, or imperative policy reforms? Environmentalism is a political ideology, and as such it rests on various philosophical assumptions and values (e.g. a conception of the natural world as sacred; a view of human activities as unnatural; resources as static and collectively-owned; and a propensity to value the preservation of status quo ecologies more highly than some increment of human prosperity). Reasonable people might embrace or reject environmentalism, in whole or in part, for any number of reasons. We cannot treat environmentalism as self-evidently correct, any more than we can treat conservatism or Kantianism as self-evidently correct.
Imagine if a researcher focused her research on "resistance to imperative pro-Christian changes to the status quo", or "resistance to imperative pro-business changes to the status quo". I assume you get my point. We should not be in the business of investigating why people "resist" the truth of our personal ideologies, and a researcher so motivated will likely struggle to maintain an appropriate scientific posture.
I offer a principle from all this: If a research question requires that one assume that a particular ideology or value system is factually true, then that research question is invalid. Stated differently, if a research question has no meaning unless we assume that a given political ideology is factually true, then that research question is invalid (and cannot be meaningfully answered).
Critical to any science is the generation of testable hypotheses. The practices I've highlighted above will consistently yield untestable hypotheses, because they rest on the assumption that liberalism is true, which will never be testable. It is, after all, a question of values and value judgments, which are not subject to empirical validation (at least not by our methods). Modesty requires that we allow for the possibility that reasonable people might embrace values that differ from our own. Notably, researchers who employ such ideologically loaded hypotheses are very likely to find what they are looking for. For example, suppose I wanted to show that conservatives are happier because they "rationalize" war. Mirroring the Napier and Jost method, all I would have to do is ask conservatives if they think some countries are a threat to the USA, and label their affirmative responses as "rationalization of war". I would plug it in as a mediator between conservatism and happiness, which would likely work out. I could then publish my findings in a journal, concluding that conservatives are happier than liberals because they rationalize war, and garner some good media coverage. Apparently, no one would stop me. But would I respect myself in the morning? No, because such an article would have no standing as a work of science, and its conclusions would be completely unsupported by the data.
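A small simulation suggests why this would "likely work out". Suppose happiness tracks some underlying disposition, and both self-reported conservatism and the "threat" item are merely noisy indicators of that same disposition, so the item plays no mediating role whatsoever. The names, effect sizes, and sample size below are hypothetical, chosen only to illustrate the logic; the pipeline is the same as in the earlier sketch.

    # Simulated illustration: a belief item that is just another facet of conservatism,
    # with no causal role for happiness, still comes out looking like a mediator.
    # All names, effect sizes, and the sample size are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    disposition = rng.normal(size=n)                        # latent trait driving everything
    conservatism = disposition + 0.5 * rng.normal(size=n)   # X: noisy indicator of the trait
    threat_item = disposition + 0.5 * rng.normal(size=n)    # M: "some countries threaten the USA"
    happiness = 0.4 * disposition + rng.normal(size=n)      # Y: depends on the trait, never on M

    def indirect_effect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                           # M regressed on X
        design = np.column_stack([np.ones_like(x), x, m])    # Y regressed on X and M
        b = np.linalg.lstsq(design, y, rcond=None)[0][2]
        return a * b

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(conservatism[idx], threat_item[idx], happiness[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"'rationalization of war' indirect effect: 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
    # With samples of this size the interval typically excludes zero, even though the
    # threat item mediates nothing; it is simply a second measurement of conservatism
    # soaking up trait variance that the X measure missed.

Any belief that is itself a facet of conservatism will behave this way; the apparent "mediation" reflects shared measurement of the predictor, not a psychological process connecting ideology to happiness.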
Among the sciences, social science operates with the most flexibility in constructs, methods, and measurement. This makes us especially vulnerable to bias (see John Ioannidis' work for more on this). I think we should be vigilant, ambitious, and idealistic about keeping our science clean. I assume nothing but the best of intentions on the part of the researchers I've critiqued, and I don't at all enjoy publicly critiquing them. Nevertheless, I submit that the issues I've raised here are not minor – these are serious violations of the valid practice of social science. Our credibility and even our very standing as a science are at issue, and will be questioned by politicians, taxpayers, and scientists in other fields if these practices continue. Admittedly, this sort of validity issue has not been well-elaborated in our training or the literature. Yet I'm confident most researchers will agree that what I've offered here is a straightforward extension of construct validity and the features of testable hypotheses. The biases at issue represent a (correctable) blind spot in our field, and an unsurprising one, given the large overrepresentation of liberals.
Finally, if any journal editors would invite an article that more fully and systematically examines the issue of how political bias often undermines the validity of research designs, constructs, measures, and reported findings, I'm quite prepared to write such a treatment. In fact, I've already begun. I also invite a response from Jost, Napier, and other researchers who have used similar methods, either to my e-mail here in this discussion, or in a journal setting.
------------------------------------------------
** That point of mine is actually bogus – I retract it. We would never be able to just assume rationalization in a scenario, a weird universe, where "economic inequality is unjust" is a descriptive statement about the world. In a universe where the injustice of income inequality is a descriptive fact, like the fact that the earth orbits the sun, we still wouldn't assume anyone was rationalizing such inequality. For one thing, there would be no reason to assume that it would be known to people, any more than people would know how far away the moon is just by looking at it. For another, we couldn't assume that they were "rationalizing" it even if they could see/know injustice like they see a Honda Civic. Rationalization is a cognitive process in a person's mind. You have to actually measure it to say it's there, or have very solid inferential evidence. This is science.
You could probably induce something like "rationalization" in conservatives by, for example, showing them a bunch of pictures of homeless people and doing other things to put them on the defensive about a market economy or income inequality. But that wouldn't be informative. You could do that to anyone, in any camp – it would be just like showing liberals and libertarians a bunch of pictures of aborted fetuses (the kind of thing anti-abortion activists will display in campus protests). If you got liberals and libertarians to be very defensive about abortion for a few minutes by doing that, I'm not sure it would mean much. It's the scientific version of gotcha journalism, unless you could isolate an interesting and distinctive cognitive process, something that advances a valid scientific hypothesis. (I didn't even mention the part where you have to measure rationalization in these example scenarios – the graphic inductions are only a tenth of the battle. Figuring out how to tap into any subsequent rationalization processes would require much more work.)