I posted this comment in reply to Andrew Gelman's post on his website. Neil Gross and Gelman have submitted a Comment to our upcoming BBS paper. I'll have a lot more to say about our paper in the coming months, especially when it's officially published:
Thanks for your Comment. A few thoughts:
1. We don’t see political or intellectual diversity as a portable, acontextual value to be shopped around for application to any field. This is not about simply taking fields that are politically homogeneous and making them better through political heterogeneity.
Rather, it’s about social science. Political homogeneity is a threat to the validity of social science because modern political ideologies and philosophies carry with them all sorts of assumptions and tenets regarding human nature, free will, the force of social contexts, the force of innate differences, the validity and accuracy of stereotyping, and so forth.
More deeply and broadly, modern scholars might have any number of assumptions about what counts as ethical behavior, or what counts as rational behavior (e.g. Lord, Ross, and Lepper (1978 or so) assuming that people should change their positions on capital punishment simply because they were given purported data on its deterrent effects by someone in a lab coat, completely missing the fact that such views are heavily driven by principles of justice and deontological concerns.) This can be a problem if they tend to have the same assumptions, the same philosophy.
An apolitical example is how we’ve somehow gotten away with measuring a purported personality trait of “openness to experience” by asking people to certify that they are “sophisticated in the arts” and “like to play with ideas.” This is an obvious and profound cultural bias. The field is dominated by white urban liberals, and not surprisingly the field has defined “openness to experience” as one’s congruence with white urban liberal tastes.
A credible and valid social science is extremely unlikely to arise from such a narrow cultural firmament. It would be miraculous if white urban liberals were able to expunge themselves of all their cultural and political biases in the conduct of their research. It would require remarkable training, training which does not exist, and may not be theoretically feasible.
In the paper, we gave a few examples of politically biased research (different from the above.) I’m curious whether you agree that those are examples of bias (e.g. treating environmentalist tenets, analogies, and prophecies as “realities” and calling it denial of reality if participants disagree with them – which is to say, if they disagree with environmentalist ideology. There, researchers conflated ideological tenets and values with descriptive reality, which is a radical deterioration of social science.)
2. The evidence you’ve requested – showing that greater intellectual diversity in other social sciences pays off on the outcome variables of interest – is not possible as far as I know. There are very few social sciences (six?). They’re all dominated by urban white liberals. There isn’t enough variance to detect an effect, and any such effect would require a decades-long induction of some kind. It would require the entry of large numbers of non-leftist researchers, be they conservatives, libertarians, or adherents of some other heterodox perspective (or even lots of apolitical people, but such people would be less effective at detecting left-wing bias than libertarians, conservatives, et al.) Without the entry of large numbers of non-leftists, your test would be impossible (and also, we’ve made essentially no progress on diversity with respect to people from rural backgrounds, or blacks, Latinos, and Native Americans.) But the entry of large numbers of fresh minds is exactly what we are proposing, so your preferred evidence logically requires your adoption of our proposals, yes?
(Economics is not as dominated by leftists as the other social sciences, but I still think there are structural leftist biases even there. Their strange brand of utilitarianism, and the social welfare functions they routinely seek to maximize through massive state coercion – controlling and manipulating individuals, taking their money, outlawing their preferences, etc. – are deeply, deeply statist, almost psychopathic in how cavalier they are about using the coercive machinery of the state. Their core assumption that anyone’s “welfare” can be deduced and managed by distant strangers, by anointed economists and state agents, is quite radical, and its ambition is not matched by its empirical support or coherence. We may come to find that economists are biased toward a central government role in economic life simply because it gives economists more to do, more power, more recognition, more stature…)
3. Doubling back to point 1, we didn’t talk about the military because we’re not in the military — we’re social psychologists focused on a problem and opportunity we see in our field. Perhaps the military would benefit from more intellectual diversity. They’re certainly extremely inefficient and are sometimes plagued by terrible leadership (see the Beirut Marine barracks bombing and their rules of engagement, and Mogadishu and the quality of the Army’s officers’ decisions in that debacle.) The military seems to suffer from profoundly unintelligent decisionmaking at times. I don’t know if political diversity is the answer there, but some other kind of intellectual diversity might be. Some of their issues might be the general problems of large organizations, especially non-market organizations (I consulted at a nuclear power plant last year, and electric utilities have some of the same monopoly, non-market dynamics that breed complacency and bad decisions.)
Journalism has a well-documented leftist bias, which is to be expected given their demographic political homogeneity. I think some of the things we’re saying about social psychology could fruitfully apply to journalism.
I don't think you should be a skeptic. Not on the severity of human-caused warming, the safety of GMOs, or a few other things.
1. I'm a skeptic.
-- or --
2. I'm skeptical of the predictive power of climate science at this point.
When you choose the noun, you've chosen membership in a group. You've elected to classify yourself as an entity, saying "I am an X." You've adopted an identity, and in so doing you've set yourself up to be more biased and less flexible in the future. Why? Because it's harder to abandon an identity and one's tribal good standing than it is to modify a mere position. Plenty of social psychology research tells us this, or dovetails into this.
Right now, science is a flag of convenience for almost everyone, depending on the issue. Vast numbers of people on the left who know nothing about climate science and have no credible procedure for deciding what's true or not true tout their belief in significant future human-caused warming as a certification of their membership in a superior pro-science order.
People on the right, as well as many libertarians, have retreated to the skeptics' castle, fending off the classic Holocaust-revisionist tag of "denier"*, and casting the other side as "alarmists", or even "warmists", as though there is an extant ideology of temperature.
This is bad news for pretty much all of us.
If you're a climate skeptic, maybe you're right. Maybe you'll turn out to be right, say if the current slowdown in warming continues for many decades and we thus never come close to the amount of warming that the old models projected. In that case, human-caused warming will have mostly been a false alarm. (This would be a long-term disaster for the public's confidence in science, across fields, screwing us all, so I almost want some warming to happen. Almost.)
However, reality is famously fickle. You don't always get it your way. Sometimes we don't get to say "I told you so" when it's all over. Let's note some common skeptic positions: there is considerable uncertainty in our knowledge of the climate's sensitivity to CO2 in the context of myriad other forcings, most of the models didn't predict the current slowdown, estimates of climate sensitivity are constantly revised, we don't know enough yet, etc.
Those are all time-sensitive positions. That is, they can be resolved or dampened in time, as climate scientists advance their knowledge, lock down some forcings (assuming ECS is a real thing, or not too dynamic a thing), etc. If you're a skeptic, you should be prepared to no longer be a skeptic if, in five or fifteen years, some of these issues are resolved, if Judith Curry and other smart experts modify their positions, etc. This will likely be easier if you're not a skeptic, but just skeptical.
* You might take comfort in knowing that there is no scientific basis for a construct of "denier" or "denial" that distinguishes how AGW skeptics or political conservatives process scientific evidence. Kahan reports that AGW skeptics know more climate science than believers do, so "denial" isn't going to be about knowing less. You should push back against any journalist, science writer, or scientist who uses the term denier, and ask them what scientific basis there is for it. Relatedly, Lewandowsky performed none of the normal scientific procedures to validate his construct of "conspiracist ideation". In research psychology, we don't generally invent new personality constructs without validating them by various methods. Just because someone says there is something called "denial of science" or "conspiracist ideation" doesn't mean these are real personality traits.
I was disappointed to see this article by Gayathri Vaidyanathan in Scientific American: How to Misinterpret Climate Change Research. It's an attempt to defang the recent surge of interest in Bjørn Stevens' aerosol forcing estimates. To some people, those estimates imply a lower estimate of ECS, and a reduced likelihood of the more extreme future warming scenarios (Stevens agrees with the latter point.)
The article is not just deeply biased – it's structurally biased.
1. It purports to debunk the implications climate scientist Nic Lewis draws from the new estimates, but does not link to his essay.
2. It does, however, link to Bjørn Stevens' extremely short press release, in which he disagrees with unspecified lower estimates of ECS that people have been making based on his new aerosol estimates.
3. The author interviewed Stevens and quotes him liberally, but never quotes Lewis, and gives no indication that she even attempted to speak to or interview him. The whole thing is structurally biased against one view and in favor of the other – only interviewing one side, only linking to one side.
4. The SciAm article suggests that Stevens debunked Lewis: "Soon after, he took the unusual step, for a climate scientist, of issuing a press release to correct the misconceptions. Lewis had used an extremely rudimentary, some would even say flawed, climate model to derive his estimates, Stevens said."
In reality, Stevens never mentioned Lewis in his press release. Moreover, he said nothing about any models or methods. He only said that he disagreed with some of the implications people are drawing, and he didn't elaborate. And what's with "some would say"? Who? Did Stevens say that some would say that Lewis used a flawed climate model? This is weaselly, Rolling Stone-style journalism.
The above SciAm passage clearly implies that Stevens said these things in his press release. Unless Vaidyanathan based the passage on an interview with Stevens, it's a fabrication. And if she did get this from an interview, the passage needs to be corrected so as not to imply that Stevens said these things in his press release. None of that fixes the structural bias of the article, or its lack of engagement with the scientific issue it purports to cover.
The article doesn't have anything to say about Lewis' purportedly flawed model. It just asserts that it's flawed, and that Stevens said it was. That's all it has to say.
If climate sensitivity turns out to be low, in the Lewis range, this politically biased garbage is going to be an exhibit in future post-mortems on how human-caused warming was exaggerated and the public was deceived by awful science journalism. The article is a hurried, slapdash effort to knock down any suggestion that warming isn't a crisis.
Which brings us to a broader issue. Scientific American is consistently politically biased, in a way that compromises its integrity. In general, they don't cover peer-reviewed scientific articles that offer lower estimates of warming, sensitivity, or impacts. They didn't cover Fyfe et al.'s Nature article. They didn't cover Lewis and Curry's recent paper in Climate Dynamics (and Vaidyanathan never tells the reader that Lewis just published a peer-reviewed journal article on the very issue of reduced estimates of ECS.) They didn't cover any of the recent work that reported that parts of the ocean were cooling – but they were sure to cover papers showing that (other) parts of the ocean were warming.
They only covered Stevens' new paper because it's gotten lots of attention among skeptics for its reduced-warming implications, and they only covered it in an attempt to attack those implications.
Whenever Scientific American reports on scientific issues that have political implications, it only reports them from a left-wing perspective. Over many years, I have never seen Scientific American report any science that bolstered a conservative or libertarian position, and there are lots of bodies of scientific evidence that could bolster such positions (the effects of having children out of wedlock, the psychological costs of abortion, the invalidity of gun control studies that include criminal households, anything about the benefits of gun ownership, anything about the harms of government intervention, race differences, the dangers of believing scientific consensuses pushed by governments, like on cholesterol, and now apparently, secondhand smoke.)
I've always found it strange that they publish lots of political writers – people who inject political positions in their reporting on science. Every one of them has had a left-wing orientation. In short, you're simply not going to get balanced coverage from Scientific American on any issue that has political implications within the crude left-right landscape – you'll get a lot of cherry-picked coverage that invariably supports left-wing positions. You definitely won't get balanced coverage of climate science – if you rely on Scientific American, you'll never know about lots of major papers that give lower estimates of warming, even those in top journals.
It's extremely disappointing. They've compromised their standing and integrity as a source of science journalism.
At this point, they've gone a step further – they've abandoned basic standards of professional journalism as such, publishing structurally biased, rigged articles that deceive the public. Their writer here apparently knows nothing about the topic, traffics in lazy, vague claims that x is true and y is false without saying much more than that, and rigs an entire story to block one side from view. There's no science here.
UPDATE: Nic Lewis has an update on the SciAm article. It only looks worse. Scientific American needs to employ writers who know the science they're covering.
This is a popular idea that has gotten good coverage, most recently in FastCo. Leaf Van Boven has been doing great work on this for years. (He was Gilovich's graduate student – the FastCo article quotes Gilovich a lot because it's based on a new review article by Gilovich & Kumar.)
One of the things I want to nail in my book is the limited relevance of main effects. Meaning, statements like "X makes people happier", "Y is good for you", "Z is bad for you", "Self-esteem predicts bullying", and of course the FastCo title: "The Science of Why You Should Spend Your Money on Experiences, Not Things."
There's a systematic problem in the coarseness and lack of utility of main effects in social science, and in a lot of biomedical work. There are many different issues at play, at different levels of analysis, that make the above statements problematic (the epistemic and methodological problems with "X makes people happier" are different from the problems with claims like "Self-esteem predicts bullying".)
But let's focus on this story on the science of why you should spend your money on experiences rather than things...
34% of you should not do this.
Let's backtrack to the studies. This is often measured by asking people what made them happier in retrospect, which is probably a valid approach (there are probably several valid ways of getting at this.)
Do you think 100% of people reported that experiences made them happier than purchases? Probably not. We never get universality in social psychology or positive psychology research. I think science writers should linger on these sorts of questions and should get the actual percentages for the effects they report (my new website will make these sorts of issues more salient and not dependent on a particular post.)
In one of the flagship studies on this (Van Boven & Gilovich, 2003), 57% of participants reported that an experience made them happier than a purchase (they were asked to think of one of each, where their motivation for the purchase or experience had been to increase their happiness.)
34% rated the purchase as making them happier than the experience. (The remainder were Not Sure or declined to answer. Also, I'm using "purchase" to mean purchase of a thing or object.)
So, if you read this article at FastCo – or similar articles at the NYT or Slate – and you act on it, spending your money on experiences instead of buying something you would have treasured, 34% of you will be less happy than you would've been if you had bought whatever you wanted to buy.
We might even say 34% of you will be less happy than if you had never read the article, if it changed your decision. That's interesting.
It may not matter much, but it could. Qualifiers include:
-- "Less happy" is not necessarily unhappy. What was measured in this study was people's choices of which made them happier. In a lot of cases, both experiences and purchases might have made them happy, but one just made them happier than the other. (The mean score for how happy a purchase made them was above the midpoint at 6.62 on a 1–9 scale, and experiences were at 7.51. Different study, same paper.)
-- This is recalled, reflective judgment of what made them happier. I don't know much about any literature on the validity or nature of recalled happiness grading. I assume it's not as unreliable as predicting what will make us happy (see Dan Gilbert's excellent work there.)
-- There are likely to be all sorts of contextual effects and circumstances that dictate one's choices. Happiness as a concept may not be commensurable across individuals, and it certainly doesn't have to be the most important consideration. I think this reality sometimes gets lost in positive psychology – people don't have to care a lot about happiness. They might care about meaning or some other value more than happiness. They might care about the happiness of other people, like their spouses or children, more than their own (though making them happy would presumably make oneself happy in that case.) In fact, suffering can be very valuable. There seems to be an intolerance for suffering implied in positive psychology and in modern politics. I think suffering might be getting short shrift.
Obviously, this can all be very complicated, and the right decisions for you might be heavily and consciously influenced by your philosophy, values, or faith. Certainly what is happening when people answer a question like “When you think about these two purchases, which makes you happier?” might be very complicated.
But say we keep it simple. The headline and article only apply to 57% of people, if we take everything at face value, assume that readers of the article are very similar on average to the samples used in the studies, and so forth.
(This was an excellent sample, as it was a random digit dialing survey by professionals at Harris Interactive, so we don't have to worry about people making claims about human nature based on research using college students. It was conducted in 2000, so random digit dialing was quite valid – landlines were still common.)
This phenomenon of the media reporting an effect as though it applies to everyone is recurrent. And a lot of times, the percentage of the population to whom it applies is not very large, maybe a slim majority like 57%. Sometimes, it will be a minority – a minority of the sample can drive an effect, whether it's a correlation or a difference between experimental groups. In this case, it could easily have been that 40% were happier from experiences, 34% were happier from purchases, and the rest were don't know/no opinion. That's a very normal outcome.
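The minority-driven scenario above (40% happier from experiences, 34% from purchases, the rest no opinion) can be sketched in a few lines. All numbers below are hypothetical, taken from the scenario in the text; the per-person rating differences (+2 and -1 on the study's 1–9 scale) are invented purely for illustration:

```python
# Hypothetical illustration: a minority of respondents can drive a
# group-level "experiences > purchases" effect.
# Suppose, per the scenario above, 40% rate the experience happier,
# 34% rate the purchase happier, and 26% see no difference.

n = 1000
# Per-respondent happiness difference (experience minus purchase),
# with invented magnitudes on a 1-9 rating scale:
diffs = (
    [+2] * 400 +   # 40%: experience made them happier
    [-1] * 340 +   # 34%: purchase made them happier
    [0]  * 260     # 26%: no difference / not sure
)

mean_diff = sum(diffs) / n
print(f"Mean difference (experience - purchase): {mean_diff:+.2f}")
# A positive overall effect emerges even though only 40% of the
# sample -- a minority -- actually favored experiences.
```

The point of the sketch is just that a headline-grabbing mean difference says nothing about what fraction of people it describes.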
Note that the researchers dig into the factors that shape the effects – Gilovich and Kumar address the heterogeneity, and Van Boven has done lots of work getting at moderators. I'm focused on how findings are communicated and simplified by the media. This isn't a scandal or anything – lots of people might live better lives because they read about this research. Some people might be worse off if they make decisions based on what they think science says they should do. It's complicated, and I don't have any easy answers.
Part of the story here is hedonic adaptation, how we can be happy for a time with some new circumstance or purchase, but over time our happiness returns to baseline – the thing that made us happier doesn't do much for us anymore. Researchers have noted we don't seem to be getting happier across generations, and the implication is that we should be (because our objective standard of living is much higher.) I've always found that strange, and discuss it in my chapter on measuring well-being. For one thing, the baseline is happiness – most people report being slightly or somewhat happy, and mean happiness is above the midpoint. Given the way we measure happiness, there isn't a lot of room to capture movement upward, for a number of reasons.
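The "not a lot of room to capture movement upward" point is a ceiling effect, and a toy sketch makes it concrete. Every number here is invented; the only assumption taken from the text is that most people already report happiness above the midpoint of a bounded 1–9 scale:

```python
# Hypothetical ceiling-effect illustration on a bounded happiness scale.
# Suppose everyone's "true" happiness rises by one unit across a
# generation, but responses are clipped to a 1-9 scale.

def clip(x, lo=1, hi=9):
    """Constrain a response to the scale's endpoints."""
    return max(lo, min(hi, x))

# An invented sample whose mean sits above the scale midpoint of 5,
# as the happiness literature reports:
before = [6, 7, 7, 8, 8, 9, 9, 5, 7, 8]
after = [clip(x + 1) for x in before]  # true happiness rises by 1 for all

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
print(mean_before, mean_after)
# The measured gain is smaller than the true one-unit gain, because
# respondents already near the top of the scale have nowhere to go.
```

With a sample already skewed toward the top, even a uniform real improvement shows up as a compressed measured improvement, which is one reason flat generational happiness trends are hard to interpret.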
Second, it's not clear to me why we would want or need more happiness. I certainly wouldn't think it would be wonderful if people were maximally happy all the time. We have negative emotions for a reason – lots of reasons, really. The environment is not neutral with respect to our well-being, and it's certainly not the kind of place where we could be super happy all the time and still function as organisms. Mild happiness might be optimal for a civilization – I don't really have a strong position there, but assuming that the mean level of happiness should be some value X, or in some range R, that is different from the status quo is a pretty big assumption. I think it would need some justification.
José L. Duarte
Social Psychology, Scientific Validity, and Research Methods.