Here are some tips for those who report on science (and science consumers):
1. Validity is the thing.
Don't be dazzled by numbers, math, statistics, and "significance". The largest weakness of social science research is validity, and validity isn't packaged as a number. People often mistakenly think science is all about math or quantifying everything. That's not quite right, and all the math and data in the world means nothing if the study is invalid. Consider the following example:
Let's say we run a study where we want to see if people change their views of capital punishment if they're given data that contests their viewpoint. We select participants who have scored at the extremes on this issue -- i.e. people who have particularly strong views for or against capital punishment. We bring these people into the lab, and someone in a lab coat hands them a news article that reports empirical research on the deterrent effects of capital punishment. Each participant reads the article that conflicts with their position—so, a staunch supporter of capital punishment reads that it has no deterrent effect on murder rates, and an opponent reads that it does.
After reading about evidence that conflicts with their positions, we ask them for their views on capital punishment. They haven't changed their minds. So then we report that people don't respond to evidence.
What just happened? What's the flaw there? Well, if you know people who have strong positions on capital punishment, or you have one yourself, you know that it isn't an empirical issue for them – whether deterrent effects or other narrow concerns. People's views on capital punishment are based on deeper principles of justice, which in formal terms is called a deontological view. A supporter might think that a person who has deliberately taken an innocent life has forfeited his own right to live, and should be executed as a matter of justice. Opponents might think that we shouldn't give the state the right to kill, or that the risk of wrongly executing one innocent person outweighs other considerations. These are not data-driven views, and they don't have to be. No one is obligated to be an empiricist or utilitarian. Giving these people data on deterrent effects didn't engage the foundations of their positions.
Moreover, we should also wonder why any sane person would change their views on such a grave issue in a span of minutes, simply because some guy in a lab coat hands them an article (and in an experimental laboratory setting). A person would have to be remarkably gullible, and hold very contingent and unstable views, to flip in this context.
Social psychologists tend to be very empirically minded, and here they embedded their own empiricist assumptions in their study design and imposed that framework on their participants -- the study I just described is a real study, famous and heavily cited (Lord, Ross, & Lepper, 1979). As far as I know, no one has pointed out the invalidity of the study in all these years. (Note that their broad conclusions may be entirely true -- people might indeed be resistant to evidence, but we wouldn't know it from this study. We would know it from valid studies, or to some extent, from our own experience.)
2. Anyone can say anything.
...including scientists. Running stories based on what one scientist says, when they haven't published their claim in a peer-reviewed journal, is very risky. Not all scientists are competent. Peer review weeds out some errors. At the very least, ask other scientists what they think. Also, a conference talk is not much different from a scientist calling you up on the phone -- there isn't a peer review process involved. Just because someone said something at a conference doesn't mean it's true.
I see this most often with climate skeptics and advocates of cult diets. Oftentimes, a climate skeptic will seize upon a news article that discusses the views of some guy at a university somewhere in the world. He'll be called a "scientist", but sometimes it's unclear if he's a climate scientist or in some other field. He'll have a hypothesis along the lines that AGW is false or seriously overstated, or perhaps an alternate explanation. In the worst cases, there's no mention of a journal article (even a submission), and the reporter won't question any other scientists. If we're outside climate science, we have no context within which to place such news articles. Climate science rests on a large body of evidence and publications, spanning dozens of subfields. One guy with a new hypothesis can't possibly outweigh hundreds of full-time climate scientists.
Yes, sure, it's possible that an unknown can come along with a eureka insight and upend a scientific field. Reality, nature, is full of surprises, and it answers to no consensus. But we have no reason to assume that's what's happening when we read a news article about a dissenter – as outsiders, we simply don't know enough to make that call, to be confident in that call. People who seize upon a news article that cites one dissenting scientist, while ignoring the evidence of a large consensus, are exhibiting obvious confirmation bias. It couldn't be more obvious, could it? Why do they never seize on news articles that interview a mainstream, "pro"-AGW climate scientist?
(Note that peer-review may not be as good as we thought. A lot of scholars are thinking about this issue lately. Some reviewers are unprofessional, political, or incompetent. Scientific fields can exhibit groupthink and tribalism as much as other human institutions. I'm open to the possibility that by 2025 we'll have lots of research showing that peer-review adds little value. I'm open to it, but I'm not there yet, because I haven't seen comprehensive evidence.)
3. Watch out for the Duarte Large Number Fallacy.
I see it a lot in the news. This fallacy happens when an argument consists of nothing but the declaration of a large number, after which the person just stops for no reason. (It has nothing to do with Samuelson's brilliant Fallacy of Large Numbers, hence the Duarte part.) Like the DSNF below, this fallacy is part of a broader class of fallacies I term non-seq fallacies – non sequiturs that are cut off in the middle; you might call them aspiring or abbreviated non sequiturs. Check it out:
-- Undocumented immigrants cost the federal government $10 billion in education spending every year!
-- Fossil fuels are adding 1 billion tons of CO2 to the atmosphere every year!
In both cases, these statements are supposed to tell us something meaningful by virtue of their massive numbers. The conclusion is implied, kind of hanging in the air, but usually unstated. Here it would be that we should do something about Mexicans, or about CO2.
But the statements tell us nothing, because the DLNF is acontextual by design. It's a hit-and-run, and provides no context to give meaning to those numbers. In the immigrant case, we have no idea how much undocumented immigrants pay in taxes to offset any claimed costs (or their broader economic impact, e.g. food prices, cheaper homes, labor market flexibility, etc.) – many taxes are inescapable, even by the undocumented (sales taxes, property taxes via rent, excise taxes, sometimes even income taxes, etc.) It also doesn't tell us that the federal budget is more than $3.5 trillion, which makes the large number somewhat small by comparison -- about a third of one percent of the budget.
In the CO2 case, again the necessary context is excluded. How much CO2 is there in the atmosphere to begin with? About 3,000 billion tons, or 3,000 gigatonnes, or 3,000 times as much as the annual emissions in the claim. (And here, the numbers tell us nothing – they're meaningless without a scientific assessment of greenhouse effects. We can't know anything just from these numbers – see the Small Number Fallacy below.) It also leaves out the benefits of pollution, which are the benefits of the industrial revolution, like long lives, cheap food distribution, easy transport, streets that aren't full of horse manure and the consequent health risks, being able to power MRI machines, and your ability to read stuff like this anytime anywhere, etc.
These cases give us one side of the ledger, and are silent on the other.
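The missing context is just arithmetic. Here's a quick sketch that puts both claims next to the baselines discussed above (all figures are the ones cited in the text, not independently sourced):

```python
# Putting the "large numbers" in context (figures as cited in the text).
education_cost = 10e9        # claimed annual cost to the federal government, dollars
federal_budget = 3.5e12      # approximate federal budget, dollars
print(f"Share of budget: {education_cost / federal_budget:.2%}")  # → 0.29%

annual_co2 = 1e9             # tons of CO2 per year, as stated in the claim
atmospheric_co2 = 3000e9     # tons already in the atmosphere, as stated above
print(f"Annual addition: {annual_co2 / atmospheric_co2:.4%}")  # → 0.0333%
```

Neither percentage settles anything by itself, of course – that's the point of the Small Number Fallacy below – but without the denominator, the numerator is meaningless.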
4. Watch out for the Duarte Small Number Fallacy.
Also common. This fallacy happens when an argument consists of nothing more than the declaration of a small number:
-- CO2 is only 0.04% of earth's atmosphere! That's too little to do anything, so global warming is a scam.
This is the inverse of the DLNF. Natural human intuitions are getting in the way of valid reasoning. The intuition with the large number fallacy is that adding a billion tons of CO2, or anything, to the atmosphere has to have an impact. But nothing has to have an impact. Nothing has to do anything. Reality is whatever it is, and can be structured in any number of ways – we have to discover how it is structured and how it works.
Likewise, the intuition here on the 0.04% figure is that a substance present in such small concentrations can't do anything significant to the system it is part of, can't cause dramatic things like melting ice caps and rising sea levels. The flipside of nothing has to do anything is that anything can do anything. There could easily be physical systems, atmospheres and the like, where even a 0.0000004% concentration of some agent causes a notable effect. Poisons are a good example. Our intuitions about what fractions of a percent can do are completely arbitrary, and are partly driven by the fact that we use a base-10 number system. As humans, we have no innate knowledge about what concentrations of CO2 will induce greenhouse effects – there's nothing in us that tells us anything about that. It could be any number – any number at all.
5. Significant effects can be caused by a minority of the participants in a study
This is wildly underexposed. Consider this scenario:
Hypothesis: Doing jumping jacks before class will cause students to remember more material. (Imagine a theory about wakefulness or blood flow...)
Two groups/classes of students. 30 per class. Assume everything else is controlled for. One class does jumping jacks. One doesn't.
On a later test of recall of material (imagine the study lasts a week or so), here are the results:
Jumping jacks condition: Mean test score is 88%
Control condition: Mean test score is 81%
This kind of mean difference, which could easily be statistically significant, could be driven by, oh, seven students who respond very well to jumping jacks (for various reasons). That's a very normal occurrence -- a minority of the sample is sensitive or responsive to the induction and drives the effect. It could also be the case that jumping jacks give a small boost to 22 students in that condition, but it's very common for effects to be driven by a minority of the sample. A handful of people in a group of 30 can easily push the mean score around, on any variable.
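A quick simulation makes this concrete. This is an illustrative sketch of the scenario above, not a real study: the score distributions and the "seven responders" are assumptions chosen to reproduce a mean gap of roughly seven points.

```python
import random

random.seed(3)

# Control class: 30 students, scores centered near 81.
control = [random.gauss(81, 6) for _ in range(30)]

# Jumping jacks class: 23 students unaffected, 7 "responders"
# who happen to gain about 30 points each.
unaffected = [random.gauss(81, 6) for _ in range(23)]
responders = [random.gauss(81 + 30, 6) for _ in range(7)]
treatment = unaffected + responders

mean = lambda xs: sum(xs) / len(xs)
print(f"control mean:   {mean(control):.1f}")
print(f"treatment mean: {mean(treatment):.1f}")  # ~7 points higher, driven by 7 of 30
```

Seven students gaining ~30 points shifts the group mean by about 7 points (7/30 × 30), while the other 23 students got nothing out of the jumping jacks. The group-level result looks like a clean, general effect.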
This means that we shouldn't make sweeping claims about human nature based on the effects of a study. The effects are not only not universal to humanity as a whole, but they're not even universal within the experimental sample – or even true of the majority of the sample much of the time. It's normal for us to discover some factor that influences human behavior and reduce it to "People do this." or "People are like X." In this case, "People remember more if they do jumping jacks before class." But obviously this is wrong, and it's false for the majority of humans (in this made-up case, which is mirrored in many real examples). "People" don't do squat. Some people do X under conditions Y. There is enormous variance in human behavior, and many factors, inductions, or manipulations of human behavior only work on a minority of the human race.
A core problem is that our language doesn't facilitate the accurate and succinct expression of probabilistic and heterogeneous truths, and social science is entirely based on probabilistic and heterogeneous truths. By "our language", I mean English, Spanish, French, whatever. It's not designed for this. It's designed to say "X does Y".
6. Sample size is already accounted for.
This section was bogus, and I've deleted it until I have something non-bogus to say here. Thanks to Daniël Lakens for pointing out its bogosity. -- Joe Duarte, October 20, 2015
7. Chris Mooney is not credible.
Mooney is a partisan hack making a quick buck by stroking liberal egos and telling them that only they are in touch with reality, that science is both extremely relevant to most political issues and turns out to be entirely on their side, and that their adversaries have defective brains.
He won't report any of the science that contests or debunks his narrative – and there is a lot of it – and he doesn't seem able to evaluate the findings he does report. He's not interested in science per se, but in viciously marginalizing American conservatives, even speaking in terms of a "Republican brain", and purporting to dissect it with his English degree. He doesn't understand the difference between ideological tenets, momentary political issues, and descriptive reality, so he clumsily gloms them all together under the false flag of "science". He's almost certainly harmed the credibility of science in the United States, especially to the extent that he promotes sloppy findings that are later revealed to be false (already happened). There is also real danger in his and similar efforts to cash in on the cheap thrill of co-opting not only science, but reality itself, for one's political tribe. It reduces science to a list of dichotomous beliefs, instead of a method of inquiry, and converts the intellectual landscape into those who see "reality" and "deniers", a noxious tactic that imposes serious friction costs, heavily anchors people against new evidence, intimidates dissenters, and slows progress.
A core issue is that he doesn't know how knowledge works, or how to distinguish between different kinds of knowledge claims, or how political claims can be evaluated. For example:
Mooney thinks your position on ObamaCare is like your position on evolution.
Mooney thinks your position on the debt ceiling is like your position on the age of the earth.
To him, any number of today's political issues, even somewhat granular and opaque fights in Congress, can be adjudicated and sorted into "facts" and "myths" – whatever the Democratic position is, that's factual and grounded in reality and wisdom, and always has a curious connection to science. Whatever the Republican position is, it's based on myth, soaked with disinformation, pushed by people who are disconnected from reality, and again this will have a mysterious connection to science, because science tells us whether we should keep the debt ceiling and whether we should like ObamaCare.
He's not yet discovered that reasonable people can disagree on all sorts of political issues, without anyone being defective, and that his political issues are not the kinds of things we can approach as though they're rocks to be dated. I'm not sure he knows that there are profound philosophical disagreements underneath many of them, along with mundane differences in values, and that it's not at all obvious that modern American liberalism is correct. When he said this: "Let’s face it: We liberals and progressives are absolutely outraged by partisan misinformation.", I think we have to question his basic contact with reality. If someone truly believes that liberals are uniquely outraged by partisan misinformation, and by implication, that liberals don't generate a lot of partisan misinformation, well, that's remarkable. It takes an insular ideology to believe something like that – you'd have to filter out so much information and evidence. I'm not sure he realizes that there are lots of smart, knowledgeable, and benevolent conservatives, libertarians, and no-category people out there in the world, walking around.
On the scientific findings, Mooney will never let you know about research that reports that conservatives are happier than liberals because of "greater personal agency (e.g., personal control, responsibility), more positive outlook (e.g., optimism, self-worth), more transcendent moral beliefs (e.g., greater religiosity, greater moral clarity, less tolerance of transgressions), and a generalized belief in fairness." He'll never report research that finds that fiscal conservatives are smarter than liberals. (Note the title of that paper uses "liberal" in the European sense, someone who supports economic freedom, basically an American libertarian or fiscal conservative. I think the author might've been trying to lure American liberals in, thinking that this was going to show they were smarter.)
When he reports findings that enable him to attack conservatives, he will never check those findings. He will never evaluate those papers for invalid methods or ridiculous biases. For example, he might tell you about a study that reports that conservatives are happier than liberals because they "rationalize inequality", but he won't tell you that the researchers never measured rationalization of anything – that the researchers merely asked participants if hard work tends to pay off in the long run, and called that "rationalization of inequality", or that in another study the researchers simply asked people for their attitudes about inequality and then converted those attitudes to, again, "rationalization of inequality". A lot of things Mooney tells the public will turn out to be false, for these sorts of reasons.
It hasn't occurred to Mooney that a field dominated by leftists might generate biased and at times completely bogus and invalid research that serves leftist values. We know that in advance. It doesn't matter if we're liberals, conservatives, libertarians, whatever... we know that if social psychology is dominated by leftists (or any camp), there will be biased research. There would have to be serious bias correction procedures and good epistemological training in place for bias not to sprout from such a situation, and we have no such bias correction procedures or training. But Mooney takes no account of that at all, does nothing to be mindful of the biases of the field, to catch invalid research like the above. He won't catch it. He's not even thinking about it. He's several levels below that right now – there's no rigor in what he does, and there's a massive error rate.
Many of his claims about liberals and conservatives will turn out to be false, and some are already known to be. He doesn't think about how new and crude a lot of this research is. He just assumes that whatever's happening in academia in his 20s is the last word, is shiny, sparkly science that will always be true. Our methods are somewhat coarse right now, and sometimes invalid. We're probably not going to have a thorough insight into the political psychology of liberals, conservatives, libertarians, populists, et al for another 20 or more years. And no one should hang their hats on anything coming out of neuroscience at this stage, on political psychology. That would be incredibly reckless. They just plugged in the fMRI machine the other day. This work is just starting, the field is dominated by leftists, and we don't have good bias correction measures yet. Writing a book about the Republican brain right now is just begging to be a future example of early 21st century phrenology.
What bothers me most about Mooney is how viciously mean he is to nearly half the American population. To try to marginalize and pathologize over a hundred million people because you disagree with them over debt ceilings and ObamaCare, or their faith – even to the point of dissecting their very brains – is an incredibly malicious and dark thing to do. I can't put it any other way. What he does is jawdropping in its malice. The fact that he's a complete hack and has no idea what he's talking about, doesn't even have a basic grasp of epistemology, knowledge types, scientific validity, confirmation bias, economics, the difference between values and descriptive facts, or even an awareness of the depth and breadth of the political landscape outside his bubble... well, this is really something.

P.S. Only a slim majority of Republicans – 58% – believe God created humans in their current form. There are many millions who believe in evolution, so it's not right to smear them in the tribalist way Mooney does. I used to be a religion-basher myself, and I love Sam Harris, but I've since backed away from the simple swipes. Religion has a very important role in many people's lives, sometimes a salutary one, and I think we still need to do a lot of research to understand what motivates creationism and belief in evolution. I suspect that many people simply cannot see life as meaningful if evolution is true, because it's driven by random and ethically arbitrary processes that offer little explanation for the profundity and depth of meaning with which many (or most) human beings experience life on earth. There's also the matter of our mortality, and the sometimes crushing suggestion that when we die, we are simply gone. We do have some research there.
8. Most policy think-tanks and NGOs are partisan
The technical and legal definition of non-partisan is: not affiliated with a political party. That's too narrow to be useful, and I suspect that among the general public, people think non-partisan means not having a particular political perspective or ideology. So when journalists say "the non-partisan Citizens for Tax Justice reports..." or "the non-partisan Tax Foundation reports...", the public is not well served, since both organizations align with particular camps – the CTJ is a left-wing organization and the TF is a conservative, and perhaps slightly libertarian, organization.
How sources are framed is obviously important, and can influence the credibility readers attach to them. Recent evidence indicates that reporters are much more likely to label conservative think tanks (as conservative), than they are to label liberal think tanks as liberal. This is unfortunate, and I hope it starts to change as more research is published.
9. People who use words like "denier" and "warmist" are unlikely to be credible or reachable.
Call it a hunch (though it's testable). You can linger on it for a minute to see what I mean. Climate change is one of the most politically charged and warped issues of the modern era. It makes people crazy. Imagine a person who regularly blasts "deniers" – how readily will they accept new evidence that revises climate sensitivity downward? (Sensitivity is temperature change in response to a doubling of CO2 from pre-industrial baseline, either transient or equilibrium.) Generally, they'll ignore it. They never cite it or use it to update their views. Their whole worldview, even their identity as persons, their view of who they are at a deep level, seems to rest heavily on blasting deniers and seeing themselves as champions of science. It would be very difficult and painful for them to be wrong, to accept that they were wrong, that the deniers were right about something, or to accept that warming might be mild and not deserve all the drama they've attached to it.
Imagine someone who blasts "warmists" (a derogatory term for people who believe in AGW and are worried about it). How readily will they accept new evidence that AGW may do serious harm? Or evidence that purports to refute downward revisions of climate sensitivity, restoring earlier, larger estimates of the impact of CO2? Or papers that argue that the current pause in mean global surface temps will soon end?
In general, I think politics makes us worse, makes people worse, and the politics of climate change is probably the best example. People who use words like denier and warmist are playing a team sport, and a vicious one at that. They tend to be deeply invested in their identities on one "side" of the issue, and they have a cartoonish view of the other side as evil and corrupt. Evil is somewhat rare, but politics makes people overestimate its prevalence. Politics also comes as a package deal, so people who are really into the left-right, liberal-conservative, Democrat-Republican contest don't want to give ground on anything, for a lot of reasons. I'm guessing one reason is that being wrong on one issue implies that you might be wrong about others...
I've had experiences where I've communicated with a scholar or scientist, and was confused by their unwillingness to uphold basic standards of evidence or play by normal scientific rules. Later, I'd discover that they were political activists who spend much of their time blasting "deniers", and everything suddenly made sense. They've joined a team, and have already decided what reality is – they're simply not in the market for discovery, new facts, revisions of their worldview, etc. It's hard to have a broad historical perspective on one's own era, but we might be in a dark age right now with respect to the force politics has on scientists – this could go very badly if we don't develop much better applied epistemology and bias–correction methods.
10. We can't know very many things just by sitting down and thinking about them
A lot of climate skeptics think they can know stuff about climate change just by sitting down and thinking about it, or by being an engineer and sitting down and thinking about it.
People who do that have a fundamental misconception of how reality is structured. A scientific hypothesis about climate change is based on an enormous body of data and very complex analyses that account for all sorts of variables and forcings. It's built on top of decades of work and discovery of how this very complex system that is climate works. How the climate works is not knowable by human senses, or simple human reflection. Most natural processes are not knowable in such a way. We can't possibly know if climate change is caused by natural variability, the sun, or Mexicans, without doing a hell of a lot of work, using a lot of math, sensors, satellites, models, and so forth. Reality is a complicated place, and modern science is incredibly hard.
This ultimately gets at the issue of experts and expert knowledge. It's a tough issue to straddle, if we want to be careful and rigorous. Acknowledgement of the validity and reality of expert knowledge can easily morph into the fallacies of arguments from authority and ad hominem. The difference between advocating some reliance on expert judgements and appealing to authority is unclear, and to my knowledge not well developed in philosophy of science, or epistemology. There's also the issue of the reliability and historic fallibility of experts, and the fact that expert reliability probably varies depending on the field or domain we're talking about. There may be specific issues of political bias or conflicts of interest in some fields. This is complicated, and deserves a much fuller treatment in a journal or magazine. All I really want to say here is that we can't know much about cloud feedbacks, solar forcing, and the role of aerosols just by being smart and thinking about it. I love the promise of citizen science, and I'm fine with people crashing the party, but many skeptics have a profoundly incorrect grasp of how reality is structured and how we can go about discovering it.
At the end of the day, the climate consensus – with all its data, instruments, math, and PhDs – could indeed be wrong, but the only way to discover that, the only way to know that, is by using all that data, instruments, and math.
Many more tips coming soon... On the first issue above, I'm writing a follow-up to our BBS paper that will focus much more narrowly on invalid research examples, and the underlying nature of the malpractice that ties them together. There will be lots more examples, including scales and psychometrics. There is an entire class of validity-invalidity problems that we (social scientists) are not trained to see or catch, and there are lots of invalid studies being published by major journals because of this, in combination with the political bias of the field.
I want to be clear – my point here is not a generic point about bias – I'm not just saying that because social psychologists are overwhelmingly leftist, they're sometimes biased in their research. My point is much deeper and more specific. Some social psychologists aren't just biased in some vague, conventional sense – they're embedding their ideology into their studies in ways that make those studies invalid and useless. Those studies do not represent knowledge about the world (or of human psychology), and they have no scientific standing. That's a pretty serious problem, hence my interest.