Correction / counting
In one of my posts on the Cook fraud, I listed 19 social science, survey, marketing, and opinion papers that were coded as "climate papers" endorsing anthropogenic warming. I made a mistake in listing this one:
Vandenplas, P. E. (1998). Reflections on the past and future of fusion and plasma physics research. Plasma Physics and Controlled Fusion, 40(8A), A77.
It was coded as mitigation but not as endorsement. My apologies.
Since we have a 1:1 replacement policy at JoseDuarte.com, I'll give you another paper that was indeed counted as endorsement:
Larsson, M. J. K., Lundgren, M. J., Asbjörnsson, M. E., & Andersson, M. H. (2009). Extensive introduction of ultra high strength steels sets new standards for welding in the body shop. Welding in the World, 53(5-6), 4-14.
In fact, this one was counted as a "mitigation" paper and as "explicit endorsement."
As I said in that post, there are lots more where that came from, lots more that I haven't listed. I don't provide them all for two reasons: 1) I want to get people thinking about validity and discourage something I saw after I posted that list of 19 papers – counting and recomputation of meaningless data. 2) I'm saving some for the journal article.
The Cook study was invalid because of the search procedure, the resulting arbitrary dataset that gave some scientists far more votes than others, the subjective raters' conflict of interest (which is unheard of), the basic design, the breaking of rater blindness and independence, etc.
That means there is nothing to count, no percent, not a 97%, not any percent. You could recompute the percentage as 70% or 90% or 30% if you went through the data, but it would still be meaningless. There is no method known to science by which we could take their set of papers and compute a climate science consensus. I need to do a better job of explaining what it means to say that a study is invalid. People have this instinct to still play with data, any data, because it's there, like Mt. Everest. It's an unfortunate artifact of human nature and first-mover advantage, especially in cases where lax journals don't act swiftly to retract.
As always, I want to stress that the study was a fraud, and that this is a completely separate issue from validity. I always remind people of this because I think it would be irresponsible to pretend that it wasn't. They lied about their method. They claimed their ratings were blind to author and independent – they routinely broke blindness and independence in a forum with other raters. Lying about your methods is fraud. That alone makes the study go away. There's no counting to be done. The welding paper above is an example of the third act of fraud – claiming that these were climate papers. There are all kinds of these absurd papers in their 97%. This was never real. There was never a 97%.
I heard someone say something like: "If you don't like a study, run your own to debunk it." That's a common outlook in science, and it's good advice in most cases. In this case, it's inappropriate, bad scientific epistemology. Invalid studies, certainly fraudulent ones, never impose a burden on others to go out and collect data to debunk them. Invalid research should simply be retracted, and everyone should carry on with their lives as if the research never happened, because in a sense, it didn't. Fraud, of course, should be retracted with prejudice. In such cases, there's nothing to refute or debunk with new data – you'd be swinging at air. If we want to know the consensus, we already know it from quality surveys of climate scientists (it's 78–84%), so running around doing another study to "refute" the Cook study would be silly (especially a study based on subjective ratings of abstracts). Sure, some people believe the 97% figure right now – Cook and company were good at the media angle. That will be corrected in time, this study will be dealt with, and I expect the people involved will face appropriate consequences.
6/4/2015 01:41:32 am
I appreciate the point you are making here. I was taught this long ago in statistics: bad sampling tends to be unfixable.
José L. Duarte
Social Psychology, Scientific Validity, and Research Methods.