Jose L. Duarte

Cooking stove use, housing associations, white males, and the 97%

8/28/2014

The Cook et al. (2013) 97% paper included a bunch of psychology studies, marketing papers, and surveys of the general public as scientific endorsement of anthropogenic climate change.

Let's walk through that sentence again. The Cook et al. 97% paper included a bunch of psychology studies, marketing papers, and surveys of the general public as scientific endorsement of anthropogenic climate change. This study was already multiply fraudulent and multiply invalid – e.g. their false claim that the raters were blind to the identities of the authors of the papers they were rating, which is absolutely crucial for a subjective rating study. (They maliciously and gleefully revealed "skeptic" climate science authors to each other in an online forum, as well as other authors. Since the raters were random people working at home, they could simply google the titles of papers and see the authors, making blindness impossible to enforce or claim to begin with. This all invalidates a subjective rater study.) But I was blindsided by the inclusion of non-climate papers. I found several of these in ten minutes with their database – there will be more such papers for those who search longer. I'm not willing to spend a lot of time with their data – invalid or fraudulent studies should simply be retracted, because they have no standing. Sifting through all the data is superfluous when the methods are invalid and structurally biased, which is the case here for several different reasons, as I discuss further down.

I contacted the journal – Environmental Research Letters – in June, and called for the retraction of this paper, and it's currently in IOP's hands (the publisher of ERL). I assume they found all these papers already, and more. The authors explicitly stated in their paper (Table 1) that "social science, education and research on people's views" were classified as Not Climate Related, and thus not counted as evidence of scientific endorsement of anthropogenic climate change. All of the papers below were counted as endorsement.

Chowdhury, M. S. H., Koike, M., Akther, S., & Miah, D. (2011). Biomass fuel use, burning technique and reasons for the denial of improved cooking stoves by Forest User Groups of Rema-Kalenga Wildlife Sanctuary, Bangladesh. International Journal of Sustainable Development & World Ecology, 18(1), 88–97.  (This is a survey of the public's stove choices in Bangladesh, and discusses their value as status symbols, defects in the improved stoves, the relative popularity of cow dung, wood, and leaves as fuel, etc. They mention climate somewhere in the abstract, or perhaps the word denial in the title sealed their fate.)

Boykoff, M. T. (2008). Lost in translation? United States television news coverage of anthropogenic climate change, 1995–2004. Climatic Change, 86(1-2), 1–11.

De Best-Waldhober, M., Daamen, D., & Faaij, A. (2009). Informed and uninformed public opinions on CO2 capture and storage technologies in the Netherlands. International Journal of Greenhouse Gas Control, 3(3), 322–332. 

Tokushige, K., Akimoto, K., & Tomoda, T. (2007). Public perceptions on the acceptance of geological storage of carbon dioxide and information influencing the acceptance. International Journal of Greenhouse Gas Control, 1(1), 101–112.

Egmond, C., Jonkers, R., & Kok, G. (2006). A strategy and protocol to increase diffusion of energy related innovations into the mainstream of housing associations. Energy Policy, 34(18), 4042–4049.

Gruber, E., & Brand, M. (1991). Promoting energy conservation in small and medium-sized companies. Energy Policy, 19(3), 279–287. 

Şentürk, İ., Erdem, C., Şimşek, T., & Kılınç, N. (2011). Determinants of vehicle fuel-type preference in developing countries: a case of Turkey.  (This was a web survey of the general public in Turkey.)

Grasso, V., Baronti, S., Guarnieri, F., Magno, R., Vaccari, F. P., & Zabini, F. (2011). Climate is changing, can we? A scientific exhibition in schools to understand climate change and raise awareness on sustainability good practices. International Journal of Global Warming, 3(1), 129–141.  (This paper is literally about going to schools in Italy and showing an exhibition.)

Palmgren, C. R., Morgan, M. G., Bruine de Bruin, W., & Keith, D. W. (2004). Initial public perceptions of deep geological and oceanic disposal of carbon dioxide. Environmental Science & Technology, 38(24), 6441–6450.  (Two surveys of the general public.)

Semenza, J. C., Ploubidis, G. B., & George, L. A. (2011). Climate change and climate variability: personal motivation for adaptation and mitigation. Environmental Health, 10(1), 46.  (This was a phone survey of the general public.)

Héguy, L., Garneau, M., Goldberg, M. S., Raphoz, M., Guay, F., & Valois, M.-F. (2008). Associations between grass and weed pollen and emergency department visits for asthma among children in Montreal. Environmental Research, 106(2), 203–211. (They mention in passing that there are some future climate scenarios predicting an increase in pollen, but their paper has nothing to do with that. It's just medical researchers talking about asthma and ER visits in Montreal, in the present. They don't link their findings to past or present climate change at all (in their abstract), and they never mention human-caused climate change – not that it would matter if they did.)

Lewis, S. (1994). An opinion on the global impact of meat consumption. The American Journal of Clinical Nutrition, 59(5), 1099S–1102S.  (Just what it sounds like.)

De Boer, I. J. (2003). Environmental impact assessment of conventional and organic milk production. Livestock Production Science, 80(1), 69–77.

Acker, R. H., & Kammen, D. M. (1996). The quiet (energy) revolution: analysing the dissemination of photovoltaic power systems in Kenya. Energy Policy, 24(1), 81–111.  (This is about the "dissemination" of physical objects, presumably PV power systems in Kenya. To illustrate the issue here, if I went out and analyzed the adoption of PV power systems in Arizona, or of LED lighting in Lillehammer, my report would not be scientific evidence of anthropogenic climate change, or admissible into a meaningful climate consensus. Concretize it: Imagine a Mexican walking around counting solar panels, obtaining sales data, typing in MS Word, and e-mailing the result to Energy Policy. What just happened? Nothing relevant to a climate consensus.)

Vandenplas, P. E. (1998). Reflections on the past and future of fusion and plasma physics research. Plasma Physics and Controlled Fusion, 40(8A), A77.  (This is a pitch for public funding of the ITER tokamak reactor, and compares it to the old INTOR.  For example, we learn that the major radius of INTOR was 5.2 m, while ITER is 8.12 m. I've never liked the funding conflict-of-interest argument against the AGW consensus, but this is an obvious case. The abstract closes with "It is our deep moral obligation to convince the public at large of the enormous promise and urgency of controlled thermonuclear fusion as a safe, environmentally friendly and inexhaustible energy source." I love the ITER, but this paper has nothing to do with climate science.)

Gökçek, M., Erdem, H. H., & Bayülken, A. (2007). A techno-economical evaluation for installation of suitable wind energy plants in Western Marmara, Turkey. Energy, Exploration & Exploitation, 25(6), 407–427.  (This is a set of cost estimates for windmill installations in Turkey.)

Gampe, F. (2004). Space technologies for the building sector. Esa Bulletin, 118, 40–46.  (This is a magazine article – in a magazine published by the European Space Agency. Given that the ESA calls it a magazine, it's unlikely to be peer-reviewed, and it's not a climate paper of any kind – after making the obligatory comments about climate change, it proceeds to its actual topic, which is applying space vehicle technology to building design.)

Ha-Duong, M. (2008). Hierarchical fusion of expert opinions in the Transferable Belief Model, application to climate sensitivity. International Journal of Approximate Reasoning, 49(3), 555–574. (The TBM is a theory of evidence and in some sense a social science theory – JDM applied to situations where the stipulated outcomes are not exhaustive, and thus where the probability of the empty set is not zero. This paper uses a dataset (Morgan & Keith, 1995) that consists of interviews with 16 scientists in 1995, and applies TBM to that data. On the one hand, it's a consensus paper (though dated and small-sampled), and would therefore not count. A consensus paper can't include other consensus papers – circular. On the other hand, it purports to estimate the plausible range of climate sensitivity, using the TBM, which could make it a substantive climate science paper. This is ultimately moot given everything else that happened here, but I'd exclude it from a valid study, given that it's not primary evidence, and the age of the source data. (I'm not sure if Ha-Duong is talking about TCS or ECS, but I think it's ECS.))

Douglas, J. (1995). Global climate research: Informing the decision process. EPRI Journal. (This is an industry newsletter essay – the Electric Power Research Institute. It has no abstract, which would make it impossible for the Cook crew to rate it. It also pervasively highlights downward revisions of warming and sea level rise estimates, touts Nordhaus' work, and stresses the uncertainties – everything you'd expect from an industry group. For example: "A nagging problem for policy-makers as they consider the potential costs and impacts of climate change is that the predictions of change made by various models often do not agree." In any case, this isn't a climate paper, or peer-reviewed, and it has no abstract. They counted it as Implicit Endorsement – Mitigation. (They didn't have the author listed in their post-publication database, so you won't find it with an author search.))

(I previously listed two other papers as included in the 97%, but I was wrong. They were rated as endorsement of AGW, but were also categorized as Not Climate Related.)

The inclusion of non-climate papers directly contradicts their stated exclusion criteria. The Not Climate Related category was supposed to include "Social science, education, research about people’s views on climate." (Their Table 1, page 2) Take another look at the list above. Look for social science (e.g. psychology, attitudes), education, and research on people's views...

(To be clear, I'm not at all criticizing the above-listed works. They all look like perfectly fine scholarship, and many are not from fields I can evaluate. My point is that they don't belong in a paper-counting consensus of climate science, even if we were counting directly related fields outside of climate science proper.)

The authors' claim to have excluded these unrelated papers was false, and they should be investigated for fraud. There are more papers like this, and if we extrapolate, a longer search will yield even more. This paper should be retracted post haste, and perhaps the university will conduct a more thorough investigation and audit. There are many, many more issues with what happened here, as detailed below.

Now, let's look at a tiny sample of papers they didn't include:

Lindzen, R. S. (2002). Do deep ocean temperature records verify models? Geophysical Research Letters, 29(8), 1254.

Lindzen, R. S., Chou, M. D., & Hou, A. Y. (2001). Does the earth have an adaptive infrared iris? Bulletin of the American Meteorological Society, 82(3), 417–432.

Lindzen, R. S., & Giannitsis, C. (2002). Reconciling observations of global temperature change. Geophysical Research Letters, 29(12), 1583.

Spencer, R. W. (2007). How serious is the global warming threat? Society, 44(5), 45–50.

There are many, many more excluded papers like these. They excluded every paper Richard Lindzen has published since 1997. How is this possible? He has over 50 publications in that span, most of them journal articles. They excluded almost all of the relevant work of arguably the most prominent skeptical or lukewarm climate scientist in the world. Their search was staggering in its incompetence. They searched the Web of Science for the topics of "global warming" and "global climate change", using quotes, so those exact phrases. I don't know how Web of Science defines a topic, but designing the search that way, constrained to those exact phrases as topics, excluded everything Lindzen has done in the current century, and a lot more.

Anyone care to guess which kinds of papers will tend to use the exact phrase "global warming" as official keywords? Which way do you think such papers will lean? Did no one think about any of this? Their search method excluded vast swaths of research by Lindzen, Spencer, Pielke, and others. I'm not going to do all the math on this – someone else should dig into the differential effects of their search strategy.
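To make the selection worry concrete, here's a toy sketch. The mini-corpus below uses titles of real papers discussed in this post, but the filter itself is a crude title-only stand-in for a Web of Science topic search (which also scans abstracts and keywords), so treat it purely as an illustration of how an exact-phrase constraint behaves:

```python
# Toy illustration (hypothetical mini-corpus): an exact-phrase filter on
# "global warming" / "global climate change" keeps papers that use those
# exact phrases and silently drops relevant work that doesn't.
papers = [
    "Does the earth have an adaptive infrared iris?",           # Lindzen et al. 2001
    "Reconciling observations of global temperature change",    # Lindzen & Giannitsis 2002
    "How serious is the global warming threat?",                # Spencer 2007
    "Lost in translation? United States television news coverage "
    "of anthropogenic climate change",                          # Boykoff 2008
]

phrases = ("global warming", "global climate change")

# Keep only papers whose title contains one of the exact phrases.
kept = [p for p in papers if any(ph in p.lower() for ph in phrases)]

print(kept)  # only the Spencer title survives this crude filter
```

Note that "global temperature change" and "anthropogenic climate change" both fail the exact-phrase test – which is the point: the phrasing constraint, not the substance, decides what enters the sample.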

However, this doesn't explain the exclusion of the above Spencer paper. It comes up in the Web of Science search they say they ran, yet it's somehow absent from their database. They included – and counted as evidence of scientific endorsement – papers about TV coverage, public surveys, psychology theories, and magazine articles, but failed to count a journal article written by a climate scientist called "How serious is the global warming threat?" It was in the search results, so its exclusion is a mystery. If the idea is that it was in a non-climate journal, they clearly didn't exclude such journals (see above), and they were sure to count the paper's opposite number (as endorsement):

Oreskes, N., Stainforth, D. A., & Smith, L. A. (2010). Adaptation to global warming: do climate models tell us what we need to know? Philosophy of Science, 77(5), 1012–1028.

In any case, they excluded a vast number of relevant and inconvenient climate science papers. In all of this, I'm just scratching the surface.

Let's look at a climate-related paper they did include:

Idso, C. D., Idso, S. B., Kimball, B. A., HyoungShin, P., Hoober, J. K., Balling Jr, R. C., & others. (2000). Ultra-enhanced spring branch growth in CO2-enriched trees: can it alter the phase of the atmosphere’s seasonal CO2 cycle? Environmental and Experimental Botany, 43(2), 91–100.

The abstract says nothing about AGW or human activity. It doesn't even talk about CO2 causing an increase in temperature. In fact, it's the reverse. It talks about increases in temperature affecting the timing of seasonal CO2 oscillations, and talks about the amplitude of such oscillations. It's a focused and technical climate science paper talking about a seasonal mechanism. The raters apparently didn't understand it, which isn't surprising, since many of them lacked the scientific background to rate climate science abstracts – there are no climate scientists among them, although there is at least one scientist in another field, while several are laypeople. They counted it as endorsement of AGW. I guess the mere mention of CO2 doomed it.

Here's another paper, one of only three papers by Judith Curry in the present century that they included:

Curry, J. A., Webster, P. J., & Holland, G. J. (2006). Mixing politics and science in testing the hypothesis that greenhouse warming is causing a global increase in hurricane intensity. Bulletin of the American Meteorological Society, 87(8), 1025–1037.

Among other things, it disputes the attribution of increased hurricane intensity to increased sea surface temperature, stressing the uncertainty of the causal evidence. The Cook crew classified it as taking "No Position". Note that they had an entire category of endorsement called Impacts. Their scheme was rigged in multiple ways, and one of those ways was their categorization. It was asymmetric, having various categories that assumed endorsement, like Impacts, without any counterpart categories that could offset them – nothing for contesting or disputing attributions of impacts. Such papers could be classified as No Position, and conveniently removed, inflating the "consensus". This is perhaps the 17th reason why the paper is invalid.

There is another pervasive phenomenon in their data – all sorts of random engineering papers that merely mention global warming and then proceed to talk about their engineering projects. For example:

Tran, T. H. Y., Haije, W. G., Longo, V., Kessels, W. M. M., & Schoonman, J. (2011). Plasma-enhanced atomic layer deposition of titania on alumina for its potential use as a hydrogen-selective membrane. Journal of Membrane Science, 378(1), 438–443.

They counted it as endorsement. There are lots of engineering papers like this. They usually classify them as "Mitigation." Most such "mitigation" papers do not represent or carry knowledge, data, or findings about AGW, or climate, or even the natural world. There are far more of these sorts of papers than actual climate science papers in their data. Those who have tried to defend the Cook paper should dig out all the social science papers that were included, all the engineering papers, all the surveys of the general public and op-eds. I've given you more than enough to go on – you're the ones who are obligated to do the work, since you engaged so little with the substance of the paper and apparently gave no thought to its methods and the unbelievable bias of its researchers. The paper will be invalid for many reasons, including the exclusion of articles that took no position on AGW, which were the majority.
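The arithmetic effect of dropping the no-position majority is easy to see. The counts below are the figures widely quoted for the paper (roughly 3,896 endorsing abstracts, 78 rejecting, 40 uncertain, out of 11,944 rated) – treat them as assumptions for this sketch rather than as my audit of their database:

```python
# Headline consensus arithmetic (abstract counts widely quoted for the
# paper; treated as assumptions here). The 97% figure comes from
# discarding the no-position majority before dividing.
endorse, reject, uncertain = 3896, 78, 40
total_rated = 11944

among_position_takers = endorse / (endorse + reject + uncertain)
among_all_rated = endorse / total_rated

print(round(among_position_takers * 100, 1))  # 97.1
print(round(among_all_rated * 100, 1))        # 32.6
```

Same data, two denominators: 97.1% of position-takers versus 32.6% of everything they rated. Which denominator is legitimate is exactly what's in dispute.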

Critically, they allowed themselves a category of "implicit endorsement". Combine this with the fact that the authors here were political activists who wanted to achieve a specific result and appointed themselves subjective raters of abstracts, and the results are predictable. Almost every paper I listed above was rated as implicit endorsement. The operative method seemed to be that if an abstract mentioned climate change (or even just CO2), it was treated as implicit endorsement by many raters, regardless of what the paper was about.

There's yet another major problem that interweaves with the above. Counting mitigation papers creates a fundamental structural bias that will inflate the consensus. In a ridiculous study where we're counting papers that endorse AGW and offsetting them only with papers that reject AGW, excluding the vast majority of topical climate science papers in the search results that don't take simple positions, what is the rejection equivalent of a mitigation paper? What is the disconfirming counterpart? Do the math in your head. Start with initial conditions of some kind of consensus in climate science, or the widespread belief that there is a consensus (it doesn't matter whether it's true for the math here.) Then model the outward propagation of the consensus to a bunch of mitigation papers from all sorts of non-climate fields. Climate science papers reporting anthropogenic forcing have clear dissenting counterparts – climate science papers that dispute or minimize anthropogenic forcing (ignore the fallacy of demanding that people prove a negative and all the other issues here.) Yet the mitigation papers generally do not have any such counterpart, any place for disconfirmation (at least not the engineering, survey, and social science papers, which were often counted as "mitigation"). As a category, they're whipped cream on top, a buy-one-get-three-free promotion – they will systematically inflate estimates of consensus.
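The "do the math in your head" step above can be made explicit with a toy model. All numbers here are hypothetical; the only structural assumption, taken from the argument above, is that "mitigation" papers have no rejection counterpart:

```python
# Toy model (all numbers hypothetical): start from a pool of substantive
# climate papers, then pour in "mitigation" papers that can only ever be
# scored as endorsement. The measured consensus ratio rises mechanically,
# with no new evidence about the climate entering the pool.
climate_endorse, climate_reject = 800, 200        # hypothetical starting pool
consensus_before = climate_endorse / (climate_endorse + climate_reject)

mitigation_endorse, mitigation_reject = 1000, 0   # no rejection counterpart exists
consensus_after = (climate_endorse + mitigation_endorse) / (
    climate_endorse + mitigation_endorse + climate_reject + mitigation_reject
)

print(round(consensus_before, 2))  # 0.8
print(round(consensus_after, 2))   # 0.9
```

The inflation is purely structural: any endorsement-only category pushes the ratio toward 100% in proportion to its size, regardless of what the underlying climate literature says.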

It's even clearer when we consider social science papers. Like most "mitigation" papers, all that's happening is that climate change is mentioned in an abstract, public views of climate change are being surveyed, etc. In what way could a social science paper or survey of the general public be classified as rejecting AGW by this method? In theory, how would that work? Would we count social science papers that don't mention AGW or climate as rejection? Could a psychology paper say "we didn't ask about climate change" and be counted as rejection? Remember, what got them counted was asking people about climate, or the psychology of belief, adoption of solar panels, talking about TV coverage, etc. What's the opposite of mentioning climate? Would papers that report the lack of enthusiasm for solar panels, or the amount of gasoline purchased in Tanzania, count as evidence against AGW, as rejection? What if, instead of analyzing TV coverage of AGW, a researcher chose to do a content analysis of Taco Bell commercials from 1995-2004? Rejection? And if a physicist calling for funding for a tokamak reactor counts as endorsement, would a physicist not calling for funding of a tokamak reactor, or instead calling for funding of a particle accelerator, count as rejection? I assume this is clear at this point. Counting mitigation papers, and certainly social science, engineering, and economic papers, rigs the results (many of which I didn't list above, because I'm stubborn and shouldn't have to list them all.)

Not surprisingly, I haven't found a single psychology, social science, or survey study that they classified as rejection... (but it's not my job to find them -- in a catastrophe like this paper, all burdens shift to the authors. Note that finding a mitigation paper or several that was counted as rejection won't do anything to refute what I just said. First, I doubt you'd find several social science or survey papers that were counted as rejection. Second, you're not going to find more rejections than endorsements in the mitigation category, especially the engineering and social science papers, so the net bias would still be there. e.g. there won't be papers on TV coverage that counted as rejection, in symmetry with those that counted as endorsement. And no amount of counting or math will change anything here. We can't do anything with a study based on political activists rating abstracts on their cause, and we certainly can't do anything with it if they violated blindness, independence, and anonymity, or if their interrater reliability is incredibly low, which it is, or if they excluded neutrality in their measure of consensus, or given the other dozen issues here. We can't trust anything about this study, and if it had been published in a social science journal, I think this would've been obvious to the gatekeepers much sooner.)
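For readers unfamiliar with interrater reliability: the standard check for a two-rater categorization task is chance-corrected agreement, such as Cohen's kappa. Here's a minimal sketch with a toy confusion matrix – emphatically not the Cook data, just an illustration of what the statistic measures:

```python
# Cohen's kappa for two raters (toy confusion matrix, NOT the Cook data):
# rows = rater 1's category, columns = rater 2's category
# (endorse / no position / reject).
def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of items on the diagonal.
    observed = sum(table[i][i] for i in range(len(table))) / n
    # Expected agreement by chance, from the raters' marginal totals.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    return (observed - expected) / (1 - expected)

table = [
    [20, 5, 0],
    [10, 50, 5],
    [0, 5, 5],
]
print(round(cohens_kappa(table), 3))  # 0.524
```

Kappa of 1.0 is perfect agreement, 0 is chance-level. A rigorous rating study reports this figure; when it's low, the ratings are noise regardless of what the headline percentages say.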

(Some might argue that choice of research topic, e.g. choosing to research public opinion on climate change, TV coverage of the same, or the proliferation of solar panels, carries epistemic information relevant to scientific consensus on AGW. 1) This misses the point that they claimed not to include such studies, but included them anyway -- such a false claim would normally void a paper all by itself. 2) But you can just fast-forward the math in your head. The pool of non-climate science research that doesn't mention AGW overwhelms the pool of non-climate research that does mention it, so it's going to come down to an initial condition of 0% of non-climate science people talking about climate in their research (in the past), bumped up to at most something like 1% (but very likely less) because of a stipulated and imported consensus in climate science, where that delta carries epistemic information -- adds to our confidence estimate -- which would in part require you to show that this delta is driven by accurate exogenous consensus detection (AECD) by non-climate researchers, out of all the noise and factors that drive research topic choices, ruling out political biases, funding, advisors, self-reinforcement, social acceptance, etc., and that AECD by a micro-minority of non-climate people, combined with their importation of said consensus, adds to our confidence.)

The inclusion of so many non-climate papers is just one of the three acts of fraud in this publication. It might be a fraud record... There's too much fraud in scientific journals, just an unbelievable amount of fraud. Whatever we're doing isn't working. Peer review in its current form isn't working. There's an added vulnerability when journals publish work that is completely outside their field, as when a climate science journal publishes what is essentially a social science study (this fraudulent paper was published in Environmental Research Letters.)

They claimed to use independent raters, a crucial methodological feature of any subjective rater study conducted by actual researchers: "Each abstract was categorized by two independent, anonymized raters." (p. 2)

Here's an online forum where the raters are collaborating with each other on their ratings. The forum directory is here. Whistleblower Brandon Shollenberger deserves our thanks for exposing them.

And here's another discussion.

And another one. Here, Tom Curtis asks:

"Giving (sic) the objective to being as close to double blind in methodology as possible, isn't in inappropriate to discuss papers on the forum until all the ratings are complete?"

The man understands the nature of the enterprise (Curtis keeps coming up as someone with a lot of integrity and ability. He was apparently a rater, but is not one of the authors.) There was no response in the forum (I assume there was some backchannel communication.) The raters carried on in the forum, and in many other discussions, and violated the protocol they subsequently claimed in their paper.

In another discussion about what to do about the high level of rater disagreement on categories and such, one rater said: "But, this is clearly not an independent poll, nor really a statistical exercise. We are just assisting in the effort to apply defined criteria to the abstracts with the goal of classifying them as objectively as possible. Disagreements arise because neither the criteria nor the abstracts can be 100% precise. We have already gone down the path of trying to reach a consensus through the discussions of particular cases. From the start we would never be able to claim that ratings were done by independent, unbiased, or random people anyhow."

Linger on that last sentence. "We would never be able to claim that ratings were done by independent, unbiased, or random people anyhow."

Yet in the paper, they tell us: "Each abstract was categorized by two independent, anonymized raters."

This is a remarkable case of malpractice, and it only gets worse.

When a study uses subjective human raters, those raters must generally be blind to the identities of their participants. Here, where we have humans reading and rating abstracts of journal articles, the raters most definitely need to be blind to the identities of the authors (who are the participants here) to avoid bias. It wouldn't be valid otherwise, not when rating abstracts on a contentious issue like climate change. There are very few studies based on rating abstracts of journal articles, and there may be as many studies on the bias and disagreement of abstract raters as there are studies using the method (Cicchetti & Conn, 1976; Schmidt, Zhao, & Turkelson, 2009). Those studies were about scientists or surgeons rating abstracts in their fields – the researchers did not contemplate the idea of a study where laypeople rate scientific abstracts.

The Cook study makes these methodological issues secondary given the inherent invalidity of political activists subjectively rating abstracts on the issue of their activism – an impossible method. In addition to their false claim of independent raters, the authors claimed that the raters were blind to the authors of the papers they rated:

"Abstracts were randomly distributed via a web-based system to raters with only the title and abstract visible. All other information such as author names and affiliations, journal and publishing date were hidden." (p. 2)

They lied about that too. From one of their online discussions:

One rater asks: "So how would people rate this one:..."

After pasting the abstract, he asks:

"Now, once you know the author's name, would you change your rating?"

The phrase "the author's name" was linked as above, to the paper, openly obliterating rater blindness.

It was a Lindzen paper.

The rater openly outed the author of one of the papers to all the other raters. He was never rebuked, and everyone carried on as if fraud didn't just happen, as if the protocol hadn't been explicitly violated on its most crucial feature (in addition to rater independence.)

The rater later said "I was mystified by the ambiguity of the abstract, with the author wanting his skeptical cake and eating it too. I thought, "that smells like Lindzen" and had to peek."

It smelled like Lindzen. They were sniffing out skeptics. Do I have to repeat how crucial rater blindness is to the validity of a subjective rating study?

Another rater says "I just sent you the stellar variability paper. I can access anything in Science if you need others."

She sent the paper to another rater. The whole paper. Meaning everything is revealed, including the authors.

Another rater says, about another paper: "It's a bad translation. The author, Francis Meunier, is very much pro alternative / green energy sources."

Helpfully, another rater links to the entire paper: "I have only skimmed it but it's not clear what he means by that and he merely assumes the quantity of it. Note that it was published in the journal Applied Thermal Engineering."

Cook helpfully adds "FYI, here are all papers in our database by the author Wayne Evans:"

Let's quote the methods section once more: "Abstracts were randomly distributed via a web-based system to raters with only the title and abstract visible. All other information such as author names and affiliations, journal and publishing date were hidden. Each abstract was categorized by two independent, anonymized raters."

They do it over and over and over. People kept posting links to the full articles, revealing their provenance. It wasn't enough to shatter blindness and reveal the sources of the papers they were rating – one rater even linked to Sourcewatch to expose the source and authors of an article, adding "Note the usual suspects listed in the above link."

Another rater openly discussed rigging the second part of the study, which involved e-mailing authors and getting their ratings using the same invalid categorization scheme, by waiting to the end to contact "skeptic" scientists (after all non-skeptic scientists had been contacted): "Just leave the skeptic paper e-mails until right near the end. If they wish to leak this, then they'll do a lot of the publicising for us."

I have no idea if they implemented such methodological innovations, but it's remarkable that no one in the forum censured this rater or voiced any objection to such corruption.

Throughout their rating period – tasked with rating scientific abstracts on the issue of AGW – the raters openly discussed their shared politics and campaign on AGW with each other, routinely linked to some article by or about "deniers", and savored the anticipated results of their "study". They openly discussed how to market those results to the media – in advance of obtaining said results. One rater bragged to the other raters that he had completed 100 ratings without a single rejection, contaminating and potentially biasing other raters to match his feat...

More partial results were posted before ratings were completed. They seemed to find every possible way to bias and invalidate the study, to encourage and reinforce bias.

And because their design rested on a team of activists working from their homes, raters were completely free to google the titles, find the papers and authors at will – that breaks the method all by itself. They exhibited zero commitment to maintaining blindness in the forums, and the design made it unenforceable.

The day after ratings started, Cook alerted the raters to "16 deniers" in the Wall Street Journal – a group of scientists who were authors or signatories on a WSJ op-ed, who argued that global warming had been overestimated by prevailing models (which is simply true), and that the real disagreement between scientists is about the size and nature of the human contribution, not dichotomous yes-no positions on AGW. Cook also called them "climate misinformers." There was no elaboration or explanation that would clue us in to why Cook concluded these scientists were deniers or misinformers, but that's less important than the fact that raters were repeatedly primed and stoked to vilify and marginalize one side of the issue they were tasked with rating.

After alerting his raters to the latest misinformation from their enemies/participants – the apparent fable that climate science has texture – Cook observed: "This underscores the value in including the category "humans are causing >50% of global warming" as I think it will be interesting to see how many papers come under this option (yeah, yeah, DAWAAR)."

Well... (FYI, I have no idea what DAWAAR means – anyone know?)

He then linked to an actual scientific study, conducted by capable researchers and commissioned by the American Meteorological Society, which reported a consensus Cook found unacceptably low, and told the raters: "I see the need for TCP everywhere now!"

(TCP is The Consensus Project, their name for this rodeo.)

I was struck by how often they spoke of "deniers" and "denialists" when referring to scientists. They seemed to think that researchers who minimized the degree of anthropogenic forcing were deniers, as when Cook said: "Definitely, we'll show information about the # of denier authors vs endorsement authors." The forums were dripping with this ideological narrative, a completely impossible situation for a valid rater study of climate science abstracts. More broadly, they seemed to genuinely believe that anyone who disagrees with them is a "denier", that reality is arranged this way, consisting merely of their intensely ideological camp and "deniers". They had discussions about how to manage the media exposure and anticipate criticism, where they pre-emptively labeled any future critics as "deniers" (since there is no psychological scientific case for the use of this term, much less its application to the specific issue of global warming, or even a clear definition of what the hell it means, I'm always going to place it in quotes.)

This all tells us:

1) They blatantly, cavalierly, and repeatedly violated the methods they claimed in their paper, methods that are crucial to the validity of a subjective rater study – maintaining blindness to the authors of papers they rated, and conducting their ratings independently. This destroyed the validity of an already invalid study – more coherently, it destroyed the validity of the study if we assumed it was valid to begin with.

2) These people were not in a scientific mood. They had none of the integrity, neutrality, or discipline to serve as subjective raters on such a study. We could've confidently predicted this in advance, given that they're political activists on the subject of their ratings, had an enormous conflict of interest with respect to the results, and the design placed them in the position to deliver the results they so fervently sought. Just that fact – the basic design – invalidates the study and makes it unpublishable. We can't go around being this dumb. This is ridiculous, letting this kind of junk into a scientific journal. It's a disgrace.

They also claimed to use anonymized raters – this is false too. Anonymity of raters in this context could only mean anonymity to each other, since they didn't interact with participants. They were not anonymous to each other, which is clear in the forums, and were even e-mailing each other. It appears that almost everything they claimed about their method was false.

And given their clear hostility to skeptic authors, and their willingness to openly expose and mock them, we know that we have something here we can't possibly trust, many times over. This is fraud, and we should be careful to understand that it is very unlikely to be limited to the few discussion threads I quoted from. Any fraud is too much fraud, and since they were e-mailing papers to each other, we can't confidently claim to know the limits of their fraud. If we had any reason to entertain the validity of this study to begin with, the burden at this point would rest entirely on the people who behaved this way, who were absurdly biased, incompetent, and fraudulent in these examples, to prove that the fraud was limited to these examples. We need to be serious. We can't possibly believe anything they tell us, absent abundant proof. But again, why would we be okay with a moderate level of fraud, or a severely invalid study? It's never going to be valid – it's not in our power to make this study valid or meaningful.

It won't matter whether the fraud we can see here resulted in a hundred incorrect ratings, or no incorrect ratings. No one should be burdened with validating their ratings. That's a waste of time, a perverted burden, except for post-retraction investigators. These people had no apparent commitment to being serious or careful. Subjective raters in scientific research need to be sober and neutral professionals, not people who sniff out the identities of their participants and chronically violate the method that they would subsequently claim in their paper. (And the math of the dichotomous ratings doesn't matter, because they included a bunch of inadmissible mitigation/engineering papers, excluded all the neutral papers, and because the Sesame Street counting method is inherently invalid for several reasons, which I discuss further down.)

There is a lot of content in this essay. That's the Joe way, and there are lots of issues to address – it's a textbook case. But the vastness of this essay might obscure the simplicity of many of these issues, so let me pause here for those who've made it this far:

When a scientific paper falsely describes its methods, it must be retracted. They falsely described their methods, several times on several issues. The methods they described are critical to a subjective human rater study – not using those methods invalidates this study, even if they didn't falsely claim those methods. The ratings were not independent at any stage, nor were they blind. Lots of irrelevant social science psychology, survey, and engineering papers were included. The design was invalid in multiple ways, deeply and structurally, and created a systematic inflating bias. There is nothing to lean on here. We will know nothing about the consensus from this study. That's what it means to say that it's deeply invalid. The numbers they gave us have no meaning at this point, cannot be evaluated. Fraudulent and invalid papers have no standing – there's no data here to evaluate. If ERL/IOP (or the authors) do not retract, they'd probably want to supply us with a new definition of fraud that would exclude false descriptions of methods, and a new theory of subjective rating validity that does not require blindness or independence.

The journal has all it needs at this point, as does IOP, and I'm not going to do everyone's work for them for free – there are far more invalid non-climate articles beyond the 17 I listed, and more forum discussions to illustrate the false claims of rater blindness and independence. Don't assume I exhausted the evidence from the discussion forums – I didn't. Nothing I've done here has been exhaustive – there is more. My attitude has always been that I'm providing more than enough, and the relevant authorities can do the rest on their own. The issues below tend to be broader.

It should be clear that a survey of what the general public knows or thinks about climate science is not scientific evidence of anthropogenic warming. It doesn't matter what the results of such a survey are -- it has nothing to do with the scientific evidence for AGW. It's not a climate paper. A survey of people's cooking stove use and why they don't like the new, improved cooking stoves, is not scientific evidence of anthropogenic warming. It's not a climate paper. An investigation of the psychology of personal motivation for adaptation or mitigation is not evidence of anthropogenic warming. It's not a climate paper.

This also makes us throw out the claim that 97% of the authors of the papers that were counted as taking a position said that their papers endorsed the consensus. That claim is vacated until we find out how many of those authors were authors of social science, psychology, marketing, and public survey papers like the above. It would be amazing that the authors of such papers responded and said that their papers counted as endorsement, but at this point all bets are off. (Only 14% of the surveyed authors responded at all, making the 97% figure difficult to take at face value anyway. But the inclusion of non-climate science papers throws it out.)

Note that the authors are still misrepresenting their 97% figure as consisting of "published climate papers with a position on human-caused global warming" in their promotional website. Similarly, for an upcoming event, they claim "that among relevant climate papers, 97% endorsed the consensus that humans were causing global warming." Most egregiously, in a bizarre journal submission, Cook and a different mix of coauthors cited Cook et al as having "found a 97% consensus in the peer-reviewed climate science literature." Clearly, this is false. There's no way we can call the above-listed papers on TV coverage, solar panel adoption, ER visits, psychology, and public opinion "relevant climate papers", and certainly not "peer-reviewed climate science." Don't let these people get away with such behavior – call them out. Ask them how psychology papers and papers about TV coverage and atomic layer deposition can be "relevant climate papers". Many of the random papers they counted as endorsement don't take any position on climate change -- they just mention it. Raise your hand at events, notify journalists, etc. Make them defend what they did. Hopefully it will be retracted soon, but until then, shine the light on them. For one thing, Cook should now have to disclose how many psychology and other irrelevant papers were included. Well, this is pointless given a simultaneously and multiply fraudulent and invalid paper. ERL and IOP are obligated to simply retract it.

For a glimpse of the thinking behind their inclusion of social science papers, see their online discussion here, where they discuss how to rate a psychology paper about white males and "denial" (McCright & Dunlap, 2011). Yes, they're seriously discussing how to rate a psychology paper about white males. These people are deeply, deeply confused. The world thought they were talking about climate science. Most of the raters wanted to count it as endorsement (unlike the psychology papers listed above, the white males study didn't make the cut, for unknown reasons.)

Some raters said it should count as a "mitigation" paper...

"I have classified this kind of papers as mitigation as I think that's mostly what they are related to (climate denial and public opinion are preventing mitigation)."

"I have classified many social science abstracts as mitigation; often they are studying how to motivate people to act to slow AGW."

Cook says "Second, I've been rating social science papers about climate change as Methods."

As a reminder, in their paper they tell us that "Social science, education, research about people's views on climate" were classified as Not Climate Related (see their Table 1.)

Cook elaborates on the forum: "I think either methods or mitigation but not "not climate related" if it's social science about climate. But I don't think it's mitigation. I only rate mitigation if it's about lowering emissions. There is no specific link between emissions & social science studies. So I think methods."

Methods? In what sense would a psychology paper about white males be a methods paper in the context of a study on the scientific consensus on anthropogenic climate change? Methods? What is the method? (The study's authors collected no data – they dug into Gallup poll data, zeroed in on the white male conservatives, and relabeled them deniers and "denialists".)

In their paper, they describe their Methods category as "Focus on measurements and modeling methods, or basic climate science not included in the other categories." They offer this example: "This paper focuses on automating the task of estimating Polar ice thickness from airborne radar data. . . "

We might also attend to the date of their white males discussion: March 22, 2012. This was a full month into the ratings, and most had been completed by then. Yet still people were flagrantly breaking the independence they explicitly claimed in their methods section.

We might also attend to the fact that the poster in that forum, who asked how they should rate the white males paper, provided a link to the full paper, once again shattering blindness to the authors. This suggests that the method these people described in their paper was never taken seriously and never enforced.

Valid Methods

I think for some people with a background in physical sciences, or other fields, it might not be obvious why some of these methods matter, so I'll elaborate a bit. I mean people at a journal like ERL, the professional staff at IOP, etc. who don't normally deal with subjective rater studies, or any social science method. (Note that I'm not aware of any study in history that used laypeople to read and interpret scientific abstracts, much less political activists. This method might very well be unprecedented.) I was puzzled that ERL editor Daniel Kammen had no substantive response at all to any of these issues, gave no sign that false claims about methods were alarming to him, or that raters sniffing out authors like Lindzen was alarming, and seemed unaware that blindness and independence are crucial to this kind of study, even absent the false claims of the same.

First, note that our view of the importance of blindness and independence in subjective rating studies won't change the fact that they falsely claimed to have used those methods, which is normally understood as fraud. If we're going to allow people to falsely describe their methods, we should be sure to turn off the lights before locking up. One commenter defended them by quoting this passage from the paper's discussion section: "While criteria for determining ratings were defined prior to the rating period, some clarifications and amendments were required as specific situations presented themselves." This doesn't seem to work as a defense, since it doesn't remotely suggest that they broke blindness, independence, or anonymity, and only refers to the rating criteria, which can be clarified and amended without breaking blindness, independence, or anonymity. They were quite explicit about their methods.

But what about these methods? Why do we care about independence or blindness?

The need for blindness is probably well-understood across fields, and features prominently in both social science and biomedical research (where it often refers to blindness to condition.) As I mentioned, the participants here were the authors of the papers that were rated (the abstracts.) The subjective raters need to be blind to the participants' identities in order to avoid bias. Bias from lack of blindness is a pervasive issue, from the classic studies of bias against female violinists (which was eliminated by having them perform behind a curtain, thus making the hiring listener blind to gender -- see Goldin and Rouse (1997) for a review of impacts), to any study where raters are evaluating authored works. For example, if we conducted a study evaluating the clarity of news reporting on some issue using subjective human raters/readers, the raters would most definitely have to be blind to the sources of the articles they were rating – e.g. they could not know that an article was from the New York Times or the Toledo Blade, nor could they know that the journalist was Tasheka Johnson or James Brubaker III. Without blindness the study wouldn't be valid, and shouldn't be published.

This issue is somewhat comical given the inherent bias of political activists deciding what abstracts mean for their cause, since the bias there likely overwhelms any bias-prevention from blindness to authors. It would be like Heritage staffers rating articles they knew were authored by Ezra Klein, or DNC staffers rating articles they knew were penned by George Will (it's hard to imagine what the point of such studies would be, so bear with me.) The raters here amply illustrated the potential for such bias – remember, they sniffed out Lindzen. I assume I don't have to explain more about the importance of rater blindness to authors. We can't possibly use or trust a study based on ratings of authored works where raters weren't blind to the authors, even if we were in a fugue state where a paper-counting study using lay political activists as raters of scientific abstracts made sense to us.

Independence: It's important for raters to be independent so that they don't have systematic biases, or measurement error coming from the same place. Again, the nature of the raters here makes this discussion somewhat strange – I can't drive home enough how absurd all of this is. This should never have been published in any universe where science has been formalized. But ignoring the raters' invalidating biases, we would need independence for reliable data. This is rooted in the need for multiple observations/ratings. It's the same reason we need several items on a personality scale, instead of just one. (There is interesting work on the predictive validity of one-item scales of self-esteem, and recently, narcissism, but normally one item won't do the job.) It should be intuitive why we wouldn't want just one rater on any kind of subjective rating study. We want several, and the reason is so we have reliable data. But there's no point if they're not independent. Remember, the point is reliability and reducing the risk of systematic bias or measurement error.

So what does it mean if people who are supposed to be independent raters go to a rating forum that should not exist, and ask "Hey guys, how should I rate this one?" (and also "What if you knew the author was our sworn enemy?"...). What really triggers the problem is when other raters reply and chime in, which many did. It means that raters' ratings are now contaminated by the views of other raters. It means you don't have independent observations anymore, and have exposed yourself to systematic bias (in the form of persuasive raters on the forum, among other things). It means you needn't have bothered with multiple raters, and that your study is invalid.

It also means that you can't even measure reliability anymore. Subjective rating studies always measure and report interrater reliability (or interrater agreement) – there are formal statistics for this, formal metrics for measuring it validly. This study was notable for not reporting it – I've never encountered a rating study that did not report the reliability of its raters (it's not measured as percentages, which they strangely tried to do, but requires formal coefficients that account for sources of variance.) The right estimate here would probably be Krippendorff's alpha, but again, it's moot, on two counts – the method, and the fraud. We would expect political activists to have high agreement in rating something pertinent to their activism – they're all the same, so to speak, all activists on that issue, and with the same basic position and aims. It wouldn't tell us much if their biased ratings were reliable, that they tended to agree with each other. But here, they shattered independence on their forum, so interrater reliability statistics would be invalid anyway (their reported percentages of disagreement, besides being crude, are invalid since the raters weren't independent -- they were discussing their ratings of specific papers in the forum, contaminating their ratings and making the percentage of agreement or disagreement meaningless.)
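The gap between raw percent agreement and a chance-corrected coefficient is easy to demonstrate. Here's a minimal sketch with invented ratings, using Cohen's kappa – a simpler two-rater relative of the Krippendorff's alpha mentioned above – to show how two raters can "agree" 80% of the time while doing no better than chance:

```python
# Sketch with hypothetical ratings: why raw percent agreement overstates
# reliability compared with a chance-corrected coefficient (Cohen's kappa).
from collections import Counter

def percent_agreement(a, b):
    # Fraction of items where the two raters gave the same label.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    n = len(a)
    po = percent_agreement(a, b)  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if each rater labeled at random, keeping their
    # own base rates for each category.
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# Two raters who both label almost everything "endorse":
r1 = ["endorse"] * 9 + ["reject"]
r2 = ["endorse"] * 8 + ["reject", "endorse"]

raw = percent_agreement(r1, r2)   # 0.8 -- looks impressive
kappa = cohens_kappa(r1, r2)      # negative -- worse than chance
```

Because both raters label nearly everything "endorse", the expected chance agreement (0.82 here) exceeds the observed agreement (0.8), so kappa comes out negative. That is exactly why rating studies report formal coefficients rather than the raw percentages Cook et al. offered.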

Another striking feature of the discussions is that they had no inkling of the proper methods for such a study. No one seemed to take the importance of blindness to author seriously, or independence (except for Curtis, who departed for other reasons.) No one seemed to have heard of interrater reliability. They discussed how to measure agreement from a zero-knowledge starting point, brainstorming what the math might be – no one mentions, at any point, the fact that all sorts of statistics for this exist, and are necessary. In one of the quotes above, a rater said: "Disagreements arise because neither the criteria nor the abstracts can be 100% precise." There was no apparent awareness that one source of variance, of disagreement, in ratings was actual disagreement between the raters – different judgments.

They applied their "consensus" mentality more broadly than I expected, applying it to their own study – they actually thought they needed to arrive at consensus on all the abstracts, that raters who disagreed had to reconsider and hopefully come to agreement. They didn't understand the meaning or importance of independent ratings in multiple ways, and destroyed their ability to assess interrater reliability from the very beginning, by destroying independence in their forums from day one. Then they had a third person, presumably Cook, resolve any remaining disagreements. None of these people were qualified for these tasks, but I was struck by the literal desire for consensus when they discussed this issue, and the belief that the only sources of variance in ratings were the criteria and abstracts. It makes me wonder if they embrace some kind of epistemology where substantive disagreement literally doesn't exist, where consensus is always possible and desirable. Having no account of genuine disagreement would neatly explain their worldview that anyone who disagrees with them is a "disinformer" or "denier". The post-mortem on this hopefully brief period of history during which science was reduced to hewing to a purported consensus vs. denial will be incredibly interesting. What's most offensive about this is how plainly dimwitted it is, how much we'd have to give up to accept a joint epistemology/ethics of consensus-seeking/consensus-obedience.


Phrases: The Mysticism of Consensus

Getting back to the raters' comments, rater Dana Nuccitelli also said the white males paper should count as "methods", and I was struck that he wrote "It's borderline implicit endorsement though, with all the 'climate change denial' phrases.  If you read the paper I'd bet it would be an explicit endorsement."

He thinks that if a psychology paper uses the phrase "climate change denial", it could count as scientific endorsement of anthropogenic climate change. We should linger on that. This is a profound misunderstanding of what counts as scientific evidence of AGW. The implied epistemology there is, well, I don't know that it has a name. Maybe it's a view that reality is based on belief, anyone's belief (except for the beliefs of skeptics), perhaps a misreading of Kuhn.

Even if we thought reality was best understood via consensus, a physical climate reality is not going to be created by consensus, and the only consensus we'd care about would be that of climate scientists. That leftist sociologists pepper their paper with the phrase "climate change denial" does not add to our confidence level about AGW, or feed into a valid scientific consensus. It's not evidence of anything but the unscientific assertions of two American sociologists and a failure of peer review. The paper was invalid – they labeled people as "deniers" for not being worried enough about global warming, and for rating the media coverage as overhyped. The authors did nothing to validate this "denial" construct as a psychological reality (to my knowledge, no one has.) Citing a couple of other invalid studies that invalidly invoked "denial" won't do it, and it highlights the perils and circularity of "consensus" in biased fields (although there's no consensus in research psychology about a construct of denial – only a handful of people are irresponsible enough to use it.) They presented no evidence that a rational person must worry a given amount about distant future changes to earth's climate, or that not worrying a given amount is evidence of a process of "denial". They offered no evidence that the appropriateness of the media's coverage of an issue is a descriptive fact that can be denied, rather than a subjective and complex, value-laden judgment, or that it is an established and accessible fact that the media does not overhype AGW, and thus that people who say that it does are simply wrong as a matter of scientific fact, and also deniers.

No one who wants to count a paper about white males as evidence has any business being a rater in a subjective rating-based study of the climate science consensus. There are lots more of these alarming online discussions among raters. This was a disaster. This was an invalid study from the very outset, and with this much clear evidence of fraud, it's a mystery why it wasn't retracted already. We have explicit evidence here that these people had no idea what they were doing, were largely motivated by their ideology, and should probably submit to drug testing. These people had neither the integrity nor the ability to be raters on a scientific study of a scientific consensus on their pet political cause, and need to be kept as far as possible from scientific journals. They simply don't understand rudimentary epistemology, or what counts as evidence of anthropogenic climate change.

We need to keep in mind for the future that if people don't understand basic epistemology, or what counts as evidence, they won't be able to do a valid consensus study – not even a survey of scientists, because the construction of the questions and response scales will require such knowledge, basic epistemological competence, and ideally some familiarity with the research on expert opinion and how to measure and aggregate it. Anyone who conducts a scientific consensus study needs to think carefully about epistemology, needs to be comfortable with it, able to understand concepts of confidence, the nature of different kinds of evidence and claims, and the terms scientists are most comfortable using to describe their claims (and why.)

Let's retrace our steps...

The above papers have nothing to do, epistemologically, with the scientific consensus on global warming. The consensus only pertains to climate science, to those scientists who actually study and investigate climate. To include those papers was either a ridiculous error or fraud. I didn't expect this -- I expected general bias in rating climate papers. I never imagined they'd include surveys of the public, psychology papers, and marketing studies. In retrospect, this was entirely predictable given that the researchers are a bunch of militant anti-science political activists.

As I said, I found those papers in ten minutes with their database. I'm not willing to invest a lot of time with their data. The reason is what I've argued before -- a method like theirs is invalid and perverts the burden of proof. The world is never going to be interested in a study based on militant political activists reading scientific abstracts and deciding what they mean with respect to the issue that is the focus of their activism. That method is absurd on its face. We can't do anything with such studies, and no one should be burdened with going through all their ratings and identifying the bias. We can't trust a study where the researchers are political partisans who have placed themselves in the position to generate the data that would serve the political goals -- the position of being subjective raters of something as complex and malleable as a scientific abstract. That's not how science is normally done. I've never heard of researchers placing themselves in the position of subjectively rating complex text, articles and the like, on an issue on which they happen to be activists.

I don't think I spelled this out before because I thought it was obvious: There is enormous potential for bias in such a method, far more potential than in a normal scientific study where the researchers are collecting data, not creating it. Having human beings read a complicated passage, a short essay, and decide what it means, is already a very subjective and potentially messy method. It's special, and requires special training and guidelines, along with special analyses and statistics. The Cook paper is the only subjective rating study I've ever seen that did not report any of the statistics required of such studies. It was amazing -- they never reported interrater reliability. I can't imagine a rater study that doesn't report the reliability of the raters... This study is a teachable moment, a future textbook example of scientific scams.

But having humans read a scientific abstract and decide what it means is even more challenging than a normal rater study. For one thing, it's very complicated, and requires expert knowledge. And in this case, the researchers/raters were unqualified. Most people aren't going to be able to read the abstracts from any given scientific field and understand them. Climate science is no different from any other field in this respect. The raters here included luggage entrepreneurs, random bloggers, and an anonymous logician known only by his nom de guerre, "logicman", among others. Normally, we would immediately stop and ask how in the hell these people are qualified to read and rate climate science abstracts, or in logicman's case, who these people are. To illustrate my point,  here's a sample climate abstract, from LeGrande and Schmidt (2006):

We present a new 3-dimensional 1° × 1° gridded data set for the annual mean seawater oxygen isotope ratio (δ18O) to use in oceanographic and paleoceanographic applications. It is constructed from a large set of observations made over the last 50 years combined with estimates from regional δ18O to salinity relationships in areas of sparse data. We use ocean fronts and water mass tracer concentrations to help define distinct water masses over which consistent local relationships are valid. The resulting data set compares well to the GEOSECS data (where available); however, in certain regions, particularly where sea ice is present, significant seasonality may bias the results. As an example application of this data set, we use the resulting surface δ18O as a boundary condition for isotope-enabled GISS ModelE to yield a more realistic comparison to the isotopic composition of precipitation data, thus quantifying the ‘source effect’ of δ18O on the isotopic composition of precipitation.

Would a smart, educated layperson understand what this means? How? Would they know what the GISS ModelE is? What GEOSECS data and 3-dimensional 1° × 1° gridded data are? Even scientists in other fields wouldn't know what this means, unless they did a lot of reading. This study was ridiculous on these grounds alone. The burden lies with the authors trotting out such a questionable method to first establish that their raters had the requisite knowledge and qualifications to rate climate abstracts. No one should ever publish a study based on laypeople rating scientific abstracts without clear evidence that they're qualified. This is technically ad hominem, but I think ad hominem is a wise fallacy in some specific circumstances, like when it's an accurate probability estimate based on known base rates or reasonable inferences about a person's knowledge, honesty, etc. (it's going to be a technical fallacy in all cases because it's based on exogenous evidence, evidence not in the premises, but it's not always a fallacy in the popular usage of "fallacy" being an invalid or unreliable method of reasoning, an issue I explore in my book on valid reasoning.) Wanting a doctor to have gone to medical school is ad hominem (if we dismiss or downrate the diagnoses of those who haven't gone to medical school.) I'm perfectly fine with laypeople criticizing scientists, and I don't think such criticism should be dismissed out of hand (which I call the Insider Fallacy, an unwise species of ad hominem), but if you give me a study where laypeople rated thousands of abstracts, I'm not going to accept the burden of proving that their ratings were bad, one by one. Thousands of ratings is a much different situation than one critical essay by a layperson, which we can just read and evaluate. With thousands of ratings, I think the burden has to be on the lay researchers to establish that they were qualified.

When we add the fact that the raters were partisan political activists motivated to deliver a particular result from the study, we go home. The normal response might be several minutes of cognitive paralysis over the absurdity of such a method, of such a "study". ERL should be ashamed of what they did here. This is a disgrace. Political activists deciding what abstracts mean on the issue of their activism? Have we decided to cancel science? Are we being serious? It's 2014. We have decades and mountains of research on bias and motivated reasoning, scientific research. The idea of humans reading and rating abstracts on their implications for their political cause sparks multiple, loud, shrieking alarms. A decently bright teenager would be able to identify this method as absurd. This really isn't complicated. It shouldn't be happening in the modern world, or in modern scientific journals. It would pervert the burden if we had to run around validating the divinations of motley teams of lay political activists who designed "studies" where they appointed themselves to read science abstracts and decide what they mean for their political cause. It would pervert the burden if teams of politically-motivated scientists appointed themselves as subjective raters. This method is incompatible with long established and basic norms of scientific objectivity and validity.

I assume this article will be retracted. We need to be able to distinguish our political selves from our scientific selves, and politics should never dislodge our transcendent commitments to integrity, scientific rigor, valid methods – or our basic posture against fraud and in favor of using our brains.

One bat, two bats, three bats! Sesame Street Consensus Counting

Speaking of using our brains, I think we might also want to think about why we would ever count papers, and take a percentage, as a measure of the consensus on some issue in a scientific field. There are several obvious issues there that we'd need to address first. And on this particular topic, it doesn't address the arguments skeptics make, e.g. publication bias. The publication bias argument is unruffled by counting publications. If we care about engaging with or refuting skeptics, this method won't do. But again, there are several obvious issues with counting papers as a method (which is layered on top of the issues with having humans read abstracts and decide what they mean with respect to a contentious environmental issue.) Like Soylent Green, a consensus is made of people. Papers have no consensus value apart from the people who wrote them. There is enormous epistemic duplication in counting papers – each paper will not have the same true epistemic weight. One reason is that papers by the same authors are often partly duplicative of their earlier work. If Joe said Xa in Paper 1, said Xa in Paper 2, and Xa in Paper 3 (among other things), Joe now has three "consensus" votes. Pedro says Xa in Paper 1, Xb in P2, and Xc in P3. I'll unpack that later. Other duplication will be across authors.
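The Joe-and-Pedro example above can be made concrete in a few lines. This is a minimal sketch with hypothetical data (not drawn from the actual Cook database), just to show how per-paper counting triples Joe's vote for a single claim while per-person counting doesn't:

```python
# Hypothetical illustration: counting papers vs. counting people.
# Joe restates the same claim (Xa) in three papers; Pedro makes three
# distinct claims. Paper-counting hands Joe three "consensus" votes
# for one position.
papers = [
    ("Joe",   "Xa"), ("Joe",   "Xa"), ("Joe",   "Xa"),
    ("Pedro", "Xa"), ("Pedro", "Xb"), ("Pedro", "Xc"),
]

# Votes for Xa if we count papers:
paper_votes = sum(1 for _, claim in papers if claim == "Xa")

# Votes for Xa if we count people:
author_votes = len({author for author, claim in papers if claim == "Xa"})

print(paper_votes)   # 4 -- Joe's repetition counts three times
print(author_votes)  # 2 -- Joe and Pedro, once each
```

The gap between the two numbers is the epistemic duplication: two people hold the claim, but paper-counting reports four votes.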

A huge source of duplication is when related fields import AGW from climate science, and then proceed to work on some tie-in. These are often "mitigation" papers, and they can't be counted without explaining how they are commensurable with climate papers. A critical assumption of the simple counting method is that all these papers are commensurable. This can't possibly be true – it's false. Papers about atomic layer deposition or the ITER are not commensurable with a climate paper simply because they mention climate. This would require a mystical epistemology of incantation. Counting such exogenous mitigation papers would require a rich account of what consensus means, how it's expressed, and what it means when people who don't study climate nod their heads about AGW in their grant applications and articles. We'd need an account of what that represents. What knowledge are they basing their endorsement on? It's very likely from available evidence that they are simply going along with what they believe is a consensus in climate science. It's unlikely most of these researchers have read any consensus papers. It's more likely they just believe there's a consensus, because that's the common view and expressing doubt or even asking questions can get them smeared as "deniers", given the barbarism of the contemporary milieu.

If they're just assuming there's a consensus, then their endorsement doesn't count, unless you can present compelling evidence that the processes by which people in other fields ascertain a consensus in climate science are a remarkably accurate form of judgment – that the cues they're attending to, perhaps nonconsciously, are reliable cues of consensus, and that exaggerated consensuses do not convey those cues. In other words, they'd intuitively sniff out an exaggerated consensus. There are a lot of things that would have to be true here for this kind of model to work out. For example, it couldn't be the case that linking your work to climate change helps you get grants or get published, or at least the implicit processes at work in these people would have to be such that these incentives would be overwhelmed by their accurate detection of a reliable consensus in other fields. But I'm being generous here, just hinting in the direction of what we'd need to count most mitigation papers. Determining that non-climate researchers are accurate in consensus detection wouldn't be enough. Being accurate at detection wouldn't make their membrane engineering papers commensurable with climate science papers. You'd need a comprehensive theory of secondary or auxiliary consensus, and in the best case, it might give you a weighting scheme that gave such papers non-zero weights, but much less than the weights for climate science attribution papers. There's no way to get to full commensurability without a revolutionary discovery about the nature of consensus, but remember that consensus is a probabilistic weight that we admit into our assessment of reality. Any theory of consensus needs to address the reliability of its congruence with reality. Without that, we have nothing.

Third, the practice of "least publishable units" will undermine the Sesame Street counting method, and could destroy it all by itself. You can run that model in your head. (One way out here is to show that people who engage in this practice are more likely to be right on the issue, and that the degree of their accuracy advantage tracks to their mean pub count advantage over those less prone to LPU strategies. Not promising.)

We need careful work by epistemologists here, but ultimately I think none of this will work out. Counting papers, even with weighting and fancy algorithms, is a poor proxy for consensus, which is made of people. People can talk, so you might as well just ask them -- and ask them good questions, not the same binary question. Papers represent something different, and there are so many variables that will drive and shape patterns of publication. Papers should be the units of meta-analysis for substantive questions of "What do we know? What is true, and not true?" Meta-analysis requires careful evaluation of the studies, real substantive thinking and judgment criteria – not just counting and numerology. Rating abstracts looks remarkably weak compared to what a meta-analysis does – why would we choose the less rigorous option?

Consensus is a different question. It's not "What is true?" It's "What proportion of experts think this is true?" It's one of several ways of approaching "What is true?", a special and complex type of probabilistic evidence. It won't be measured by counting papers – those aren't the units of consensus. And counting 15 and 20 year old papers is like finding out someone's political party by looking at their voting records from Clinton-Dole, which would be odd, and less accurate, when you can just call them and ask them "'sup?". All sorts of questions could be answered by carefully reviewing literature, possibly some kind of consensus questions, mostly past consensus or trajectories. But to get a present-tense consensus, you would need to do a lot of work to set it up -- it couldn't be anything as simple as counting papers, since consensus is people.

Interestingly, by simply counting papers like Count von Count, they actually applied weights. None of this was specified or seemingly intended. First, their odd search did something. It's unclear exactly what, and someone will need to unpack it and see how that search stripped Lindzen's votes by excluding everything he's published since 1997. But the basic weights are likely to be: those who publish more, especially those whose articles are counted in the Cook search; those who participate in commentaries; older researchers; English-language researchers; those in non-English countries who get good translations vs those who don't; reviewers -- a paper counting study gives a lot of vicarious votes to gatekeepers; journals -- same gatekeeper effect; people who thrive in the politics of science, the ones who get positions in scientific bodies, who gladhand, social climbers, et al – that should make it easier to be published; researchers with more graduate students get more votes, ceteris paribus (this is all ceteris paribus, or ceteris paribus given the weird search method); men over women, probably, given the culture of climate science being like most sciences; researchers who take positions in their abstracts vs those who don't or who tend to have it in the body only; researchers who write clear abstracts vs those who don't; researchers who aren't hated by the Cook team, since they clearly hated Lindzen and savaged a bunch of skeptical scientists on their bizarre website; probably height too, since it predicts success in various fields, and here that would mean publications (I'm being cute on that one, but paper counting will have all sorts of weights, predictors of admission, and I won't be shocked if height was one.) Those are just some of the likely weights, in various directions, enforced by crude counting.

(Some of these predictors might correlate with some concept of credibility, e.g. simple number of papers published. That's fine. If people want to make that case, make it. It will require lots of work, and it's not going to do anything to our assessment of the Cook study.)

There's also a profound structural bias and error in measuring consensus on an issue by counting only those who take a position on it. The framing here demands that scientists prove a negative -- it counts papers that presumably endorse AGW, and the only things that weigh against them are papers that dispute AGW. Papers that take no position (the vast majority in this case) are ignored. This is absurd. It violates the longstanding evidentiary framework of science, our method of hypothesis testing, and really, the culture of science. Such a method is invalid on its face, because it assumes that at any arbitrary timepoint, if the number who support a claim (Y) exceeds those who oppose it (N), this counts as compelling evidence for the claim, irrespective of the number who take no decisive position on the claim (S). That's a massive epistemic information loss -- there's a lot of what we might call epistemic weight in those who take no position, those who don't raise their hands or hold press conferences, a lot of possible reasons and evidence they might be attending to, in what their neutrality or silence represents. This will be very context-sensitive, will vary depending on the type of claim, the nature of the field, and so forth. We should be very worried when S ≫ (Y+N), which it is here. Note that this assumes that counting papers is valid to begin with -- as I said, there are several obvious issues with counting papers, and they'd have to be dealt with before we even get to this point. Validly mapping S, Y, and N, or something like them, to papers instead of people requires some work, and we won't get that kind of work from Cook and company.
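The information loss described above is easy to quantify. Here's a minimal sketch with made-up numbers (illustrative only, not the actual Cook tallies) showing how a "consensus" percentage computed over position-takers alone diverges from the share of all counted papers when S ≫ (Y+N):

```python
# Illustrative numbers only -- NOT the actual Cook et al. tallies.
# Y = papers endorsing, N = papers rejecting, S = papers taking no position.
Y, N, S = 950, 30, 8000

# The percentage as computed by counting only position-takers:
pct_position_takers = 100 * Y / (Y + N)

# The share of ALL counted papers that actively endorse:
pct_all = 100 * Y / (Y + N + S)

print(round(pct_position_takers, 1))  # 96.9 -- looks like near-unanimity
print(round(pct_all, 1))              # 10.6 -- endorsing papers are a small minority
```

Both figures come from the same data; the first simply discards the largest category. Whatever the neutral papers represent epistemically, a method that makes them arithmetically invisible has assumed the answer to that question in advance.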

And as I mentioned much earlier, mitigation papers have no obvious rejection counterparts – there is no way for social science papers to count as rejection, for a survey of the general public that is not about AGW to count as rejection, for a report on solar panels or gasoline consumption in Nigeria to count as rejection, for an engineering paper that is not about extracting hydrogen from seawater to count as rejection, for a physicist to not call for funding for a tokamak reactor and count as rejection, for a study of Taco Bell commercials, instead of TV coverage of AGW, to count as rejection, and so on and so forth... You can run the math with the starting conditions and see how counting mitigation papers rigs the results. It will be similar for impacts. Ultimately, this inflates the results by conflating interest with endorsement.
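Running that math takes a few lines. This is a hypothetical sketch (the counts are invented for illustration, not taken from the study): a pool of attribution papers that can fall either way, plus a pool of mitigation papers that, by construction, can only register as endorsement:

```python
# Hypothetical starting conditions, for illustration only.
# Attribution papers can endorse or reject; mitigation papers can only
# endorse, because no rejection counterpart exists for them (there is
# no such thing as a solar-panel paper that "rejects" AGW).
attr_endorse, attr_reject = 400, 60
mitigation_endorse = 600  # one-way traffic: endorsement only

without_mitigation = 100 * attr_endorse / (attr_endorse + attr_reject)
with_mitigation = 100 * (attr_endorse + mitigation_endorse) / (
    attr_endorse + attr_reject + mitigation_endorse
)

print(round(without_mitigation, 1))  # 87.0
print(round(with_mitigation, 1))     # 94.3 -- inflated by papers that could never reject
```

The percentage rises purely because a category with no rejection counterpart was added to the numerator and denominator; no one's views changed. The bigger the mitigation pool, the closer the figure creeps toward 100, regardless of what attribution researchers think.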

That method also selects for activism. You can model it in your head -- just iterate it a few times, on various issues (yes, I know I keep saying you can model things in your head – I believe it's true.) For example, it's quite plausible that using this absurd method we could report a consensus on intelligent design. Just count the papers that support it (for fun, count religious studies and psychology papers.) Then count the ones that reject it. Ignore all the papers on evolutionary biology that don't talk about ID. Voila! We could easily have a consensus, since I doubt many evolutionary scientists ever talk about ID in journals. (I have no idea if the numbers would come out like this for ID, but it's plausible, and you get my point. The numbers will be very malleable in any case.) At this point, I'm frustrated... This is all too dumb. It sounds mean, I know, but I'm just alarmed by how dumb all of this is. There are too many dumb things here: the use of lay political activists as subjective raters of science abstracts for their implications for their cause, the egregious fraud in falsely claiming to use the methods a subjective rater study requires while violating them with abandon, the inclusion of an alarming number of papers from the early 1990s, the exclusion of a massive number of relevant papers from this century, the inclusion of psychology and marketing papers, windmill cost studies, analyses of TV coverage, surveys of the general public, and magazine articles, the rigged categorization scheme in concert with the fallacy of demanding proof of a negative, the structural asymmetry of including mitigation papers, the calculation of a consensus by simply summing together a bunch of papers that make passing remarks about climate change, along with a few climate science papers (but not Lindzen's), the raters sniffing out skeptic authors and announcing them to all the other raters, raters at home able to google papers and reveal their authors at will, the unprecedented failure of a subjective rater study to report interrater reliability...

I need -- hopefully we need -- science to not be dumb, journals to not be dumb. I'm anti-dumb. I expect people to have a basic grasp of logic, epistemology, and to know the gist of decades of research on bias, or even just what wise adults understand about bias. This whole paper is just too dumb, even if it weren't a fraud case. We've been asleep at the wheel – this was a failure of peer-review. And the way people have strained to defend this paper is jawdropping. This paper never deserved any defense. The issues with it were too obvious and should not have been ignored, and now the fraud should definitely not be ignored. The issues were also very predictable given the starting conditions: the nature of the raters and the design of a study that empowered them to create the results they wanted. This was a waste of everyone's time. We don't need to be this dumb. Subjective rating studies that use political activists with profound conflicts of interest regarding the outcome, and the power to create said outcome, are bad jokes. We need to be serious. Science has always been associated with intelligence – people think we're smart – but all this junk, these ridiculous junk studies being published and cited, makes science a domain with lower standards of data collection than the market research Jiffy Lube might commission. Science needs to be serious – it can't be this dumb or this junky, and journals and bodies that publish this junk, especially fraudulent junk, need to be penalized. This is not acceptable.

The average layperson can identify this stuff as stupid and invalid, and many of them have – we can't keep doing this, tolerating this. The irony of the barbaric "denier! denier!" epistemology is that the targets of the smears are often smarter than the false ambassadors of science who smear them. It was a terrible mistake for these editors, bodies, and universities to ignore the laypeople who pointed out some of the flaws with this work. The laypeople were right, and the scientific authorities were wrong. This cannot be allowed to become a pattern.

There are no results

Some commenters have tried to apply arithmetic based on the ever-growing list of non-climate papers up top (18 so far) to claim that they only slightly change the results. That's amazing. They've missed the point entirely, actually several points. There are no results. There are no results to change. We never had a 97%. Meaningful results require valid methods. This is what science is about – if you're not taking the validity of methods seriously, you're not taking science seriously. This is a basic principle that's already been explained thousands of times in various contexts.

1. When you have a study that used lay political activists to subjectively rate scientific abstracts on the issue of their activism, empowered to create the results they wanted, there are no results. There is nothing we can do with such an invalid study. To my knowledge, no one has ever done this before, and we'd never accept this as science on any other topic, nor should we here (we don't know what Oreskes did in 2004, because that paper is one page long, and leaves out the details.)

2. When you have a study that falsely describes its methods, which is normally understood as fraud, there are no results. The study goes away. We'd need a new definition of fraud, or extraordinary exonerating circumstances, for it to be retained. They didn't use the methods they claimed – this means we don't know what they did with respect to each rating, how often they communicated with each other about their ratings, how often they shared papers with each other, exposing the authors, etc. For deterrence purposes alone, I think we'd want to retract any study that lies about its methods, and I think that's what we normally do in science. It cannot become acceptable to lie about methods.

3. When you have a study where subjective raters tasked with rating authored works were not blind to authors and were not independent in their ratings, there are no results. A subjective rater study of this sort is not valid without blindness and independence. You certainly can't generate percentages of agreement or disagreement when ratings were not independent, and there are no valid ratings to begin with without blindness to authors. (This is all magnified by point 1 and 2.)

4. When you have a study that counted an unknown number of social science, psychology, and survey studies as scientific evidence of consensus on anthropogenic warming, there are no results – we no longer know what's in the data. None of the results, none of the percentages, are valid in that case, and this voids the second part of the study that collected authors' self-ratings of the papers (with a 14% response rate), because now we don't know how many of those authors were psychologists, social scientists, pollsters, engineers, etc. None of those papers are relevant to this study, and like the false claims about blindness and independence, the authors falsely claimed social science and surveys were excluded. When someone reveals even a couple of such papers in their results, from a few minutes with their database, the proper response is a complete inventory and audit by someone. It's pointless to just recalculate the results, subtracting only that handful of papers, as though we know what's in the data.

5. When you have a study of the consensus that includes mitigation and impacts categories, which lack natural disconfirming counterparts, especially in the case of a large number of engineering papers that were counted as endorsement, this creates a systematic bias, and again, you don't have results given that bias. That issue would have to be dealt with before computing any percentage, and issues 1, 2, and 3 should each void the study by themselves.

6. When you have a study that just counts papers, there's no point touting any percentage, any results, until someone credibly explains why we're counting papers. The percentage has no meaning unless counting papers has meaning. It's not at all obvious why we're counting papers and how this represents consensus. The strange search results would have to be explained, like why there's nothing by Dick Lindzen since 1997. This all requires careful thought, and we don't know what the right answers are here until we think it through. It's not going to be automatically valid to run a sketchy search, add up papers like the Count on Sesame Street, and go home. All the de facto weights that such a method applies need to be addressed (see above). And maybe this sounds wild and crazy, but you might want to define consensus before you start adding papers and calling it consensus. They never define or discuss any of this. It would be purely accidental for counting papers to represent a valid measure of consensus or something like it – there are so many potential biases and unintentional weights in such a method.

And all the other issues... Note that finding an error somewhere in this vast essay isn't going to salvage the study. Salvaging the study requires direct engagement with these issues, requires some sort of argument and evidence that rebuts these issues, and not just one of them. Attacking me won't do it. Saying this all needs to be in a journal to be credible, as the authors claimed, won't do it, especially since their paper was in a journal, so being in a journal doesn't seem to mean much. The claims here need to be addressed directly, and their failure to do so is extremely disappointing and telling. They don't dispute the evidence in the online forums, and anyone who is fluent in English can read their paper and see that they claimed to be blind and independent, that they claimed to exclude social science, can process the other issues, etc. We don't need a journal to tell us what's real – they're taking this epistemology of "consensus" too far. Reality isn't actually determined by what other people think, even journal reviewers – what others think is a crude heuristic useful in particular circumstances. Anyone can read this report and refer back to the paper. In any case, we don't normally handle fraud with journal submissions, and I'm extremely reluctant to legitimize this junk by addressing it in a journal – it should simply be retracted like every other fraud case, or invalidity case. We can't let postmodernism gash science. (Also, look at the Corrigendum in the current issue of ERL – those authors fixed a typo. The 97% authors are guilty of far more than a typo.)

[My fellow scientists, let's huddle up for a minute. What are we doing? What the hell are we doing? I'm mostly speaking to climate scientists, so the "we" is presumptuous. Is this really what you want? Do you want to coarsen science this much? Do we want to establish a scientific culture where scientists must take polar positions on some issue in the field? Do you want to tout a "consensus" that ignores all those who don't take a polar position? Do we want to import the fallacy of demanding that people prove a negative, a fallacy that we often point out on issues like evolution, creationism, religion, and so forth? Modern scientific culture has long lionized the sober, cautious scientist, and has had an aversion to polar positions, simplistic truths, and loyalty oaths. Do we mean to change that culture? Have we tired of it? Are we anti-Popper now? No one is required to be Popperian, but if we're replacing the old man, it should be an improvement, not a step back to the Inquisition. Do we want dumb people who have no idea what they're doing speaking for us? Are you fraud-friendly now, if it serves your talking points? When did we start having talking points?

In any case, what the hell are we doing? What exactly do we want science to be and represent? Do we want "science" to mean mockery and malice toward those who doubt a fresh and poorly documented consensus? Do we want to be featured in future textbooks, and not in a good way? When did we discover that rationality requires sworn belief in fresh theories and models that the presumed rational knower cannot himself validate? When did we discover that rationality requires belief in the rumor of a consensus of researchers in a young and dynamic field whose estimates are under constant revision, and whose predictions center on the distant future? (A rumor, operationally, since laypeople aren't expected to engage directly with the journal articles about the consensus.) Who discovered that rationality entails these commitments, or even argued thusly? Give me some cites, please. When did we discover that people who doubt, or only mildly embrace, the rumor of a consensus of researchers in a young and dynamic field whose estimates are under constant revision, and whose predictions center on distant future developments, are "deniers"? When did science become a church? When did we abandon epistemology? Again, what are we doing?]

Those climate scientists who defended this garbage upset me the most. What are you doing? On what planet would this kind of study be valid or clean? Are you unfamiliar with the nature of human bias? Is this about environmentalism, about being an environmentalist? Do you think being a staunch leftist or environmentalist is the default rational position, or isomorphic with being pro-science? Do you think that environmentalism and other leftist commitments are simply a set of descriptive facts, instead of an optional ideological framework and set of values? Do you understand the difference between 1) descriptive facts, and 2) values and ideological tenets? I'm trying to understand how you came to defend a study based on the divinations of lay political activists interpreting scientific abstracts. Those scientists who endorsed this study are obligated to openly and loudly retract their endorsement, unless you think you can overcome the points raised here and elsewhere. I really want to know what the hell you were thinking. We can't be this sloppy and biased in our read of studies just because they serve our political aims. The publication and promotion of a study this invalid and fraudulent will likely impact the future reception of valid studies of the climate science consensus. You might say that we should've hushed this up for that reason, that I should've remained silent, but that just takes us down another road with an interesting outcome.

Processes

As I said, I was puzzled that ERL editor Daniel Kammen did not respond to any of the issues I raised. I contacted him on June 5. For over a month, there was no reply from him, not a word; then he finally emerged to suggest that some of the issues I raised were addressed on the authors' website, but did not specify which issues he was referring to. To my knowledge, none of these issues are addressed on their website, and many are not even anticipated. He's had nothing to say since, has never had any substantive response, has not argued against any allegation, has not corrected any error on my part, has not claimed that they did in fact follow their stated methods, or that the alleged fraud wasn't fraud at all, nothing at all.

Lack of familiarity with subjective rater studies might explain some of the initial reticence, but we can't have science work like this, where editors and other authorities are completely silent in response to such disclosures or allegations, and offer no substantive defense, or even insight, on issues as severe as those here. Everything breaks down if this is how science works, where there's no evidence of any cognitive activity at all in response to such reports. It would anchor institutional bias and corruption, and advantage power and establishment interests. I was surprised to later discover that Dr. Kammen advises President Obama, who widely publicized, benefitted from, and misrepresented the results of the study (by ascribing 97% agreement with a dangerousness variable that does not exist in the study.) We need to add political affiliations and ideology to our prevailing accounts of conflicts-of-interest, since such allegiances are likely to pull as strongly as a few scattered checks from oil companies.

I was further stunned that editor Daniel Kammen promoted, on his blog, the false Presidential tweet that 97% of scientists think AGW is "dangerous", and continues to promote it. The study never measured anyone's views of the dangerousness of AGW, not even as a scam. It was not a variable, nor was any synonym or conceptual cousin of danger, severity and the like – it's simply not in the study. It's incredible that a scientist and editor would behave this way, would promote a politician's manifestly false tweet, and would be comfortable reducing his field to a tweet to begin with. We seem to have suspended our basic scientific norms and standards regarding the accurate representation of the findings of research. This is rather dangerous. Run the model in your head. Iterate it a few times. You can easily see what could happen if it became normal and okay for scientists and editors to falsely assert findings that were never measured, not even mentioned, in an article. Run the model.

Because of Dr. Kammen's non-response, I escalated the retraction request to IOP, the publisher, on July 28, where it currently stands, and asked that they exclude Dr. Kammen from the decision given his profound conflict of interest. No word yet, just a neutral update that they were working on it. IOP seems quite professional to me, and I hope it's retracted. If they didn't retract a study that made false claims about its methods, that made it impossible to calculate interrater agreement, that included a large number of social science, survey, and engineering papers, and whose core methods are invalid, we'd probably want to know what is retraction-worthy. I don't think science can work that way.


Closure

Anyone who continues to defend this study should also be prepared to embrace and circulate the findings of Heartland or Heritage if they stoop to using a bunch of political activists to subjectively rate scientific abstracts. If ERL doesn't retract, for some unimaginable reason, they should cheerfully publish subjective rater studies conducted by conservative political activists on climate science, Mormons on the science of gay marriage, and Scientologists on the harms of psychiatry (well, if ERL weren't just an environmental journal...) This ultimately isn't about this study – it's about the method, about the implications of allowing studies based on subjective ratings of abstracts by people who have an obvious conflict of interest as to the outcome. Science critically depends on valid methods, and is generally supposed to progress over time, not step back to a pre-modern ignorance of human bias.

I think some of you who've defended this study got on the wrong train. I don't think you meant to end up here. I think it was an accident. You thought you were getting on the Science Train. You thought these people -- Cook, Nuccitelli, Lewandowsky -- were the science crowd, and that the opposition was anti-science, "deniers" and so forth. I hope it's clear at this point that this was not the Science Train. This is a different train. These people care much less about science than they do about politics. They're willing to do absolutely stunning, unbelievable things to score political points. What they did still stuns me, that they did this on purpose, that it was published, that we live in a world where people can publish these sorts of obvious scams in normally scientific journals. If you got on this train, you're now at a place where you have to defend political activists rating scientific abstracts regarding the issue on which their activism is focused, able to generate the results they want. You have to defend people counting psychology studies and surveys of the general public as scientific evidence of endorsement of AGW. You have to defend false statements about the methods used in the study. Their falsity won't be a matter of opinion -- they were clear and simple claims, and they were false. You have to defend the use of raters who wanted to count a bad psychology study of white males as evidence of scientific endorsement of AGW. You have to defend vile behavior, dishonesty, and stunning hatred and malice as a standard way to deal with dissent.

I think many of you have too few categories. You might have science and anti-science categories, for example, or pro-science and denier. The world isn't going to be that simple. It's never been that simple. Reality is a complicated place, including the reality of human psychology and knowledge. Science is enormously complicated. We can't even understand the proper role of science, or how to evaluate what scientists say, without a good epistemological framework. No serious epistemological framework is going to lump the future projections of a young and dynamic scientific field with the truth of evolution, or the age of the earth. Those claims are very different in terms of their bodies of evidence, the levels of confidence a rational person should have in them, and how accessible the evidence is to inquiring laypeople.

Cognition is in large part categorization, and we need more than two categories to understand and sort people's views and frameworks when it comes to fresh scientific issues like AGW. If our science category or camp includes people like Cook and Nuccitelli, it's no longer a science category. We won't have credibility as pro-science people if those people are the standard bearers. Those people are in a different category, a different camp, and it won't be called "science". Those climate scientists who have touted, endorsed, and defended the Cook et al. study – I suggest you reconsider. I also suggest that you run some basic correction for the known bias and cognitive dissonance humans have against changing their position, admitting they were wrong, etc. Do you really want to be on the historical record as a defender of this absurd malpractice? It won't age well, and as a scientist, certain values and principles should matter more to you than politics.

If you're always on the side of people who share your political views, if you're always on the side of people who report a high AGW consensus figure, no matter what they do, something is wrong. It's unlikely that all the people who share our political perspectives, or all the studies conducted by them, are right or valid -- we know this in advance. We need more honesty on this issue, less political malice, better epistemology. I don't think science has ever been so distrusted in the modern era as it is today. When the public thinks of science, it should not trigger thoughts of liars and people trying to deceive them and take advantage of them. Journals need to take responsibility for what they do, and stop publishing politically motivated junk. Sadly, this paper is self-refuting. A paper-counting study assumes that the papers they're counting are valid and rigorous works, which assumes that peer review screens out invalid, sloppy, or fraudulent work. Yet the Cook paper was published in a peer-reviewed climate journal. That it survived peer review undermines the critical assumption the study rests on, and will be important inductive evidence to outside observers.

So you want to know what the 97% is? You really want to know? It's a bunch of abstracts/grant applications that say: "We all know about global warming. Let me tell you about my atomic layer deposition project." "You all know the earth is melting. Let me tell you about my design for a grapeseed oil powered diesel engine." "We've all heard about global warming. Here we report a survey of the public." "...Denial of improved cooking stoves." Let's call that phenomenon A.

Now let's factor in a bunch of militant political activists rating abstracts on the issue of their activism, and who desire a certain outcome. Call that B.

Let's also factor in the fact that these militant political activists are for the most part unqualified laypeople who will not be able to understand many science abstracts, who have no idea how to do a proper literature search or how to conduct a proper subjective rating study, have never heard of interrater reliability or meta-analysis, violate every critical methodological feature their study requires, and lie about it. Call that C.

Then add a politically biased journal editor who has a profound conflict of interest with respect to the findings, as he works for the politician whose aims such findings would serve, and which were widely touted and misrepresented by said politician. Call that D.

A + B + C + D = 97%

"97%" has become a bit of a meme over the past year. I predict that it will in the coming years become a meme of a different sort. "97%" will be the meme for scientific fraud and deception, for the assertion of overwhelming consensus where the reality is not nearly so simple or untextured. It may become the hashtag for every report of fraud, a compact version of "9 out of 10 dentists agree" (well, I'm abusing the definition of meme, but so does everyone else...) Because of this kind of fraud, bias, and incompetence, science is in danger of being associated with people who lie and deceive the public. Excellent. Just fantastic. Politics is eroding our scientific norms, and possibly our brains.

The laypeople who first identified the fraud in these cases and contacted the relevant authorities were roundly ignored. In the two cases I've covered, the evidence is surprisingly accessible, not rocket science, and the Australian universities who hosted the researchers have been inexcusably unwilling to investigate, at least not when asked by others. AAAS, who leaned on this fraud, has an Enron culture of denial and buck-passing. These institutions have become part of the story in a way they shouldn't have. The system is broken, at least as far as these politically motivated junk studies are concerned, and most of the responsible gatekeepers have been unwilling to discharge their ethical and scientific responsibilities, and should probably be discharged of those responsibilities. If this is science, then science is less rigorous than any plausible contrast objects we'd set it against – it would be the least rigorous thing we do. Some scientific fields are better than this. They retract papers all the time, often the authors themselves, for non-fraud reasons. Fraud is taken dead seriously, and a fraud report will be investigated thoroughly.

We've bumped into some corruption here. I'm disappointed in the extremely low quality of the arguments from the fraud-defenders and wagon-circlers (calling me a "right-wing extremist" or asking whether other people agree with me won't do it, and the latter scares the hell out of me, as it might signal that an absurdly primitive and simplistic epistemology of consensus actually enjoys a non-zero rate of popularity in academic circles). I'm also disappointed in the more common silence from the erstwhile defenders of this junk. In both cases, no one is refuting anything or presenting any sort of substantive argument. We're taking lots of risks here, potentially rupturing the classic association between science and fact, or between science and the rational mind. We can't allow science to become a recurrent source of deception and falsity, or risk the very concept of science devolving into a label for an alternative lifestyle, a subculture of job security and unreliable claims. That outcome seems distant, but if we don't address behavior like the conduct and publication of this study, we'll probably see more of it.
183 Comments
Barry Woods
8/28/2014 10:53:34 pm

ref the white males paper - the one with the Cool Dudes in the title..?

http://judithcurry.com/2011/07/29/cool-dudes/

the only response to that sort of 'psychological research' is humour
http://www.cartoonsbyjosh.com/cool_dude_scr.jpg
http://www.cartoonsbyjosh.com/index.old.html

Reply
Graham Strouts link
8/29/2014 12:04:51 am

Awesome and amazing. Incredible post. As someone who has this "study" thrown at me on a weekly (nearly daily) basis in debates with scientists on FB and elsewhere, you have said everything I argue and far more, only far more eloquently and authoritatively. This could mark a turning point in the politics of AGW. Thank you.

Reply
Vieras
8/29/2014 01:06:12 am

If you want to be even more amazed (or depressed), post this story to the Google+ science forum. You'll get attacked by the moderators there. I kid you not.

Reply
FreedomFan
9/5/2014 02:51:17 pm

Indeed. I tried posting this to the "Climate Change" community and the moderator deleted it and banned it.

More proof that Climate Change is a Liberal religion.

Reply
Joe Duarte
9/5/2014 07:30:46 pm

Can you point me to this forum?

FreedomFan
9/7/2014 07:05:34 am

Joe, the gal who blocked me owns the Google+ Community:

https://plus.google.com/communities
"Science"
"Climate Change"

-FF

Vieras
9/6/2014 04:31:28 am

The community is called "Science on Google+". It has over 400 000 members.

https://plus.google.com/communities/101996609942925099701

Jose, try posting it there yourself and see what happens.

Reply
Steve Ta
8/29/2014 01:12:04 am

I assume that if they were to rerun the study, they could include the previous study results as yet another endorsement of AGW.

Reply
Randizzle
8/29/2014 02:52:40 am

Brilliant Jose! This needs airing widely. Will send it along to others.

Reply
HK
8/29/2014 03:21:32 am

"which I detail in my book on valid reasoning"

What is the book's title? Is it published yet?

Reply
Joe Duarte link
8/29/2014 05:50:41 pm

Not sure about the title yet. I'm still writing it.

Reply
John Kannarr
9/5/2014 09:36:02 am

Eagerly awaiting your book.

Thanks for this incisive analysis!

Jim Pettit
8/29/2014 06:42:06 am

Irony: a psychology student whining that a published paper is invalid because it was based in part on psychology studies.

Awesome, dude. Simply awesome...

Reply
Carrick
8/29/2014 07:49:32 am

Boneheaded comment of the year award.

There's nothing wrong with psychological studies, they just shouldn't be used to measure consensus level regarding primarily physical science issues.

Reply
steven mosher
8/29/2014 09:10:31 am

epic logic fail

Reply
DHF
9/10/2014 08:28:02 am

Your comments are getting shorter and shorter. It is becoming quite hard to understand your argument / what you mean.
I have a challenge for you. Can you make your comments even shorter?
I am kidding. :) I really want you to speak out in full sentences / complete arguments.

Otter
8/29/2014 09:21:19 am

Irony: jimmy pettit whining that something he supports has had its foundation ripped to hell and tossed into the River Lethe. Maybe you could explain how a psychology prof has ANYTHING to do with climate science?

Reply
Joe Duarte link
8/29/2014 05:55:39 pm

You guys might be jumping the gun here. I can read Jim's comment either way. Awesome as good, or awesome as sarcastic dismissal. I fully agree that it's slightly ironic for a psychologist to say psychology papers don't count as evidence of the scientific consensus on anthropogenic warming. Psychology papers are only evidence on psychological matters, and I don't think any research psychologists would argue otherwise.

Reply
T link
8/29/2014 09:18:22 pm

||||| "I fully agree that it's slightly ironic for a psychologist to say psychology papers don't count as evidence of the scientific consensus on anthropogenic warming. Psychology papers are only evidence on psychological matters, and I don't think any research psychologists would argue otherwise."
---
Is it proper for psychologists to examine credulity with regard to the great "We're all gonna die!" climate catastrophe fraud, and discuss its prevalence among specific populations, said groups to be defined by education in scientific method and practice in reasoned analysis?

Think of it as an epidemiological assessment, the better to determine those populations at risk of succumbing to such malfeasant manipulation.

CryptoHB
8/31/2014 03:09:18 am

Who better to discuss the delusional, narcissistic and mystically based psychoses that holders of CAGW beliefs experience. A psychology student is a perfect candidate for such a task.

Reply
MikeR
8/29/2014 06:46:15 am

"The world is never going to be interested in a study based on militant political activists reading scientific abstracts and deciding what they mean with respect to the issue that is the focus of their activism." Ah, but they are. This paper is very widely quoted by people who like the results, including the President of the United States.

Note that even aside from your complaints, there are _real_ climate papers which will surely be counted as supporting the consensus which are irrelevant, if we use your definition of "test the energy balance model, or revise or validate or estimates of transient climate sensitivity." Say, for instance, that there's a study done on how melting the Himalayan glaciers will affect Indian flowers. Presumably the paper will take for granted AGW as being true; that's probably the reason for the paper. But the author may know nothing about energy balance or transient climate sensitivity or the attribution problem. Which is fine - but if those issues are the ones you are focused on it makes no sense to ask him, or even figure out what he assumes.
It would really be much more helpful to identify a number of critical links in the AGW hypothesis, and test them one at a time. What percentage of _scientists who study it_ think that CO2 is increasing due to human activity? (Probably 100%?) What percentage of _scientists who study it_ think that temperature has increased in the last century by about __ degrees? What do _those who study it_ think is the percentage of that increase caused by AGW? Equilibrium climate sensitivity? Etc.

Reply
Joe Duarte
8/29/2014 06:00:12 pm

Hi Mike -- I don't think the world knows how this study was done, or knows anything about the crew that ran the study. That's what I mean -- that the world would not want this if they knew what it was. Most people don't read journal articles, and journalists often pay little attention to methods.

You make a great point on what counts as evidence. I'm surprised this wasn't hashed out a decade ago. Only primary evidence should count -- research directly focused on attribution and related issues. Mitigation papers are often derivative or secondary in nature, meaning they don't carry any new data or findings pertaining to attribution, or even about climate processes in general.

Reply
MikeR
8/31/2014 02:41:02 pm

"Only primary evidence should count -- research directly focused on attribution and related issues." I think you're going too far. It is important to know what Himalayan glaciers' melting will do to Indian flowers, and other ecological issues. It is important to know what the costs of the impacts will be. It is important to know what mitigation will cost, how effective it will be, how its politics will or won't work, when is adaptation more cost-efficient, etc. All these are worthy of study, and all are parts of the structure of AGW. All of them _must_ be, if you want to stand up and demand Carbon Taxes Now.
On the other hand, you will be hard-pressed to get 97% agreement on all those issues. Really hard-pressed. The ecology is probably somewhere in between. But I think once you move into the economics and the politics, you'll be lucky to get 50% on any of the pieces.
But it is very important to the pro-AGW to claim 97% consensus on their whole package. That number is very precious to them (look at the Wikipedia article on scientific consensus on climate change [http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change], and search for "97"). It allows them to claim that this is like Special Relativity, like the Theory of Evolution: there's study going on in peripheral areas, but everything important is basically settled. And anyone who says otherwise is obviously on the fringe, or off the edge.
All real scientists agree we must drastically cut fossil fuel usage worldwide. All others are clowns and fools, or paid for by Big Oil. Loads of people believe this, and say so. It doesn't help the rest of us to think of them as the "reality-based community".

MikeR
8/31/2014 03:00:24 pm

I want to try to make this point clearer. There really is a strong consensus of climate scientists on AGW. The surveys I have seen tend to give very high percentages of scientists who feel that CO2 causes most or all of the recent temperature rise, and that important temperature rises are expected in the 21st century. Something upwards of 85%. Scientists skeptical on various important parts of the consensus on attribution and sensitivity seem to be maybe a sixth. That's a high percentage consensus, it appears in all the surveys I've seen (see link above and actually read the surveys), and I see no reason to doubt it.

We have a special name for that in science. It's called an _open question_. If a sixth of the scientists in a field aren't sure about something, that issue is still completely open. A real consensus is one where nobody really working in the field doubts it. I do not think that anyone who works with particle accelerators questions Special Relativity; you can't and still do anything sensible in the field. You can't still be fighting against Pasteur's Germ Theory of Disease without pigeonholing yourself as a nut.

That is just not good enough for the political activists. They can't let this be an open question. That's where the 97% comes in. 3% is a small enough number that it makes it possible to claim that the real number is 0.

Joe Duarte
8/31/2014 06:59:22 pm

Hi Mike -- Your points seem right to me.

The impact of warming on some aspect of the natural world is a legitimate area of study. I wouldn't count such papers (e.g. your example of Indian flowers) as evidence of AGW. They're "impacts" papers for sure. But I don't see why we would count such impacts papers as evidence of AGW, an issue I haven't even touched on yet.

I think impacts should be catalogued separately in papers of their own. Same with most mitigation papers. From what I can tell these papers just insert a set of assumptions about warming, for example, a particular model or projection. Stipulating such future warming, they proceed to address their question, their area of research, such as the flowers, or in many cases here, agricultural questions, and also in many cases, economic and policy questions -- a whole swath of papers I haven't even listed yet. And many mitigation papers are not even that close to home -- they're just engineering papers (there actually is a grapeseed oil diesel engine paper, as I mentioned in the end of the essay.)

So those papers don't represent knowledge or findings about AGW directly. They don't report climate research. So if we have a reality where there's a consensus in a field (or not even a consensus, just a popular idea or plurality) like climate, which has spillover effects for lots of other fields, and people in those fields take it up and insert it as a premise in some research or paper, we can see how the math would work. It will amplify the numbers to count a bunch of derivative papers in other fields as part of the consensus. Now do those researchers in other fields do anything to test AGW as a climate phenomenon? No. They don't do any research about climate. So in what sense are they part of the consensus? Only in the sense that they just import a model or a hypothesis from climate science and run with it. In some sense they agree with it, we can assume. But their agreement isn't the result of a test, not any kind of scientific test. You might call it trust, in what appears to be a consensus in climate science. Or just blind acceptance. We don't know.

Best case, they're studying some aspect of nature, like the flowers, or animal impacts. Someone might be able to justify a weighted scoring of such non-climate science impact or mitigation studies as part of the consensus (having a lesser weight than climate papers.) But that case needs to be made and carefully thought out. I don't think there's any prospect that engineering papers can count as part of the consensus. And there are so many of those in this 97% nonsense.

It won't be possible to do a valid paper counting consensus study without a lot of careful consideration of these issues and many more, and without some sort of credible weighting scheme. A consensus paper based on the literature won't be valid without weighting, and that weighting will downweight old papers too. There are an alarming number of papers from 1991 and 1992 in their 97%.

Joe Duarte
8/31/2014 07:12:23 pm

One exception to what I just said are papers that report *present* impacts of warming, or impacts they attribute to warming. Some of that would count as evidence of AGW, especially if it's the kind of impact that is somehow contingent on human-causation.

Now that doesn't get at attribution, at it being human-caused, so there's a vulnerability there. But some of it would probably be valid evidence.

A lot of the impacts and mitigation papers are future-tense. Interestingly, that incredible paper on ER visits by asthmatic kids in Montreal said nothing in the abstract about AGW having anything to do with this increase in ER visits or asthma. All it did was make a passing remark about the possible or potential future warming and how it could lead to further increases. But their data and study was not tied to warming. It's incredible that these people counted it as endorsement. These were medical researchers talking about ER use.

hunter
9/1/2014 05:52:37 pm

It is not surprising that this President's handlers would be attracted to and highlight the Cook paper. Nor is it surprising that this President would have accepted and embraced Cook's paper. Considering his decision making process, he would see this paper as valid, solid and useful.

Reply
Geoff Chambers link
8/29/2014 06:50:36 am

This post is getting some publicity at
http://bishophill.squarespace.com/blog/2014/8/29/more-on-cooks-97.html
Here’s the comment I left there:
“Duarte’s article links to Cook’s upcoming event at Bristol University’s Festival of Ideas. Bristol lists him as “of the University of Queensland” without noting that he’s a student there - a student who got into the University circuit via his collaboration with Professor Lewandowsky, and who then lied to his collaborator Lewandowsky and to me about his collaboration on Lewandowsky’s paper, and then lied again in a paper he wrote with Lewandowsky which tried to hide the lies in the first paper.
I’ve been calling out Lewandowsky and Cook as liars at Chris Mooney’s, the New Yorker, Huffington Post, the Conversation, and anywhere else I can. Mooney and his colleagues at the Guardian, Telegraph, New York Times, Los Angeles Times etc. are only journalists. If they want to repeat lies and conduct fawning interviews with known liars that’s their business. Bristol University is a bit different, and so is the Conversation, since it’s financed by a number of state-financed universities and institutions. If they lie and continue to publicise the work of known liars people just might start to ask questions. Or they might like to sue me.
Or they could start telling the truth.”

Reply
admrich
9/5/2014 01:14:33 am

FYI Cook is actually on staff at the Uni of Qld within the Global Change Institute - http://www.gci.uq.edu.au/researchers/john-cook1

He is listed on UQ internal pages as an Honorary Research Fellow.

Part of his function is to do with Communication for GCI

Reply
Denis Ables
8/29/2014 07:47:46 am

Holdren (Obama's "science adviser" ) needs to read this (or anything, for that matter, assuming he is capable of reading)

Reply
Tucci78 link
8/29/2014 09:30:58 pm

John Paul Holdren, "Assistant to the President for Science and Technology, Director of the White House Office of Science and Technology Policy, and Co-Chair of the President’s Council of Advisors on Science and Technology (PCAST)" is one of the losers in the famous Simon-Ehrlich Wager (1980-1990), having anticipated that the selected market basket of commodity metals (copper, chromium, nickel, tin, and tungsten) would rise in price over the ten-year interval of the bet.

As Julian Simon had predicted, their prices fell.

Dr. Holdren hasn't exactly got much of a record with regard to his opinions (indeed, his worldview) having any congruity with reality, nor has he evinced much interest in achieving that objective.

Reply
John Kannarr
9/5/2014 09:49:12 am

As I recall, Holdren et al's claim wasn't merely that this particular basket of commodities was going to become scarcer and run out. These were merely a representative sample - their choice - as a way to prove their thesis that all resources were going to become scarce, etc., etc., in the very near future. Julian Simon had a much better understanding of economics (and reality) than they did.

Carrick
8/29/2014 08:06:13 am

<blockquote> We've been asleep at the wheel, haven't carefully thought about the epistemology of consensus, and what should count as evidence. I don't think mitigation papers can count in most cases</blockquote>

I agree on this. Mitigation papers generally write results contingent on assumptions relating to catastrophic anthropogenic global warming (CAGW), but otherwise don't test the validity of those assumptions.

One could explore the high cost of mitigation of CAGW as a vehicle to express their skepticism over the validity of a mitigation approach to addressing AGW, for example. You might find that such an individual would actually be skeptical of AGW itself. Such a paper would be an example of "motivated reasoning" but couldn't possibly be viewed as an endorsement of mitigation, or the assumptions about CAGW upon which it is based.

More generally, I believe the real problem is the "psychological landscape" associated with belief in (or denial of) AGW is poorly mapped out. This study could have been used to improve our understanding of this "landscape"; instead—in my opinion—it is just another dude paper that eventually undermines the credibility of any researcher willing to endorse it.

Reply
Alex
8/29/2014 08:16:23 am

Denis, sorry to inform, but Holdren knows exactly what he's doing. He's been a pure political actor from day one of his adult life. These people set out to control other people, then look for "science" as the hammer to make it happen. Cheers

Reply
David
8/29/2014 11:12:31 am

Thanks for this article Jose, but if there is one thing worse than this disgrace of a paper it is the fact that it passed peer review. To me it conjures up images of a small but powerful group of gatekeepers who think they are like Arthurian knights sat at a round table, passing around their papers for each other to endorse. Their behavior is doing untold damage to the reputation of science. If you wondered who is the main cheerleader and mentor to the likes of Cook and Lewandowsky, look no further than Naomi Oreskes, professor of history and science studies at the University of California, adjunct professor of geosciences at the Scripps Institution of Oceanography. A staunch supporter of the consensus, arch propagandist for the cause and author of Merchants of Doubt. But make sure you're sat down with a strong drink and keep an eye on your blood pressure. All the best to you

Reply
Fernando Leanme link
8/29/2014 08:31:05 pm

David, never take it personal, it's only business. So you think Naomi Oreskes is in charge? I thought Holdren was like Skeletor, and Naomi plays the Evil-Lyn role? But I never figured out who was He-Man or the midget who carried the key.

Reply
toby
8/29/2014 06:46:38 pm

"10 minutes in the database" and the future Dr Duarte becomes judge, jury and executioner! WTF!

Is this screed worth any attention? Did Duarte read any of the attacks and defenses of the paper that have appeared elsewhere? Like here - http://andthentheresphysics.wordpress.com/2014/05/10/richard-tol-and-the-97-consensus-again/

Has he evidence the conclusions are wrong? No.

Add in a flavouring of the dismal ranting about "activists" and "Marxists" you get on climate denial blogs, and the mess is unpalatable. Barely worth the time Duarte devoted to the database.

Reply
Fernando Leanme link
8/29/2014 08:19:51 pm

Toby, indeed it's possible to identify a mistake in a data set in 10 minutes. What we are missing is a test of Jose Duarte's English language reading comprehension at high speed.

I mention this issue because it's possible to have different speeds and skills in each language we know. In my case I find an interesting effect: I lost my Spanish, learned English, then French. Later I moved to Russia, the Russian overprinted on the French, I never spoke Spanish, and then I moved to Venezuela so I had to relearn Spanish, forgot half my English, and now I work at half speed in every language I know...

Jose could have a similar problem, but I suspect he probably reads English faster than 99 % of his high school class. Thus I have to conclude on a preliminary basis he is right (high confidence).

Reply
Otter
8/29/2014 08:48:20 pm

'has he evidence the conclusions are wrong?'

TONS OF IT! Not that you would make the slightest effort TO THINK.

Reply
Mark Bofill
8/29/2014 11:46:38 pm

"Has he evidence the conclusions are wrong?"

Methodology mattered, once upon a time. Dare I say, before Climate Science became a burgeoning field. Reproducible results and all that. If you got the right answer for the wrong reason, that still wasn't good.
It's sad to see how our standards have fallen.

Reply
John M
8/30/2014 06:38:06 am

Another example of the idea that lies are OK as long as the conclusion is correct. At some point, you need to ask whether the conclusion is really correct if people need to lie to prove it. If they want to know what climate scientists actually believe about AGW, is it really that hard to just poll them?

Reply
Lloyd Mongo
8/30/2014 11:15:28 pm

The question really is: "Does the evidence presented support the conclusions." The answer seems to be: "Not by accepted standards of science."

Is it your position that the evidence presented, and conclusions drawn, in the Cook et al paper represent sound scientific practice?

Reply
Carrick
8/31/2014 02:53:10 am

If I wrote a paper stating that clear solar-illuminated sky is blue because of magic pixie dust, based on your logic it would be a good paper as long as the sky were really blue.

Reproducing other people's findings of 97% using bad methodologies simply means this paper adds nothing to the scientific corpus.

And you're suggesting that the SKS group are not activists???

That's very quaint.

Reply
Smokey link
9/12/2014 05:08:30 pm

@Toby:

"Climate denial blogs"? What are those?

Are those the blogs like realclimate, owned and operated by Michael Mann? Because Mann is the guy who denies that the climate ever changed, until the Industrial Revolution. So realclimate must be a "denial blog" by your own definition.

Skeptics [the only honest kind of scientists] know that the climate always changes, naturally. Always has, always will.

So explain for us: what, exactly, is a "climate denialist blog"?

Reply
Phil Clarke
8/30/2014 03:49:19 am

"(There were some papers that were classified as "Not climate related" in my quick search, but the above papers were not -- they were classified is implicit or explicit endorsement.)"

Erm, the last two in your list WERE classified as not climate-related, Jose. Epic fail. Pot meet kettle.

Reply
TerryMN
8/30/2014 11:49:49 am

Phil, let's cut to the chase. There are dozens of things fatally wrong with the paper. Are you going to continue to defend it? A simple yes or no will suffice, thanks.

Reply
TerryMN
8/30/2014 12:06:42 pm

"Dozens" was an overstatement, sorry. Please substitute "many." Thanks.

Carrick
8/31/2014 03:11:50 am

Phil, finding two errors in José's examples doesn't exactly constitute an "epic fail", since the main point "The Cook et al 97% paper included a bunch of psychology studies, marketing papers, and surveys of the general public as scientific endorsement of anthropogenic climate change" remains valid.

As Terry says there were "many" things wrong with the paper, including most of the examples given by José.

There are many who endorse the view that a paper is okay as long as the results are useful, regardless of just how badly the methodology skunked it up.

Reply
Phil Clarke
8/31/2014 06:14:16 am

Really? On a piece attacking Cook for wrongly categorising papers, Duarte, erm, wrongly categorises 25% of his sample. Epic fail? Maybe not. Embarrassing, certainly. But to his credit, he has corrected the error. The paper may or may not be flawed; certainly the 11 or so papers cited above could be excluded without affecting the conclusions. However I agree with this:

"This is what I've seen a lot of climate skeptics do: They come into the climate debate with preconceived notions, and they latch on to those handful of dissenting scientists who agree with them. They don't know the names of a lot of non-skeptic scientists, except perhaps for a couple of people they view as arch-villains. This is pure confirmation bias. You're less likely to get to the truth if you only read people who agree with you. Do you read champions of the "mainstream" view like Gavin Schmidt, Kevin Trenberth, and Drew Shindell? Why not?

A critical reason why this approach is faulty is that skeptical climate scientists are significantly outnumbered by scientists who are more confident in human-caused warming and in future warming scenarios. I think some of the research on the climate science consensus is garbage, but even if the numbers are inflated, it looks like a large consensus will still be there if you fix the studies and revise them downward (<b>there's a lot of room to go down from 97% and still have a very high number</b>)."

Jose Duarte.

http://www.joseduarte.com/blog/but-climate-scientist-so-and-so-says-its-not-a-big-deal

And I also agree with another critic of the paper, Richard Tol.

"There is no doubt in my mind that the literature on climate change overwhelmingly supports the hypothesis that climate change is caused by humans. I have very little reason to doubt that the consensus is indeed correct"

I'm not sure that quantifying the consensus to 3 significant figures is useful or possible. The consensus is strong because the science is strong.

Joe Duarte
8/31/2014 08:56:49 am

Hi Phil,

It's a terrible to assume there are only 11 such papers. I mention in my post that I spent little time on this. Half of them came from 10 minutes. The others came from a few multitasked hours. You need to extrapolate. (And I have a lot more than 11 at this point. You can find a lot on your own.)

It will also be interesting to look at all the substantive papers by actual climate scientists, on the issue of climate, that they excluded completely. They have nothing from Dick Lindzen since 1997. That's a shocker in and of itself. They excluded a lot of climate science papers that contested their aims, by various authors. Their search was incompetent, perhaps deliberately so.

In the post above, I've given you enough. No one should need more -- anything more I do should be considered generous. You need to be able to read and process what is in the post, because there's a hell of a lot more than what you're talking about. And it kills the paper. This paper is over, if we're being serious, or if we're being pro-science. Also, it's important to understand that retention of the results (which won't be the case here, but which you argued would be) will not matter. Science is about method, not results. Dana N said something similar, trying to argue that invalid methods don't matter if the results are the same or something. No one should ever listen to that man again.

You should also be looking for more than I offered in the post. Because there is much more. I've given you plenty to go on, hints and the like. There is much more to say about the paper, and some of those things would invalidate it irrespective of everything else. This was a scam paper, and it's time for people to stop rationalizing and defending it. This was fraud.

Joe Duarte
8/31/2014 09:01:05 am

Typo in first sentence: "It's a terrible *mistake*..."

People need to be more realistic here. This was never going to be about 6 or 11 papers. That's cute. There are far more, and people who are trying to do arithmetic on 6 or 11, to try to argue that the results don't change are just wasting time and completely missing the extrapolation point, and lots of other points.

Carrick
8/31/2014 11:59:41 am

Phil, yup *really*.

The main point "The Cook et al 97% paper included a bunch of psychology studies, marketing papers, and surveys of the general public as scientific endorsement of anthropogenic climate change" remains valid.

I'm glad you found and pointed out errors in José's work, but it's not an epic fail to mis-categorize two papers. Cook's paper remains a 97% fiasco, with or without two errors in José's analysis.

Carrick
8/31/2014 12:03:35 pm

José: <blockquote>Science is about method, not results. Dana N said something similar, trying to argue that invalid methods don't matter if the results are the same or something. No one should ever listen to that man again.</blockquote>

See my comment about finding that the sky is blue and attributing it to magic pixie dust:

That the sky is blue or that the vast majority of papers (even by critics) affirm the basic message that "the Earth is warming, and humans are playing a role in that warming" are not novel results.

You grade a paper based on the novelty of the work, not by its political convenience. If the novelty is lost by poor methodology, then simply getting a result that you could have guessed without performing the study, does not rescue an otherwise atrociously bad paper from its deserved place in the trash bin of history. But had they taken the time to deconstruct the intellectual landscape of belief in or denial of AGW amongst scientific researchers, the paper would have made a positive contribution.

scf
8/30/2014 01:29:53 pm

I like what you are doing here. Many, many people inherently agree with you, but you are quite good at putting the points into words.

Reply
david
8/31/2014 05:04:06 am

José, if you want to know why many people, including myself, are skeptical of dangerous manmade global warming, you could do a quick search for Michael Crichton's essay "Why Politicized Science Is Dangerous". It ticks the right boxes for me, but then again I could be a conspiracy theorist.

Reply
Phil Clarke
8/31/2014 04:48:59 pm

Lot of speculation there, Jose, not much actual evidence.

You invite us to extrapolate from your sample of 11 or so to the whole population, but without knowing how big *your* sample (the denominator) is, such an invitation is pure speculation. I recommend you do not attempt such a line of argument with your PhD supervisor.

I guess Lindzen's papers did not include the search string 'global climate change' or 'global warming' in the title or abstract. Yes, it's a fairly crude sampling method, but that cuts both ways; some 'consensus' papers will also have been excluded (I speculate). If you don't like the methods, you are of course free to do your own study, or spend a bit more time with the (online) Cook data than you are apparently willing to.

But, and here I am speculating, I guess that any such survey is highly likely to come up with a similar number to Cook's, to the self-ratings collected by Cook, and to the other literature and opinion surveys published to date.
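Phil's denominator point can be made concrete: the uncertainty around an estimated miscategorization rate depends on how many papers were inspected, not just how many were flagged. A rough sketch using a normal-approximation binomial confidence interval (the inspection counts are illustrative, not from either side's tally):

```python
import math

def error_rate_ci(flagged, inspected, z=1.96):
    """95% normal-approximation confidence interval for a miscategorization
    rate, given `flagged` bad papers found among `inspected` papers examined."""
    p = flagged / inspected
    half = z * math.sqrt(p * (1.0 - p) / inspected)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical: the same 11 flagged papers, under three possible denominators.
for inspected in (50, 500, 5000):
    lo, hi = error_rate_ci(11, inspected)
    print(f"inspected={inspected}: rate plausibly between {lo:.1%} and {hi:.1%}")
```

With the same 11 flagged papers, inspecting 50 abstracts versus 5,000 implies wildly different population error rates, which is why the denominator matters to the extrapolation argument.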

Reply
Joe Duarte
8/31/2014 06:37:18 pm

Phil, you're not addressing even a fifth of the points made in the essay. This is a fraud case, for one thing, and the fraud is multiple. That's laid out in the essay.

If you don't know what interrater reliability is, or why independent raters matter, well those are things one would need to know to ever be confident in any kind of subjective rater study. No one should be defending this garbage unless they're confident in their understanding of the methods.

But those issues are trumped by the fraud and false claims made about the methods used. This is quite clear, and you haven't even touched on those issues. So is the issue of the absurdity of using political activists to rate abstracts on the subject of their activism, and giving them the power to create the results they wanted.

The number has gotten way past 11 by the way, and there are a massive number of irrelevant engineering papers in their 97%. There are so many huge points of failure here. What they did here is pretty obvious at this point. None of the arguments you're making in clinging to this paper will survive the points I've made in the essay, or further encounters with their data.
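The interrater reliability invoked above has a standard quantitative form: Cohen's kappa, which corrects the raw agreement between two raters for the agreement expected by chance. A minimal sketch, assuming two hypothetical raters scoring ten abstracts on a 1-7 endorsement scale:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters assigning endorsement levels (1-7) to ten abstracts.
rater_a = [3, 3, 4, 4, 2, 3, 4, 4, 3, 2]
rater_b = [3, 4, 4, 4, 2, 3, 4, 3, 3, 2]
print(round(cohens_kappa(rater_a, rater_b), 3))  # → 0.688
```

A kappa of 1 is perfect agreement and 0 is no better than chance; subjective rating studies report such a statistic precisely because raw percent agreement overstates reliability, and the statistic is only meaningful if the raters were genuinely independent.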

Paul Matthews link
8/31/2014 07:45:40 pm

Following a tweet from Jose, I spent about 2 minutes looking at the database of papers, and found

Lindsay, 1994: Update Of Refrigerant Issues - Accelerated Phase-out, Safety, And Refrigerant Management

Lewis, 1994: An Opinion On The Global Impact Of Meat Consumption

Both of these papers were counted as category 3, implicitly endorses AGW.

A reminder of what the paper claims: "We examined a large sample of the scientific literature on global CC". So they are claiming that an opinion piece on vegetarianism is part of the scientific literature on climate change and part of the 97%.

Reply
Joe Duarte
9/15/2014 08:12:58 pm

Hey Paul, thanks! I forgot to thank you earlier. Yeah, I still have the refrigeration paper in my back pocket, along with lots of others. That was purely about the pragmatics of the transition to the new refrigerant, like a business plan. And it's from 1994!!

Thanks for helping me on Twitter. I'm still getting a feel for it.

Reply
Brad Keyes link
8/31/2014 08:07:13 pm

José, thanks for having the stomach to autopsy this putrefying corpse of a "paper." Somebody had to do it eventually.

The salient point, and I'm glad you touched on it at one point, is that these people are anti-science. The myth of the climate activist as a supporter of "science" is the result of confusing a position (the [dangerous] AGW hypothesis) with a process (science). Cook et al. wouldn't know the latter if it punched them in the face, though science would certainly be forgiven for doing so repeatedly.

It may surprise you that they got away with it, but it's no surprise to them. They took an educated risk. Their role model, Naomi Oreskes, had already demonstrated the kind of crimes you can commit against science, epistemology and common honesty— with impunity—as long as it's for the cause.

Reply
Brad Keyes link
8/31/2014 08:48:35 pm

Scientific Epistemology 101:

In science, opinion is not a form of evidence.

That will be all.

Now one understands why consensus polls enjoy the status they enjoy in science, which is somewhere between contempt and ridicule. (Climate science being the obvious exception.)

PS The reason Cook et al. believe you can pretend to measure evidence by counting papers is presumably their scientific illiteracy. The arithmetic of evidence is not so much a foreign language to them as an extraterrestrial one. They'd be shocked to learn that there is such a thing.

Have you spent much time reasoning with these folks, José?

"With" is probably the wrong word.

Reply
Phil Clarke
8/31/2014 09:33:51 pm

Fraud is the most serious charge one can make against a scientist. It crosses the line from mere disagreement and into the realm of deliberate malfeasance. You need to provide more than a sample of 11 (0.003% of the population) which *in your opinion* should be excluded, to back it up. Without hard data, just saying 'I've only scratched the surface' is mere hand-waving. (Oh, I see you're up to 14 now, out of how many?).

You made an amusing error in your original post, which invalidated 1 in 4 of your examples, but you claim a massively smaller sample must indicate knowing fraud. Hmmmm.

You didn't like the 'cooking stoves' paper being included as a 'mitigation' paper that assumes greenhouse gas warming, but here is the abstract ...

"Use of biomass fuel in traditional cooking stoves (TCS) is a long-established practice that has incomplete combustion and generates substances with global warming potential (GWP). Improved cooking stoves (ICS) have been developed worldwide as an alternative household fuel burning device, as well as a *climate change mitigation*"

Under the criteria described in their methods, the inclusion and categorisation seems fair enough to me. The paper is about mitigating AGW; indeed, it probably would not have been written if the authors did not believe greenhouse gases are warming the world. You can disagree about the validity of the methodology, but following it and doing exactly what they said they would is not fraud.

On the exclusion of Lindzen's recent papers, this was probably because they were not returned by the admittedly crude keyword search. However, any sampling method that included Lindzen's studies would *also* include the papers that rebutted them (eg Hartmann and Michelsen 2002), so the net effect on the percentage would likely be negligible.

You didn't like the exclusion of Spencer's essay 'How serious is the global warming threat' at the same time complaining that they did not exclude papers on 'Social science, education, research about people’s views on climate'. Spencer's piece was published in 'Society'. Again .... hmmmm.

Showing that a paper has flaws is one thing, none is perfect, showing that the errors invalidate the conclusions is another, this you have not done, in my view, never mind demonstrating fraud.

But I am certainly not going to get into an argument about the minutiae of the methodology or go through your examples paper by paper. If you have hard evidence of scientific fraud then, assuming you wish to be taken seriously, you will of course contact the journal and present it, or submit a comment for peer-review and publication, rather than posting polemical blog posts. Good luck with that.

Also, so what? I agree that the whole study-counting thing is no more than a numbers game, it does not really qualify as a literature review, but then where did they claim differently? A single paper that demonstrated global warming was actually driven by the sun or ocean currents would render all of Cook's thousands of abstracts redundant. I do find the amount of heat generated by this one paper quite amusing.

Reply
Joe Duarte
9/1/2014 03:00:38 am

Phil, the fraud is falsely stating their method, three times. I'm amazed at how comfortable you are with fraud. Saying you have independent raters when you were running a forum where they were all collaborating is fraud. Saying they couldn't see the authors when one of them openly admitted to outing Lindzen as the author of a paper is fraud. Saying you excluded social science and survey papers when you included many of them is fraud. The cooking stove paper is a survey of the general public. It's not going to count.

Your points aren't strong. You're not addressing the substance here: the epistemological issues, the fallacy of demanding proof of a negative, the exclusion of no-opinion papers in calculating a consensus, and the absurd invalidity of the method of the study.

Spencer is a climate scientist and he wrote a paper directly about climate scientists. Your points are getting very, very dumb. I discourage that on my blog.

The fraud has already been reported and is under investigation.
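The "no opinion" exclusion mentioned above is simple arithmetic. Using the category shares reported in Cook et al. (2013) — roughly 66.4% of abstracts taking no position, 32.6% endorsing, and about 1% rejecting or expressing uncertainty — the headline figure depends on which denominator is chosen:

```python
# Category shares as reported in Cook et al. (2013), as percent of all abstracts.
endorse = 32.6             # abstracts endorsing AGW
reject_or_uncertain = 1.0  # abstracts rejecting AGW or uncertain about it
no_position = 66.4         # abstracts taking no position

# Headline ~97%: denominator restricted to abstracts expressing a position.
of_position_takers = endorse / (endorse + reject_or_uncertain) * 100

# Alternative framing: denominator is every abstract in the sample.
of_all_abstracts = endorse

print(f"{of_position_takers:.1f}% of position-taking abstracts endorse")
print(f"{of_all_abstracts:.1f}% of all sampled abstracts endorse")
```

Whether dropping the no-position majority from the denominator is legitimate is exactly the dispute in this thread; the arithmetic itself is not in question.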

Reply
Brad Keyes link
8/31/2014 10:15:05 pm

Phil,

"If you have hard evidence of scientific fraud then, assuming you wish to be taken seriously, you will of course contact the journal and present it, or submit a comment for peer-review and publication, rather than posting polemical blog posts."

Or he could privately email Andy Revkin and describe the paper as "pure scientific fraud" without even providing a reason, let alone having to defend a public blog post.

You know, the Michael Mann school of discrediting studies you don't like.

Much easier.

Reply
Phil Clarke
8/31/2014 11:22:34 pm

Brad - which part of 'privately' is giving you the problem?

Reply
Phil Clarke
8/31/2014 11:28:07 pm

And if you refer back to the illicitly obtained mail in question you'll find that on the very next line Mann linked 2 RealClimate posts that debunked M&M .....

Brad Keyes link
9/1/2014 04:20:55 pm

Phil:

"which part of 'privately' is giving you the problem?"

Which part of "Andy Revkin is a massively influential environmental science journalist" is giving you the problem?

"And if you refer back to the illicitly obtained mail in question you'll find that on the very next line Mann linked 2 RealClimate posts that debunked M&M ....."

Oh, so Mann had some blog science to back up his grave allegation? I must admit that did slip my mind. Of course, if he'd had hard evidence of scientific fraud then, assuming he wished to be taken seriously, he would have contacted the journal and presented it, or submitted a comment for peer-review and publication, rather than posting polemical blog posts... right, Phil? ;-)

PS: everyone, let's take a moment to acknowledge Phil's guts. Sure, his arguments aren't getting him anywhere, because they aren't particularly good, but then he is massively outnumbered.

Standing up for the consensus can be a lonely struggle.



Lloyd Mongo
8/31/2014 11:18:13 pm

I think the implicit contention in some comments that there is "97% consensus on general relativity, and we think (hope?) there's 97% consensus here, therefore AGW is equivalent to general relativity in its acceptance" is wildly fatuous.

What are the differences between GR and AGW in terms of science? Plenty.

GR was a theory with a solid mathematical underpinning that directly predicted results of certain experiments. It was falsifiable, and deliberately contained the seeds of its own potential destruction by making predictions that could be verified by experiment.

AGW is based on conjecture and, at its heart, argumentum ad ignorantiam; specifically: "Well we don't know how to make our models match observation other than by introducing what we imagine to be parameters that might model how we suppose CO2 would affect climate." When reality fails to match the models, which produce, despite millions of lines of code, little more than a lagged rescaling of the inputs, and which, with their copious parameters, have become over-fit, then either reality is adjusted by unjustified manipulations of past temperature data, or new explanations (excuses) for why the models have no predictive skill are invented. So the "settled science" of AGW keeps mutating; the scientific holy grail of falsifiability always just disappearing over the horizon.

Remember, Albert Einstein specifically described how GR could be falsified.

The Team actively suppresses all contrary data, hypotheses and attempts to falsify AGW (even explicitly admitting as much, albeit among themselves in communications they thought would not become public).

Albert Einstein actively encouraged experiments that would disprove GR, and alternative theories that themselves were subject to being falsified, because he *knew that's how science worked*.

People expressing views contrary to AGW are shouted down with vicious invective or are dismissed with the laughable "You're not a 'climate scientist'" canard. Remember, Einstein published numerous papers on thermodynamics and statistical mechanics while he was a clerk in a patent office.

There may or may not be AGW, but there seems to be a profound infestation of "climate science" by those who would destroy it by their unscientific conduct. The "skeptics/deniers" have a field day with utter nonsense such as the paper that is the topic of the current discussion, and science is not well served by those who support such specious claptrap. Defending it on the basis that "well, it seems to produce the right result" is juvenile -- methods matter (need I trot out the stopped-clock trope?).

Cut it loose; its carcass is emitting a foul odor.

Reply
Phil Clarke
8/31/2014 11:20:02 pm

PS. Another factual error. You say

'For an upcoming event, Cook claims "They found that among relevant climate papers, 97% endorsed the consensus that humans were causing global warming." '

but those are NOT Cook's words, they are just a description of the event by the organiser. You might want to update your rant.

Reply
Carrick
9/1/2014 01:35:22 am

In cases I've been associated with, the speaker produced the original text that was used.

So what's your basis for saying they weren't Cook's words or that he didn't endorse them? This phrase is virtually identical to that found <a href="http://www.skepticalscience.com/global-warming-scientific-consensus.htm">on Cook's website:</a>

<i>97% of climate experts agree humans are causing global warming.</i>

So perhaps you want to reword your own rant.

Reply
Phil Clarke
9/1/2014 02:00:41 am

Well, the words are in quotes and attributed to Cook, even though he never wrote them. This is simply wrong. If you are going to criticise others for inaccuracy ....

And the actual Cook quote you provide relates to *scientists*, i.e. the self-ratings by the authors, which also came in at 97% — not the papers rated for the study. Different thing.



Phil Clarke
9/1/2014 12:09:17 am

"This approach is illustrated with a 16 expert real-world dataset on climate sensitivity obtained in 1995. Climate sensitivity is a key parameter to assess the severity of the global warming issue. Comparing our findings with recent results suggests that the plausibility that sensitivity is small (below 1.5 °C) has decreased since 1995, while the plausibility that it is above 4.5 °C remains high."

From the abstract of Ha-Duong, M. (2008), listed above as 'non-climate related'. Say what?

Reply
Joe Duarte
9/1/2014 04:43:22 am

Phil, this one is interesting. I could give it to you, depending on some things. Here's the whole abstract (you just quoted the end of it):

This paper examines the fusion of conflicting and not independent expert opinion in the Transferable Belief Model. A hierarchical fusion procedure based on the partition of experts into schools of thought is introduced, justified by the sociology of science concepts of epistemic communities and competing theories. Within groups, consonant beliefs are aggregated using the cautious conjunction operator, to pool together distinct streams of evidence without assuming that experts are independent. Across groups, the non-interactive disjunction is used, assuming that when several scientific theories compete, they can not be all true at the same time, but at least one will remain. This procedure balances points of view better than averaging: the number of experts holding a view is not essential.

This approach is illustrated with a 16 expert real-world dataset on climate sensitivity obtained in 1995. Climate sensitivity is a key parameter to assess the severity of the global warming issue. Comparing our findings with recent results suggests that the plausibility that sensitivity is small (below 1.5C) has decreased since 1995, while the plausibility that it is above 4.5C remains high.

I'll comment separately below, since I think there's a cap on comment length on this system.

Reply
Joe Duarte
9/1/2014 05:13:18 am

You can read the paper here: http://halshs.archives-ouvertes.fr/docs/00/28/08/23/PDF/HaDuong-2008-HierarchicalFusionOfExpertOpinionInTheTBMApplicationToClimateSensitivity.pdf

It's a cognitive psychology or applied reasoning paper, where the author applies, or at least plays with, the Transferable Belief Model. It's an elaboration of the Dempster-Shafer mathematical theory of evidence, dating back to the 1960s and 70s. The author applies it to a 1995 study that interviewed 16 climate scientists. That study is here: http://keith.seas.harvard.edu/papers/13.Morgan.1995.SubjectiveJudgmentsByClimate%20Experts.s.pdf

Both studies are fascinating. If the Cook crew had read them, it would've opened their eyes to the world of expert judgment and how to validly combine or integrate such judgments. They appear to be completely unaware of this field or all the work that's been done in it, which is remarkable for a group who published a study of expert consensus on an issue.

The Ha-Duong paper shouldn't count for Cook for at least three reasons: 1) It's a social science paper, and they said they treated social science papers as Not Climate Related (see their Table 1). A paper about cognitive psychology and the nature of expert agreement is definitely social science. 2) It's a consensus paper, in a small way. It applies a particular model to an existing dataset of climate experts and generates novel estimates from that data. Cook's paper cannot include other consensus papers -- a consensus paper cannot include consensus papers as evidence. This would be circular and would constitute data duplication.

3) I haven't mentioned this yet, but old data should not be included in a study that aims to report "the consensus", present-tense. This is another critical problem with the study, one of several I haven't detailed yet. There are too many critical issues. But anyway, what people were saying in 1991, 1992, or 1995 isn't going to be valid unless they were saying some very specific kinds of things. I won't go into it all here, but this kind of project, of measuring a consensus, requires a great deal of thought and work. You can't just count papers and send it in. That was absolutely invalid, and confusingly dumb. At the least, we'd need weights. But more thought would need to go into it. For one thing, they'd have to deal with the problem of duplication, another critical flaw in the Cook study. Counting mitigation and some of the impacts papers the same as actual climate science or attribution papers introduces epistemic duplication. Those mitigation and similar papers don't represent independent units of knowledge or expert opinion that aggregate into a summed consensus, as Cook and crew assume. Their whole approach is DOA.

See Keith's "When Is It Appropriate to Combine Expert Judgments?" as one interesting exploration: http://keith.seas.harvard.edu/papers/14.Keith.1996.WhenToCombineExpertJudgments.f.pdf
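For readers unfamiliar with the Dempster-Shafer machinery behind the Transferable Belief Model discussed above, here is a minimal sketch of the classical Dempster rule of combination (the expert mass assignments are hypothetical; Ha-Duong's paper uses the more elaborate cautious conjunction and disjunction operators):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: fuse two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory intersections
    # Renormalize surviving mass by the non-conflicting total.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

LOW, HIGH = frozenset({"low"}), frozenset({"high"})
EITHER = LOW | HIGH  # the full frame: belief the expert leaves uncommitted

# Two hypothetical experts' beliefs about climate sensitivity.
expert1 = {LOW: 0.6, EITHER: 0.4}
expert2 = {HIGH: 0.5, EITHER: 0.5}
fused = dempster_combine(expert1, expert2)
for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```

Mass placed on the full frame represents an expert's uncommitted belief — exactly the kind of nuance that a simple head count of papers cannot express.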

Reply
Karl Kuhn
9/1/2014 12:59:42 am

Phil,

re Ha-Duong (2008): a paper with this title:

"Hierarchical fusion of expert opinions in the Transferable Belief Model, application to climate sensitivity."

Is not a climate research paper, but a paper that is essentially doing the same as Cook: climate change is real because the experts we interviewed say so.

This is not research about the climate, but research about climate science, at best.

But your mind is too barren already to realize the difference.

Reply
Phil Clarke
9/1/2014 01:28:57 am

"Is not a climate research paper, but a paper that is essentially doing the same as Cook: climate change is real because the experts we interviewed say so."

No, Cook is saying the experts largely agree and therefore the consensus is real, and here are the numbers. The paper used the TBM with a case study of climate sensitivity, so no, it is not a climate research paper, but it does provide evidence of one aspect of the consensus.


Reply
Carrick
9/1/2014 01:42:52 am

I wouldn't include that paper as a climate science paper. It's a study on opinion. The authors may or may not believe the opinions of the people they are studying, but that's irrelevant.

Studying opinion, even the opinion of climate scientists, does not suddenly promote you to the status of "climate scientist".

If you can't get something that basic correct, I can see why Cook's paper must look to you like a work of pure genius.

Phil Clarke
9/1/2014 01:55:01 am

>>I wouldn't include that paper as a climate science paper.
Nor would I. Nor did they.

>>It's a study on opinion. The authors may or may not believe the opinions of the people they are studying, but that's irrelevant.

As far as I can tell, it is a study of a model used to assess expert opinion. The case study used was expert opinion on climate sensitivity. This made it a climate-related paper.

If a consensus is not an aggregation of expert opinion, what is it?

Carrick
9/1/2014 02:21:23 am

Phil: "If a consensus is not an aggregation of expert opinion, what is it?"

Yes, but this is one step further removed. Using this paper in a study on AGW consensus would involve aggregating opinion about expert opinion on AGW, which makes it no longer a study of opinion on AGW, but a study of opinion on the consensus of AGW.

Rational Optometrist
9/1/2014 01:16:53 am

Hi Phil - if you want to refute José's points - at least here, on this blog, in response to his post - you'll have to respond to the major points he makes, not just nit pick minor errors like the above. Your 'gotcha!' points are small-fry in the face of the tsunami of errors pointed out above, such as the inherent activism of the raters, or the quotes from the hidden discussion forum, or the obvious comprehension difficulties for laypeople analysing scientific abstracts. These errors jointly and severally invalidate the paper. You'll need to address these.

Reply
Phil Clarke
9/1/2014 01:40:45 am

No. it is not I who is making an allegation of fraud.

The activism of the raters - pure ad hominem argument, the only hard evidence presented of incorrect categorisation (and I agree with some) amount to a lot less than 1% of the total.

Saying 'Look for the explicit cheating in at least one of the forums linked to above. Find it yourself' is a bit like saying 'God exists, look in the Bible for proof'. Either Jose has evidence of cheating, in which case he should explicitly quote it, or he should retract and apologise. This is not good enough.

Also if these raters are activists, then they are likely versed enough in the science to rate the majority of abstracts on the topic, heck even I have heard of GISS Model E, as I suspect most people with an interest in the topic have.

Again, we are invited to extrapolate from *one example* to build a case that the paper is invalidated. Sorry, you need more than that.

To repeat, where is the evidence that these flaws materially affect the actual conclusions of the paper (rather than the Straw Man that Jose has constructed)?

Reply
Phil Clarke
9/1/2014 01:44:21 am

PS Remember also the comparison of the ratings performed for the paper with those performed by the authors themselves. There was good agreement, with most of the authors rating their studies as endorsing the consensus more strongly than the (allegedly) biased raters.

Carrick
9/1/2014 02:02:01 am

I think you need to look up "straw man". A straw man involves the misrepresentation of another person's argument—you prove the misrepresentation of the argument wrong, then by flawed logic, conclude you've proven the real argument wrong.

So the presence of major methodological flaws is not a straw man. Rather, it's an overarching reason for the paper to be withdrawn.

I can think of three separate reasons for withdrawing the paper. Any of these should be sufficient in a rational world. This paper is protected by the editor in chief (Kahan), who is also a colleague of and frequently blogs on results by Cook's advisor (Lewandowsky). So nepotism will out here, I'm afraid.

The paper should be withdrawn because it violated human research ethics review requirements. (Incomplete ethics approval, with the strong possibility that the lead author misled his ethics review board.)

The paper should be withdrawn because the so-called independent evaluators have been documented as discussing the paper among themselves, which removes any remote possibility that this so-called research can be considered publishable.

The paper should be withdrawn because the paper's statistical methodology is irreversibly flawed. Their results are not even correct, as stated, even given the assumption that the opinions were actually independent, and the talk back and forth on their forums was just a big charade (part of a holiday celebration, perhaps).

I would say you're "barking up the wrong tree" here, if your goal is to protect Cook from criticism. You seem to be assuming that a bit of smoke does not imply a fire.

Good advocates never ask questions they don't already know the answers to. After all, in spite of his protestations, their client may really be guilty.

If you want to see Cook's paper really sink (which I think would require herculean efforts given the political forces that want to keep it afloat), keep pushing though.

Joe Duarte
9/1/2014 02:35:15 am

Straw man doesn't mean what you think it does, as Carrick notes.

Phil, you missed most of the essay. The observation that we can't exclude papers that don't take a position will invalidate this paper, so that scratches your whole premise about the numbers. Did you not read that part? All your dancing around numbers like 11 is pointless, since even granting that invalid premise, 11 is never going to be the final number -- it's absurd to keep pretending it will be anything as low as 11. You're not processing what's happening here. This paper is fundamentally invalid in multiple ways. None of what you're talking about is going to rescue it.

Fraud is fraud. If you don't care about fraud, then you can't purport to be pro-science. You've had nothing to say about the fraud of falsely stating a method, or the fraud of cheating and breaking protocol to see who the authors were (I exposed that cheating in the earlier post).

To defend this paper, you'll have to do a lot of work. None of the stuff you're doing is going to counter the critical issues of invalidity. I'm not going to do all the work for you. Read the forum posts. Look at what they did. No one should have to hold your hand. This study has no future.

Phil Clarke
9/1/2014 01:47:25 am

Sorry - I should have put *where there were differences* the majority were in the direction of the scientists rating their papers as more strongly endorsing ....

Reply
Joe Duarte
9/1/2014 02:38:44 am

As I pointed out, this is now moot, given the inclusion of a bunch of non climate papers. There was massive disagreement in the categorization of the studies. And only 14% responded, and Cook excluded a massive amount of literature by skeptic scientists. It's stunning that nothing Dick Lindzen has published since 1997 was included. Lots of others too. This all means we're not going to care about their authors survey. It's a moot feature of the "study" at this point.

Reply
Phil Clarke
9/1/2014 03:31:48 am

>>I think you need to look up "straw man". A straw man involves the misrepresentation of another person's argument—you prove the misrepresentation of the argument wrong, then by flawed logic, conclude you've proven the real argument wrong.

Quite. The thrust of the argument is that the inclusion of 'a bunch of psychology studies, marketing papers, and surveys of the general public as scientific endorsement of anthropogenic climate change.' invalidates the paper.

First problem - the 'bunch' listed barely makes it into double figures, less than half of one percent of the thousands of abstracts. They could be removed and the arithmetic and conclusions would be basically unchanged.
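The arithmetic claim here can be checked directly. A minimal sketch, assuming the commonly cited counts from the paper (3,896 endorsing abstracts out of 4,014 expressing a position) and a hypothetical dozen disputed papers, all treated as endorsements:

```python
# Commonly cited counts from Cook et al. (2013): abstracts endorsing AGW
# versus all abstracts expressing a position. The "disputed" figure is an
# assumption standing in for the non-climate papers listed in the post.
endorse, total = 3896, 4014
disputed = 12

pct_with = 100 * endorse / total
pct_without = 100 * (endorse - disputed) / (total - disputed)

print(f"with disputed papers:    {pct_with:.2f}%")
print(f"without disputed papers: {pct_without:.2f}%")
```

Both figures round to roughly 97%, which is the point being made here; the counter-argument elsewhere in this thread is that the inclusion is a validity problem rather than an arithmetic one.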

Secondly, under the stated methodology of the paper it is perfectly possible for studies not *primarily* about climate science to be included if the abstract implies or states that human-caused global warming is real. This is made clear in the methodology and in the introduction, which says 'Through analysis of *climate-related* papers published from 1991 to 2011, this study provides the most comprehensive analysis of its kind to date.'

The title of the paper is 'Quantifying the consensus on anthropogenic global warming in the scientific literature' - not just the *climate science* literature, any more than the IPCC reports are all about WG1. So if an abstract to an engineering paper, say, makes it clear that the motivation for the study is 'because it could address issues that are related to reducing global climate change', as the alumina deposition paper does, then the paper is clearly in accord with, is part of, and endorses the mainstream consensus.

The categorisation may be, probably is, wrong in a bunch (translation: a handful) of cases, but this hardly rises to fraud.

>>The paper should be withdrawn because it violated human research ethics review requirements.

A whole new line of argument! Evidence?

>>The paper should be withdrawn because the so-called independent evaluators have been documented as discussing the paper among themselves, which removes any remote possibility that this so-called research can be considered publishable.

What exactly is wrong with researchers discussing methodology? By 'documented' do you mean the leaked internal discussion board (with all the possibility of getting the context wrong)? Help me out here - where is the evidence of the 'cheating' that the researchers have been accused of? A verbatim quote, please.

>>The paper should be withdrawn because the paper's statistical methodology is irreversibly flawed.

More detail please. Have you found something that the journal reviewers missed?

Reply
Carrick
9/1/2014 05:20:27 am

Phil Clarke: "Quite. The thrust of the argument is that the inclusion of 'a bunch of psychology studies, marketing papers, and surveys of the general public as scientific endorsement of anthropogenic climate change.' invalidates the paper. "

I'm afraid that's a straw man argument. That's not the thrust of the argument, but one facet of it.

"'Quantifying the consensus on anthropogenic global warming in the scientific literature' - not just the *climate science* literature, any more than the IPCC reports are all about WG1."

That's not what the paper states. The paper clearly frames this as a measure of consensus on climate change science, not a literary study with no external validity, which is what you are now trying to argue it is.

Such a paper as you are now suggesting it to be would have great difficulty getting published in a psychological research journal, of course, since literary analysis is not psychological research.

"A whole new line of argument! Evidence?"

No, it's not a new line of argument. Beaten into the ground, just not on this thread. I can provide links, but this is all information that you can verify (the information is all publicly available):

Cook's ethics review is online via an FOI request on Brandon's blog. The ethics review does not include an approval of the primary activity which took place here, which was the categorization by the raters of climate science papers. In fact, the ethics application only covered the portion of the study involving the survey of authors (which ironically in the US wouldn't have required an ethics application). University of Queensland regulation forbids the retroactive approval of ethics for research involving human participation.

Because the study, as approved, used data that was not approved and could not retrospectively be approved, my concern is that the ethics board was misled about the scope of this aspect of the research. The remedy for published research involving human participants that was not properly cleared by the ethics board is withdrawal of the study.

"What exactly is wrong with researchers discussing methodology! By 'documented' do you mean the leaked internal discussion board (with all the possibility of getting the context wrong)?"

As to specific examples, how about the ones in José's article that you obviously haven't read (tl;dr). It's very hard to come up with a context that makes the discussion of papers not a discussion of papers. But the whole forum thread is replete with examples:

http://www.hi-izuru.org/forum/The%20Consensus%20Project/2012-02-27-Official%20TCP%20Guidelines%20%28all%20discussion%20of%20grey%20areas,%20disputed%20papers,%20clarifications%20goes%20here%29.html

See 2012-03-15 13:11:25 for example.

Regarding "leaking"… the forum is metadata that is relevant to the study and should have been included with the paper. That it was leaked is not relevant here in any case, unless there are issues with its veracity (I've never seen anybody deny they discussed the abstracts among themselves).

As to the other point: you can discuss methodology, but you can't discuss the papers among the reviewers and maintain independence. However, the methodology needs to be decided upon during a training period (hold back 10% of the papers, let the raters discuss those among themselves, then rate the other 90%), and should not be an evolutionary process. What happened here, besides the rater-independence issue, is just terrible research design. I've literally seen much better executed high school science fair projects than this!

Regarding lack of independence, allowing the raters to discuss the papers would itself be considered an ethics violation by the author, were he to then try and claim the ratings were independently arrived at. Indeed, this type of misrepresentation would be considered an act of scientific fraud, and not an insignificant one, because the independence of the raters is needed in order to be able to apply any meaningful statistical analysis to the ratings data.

"More detail please. Have you found something that the journal reviewers missed?"

Not myself so much, though I agree with some of the issues.

The cross-validation statistics used to compare the 12,000 abstract ratings against the authors' self-ratings are very weak. Basically they binned the responses and compared the bins without considering order significance. Instead they needed to compare on a paper-by-paper basis. Order effects are important here. If you want to test whether you agree with James Hansen's categorization of a given paper, you need to compare that paper, not lump it in with 3,000 other papers you've also rated.

When you do, cross-validation fails at the 0.03 level. Discussed here:

http://www.bishop-hill.net/blog/2013/10/11/cooks-consensus-standing-on-its-last-legs.html
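The binned-versus-paired distinction can be illustrated with a toy example (hypothetical ratings, not the paper's data): two sets of ratings whose binned totals match perfectly can still disagree on every individual paper.

```python
from collections import Counter

# Hypothetical ratings of the same six papers, on a 1-3 scale
abstract_ratings = [1, 2, 3, 1, 2, 3]   # ratings from abstracts
author_ratings   = [3, 1, 2, 3, 1, 2]   # authors' self-ratings

# Binned comparison: the marginal distributions are identical
print(Counter(abstract_ratings) == Counter(author_ratings))  # True

# Paper-by-paper comparison: the two sets agree on zero papers
agreement = sum(a == s for a, s in zip(abstract_ratings, author_ratings))
print(agreement)  # 0
```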

How you correct for the lack of independence is probably the bigger issue. I would maintain it is impossible to develop a statistical model describing the complex interaction between the participants, but in any case, any model that assumes statistical independence between the raters must be invalid.

Joe Duarte
9/1/2014 05:31:52 am

Phil, I think you're Carrick or someone else, but I can address the independence issue.

The paper said they used independent raters. They lied. They discussed their ratings in a forum. That alone would void the paper, and does in fact void the paper.

The reason that subjective rating studies use multiple independent raters is so that their ratings are valid, and not just based on one skewed judge. The reason why the raters have to be independent is so that one rater's judgments won't be contaminated by others. Then the independent ratings are tested for reliability, which is interrater reliability. If interrater reliability isn't high, it's busted -- you don't have a reliable measure, in essence, because the raters didn't agree.

You can't even calculate interrater reliability validly if the raters are not independent. It wouldn't mean anything. And there's no point in using multiple raters if they're not independent, if you're going to contaminate their ratings in a stupid forum. (Unless you have a novel theory of measurement based on collaborating raters -- that would require a whole bunch of work and validation that is beyond the scope or abilities of the Cook crew.)
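To make the reliability point concrete, here is a minimal sketch of Cohen's kappa, one standard interrater reliability statistic for two raters (the ratings below are hypothetical, not Cook et al.'s data); the statistic only means anything if the two sets of ratings were produced independently:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.
    Valid only if the two raters produced their ratings independently."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical endorsement ratings on the paper's 1-7 scale
a = [4, 4, 2, 4, 3, 4, 4, 1, 4, 4]
b = [4, 3, 2, 4, 4, 4, 4, 1, 4, 2]
print(round(cohens_kappa(a, b), 3))
```

If the raters have compared notes, kappa can be driven arbitrarily high without measuring anything, which is why non-independence voids the reliability calculation rather than merely weakening it.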

In this case, all of the above is largely moot because it was a scam. The raters were ideologically biased activists who had a particular outcome in mind, and were given the power to create that outcome given that the study was based on their subjective ratings of climate science abstracts (and other random abstracts evidently). This makes interrater reliability and rater independence meaningless, and invalidates the study. We would expect high reliability from ideological raters (who are all of the same ideology) -- it wouldn't mean anything, since it's invalid to have them rating something directly pertinent to their ideological aims. (Their IRR is surprisingly low, evidently, based on analyses others have done and apprised me of -- shockingly low figures. It makes sense now that I've seen a lot of their data -- the raters were all over the place.)

We'll never escape the invalidity of using political activists to subjectively rate science abstracts on the issue of their activism. That makes the whole paper go away. The burden is entirely on the idiots who conducted such a "study" to validate their ratings in every possible way and deal with all the obvious issues. It would never be on us, on anyone else. You want me to carry the burden. I am rejecting that burden. I see myself as having been extremely generous to even document as much as I have. We should never have to document all the absurdities that result when political activists are able to create data by subjectively rating science abstracts on the issue of their activism. Science and academia will ultimately have to agree on that -- we will have to agree that this design is forever unacceptable and invalid. This study has so many other problems, so many catastrophic issues, including fraud, that we really shouldn't waste too much more time on it. You've come to the point where you're willing to ignore false statements about methods used -- that would undermine science as a field.

Joe Duarte
9/1/2014 05:33:16 am

Typo again -- I meant to say I think you're *quoting* Carrick or someone else.

Phil Clarke
9/1/2014 03:59:01 am

Well, obviously I disagree that the minute number of examples that you've presented to back up your case is 'pointless', and so should you if you care about evidence.

The Straw Man is easily clarified - your position is that the paper is being presented as a measure of the consensus on AGW in the climate science community, hence consideration of papers not primarily on the topic shouldn't count, whereas the reality, clearly laid out in the methods, is that it is an attempt to quantify the breadth and depth of acceptance and endorsement of the AGW consensus in the scientific literature as a whole, hence any scientific paper that states or implies such acceptance is fair game.

Fraud is indeed fraud, and an isolated example of somebody *discussing* breaking protocol on a discussion board hardly counts. A larger question is 'So what?' Did the breach occur in the actual study? Did it have the slightest impact on the conclusions of the paper? I suspect the answers are 'You don't know' and 'No' respectively.

The paper may or may not have a future, but it certainly has a distinguished past, it was the most downloaded paper of the year and the board of ERL voted it the best article of the year. Just maybe they have not abandoned their sense of perspective.

Reply
Carrick
9/1/2014 05:48:44 am

Phil, from the conclusions:

"Among papers expressing a position on AGW, an overwhelming percentage (97.2% based on self-ratings, 97.1% based on abstract ratings) endorses the scientific consensus on AGW."

See the "endorses the scientific consensus on AGW" phrase? The point being made here is that papers that study opinions cannot be seen as an endorsement of AGW, but were included as if they were.

The purpose of the literature review is to characterize the level of endorsement of AGW. Yes, they included all papers that met certain search phrases, but they left many out and ended up with a sample biased toward sociological and psychology journals by using "global climate change" rather than "global warming" and "climate change" (phrases used more commonly in the physical sciences).


"Fraud is indeed fraud, and an isolated example of somebody *discussing* breaking protocol on a discussion board hardly counts. "

As I discussed above in my tl;dr comment, discussion among the raters breaks the research protocol described in the paper and invalidates the assumption of independence of researchers. That isn't fraud, that is just a very crappily run research project.

Where it gets dicey from an ethical perspective is that Cook set up the forum (using his own software, so it wasn't as if he pushed the wrong button and it happened on its own), so he clearly knew that the independence of the raters had been violated, yet he just as clearly stated that the ratings had been done independently in his paper:

"Abstracts were randomly distributed via a web-based system to raters with only the title and abstract visible. All other information such as author names and affiliations, journal and publishing date were hidden. Each abstract was categorized by two independent, anonymized raters."

They were neither independent nor anonymized. This is not a peripheral piece of information; it is central to psychological research studies. Cook knew this was false, but misled his reviewers and his audience in his write-up. Clearly he was aware that what he stated was false, and he was aware of what the implications of a lack of independence are for his study.

The real question should be "how is this not academic fraud?" I'm sure you'll have a reason, and would be equally lenient were Monckton to have behaved in the same manner as Cook.

Phil Clarke
9/1/2014 06:47:34 am

"The paper said they used independent raters. They lied. They discussed their ratings in a forum. That alone would void paper, does in fact void the paper."

Well, let us see what the paper *actually stated*, shall we?

"some subjectivity is inherent in the abstract rating process. While criteria for determining ratings were defined prior to the rating period, some clarifications and amendments were required as specific situations presented themselves"

So the paper itself concedes a degree of subjectivity and states that the criteria were fine-tuned to deal with unforeseen cases, which is exactly what we saw on the online forum. How on earth do you think that was done if not by the researchers communicating with each other? It may not be the best experimental design in the world but it is a long way from invalidating the data.

also

"Initially, 27% of category ratings and 33% of endorsement ratings disagreed. Raters were then allowed to compare and justify or update their rating through the web system, while maintaining anonymity. Following this, 11% of category ratings and 16% of endorsement ratings disagreed; these were then resolved by a third party."

So full independence only applied to the first step, as part of the reconciliation process raters were permitted to 'show their working'.

No smoking gun, no lies. It's one thing to assert that a bias MUST have been introduced because the raters were of a particular viewpoint - a point discussed in the paper - it is another to show that their ratings were systematically inaccurate, which is what would actually invalidate the research. Has this been done?

Carrick
9/1/2014 07:06:20 am

Phil: "some subjectivity is inherent in the abstract rating process. While criteria for determining ratings were defined prior to the rating period, some clarifications and amendments were required as specific situations presented themselves"


Cook very specifically and in plain language stated that the reviewers were both independent and anonymized.

We know that neither of these are true, and you are left arguing over the degree to which they were not true. So I think you are using the "almost a virgin" argument here.

It's not a strong argument.

Adjusting methodology should never happen in the middle of a study. That's what the (missing here) training period is for.

Further, without providing the full metadata (what we have here was provided as part of a likely hack of their website), there is no way to determine the degree of independence among the raters.

I assume you have a bit of familiarity with statistical methods. How does one model interaction effects when the degree of interaction is not fully documented? The correct answer is "you can't".

I'll note that on his website, where the question "were the reviewers independent?" is asked, the response studiously avoids answering it and instead points to the apparent validation of the raters' results against the authors' ratings. Given the weakness of that comparison, this amounts to a non-informative response.

Worse, from my perspective, I'm left wondering why the question was never answered directly.

I think it's because Cook himself can't actually say how much interaction occurred between the raters, because they were not anonymized.

This is every bit as serious an error as using political activists to do the ratings in the first place: because they knew each other, you have no independent way of verifying whether or not there was contact between the forum members outside of the forum.

As to the "no smoking gun" statement, Cook very much did state clearly: "Each abstract was categorized by two independent, anonymized raters."

We know this statement is false, and even if you can find waffle language that speaks to a different reality, this statement remains false. And it is a BIG deal, not a minor aspect of the paper: There was a misrepresentation made, and it was not a small one.

Joe Duarte
9/1/2014 08:53:20 am

Phil, your statements about the forum are false. They discuss actual ratings of specific papers. Given this fact, your subsequent statements are also false. They were never independent -- they discussed their original ratings of specific papers in the forum. Their description of their method in the paper, that they used independent raters, is completely false. Their statement that raters were blind to the authors of the articles they were rating is also completely false.

In the discussion that you should've read, one rater says:

"So how would people rate this one:..."

After quoting the abstract, he asks:

"Now, once you know the author's name, would you change your rating?"

The words "the author's name" were a hyperlink to the paper, obliterating blindness.

It was a Lindzen paper.

The rater openly outed the author of one of the papers to all the other raters. He was never rebuked, and everyone carried on as if fraud hadn't just happened, as if the protocol hadn't been shattered. The rater later said "I was mystified by the ambiguity of the abstract, with the author wanting his skeptical cake and eating it too. I thought, 'that smells like Lindzen' and had to peek."

Another rater says "I just sent you the stellar variability paper. I can access anything in Science if you need others."

The paper. The whole paper. Meaning everything is revealed, including authors. Do we need to quote the Cook methods section again for you?

Another rater says: "Sarah: It's a bad translation. The author, Francis Meunier, is very much pro alternative / green energy sources."

Helpfully, another rater links to the entire paper: "I have only skimmed it but it's not clear what he means by that and he merely assumes the quantity of it. Note that it was published in the journal Applied Thermal Engineering. pdf of a related article here"

Cook helpfully adds "FYI, here are all papers in our database by the author Wayne Evans:"

This is all overkill -- I gave you more than enough to go on before. But this stunningly illustrates the motives of the raters, and destroys any claim that they weren't biased, that this study wasn't a scam conducted by political activists, that the method described in their article was actually followed.

If you don't care about fraud, if this all seems kosher to you, we're done here. In any case, I'm not going to keep correcting your factual errors or your complete ignorance of the methodological requirements of a subjective rating study. It's important to know what you're talking about, and to have substantive arguments.

Carrick
9/1/2014 05:51:00 am

Sorry for all of the typos. I'm doing this in the middle of real measurement. So it's not perfect, but I hope it doesn't detract too much from the substance of what I'm trying to communicate.

Reply
Phil Clarke
9/1/2014 07:01:54 am

>>As I discussed above in my tl;dr comment, discussion among the raters breaks the research protocol described in the paper and invalidates the assumption of independence of researchers. That isn't fraud, that is just a very crappily run research project.

Ah, so it's NOT fraud. What I saw in my scan of the forum is one researcher conceding that she once 'peeked' at the whole paper rather than just the abstract. Not ideal - but a weak hook on which to hang the assertion that the whole thing should be thrown out. And given that the paper is open in stating that 'criteria for determining ratings were defined prior to the rating period, some clarifications and amendments were required as specific situations presented themselves', I am unconvinced that the research protocol as described WAS broken.

As for Monckton, the interpretation he places on Legates et al., well, let's not go there.... ;-)

Reply
Carrick
9/1/2014 07:20:01 am

Again, that part is not fraud. It is at best a poorly designed and executed measurement protocol.

If you find one example of a researcher conceding that they "peeked at the whole paper", then you know it did happen, but there's no way to determine how frequently it happened (just once, or many times).

Well designed measurement protocols with good controls preclude that possibility from occurring. (You probably couldn't do this as an internet-based rating system and have enough control to preclude "cheating".)

The part that is putatively fraud is making deliberate misstatements about the actual conduct of the research. And this doesn't have to be a ubiquitous behavior before it becomes a research ethics problem.

It appears to me that you think we should just ignore the very many warts, abrasions, stress fractures and totally broken portions of this paper, which in the end says very little, and accept its conclusions, because the conclusions are plausibly correct.

Science should not be judged based on whether we like or can use the results, but whether the study plausibly has added any new information to the scientific corpus.

That's the bottom line to me… is there anything of any real value in this paper? I think the answer is "clearly no".

TerryMN
9/1/2014 07:23:45 am

Phil, it really seems you're trying too hard to score points and not nearly hard enough to just take an objective look at the methodology used in the paper.

Carrick
9/1/2014 06:07:33 am

Phil, one other comment and I'm through here for a while.

In the ratings of reviewers, they had a category for papers that did not take a position on AGW.

It is perfectly okay to screen a database for a series of search terms, and to screen those results for papers that take a position on an issue that you want to study.

The problem comes in when you have a systematic mischaracterization of a class of papers, so that you include a vast swath of papers that neither endorse nor reject the "scientific consensus on AGW" in your study. It becomes a bigger problem when your supposedly inclusive sample of 12,000 papers ends up being a subsample that is biased towards the same class of papers that you've errantly decided to include as responsive to the survey question being examined.

And in the middle of this we have the specter that "[they] excluded every paper Richard Lindzen has published since 1997. How is this possible? He has over 50 publications in that span, most of them journal articles. They excluded almost all of the relevant work of arguably the most prominent skeptical or lukewarm climate scientist in the world. Their search was staggering in its incompetence. They searched the Web of Science for the topics of "global warming" and "global climate change", using quotes, so those exact phrases. I don't know how Web of Science defines a topic, but I'm guessing it's looking at the keywords authors select for their papers."

So not only did they include classes of papers that can't be viewed as having a position on the scientific consensus on AGW, it appears they omitted a whole swath of papers that they thought might cast a negative light on AGW.

Let's pause for a serious WTF moment now.

Reply
Barry Woods
9/1/2014 07:19:56 am

Let us not forget, this was supposed to be a bigger, better version of other consensus papers; thus the 12,000 papers figure is important to the activists. They discuss this in the leaked forum.

They even created a graphic, for marketing purposes...
http://www.skepticalscience.com/graphics.php?g=79

Not sure where the 98.5% of 10,306 'scientists' comes from - another misrepresentation of their own work?

i.e. it was papers, not scientists (a scientist can have more than one paper), and 8,000 papers had no position.

https://wattsupwiththat.files.wordpress.com/2013/08/index-of-_images_user_uploaded.pdf


Other examples of how activist they are:

Strategy for anticipating denier response 'we don't deny that humans are causing global warming'
http://www.hi-izuru.org/forum/The%20Consensus%20Project/2012-03-05-Strategy%20for%20anticipating%20denier%20response%20'we%20don't%20deny%20that%20humans%20are%20causing%20global%20warming'.html

Cook: Expect that one denier response to TCP will be "we've always agreed that humans are causing global warming - we just dispute the degree of causation or that climate sensitivity is high" or something to that effect.

When someone posts this response, we can dig into the SkS database and find all instances where that blog/denier gave an argument under the category "It's not us" - the SkS database will have all that information. Then we can post a blog post "XXX reverses position on humans causing global warming", citing their worst examples of denying AGW along with their new quote "we don't deny AGW".

Then when they go on to post another argument for "It's not us", we can point out their contradiction again.

Not sure if we want to get that petty but just something to think about, anticipating the lines of attacks we will face.

Dana:
"I thought category #1 was our response to that criticism - in addition to 'x' percent of papers endorsing AGW, 'y' percent endorse AGW as the primary cause of the observed warming.

Then there's the future phase of the TCP where we do a survey of climate sensitivity papers to prove there's a consensus on that issue as well. That'll really kill the deniers."

Cook:
BTW, here's an example of the kind of response we can expect - this is the WSJ 16 response to the Doran/Anderegg 97% consensus:

http://theclimatescepticsparty.blogspot.com.au/2012/02/scientists-riposte.html

"The Trenberth letter states: “Research shows that more than 97% of scientists actively publishing in the field agree that climate change is real and human caused.” However, the claim of 97% support is deceptive. The surveys contained trivial polling questions that even we would agree with. Thus, these surveys find that large majorities agree that temperatures have increased since 1800 and that human activities have some impact. But what is being disputed is the size and nature of the human contribution to global warming. To claim, as the Trenberth letter apparently does, that disputing this constitutes “extreme views that are out of step with nearly every other climate expert,” is peculiar indeed."

So will be interesting to see the number of Level 1 endorsements that addresses directly this argument."

-------------------------------

Skeptical Science trying to predict sceptical reaction to the paper:
http://www.hi-izuru.org/forum/The%20Consensus%20Project/2012-03-07-Skeptic%20reactions.html

Cook not exactly sounding neutral (perhaps a little paranoid?)

"The wording will have to be very carefully constructed because as you say, this will be going out to deniers too. Considering every denier scientist seem to have a direct line to a red phone on Anthony Watts' desk, the existence of TCP will probably known to Watts before we've even looked at the results from the scientists. A scary thought really. For that reason, I think we should wait till as late as possible before emailing the scientists. Eg - wait till after quality control, once our results are done and analysed and the scientists' ratings are the final piece in the puzzle.
Keeping in mind our email will likely get broadcast on the denialosphere, we have to be very careful to have neutral wording that isn't leading in any way. The word consensus will likely not even be mentioned. But this isn't the thread to discuss that. I've started a thread just tonight on pinning down the quality control process and once that's dealt with, then I'll start working on the scientists self-rating stage. But if people want to post thoughts about that process, start a new thread and we can collect ideas in there."

------------------
bunch of activists

Reply
empire sentry
3/26/2015 04:23:25 pm

These people are just downright disgusting. They stoop to anything to keep their social movement going. Effing incredible.
Those who have been sucked in to feel like they belong, or who are in denial that they may have been wrong in the past or might be wrong now (as if they had picked a football team), will do all they can to protect their position... regardless of facts.
Stage 1: Emergence; Stage 2: Coalescence; Stage 3: Bureaucratization; Stage 4: Decline. Persons will not leave, due to the solidarity of 'belonging' or fear of the same attacks and retaliation they themselves have been dishing out to non-members.
So, how do we raise awareness of the scams? Anytime anyone brings up facts, they are labeled deniers. I had this "debate" today. Woman said the extreme right deniers never offer any proof therefore, they are wrong and, when they do offer anything, its all lies.
I look to Australia and Sweden. They implemented their Green processes. People made drastic sacrifices. People decided to evaluate facts versus the sales pitch and opted out. What can we do to bring this to bear in the US?

Brad Keyes link
9/2/2014 04:40:03 am

Joe,

"The problem comes in when you have a systematic mischaracterization of a class of papers, so that you include a vast swath of papers that neither endorse nor reject the "scientific consensus on AGW" into your study."

At the risk of sounding like a broken record, the vast majority of papers Naomi Oreskes laid claim to in her seminal 2004 consensus paper—oops, I mean *essay*; pseudoscholarly *essay*—took no position either way. That didn't save them from being lumped in with the small handful that could be said to endorse it.

Unlike Cook et al., Oreskes was not only a mentally competent adult but (nominally) a scientist, so she didn't even have the excuse of naivete. Yet her brazen academic fraud didn't even cost her her job.

This is all out in the open. Plenty of people are going to have a hard time insisting they didn't know what was going on when they stand before their maker. Unfortunately for them there's a special section of hell, the Stephen Schneider wing, just for hackademics who put their mortgages ahead of their oaths to defend science.

Reply
Tucci78 link
9/1/2014 08:57:40 am

I've tried to track each comment as posted on this thread, but I'm not infallible, so pardon me if I'm repeating a point already discussed.

Plainly, the Cook et alia "97%" paper is a crock of stinking filth, without validity from the bow wave to the cook's garbage dump off the fantail, but the level of credence in the catastrophists' anthropogenic global climate change (AGCC) hokum which prevails among experienced investigators in the various "hard" scientific disciplines seems to me a legitimate subject upon which *honest* inquiry could be undertaken.

Without further analysis of the egregious batpuckey shoveled by Cook & Co, what might a SOUND, unprejudiced study of the pertinent scientific literature show, and how would such a study be designed to gather, interpret, and present such data to the best possible levels of reliability as an insight into "sucker quotient" among those people who might be expected professionally to approach the preposterous bogosity of AGCC with their B.S. detectors functioning?

Reply
Phil Clarke
9/1/2014 09:45:14 am

Yep, we're done. Despite numerous attempts to frame the debate, and reliance on personal communications stripped of all context, you have not provided a single unambiguous case of a researcher revealing which papers they worked on to a coworker working on the same rating, or of collusion specifically to arrive at a particular result. What I see is researchers discussing methods, based on specific cases, AS THE PAPER EXPLICITLY STATED HAPPENED.

You have not provided a single example of a paper being incorrectly rated or any evidence whatsoever of systematic bias actually being introduced, just a lot of assertions and opinions. Your sample size to back up the argument that some papers were inappropriately inadequate is woefully inadequate.

Yep. We're done.

Reply
Phil Clarke
9/1/2014 10:12:07 am

Sorry, that should have read ' Your sample size to back up the argument that some papers were inappropriately included is woefully inadequate.'

There's a lack of proportion here. Tim Curtis, who was there, says that 5-10 abstracts were discussed by raters, which was a breakdown in the research method. But he also does the appropriate, and scientific thing, of assessing the impact on the conclusions. Even if 10x that number were invalid, the arithmetic would not change much, certainly a lot less than Cook's stated 7% error rate.

It is a fact of life no research is ever conducted flawlessly.

Equipment is miscalibrated, protocols are not scrupulously followed, it happens and when it does the appropriate response is to allow for it, quantify the effect on the conclusions if possible, attempt to reproduce the study using the raw data (which anyone could do and nobody has, to my knowledge).

Shrieking 'fraud' and insisting the whole thing needs to be dumped is just silly.

Reply
hunter
9/1/2014 06:09:01 pm

Phil, if Jose's paper was showing that the consensus on heliocentric universe theory was bunk, you would be calling for him to be visited by those friendly guys from the Papal Inquisition. It has been entertaining to see so much effort spent to avoid the obvious by someone claiming to defend science. And your assertion that not one paper has been shown to be miscategorized is evidence at best of a lack of reading comprehension by you.

Reply
Carrick
9/2/2014 05:59:05 am

Phil Clarke: "Tim Curtis, who was there, says that 5-10 abstracts were discussed by raters, which was a breakdown in the research method. But he also does the appropriate, and scientific thing, of assessing the impact on the conclusions. Even if 10x that number were invalid, the arithmetic would not change much, certainly a lot less than Cook's stated 7% error rate."

Again it's an issue with experimental design and control. By anecdote, Tom Curtis knows of 5-10 examples where discussion occurred. That's hardly proof that there were only 5-10 examples total. In fact, it's completely nuts that you think this would prove that.

We don't know from Tom Curtis that the number of times where it was discussed was limited to 5-10 times. We in fact have no way to know at this time how many times, and for which papers, there was an interaction among the raters. At this point, this information is unknowable beyond that.

What this means is that the design was poor enough to allow uncontrolled interactions. This is an unacceptably low standard for peer-reviewed research, and is enough to disqualify this paper from publication in a peer-reviewed journal.

I should stress that it is perfectly possible, not at all difficult, and indeed happens all the time, that experiments are performed with no interaction between raters: anonymizing the raters and not selecting them from a pool of like-minded activists is a very simple way to fix this problem.

"It is a fact of life no research is ever conducted flawlessly."

Certainly I understand this because I do research myself. If you look around very much, you'll find me saying frequently that all research has "warts".

The question is not whether the work is flawless before it can be published, but at what point you decide it is so poorly done that it should not be accepted for publication. I think you need to up your standards a bit if research whose validity we cannot even assess ("not even wrong") can make its way into the peer-reviewed literature, and, once there, stay there after major flaws have been discovered.

The problem here is a lack of control prevents us from being able to test the internal validity of the study. We know that interaction occurred, but we don't know how often. We know that studies were included as addressing the consensus on climate change, but we don't know the frequency of this. And so forth.

But the authors should know, and should report on these questions. It is their responsibility to address them, and it's important that they draw specific attention to the warts they know about, instead of papering them over and hiding them.

It's not the reader's responsibility to ferret out these issues for the authors.

Reply
Phil Clarke
9/2/2014 07:17:38 pm

>>"We don't know from Tom Curtis that the number of times where it was discussed was limited to 5-10 times. We in fact have no way to know at this time how many times, and for which papers, there was an interaction among the raters. At this point, this information is unknowable."

I believe him. Here's the source: http://www.skepticalscience.com/97-percent-consensus-robust.htm#103135

But well, quite. Clearly the degree of interaction between the raters is unknown and non-zero. But then the paper told us that, in the 'sources of uncertainty' section it states:-

'While criteria for determining ratings were defined prior to the rating period, some clarifications and amendments were required as specific situations presented themselves'

There are all sorts of problems with relying on purloined material from an internal team discussion as proof of anything. Cuts both ways. I see only one unambiguous breach of the protocol, out of >12,000 ratings. Yawn.

Clearly, the experimental design could have been better (ain't that always the case). You are of the opinion that the flaws and the non-independence make the study unpublishable; the journal did not, and we'll see whether the board finds the 'forum' evidence requires a retraction ....

But to repeat myself, the impact of any flaws could easily be quantified; the abstracts and ratings are online. Instead of raters, imagine non-destructive measurements of a physical sample taken with an instrument that you later discover may have been biased. We still have the original samples: re-measure with a properly-calibrated instrument and quantify the bias. Take a statistically significant sample of the abstracts, pass them through an 'unbiased' rating process (good luck with that) and compare with the Cook number. A few man-days, tops. If systematic bias is found, you can announce 'the 97% is wrong, it should be X'. Seems to me a better approach than throwing poorly-substantiated accusations of fraud around and pointing to insignificant numbers of (possibly) poorly-chosen papers.

Of course it will not happen and we know why it won't.

But I repeat myself, so time to go. Data. It wins every time, the rest is just conversation.
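[Editor's note: Phil's suggestion above, to re-rate a statistically significant sample of the abstracts and compare the result with the 97% figure, can be sketched numerically. This is a minimal illustration only; the 0.97 baseline, the ±2% margin, and the 95% confidence level are assumptions chosen for the example, not values from the paper.]

```python
import math

def sample_size_for_proportion(p=0.97, margin=0.02, z=1.96):
    """Abstracts needed to estimate an endorsement share near p
    within +/- margin at ~95% confidence (normal approximation)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for H0: the two endorsement rates are equal."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)          # pooled endorsement rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

print(sample_size_for_proportion())         # 280 abstracts at these settings
```

At those (assumed) settings, re-rating roughly 280 abstracts would suffice; the two-proportion z statistic then tests whether the re-rated endorsement share differs significantly from Cook's.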

Phil Clarke
9/1/2014 07:02:22 pm

Well, thanks for the little thought experiment, Hunter.

Notwithstanding his problems with this paper Jose is on record as saying the majority of climate scientists do indeed support the concensus.

On the subject of reading comprehension, what I said was that no paper had been shown to be miscategorised *here*. Maybe a handful of borderline judgment calls at best. If memory serves, there were some examples, after the paper was published, of sceptics claiming that papers had been miscategorised. Again, a handful; a flea-bite on the mountain of evidence collected.

There are no perfect studies, few pristine datasets, the challenge is to quantify the defects, and there are tools to do this. If I were able to open up the mailboxes of every research team in the world, I am sure I would find examples of minor protocol breaches, human error, instrument failures and other scientific dirty laundry.

Science is an ongoing self-correcting enterprise. Cook et al have made their raw data available so that the impacts of the various flaws could easily be quantified; heck, the readership of the more popular 'sceptical' websites could do their own ratings in a week or so. We note that this will not happen, we wait to see how the journal and authors respond, and in the meantime we also note that the result is remarkably close to previous similar exercises, both peer-reviewed and less formal; we note with amusement the increasingly shrill cries of malpractice and 'fraud'. :-)

Reply
Rational Optometrist
9/1/2014 07:50:53 pm

"We note that this this will not happen, we wait to see how the journal and authors respond, in the meantime, we also note that the result is remarkably close to previous similar exercises, both peer-reviewed and less formal, we note the with amusement the increasingly shrill cries of malpractice and 'fraud'."

Phil - who are you speaking for here?

Reply
Phil Clarke
9/1/2014 08:32:26 pm

The concensus. ;-)

Brad Keyes link
9/1/2014 09:40:29 pm

I just want to emphasize that simply because the overwhelming majority of English speakers would write "consensus," that does NOT make Phil's spelling of the word wrong.

The majoritarian fallacy is only valid *in science.*

Phil Clarke
9/1/2014 10:05:52 pm

Thanks, Brad. Now you've pointed that out, reading it back, it grates.

One of the increasingly long list of things I *used* to know ....

Reply
CC Squid
9/2/2014 02:53:21 am

"If people can just make stuff up and apply any ideologically-loaded label to anything, social science could collapse. "

I am afraid that this is true of all disciplines that have to include "science" in their title. These fields added "science" to their titles to capitalize on the prestige of the science of the '60s. When I see "science" in a discipline's title, I immediately append "charlatan".

Follow the Money!

Reply
Carrick
9/2/2014 06:02:26 am

This is probably my last comment on this thread.

I thought it might be worth noting what *I think* should be done to properly replicate this study. (Note the following includes points made by Richard Tol and Brandon Shollenberger).

1) A new sample of publications that is less biased in its selection (more physical climate science pubs, for example, and fewer social science and psychology pubs).

2) A better analytic methodology for selecting the number of publications. (Why 12,000 instead of, say, 3,000? I assume 12,000 was chosen because it sounded "big" and not because there were competent scientific reasons for choosing it.)

3) Use fully anonymized raters, and provide a controlled environment for the ratings that does not permit them to perform internet searches.

4) Following standard research protocols, the PI should apply for and have received human subjects research approval before commencing *any* data collection using the raters that have been recruited for the study.

5) More overlap between raters. I'd say "at least three" per paper.

6) A more uniform number of ratings per rater. Most of the ratings were done by a few of the raters; allowing this asymmetry was a bad experimental design decision.

7) A better set of criteria and categories for rankings.

8) A predefined training period where a subset of the publications is analyzed and used to refine the ranking process.

9) The metrics for validation should be described and tested using synthetic data prior to the beginning of the full measurement period. (For me, this would occur after step 8, and I would use step 8 to help formulate a good set of validation requirements.)

10) Fully anonymized data (including metadata), whether the researchers think it is "scientifically relevant" or not, should be released when the paper is accepted for publication.

I'm sure there are some items I've missed, but having a good research plan in place before you undertake a large scale study like this is the best way to prevent it from crashing and burning.

I do think there would be value to a replication of this study that learns from and fixes the problems of this one.

But I don't think a valid replication of this project should be an exercise left to the reader. Nor do I think it's valid to criticize a reader who notices flaws in a paper by expecting him to march out and do the work the researcher was paid to do.
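[Editor's note: three of the protocol points above, fully anonymized raters, multiple raters per paper, and a uniform workload, can be sketched as a simple assignment scheme. This is a hypothetical illustration; the function name, the round-robin approach, and the parameters are editorial assumptions, not anything specified in the thread.]

```python
import itertools
import random

def assign_raters(paper_ids, rater_ids, per_paper=3, seed=0):
    """Assign each paper `per_paper` distinct anonymized raters while
    keeping every rater's workload within one paper of the others.
    Assumes there are more raters than per_paper."""
    raters = list(rater_ids)
    random.Random(seed).shuffle(raters)      # anonymized, reproducible order
    cycle = itertools.cycle(raters)          # round-robin keeps load uniform
    assignment = {}
    for pid in paper_ids:
        chosen = []
        while len(chosen) < per_paper:
            r = next(cycle)
            if r not in chosen:              # distinct raters per paper
                chosen.append(r)
        assignment[pid] = chosen
    return assignment
```

A round-robin over a shuffled pool guarantees that rater workloads differ by at most one paper, which addresses the asymmetry complaint directly.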

Reply
Phil Clarke
9/2/2014 08:22:36 am

All very sensible. Especially (1): I checked, and Cook et al excluded more than half of James Hansen's publications. Reflect for a moment: they excluded almost all of the relevant work of arguably the most prominent climate scientist in the world. Their search was staggering in its incompetence ;-)

But given that journals generally publish novel research and this would likely just confirm Cook's numbers, I guess getting funding might be a challenge ... I don't suppose Watts or Steve (FOI) McIntyre will want to crowdsource it, for obvious reasons ;-)

Reply
Joe Duarte
9/2/2014 09:05:02 am

Hi Carrick,

Something that's always struck me as strange about satanists is that their worldview is imported from their presumed adversaries – Christians. Their whole satan thing pivots around someone else's framework. Now, we never really hear about satanists anymore, I don't know any, and maybe their existence was a myth. But they seemed pointless to me, the basic idea was pointless.

Trying to replicate Cook et al strikes me as similarly pointless. We have a nasty habit in certain quarters of science of giving an arbitrary epistemic privilege or standing to something simply because it exists, or was published. It's a type of first-mover advantage, and it's bad epistemology, especially when the paper is invalid or fraudulent. Fraud and invalid papers – and Cook is both – should be cleansed from our memories. They have no standing, do not represent knowledge, should never be published, and if they somehow snuck through peer review, should be retracted. Then everyone should carry on with their lives as before.

Counting papers isn't going to be a valid measure of some sort of knowledge here. If by consensus we mean something that carries epistemic information, something that weighs into our assessment of the truth of a thing, counting papers isn't going to be the route.

I didn't mention this in the essay, because I was bored and all these things should already be thought of by the journal, IOP, reviewers, AAAS, etc., but counting papers has an inherent duplicative flaw, in many ways. Papers by the same authors on the same topic, model, or test are often going to be epistemically duplicative. Each paper does not carry an equal unit of consensus or epistemic weight, information, etc., especially since authors tend to stick to their hypotheses or narratives through thick and thin.

It's easy to model the math of the anchors of consensus in climate science -- the perception of a consensus there (which I assume is true) -- and then how it networks out to other fields, which import the AGW assumptions or projections as premises for whatever they're doing, or just make a drive-by comment about AGW in their work in order to improve their grant or publication prospects. The consensus would proliferate in ways that parallel the proliferation of other kinds of things. The math models are well established and would be easy to apply here. So we'd probably scratch all those mitigation and impact papers that just import the consensus instead of representing it per se.

Counting papers would be restricted to a strict set of mostly attribution papers, with discounted weights for successive attribution papers by the same authors. Most of the rest of the papers could be explained by the sociology and social psychology of science. Or at least, we'd have to address that carefully and couldn't just go around counting a bunch of papers like monks with a counting fetish.
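[Editor's note: the discounted-weights idea above can be made concrete with a toy scheme that applies a geometric discount to successive papers by the same author. Everything here is an illustrative assumption: the 0.5 discount factor, keying on a single author per paper, and the function name are editorial choices, not a scheme proposed in the post.]

```python
from collections import defaultdict

def discounted_weights(papers, discount=0.5):
    """Weight each paper by discount**k, where k counts earlier papers
    already credited to the same author; repeat papers by one group
    carry less independent epistemic weight."""
    seen = defaultdict(int)                  # author -> papers counted so far
    weights = {}
    for pid, author in papers:               # assumes chronological order
        weights[pid] = discount ** seen[author]
        seen[author] += 1
    return weights

w = discounted_weights([("p1", "A"), ("p2", "A"), ("p3", "B")])
# p1 and p3 keep full weight 1.0; p2, a second paper by A, is discounted to 0.5
```

Under such a scheme, a consensus estimate would sum weights rather than count papers, so a prolific group restating one attribution result repeatedly would not inflate the tally.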

Reply
Brad Keyes link
9/2/2014 04:27:40 pm

Joe,

"Trying to replicate Cook et al strikes me as similarly pointless."

It is—every bit as pointless as Cook et al. itself, which purported to replicate the equally pointless Oreskes 2004.

Also, not to put too fine a point on it, but why should the SkS kidz bother being climate-scientifically competent when the queen of consensus "studies" herself is a half-geologist-half-historian?

"If by consensus, we mean something that carries epistemic information, something that weighs into our assessment of the truth of a thing..." Hang on. We mean nothing of the sort. If you mean that, you've misunderstood the word.

What we mean is what the word means, and the word means 'majority opinion.'

Here I'm afraid you're getting confused, Joe. 'Consensus' quantifies opinion. Not knowledge, not evidence, not even information.

In the natural sciences, *what people think about nature* is worth nothing. By definition. By axiom.

This is a principle all scientists have accepted and followed for, oh, 300 years now. Only with the advent of climate "science" do we suddenly seem to find it difficult. It's not difficult; it's not complicated; stop trying to make it so.

Opinion is worth squat. Overwhelming opinion is worth overwhelming squat.

Evidence or go away.

Oreskes and her epigones are scientifically obliged to go away.

Rational Optometrist
9/3/2014 02:36:09 am

Joe - Brad has it correct but I'd be interested in your take. I think it's a common misunderstanding that consensus 'weighs into the truth of a thing', when in fact it has nothing to do with 'truth' (if you mean scientific fact) and everything to do with opinion.

There's a serious epistemological flaw in holding up the '97% consensus' as evidence of ...err... whatever the object of the consensus was meant to be. Let's say it's that humans are causing most of the observed C20th warming. The consensus study offers no truth / evidence about whether this is the case. It doesn't measure the natural world. It doesn't test any theory. All it does is quantify what is written *about* the issue.

Scientifically that's worth nothing, as Brad states. As an example of social wisdom/idiocy, it might be worth something, but any scientific forum should incinerate it immediately.

Joe Duarte
9/3/2014 05:29:35 pm

Brad, I don't think Oreskes is obliged to go away. She's a scholar. And I liked her point that there isn't just one universal scientific method. I wouldn't take that point to the destination she does, necessarily, but so what?

The 2004 paper is confusing. The whole paper is one page long. The methods section is a paragraph or two. I was confused about who the raters were, and was left with the assumption that she did all the ratings herself. The rating scheme she used won't work, but we know almost nothing about that study given that the paper is a page long, and I have no idea if the data and ratings are available. The Cook paper used a search similar to hers, so she might have missed everything they missed, but only for the shorter interval she worked with. It never occurred to me until now, but this means she might have counted a bunch of psychology papers, the Kenyan solar panel study, etc. If so, she should retract. A 2004 study won't be relevant to the question of a present consensus, but people cite it anyway, so I would think a retraction would be helpful.

A consensus, or majority opinion, carries epistemic information if majority opinions of experts in some domain on some issue are likely to be correct. I don't like the skeptic argument of "science isn't about consensus", meaning that we can just ignore any and all consensuses, that they don't give better than chance outcomes. It would be remarkable if scientific consensuses weren't reliable at all, ever.

I wouldn't call it "opinion" either. Climate scientists aren't trafficking in pure opinions. They're doing science, gathering data, testing. It's certainly possible for a field to be completely wrong, but I don't know how we would assume a consensus has no meaning at all. Iterated across fields, this couldn't be right. I am comfortable with people attending to features of a field and its practitioners as inputs into the credibility of that field. Like Oreskes said, there isn't just one scientific method. Fields vary. And some of those scientific methods are probably more reliable or valid than others. Some fields are more politically biased, like the social sciences, and apparently climate science (which might select for environmentalists, a possibility that deserves thorough scholarship).

One of the reasons I'm not a skeptic is that I'm a libertarian, which sounds weird. It's too convenient for libertarians (and conservatives) for AGW to be false. The potential bias there is obvious, and I'd need good evidence to be an AGW skeptic (although my confidence in climate science had eroded a bit in recent months, based on the behavior of climate scientists and their disgusting embrace of junk science on the consensus.) So I think the "consensus is irrelevant" position can easily be a cover for bias. (I don't see any policy implications in AGW, so my non-skepticism on the science is easy anyway and deserves no credit as a test of integrity.)

Brad Keyes link
9/3/2014 07:10:33 pm

"I don't think Oreskes is obliged to go away. She's a scholar."

In what sense? Because her tax return says so?

You seem to have read her seminal, career-making Essay (not "paper," please), but what about the excuses she devised when it became widely known that almost none of her included papers actually endorsed the consensus? What did you think of her (for want of a better word) logic? Here it is:

"In the original AAAS talk on which the paper was based, and in various interviews and conversations after, I repeated [sic] pointed out that very few papers analyzed said anything explicit at all about the consensus position. This was actually a very important result, for the following reason. Biologists today never write papers in which they explicitly say "we endorse evolution". Earth scientists never say "we explicitly endorse plate tectonics." This is because these things are now taken for granted. So when we read these papers and observed this pattern, we took this to be very significant."

So you see, you silly critics: silence equals consent. Or consensus.

Is a pseudoscholar still a kind of scholar, I wonder?

"The 2004 paper is confusing."

According to Oreskes there's no point reading your critique, so I'll skip that whole paragraph. How can a mere psychologist sit in judgement of consensus research that was carried out by a half-geologist-half-historian? Or by a luggage millionaire, for that matter? As Oreskes so scholastically put it, when a mere medical researcher presumed to criticise her:

"The blog reports describe Mr. Schulte as a medical researcher. As a historian of science I am trained to analyze and understand scientific arguments, their development, their progress, etc., and my specific expertise is in the history of earth science. This past summer I was invited to teach a graduate intensive course at Vienna International Summer University, Vienna Circle Institute, on Consensus in Science. I do not know why a medical researcher would feel qualified to undertake an analysis of consensus in the earth scientific literature."

So someone is apparently obliged to go away—but who?

"I wouldn't call it "opinion" either."

Then it's a good thing you're not a lexicographer, isn't it? ;-D The word means "majority opinion."

"Climate scientists aren't trafficking in pure opinions."

I never said they were. But when you count papers, that's literally the only thing you're measuring (if that).

"I don't like the skeptic argument of "science isn't about consensus", meaning that we can just ignore any and all consensuses, that they don't give better than chance outcomes."

Think about the epistemology here. Iterate it a few times in your head. It should become clear why this prima-facie plausible argument ("if everyone believes it, it's more likely to be true than not") was never heard in hundreds of years of science... until the climate post-normalists reared their ugly heads.

I have to run, but look forward to continuing this dialogue. Epistemology matters.

Tucci78 link
9/3/2014 08:50:35 pm

At 12:29 AM on 4 September, Joe Duarte had written:

"One of the reasons I'm not a skeptic is that I'm a libertarian, which sounds weird. It's too convenient for libertarians (and conservatives) for AGW to be false. The potential bias there is obvious, and I'd need good evidence to be an AGW skeptic (although my confidence in climate science had eroded a bit in recent months, based on the behavior of climate scientists and their disgusting embrace of junk science on the consensus.)"

--------------------
Not only "weird" but nonsensical.

I am myself a libertarian in large part because of what happened on the evening of Sunday, 15 August 1971, when I watched Richard Milhous Nixon announce that by executive order he was "temporarily" closing the gold window (ain't nothing so permanent as a "temporary" government measure) to divorce the U.S. dollar from the last vestige of even a fiction of specie convertibility, thereby opening the sluices to the present flood of fiat currency inflation.

When Slippery Dick gave us that "I am now a Keynesian in economics" business, I realized that neither faction in the great Boot-On-Your-Neck Party duumvirate would ever present us with praxeologically valid options, and therefore neither bunch had any legitimate call upon my loyalty.

Mr. Duarte, what are your REASONS for being a libertarian?

If those reasons are sound, regardless of the validity or bogosity of the anthropogenic global warming (AGW) hokum, in what way could, should, or would your libertarianism prejudice you against this quackery?

If you are an adherent of sound scientific method, you must NECESSARILY be a skeptic in this as in all other areas of inquiry, and the "climate scientists" complicit in the push for the abrogation of scientific method are by definition NOT "doing science, gathering data, testing," but rather presenting the seeming of scientific investigation while all the while using that masquerade to advance public policy measures predicated upon malicious nonsense.

Are you trying to assess these weasels on the basis of what you assume are their stated or assumed intentions, or are you drawing your conclusions on the basis of what has already been perceived about their conduct?

Peter Michael Ward
9/2/2014 08:42:02 am

While what you say is valid, many on the pro-AGW side will argue that if you don't publish this in a peer-reviewed journal it's worthless. It's the same thing they keep telling skeptics, while of course working to keep skeptical papers out of the journals they control. So I think you can expect this post to sink without trace in the pro-AGW community. Therefore we need to publicise it as much as possible to those who might still be listening!

Brad Keyes
9/2/2014 06:53:28 pm

Peter,

"Therefore we need to publicise it as much as possible to those who might still be listening!"

Agreed. One epistemological quibble aside, I'm a big fan of what José's done here. I've [re]tweeted about it. What other suggestions do you have for keeping it visible?

Joe Duarte
9/3/2014 10:57:01 am

Hi Peter -- Some will take that attitude. I guess I don't care. The paper should just be retracted. It doesn't deserve any more formal engagement in journal settings, except as post-retraction postmortems.

This paper is a severe fraud case. It's also completely invalid. And somewhat false, but "false" has questionable meaning when a study is invalid. It's ultimately too dumb and fraudulent to rise to the level of falsity. We don't want to get in the habit of trying to assess the "data" in fraud and invalidity cases -- it perverts the burden. If IOP and ERL don't retract, that would change the game. If people can lie about their methods, if we can do subjective rater studies now with intensely biased raters breaking independence and blindness in an online forum, mocking the participants they're rating, then hell, all bets are off. I'm guessing the methodological requirements of subjective rating studies might not have been obvious or intuitive to people who don't design, perform, or review such studies (although similar requirements in biomedical research seem to be widely understood.) If they didn't believe a Mexican telling them that raters can't be biased with respect to the outcome of the study -- since said outcome is entirely in their hands -- or the importance of independence, and blindness to the identity of the authors/participants, I assume they just asked some white people who would know, some social scientists who are experts in subjective rater study design, interrater reliability, etc. The bias issue is so profound here that I'd expect that any qualified social scientists they asked would be confused at first. It never really comes up -- you might as well pay the raters to deliver a given outcome, because money won't be a bigger force than ideology in such cases.

People are far too stubborn and CYA when it comes to admitting they did something wrong. It's just so petty and silly -- we need to develop tools to make it easier for people to act with integrity after they've made a mistake. We should be way past simple institutional corruption and transparent conflicts of interest in this day and age. It's just so primitive. We can't have too much of that in science -- journals have to be able to confront their past decisions, have to be able to investigate and acknowledge fraud and false papers, able to look at the world around them, data and so forth. This wasn't a hard case. Many of the retractions on Retraction Watch involve less severe issues. If politics is allowed to excuse fraud and ridiculously dumb papers, then our case to the public, our case for the importance and validity of science, would fall apart. There are long-term consequences here.
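[Editor's aside: interrater reliability, mentioned above, is typically quantified with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch follows; the ratings are hypothetical, not drawn from the Cook et al. data. Note the limitation that matters for Duarte's argument: kappa measures agreement, not bias, so two raters who share the same ideological slant can agree almost perfectly while both rating wrongly.]

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters,
    corrected for the agreement expected by chance alone."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items on which the raters gave the same category
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of ten abstracts on a 1-7 endorsement scale
a = [1, 2, 2, 3, 4, 4, 4, 5, 6, 7]
b = [1, 2, 3, 3, 4, 4, 5, 5, 6, 7]
print(round(cohens_kappa(a, b), 3))  # → 0.765
```

A high kappa here would say only that the raters applied the same scheme consistently; it says nothing about whether the scheme, or the raters, were unbiased.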

Tucci78
9/3/2014 02:32:58 pm

At 5:57 PM on 3 September, Joe Duarte had posted:

"I'm guessing the methodological requirements of subjective rating studies might not have been obvious or intuitive to people who don't design, perform, or review such studies (although similar requirements in biomedical research seem to be widely understood.) If they didn't believe a Mexican telling them that raters can't be biased with respect to the outcome of the study -- since said outcome is entirely in their hands -- or the importance of independence, and blindness to the identity of the authors/participants, I assume they just asked some white people who would know, some social scientists who are experts in subjective rater study design, interrater reliability, etc. The bias issue is so profound here that I'd expect that any qualified social scientists they asked would be confused at first. It never really comes up -- you might as well pay the raters to deliver a given outcome...."

It's because raters _can_ be biased that methodological rigor is absolutely necessary in studies such as this Cook & Co. "97%" fiasco, and deviation from the stated methodology is absolutely invalidating.

What brings this clusterpuckery to the level of fraud - the purposeful and deliberate criminal misrepresentation of facts in pursuit of a pecuniary gain at the expense of other people, which could not have been achieved without telling lies - is the fact that the breach of method was manifestly not inadvertent, but had been undertaken by the investigators during the course of data-gathering, with the overt objective of misrepresenting the truth shared by those investigators among the raters in a manifestly contrived scheme of duplicity.

In other words, we're looking not only at allegations of fraud but also the strong suspicion of conspiracy to commit fraud, which is itself a felony.

Let's say that the investigators involved in such a "study" had contrived to publish an exaggerated impression of the prevailing perception of mood disorder, and that they had been provided financial support by a commercial pharmaceuticals manufacturer with a monetary interest in increasing diagnostic "catch" in that therapeutic category (wherein said manufacturer had branded medicinal agents approved for marketing in these United States and elsewhere).

In such a study's publication, the lying misrepresentation of deviations from stated method - as we see in Cook et al. - would not only require retraction but would also rise to the level at which the matter really should be presented to a grand jury in pursuit of a true bill of indictment.

Consider the sources from which Cook and his co-respondents (to include their raters) derive their funding, both in regard to this particular study and all past and ongoing financial returns, with an eye toward the strong possibility of criminal mens rea.

Joe Duarte
9/3/2014 02:51:32 pm

A felony? I've never understood scientific fraud to be a crime – I've never heard of criminal prosecutions for typical scientific fraud cases, so it seems unlikely that this would ever be a criminal case. (I don't know anything about their funding.)

I'd also want to maintain a major air gap between scientific fraud and criminal charges. This case is easy, but anyone can accuse anyone of fraud, and the term, like many others, can be ideologically loaded beyond recognition (at least in theory.) We've seen how racism and sexism can be defined. It's important to maintain a buffer between accusations of scientific fraud and criminal fraud just from the danger of "fraud" becoming a means of censorship and stifling of dissent. We don't seem to be in any danger in today's milieu of defining scientific fraud too broadly. It's quite clear and rigorous most of the time. If someone falsifies data, that's usually not going to be a matter of opinion or judgment. And falsely stating methods and inclusion criteria, as happened here, is fairly clean and descriptive -- "social science, education, research on people's views" is straightforward and we know that surveys of the general public, education exhibits and so forth fall into those categories, and we know what blind and independent mean. But we don't know how fraud might be used in the future, whether we'll be watering our crops with Gatorade, etc.

Joe Duarte
9/3/2014 03:05:11 pm

One of the things that bothers me about this case is that none of the raters blew the whistle when the paper was published. These raters presumably recall their time as raters, that 1-2 month experience. They must recall the online forums. When they saw the paper claim that they were blind to authors, they had to remember that they weren't. They had to remember multiple disclosures of authors of papers, and that it was open and flagrant. Same drill with independence. And with the "anonymized raters" claim (to whom were they anonymous?)

Curtis wasn't the only person to raise his hand at some of the things that were going on. At least one other rater did too. But when the paper was published, no one blew the whistle.

Tucci78
9/3/2014 04:36:22 pm

At 9:51 PM on 3 September, Joe Duarte writes:

"A felony? I've never understood scientific fraud to be a crime – I've never heard of criminal prosecutions for typical scientific fraud cases, so it seems unlikely that this would ever be a criminal case. (I don't know anything about their funding.)

"I'd also want to maintain a major air gap between scientific fraud and criminal charges."

-------------
Can't be done. Use the word "fraud" and you are speaking of a criminal act, not uncommonly rising to the level of felony.

It's not a matter of ideology but rather criminal intent (mens rea) and criminal effect, not to mention the civil aspects of such cases, in which compensatory and punitive damages may be imposed upon the miscreant parties involved in the perpetration of deceit for the purposes of pecuniary or other material gain attained to the detriment of specific other people and the public at large.

Ever signed off as an investigator on a proposal seeking funds - whether from government or in the private sector - in grant-of-aid to undertake a research project?

Any deliberate misrepresentation of facts in such a grant application renders each signatory liable to both criminal charges - on allegation of fraud - and civil action.

"We don't seem to be in any danger in today's milieu of defining scientific fraud too broadly. It's quite clear and rigorous most of the time."

I agree that it's "quite clear." However, you seem to miss the fact that much of what has been perpetrated in the way of duplicitous climate alarmism has been motivated by the malfeasors' designs upon both direct and indirect personal benefits which they could not secure to themselves _without_ presenting confabulations as if they were data validly reflecting objective reality.

It doesn't matter how much temporizers argue that this is the effect of "noble cause" corruption, IT'S STILL CORRUPTION, undertaken deliberately and with malice aforethought, the effects including the diversion of research funds from legitimate purposes as well as support for public policies and private sector malinvestment, which do great material damage to particular parties and to the common weal.

Can it be legitimately argued that the direct and indirect adverse effects of fraudulent "climate science" have not included very real, very costly, quite mensurable damages done throughout the economies of the world, or that measures aimed at both the recovery of such damages - if only partial - and retaliation against the plunderers (if only pour encourager les autres) are not both justified and warranted?

If this has a stultifying effect upon such grabtacular Cargo Cult Science "research," isn't that what NEEDS to happen?

It is not sufficient that honest men should prosper, but that predatory liars and charlatans should be induced to mend their ways.

Else what profits a man for being honest in his dealings with the people around him?

Brad Keyes
9/3/2014 05:11:54 pm

If scientific fraud is not a police matter—and I grant that tucci78 might be speaking in the epic, rather than the indicative, mood—then perhaps it should be. After science has put the embarrassing chapter known as global climate acidification warming behind it, citizens may well demand some kind of guarantee that nothing like it will happen again, let alone with impunity. If so, why not modernize the law to recognize that undermining trust in science is a crime against civilization itself and that pseudoscience cults are therefore inimici humani generis?

And by "cults," I mean:

"'Could a denier have written it' seems like a good metric to use."

Joe Duarte
9/4/2014 01:33:38 am

"Can't be done. Use the word "fraud" and you are speaking of a criminal act"

Facts are always possible. It's just a fact that scientists and academics more broadly don't use fraud in the way you're talking about it. Fraud is an ethical issue, not a matter of criminal law. One way to make this fact plain is to observe that any formal investigations tend to be conducted by journals and university ethics committees, not prosecutors.

It's important to assimilate reality as it is, and not demand that it shape itself to one's preferences. That words are used differently in different settings is a common truth; it's best to avoid the habit of categorically ruling out the possibility.

Politics biases us. That's why people should account for such biases on issues like AGW. Leftists too, since they're likely as biased on the issue as other camps -- they seem no more engaged with the evidence than anyone else.

You might be using skeptic differently too. No one must be a skeptic in all areas of inquiry, in the sense of never believing a consensus in a field. I'm not a skeptic of the transit method of exoplanet detection -- seems pretty solid to me. I think it's a bad habit to demand that people must take this or that position; it's reminiscent of the barbaric "denier" epistemology.

Praxeology is false, so I wouldn't hitch my wagon to that train if I were you. It might be more accurate to say it's arbitrary. The Austrians make too many mistakes. In any case, praxeology is a dead end unless someone is going to gather evidence to demonstrate that it's true, which they don't, and available evidence contradicts it. Throwing all these dogmatic commitments and worldviews into your shopping basket is just going to increase the probability of being wrong on stuff.

Joe Duarte
9/4/2014 06:46:18 am

"If so, why not modernize the law to recognize that undermining trust in science is a crime against civilization itself and that pseudoscience cults are therefore inimici humani generis?"

Brad, do you want their first-born too? Dude, we can't have laws against undermining trust in science, or that guarantee that "nothing like it will happen again". There's so much danger in such an approach, so much potential for persecution. I think modern politics is inherently neurotic and overreactive, which might have always been true of politics. Every time something bad happens, people scream that "It should never happen again", and thus we get bad laws in the wake of every crisis, like Sarbanes-Oxley, which is basically welfare for the Big 5, 4, or 3 accounting/audit firms (I don't remember how many are left.) and harmed the economy in predictable ways. We forget the human condition, the normality of occasional crises or problems. Trying to legislate away scientific fraud in response to a possible peak strikes me as similarly errant.

Why isn't retraction enough? To criminalize this kind of stuff, you'd need new laws, but it doesn't look like such laws would be coherent. Fraud in the criminal sense involves discrete, identifiable victims, like the person you sold your car to after rolling back the odometer. Or whoever gave Madoff their money. Scientific fraud tends not to have identifiable victims of the same sort. Civilization can't be a victim or have standing in a court of law. Lewandowsky falsely linked climate skeptics to moon landing hoaxism, and free marketeers to rejection of beliefs they overwhelmingly endorsed, so I guess an enterprising lawyer could think about a class action civil suit for libel (I'm not sure if there's ever been a class action libel action), against the researchers and the journal. But I don't think I'd support that either. I don't want lawyers and courts involved in science, or regulating what scientists can say. I'd rather focus on the self-policing mechanisms.

For climate scientists to harm civilization, AGW would have to be false. I'm not sure how we could know that right now, or why that would be more likely than AGW being true. Any policy stupidity that we implement, which is possible even given that AGW is true, can't be laid at the feet of climate science, and we shouldn't be laying broad and vague outcomes at the feet of specific fields.

Joe Duarte
9/4/2014 07:04:58 am

A good parallel to scientific/academic fraud is journalistic fraud or fabrication. Journalists don't go to jail for lying -- they just get fired, e.g. Jayson Blair and that guy at the New Republic.

http://en.wikipedia.org/wiki/Jayson_Blair

When an NYT reporter lies, there's no victim with standing. The readers are victims but not in a concrete way. The NYT is a victim. I suppose there could be breach of contract and if a media outlet wanted to they could sue the fired reporter or something.

One victim of scientific fraud I forgot about -- unwitting coauthors, especially with classic granular data fraud. I know a few such victims in social psychology. One thing that strikes me is that we don't learn about fraud. We're taught nothing about it, formally. The field doesn't seem to have a consensual definition of it, nor are graduate students trained in ethics or fraud definition/prevention. Ethics is usually just a trivial online human subjects ethics course and certification. Social science is vulnerable to broader types of fraud because we're able to describe and characterize reality with words all the way down, to a degree not available in other fields. Climate and other physical scientists can't just call things whatever they want. Social scientists often get away with it, and this can introduce a humanities-like flexibility, which morphs into fraud when the words cleanly conflict with the data. Combine this with extremely sloppy, even fraudulent, statistical practices that have no parallel in other fields, and our vulnerability expands. We don't have safeguards in place to prevent it, so scumbags are able to publish lies. This will change.

Brad Keyes
9/4/2014 08:06:29 am

Joe,

you mention that "Lewandowsky falsely linked climate skeptics to moon landing hoaxism, and free marketeers to rejection of beliefs they overwhelmingly endorsed"

Far worse, his compatriot Prof David Karoly falsely linked skeptics to an (imaginary) "relentless campaign" of electronic death threats against Australian climate scientists, none of which Karoly deigned or was asked to produce as evidence despite the fact that he was alleging the existence of a serious (and despicable) criminal conspiracy. The story got wide, uncritical traction in the Australian press—accompanied by the convenient meme, also of Karolian origin, that "the scientists" would be producing better evidence of the dangers of AGW if only they hadn't been terrorized into silence! And not one of Karoly's scumbag colleagues put up his or her hand to question the veracity of this (libelous?) myth. It was effectively discredited only after it had been infiltrating the public mind for a solid year, and only by the efforts of a non-scholar who knew his FOI rights well enough to demand disclosure of another university's cache of alleged death threats, which naturally turned out to be nothing of the kind.

None of this black episode played out in the scientific literature itself—the lies were simply told to the media. I mention it mainly to explain my rather low confidence in the operance of any "self policing" mechanisms wherever alarmist climate science is concerned.

Another example is MBH98, an extremely high-impact paper which nobody knew how to replicate, and nobody even tried to replicate (competitively) for 5 years [!], and which was eventually replicated not by "a scholar" at all, but by Steve McIntyre. This is staggering. This should stagger you.

The self-policing mechanisms of science and scientific culture have served civilisation well for hundreds of years. Not flawlessly, but well.

The problem with depending on such mechanisms is that alarmist climate scientists evidently don't behave like scientists.

Brad Keyes
9/4/2014 08:31:07 am

"Civilization can't be a victim or have standing in a court of law."

OK, but I wasn't exactly suggesting that civilization act as plaintiff in a civil lawsuit against the perps.

What I had in mind was something closer to the laws against piracy—a crime whose intrinsic international-ness makes it analogous to science, which is also an international (dare I say universal?) enterprise.

That said, I happily cede to your understanding of legal principles, compared to which mine is clearly superficial. I've had the unusual experience of actually learning from your counterarguments, for which thank you.

"For climate scientists to harm civilization, AGW would have to be false."

1) It wouldn't have to be false—and I don't suggest it is—but merely innocuous.

2) You probably understand this already, but I'm not (we're not?) imputing blame to "climate scientists" simpliciter. It's only the alarmist climate scientists—a growing minority, I suspect—who are the problem for civilization.

Brad Keyes
9/4/2014 08:50:29 am

Joe,

To clarify:

Anthropogenic global warming wouldn't have to be non-existent—and I haven't argued that it is non-existent—but merely innocuous.

I'm clarifying because I'm not sure what you meant by applying "false" to a physical phenomenon.

Brad Keyes
9/5/2014 02:50:04 pm

On the legal question, Wikipedia says Anders Breivik "was charged with violating paragraph 147a of the Norwegian criminal code, 'destabilising or destroying basic functions of society' and 'creating serious fear in the population,' both of which are acts of terrorism under Norwegian law."

No wonder we don't hear anything about Norwegian climate alarmists and pseudoscientists.

DGH
9/2/2014 12:14:27 pm

Joe,

You wrote, "The raters apparently didn't understand it, which isn't surprising, since they're just random political activists with no scientific qualifications."

As a blanket statement that is not entirely fair. I'll give you that the luggage entrepreneur is an unqualified activist if you'll acknowledge that Dr. Green has some "scientific qualifications."

http://www.mtu.edu/chemistry/department/faculty/green/

- While Dana Nuccitelli's career as a minute taker at EPA remediation site meetings doesn't qualify him as a climate scientist (or rater) his education is not so far off the mark. From the SKS website, "He has a Bachelor's Degree in astrophysics from the University of California at Berkeley, and a Master's Degree in physics from the University of California at Davis."

- Likewise, John Cook studied physics (before he launched his 10+ year career in cartoon writing).

- Logicman has identified himself publicly. He has a background in the sciences although his degrees and experience (engineer and computer programmer) are not directly related to climate.

No need to run through all of the raters' backgrounds. Some could be well qualified to rate these papers and others, well, not so much. The burden was on the authors to make that case and they missed that mark.

But your comment is off target.

Joe Duarte
9/2/2014 01:55:29 pm

Hi DGH,

I agree to some extent, and I'll fix it. I amended the essay yesterday, and one of the things I added was that even climate scientists couldn't be trusted to subjectively rate abstracts if they were political activists or politically motivated on the outcome or subject of their ratings.

What that highlights is that there were two problems with the raters that invalidate the study -- that they're political activists, and that they're not climate scientists. I should have pointed out that the latter is a necessary but not sufficient condition to be able to fully understand climate science abstracts. Well, being a climate science expert is sufficient, and that will be almost isomorphic with being a climate scientist.

The challenge with the raters you mention is that if we attend to their forum discussions, we know they're not qualified. Dr. Green was the one who said that she had counted many social science papers as mitigation (which was counted as scientific evidence of endorsement of AGW.) Nuccitelli said a psychology paper about white males was a "methods" paper. Green implied in that thread that she had actually rated the white males paper herself, and that she rated it explicit endorsement. (It's not the only time people in those forums announce that they were assigned the same paper someone is asking about, or reveal their rating. It's just incredible how unserious they were about their method or about doing valid science.)

We also see evidence that raters didn't understand some of the abstracts (beyond the examples I gave in the essay), even raters who I think have some sort of science or engineering background, like one who complained that it "looks like thermodynamic mumbo-jumbo to me".

So some science background isn't going to legitimize these raters. All the degrees in the world count for nothing if you're going to count surveys of the general public, papers about TV coverage, ER visits, or windmill cost estimates as scientific evidence of endorsement of AGW.

I actually stopped looking up the raters after the first few. I had gone through maybe five, and discovered that none were climate scientists, one was Honeycutt (the Timbuk2 entrepreneur), one was a Finnish blogger who to his credit explicitly declared that he was not a climate scientist (on his blog, I think), one was Nuccitelli, who I might have heard of before, one was logicman, and of course Cook, who is beyond the power of any degrees to restore (and statements like he "studied physics" are too vague – I've got "Six Easy Pieces" and "Six Not So Easy Pieces", so I, and millions, have studied physics.)

At that point I stopped because I didn't want to discover that one of their raters was a strip club DJ who "studied physics". Well, I'm being too sarcastic. But I did stop for that reason, which highlights something else. Throughout all of this, I've given evidence that should be more than enough on any given issue. A study based on political activists subjectively rating science abstracts for their cause should not be something we ever hear about. That method invalidates. Nothing else should be necessary. But if we set that aside, and just say people are rating science abstracts, well that's an interesting method. Science abstracts are hard, and specialized, often unnavigable to other scientists, sometimes even scientists in the same field. Raters would have to be experts, or expertly trained, but really they'd probably just have to be experts, scientists in the field. If we discovered that a consensus study's raters of climate science abstracts included luggage entrepreneurs and bloggers for whom English is likely a second language, that's a siren, and all-stop. That should immediately call such a study into question, pending a review of the qualifications of the raters. People like me or you shouldn't need to say anything more about it, or do all the work. If I were the editor of the journal and had somehow missed this, only to learn after the fact, I'd probably want to give them a test, give them various abstracts to rate, and if it was clear that they didn't accurately understand all of them, I'd retract the study because the results would be untrustworthy. That's what rigor means. And that is of course just one of so many issues, many of which are even more serious.

But suggesting that they all lack scientific qualifications was wrong of me, sloppy. I'll change it. And I actually like the Finnish blogger. In normal everyday circumstances I might like a lot of these people, but that won't change the fact that what they did here was invalid and that they weren't the people for this, especially given their biases, which are stark in the forums. It was also fraudulent, for which I hold the lead author responsible -- it says right there in their paper that they excluded social science, education, and studies of people's views. It says raters were independent and anonymous. It says they were blind to authors. This was a disaster.

Joe Duarte
9/2/2014 02:04:43 pm

My mention of the white males thread and Dr. Green is confusing. She was the one who asked about it, and she announced her rating decision, although this might have been provisional since she was asking about it. The whole thing is just amazing to me. This was March 22. They started rating several weeks earlier, and still you have explicit violation of their stated method, violation of independence and blindness -- she linked to the full article, exposing the authors and everything. This suggests blindness was never enforced.

Tucci78
9/2/2014 02:14:21 pm

At 8:55 PM on 2 September, Joe Duarte had commented:

"So some science background isn't going to legitimize these raters. All the degrees in the world count for nothing if you're going to count surveys of the general public, papers about TV coverage, ER visits, or windmill cost estimates as scientific evidence of endorsement of AGW."

It may be that "some science background" isn't strictly *necessary* for someone to function properly as a rater in a study such as this one, providing the criteria to be used in rating each element under evaluation are lucidly, explicitly set in the investigation protocol and the raters are trained reliably therein.

In truth, each rater might need only to be literate in the English language.

Consider that the disciplines most closely related to "climate science" have been - increasingly since the influence of the politicians came aggressively to be manifest in the 1980s - perniciously, pervasively affected by malfeasantly purposeful inculcation.

Think of a project to evaluate publications in genetics, for example, when substantial numbers of the raters are people who had been educated and who had prospered in the Lysenko-dominated field in the old Soviet Union.

Tucci78 link
9/2/2014 12:44:14 pm

At 12:59 PM on 2 September, Carrick had written:

"By anecdote, Tom Curtis knows of 5-10 examples where discussion occurred. That's hardly proof that there were only 5-10 examples total. In fact, it's completely nuts that you think this would prove that."

I think we must treat this on the "One Cockroach" principle.

If you *see* one cockroach scuttle across the kitchen floor when the lights are flipped on, the probability is high that there are THOUSANDS of cockroaches infesting the house.

Reply
Joe Duarte
9/5/2014 12:25:57 pm

The raters were random people working at home. They could just google the papers and break blindness. There's no way we could assume they were ever blind, and in their forums some seemed to have no awareness that they were supposed to be blind. They revealed authors so many times. Some might not have been aware that the paper would ultimately claim that they were blind to authors. But when they saw the paper make this claim, they should've contacted the journal and blown the whistle.

Reply
DGH
9/2/2014 03:02:56 pm

Joe,

Last things first, this paper was junk science and should not have been published. The raters were not randomly assigned, they had access to more information than the Methods disclosed, they collaborated during the rating process, and their qualifications are subject to challenge. Furthermore, the research plan, which was developed post hoc, was ambiguous and invited biases that absolutely influenced the conclusions.

Perhaps the greatest ambiguity incorporated into this study was the definition of "consensus." Over the many years of research that this study encompasses, "climate science" (and the scientific consensus) certainly evolved. At best this study provides an unclear assessment of the state of a moving object.

I'll follow-up on the balance of your post soon.

DGH

Reply
Dave Andrews
9/5/2014 12:36:06 am

As a layman I want to thank you very much for this impressive piece of work. It was very much needed. Bravo!!

Reply
Jonathan Abbott link
9/5/2014 01:31:16 am

Excellent article, Joe. Thanks.

Reply
Les Johnson
9/5/2014 06:41:17 am

Let me get this straight: One reviewer looked up an abstract, because it "smelled" like Lindzen. He was actively reviewing a paper by Lindzen, yet NOT A SINGLE PAPER by Lindzen after 1997 made it into their study?

I would count this as another case of fraud.

Reply
Joe Duarte
9/5/2014 12:30:56 pm

We won't run out of fraud here -- it's a fraud case for the ages. But this won't be an example -- that Lindzen paper is an old 1994 publication, so it was included. I think it was counted as rejection -- it would be a bit bold of them to try counting it as anything else, since they sniffed out the author because it sounded like a skeptic.

Reply
Brad Keyes link
9/8/2014 06:17:09 pm

José,

have you had any more thoughts on the epistemology of consensus (i.e. majority opinion), and on why the entire concept is anathema to all non-climatological fields of science?

Just to preëmpt a common objection to this generalisation, yes, there are countless medical fields in which it's not hard to find "consensus statements" of the best practice in such-and-such a condition. But please note that this tradition—which is dying, thanks to the dawn of E.B.M.—prevails strictly outside the realm of basic science. To put it another way, consensus may have some place in medicine qua *practice*, but it is correctly treated as valueless in medical *science.*

Oreskes' 2004 Essay should not exist.

It would not exist in a non-pathological science.

Or if it did exist, it would exist only in sociology or social psychology. (In which case, of course, it would have been expected to comply with whichever proper, scholarly standards applied in the field. It wouldn't have taken place in the methodological Wild West where Oreskes naively sorted and counted her piles of paper.)

So I'll rephrase: her Essay would never have been written in any non-pathological branch of the natural sciences.

And it sure as hell wouldn't be allowed to waste a page of precious Science real-estate, would it?

Thus, even those of us "deniers" who weren't particularly attentive to climate science at the time could smell the intellectual corruption as soon as Oreskes' abortive monster hit the floor back in 2004.

If you didn't smell it, don't worry—you probably weren't suffering the curse of the cynic back then. (Cynics are cursed always to stand downwind.)

But you smell it now—you smell it in Cook13, nine years later—and you are to be lauded for doing something about it.

(Apologies if that last point—that I very much like and commend what you've written—wasn't clear enough earlier. Snarkasm was always my vice!)

Reply
RO
9/9/2014 03:42:26 am

An author of the paper gives his thoughts on that argument here @12:55 - http://andthentheresphysics.wordpress.com/2014/09/08/sometimes-all-you-can-do-is-laugh/#comments

Jo(s)e gets a mention.

"...generally speaking, science does advance via consensus. Ideas are tested many times, and when the supporting evidence becomes overwhelming, an expert consensus forms that the idea in question is probably correct. In most cases, scientists then move on to investigate other unresolved questions.

Contrarians constantly make this argument to downplay the importance of consensus. I think it’s important not to fall into that trap of undervaluing an expert consensus.

Regarding the GWPF report, it’s nothing but a summary of the lame denier attacks on our paper, with a bunch of quote mining from our hacked private forum thrown in for bad measure. Feed Tol, Monckton, Duarte, and Schollenberger to a dog, and Montford’s report is what would come out the other end."

Reply
Brad Keyes link
9/9/2014 05:07:07 pm

LOL—brilliant find, RO!

You spared us the most cringemaking sentences, of course.

But being less squeamish than the average bear, I do not hesitate to give you... Dana1981.

In full, pseudoscientific surround stereo:

————————————————————--
"I take a bit of issue with this comment, ATTP:

>> science doesn’t work via consensus

I guess that depends on exactly what facet of science you’re talking about (or what you mean by “work”). "
——————————————————--

Allah help us.

While we're at it, I suppose it also depends whether you're talking about science [nod happily] or "science" [shake head, frowning], doesn't it?

LOL. Nuccitelli: the gift that keeps on giving. The gift of laughter.

(It *does* sound like one of those chocolate brands they only seem to sell at gas stations, and only to selfish people like you who didn't bother purchasing a housewarming gift in a more timely fashion, right? You have to admit it. Don't deny it, anyone.)

Full Dana disclosure:

I wasn't there when he was caught foisting his clairvoyant "review" of HSI on unsuspecting Amazon users, but I've yet to see any indication of a moral/ethical awakening in him since then, and for all I know there will never be one. Some people, for some reason, fail to make whatever synaptic handshakes are prerequisite to the development of anything we'd recognise as an adult human conscience.*

Anyhow, it took no real skill for me to catch Nuccitelli libelling Professor Lindzen as a medicine-denier last year. The only time-consuming part was shaming his guardians at The Guardian into a quiet, grudging, jesuitical retractionette.

(I'm usually on shaky ground when it comes to legal theory and terminology, as our gracious host knows!, but in this case I chose the key participle advisedly and fairly confidently.)

Dana is a "scumbag," as Joe might say. (Even or especially if Joe is aware of the word's unedifying provenance!)

* But if I might pass the microphone back to our host, I wonder if he, i.e. you, i.e. Joe, could clear up a couple of things:

1. Neurologically, what would be the right burn to use w.r.t. conscience-defective morphological adults like Dana: "psychopath" or "sociopath"? (Pop culture has done me no favors here but I think you can undo the damage!)

2. With particular regard to the canine, scatological or at best enterological allegory shat forth [above] by Dana's morphologically-adult brain, please answer the following seriously, Joe. I mean seriously. Let's be totally serious for a sec:

If (and I ask this counterfactually, hypothetically, subjunctively, etc.)—IF coprophages like Dana are *right* about the climate, wouldn't you rather be wrong?

Obviously yes.

You know you would. (I just enjoy asking rhetorical questions. ;-D )

Joe Duarte
9/9/2014 05:36:16 pm

Brad, I'm not following all this.

Dana N is not "defective", nor is he likely to be a sociopath. You're committing all the usual sins of partisans, including the sins of your adversaries.

I don't understand the post you're responding to. It's incoherent. The Cook quote has nothing to do with anything, certainly won't exonerate their fraud or address any substantive issues I raised, which I think are critical issues that void the study, so... I haven't gone to the page yet, but I'm really starting to worry that these people have nothing to say, and may not understand the issues. It's hard to tell if they understand, because they never say anything.

Their treatment of consensus is very crude and unscholarly -- they're not thinking deeply enough about it to do research on a consensus. They seem to have a method of reading one or two sources and concluding that they've learned everything there is to know about something, like the nature of consensus, or philosophy of science. It's odd. There are layers and layers of issues that I haven't even touched on, that I expected editors and IOP to see -- the world is never going to look back on this stuff and call it valid scholarship.

In any case, repeating vague blather about consensus doesn't address reports of fraud in your purported study of consensus. I'm not sure if they don't grasp basic logic or valid reasoning, if they're high or what. This is odd. My undergraduates are better. It's just so odd.

To answer your other question from the other comment, no I never think about consensus in my field. I'm starting to feel very lucky to be a social psychologist. We never talk about consensus. You're right that most sciences don't. We'd never decide we needed a consensus on the effects of self-esteem or something. People just do their research and report it. That's it. What worries me about climate science is the explicit goal of generating a consensus, as I understand the IPCC does. That would seem to bias such a consensus, make it less reliable, since you've created social incentives and disincentives, pressures and the like. If I were a climate scientist, I imagine there'd be pressure to weigh in on the consensus. In my field there is no pressure, no attempts at consensus. Granted, AGW looms larger than most of our work, and the extreme scenarios seem very damaging, so there's some justification for seeking a consensus. But that won't change the biasing and inflationary effects of seeking it, of trying to arrive at it. I think Judy Curry has done some work on this, perhaps on the high-order risk of Type 1 error it implies.

Brad Keyes link
9/10/2014 06:12:28 pm

Reading back my own comments, I really, really wish I hadn't been careless enough to tack this, erm, "joke" on the end of one of them:

"IF coprophages like Dana are *right* about the climate [....etc.]"

Ouch.

I was practically begging the average reader to [mis]read my words in the most appalling possible sense, wasn't I?

On a reasonable parsing it certainly does look as though I'm calling *the hundreds of millions of people who subscribe, LIKE DANA, to the climate views Dana subscribes to* an obscene name.

D'oh! No! I do NOT imagine, and I was NOT trying to suggest, not even as the premise of a "joke"!, that half the developed world, including most of my friends and family, metaphorically eat poo.

I was bunglingly trying to attach the sentence's predicate ONLY to the less-than-a-single-decker-busful of bad actors (*like him*) in the debate.

Joe, could you please delete the whole thing? And accept my apologies for being so lazy, cavalier and dumb?

(The "joke," even if worded competently, would have been a throwaway line not worth the pixels.)

Joe Duarte
9/9/2014 05:14:12 pm

Hi Brad -- Consensus is a heuristic that helps us navigate reality. We have to do stuff. Humans can't just sit around and live non-contingently. We need knowledge to make various decisions. Most of the time, a scientific consensus isn't directly pertinent to life decisions, but it can be.

As a heuristic, it's generally not going to be foolproof. The reliability of consensus will almost certainly vary across contexts, where context includes things like field, timepoint/maturity of consensus, specificity or level of resolution, whether it predicts future events, etc.

It partly comes down to how reliable scientific consensuses are on average. This quickly becomes complicated. I'm not sure anyone has rigorously assessed this across fields, which would probably require multiple snapshots at arbitrary timepoints, and accounting for all sorts of features like resolution, maturity, size, nature of the field, etc.

Would embracing consensuses give you better than chance accuracy? I suspect so, and if I'm right, you're going to be wrong a lot.

However, whatever the results of comprehensive studies of the reliability of scientific consensuses, I doubt they would require anyone to believe in AGW consensus. Slipping into the perspective of a rational outsider, there are lots of facts such a person could attend to that would make them less confident in it, like the behavior of some climate scientists, the smears of the "denier! denier!" crowd, the politicization of the issue, and certainly the scam consensus studies.

I'm not saying it would be wise to be a skeptic based on those cues. I'm saying it's reasonable. You don't have to be crazy or dumb or believe in ghosts to attend to those cues and doubt climate science. Different people will arrive at different information, and will process it differently. I'm mostly concerned with what a reasonable person might see or conclude. They could also not be a skeptic. Politics seems to be the biggest predictor, not knowledge or rationality.

I'm not a skeptic for a few reasons, the biggest probably being that I think climate scientists are very smart. Whatever their biases and behavior, I think they're too smart to get caught with their pants down. I don't think AGW is going to turn out to be false.

We know almost nothing directly. We don't know who did or did not shoot JFK. We don't know how far the earth is from the sun. Almost no one has done the work necessary to confirm that the earth orbits the sun, reproducing something like Copernicus's work. We just accept various things as being true. We have to, for cognitive and temporal economy. We need ways to differentiate reliable from unreliable claims and ideas, heuristics and the like, and consensus of experts is one. I don't like that our contact with such consensuses is through the media, which is one of many reasons why the "denier" smear is unreasonable -- it's not obvious that simply believing things because people tell you they're true, "people" usually being the media, is rational or reliable.

Reply
Shnoop
9/9/2014 09:33:49 pm

" I don't think AGW is going to turn out to be false."

Very interesting remark. For you there must be some meaning in "AGW" that isn't represented by the words that form the acronym: there isn't really much debate that humans can affect the climate, or the "global temperature", whatever that may mean.

The debate is a) by how much and b) what is appropriate and practical to do about it now.

It's clear that the answer to "a" is: not as much as has been claimed by such as the IPCC. Their predictions comprise a litany of failure. Their prognostications are based on incomplete theory and computer models, the soundness of which looks less and less plausible as reality diverges more and more from the model output. The excuses for "the pause" are sounding more and more frantic.

Of course, the answer to "b" depends on the answer to "a". Is vastly more expensive and unreliable energy really the solution, or should we focus our efforts, and money, on mitigation of all the vagaries of climate, weather and the natural environment?

The characterization of "AGW" as a binary true/false proposition does no service to informed discussion of the important issues at hand.

Brad Keyes link
9/10/2014 02:48:00 pm

Joe, thank you so much for writing all this. Your comments (which I hadn't read in their entirety when I posted earlier—d'oh!) exhibit a rare intelligence. In a good way.

Every (friendly) rebuttal you throw at me is *less* misguided than the previous one, in a flagrant inversion of the quasi-natural law that condemns 99.9% of human debate to be negative-sum, bad theatre.

No offense, but when I witness an academic digging himself *OUT* of a doctrinal hole I feel like I'm watching that first velociraptor turn the doorknob the right way.

It's a once-a-decade privilege. :-D

(It's perfectly PC for me to make these classist remarks because I have academic heritage myself. I'm one-quarter intellectual, on my dad's side. So I can say these things. So shut up.)

In case I haven't mentioned it, Joe, and in case it makes any difference to you, I used to teach this stuff (epistemology) for money.

You're making one substantive conceptual error (not "errors," plural, anymore!). But it has ramifications down the line.

Given sufficient CPU time, I bet you'd figure out what I mean without any input from me.

Up to you. I'm actually enjoying listening as you riddle it aloud, but it's a guilty pleasure because if I were you I (i.e. you) would probably want you (i.e. me) to give me (you) a time-saving clue. Vita brevis, ars longa and all that.

Let me know!

BK

Kristy
9/14/2014 03:30:15 pm

Joe:
You said: “Different people will arrive at different information, and will process it differently. I'm mostly concerned with what a reasonable person might see or conclude.” I used to agree with the global warming consensus. I’m really just a layperson when it comes to the really technical aspects of global warming. I am in the medical field and love science. I have volunteered at my children’s schools to promote science by doing hands on experiments. My agreement with the consensus started changing when I was helping my daughter about 6 years ago with a paper on polar bears. I had been told that the polar bear population was declining, but after researching with my daughter, I found out that wasn’t the case. I then started doing some more investigation and found this whole other side of global warming, the skeptic side. The more I read of the skeptic side of the science, the more I disagreed with the consensus. One thing I have found out is that no matter what, the hypothesis of AGW can never be invalidated. NEVER. I consider myself a reasonable person and that just doesn’t fit into the science I know. Here are just a couple of examples.

Susan Solomon wrote this paper on water vapor in 2010 and she states this:

Scientists have underestimated the role that water vapour plays in determining global temperature changes, according to a new study that could fuel further attacks on the science of climate change.
The research, led by one of the world's top climate scientists, suggests that almost one-third of the global warming recorded during the 1990s was due to an increase in water vapour in the high atmosphere, NOT HUMAN EMISSIONS OF GREENHOUSE GASES. A subsequent decline in water vapour after 2000 could explain a recent slowdown in global temperature rise, the scientists add. The experts say their research DOES NOT UNDERMINE THE SCIENTIFIC CONSENSUS THAT EMISSIONS OF GREENHOUSE GASES FROM HUMAN ACTIVITY DRIVE GLOBAL WARMING, BUT THEY CALL FOR A CLOSER EXAMINATION OF THE WAY CLIMATE COMPUTER MODELS CONSIDER WATER VAPOUR.

http://climatechangepsychology.blogspot.com/2010/01/susan-solomon-water-vapor-caused-one.html

Got that…..unexpected that 1/3 of the warming in the 1990s came from a negative feedback of water vapor and not human emissions, but that doesn’t change anything…instead they CHANGE THE MODELS.

The AGW hypothesis states that MWP was not global, but if it were found to be global, then this warming would not be out of the ordinary. So here is a paper (along with many others) that provides evidence for global MWP:

http://wattsupwiththat.com/2012/03/22/more-evidence-the-medieval-warm-period-was-global/

But yet, the AGW scientists came out screaming that this paper doesn’t prove anything.

We now have Kevin Trenberth come out in 2012 stating that increasing temperatures won’t be seen globally, but instead the heat will be a HOTSPOT THAT MOVES AROUND. But yet we were told that the MWP had to be global to count as global warming.

“We can confidently say that the risk of drought and heat waves has gone up and the odds of a hot spot somewhere on the planet have increased but the hotspot moves around and the location is not very predictable. This year perhaps it is East Asia: China, or earlier Siberia? It has been much wetter and cooler in the US (except for SW), whereas last year the hot spot was the US. Earlier this year it was Australia (Tasmania etc) in January (southern summer). We can name spots for all summers going back quite a few years: Australia in 2009, the Russian heat wave in 2010, Texas in 2011, etc. Similarly with risk of high rains and floods: They are occurring but the location moves.”

http://thinkprogress.org/climate/2013/08/18/2484711/ipcc-report-more-certain-global-warming-is-caused-by-humans-impacts-speeding-up/

Chris Landsea resigned from the IPCC due to his lead author Kevin Trenberth making scientific statements to the media with absolutely no science to back up his statements. A part of Chris Landsea's resignation letter:

“It is beyond me why my colleagues would utilize the media to push an unsupported agenda that recent hurricane activity has been due to global warming. Given Dr. Trenberth’s role as the IPCC’s Lead Author responsible for preparing the text on hurricanes, his public statements so far outside of current scientific understanding led me to concern that it would be very difficult for the IPCC process to proceed objectively with regards to the assessment on hurricane activity. My view is that when people identify themselves as being associated with the IPCC and then make pronouncements far outside current scientific understandings that this will harm the credibility of climate change science and will in the longer term diminish our role in public policy.”

You must read the entire letter to get a better understanding.

Kristy
9/14/2014 03:32:14 pm

Here's the rest of my post that got cut off:

You must read the entire letter to get a better understanding:
http://cstpr.colorado.edu/prometheus/archives/science_policy_general/000318chris_landsea_leaves.html

There have been other scientists who have resigned from the IPCC due to the political nature of the science and the disregard for scientific integrity.

Judith Curry wrote a post on why the IPCC AR5 weakens the case for AGW.

http://judithcurry.com/2014/01/06/ipcc-ar5-weakens-the-case-for-agw/

I could go on and on about failed predictions from 10-15 years ago and “unexpected” findings and how none of these ever invalidate the hypothesis and yet with all the observations invalidating the predictions, the alarmism grows louder and louder. That does not seem reasonable to me.


MikeR
9/9/2014 01:33:48 am

I always enjoy McArdle. Here I particularly enjoyed the comments: almost immediately after Megan wrote her article about how people fool themselves into thinking that only their side is right, comments began to appear that said, Yes, but only my side is right.

Reply
MikeR
9/9/2014 01:34:24 am

Whoops: forgot the link.
http://www.bloombergview.com/articles/2014-09-08/the-truth-about-truthiness

Reply
Brad Keyes link
9/9/2014 03:28:01 pm

MikeR,

"I always enjoy McArdle."

All I know about her is that she said everything right about Gleick and said it better than anyone else I'm aware of, AND had to cross credal lines to do it.

You seem to know her work, so you might be able to sate my curiosity here. Does McArdle *usually* exhibit such superhuman integrity (by the low, low standards of the climate debate) or shall we be forced, with regret, to "explain away" that little moral miracle as the happy sequel of a stroke, seizure or drug ingestion on her part?

(In case you're similarly curious, please do ask my friend Shub to expound her decidedly non-heartwarming hypothesis about McArdle's state of mind in l'affaire Gleique. If it checks out—and I hate to say it does appear to—then it would suffice to drain all the wonder from Climate Christmas in No-Man's Land. In short, it's a great cure for faith in the basic goodness of people.)

Reply
Joe Duarte
9/9/2014 05:19:07 pm

Megan McArdle is to my knowledge the best in the business. She has a rare or unique combination of knowledge. She knows economics for one thing, which is so rare. She has extraordinary brainpower and skill.

MikeR
9/10/2014 06:15:41 am

I agree with Jose. I learn a lot at her site, because she always presents all sides, always presents her assumptions clearly, always tries to tell what's really going on.

No idea what you're referring to about "your friend Shub's" idea. McArdle is not even part of the climate debate, she's mostly interested in economics (and home cooking).

Brad Keyes link
9/10/2014 09:52:06 am

Joe, Mike,

Thanks guys for sharing your (persuasive) endorsements of Megan McArdle’s oeuvre. Your 2/2 unanimity has epistemic utility to me!

(And no, Joe, despite appearances, I’m at no risk of breaking epistemological hygiene by saying so. I’d be happy to explain how the whole system coheres nicely if you indicate that you have time to hear about it.)

Brad Keyes link
9/10/2014 10:39:48 am

Hi Joe,

One senses that you didn’t exercise your usual level of care when you declared the following:

>> Dana N is not "defective”,

How can I put this? Not sure I agree 100% with your police work there, Lou! ;-)

I can only repeat that Dana most certainly is “defective,” with or without quotes—and to be specific, morally/ethically so—beyond my (Brad Keyes’) personal capacity to argue sensibly or honestly against that description.

YMMV—to state the obvious! (We can probably just take this disclaimer as read from now on. Coolio?)

I’d like nothing better than to live on a planet where Dana Nuccitelli was ethically intact, non-defective, entire, had *integrity* (a noun whose Latin relatives need no introduction), but I'd be kidding myself if I made my brain think such a place actually, contingently, was one and the same world on which you and I are now conversing, Joe.

It’s… how can I put this?… not.

I know—and I can't, by any act of will, *un-know*—that Dana is ethically defective because, to begin at the beginning:

I’ve watched him—and I can’t *un-watch* him—in the act of propagating a false though easily-checked meme, which additionally happened to be defamatory—objectively defamatory, so to speak—that is, disreputable—impinging on the very core of the character of a certain person (of whom Dana has apparently been cretinous enough to make an enemy for no reason whatsoever—strictly by the bye, parenthetically).

Clearly though, this fact in isolation (and I’m incapable of classifying it as other than a *fact*—for which you may blame the primordial and pandemic curse of egocentric perspective sometimes called “the human condition") could quite easily be explained without positing *any* moral deformity on Dana’s part.

As an honest mistake. Without a scintilla of (evident) malice.

And I guess you’ll have to take my word for it, Joe, when I say I’d prefer to believe in such an explanation.

I’d prefer not to have any moral opponents. I don’t derive my jollies from sharing the world with bad people. (There does seem to be a type of personality that thrives on moral conflict, but that’s not me. I fail to see the appeal.)

I don’t want bad people to exist, I’d rather not have to waste my time trying to undo the bad acts they do, and if I had a magic wand I’d humanely euthanise every last one of them right now so we could all walk forward as friends.

Unfortunately, uglily, Dana’s moral lesions are neither ambiguous nor well-hidden.

A number of people, including me, proceeded to go above and beyond what could reasonably be asked of us (and without hope of recouping one goddamn cent from it, to risk further obviousness) to try to tell Dana that he was wrong to (objectively) defame the person in question in the manner in which he was (objectively) defaming him. Wrong on the facts. Facts anybody could have googled, but few will ever have occasion to.

At this point, though, I'm wondering: do I need to tell Joe how these efforts of ours were repaid?

Or can I trust in Joe's broad-brush familiarity with the manner in which all such efforts have always repaid in all the years since these imbecilic “climate wars” began? Could I impose upon Joe, in the interests of time, to mentally fill in the rest of the story?

Because if I asked that favor of you, Joe, I’d be really amazed if you got any detail out of place.

If you haven’t already guessed exactly how Dana reacted, and how the Guardian eco-moderators reacted, to boot, then we may have a deeper and dirtier perceptual disjunct than even I’ve dared to imagine! LOL :-D

For non-Joes, here’s a summary: Dana never acknowledged the problem, no matter how many times and on how many blogs (frequented by him) it was pointed out. He didn’t even try to dispute it, which one could have respected. He did nothing, said nothing, corrected nothing. Meanwhile a nasty lie was allowed to sit there doing its nasty work. Months later, the Guardian quietly walked back from the lie. I’m very lucky I even noticed their retractionette.

[Oh, can I safely call it a “lie” at this point? Yes? No?]

It is now September, 2014. Dana has never acknowledged, explained or apologised for the falsehood. Or if he has, he’s failed to do so to my face, and I’m just one of the people to whom basic decency requires Dana to say something, or would require him to say something if he himself were basically decent, which is the very assumption I’m challenging here.

(And the defamed person in this story is Richard Lindzen, in case that wasn’t clear.)

That’s mens rea right there, as far as I’m concerned. Mens rea is old news at this point. (Please tell me if I’ve misunderstood

Reply
Joe Duarte
9/12/2014 03:27:42 pm

Brad, I'm not following your recent posts. You're zooming through stuff and talking in code. I have no idea what you're talking about re: Lindzen and Nuccitelli.

A week or two ago, I deleted the passages in my post where I called Nuccitelli stupid (or that thinking a psychology study about white males could be counted as mitigation or methods in this context was stupid), and where I lingered on the intelligence of the raters and authors. (There's still some content where I say this whole situation is just way too dumb.)

I don't want to slide into the stupidity spree, the habit of calling people dumb. It's clear that these people had no training in this kind of research, had no idea what they were doing, weren't familiar with the basic methodological requirements here, and cited none of the relevant literature. That's more directly relevant. The fact that they never seem to have anything to say for themselves, even in answer to the fraud report, and that their answers have no logical connection to what's happening, is frustrating for many people, and probably gets us sliding down that dark path.

I think some of these people are clearly terrible human beings. What they did in these studies is stunning. But I think they're outliers. No one in social science is like Lewandowsky, not even the people trying to protect him -- he's an extreme case. I can't imagine using science to assert false and damaging associations about a political camp I disagreed with. And the Cook crew is remarkable.

To those of you who are cynical about the outcome -- yeah, just wait. We can't have this kind of junk in journals, and one way or another we're going to have to clean it out.

Reply
Brad Keyes link
9/17/2014 01:32:33 pm

Joe,

apologies for not noticing this comment until now.

OK, Dana wrote this article about Lindzen:

http://www.theguardian.com/environment/climate-consensus-97-per-cent/2014/jan/06/climate-change-climate-change-scepticism?commentpage=1

Notice the subtle tobacco theme that pervades the piece? :-)

The caption under the picture used to say, "....which Lindzen denies cause cancer." That was make-believe.

The new version is barely an improvement, but because 'skeptical' has so many meanings, I'm afraid they'd be able to hide behind one of them if I bothered demanding a retraction. Lindzen does have a habit of cheekily telling interviewers that we should be 'skeptical' about everything, i.e. not fool ourselves.

Also note the epistemological cretinism of the piece. Nuccitelli uses the fact that Lindzen pursues falsifiable hypotheses...AGAINST Lindzen. "He's wrong more often than anyone!"

(It's OK to be wrong once or twice, Dana admits patronisingly. Well, that must have come as a relief to an author of 150+ peer-reviewed papers.)

Brad Keyes link
9/10/2014 10:42:02 am

[CONTINUED]

That’s mens rea right there, as far as I’m concerned. Mens rea is old news at this point. (Please tell me if I’ve misunderstood the last few thousand years of both folk morality and formal ethics! Haha)

It actually becomes academic, surely, whether the initial false publication was accidental or malevolent. I’m past caring about that.

You say,

>> nor is he likely to be a sociopath.

Now, I’m (honestly) not trying to be difficult or pedantic, but I wouldn’t have thought any individual was likely to be a sociopath.

(In which case it would appear to be both difficult and redundant to argue with you.)

As I understand it, Joe—naively, I hasten to add!!—a small minority of the population ARE sociopaths, and the rest of us AREN'T.

Unhappily though, I’m no closer to understanding how the diagnosis in question is meant to be ratified or (on the other hand) excluded, so I can’t even (meaningfully) feed you that old line “I'll take your word for it” here! :-D

This, I think, is the universe telling me not to use jargon I don’t understand. LOL

So let me phrase it as I should have done ab initio: Nuccitelli is—in my experience, which is the only experience I’ll ever be able to inform you about—an *unrepentant liar.*

Better? Sorry about the lexical overreach before! It’s a bad habit I need to work on, especially when I have the rare pleasure of talking to someone who can *tell* if I say the wrong word.

(And thanks for calling me out on it. Somebody had to. :-D)

>> You're committing all the usual sins of partisans, including the sins of your adversaries.

ROFL. Uh, no. I’m very sure you’re often right about things about which I’m wrong, Joe.

But this time you’re wrong. And I can only explain such a striking misunderstanding by supposing you have little to no clue *who my adversaries are.* (You don’t know, do you?) Which wouldn’t entirely be your fault—I’ve never told you. But you'd probably have worked it out by reading a couple of entries on my blog, to which I've given the URL every time I've submitted a comment here.

I’ll be as brief as I can.

1. All the adversaries, enemies and bad actors I’ve known since the international climate “debate" began would fit on a small bus. (I’ll spare you the hilarious gags about cliffs. Just this once.)

2. Not all my climate enemies are believalists. Not all my climate trenchmates are denialists. Not even almost.

3. My allies, friends and good friends in these farcical “climate wars” are a lot more numerous than the bad guys.

(I'll bet you’d say the same thing, and you’d also be absolutely right.)

4. I get the strong impression that almost everybody on both sides is:

(a) well-meaning
(b) confused, and
(c) bored.

5. The previous point has corollaries and it makes predictions, all of which (that I’ve mentally derived thence, to date) I happily stand behind, and none of which (to my knowledge) is falsified by observing the climate debate.

6. For instance: from the Demotic Confusion Hypothesis it follows veridically—and I have no qualms about asserting—that [even] the people who are closest to being “right” mostly tend to be right for badly invalid reasons (scientifically).

Which checks out empiricdotally, in spades.

7. I can’t resist mentioning this bonus truth: even the people who are "wrong" on climate change seem to have developed, as a culture, their own equally-persuasive narrative proving—totally spuriously, mind you! hehe—that they’re right.

This is why I don’t blame anyone for falling for *either* of the two competing climate mythoi, and falling pretty hard at that.

Nor—once they’ve fallen—can I really blame them for finding the contralateral fairy-tale so idiotic it can only be explained by assigning a strong role to bloody-minded, wilful blindness.

This is the big, dare I say generational, tragedy. It's evil.

Not that we can’t agree about the climate, which was a trick question to begin with, a question almost everyone answers haplessly, but that we’ve come to believe in all seriousness that there’s something *mentally wrong* with anyone who doesn’t arrive at the same facile, primary-school non-answer we arrived at.


Reply
Brad Keyes link
9/10/2014 10:44:33 am

[CONTINUED]

8. My enemies. My enemies. Where to start the reeking bloody rap sheet of their infamies? My climate enemies have committed scientific and other academic frauds; they've destroyed, withheld and pretended to misplace scientific data in order to prevent the human race discovering things about nature; they've forged documents to frame people they don’t like; mendaciously and publicly accused innocent people of deplorable crimes that carry prison sentences; betrayed the trust reposed in their professions by fraudulently arrogating to themselves the magical competence to diagnose entire swathes of the (perfectly healthy) population with thought disorders just to score points in an academic bitch fight; deliberately and self-servingly lied to *massive* audiences about the way science itself works (than which I can’t for the life of me think of a greater crime against humanity in the recent history of the developed world, can you Joe? I’m dead serious about that and I’m all ears: is there anything worse a person can do to a crowd of people than to *stultify* them, anything that *doesn’t* involve the use of explosives or assault rifles?), yadda yadda, off the top of my head.

9. If people who plan and carry out such attacks on you (and all the things I just mentioned hurt everybody, don’t they?) aren’t YOUR enemies too, Joe... why not?

10. I don’t think this is quite as bad as the examples I’ve listed, quantitatively—it actually sounds a bit silly now—but for “balance” if nothing else, one wouldn’t want to leave out HI's Unabomber billboard. Ross McKitrick’s description of it—“fallacious, puerile and offensive”, I think—captured it well.

But if that billboard prolonged this venomous, idiotic cultural feud by even a single day, which is pretty hard to doubt, then by definition it was the work of an adversary of mine, wasn’t it?

By the way: Joe Bast’s weird, unapologetic apologetics after the fact could have easily been written by a moral retard like Dana, I think. There was that same apparent incomprehension of ethical axioms most of us had picked up by the age of 10. For instance: blurting out a vicious and witless joke at the expense of fifty percent of the American people is a bad thing to do *whether* *or* *not* it attracts a bunch of new donors to your think-tank. When Joe Bast didn’t GET this, it caused certain words to pop into my head. *Autist. Sociopath. Psychopath.*

But I believe this is my cue to quit advertising my lack of a Psych background, isn’t it? LOL

Quick addendum: I have a couple of online friends on the Aspie spectrum but they'd have *no* difficulty grasping that which seems to have eluded Joe Bast. In fact they’re among the most deeply moral people I know—almost as if they’ve put a lifetime of effort into honing faculties the rest of us take for granted. (If that makes sense.) [Un]interestingly, I admire them both for their ethical genius despite (in one case) what you would think was a diametrically inimical climate-change philosophy. But I refuse to let that absurd disagreement vitiate my friendship with or affection for the person in question.

11. Even if I didn’t quote-unquote “believe in AGW” (which I do, to the debatable extent that it’s a scientific hypothesis), I still wouldn’t for a nanosecond imagine you and I were on opposite sides of this, Joe.

If you’re pro-science, which I think I can safely say you are, then there is no other litmus test as far as I’m concerned. I’m OK, you’re OK. Seriously. English can do justice to neither the depths nor the soaring heights of the crap I don’t give about your climate “position.” (Spanish could perhaps accommodate the sublime aorism of the apathy I’m trying to get across here, but *my* Spanish probably couldn’t.)

12. Anybody whose theory of climate sociology requires either 50% or 100% of the population to be *less intelligent* or *less well-informed* than them, personally, doesn’t really *have* a climate sociology.

They haven’t made any progress towards understanding the situation we’re in. They’ve gone backwards, into an anthropological fantasy world. They understand less about the climate divide than people who’ve never given it any thought.

12. The preceding paragraph is polemical, hyperbolic, Manichaean, grossly simplistic and not to be taken 100% literally.

13. For instance, Dan Kahan is a great guy *in my experience* (which has been disputed by a couple of other great people I know).


Reply
Brad Keyes link
9/10/2014 10:45:53 am

[CONTINUED]

And even though Kahan's going bass-ackwards theoretically (at least since his initial important insights), that doesn’t bother me because the dignity and honesty with which he’s *trying* to make sense of the debate has elevated, or at least done a very great deal to *detoxify*, the debate.

In my experience. Feel free to [dis]agree.

14. Your understanding of epistemology is imperfect—that is, it falls short of encyclopaedic.

Now, I would be glad to point out to you the imperfections that are visible to me.

They’re small, they should be easy to fix, but like all mistaken ideas they breed like Fibonacci’s bunnies, so it’s preferable to have as few of them as possible at t=0, wouldn’t you agree? Yes, you would agree, since this is all almost childishly obvious, for which I apologise.

(Or rather, “apparent” to me—yes, that’s a better word than “visible”—since I could always turn out to be the mistaken party. It’s been known to happen! LOL.)

In return I’d only ask one thing. (In fact I needn’t even ask, since you’re already doing it.) Namely, I'd expect you to alert *me* to the errors *you* can see in *my* epistemology (or whatever -ology we’re talking about).

Deal? :-)

15. I have a complete, internally-coherent, predictive and thus-far unfalsified grand theory of all credal groups’ behaviours in the climate debate.

I know, for instance, why there are smarter and more knowledgeable people than me on the “wrong” side of the climate debate.

I know specifically the factor I‘d have change, with a magic wand, to make them flip to the [more] "correct" side.

I worked out the key mental tricks you need in order to perform interfaith climate empathy a couple of years ago.

It wasn't trivial, Joe. It took a very brilliant friend and me a solid, stressful, nasty year of playing adversarial roles—and we didn’t always *feel* adversarially disposed to each other so it took a lot of discipline to stay hard, but by the end of it this dynamic had become a rather beautiful partnership. And to be clear: this was with a person who was my credal mirror image.

The process was a bit of a waste for my partner, who could explain nothing much new at the end of it, because he only spoke English whereas I flipped from English to Science as needed, leaving him more confused than before.

But what I got from him was the answer to every "hard problem" in climate sociology I've come up against since then.

And I have gained an implicit respect for the overwhelmingly rational approach I've seen the vast majority of people take, *no matter where it took them.*

People think.

Reply
RB
9/26/2014 06:13:34 am

My first visit to your blog. Never worry about the length of your article when it is so revealing!

I am gobsmacked - I have heard and read lots about this 97% issue but have not really had the time to pursue educating myself - and of course the 97% paper's conclusion is routinely trotted out by politicians, lawmakers and policymakers in my country.

Thank you very much for informing me.

Reply
DennisA
3/25/2015 05:23:38 am

I also am a new visitor, via the Richard Tol article, via Jo Nova.

Has anyone looked at the editorial board of Environmental Research Letters?

http://iopscience.iop.org/1748-9326/page/Editorial%20Board

How would these people ever allow the editor to publish a retraction?

Myles Allen, University of Oxford, UK, promoter of "Keep it in the Ground", paper Towards the Trillionth Tonne, Myles Allen et al

Peter H Gleick The Pacific Institute, Oakland, USA (Nuff said)

Stefan Rahmstorf Potsdam University, Germany, (actually Potsdam Climate Institute, Schellnhuber's outfit and also RealClimate)

Advisory Board:
Scott Goetz, Deputy Director, Woods Hole Research Center
Woods Hole was where John Holdren was Director for several years. This is what he was saying before he became Obama Science Advisor, "Global Climate Disruption: What Do We Know? What Should We Do?", http://belfercenter.ksg.harvard.edu/publication/17661/global_climate_disruption.html. Long time collaborator with Paul Ehrlich.

More about him here, although relates to before he joined the Obama administration:
http://www.discoverthenetworks.org/individualProfile.asp?indid=2368. Check the links in the sidebar as well for more recent pieces.

There are more "network" members, but it would take up too much space here.

Whilst you are at it, have a look at the editorial board of "Climatic Change", where Naomi Oreskes is a Managing Editor and Peter Gleick, Phil Jones, Ben Santer, John Schellnhuber, Tom Karl, van Ypersele, (favourite to succeed Pachauri), are on the editorial board.


Reply
dmduncan
11/8/2015 05:38:17 pm

NASA cites Cook study as a source for the truth of the 97% figure, footnote 1.

http://climate.nasa.gov/scientific-consensus/

Reply
infopdscenter link
11/9/2017 11:18:53 pm

Excellent article, thanks for sharing.

Reply
Jan
9/5/2020 09:59:50 am

Hello, I hope you can help me out.
My HDD crashed and I can't for the life of me re-find the blog where Cook and fellow statistical amateurs discussed how to rate the abstracts.
Can you link it?

Reply
Henry link
1/2/2021 01:21:53 am

Thanks for this blog post.

Reply


