Jose L. Duarte

Ignore climate consensus studies based on random people rating journal article abstracts.

7/22/2014

UPDATE SEPTEMBER 11, 2014:

(This post was my initial take. For a better and more up-to-date report, go here. For follow-up, go here.)

The paper includes a bunch of psychology studies and public surveys as evidence of scientific endorsement of anthropogenic climate change, even a survey of cooking stove use. This vacates the paper. Lots of things vacate the paper, such as its inherently invalid and untrustworthy method, and the false claim that they used independent raters. But I think we will have broad agreement that the inclusion of psychology papers and public surveys vacates the paper. The world thought this was about climate science. This is ridiculous, but trivially predictable when you have human raters who have an obvious bias with respect to the subject of their ratings, who desire a specific outcome for the study, and who are empowered to deliver that outcome via their ratings. What happened here is exactly why we could never accept a "study" based on such a method.

The following papers were rated as endorsement and included in their 97% figure. Dana Nuccitelli even wanted to include a psychology paper about white males and denial as evidence of scientific endorsement. It's jaw-dropping that someone who is supposed to inform the public on science would want to do that to the world, to generate a consensus figure based on studies that have no bearing on the consensus. There will be more such papers for those willing to invest time in this scam paper, and I haven't listed all that I found yet. I'll write this story up in a separate post when I have time, and for a news magazine. The broader ethics disaster here is going into a Nature submission:

Chowdhury, M. S. H., Koike, M., Akther, S., & Miah, D. (2011). Biomass fuel use, burning technique and reasons for the denial of improved cooking stoves by Forest User Groups of Rema-Kalenga Wildlife Sanctuary, Bangladesh. International Journal of Sustainable Development & World Ecology, 18(1), 88–97.  (This is a survey of the public's stove choices in Bangladesh, and discusses their value as status symbols, defects in the improved stoves, the relative popularity of cow dung, wood, and leaves as fuel, etc. They mention climate somewhere in the abstract, or perhaps the word denial in the title sealed their fate.)

Boykoff, M. T. (2008). Lost in translation? United States television news coverage of anthropogenic climate change, 1995–2004. Climatic Change, 86(1-2), 1–11.

De Best-Waldhober, M., Daamen, D., & Faaij, A. (2009). Informed and uninformed public opinions on CO2 capture and storage technologies in the Netherlands. International Journal of Greenhouse Gas Control, 3(3), 322–332. 

Tokushige, K., Akimoto, K., & Tomoda, T. (2007). Public perceptions on the acceptance of geological storage of carbon dioxide and information influencing the acceptance. International Journal of Greenhouse Gas Control, 1(1), 101–112.

Egmond, C., Jonkers, R., & Kok, G. (2006). A strategy and protocol to increase diffusion of energy related innovations into the mainstream of housing associations. Energy Policy, 34(18), 4042–4049.

Gruber, E., & Brand, M. (1991). Promoting energy conservation in small and medium-sized companies. Energy Policy, 19(3), 279–287. 

Şentürk, İ., Erdem, C., Şimşek, T., & Kılınç, N. (2011). Determinants of vehicle fuel-type preference in developing countries: a case of Turkey.  (This was a web survey of the general public in Turkey.)

Grasso, V., Baronti, S., Guarnieri, F., Magno, R., Vaccari, F. P., & Zabini, F. (2011). Climate is changing, can we? A scientific exhibition in schools to understand climate change and raise awareness on sustainability good practices. International Journal of Global Warming, 3(1), 129–141.  (This paper is literally about going to schools in Italy and showing an exhibition.)

Palmgren, C. R., Morgan, M. G., Bruine de Bruin, W., & Keith, D. W. (2004). Initial public perceptions of deep geological and oceanic disposal of carbon dioxide. Environmental Science & Technology, 38(24), 6441–6450.  (Two surveys of the general public.)

Semenza, J. C., Ploubidis, G. B., & George, L. A. (2011). Climate change and climate variability: personal motivation for adaptation and mitigation. Environmental Health, 10(1), 46.  (This was a phone survey of the general public.)

Héguy, L., Garneau, M., Goldberg, M. S., Raphoz, M., Guay, F., & Valois, M.-F. (2008). Associations between grass and weed pollen and emergency department visits for asthma among children in Montreal. Environmental Research, 106(2), 203–211. (They mention in passing that there are some future climate scenarios predicting an increase in pollen, but their paper has nothing to do with that. It's just medical researchers talking about asthma and ER visits in Montreal, in the present. They don't link their findings to past or present climate change at all (in their abstract), and they never mention human-caused climate change – not that it would matter if they did.)

Lewis, S. (1994). An opinion on the global impact of meat consumption. The American Journal of Clinical Nutrition, 59(5), 1099S–1102S.  (Just what it sounds like.)

De Boer, I. J. (2003). Environmental impact assessment of conventional and organic milk production. Livestock Production Science, 80(1), 69–77.

Acker, R. H., & Kammen, D. M. (1996). The quiet (energy) revolution: analysing the dissemination of photovoltaic power systems in Kenya. Energy Policy, 24(1), 81–111.  (This is about the "dissemination" of physical objects, presumably PV power systems in Kenya. To illustrate the issue here, if I went out and analyzed the adoption of PV power systems in Arizona, or of LED lighting in Lillehammer, my report would not be scientific evidence of anthropogenic climate change, or admissible into a meaningful climate consensus. Concretize it: Imagine a Mexican walking around counting solar panels, obtaining sales data, typing in MS Word, and e-mailing the result to Energy Policy. What just happened? Nothing relevant to a climate consensus.)

Vandenplas, P. E. (1998). Reflections on the past and future of fusion and plasma physics research. Plasma Physics and Controlled Fusion, 40(8A), A77.  (This is a pitch for public funding of the ITER tokamak reactor, and compares it to the old INTOR.  For example, we learn that the major radius of INTOR was 5.2 m, while ITER is 8.12 m. I've never liked the funding conflict-of-interest argument against the AGW consensus, but this is an obvious case. The abstract closes with "It is our deep moral obligation to convince the public at large of the enormous promise and urgency of controlled thermonuclear fusion as a safe, environmentally friendly and inexhaustible energy source." I love the ITER, but this paper has nothing to do with climate science.)

Gökçek, M., Erdem, H. H., & Bayülken, A. (2007). A techno-economical evaluation for installation of suitable wind energy plants in Western Marmara, Turkey. Energy, Exploration & Exploitation, 25(6), 407–427.  (This is a set of cost estimates for windmill installations in Turkey.)

Gampe, F. (2004). Space technologies for the building sector. Esa Bulletin, 118, 40–46.  (This is a magazine article – a magazine published by the European Space Agency. Given that the ESA calls it a magazine, it's unlikely to be peer-reviewed, and it's not a climate paper of any kind – after making the obligatory comments about climate change, it proceeds to its actual topic, which is applying space vehicle technology to building design.)

Ha-Duong, M. (2008). Hierarchical fusion of expert opinions in the Transferable Belief Model, application to climate sensitivity. International Journal of Approximate Reasoning, 49(3), 555–574. (The TBM is a theory of evidence and in some sense a social science theory – JDM applied to situations where the stipulated outcomes are not exhaustive, and thus where the probability of the empty set is not zero. This paper uses a dataset (Morgan & Keith, 1995) that consists of interviews with 16 scientists in 1995, and applies TBM to that data. On the one hand, it's a consensus paper (though dated and small-sampled), and would therefore not count. A consensus paper can't include other consensus papers – circular. On the other hand, it purports to estimate the plausible range of climate sensitivity, using the TBM, which could make it a substantive climate science paper. This is ultimately moot given everything else that happened here, but I'd exclude it from a valid study, given that it's not primary evidence, and the age of the source data. (I'm not sure if Ha-Duong is talking about TCS or ECS, but I think it's ECS.))

Douglas, J. (1995). Global climate research: Informing the decision process. EPRI Journal. (This is an industry newsletter essay – the Electric Power Research Institute. It has no abstract, which would make it impossible for the Cook crew to rate it. It also pervasively highlights downward revisions of warming and sea level rise estimates, touts Nordhaus' work, and stresses the uncertainties – everything you'd expect from an industry group. For example: "A nagging problem for policy-makers as they consider the potential costs and impacts of climate change is that the predictions of change made by various models often do not agree." In any case, this isn't a climate paper, or peer-reviewed, and it has no abstract. They counted it as Implicit Endorsement – Mitigation. (They didn't have the author listed in their post-publication database, so you won't find it with an author search.))


-----------------------------------------------------------------------

Original post below:

Ignore them completely – that's your safest bet right now. Most of these studies use political activists as the raters, activists who desired a specific outcome for the studies (to report the highest consensus figure possible), and who sometimes collaborated with each other in their rating decisions. All of this makes these studies completely invalid and untrustworthy (and by customary scientific standards, completely unpublishable.) I had no idea this was happening. This is garbage, and a crisis. It needs to stop, and those papers need to be retracted immediately, especially Cook, et al (2013), as we now have evidence of explicit bias and corruption on the part of the raters. (If that evidence emerged during the actual coding period, it would be fraud.)

PAUSE BUTTON: This issue has nothing to do with the reality of the consensus, a reality that was evident before this political operation/study unfolded. I am not a "denier", or even a skeptic. I don't know enough, or have an argument that would lead me to be even a "lukewarmer". There are 7 billion people on this earth, and we're not all sorted into good people and deniers. I'm quite confident that there's a consensus – a large majority – of climate scientists who endorse both that the earth has warmed over the last 60+ years, and that human activity caused most of it. The warming itself is a descriptive fact, not a theory or inference. I'd be quite surprised, amazed, if the basic theory of anthropogenic forcing as a principal cause turned out to be false, and somewhat surprised if AGW turns out to be mild, like 1° C. (Unfortunately, there is little research on scientists' views on the likely severity of future warming. A consensus only that humans have caused warming, a consensus so vague and broad, is not very useful. The Cook study would be unhelpful even if it were valid, which it is not.)

Back to the program...

In social science, it's not uncommon to use trained human raters to subjectively rate or score some variable — it can be children's behavior on a playground, interviews of all kinds, and often written material, like participants' accounts of a past emotional experience. And we have a number of analytical and statistical tools that go with such rating studies. But we would never use human raters who have an obvious bias with respect to the subject of their ratings, who desire a specific outcome for the study, and who would be able to deliver that outcome via their ratings. That's completely nuts. It's so egregious that I don't think it even occurs to us as something to look out for. It never happens. At least I've never heard of it happening. There would be no point in running such a study, since it would be dismissed out of hand and lead to serious questions about your ethics.
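
To make "analytical and statistical tools" concrete: in a normal rating study, raters code the same material independently, and the write-up reports chance-corrected agreement between them. Here is a minimal sketch in Python; the labels and ratings are invented purely for illustration and are not drawn from any actual study.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance if each rater assigned labels
    # independently at their own marginal rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codes for six abstracts on a three-level scale:
a = ["endorse", "endorse", "neutral", "reject", "endorse", "neutral"]
b = ["endorse", "neutral", "neutral", "reject", "endorse", "endorse"]
print(round(cohens_kappa(a, b), 2))  # ~0.45 for these invented ratings

Note that a statistic like this only certifies consistency between raters; it says nothing about a bias the raters share, which is the problem described above.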

But it's happening in climate science. Sort of. These junk studies are being published in climate science journals, which are probably not well-equipped to evaluate what are ultimately social science studies (in method). And I assume the journals weren't aware that these studies used political activists as raters.

Examples of the unbelievable bias and transparent motives of the raters in Cook, et al (2013) are below. These are excerpts from an online forum where the raters apparently collaborated with each other in their ratings. It's worse than that – the first example is evidence of fraud if this was during the operational rating period. If it was during training, it's practice for fraud.

"BTW, this was the only time I "cheated" by looking at the whole paper. I was mystified by the ambiguity of the abstract, with the author wanting his skeptical cake and eating it too. I thought, "that smells like Lindzen" and had to peek."

Let's look at how the paper described their method: "Abstracts were randomly distributed via a web-based system to raters with only the title and abstract visible. All other information such as author names and affiliations, journal and publishing date were hidden."

Hence the fraud issue. Next example:

"Man, I think you guys are being way too conservative. Papers that talk about other GHGs causing warming are saying that human GHG emissions cause global warming.  How is that not an implicit endorsement?  If CFC emissions cause warming because they're GHGs, then CO2 emissions cause global warming for the same reason.  That's an implicit endorsement."

One wonders if a passing bird counts as implicit evidence of the consensus. This is what we call a nonfalsifiable hypothesis.

If this was the live coding period, this is a joke. A sad, ridiculous joke. And it's exactly what you'd expect from raters who are political activists on the subject they're rating. Who in their right minds would use political climate activists as raters for a serious report on the consensus? This is so nuts that I still have a hard time believing it actually happened, that the famous 97% paper was just a bunch of activists rating abstracts. I've called on the journal – Environmental Research Letters – to retract this paper. I'm deeply, deeply confused how this happened. If this is what we're doing, we should just call it a day and go home – we can't trust journals and science organizations on this topic if they're going to pull stunts like this.

Moreover the raters weren't generally scientists, much less climate scientists. One of the raters is a former bike messenger who founded Timbuk2, a company that makes great bags (Rob Honeycutt.) I've got mad props for him for what he's done with Timbuk2 – for anyone who starts their own business and follows their vision. That's very hard to do. But I'm not going to want luggage entrepreneurs to be rating climate studies or interpreting science for the world. I'll buy you a beer any day of the week Rob, but I just can't sign off on this.

Other raters are just bloggers. I don't mean scientists who blog. I just mean bloggers, who are not scientists. Nothing against bloggers – I'm just not feeling that, don't need bloggers to be rating climate science abstracts. Another rater is only identified by an anonymous username – logicman. Who can argue with logicman? Is there a big L on his uniform? Where's emotionman been lately? What's fallacygirl up to? Anyway, probably no one needs to be subjectively rating climate abstracts, but if anyone did, it would have to be climate scientists. Is this controversial in some cultures?

More importantly, I don't care who you are – even if you're a staunch liberal, deeply concerned about the environment and the risks of future warming, this isn't something you should tolerate. If we're going to have a civilization, if we're going to have science, some things need to be non-political, some basic rules need to apply to everyone. I hope we can all agree that we can't seriously estimate the AGW consensus by having political activists rate climate paper abstracts. It doesn't matter whether the activists come from the Heritage Foundation or the Sierra Club, Timbuk2 or Eagle Creek – people with a vested ideological interest in the outcome simply can't be raters.

Also note that anyone who wants to defend this nonsense, who wants to argue that it's fine for political activists to subjectively rate science abstracts – which they won't be qualified to even understand – on issues central to their political activism, needs to also accept the same method when executed by partisans on the other side. If Heartland gathers a bunch of activists to read abstracts and tell us what they mean, all the Cook defenders need to soberly include the Heartland study. The AAAS needs to include the Heartland study in their reports, including it in their average (they didn't do an average, just cherry-picked junk studies.) If a squad of Mormons reads the abstracts of a bunch of studies on the effects of gay marriage, and sends their ratings to a journal, Cook defenders should be cool with that, and should count it as knowledge about the consensus on gay marriage.

Of course, these scenarios would suck. This method perverts the burden – it allows any group of hacks to present their subjective "data", putting the burden on us, on everyone else, to do a bunch of work to validate their ratings. We should never be interested in studies based on activists reading and rating abstracts – it's a road we don't want to travel. Researchers normally get their data by observation – they don't create it, not normally.

We don't need random people to interpret climate science for us, to infer the meaning of abstracts, to tell us what scientists think. That's an awful method – extremely vulnerable to bias, noise, incompetence, and poor execution. The abstracts for many papers won't even have the information such studies are looking for, and are simply not written at the level of abstraction of "this study provides support for human-caused warming", or "this study rejects human-caused warming". Most climate science papers are written at a more granular and technical level, are appropriately scientifically modest, and are not meant to be political chess pieces.

(Updated paragraph: I had incorrectly suggested that they asked authors to self-rate their abstracts, just as Cook's raters did, when in fact they asked them to rate their papers. The failure to hold that variable constant complicates things, but admittedly it would be very difficult for authors to strictly rate an abstract, as opposed to the whole paper they wrote. None of this matters anymore given the much larger issues that have emerged.) There's a much better method for finding out what scientists think — ask them. Direct surveys of scientists, with more useful and specific questions, are a much more valid method than having ragtag teams of unqualified political activists divine the meanings of thousands of abstracts. Interestingly, but not surprisingly, direct survey studies tend to report smaller consensus figures than the abstract rating studies (I'll have more on that later.) The consensus will be strong regardless, so it's especially confusing why people feel the need to inflate it.

In the second part of their study, Cook et al surveyed authors of the papers in their dataset – that's not at all the way to survey climate scientists, since their paper search seems to have bizarre and unexplained results, e.g. it excluded everything Richard Lindzen published after 1997. Their pool of authors is invalid if we don't know whether the search had some selection biases. It's an arbitrary pool – they'd need to validate that search and its results before we could trust it, and they should've done that at the outset. And the fact that they included psychologists, social scientists, pollsters, engineers and other non-climate-science or even non-natural-science authors in the 97% (as endorsement) makes their survey of authors moot.

(For subjective ratings of abstracts to be a valid and useful method, it would need to be a carefully selected pool of raters, without ideological agendas, implementing a very specific and innovative method, under strict procedures of independence. I can imagine philosophy of science questions that might be answerable by such methods, based on things like the usage of certain kinds of words, the way hypotheses are framed and results reported, etc. – but much of that could be done by computers. The studies that have been published are nothing like this, and have no hope of being valid.)
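
As for the "done by computers" aside, a crude automated screen is easy to sketch. The cue phrases and the whole scheme below are invented for illustration, not a reproduction of any published classification.

import re

# Illustrative, hand-picked cue phrases -- not a validated instrument.
ATTRIBUTION_CUES = [
    r"anthropogenic (forcing|warming|climate change)",
    r"human[- ]induced (warming|climate change)",
    r"(co2|greenhouse gas) emissions",
]

def mentions_attribution(abstract):
    """Return True if the abstract contains any of the cue phrases above."""
    text = abstract.lower()
    return any(re.search(pattern, text) for pattern in ATTRIBUTION_CUES)

sample = "Informed and uninformed public opinions on CO2 capture and storage technologies."
print(mentions_attribution(sample))  # False: no attribution language in this abstract

Even a toy version makes the division of labor clear: the machine only counts word patterns, and any claim about what a paper "endorses" is an interpretive layer added on top of that.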

NOTE: The Cook, et al data was leaked or hacked a few months ago – I'm confused by what's going on here. Cook allegedly wouldn't release some of his data, and ultimately a bunch of data was hacked or scraped off a server, and it included the raters' online discussion forum. Climate science features far too many stories of people refusing to release their data, and mysteriously hacked data. Brandon Shollenberger has posted the data online. It's amazing that if it weren't for him, we wouldn't know how sketchy the study truly was. There's much more to report – the issues raised by the leaked dataset extend far beyond the quotes above and rater bias.

The University of Queensland has apparently threatened to sue Shollenberger, on some sort of "intellectual property" grounds. Australia is one of my favorite countries, but we need to stand up for him. To the best of my knowledge, he hasn't done anything wrong – he hasn't posted any sort of sensitive information or anything that would violate our core principles of scientific ethics. The identities of the raters were not confidential to begin with, so there was no new disclosure there. He's exposed the cartoonish bias and corruption of the rating process that underlay this "study", and in so doing, he's served the interests of scientific ethics, not violated them.

Even if those online discussions took place during the training period, it would still be alarming evidence of bias, but other evidence suggests this was not a training period. I've never heard anyone call scientific data "intellectual property" before – that's an interesting legal theory, since this is not about an invention or an original creative work. If scientists were to get into the habit of treating data as IP, or otherwise proprietary, it would impair scientific progress and quality control – it would also violate the basic premise behind peer review. Shollenberger's disclosures took place in a context where the authors apparently refused to release all of their data, so I'm not sure what other options there were for him. In other words, he's a whistleblower. You can contact the research governance people at the University of Queensland here (scroll to the bottom of that page).

Update: In their legal threat letter, the University of Queensland says that the letter itself is intellectual property, and that publication of the letter is cause for separate legal action. What? That's like an NSL. Is this new? What kind of upside-down bizarro world is this? You can send someone a threat letter, copyright the letter, and force them not to disclose it? This is unbelievably creepy.

Update 2: Political activism is not a vice. I'm not saying it's a vice. If you think the left, or right, or libertarian, or Rastafarian perspective is true, do your thing. People have the right to be left-wing activists, conservative activists, environmental activists, wherever their minds and their values have taken them. I'm a pro-immigration activist sometimes. But I will never be a subjective rater of textual material in a study whose outcome would potentially serve my pro-immigration cause, especially if my ratings could deliver that outcome, nor will I ever assemble a team of pro-immigration activists to perform such ratings. Are we being serious right now? This is ridiculous. We can't do that. Do we want to call what we do science? This shouldn't be hard.
38 Comments
Barry Woods
7/23/2014 07:07:32 am

John Cook (from the leaked The Consensus Project - Skeptical Science forum):

"Now my hope is that the message of a strengthening consensus makes a strong impact and a big splash and plan to network and schmooze this message out with every means at our disposal, including Peter Sinclair doing a video about the results and collaborating with Google to visualise our data (this collaboration has already begun). A strong impact will justify us going to the effort of launching "phase 3" of TCP which is publicly crowd sourcing reading the full papers of all the neutrally rated papers, to determine more accurately which papers endorse the consensus. As the crowd sourcing gradually sifts through the papers, the level of consensus will incrementally increase and we will slowly build over time a definitive, quantitative measure of consensus in the peer-reviewed literature.

By dragging this out over time, and dribbling new updates and announcements, we also get to repeatedly beat the drum of a strengthening consensus. This project is not intended as a one-off launch but a long-game strategy with the end goal being the term "strengthing consensus" achieving public consciousness. It's the ultimate counter-narrative to the increasingly used denier meme "the consensus is crumbling" or "scientists are mass-exodusing to skepticism".

The psychological research tells us that a key - a deal-breaker if you will - to the public accepting climate change is an accurate perception of the scientific consensus. If the public don't perceive a consensus, they won't support climate policy. But we know not only is there a consensus, it's getting stronger. This is a strong message and it is rarely presented and never quantified to my knowledge. So my hope is SkS can have a deep and lasting impact on the public perception of consensus which will make the path to climate action easier."

2012-02-22 13:43:22

John Cook - Skeptic Reactions, from The Consensus Project

The wording will have to be very carefully constructed because as you say, this will be going out to deniers too. Considering every denier scientist seem to have a direct line to a red phone on Anthony Watts' desk, the existence of TCP will probably known to Watts before we've even looked at the results from the scientists. A scary thought really. For that reason, I think we should wait till as late as possible before emailing the scientists. Eg - wait till after quality control, once our results are done and analysed and the scientists' ratings are the final piece in the puzzle.
Keeping in mind our email will likely get broadcast on the denialosphere, we have to be very careful to have neutral wording that isn't leading in any way. The word consensus will likely not even be mentioned. But this isn't the thread to discuss that. I've started a thread just tonight on pinning down the quality control process and once that's dealt with, then I'll start working on the scientists self-rating stage. But if people want to post thoughts about that process, start a new thread and we can collect ideas in there.

Carrick
7/23/2014 07:31:40 am

The ethics approval is located here:

https://hiizuru.files.wordpress.com/2014/07/documents-released-under-rti.pdf

As a preface:

I am a physical scientist, though I've frequently been involved in human subjects research (feel free to research my background from my (not published) email). I have gone through the CITI training and have been PI on a number of grants that have required human subjects approval. On the other hand, most of my research has been objective measures using a human subject, rather than subject data collected from one.

Is it not a requirement in the US as well that, once you have interaction present, the project be subjected to IRB review BEFORE any data is collected? (I believe the rules are actually more strict in Australia, but I'm just asking about your opinion about US regulations.)

Steve McIntyre link
7/23/2014 03:57:31 pm

The ethics approval did NOT cover the SKS ratings; it only covered a separate and much smaller author self-rating survey and was submitted after the SKS ratings were nearly all completed. Indeed, it refers to the SKS ratings as having been done by "Team members".

Based on the University FOI response, the SKS ratings were done without ethics approval. Nonetheless, one of Cook's excuses for not supplying rater ID information to Richard Tol was alleged obligations under the ethics application.

However, this excuse was completely fabricated since there was no ethics application for the SKS ratings program. Cook tricked University administrators when pressed to produce information on SKS ratings by pointing to the obligations on the other survey, easily misleading the unwary University administrators, who unwisely adopted Cook's misinformation both in refusals to Tol and even in press statements.




Carrick
7/24/2014 02:24:32 am

> The ethics approval did NOT cover the SKS ratings;

Yes that is the point of my comment.

If we agree that this study needed to have ethics approval—which must be completed before data collection can begin—then this data cannot be used in a published research study.

The normal journal remedy for that is retraction of the paper.

James_V
7/23/2014 09:07:01 am

This is one of the more egregious examples of shoddy climate science, but sadly there is more where that came from. Skeptics have known this sort of thing was happening but nobody will listen. I guess anybody with any doubts can't be trusted? Sad state of affairs.

Joshua
7/23/2014 09:33:25 am

Not to defend the paper - but it seems to me to be unrealistic to use potential bias on the part of raters to invalidate a social science survey out of hand. Activists conduct social science surveys all the time. That activists have rated evidence could be a valid reason for a high level of scrutiny - but that doesn't mean that you should consider a survey invalid because of assumed bias on the part of the raters. Any scientist interested in assessing hypotheses could be considered an "activist" with an inherent bias.

The point is that the methodology should control for the bias. If you want to criticize the methodology as being inadequate to control for potential biases - have at it; more power to you. But don't set up a slippery slope standard that engages arbitrary rules as to how to define "activist" or who is too biased to be a viable rater.

Personally, I think that the vast amount of electrons wasted on Cook et al. is emblematic of the biased argumentation that surrounds the climate wars. It amounts to little more than personality politics. In the end, to the extent any of this is relevant (and I think that the relevance is limited), as one of the major critics of Cook et al. (Richard Tol) has said:

==> "“Published papers that seek to test what caused the climate change over the last century and half, almost unanimously find that humans played a dominant role.”"

Joe Duarte
7/23/2014 03:03:46 pm

Hi Joshua – The bias here is far too extreme to permit in a scientific publication. This wasn't a survey study -- they didn't hand out surveys, and survey studies don't involve ratings by other people. Surveys are completed by the participants. The researcher has no role in entering responses, unless it's an interview. (They surveyed some scientists about their abstracts after the activists had already rated them – there are a few issues with what happened there, but I don't want to distract from the core issue right now, which is that this study was a scam. The study actually has multiple points of failure, multiple points that would call for retraction. It won't survive a light breeze, much less a serious evaluation. I'm only focusing on a couple of issues.)

Their bias isn't speculative or anything. You can see the quotes in the post, and you can see a lot more corruption on Shollenberger's site.

(I do think that thoroughly liberal social scientists can indeed do perfectly valid and clean research, even in political psychology. Nosek is a good example. But that's not going to touch on what happened here.)

As far as "activists", these people aren't going to be borderline cases. They're extreme. They're not standard liberal environmentally concerned Dem voters – they're quite a bit more extreme than that. They're at war. They really hate dissenters. They're talking about Republicans more than science on some of their pages. Their worldview is extremely binary and hostile -- most environmentalists are quite a bit more moderate and less hateful than they are. They're a pretty special population. Some of the scientists whose papers they rated had already been savaged on their crazy website. Their website will be a fruitful dataset for social scientists studying groupthink and tribalism (y'all might want to save their webpages for data donation purposes -- File, Save As, HTML). And we now know the "study" was a political operation from start to finish. We have explicit evidence, in the post you're replying to, that raters cheated, were incredibly biased against dissenting scientists and were even alert for their papers, and that some raters were pretty much willing to code anything as endorsement. (Did you see the quotes? There are links too.)

We're going to need to be able to identify ideology, and say that someone is a political activist. We need those concepts, for a lot of reasons. If we need to flesh out a clean, easy method for discerning that someone is a political activist, I'm pretty sure we can.

Your closing quote is confusing. Do you imagine that I'm saying AGW isn't true? I'm not, and it's not relevant to anything here. Science and academia need to fast-forward to a context where people can call out scam research on the consensus without it having anything to do with "denying" AGW. (And if we wanted to drive home that AGW is true, a quote from Richard Tol isn't going to do any work. No quote will. The evidence is better expressed as the results of surveys of a bunch of scientists, like Harris Interactive or Bray and von Storch.)

If the idea is that we should just waive through a bunch of politically motivated, corrupt, and false research because we think AGW is true anyway, I hope I don't have to spell out how that mindset – iterated just a few times – can take us a good distance from reality and corrupt our read on climate. I'm not interested in politically expedient package deals and wink-winks. This is science, and this isn't 800 BC. We need to be capable of at least some subtlety, some complexity, some integrity.

Joe Duarte
7/23/2014 03:21:26 pm

FYI, I should caution people – I don't recommend you create accounts on their site. The last time I tried to login there a few months ago, Google Chrome flashed a SQL injection warning, and I bugged out. Their database may have been infected with malware, or it might have been specifically targeted at me, triggered by my login.

I think malware is more likely, and I think I saw reports about malware on their server in some other forum. The only reason I even consider the targeted scenario is that they're so incredibly hostile and cultish. They censor and edit posts – they'll go into your post and delete your substantive arguments, and then post whatever text remains. I've never seen anyone do that. They did it to me, and it gives them a power to synthesize, fabricate, and shape a reality or a narrative on their discussion boards – a reality that never exposes them to serious questions or arguments, and never reveals to the world what someone actually said, in full. It's an incredibly creepy place. They really think that if you're not completely committed to their ideology, you must work for an oil company or something.

So, caution. I'd harden my browser before going there, and my stomach.

Another issue is that their wording was the same as that on the incredibly unscientific, politicized AAAS report. One could have just copied from the other, but someone claims it's the same people. All of this is just too creepy.

Seth
7/27/2014 04:14:52 pm

re: "(They surveyed some scientists about their abstracts after the activists had already rated them – there are a few issues with what happened there, but I don't want to distract from the core issue right now, which is that this study was a scam. ..."

Surely this provides a control for the rating done by the volunteers?

And surely the 97.1% endorsement from the volunteers compared to the 97.2% endorsement from the authors' own ratings shows that the bias isn't as extreme as you seem to be making out?

Surely also the analysis of the self-rated papers is valid, even if you prefer not to read the analysis of the volunteer rated papers?

Joe Duarte
7/27/2014 05:25:29 pm

Hi Seth - Nice point. Their ratings didn't agree with the scientists' own ratings. See Table 5. For example, for categorization as "no position", they had 62.5%, while the authors who responded were at 35.5%.

There are some questions about what those numbers represent, some things that need to be unpacked, but across the board their ratings were wildly different from the authors' (in terms of grouping into those categories.)

Another interesting issue is that only 14% of the scientists responded to the survey, so our inferences about the percent of scientists who rate their abstract as endorsing AGW are going to need to be cautious. There could be a selection effect, and given that it was only 14%, the selection effect could be very consequential for their findings.

The most salient issue is whether the scientists knew who they were dealing with. You've got an extremely charged website that bashes scientists who dissent from the SS view. Denier, denier, all over the place. A lot of people would've understood the SS website to be an activist central, a place for strongly "pro" AGW people, people deeply concerned about the future impact of warming, and who are very political, railing against Republicans and Lindzen and other villains.

When all these scientists got the survey, did they know it was from them, from SS? Some scientists, especially those who shared or opposed their politics, would likely know all about their site. Others would not. Who responded? That's a very important variable. It's not obvious to me which kind of selection effect we'd see. It could've actually worked against them if more sober and maybe skeptical or lukewarmer scientists responded. Or if it was just their bros, people who like them, who are always reading their site, political allies, etc. Remember, it was only 14%. Who were they? Friends? Enemies? No selection effect at all?
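
To illustrate why the 14% matters, here is a toy calculation -- every number below is invented -- of how differential willingness to respond moves the observed endorsement share away from the true one:

def observed_share(true_share, p_respond_endorser, p_respond_other):
    """Endorsement share among respondents, given group-specific response rates."""
    endorsers = true_share * p_respond_endorser
    others = (1 - true_share) * p_respond_other
    return endorsers / (endorsers + others)

# Hypothetical: 85% of authors truly endorse, but endorsing authors are
# twice as likely to answer a survey from a site they see as an ally.
print(round(observed_share(0.85, 0.20, 0.10), 3))  # 0.919
# Same true share, response pattern flipped:
print(round(observed_share(0.85, 0.10, 0.20), 3))  # 0.739

Nothing in the returned sample itself tells you which of these worlds you're in -- hence the question of who the 14% were.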

This is a recurrent and troubling issue in climate consensus research -- so few scientists respond to the surveys. I've thought of doing a real study, no subjective ratings of abstracts, no political team, no particular outcome in mind, just a clean, professional, scientifically strong survey with rich sets of questions that give scientists much more power to express their convictions and best assessments. No double-barrelled questions, no tricks to try to shape the outcome (like saying human activity contributes to warming, and then announcing it as "caused the warming", which Cook may have pulled a version of). And I'll have provision for them to put some issues into their own words. Lots of questions about the future, estimates and probabilities, expected severity of impact -- actually get real data on the danger and potential harm of AGW -- the missing data that Obama and Daniel Kammen lied about, claiming 97% agreed on "dangerous", when the study never measured views of danger.

Complete data transparency, posted online. All materials posted. No one needing to dig through partisan-rated abstracts, flipping the burden of proof. Just clean data, ready to go. Possibly even a receipt system for scientist participants to certify their answers were in the set, and accurate (not sure about implementation yet.)
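
One way the receipt idea could work -- a sketch only, since I haven't settled on an implementation, and every name and field below is hypothetical -- is to hand each respondent a keyed hash of their canonicalized answers at submission time, and publish the list of hashes with the dataset so each participant can confirm their row is present and unaltered:

import hashlib, hmac, json

SURVEY_KEY = b"replace-with-a-real-server-side-secret"  # hypothetical secret

def make_receipt(respondent_id, answers):
    """Keyed hash over the canonicalized answers; returned to the respondent."""
    payload = json.dumps({"id": respondent_id, "answers": answers}, sort_keys=True)
    return hmac.new(SURVEY_KEY, payload.encode(), hashlib.sha256).hexdigest()

answers = {"human_contribution_pct": 90, "expected_severity": "moderate"}
print(make_receipt("respondent-042", answers)[:16], "...")

Under this particular scheme the respondent can't recompute the hash themselves (the key stays on the server), but they can check that the receipt they were handed appears in the published list, which is the kind of certification described above.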

But we need people to respond. I don't know why they don't. We need to tell them "Look, we're trying to have a country over here, a democracy and stuff -- we need to know what all of you think and know about climate change. It's kind of a big deal. We need you to respond when surveys go out." I might work with institutions to pressure their faculty to respond, to make certain awards or disbursements conditional on participating in any and all studies of the consensus they're invited to answer. We can't keep doing this kind of thing with 14%.

And the survey items will go through a vetting process with climate scientists to make sure they're well designed and not biased. Surveys can be biased, but it's nothing like having the researchers create the data directly by making subjective ratings of abstracts that bear on their militant political activism. I want all the data to come from the scientists themselves, their voice, their responses. And it won't be noised up by the more artificial task of having them rate one of their abstracts for its position on AGW. We'll just be asking them to tell us their position, all things considered, irrespective of any particular paper or abstract, what they think.

A. Scott
7/29/2014 04:35:36 am

What a load of hooey Joshua ... the electrons "wasted" on Cook, Lewandowsky et al are because of the clearly biased results and shoddy, at best - unethical at worst - practices of Cook, Lewandowsky and that entire gang.

This is not, as Joe accurately notes, any type of legitimate science. It is an intentional PROPAGANDA campaign ... proven by Cook's own words.

Almost every similar paper Cook, Lewandowsky and the gang have been associated with has had the same serious challenges registered against them - including retraction of their work.

These are not climate scientists - and they are not doing work that offers any advancement of climate science.

This is - again by the authors' admissions - by both Cook and Lewandowsky - little more than a poorly executed attempt at using Psych Sciences to influence public opinion to conform to THEIR beliefs about Catastrophic Anthropogenic Global Warming.

Jim Bouldin link
7/23/2014 09:59:04 am

To me, the bigger issue is not so much that Cook et al did the study the way they did; people do and submit bad studies all the time. The more serious issue is that ERL accepted it, *and* that it was their most highly downloaded paper in 2013. Now maybe the latter's a bad thing and maybe a good one, that depends on what all those downloaders think about it once they read it, if they do. But the fact is, ERL published it, and that's on them. And Climatic Change has now followed suit with van der Linden et al, on how best to communicate "consensus" findings, an unbelievably trite paper.

The fact that physical science journals that we depend on to communicate the nuts and bolts of the science itself are delving into this stuff is a problem.

Hilary Ostrov link
7/23/2014 10:08:40 am

> The more serious issue is that ERL accepted it, *and* that it was their most highly downloaded paper in 2013.

Bingo! Mediocrity forever should be ERL's motto, IMHO.

willard link
7/27/2014 11:00:25 am

> There's a much better method for finding out what scientists think — ask them.

They did.

Jesus. This is a joke.

Carrrick
7/27/2014 07:22:34 pm

Cut the sophism, willard.

The focus of the paper is on the ratings given by Cook and his crew, not the original authors.

willard link
7/27/2014 11:54:25 pm

> Cut the sophism [...]

And what sophism would that be, Carrick?

The simple fact is that to claim that we ought to ask scientists, *to be relevant here*, presumes that they did not. Yet they did.

Here's a quote to prove it:

> In a second phase of this study, we invited authors to rate their own papers. Compared to abstract ratings, a smaller percentage of self-rated papers expressed no position on AGW (35.5%). Among self-rated papers expressing a position on AGW, 97.2% endorsed the consensus.

http://iopscience.iop.org/1748-9326/8/2/024024/article

That's in the ABSTRACT of C13, Carrick.

The "focus of the paper," whatever that means, does not contradict that fact.

Joe's sentence is misleading. At best. It's not even close. Talk about ethics.

Jesus.


tlitb1
8/5/2014 10:44:40 pm

Did you stop reading when you got to the end of the part of the above article you quoted?

Are you saying Cook et al did "[find] out what scientists think" about the "consensus" question?

If so then I wonder how one could say they did this? If you look in the Cook et al paper SI you will see the instructions to the scientists include:

"Note: we are not asking about your personal opinion [about the "consensus" question] but whether each specific paper endorses or rejects..."

I think this is handled by our host here in this bit. After the bit you quoted:

"Not just about their abstracts, which you already rated – you're still adding unecessary layers of complexity and bias there."

It is clear once you get as far as reading this that the assertion by our host means it would be better actually asking the scientists a direct question about their personal opinion on the "consensus" question.



tlitb1
8/5/2014 10:58:29 pm

Excuse me,

My above comment was directed to willard 07/27/2014 6:00pm

Also apologies, don't know why I missed the end of the Cook et al instruction, but the full instruction was:


"Note: we are not asking about your personal opinion but whether each specific paper endorses or rejects (whether explicitly or implicitly) that humans cause global warming"


So I think anyone can see that Cook et al effectively asks the scientists to strap their heads into a narrow focus and just read the words of the 7 categories and tick the one they think matches their paper.

The category that got the most responses from the scientists, nearly half of all, was category 3 which the scientists read as follows:

"3 Implicit Endorsement: paper implies humans are causing global warming. E.g., research assumes greenhouse gases cause warming without explicitly stating humans are the cause"

If this paper has any power to ascertain what the actual scientists "think" about the "consensus" then I think we all must agree that the "consensus" can only be boiled down to the above rather weak statement. ;)

tlitb1
8/6/2014 07:37:52 pm

@tlitb1 08/06/2014 5:44am
"The category that got the most responses from the scientists, nearly half of all, was category 3..."

Sorry that is wrong, I'm paying the penalty for winging it without checking my notes.

For one thing the actual figures in the SI of Cook et al don't give per-scientist figures, but rather aggregate ratings of all the anonymous scientists for each paper.

So for example some papers have ratings like 3.33, or 2.75.

Looking at my notes I see the number of papers coming out at *exactly* endorsement level 3 is 519, which is only about 38% of the total of 1338 self-ratings that come out above the neutral level 4. So not nearly half.

However, having looked again, I now think the fact that 67 papers have non-whole-number endorsements, showing that authors of the same paper sometimes disagreed with each other as to which endorsement level their own paper matched among the 7 options, is an interesting point noticeably not raised, let alone discussed, in Cook et al.
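
For what it's worth, the checks described above are easy to redo from a per-paper table of mean self-ratings. A sketch with invented numbers (the real SI layout and column meanings may differ):

# Hypothetical per-paper mean author self-ratings on the 7-level scale
# (1-3 = endorsement, 4 = neutral, 5-7 = rejection).
mean_ratings = [3.0, 2.0, 3.33, 1.0, 3.0, 2.75, 4.0, 3.0, 5.0]

endorsing = [r for r in mean_ratings if r < 4]        # endorsement side of neutral
exactly_level_3 = [r for r in endorsing if r == 3.0]
fractional = [r for r in endorsing if r != int(r)]    # co-authors disagreed on the level

print(len(exactly_level_3), "of", len(endorsing), "endorsing papers rated exactly 3")
print(len(fractional), "papers with non-whole-number ratings")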

willard (@nevaudit) link
8/9/2014 03:14:15 am

> Are you saying Cook et al did "[find] out what scientists think" about the "consensus" question?

That depends on whether what they ask of the scientists represents what they think, and belief statements are notoriously *opaque*. I have no idea how to verify what scientists *really* think, nor why we should care about their private thoughts.

There are important limits to self-reports. Ask Joe's colleague:

> Also, all results were based on self-report, and *self-report on sensitive subjects like sex are not always completely trustworthy.*

http://www.psychologytoday.com/blog/rabble-rouser/201402/how-have-better-and-more-sex-maybe

What goes for sex should apply to climate science. Anyone can go in the database and see some scientists misclassifying their PAPERS. I can name names if you want.

***

I'd even go so far as to say that the whole idea that "direct surveys of scientists" are more valid goes against the very idea of scientific validity. To validate a belief, you need otters:

> Intersubjectivity emphasizes that shared cognition and consensus is essential in the shaping of our ideas and relations. Language, quintessentially, is viewed as communal rather than private. Therefore, it is problematic to view the individual as partaking in a private world, one which has a meaning defined apart from any other subjects. But in our shared divergence from a commonly understood experience, these private worlds of semi-solipsism naturally emerge.

http://en.wikipedia.org/wiki/Intersubjectivity

Until we build really powerful MRIs, nobody has *direct* access to beliefs. That applies to scientists themselves. Not only should we ask scientists what they think, but we need to verify whether what they say they think is true. So not only are C13's results validated by the ratings of the PAPERS (Joe's "Not just about their abstracts" is false, since they were asked to rate their PAPERS), but the other way around.

***

There sure are limitations to C13, and lots of room for criticisms. For the record, I publicly expressed some of mine more than a year ago, and my name is on Tol's acknowledgements. Look. I like a good blog rant like any ClimateBall (tm) player. But that's just a rant. It adds nothing of value.

Joe's outrage is acknowledged. Until he steps up and gets off his ad hom mode, nothing constructive will be done. Stupid rants feed on stupidity. By sheer self-fulfilling prophecy, suboptimal studies will continue to get done. Ranters will then declare victory. Non nova, sed nove.

What is written and publicly available in a corpus matters more than self-reports.


tlitb1
8/9/2014 10:30:04 am

@willard (@nevaudit) 08/09/2014 10:14am

"But that's just a rant. It adds nothing of value. "

I assume by "that" you mean the above article by our host.

If so then I don't think it is easy to show, let alone claim, that something like 'that' has "nothing of value". Certainly not by merely applying selected disdain. Even less so when that selected disdain is not supported when someone later responds to it.

You picked a short sentence from the above article, quoted it, and then implied it was easy to gainsay by saying "They did"

It seems clear now that you are not acknowledging your "They did" has any value at all in your further reply here.

I certainly don't see how you could have applied the same "They did" after quoting the clearly seen full implication:

"There's a much better method for finding out what scientists think — ask them. Not just about their abstracts.... "

"They did" doesn't really work there does it? ;)

"Until we build really powerful MRIs, nobody has a <em>direct</em> access to beliefs. That applies to scientists themselves."

You seem to have now moved on to informing us about truisms. Maybe you have a point about the limitations of social science surveys. It certainly is a challenging new idea, one that maybe someone could take up with our host. And while they're at it, maybe that someone could also take it up with rival 97%'ers Doran and Zimmerman too, since they too seemed to agree with our host and went the path of implying that:

"Direct surveys of scientists, with more useful questions, is a much more valid method..."

willard (@nevaudit) link
8/9/2014 11:42:20 am

> You picked a short sentence from the above article, quoted it, and then implied it was easy to gainsay by saying "They did".

Joe will appreciate that a second reader (see below for the first one) implies that asking scientists what they think "does not matter" (paraphrasing) that much in the over economy of Joe's rant.

It just so happens that it's the only *positive* prescription that followed his rant. The negative one was: don't use biased raters, which has been softened to "don't use raters who are *too* biased" in response to Joshua. So when confronted with the simple fact that classification tasks are done all the time by biased people, Joe resorts to special pleading.

Anyone who read the ABSTRACT of C13 should know that authors themselves were asked to classify their own PAPERS. These are facts (the rating, about the PAPERS) that can be easily verified. This should be owned.

I do hope we all accept that self-reports may not be the best way to appeal to scientific validity. This should be enough to set aside Leo's argument that C13's classification task was not *direct* enough, yet another instance of special pleading.

Those are two very simple points. And now, according to Leo's tweet, I'm supposed to be "defending" C13:

> My response to @nevaudit passionate defence of Cook et al, in he calls José Duarte's article a joke.

https://twitter.com/TLITB1/status/496999893698097152

So now my picking up a short sentence becomes a "passionate defence". Jesus.

The discussion that follows is quite splendid.

***

Now, should I follow Joe's advice and ignore his rant? After all, it's biased in spades and utterly unscientific.

What a farce.

tlitb1
8/9/2014 12:50:09 pm

@willard (@nevaudit) 08/09/2014 6:42pm

"Joe will appreciate that a second reader (see below for the first one) implies that asking scientists what they think "does not matter" (paraphrasing) that much in the over economy of Joe's rant. "

Sorry, I really don't think you can assert that our host must "appreciate" anything without showing you have some direct connection to his beliefs, such as maybe having been in discussion with him. You don't have an MRI plugged into his head, do you? :)

So this argument about his having changed his opinion on rater bias, even if true, certainly isn't owned by you!

"Anyone who read the ABSTRACT of C13 should know that authors themselves were asked to classify their own PAPERS. These are facts (the rating, about the PAPERS) that can be easily verified. This should be owned."

Those capitalisations are a clue, aren't they? ;)

The fact that our host, when describing the authors' involvement in self-rating, said "Not just about their abstracts..." when he should have said "Not just about their papers..." is true; it's an error. Maybe this is something our host needs to own? Maybe it was something you should have picked up on at the time? Especially if you are now as passionate about that error as your capitalised hinting implies?

"So now my picking up a short sentence becomes a "passionate defence". Jesus."

I hope this isn't considered too much of a challenge for you to answer, but just what work did you think "They did" was doing for you after your "picking up a short sentence" from the article above?

Remember you also said this about that short sentence:

"Joe's sentence is misleading. At best. It's not even close. Talk about ethics."

This certainly isn't substantiated by your later comments regarding third-party interactions our host has had with other commenters. Nor by the use of "abstract" in the following sentence, a point you now clearly find useful for hinting at a mistake, as if it had some unspoken import, without taking anything else from that sentence that might have supplied the context to correct your impression that the previous sentence was "misleading"!

Carrick
7/28/2014 02:09:10 am

The criticism isn't over the survey of authors, it's over the rankings by Cook and crew, which happen to be the primary thrust of the paper in terms of effort, resources consumed, content of the paper, etc.

The survey of authors was meant only as an external validation of their own rankings. It was not over 12,000 papers, but just 1,200 papers.

Thus the rankings provide most of the statistical power of the study, not the authors' self-rankings.
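
(To put rough numbers on that point, here is a back-of-the-envelope sketch of how the sampling error of a 97% proportion shrinks with sample size, using only the 12,000 and 1,200 figures cited in this comment. The script and its numbers are illustrative, not taken from the paper.)

import math

# Illustrative only: compare the sampling error of a 97% proportion
# estimated from ~12,000 abstract ratings vs ~1,200 author self-ratings
# (the counts cited in the comment above, not exact figures from C13).
def std_error(p, n):
    """Standard error of a sample proportion p based on n observations."""
    return math.sqrt(p * (1 - p) / n)

p_hat = 0.97
for n in (12000, 1200):
    se = std_error(p_hat, n)
    print("n = %5d: SE = %.4f, 95%% CI = +/- %.4f" % (n, se, 1.96 * se))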

You have either not read the paper, or if you have, you're in full sophist mode here.

Reply
Carrick
7/28/2014 02:13:18 am

In other news, Cook has argued that the sky is blue because of magic pixie dust. People have said Cook's argument is wrong, though they allow that the sky is blue. Willard wonders what the conflict is, since we all agree the sky is blue and that the vast consensus of papers that take a position on the subject support the hypothesis of AGW.

Reply
willard (@nevaudit) link
8/9/2014 12:38:28 pm

> The criticism isn't over the survey of authors [...]

*The* criticism, now. As if Joe's "There's a much better method for finding out what scientists think — ask them" played no role at all.

***

Joe's main criticism was that the raters are (too) biased. This is easy to see. It drips from every single paragraph of Joe's rant.

There's no need to respond to that main criticism. Joe refuted it himself:

> I could be Pat Robertson’s assistant, and it wouldn’t change anything here.

http://judithcurry.com/2014/07/27/the-97-feud/#comment-612289

There's nothing special to plead otherwise about raters.

Only the ratings matter. Speaking of which, Joe doubled down on it at Judy's, with our emphasis:

> The study turns out to be a scam, based on random political activists reading and rating climate science abstracts, the focus of their activism, where they passionately desired a particular outcome, and **by virtue of their subjective ratings were in a position to deliver that outcome**.

http://judithcurry.com/2014/07/27/the-97-feud/#comment-612239

Unless Joe can show that the ratings were (a) subjective and (b) biased, he has no case. Instead, Joe goes full ad hom mode, which begs the two questions on which his rant rests, at least insofar as the main point is concerned.

Let him rate some ABSTRACTS and see how subjective the ratings are. I bet he'll find them quite conservative. One could even argue that *because* the raters knew what to expect in reaction to C13, they had to take special care to rate the ABSTRACTS conservatively. Just imagine if C13 had been based on Mechanical Turk workers, like some social psych studies contrarians do not read. What Joe considers a bug may very well be a feature that provides some resilience to its results.
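
(If anyone did take up that offer, the usual way to check how far two sets of ratings diverge is an agreement statistic such as Cohen's kappa. A minimal sketch follows, with made-up ratings and hypothetical category labels; nothing in it comes from C13.)

from collections import Counter

# Cohen's kappa: agreement between two raters over the same items,
# corrected for the agreement expected by chance alone.
def cohen_kappa(ratings_a, ratings_b):
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical endorsement categories and toy ratings, for illustration only.
rater_1 = ["endorse", "endorse", "neutral", "endorse", "reject", "neutral"]
rater_2 = ["endorse", "neutral", "neutral", "endorse", "reject", "endorse"]
print("kappa = %.2f" % cohen_kappa(rater_1, rater_2))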

Anyway. Another time, perhaps. When everybody has calmed down.

***

We already dealt with his prescriptions above. But to counter the argument that unless one attacks the main point, one is a sophist:

> You know, your “but the main point” is so strong that I could use it against any criticism of C13 that does not contest the fact that there’s an overwhelming majority of scientists who endorse AGW, which is the main point of C13. And none that I’ve seen contests that fact. It is so strong as to undermine any kind of auditing practice that focuses on nits, details, and secondary points.

http://judithcurry.com/2014/08/04/is-the-road-to-scientific-hell-paved-with-good-moral-intentions/#comment-614727

In other words, I could use Carrick's argument to dismiss all of Joe's rant, and most of the auditing sciences.

While Joe's positive prescription (asking scientists) is secondary, it is not that secondary. On the contrary, considering that his main point is basically fallacious, I'd venture to say that it's the only important argument that needs to be addressed. There's nothing special about self-reports.

I thought it was easy to dismiss using Joe's "Jesus" and "what a farce".

Next time, keep your head up.

Thanks for playing,

w

Reply
Joe Duarte link
8/9/2014 06:58:33 pm

Willard, if I understand you correctly, you're making these claims:

1. I have to prove that the ratings were subjective.
2. I have to prove that the ratings were biased.
3. Identifying a conflict of interest in raters or researchers is "ad hominem", a fallacy.

I already refuted claim 3 in the other forum. It's strange that you would pull a trivial quote, but not address the argument I already gave. Point 3 has no hope of survival. We are never going to be interested in biased subjective raters in scientific studies (unless the study is about bias). There is nothing normal or necessary about using political activists to rate science abstracts pertinent to their activism. That's just not happening. If conservatives did it, it would never have been published, and I'm quite confident you would be tearing it apart for the very same reasons you are trying to evade now.

These were subjective ratings. That means they were subjective. By definition. When a researcher-rater reads some text and has to code it or decide what it means on various dimensions, that's subjective. No escaping that.

They were biased. Here's proof: "BTW, this was the only time I "cheated" by looking at the whole paper. I was mystified by the ambiguity of the abstract, with the author wanting his skeptical cake and eating it too. I thought, "that smells like Lindzen" and had to peek."

That's in the post above, the post you're commenting on. You've been silent on all the evidence. That quote makes this a fraud case. Perhaps I should have stressed that more -- this appears to be fraud, and is being investigated as such. There's another amazing quote too. There are many more. But we don't really need them. We know SS is absurdly biased and militant. They're the worst possible people for such a task. They hated some of the scientists whose work they were rating, had already savaged those scientists on their crazy website. They're at war. If you think people like that can be trusted to tell us what scientists think, we just disagree.

On the suggestion that I rate some abstracts myself and check their ratings... Nope, I'm not doing it. We're not qualified to be rating climate science abstracts. That's a scam. Those people, some of whom were luggage entrepreneurs, bloggers, and political hacks, were not qualified to rate climate science abstracts, and neither am I. This is probably why their ratings diverged so wildly from the authors', on such basic criteria as whether the paper was even taking some kind of position on AGW. They're not climate scientists. Even with climate scientists, this would be a strange method, since they wouldn't be blind; they'd know who some of the authors were behind an abstract, and would have rivalries and biases of their own. This approach is one of the worst ways to learn anything about a consensus.

One of the reasons we don't see this approach normally, and why it will never become acceptable, is that it **perverts the burden**. Scientists normally don't create their data – that's not the modal method. They observe and measure it in objective ways (like a thermometer – physical instruments are objective; humans and hacks reading something and deciding what it means is subjective, getting to your earlier absurdity). If we allowed this kind of junk science, where political partisans could gather and assign themselves to read science abstracts -- ones they wouldn't even understand or be qualified to rate reliably -- and decide what they mean, collating their results and sending them in to a scientific journal, we'd be screwed. Every partisan camp would join in the fun. We'd have conservative family groups trying to tell us what the science of Gardasil is, more Cooks cooking up junk papers on the environmental sciences, a squad of Mormons subjectively rating the research on gay marriage.

We'd be screwed because the burden would now be on us to dig into the subjective ratings of a bunch of partisan hacks, investing hundreds or thousands of hours of work on each study. Too much junk would be published, and too much time wasted cleaning it out. People who want to defend this ridiculous "study" need to run the model in their heads, and iterate it dozens of times over, with a bunch of different camps doing the same thing.

So no, I'm not going to go in and rate abstracts. That's one of their scams. "Oh, we've posted them. You can rate some of them yourself!" We're not qualified to do it. Many, many climate abstracts will be unnavigable to non-climate scientists. The whole premise of this study is ridiculous. Here is a recent abstract:

"Comparisons of trends across climatic data sets are complicated by the presence of serial correlation and possible step-changes in the mean. We build on heteroskedasticity and autocorrelation robust methods, specifically the Vogelsang–Franses (VF) nonparametric testing approach, to allow for a step-change in the mean (level shift) at a known or unknown date. The VF method provides a powerful mu

Joe Duarte link
8/10/2014 08:38:56 am

On the issue of "intersubjectivity" -- that won't have any epistemic standing here, or even any meaning. People can indeed have views, know what their views are, and tell us. It's ridiculous to assert that we need MRIs to know what scientists think, as though they can't talk, or as though their expressed beliefs are nonsense. If we were dumb enough to embrace the idea that people don't know what they think, it would collapse everything, including all of your argument. Because it would mean we would have no use for independent raters reading abstracts and deciding what they mean, nor would we have any use for scientific papers as indicators of what scientists think.

I don't understand some of the thread. It's unclear who is talking sometimes, and where they're quoting or addressing someone else. But the arguments I've seen so far are terrible. I'm not interested in this level of argumentation, of people not knowing what "subjective" means, of people saying we need an MRI to know what people think, and all these random quotes of irrelevant philosophical perspectives or what Richard Tol says about AGW. None of these arguments are relevant to the Cook paper, which I expect to be retracted. We have to be serious.

willard (@nevaudit) link
8/12/2014 06:05:18 am

> It's strange that you would pull a trivial quote, but not address the argument I already gave.

That quote not only addresses the point you made: it refutes it. That the raters are biased does not imply that the ratings are biased. Exposing the raters' bias does not suffice to show that the ratings were biased.

So refuting your argument is quite trivial: distinguishing raters' and ratings' biases suffices.

***

> These were subjective ratings. That means they were subjective. By definition. When a researcher-rater reads some text and has to code it or decide what it means on various dimensions, that's subjective. No escaping that.

That the raters were not peanut sorters is a trivial observation, I agree. But unless it is shown that the classification task was problematic, this observation is not even an argument.

This generalizes the first point I made, which rests on a trivial claim you yourself hold.

***

> If you think people like that can be trusted to tell us what scientists think, we just disagree.

If you think you can get away with such a pathetic strawman, Joe, then I don't even know how to end this sentence. Even people I distrust can be right.

I hope you realize that the quote I pulled is not that trivial.

***

> We're not qualified to be rating climate science abstracts.

The first claim leads to an interesting argument. This argument rests on an empirical matter. It can't be settled from your armchair, Joe.

This is not a trivial claim.

***

> This is probably why their ratings diverged so wildly from the authors', on such basic criteria as whether the paper was even taking some kind of position on AGW.

Until some abstracts get judged, this "probable" can only be trivially empty.

***

> If we allowed this kind of junk science, where political partisans could gather and assign themselves to read science abstracts -- ones they wouldn't even understand or be qualified to rate reliably -- and decide what they mean, collating their results and sending them in to a scientific journal, we'd be screwed.

That's a Duarte slippery slope right there.

Even political partisans like you, Joe, can do sound science.

***

So let's recap.

Whatever you are doing right now, Joe, is not science. Just like Jim did before you, you're just bitching:

http://neverendingaudit.tumblr.com/post/89473861449

On the other hand, Jim was funny.

willard (@nevaudit) link
8/12/2014 06:35:56 am

> People can indeed have views, know what their views are, and tell us.

Indeed they can. That, in turn, does not imply their self-reports are accurate, faithful, or even relevant.

The claim that we would do better to ask scientists what they think has two big problems. First, self-reports have limitations. Second, private beliefs matter less than what is written in the lichurchur.

***

> It's ridiculous to assert that we need MRIs to know what scientists think, as though they can't talk, or as though their expressed beliefs are nonsense.

As it happens, that's not what was asserted. What was asserted was that we don't have **direct** access to these beliefs. The main argument is still that self-reports have limitations. For instance, we know that some scientists misclassified their own papers.

***

So here's the deal. Just as we may argue for a plurality of voices in science (see Joe's editorial with Haidt and others), we may argue for a plurality of means to validate scientific results. That is what validation should be in an empirical venture: a multiplicity of procedures that improve our chances of getting inter-subjective results.

If what I'm saying against preferring self-reports has any "epistemic standing here" (as if Joe's was such a special place), then I don't know what could.

Good bye,

w

Joe Duarte link
8/14/2014 07:44:02 am

willard, we already know the study was false / a scam. They included a bunch of social science papers and public surveys, even a survey of cooking stove use. Nuccitelli even wanted to include a psychology paper about white males and 'denial' as evidence of scientific endorsement. They are very, very confused. The world thought they were talking about climate science. They weren't. I'll expose this more fully in the near future.

This was all too predictable. No one should ever have defended this study or this method. We will never be interested in militant political activists subjectively rating science abstracts they do not even understand. That method has no future, and hopefully, no past, if it's retracted as it should be.

You're confusing being a scientist or researcher with being a political activist and subjective rater, reading science abstracts outside of one's field and deciding what they mean with regard to the political issue that is the focus of one's activism. That has nothing to do with science. We don't do that. Scientists never do that. And we never will. Because it's ridiculous and no one could trust the results of that process. So your nonsense about how I can still do clean science even though I have political views, or anyone else can too... that has nothing to do with putting oneself in the position of subjectively rating abstracts on a political issue where one has a conflict of interest. Your comments there seem disingenuous, since you tried to refute my views on the other thread by quoting my About Me page where I say I'm a libertarian. I don't believe you're arguing in good faith anymore, and your arguments aren't substantive.

(We can't access anyone's "beliefs" with an MRI, or an fMRI -- your whole discussion of "direct access" to beliefs is nonsense.)

Joe Duarte link
8/14/2014 07:50:26 am

Someone said I made an error in saying they asked scientists to rate their abstracts? They asked them to rate their "papers", not their abstracts?

If this is true, I'll correct it. But this isn't good news for them. It means they didn't hold their variable constant. But I doubt that mattered much. What matters more is that the study is invalid because the method is invalid. It's also invalid because they collaborated online when they say in the paper that they used independent raters. At least one rater openly admitted to fraud in their forum. And it's also invalid because they included a bunch of social science papers, surveys of the public, even on cooking stove use, as evidence of the consensus. The world assumed they were talking about climate science, about actual scientific evidence, things that have some epistemic connection to "scientific knowledge of AGW".

assman
10/6/2014 07:47:55 am

Willard's basic argument is that we have to prove that Cook's methodology is flawed. But we don't have to prove anything. I start with the idea that all scientific research is garbage. The individual researchers have to prove me wrong.

We don't have to prove Cook's methodology is flawed...he has to prove it is not flawed.

We don't have to prove that biased raters lead to biased ratings. It's enough to suspect that. Cook's burden of proof is to show that isn't the case.

In other words, the burden of proof is on Cook, not us.

MikeR
7/28/2014 04:41:50 pm

http://klimazwiebel.blogspot.com/2014/06/misrepresentation-of-bray-and-von.html?showComment=1403709840198#c5378143380060518114
Bray here gives something that I think was not available before: the full downloadable data set for their 2008 survey (18% response rate).
I'm not the one to do it, but I would love to see cross-tabs of the results: Are the "skeptical" scientists the same ones on various questions, or is there a considerably larger group of them who collectively doubt at least one of the critical issues of the consensus? I can't tell from the individual response histograms whether it's the same 1/6 or so every time.
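
(For anyone who does want to take that on, here is a minimal sketch of the cross-tabs in question, assuming the downloadable data is exported to CSV. The file name and the two question columns are hypothetical placeholders, not the survey's actual variable names.)

import pandas as pd

# Hypothetical file and column names; substitute the survey's real ones.
df = pd.read_csv("bray_vonstorch_2008.csv")

# Cross-tabulate responses to two questions to see whether the same
# respondents sit at the skeptical end of both scales.
table = pd.crosstab(df["q_warming"], df["q_attribution"], margins=True)
print(table)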

Reply
MikeR
8/12/2014 02:03:18 am

Uh-huh: http://pubs.acs.org/doi/pdf/10.1021/es501998e
Looks interesting. As expected, it shows a solid consensus (not 97%!) and a minority of more skeptical scientists. The result on equilibrium climate sensitivity was a surprise.

Reply
Joe Duarte link
8/14/2014 04:05:42 pm

Looks like a good method. I don't trust anything Cook is involved with, since he can't be trusted, given that he counted a bunch of public surveys and psychology studies as scientific evidence of anthropogenic climate change, lied about his method, and didn't retract the paper on his own. Having his name on a journal article is like deciding in advance to have less impact.

Reply


