UPDATE SEPTEMBER 11, 2014:
(This post was my initial take. For a better and more up-to-date report, go here. For follow-up, go here.)

The paper includes a bunch of psychology studies and public surveys as evidence of scientific endorsement of anthropogenic climate change, even a survey of cooking stove use. This vacates the paper. Lots of things vacate the paper, such as its inherently invalid and untrustworthy method, and the false claim that they used independent raters. But I think we will have broad agreement that the inclusion of psychology papers and public surveys vacates the paper. The world thought this was about climate science.

This is ridiculous, but trivially predictable when you have human raters who have an obvious bias with respect to the subject of their ratings, who desire a specific outcome for the study, and who are empowered to deliver that outcome via their ratings. What happened here is exactly why we could never accept a "study" based on such a method.

The following papers were rated as endorsement and included in their 97% figure. Dana Nuccitelli even wanted to include a psychology paper about white males and denial as evidence of scientific endorsement. It's jaw-dropping that someone who is supposed to inform the public on science would want to do that to the world, to generate a consensus figure based on studies that have no bearing on the consensus. There will be more such papers for those willing to invest time in this scam paper, and I haven't listed all that I found yet. I'll write this story up in a separate post when I have time, and for a news magazine. The broader ethics disaster here is going into a Nature submission:

Chowdhury, M. S. H., Koike, M., Akther, S., & Miah, D. (2011). Biomass fuel use, burning technique and reasons for the denial of improved cooking stoves by Forest User Groups of Rema-Kalenga Wildlife Sanctuary, Bangladesh. International Journal of Sustainable Development & World Ecology, 18(1), 88–97.
(This is a survey of the public's stove choices in Bangladesh, and discusses their value as status symbols, defects in the improved stoves, the relative popularity of cow dung, wood, and leaves as fuel, etc. They mention climate somewhere in the abstract, or perhaps the word denial in the title sealed their fate.)

Boykoff, M. T. (2008). Lost in translation? United States television news coverage of anthropogenic climate change, 1995–2004. Climatic Change, 86(1-2), 1–11.

De Best-Waldhober, M., Daamen, D., & Faaij, A. (2009). Informed and uninformed public opinions on CO2 capture and storage technologies in the Netherlands. International Journal of Greenhouse Gas Control, 3(3), 322–332.

Tokushige, K., Akimoto, K., & Tomoda, T. (2007). Public perceptions on the acceptance of geological storage of carbon dioxide and information influencing the acceptance. International Journal of Greenhouse Gas Control, 1(1), 101–112.

Egmond, C., Jonkers, R., & Kok, G. (2006). A strategy and protocol to increase diffusion of energy related innovations into the mainstream of housing associations. Energy Policy, 34(18), 4042–4049.

Gruber, E., & Brand, M. (1991). Promoting energy conservation in small and medium-sized companies. Energy Policy, 19(3), 279–287.

Şentürk, İ., Erdem, C., Şimşek, T., & Kılınç, N. (2011). Determinants of vehicle fuel-type preference in developing countries: a case of Turkey.

(This was a web survey of the general public in Turkey.)

Grasso, V., Baronti, S., Guarnieri, F., Magno, R., Vaccari, F. P., & Zabini, F. (2011). Climate is changing, can we? A scientific exhibition in schools to understand climate change and raise awareness on sustainability good practices. International Journal of Global Warming, 3(1), 129–141.

(This paper is literally about going to schools in Italy and showing an exhibition.)

Palmgren, C. R., Morgan, M. G., Bruine de Bruin, W., & Keith, D. W. (2004). Initial public perceptions of deep geological and oceanic disposal of carbon dioxide.
Environmental Science & Technology, 38(24), 6441–6450.

(Two surveys of the general public.)

Semenza, J. C., Ploubidis, G. B., & George, L. A. (2011). Climate change and climate variability: personal motivation for adaptation and mitigation. Environmental Health, 10(1), 46.

(This was a phone survey of the general public.)

Héguy, L., Garneau, M., Goldberg, M. S., Raphoz, M., Guay, F., & Valois, M.-F. (2008). Associations between grass and weed pollen and emergency department visits for asthma among children in Montreal. Environmental Research, 106(2), 203–211.

(They mention in passing that there are some future climate scenarios predicting an increase in pollen, but their paper has nothing to do with that. It's just medical researchers talking about asthma and ER visits in Montreal, in the present. They don't link their findings to past or present climate change at all (in their abstract), and they never mention human-caused climate change – not that it would matter if they did.)

Lewis, S. (1994). An opinion on the global impact of meat consumption. The American Journal of Clinical Nutrition, 59(5), 1099S–1102S.

(Just what it sounds like.)

De Boer, I. J. (2003). Environmental impact assessment of conventional and organic milk production. Livestock Production Science, 80(1), 69–77.

Acker, R. H., & Kammen, D. M. (1996). The quiet (energy) revolution: analysing the dissemination of photovoltaic power systems in Kenya. Energy Policy, 24(1), 81–111.

(This is about the "dissemination" of physical objects, presumably PV power systems in Kenya. To illustrate the issue here, if I went out and analyzed the adoption of PV power systems in Arizona, or of LED lighting in Lillehammer, my report would not be scientific evidence of anthropogenic climate change, or admissible into a meaningful climate consensus. Concretize it: Imagine a Mexican walking around counting solar panels, obtaining sales data, typing in MS Word, and e-mailing the result to Energy Policy. What just happened?
Nothing relevant to a climate consensus.)

Vandenplas, P. E. (1998). Reflections on the past and future of fusion and plasma physics research. Plasma Physics and Controlled Fusion, 40(8A), A77.

(This is a pitch for public funding of the ITER tokamak reactor, and compares it to the old INTOR. For example, we learn that the major radius of INTOR was 5.2 m, while ITER's is 8.12 m. I've never liked the funding conflict-of-interest argument against the AGW consensus, but this is an obvious case. The abstract closes with "It is our deep moral obligation to convince the public at large of the enormous promise and urgency of controlled thermonuclear fusion as a safe, environmentally friendly and inexhaustible energy source." I love the ITER, but this paper has nothing to do with climate science.)

Gökçek, M., Erdem, H. H., & Bayülken, A. (2007). A techno-economical evaluation for installation of suitable wind energy plants in Western Marmara, Turkey. Energy, Exploration & Exploitation, 25(6), 407–427.

(This is a set of cost estimates for windmill installations in Turkey.)

Gampe, F. (2004). Space technologies for the building sector. Esa Bulletin, 118, 40–46.

(This is a magazine article – a magazine published by the European Space Agency. Given that the ESA calls it a magazine, it's unlikely to be peer-reviewed, and it's not a climate paper of any kind – after making the obligatory comments about climate change, it proceeds to its actual topic, which is applying space vehicle technology to building design.)

Ha-Duong, M. (2008). Hierarchical fusion of expert opinions in the Transferable Belief Model, application to climate sensitivity. International Journal of Approximate Reasoning, 49(3), 555–574.

(The TBM is a theory of evidence and in some sense a social science theory – JDM applied to situations where the stipulated outcomes are not exhaustive, and thus where the probability of the empty set is not zero.
This paper uses a dataset (Morgan & Keith, 1995) that consists of interviews with 16 scientists in 1995, and applies TBM to that data. On the one hand, it's a consensus paper (though dated and small-sampled), and would therefore not count. A consensus paper can't include other consensus papers – that's circular. On the other hand, it purports to estimate the plausible range of climate sensitivity, using the TBM, which could make it a substantive climate science paper. This is ultimately moot given everything else that happened here, but I'd exclude it from a valid study, given that it's not primary evidence, and the age of the source data. (I'm not sure if Ha-Duong is talking about TCS or ECS, but I think it's ECS.))

Douglas, J. (1995). Global climate research: Informing the decision process. EPRI Journal.

(This is an industry newsletter essay – the Electric Power Research Institute. It has no abstract, which would make it impossible for the Cook crew to rate it. It also pervasively highlights downward revisions of warming and sea level rise estimates, touts Nordhaus' work, and stresses the uncertainties – everything you'd expect from an industry group. For example: "A nagging problem for policy-makers as they consider the potential costs and impacts of climate change is that the predictions of change made by various models often do not agree." In any case, this isn't a climate paper, or peer-reviewed, and it has no abstract. They counted it as Implicit Endorsement – Mitigation. (They didn't have the author listed in their post-publication database, so you won't find it with an author search.))

-----------------------------------------------------------------------

Original post below:

Ignore them completely – that's your safest bet right now.
Most of these studies use political activists as the raters – activists who desired a specific outcome for the studies (to report the highest consensus figure possible), and who sometimes collaborated with each other in their rating decisions. All of this makes these studies completely invalid and untrustworthy (and by customary scientific standards, completely unpublishable). I had no idea this was happening. This is garbage, and a crisis. It needs to stop, and those papers need to be retracted immediately, especially Cook, et al (2013), as we now have evidence of explicit bias and corruption on the part of the raters. (If that evidence emerged during the actual coding period, it would be fraud.)

PAUSE BUTTON: This issue has nothing to do with the reality of the consensus, a reality that was evident before this political operation/study unfolded. I am not a "denier", or even a skeptic. I don't know enough, or have an argument that would lead me to be even a "lukewarmer". There are 7 billion people on this earth, and we're not all sorted into good people and deniers. I'm quite confident that there's a consensus – a large majority – of climate scientists who endorse both that the earth has warmed over the last 60+ years, and that human activity caused most of it. The warming itself is a descriptive fact, not a theory or inference. I'd be quite surprised, amazed, if the basic theory of anthropogenic forcing as a principal cause turned out to be false, and somewhat surprised if AGW turns out to be mild, like 1° C. (Unfortunately, there is little research on scientists' views on the likely severity of future warming. A consensus only that humans have caused warming – a consensus so vague and broad – is not very useful. The Cook study would be unhelpful even if it were valid, which it is not.)

Back to the program...
In social science, it's not uncommon to use trained human raters to subjectively rate or score some variable — it can be children's behavior on a playground, interviews of all kinds, and often written material, like participants' accounts of a past emotional experience. And we have a number of analytical and statistical tools that go with such rating studies. But we would never use human raters who have an obvious bias with respect to the subject of their ratings, who desire a specific outcome for the study, and who would be able to deliver that outcome via their ratings. That's completely nuts. It's so egregious that I don't think it even occurs to us as something to look out for. It never happens. At least I've never heard of it happening. There would be no point in running such a study, since it would be dismissed out of hand and lead to serious questions about your ethics.

But it's happening in climate science. Sort of. These junk studies are being published in climate science journals, which are probably not well-equipped to evaluate what are ultimately social science studies (in method). And I assume the journals weren't aware that these studies used political activists as raters.

Examples of the unbelievable bias and transparent motives of the raters in Cook, et al (2013) below. These are excerpts from an online forum where the raters apparently collaborated with each other in their ratings. It's worse than that – the first example is evidence of fraud if this was during the operational rating period. If it was during training, it's practice for fraud.

"BTW, this was the only time I "cheated" by looking at the whole paper. I was mystified by the ambiguity of the abstract, with the author wanting his skeptical cake and eating it too. I thought, "that smells like Lindzen" and had to peek."

Let's look at how the paper described their method:

"Abstracts were randomly distributed via a web-based system to raters with only the title and abstract visible.
All other information such as author names and affiliations, journal and publishing date were hidden."

Hence the fraud issue. Next example:

"Man, I think you guys are being way too conservative. Papers that talk about other GHGs causing warming are saying that human GHG emissions cause global warming. How is that not an implicit endorsement? If CFC emissions cause warming because they're GHGs, then CO2 emissions cause global warming for the same reason. That's an implicit endorsement."

One wonders if a passing bird counts as implicit evidence of the consensus. This is what we call a nonfalsifiable hypothesis. If this was the live coding period, this is a joke. A sad, ridiculous joke. And it's exactly what you'd expect from raters who are political activists on the subject they're rating.

Who in their right minds would use political climate activists as raters for a serious report on the consensus? This is so nuts that I still have a hard time believing it actually happened, that the famous 97% paper was just a bunch of activists rating abstracts. I've called on the journal – Environmental Research Letters – to retract this paper. I'm deeply, deeply confused how this happened. If this is what we're doing, we should just call it a day and go home – we can't trust journals and science organizations on this topic if they're going to pull stunts like this.

Moreover, the raters weren't generally scientists, much less climate scientists. One of the raters is a former bike messenger who founded Timbuk2, a company that makes great bags (Rob Honeycutt). I've got mad props for him for what he's done with Timbuk2 – for anyone who starts their own business and follows their vision. That's very hard to do. But I'm not going to want luggage entrepreneurs to be rating climate studies or interpreting science for the world. I'll buy you a beer any day of the week Rob, but I just can't sign off on this. Other raters are just bloggers. I don't mean scientists who blog.
I just mean bloggers, who are not scientists. Nothing against bloggers – I'm just not feeling that, don't need bloggers to be rating climate science abstracts. Another rater is only identified by an anonymous username – logicman. Who can argue with logicman? Is there a big L on his uniform? Where's emotionman been lately? What's fallacygirl up to? Anyway, probably no one needs to be subjectively rating climate abstracts, but if anyone did, it would have to be climate scientists. Is this controversial in some cultures?

More importantly, I don't care who you are – even if you're a staunch liberal, deeply concerned about the environment and the risks of future warming, this isn't something you should tolerate. If we're going to have a civilization, if we're going to have science, some things need to be non-political, some basic rules need to apply to everyone. I hope we can all agree that we can't seriously estimate the AGW consensus by having political activists rate climate paper abstracts. It doesn't matter whether the activists come from the Heritage Foundation or the Sierra Club, Timbuk2 or Eagle Creek – people with a vested ideological interest in the outcome simply can't be raters.

Also note that anyone who wants to defend this nonsense, who wants to argue that it's fine for political activists to subjectively rate science abstracts – which they won't be qualified to even understand – on issues central to their political activism, needs to also accept the same method when executed by partisans on the other side. If Heartland gathers a bunch of activists to read abstracts and tell us what they mean, all the Cook defenders need to soberly include the Heartland study. The AAAS needs to include the Heartland study in their reports, including it in their average (they didn't do an average, just cherry-picked junk studies).
If a squad of Mormons reads the abstracts of a bunch of studies on the effects of gay marriage, and sends their ratings to a journal, Cook defenders should be cool with that, and should count it as knowledge about the consensus on gay marriage. Of course, these scenarios would suck. This method perverts the burden – it allows any group of hacks to present their subjective "data", putting the burden on us, on everyone else, to do a bunch of work to validate their ratings. We should never be interested in studies based on activists reading and rating abstracts – it's a road we don't want to travel. Researchers normally get their data by observation – they don't create it, not normally.

We don't need random people to interpret climate science for us, to infer the meaning of abstracts, to tell us what scientists think. That's an awful method – extremely vulnerable to bias, noise, incompetence, and poor execution. The abstracts for many papers won't even have the information such studies are looking for, and are simply not written at the level of abstraction of "this study provides support for human-caused warming", or "this study rejects human-caused warming". Most climate science papers are written at a more granular and technical level, are appropriately scientifically modest, and are not meant to be political chess pieces.

(Updated paragraph: I had incorrectly suggested that they asked authors to self-rate their abstracts, just as Cook's raters did, when in fact they asked them to rate their papers. The failure to hold that variable constant complicates things, but admittedly it would be very difficult for authors to strictly rate an abstract, as opposed to the whole paper they wrote. None of this matters anymore given the much larger issues that have emerged.)

There's a much better method for finding out what scientists think — ask them.
Direct surveys of scientists, with more useful and specific questions, are a much more valid method than having ragtag teams of unqualified political activists divine the meanings of thousands of abstracts. Interestingly, but not surprisingly, direct survey studies tend to report smaller consensus figures than the abstract rating studies (I'll have more on that later.) The consensus will be strong regardless, so it's especially confusing why people feel the need to inflate it.

In the second part of their study, Cook et al surveyed authors of the papers in their dataset – that's not at all the way to survey climate scientists, since their paper search seems to have bizarre and unexplained results, e.g. it excluded everything Richard Lindzen published after 1997. Their pool of authors is invalid if we don't know whether the search had some selection biases. It's an arbitrary pool – they'd need to validate that search and its results before we could trust it, and they should've done that at the outset. And the fact that they included psychologists, social scientists, pollsters, engineers and other non-climate-science or even non-natural-science papers in the 97% (as endorsement) makes their survey of authors moot.

(For subjective ratings of abstracts to be a valid and useful method, it would need to be a carefully selected pool of raters, without ideological agendas, implementing a very specific and innovative method, under strict procedures of independence. I can imagine philosophy of science questions that might be answerable by such methods, based on things like the usage of certain kinds of words, the way hypotheses are framed and results reported, etc. – but much of that could be done by computers. The studies that have been published are nothing like this, and have no hope of being valid.)

NOTE: The Cook, et al data was leaked or hacked a few months ago – I'm confused by what's going on here.
Cook allegedly wouldn't release some of his data, and ultimately a bunch of data was hacked or scraped off a server, and it included the raters' online discussion forum. Climate science features far too many stories of people refusing to release their data, and mysteriously hacked data. Brandon Shollenberger has posted the data online. It's amazing that if it weren't for him, we wouldn't know how sketchy the study truly was. There's much more to report – the issues raised by the leaked dataset extend far beyond the quotes above and rater bias.

The University of Queensland has apparently threatened to sue Shollenberger, on some sort of "intellectual property" grounds. Australia is one of my favorite countries, but we need to stand up for him. To the best of my knowledge, he hasn't done anything wrong – he hasn't posted any sort of sensitive information or anything that would violate our core principles of scientific ethics. The identities of the raters were not confidential to begin with, so there was no new disclosure there. He's exposed the cartoonish bias and corruption of the rating process that underlay this "study", and in so doing, he's served the interests of scientific ethics, not violated them. Even if those online discussions took place during the training period, it would still be alarming evidence of bias, but other evidence suggests this was not a training period.

I've never heard anyone call scientific data "intellectual property" before – that's an interesting legal theory, since this is not about an invention or an original creative work. If scientists were to get into the habit of treating data as IP, or otherwise proprietary, it would impair scientific progress and quality control – it would also violate the basic premise behind peer review. Shollenberger's disclosures took place in a context where the authors apparently refused to release all of their data, so I'm not sure what other options there were for him.
In other words, he's a whistleblower. You can contact the research governance people at the University of Queensland here (scroll to the bottom of that page).

Update: In their legal threat letter, the University of Queensland says that the letter itself is intellectual property, and that publication of the letter is cause for separate legal action. What? That's like an NSL. Is this new? What kind of upside-down bizarro world is this? You can send someone a threat letter, copyright the letter, and force them not to disclose it? This is unbelievably creepy.

Update 2: Political activism is not a vice. I'm not saying it's a vice. If you think the left, or right, or libertarian, or Rastafarian perspective is true, do your thing. People have the right to be left-wing activists, conservative activists, environmental activists, wherever their minds and their values have taken them. I'm a pro-immigration activist sometimes. But I will never be a subjective rater of textual material in a study whose outcome would potentially serve my pro-immigration cause, especially if my ratings could deliver that outcome, nor will I ever assemble a team of pro-immigration activists to perform such ratings. Are we being serious right now? This is ridiculous. We can't do that. Do we want to call what we do science? This shouldn't be hard.
38 Comments
Barry Woods
7/23/2014 07:07:32 am
(John Cook - from the leaked The Consensus Project - Skeptical Science forum)
Carrick
7/23/2014 07:31:40 am
The ethics approval is located here:
7/23/2014 03:57:31 pm
The ethics approval did NOT cover the SKS ratings; it only covered a separate and much smaller author self-rating survey and was submitted after the SKS ratings were nearly all completed. Indeed, it refers to the SKS ratings as having been done by "Team members".
Carrick
7/24/2014 02:24:32 am
> The ethics approval did NOT cover the SKS ratings;
James_V
7/23/2014 09:07:01 am
This is one of the more egregious examples of shoddy climate science, but sadly there is more where that came from. Skeptics have known this sort of thing was happening but nobody will listen. I guess anybody with any doubts can't be trusted? Sad state of affairs.
Joshua
7/23/2014 09:33:25 am
Not to defend the paper - but it seems to me to be unrealistic to use potential bias on the part of raters to invalidate a social science survey out of hand. Activists conduct social science surveys all the time. That activists have rated evidence could be a valid reason for a high level of scrutiny - but that doesn't mean that you should consider a survey invalid because of assumed bias on the part of the raters. Any scientist interested in assessing hypotheses could be considered an "activist" with an inherent bias.
Joe Duarte
7/23/2014 03:03:46 pm
Hi Joshua – The bias here is far too extreme to permit in a scientific publication. This wasn't a survey study -- they didn't hand out surveys, and survey studies don't involve ratings by other people. Surveys are completed by the participants. The researcher has no role in entering responses, unless it's an interview. (They surveyed some scientists about their abstracts after the activists had already rated them – there are a few issues with what happened there, but I don't want to distract from the core issue right now, which is that this study was a scam. The study actually has multiple points of failure, multiple points that would call for retraction. It won't survive a light breeze, much less a serious evaluation. I'm only focusing on a couple of issues.)
Joe Duarte
7/23/2014 03:21:26 pm
FYI, I should caution people – I don't recommend you create accounts on their site. The last time I tried to login there a few months ago, Google Chrome flashed a SQL injection warning, and I bugged out. Their database may have been infected with malware, or it might have been specifically targeted at me, triggered by my login.
Seth
7/27/2014 04:14:52 pm
re: "(They surveyed some scientists about their abstracts after the activists had already rated them – there are a few issues with what happened there, but I don't want to distract from the core issue right now, which is that this study was a scam. ..."
Joe Duarte
7/27/2014 05:25:29 pm
Hi Seth - Nice point. Their ratings didn't agree with the scientists' own ratings. See Table 5. For example, for categorization as "no position", they had 62.5%, while the authors who responded were at 35.5%.
A. Scott
7/29/2014 04:35:36 am
What a load of hooey Joshua ... the electrons "wasted" on Cook, Lewandowsky et al are because of the clearly biased results and shoddy, at best - unethical at worst - practices of Cook, Lewandowsky and that entire gang.
7/23/2014 09:59:04 am
To me, the bigger issue is not so much that Cook et al did the study the way they did; people do and submit bad studies all the time. The more serious issue is that ERL accepted it, *and* that it was their most highly downloaded paper in 2013. Now maybe the latter's a bad thing and maybe a good one, that depends on what all those downloaders think about it once they read it, if they do. But the fact is, ERL published it, and that's on them. And Climatic Change has now followed suit with van der Linden et al, on how best to communicate "consensus" findings, an unbelievably trite paper.
7/23/2014 10:08:40 am
> The more serious issue is that ERL accepted it, *and* that it was their most highly downloaded paper in 2013.
Carrrick
7/27/2014 07:22:34 pm
Cut the sophism, willard.
> Cut the sophism [...]
tlitb1
8/5/2014 10:44:40 pm
Did you stop reading when you got to the end of the part of the above article you quoted?:
tlitb1
8/5/2014 10:58:29 pm
Excuse me,
tlitb1
8/6/2014 07:37:52 pm
@tlitb1 08/06/2014 5:44am
8/9/2014 03:14:15 am
> Are you saying Cook et al did "[find] out what scientists think" about the "consensus" question?
8/9/2014 03:14:25 am
> Are you saying Cook et al did "[find] out what scientists think" about the "consensus" question?
tlitb1
8/9/2014 10:30:04 am
@willard (@nevaudit) 08/09/2014 10:14am
8/9/2014 11:42:20 am
> You picked a short sentence from the above article, quoted it, and then implied it was easy to gainsay by saying "They did".
tlitb1
8/9/2014 12:50:09 pm
@willard (@nevaudit) 08/09/2014 6:42pm
Carrick
7/28/2014 02:09:10 am
The criticism isn't over the survey of authors, it's over the rankings by Cook and crew, which happen to be the primary thrust of the paper in terms of effort, resources consumed, content of the paper, etc.
Carrick
7/28/2014 02:13:18 am
In other news, Cook has argued that the sky is blue because of magic pixie dust. People have said Cook's argument is wrong, though they allow the sky is blue. Willard wonders what the conflict is, since we all agree the sky is blue and that the vast consensus of papers that take a position on the subject support the hypothesis of AGW.
8/9/2014 12:38:28 pm
> The criticism isn't over the survey of authors [...]
8/9/2014 06:58:33 pm
Willard, if I understand you correctly, you're making these claims:
8/10/2014 08:38:56 am
On the issue of "intersubjectivity" -- that won't have any epistemic standing here, or even any meaning. People can indeed have views, know what their views are, and tell us. It's ridiculous to assert that we need MRIs to know what scientists think, as though they can't talk, or as though their expressed beliefs are nonsense. If we were dumb enough to embrace the idea that people don't know what they think, it would collapse everything, including all of your argument. Because it would mean we would have no use for independent raters reading abstracts and deciding what they mean, nor would we have any use for scientific papers as indicators of what scientists think.
8/12/2014 06:05:18 am
> It's strange that you would pull a trivial quote, but not address the argument I already gave.
8/12/2014 06:35:56 am
> People can indeed have views, know what their views are, and tell us.
8/14/2014 07:44:02 am
willard, we already know the study was false / a scam. They included a bunch of social science papers and public surveys, even a survey of cooking stove use. Nuccitelli even wanted to include a psychology paper about white males and 'denial' as evidence of scientific endorsement. They are very, very confused. The world thought they were talking about climate science. They weren't. I'll expose this more fully in the near future.
8/14/2014 07:50:26 am
Someone said I made an error in saying they asked scientists to rate their abstracts? They asked them to rate their "papers" not their abstracts?
assman
10/6/2014 07:47:55 am
Willard's basic argument is that we have to prove that Cook's methodology is flawed. But we don't have to prove anything. I start with the idea that all scientific research is garbage. The individual researchers have to prove me wrong.
MikeR
7/28/2014 04:41:50 pm
http://klimazwiebel.blogspot.com/2014/06/misrepresentation-of-bray-and-von.html?showComment=1403709840198#c5378143380060518114
MikeR
8/12/2014 02:03:18 am
Uh-huh: http://pubs.acs.org/doi/pdf/10.1021/es501998e
8/14/2014 04:05:42 pm
Looks like a good method. I don't trust anything Cook is involved with, since he can't be trusted, given that he counted a bunch of public surveys and psychology studies as scientific evidence of anthropogenic climate change, lied about his method, and didn't retract the paper on his own. Having his name on a journal article is like deciding in advance to have less impact.
José L. Duarte – Social Psychology, Scientific Validity, and Research Methods.