Jose L. Duarte

How one paleo-participant can change the outcome of a study

1/6/2015


 
Lewandowsky, Gignac, and Oberauer (2013) authored "The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science" in PLOS ONE. (Paper here.)

This study has many of the same features as their Psychological Science scam. In that study, they falsely linked belief in the moon-landing hoax to climate skepticism when in fact only three participants out of 1145 held both of those beliefs, and over 90% of climate skeptics in their sample rejected the moon-landing hoax.

In the PLOS ONE study, we see the same broken conspiracy items, e.g. the New World Order item erroneously refers to the NWO as a group, the JFK item doesn't describe much of a conspiracy, the free market items are written from a leftist perspective, using proprietary leftist terminology. The validity of this study would be in doubt regardless of the results.

A much more serious problem, however, is that there is bad data in the sample. Most consequentially, there is a 32,757-year-old, a veritable paleo-participant. (Data here.)

There are also seven minors: a 5-year-old, two 14-year-olds, two 15-year-olds, and two others.

The authors were alerted to the presence of the minors and the paleo-participant over a year ago, and did nothing.

This would be a serious problem in any context. We cannot have minors or paleo-participants in our data, in the data we use for analyses, claims, and journal articles. It's even more serious given that the authors analyzed the age variable, and reported its effects. They state in their paper:

--- "Age turned out not to correlate with any of the indicator variables."

This is grossly false. It can only be made true if we include the fake data. If we remove the fake data, especially the 32,757-year-old, age correlates with most of their variables. It correlates with six of their nine conspiracy items, and with their "conspiracist ideation" combined index. It also correlates with views of vaccines – a major variable in their study. See the graph below.




[Figure: graph of the age correlations discussed above (interactive Plotly version linked below)]


See the full Plotly graph here. (By "fake ages", I mean that the 32,757 age is presumably fake, and I would assume the 5-year-old is not in fact a precocious 5-year-old who somehow got through the uSamp.com / Qualtrics participant pool that the authors used. As we get into the 14- and 15-year-olds in the sample, it's easier to imagine these might be true ages, and I think we become very concerned about the possibility of actual minors in the sample.)

(As noted in the graph, all the correlations between age and conspiracy theories were negative, perhaps contrary to common stereotypes.)

It's highly unusual to have out-of-range ages, especially five-digit ages, in survey data obtained electronically. Any of the online survey systems we use will validate the age field for us. That is, they won't accept an invalid age. No one should be able to say that they're 5 years old, or 32,757, and proceed to participate in an IRB-approved psychology study. The authors apparently used Qualtrics. I use it all the time. When building a survey, you customarily set the age validation right there on the side panel, like so:

[Screenshot: Qualtrics survey editor, showing the age-validation settings in the side panel]
What's even more concerning is that the authors reported the median age (43), and even the quartiles, in the paper. And as noted above, they analyzed age in relation to other variables. It's difficult to understand how any researcher would know the median age, quartiles, and correlations with other variables, but not encounter the mean age, which was 76. That mean would immediately set off alarms for any researcher using normal population samples. Any statistical software is going to show the mean as part of a standard set of descriptive statistics. (The mean after removing the fakes and minors is 43.)

It's also hard to imagine how they did not notice the 5-year-old, or the 32,757-year-old, which is the outlier responsible for the inflated mean age. Min and max values are given by default in most descriptive statistics outputs.
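For illustration, here's the kind of one-line screening in R that would surface the problem immediately – a minimal sketch, assuming the published spreadsheet has been read into a data frame called dat with an age column named age (both names are my assumptions, not the file's actual labels):

### Descriptive screening of the age variable (hypothetical object and column names).
summary(dat$age)                               # min, quartiles, median, mean, max at a glance
range(dat$age)                                 # a minimum of 5 and a maximum of 32757 is hard to miss
mean(dat$age)                                  # roughly 76 with the fake ages left in
mean(dat$age[dat$age >= 18 & dat$age <= 95])   # roughly 43 once the fakes and minors are excluded

Any of the standard descriptive commands – summary() in base R, describe() in the psych package, or the default output of most stats packages – would have flagged these values.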

That one data point – the paleo-participant – is almost single-handedly responsible for knocking out all the correlations between age and so many other variables. If you just remove the paleo-participant, leaving the minors in the data, age lights up as a correlate across the board. Further removing the kids will strengthen the correlations.

What concerns me the most is that these researchers were alerted that their data was bad on October 4, 2013 and did nothing about it. A commenter posted directly on Lewandowsky's webpage where he had announced the paper, a mere two days after the announcement:

"Additional problems exist as well. For example, one respondent claims an age of 32,757 years, and another claims an age of 5. Do you believe this data set should be used as is, despite these obvious problems?"

Almost a year later, on August 18, 2014, I posted a comment directly on the PLOS ONE page for their paper, and noted the bogus age data¹. They've known for a very long time that there is a 32,757-year-old in their data, along with a 5-year-old, two 14-year-olds, two 15-year-olds and two other minors, and they've known that they reported analyses on the age variable in their study. They did nothing.

I think it's safe to assume that they've known for quite some time that the above-mentioned claim is completely false: "Age turned out not to correlate with any of the indicator variables."

A 32,757-year-old will grossly inflate the mean and corrupt the deviation scores and SD – any trained researcher would know that this could severely impact any correlation analyses.
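To see the mechanism, here's a small synthetic demonstration in R – simulated data, not the study's – showing how a single absurd age swamps the SD and flattens an otherwise clear correlation:

### Synthetic data only; illustrates how one extreme value corrupts r.
set.seed(1)
age  <- round(runif(1000, 18, 80))
item <- 4 - 0.02 * age + rnorm(1000, sd = 0.8)   # build in a modest negative age effect
cor(age, item)                                   # clearly negative
sd(age)                                          # a normal spread of ages

age[1] <- 32757                                  # introduce one paleo-participant
cor(age, item)                                   # the correlation collapses toward zero
sd(age)                                          # the SD is now dominated by that single point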


Some of their other effects seem to hold, but the coefficients are smaller when controlling for age. However, I would not take any of their findings seriously given that:

  • Too many of their items are of very low psychometric quality. They're often vague, double-barrelled, and politically biased, e.g. "The free market system may be efficient for resource allocation but it is limited in its capacity to promote social justice." (R) Social justice is a term of art of the left – no one else uses that term. It denotes a contemporary left/liberal conception of justice, focused on issues like income equality, the welfare of minorities, etc. Conservatives, libertarians, moderates, et al. will have different conceptions of justice, focused on different concerns, and more importantly, they don't use that term and I would not be confident in their interpretations of it, or what their responses signify.
  • Their composite variables are somewhat arbitrary and don't survive factor analysis (e.g. there is an environmentalism factor tucked away in their "free market" items, and its relationship to some of the science variables is unflattering, and of course, unreported – see the sketch after this list.)
  • They deleted hundreds of participants – a full 28% of their data – including anyone who did not answer every single question. No one is obligated to answer every question, nor will they necessarily have opinions on bizarre theories they've never heard of, or on lots of things. I'm completely unfamiliar with the practice of deleting large swaths of data.
  • They've known for more than a year that there was a 32,757-year-old and seven minors in their data, and did nothing. This suggests a somewhat broad lack of concern about data quality and truth. Given that, it would be overgenerous to put much stock in the rest of their data, their claimed effects, and so forth. We have much higher-quality data on conspiracy beliefs from professionals at places like Gallup.
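On the factor-analysis point above, here's a minimal sketch of the check I have in mind, using base R's factanal() – fm_cols is a placeholder for whichever columns hold the free-market items in the released spreadsheet, not the file's actual names:

### Hypothetical sketch; fm_cols stands in for the free-market item columns.
fm_items <- dat[, fm_cols]
fa <- factanal(fm_items, factors = 2, rotation = "varimax")
print(fa$loadings, cutoff = 0.3)   # check whether the environment-themed items load
                                   # on a factor separate from the market items

If the environmentalism items cluster on their own factor, treating the set as a single "free market" composite is hard to justify.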

I don't understand how anyone could let a paper just sit there if they know the data is bad and specific claims in the paper are false. No credible social psychologist would simply do nothing upon discovering that there were minors in their data, or a five-digit age. I'd be running to my computer to confirm any report that claims I'd made in a peer-reviewed journal article rested on bad data, fake participants, etc. I wouldn't be able to sleep if I knew I had something like that out there, and would have to retract the paper or submit a corrected version. You can't just leave it there, knowing that it's false.

Their behavior is beyond strange at this point. The best case scenario here is that Lewandowsky is the worst survey researcher we'll ever see. My undergraduates do far better work than this. This is ridiculous.

There's more to the story. They also claimed "although gender exhibited some small associations, its inclusion as a potential mediator in the SEM model (Figure 2) did not alter the outcome."

Gender cannot be a mediator between these variables – a mediator has to be caused by the predictor variable, and gender is pre-assigned and essentially fixed – so I don't know what they're talking about. In any case, gender is strongly associated with both views of vaccines and views of GMO. It is the strongest predictor of views of GMO, out of all the variables in the study. It remains so in an SEM model with all their preferred variables included. (The effects are in opposite directions – women are more negative on GMOs and more positive toward vaccines than men.)
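For what it's worth, here's a hedged sketch of how one could check that claim with the lavaan package in R – the variable names (gender, GMO, Vaccine, conspiracy, freemarket, conservatism) are placeholders for whatever the released file actually calls them, and this is my reconstruction, not the authors' model:

### Hypothetical sketch using lavaan; all variable names are placeholders.
library(lavaan)
model <- '
  GMO     ~ gender + conspiracy + freemarket + conservatism
  Vaccine ~ gender + conspiracy + freemarket + conservatism
'
fit <- sem(model, data = dat)
summary(fit, standardized = TRUE)   # compare standardized paths; the question is
                                    # whether gender carries the largest one for GMO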

In any case, something is very wrong here. The authors should explain how the 32,757-year-old got into their data. They should explain how minors got into their data. They should explain why they did nothing for more than a year. This is a very simple dataset – it's a simple spreadsheet with 42 columns, about as simple as it gets in social science. It shouldn't have taken more than a few days to sort it out and run a correction or retraction, or take whatever other action the circumstances dictated. These eight purported participants allowed them to claim that age wasn't a factor. That allowed them to focus on the glitzy political stuff – on finding something negative to pin on conservatives.

They don't tell you until late in the paper that conservatism is negatively correlated with belief in conspiracies – the exact opposite of what they claimed in the earlier scam paper that APS helped promote. Also note that we already know from much higher quality research that Democrats are more likely than Republicans to believe in the moon hoax, though it's a small minority in both cases (7% vs. 4%), and that Democrats endorse every version of the JFK conspiracy at higher rates. I think some journals might be unaware that the pattern of these conspiracy beliefs across the left-right divide is already well-documented by researchers who have much higher methodological standards – professional pollsters at Gallup, Pew, et al. We don't need junky data from politically-biased academics when we already have high-quality data from professionals.

Which brings us back to the previous paper. APS extended that scam in their magazine, fabricating completely new and false claims that were not made in the paper at all, such as that free market endorsement was positively correlated with belief in an MLK assassination conspiracy and the moon-landing hoax. Neither of these claims is true. The data showed the exact opposite for the MLK item (which we already knew from real and longstanding research) – free market endorsement predicted rejection of that conspiracy, r = -.144, p < .001. And there was no correlation at all between free market views and belief in the moon-landing hoax, r = .029, p = .332. APS just made it up... They smeared millions of people, a wide swath of the public, attaching completely false and damaging beliefs to them.

They've so far refused to run a correction. It's unconscionable and inexplicable. The Dallas Morning News has much higher standards of integrity and truthfulness than the Association for Psychological Science. I don't understand how this is possible. This whole situation is an ethical and scientific collapse.

I'm drafting a longer magazine piece about this and related scams, especially the role of journals and organizations like APS, IOP, and AAAS in promoting and disseminating fraudulent science. This situation is beyond embarrassing at this point. If anything were to keep me from running the magazine piece, it's that it's so embarrassing, as a member of the field, to report that this junk can actually be published in peer-reviewed journals, that no one looks at the data, and that a left-wing political agenda will carry you a long way and insulate you from normal standards of scientific conduct and method. This reality is not what I expected to find when I chose to become a social scientist. I'm still struggling to frame it.

Normally, the host institution would investigate reports of fraud or misconduct, but the system appears to be broken. Lewandowsky has not been credibly investigated by the University of Western Australia. They've even refused data requests because they deemed the requester overly critical of Lewandowsky. That's stunning – I've never heard of a university denying a data request by referring to the views of the requester. UWA seems to have exited the scientific community. Science can't function if data isn't shared, if universities actively block attempts to uncover fraud or falsity in their researchers' work.

To this day Lewandowsky refuses to release his data for the junk moon hoax study. That's completely unacceptable, and there is no excuse for Psychological Science and APS to retain that paper as though it has some sort of epistemic standing – we already know that it's false, and the authors won't release the data, or even the national origin, age, or gender of the participants.

It's ridiculous to have a system that depends entirely on one authority to investigate misconduct, especially an authority that will have a conflict of interest, as a host university often will. It puts everything on one committee or even one individual, dramatically reducing the likelihood of clean inquiries. The way journals and scientific bodies have tried to escape any responsibility is unconscionable, and completely unsustainable long-term.

I'm enormously disappointed with people like Eric Eich and APS head Alan Kraut for failing to act against Lewandowsky's earlier scam, and in the latter case, for failing to retract the fabricated false effects in the Observer magazine. Falsely linking millions of people to the belief that the moon-landing was a hoax was an incredibly vicious thing for the authors to do, and for APS to do.

Eich, Kraut, and that whole body should take participants' welfare – and that of their cohort in the public at large – a hell of a lot more seriously. I'm stunned by how little they care about the impact such defamation can have on human lives, and how willing they are to harm conservatives. Imagine how a person might be treated if people thought he or she believed the moon landings were a hoax. We have a responsibility to conduct ethical research, and not to publish false papers. They inexcusably failed to act on the discovery that numerous claims in the earlier paper were false, that the sample was absurdly invalid and unusable, and that there are likely minors in the Psychological Science data. (The authors said in the paper that their cutoff age was 10. Neither the journal nor APS has responded to me on that issue, nor have they released the data – Lewandowsky refuses to release it, and removed the age variable and lots of other data from the file he has made available. Refusal to release data should lead to the automatic retraction of a paper.)

I'm very confused by the lack of concern about minors in the data. I don't know if this is controversial, but we can't have minors in our data. This is true at several levels. First, we would need specific IRB approval to have minors participate in a study – the study would have to be focused on children, not some web survey asking about three different assassinations. Second, we don't want minors in our studies for scientific reasons – we don't want to make claims about human behavior, claims that are implicitly centered on adult human behavior, based on non-adult data. Third, it's illegal to secure the participation of minors in research without their parents' consent. This PLOS ONE study purportedly used an American sample, and I think the same legal concerns would apply in Australia. I'm still stunned that APS doesn't care about minors in the Psych Science data – that was something I'd expect any journal to care a great deal about. The ethical issues are much larger than those we normally face.

If the minors in the data are real, that's a problem, and we couldn't use them. If they were adult participants who gave fake ages, then that's also a problem, and we couldn't use them, certainly not in analyses involving age. If they were not participants at all, then that's obviously a problem as well. The authors should explain how this data came to be – they should've done this over a year ago.

In any case, I hope that PLOS ONE makes a better showing than APS, and I'm confident they will. They know about the issue, and are investigating. We can't have minors and 32,757-year-olds in our data, and we certainly can't make false claims based on such bad data. Enough with the scams.



¹ NOTE: I just added (Jan 8, 2015) the information about the October 4, 2013 disclosure of the minors and 32,757-year-old. Brandon Shollenberger was the whistleblower (username Corpus in that comment on Lewandowsky's website.) I've always known that I was not the original discoverer of the minors and the paleo-participant – my earlier draft stating that the authors knew since August 18, 2014 was charitable.

In fact, I wasn't the first person to point out any of the major issues with Lewandowsky's recent publications. It was people outside of the field – laypeople, bloggers and the like – who discovered the lack of moon hoax believers in the moon hoax paper, who pointed out that the participants were recruited from leftist blogs, that they could be anyone from any country of any age, none of which has been disclosed, etc. Throughout this whole saga, it was laypeople who upheld basic scientific, ethical, and epistemic standards. The scientists, scholars, editors, and authorities have been silent or complicit in the malicious smearing of climate skeptics, free marketeers, and other insignificant victims. I was late to the party.

Also note that this business has been going on for some time. In 2011, Psychological Science published a bizarre sole-authored Lewandowsky paper where he investigated whether laypeople project the pause in global warming to continue. It's one of the strangest papers I've ever seen. He had two graphs/conditions – one of stock prices and one of global temperatures. He touted a significant difference in the slopes participants extrapolated onto the graphs – they evidently projected a higher slope to the temps compared to the stock prices. This is supposed to mean something, apparently. But...ah... the graphs were labeled:

- Share Price SupremeWidget Corporation

- NASA - GISS

An obviously fake corporation name vs... NASA. I don't understand what's happening here. I feel like I stepped into something I fundamentally do not understand. Psychological Science?

66 Comments
Andy West link
1/6/2015 04:11:09 am

"Their behavior is beyond strange at this point"

Indeed. I think extreme (climate cultural) bias and a desperate attempt to reduce cognitive dissonance. Described in the 3 posts below using Lew and crew's own papers on cognitive bias (pretty reasonable and mainstream work largely prior to his jumping off the deep end into climate stuff and conspiracy ideation). A much much more important conclusion from these posts applying Lew and crew's papers to the climate domain though, is that the climate Consensus itself has to be pretty much soaked in bias, to the extent that CAGW is a powerful culture in its own right.

http://wattsupwiththat.com/2014/11/06/wrapped-in-lew-papers-the-psychology-of-climate-psychologization-part1/
http://wattsupwiththat.com/2014/11/08/wrapped-in-lew-papers-the-psychology-of-climate-psychologization-part2/
http://wattsupwiththat.com/2014/11/09/wrapped-in-lew-papers-the-psychology-of-climate-psychologization-part3/

Reply
Jonathan Abbott link
1/6/2015 04:53:36 am

I really shouldn't be surprised by anything that comes with Lewandowsky's name attached and yet somehow I still am. I think it's that others let him carry on with this sort of garbage.

Reply
geoff chambers link
1/6/2015 05:32:30 am

This is the third paper in which Lewandowsky has made statements which he knew to be false long before publication. In the pre-published version of the APS “Moon Hoax” paper made available in July 2012, he claimed that the survey had been publicised at the SkepticalScience blog. Barry Woods asked him about it privately and Lewandowsky lied, saying he knew it had been, since he had the URL somewhere. I asked John Cook of SkepticalScience about it independently in September 2012 and Cook lied to me. Then Cook told the truth to Lewandowsky around November 2012 in an email uncovered via FOI by Simon Turnill. The paper was published unchanged in June 2013. I had previously written to Psychological Science about it, and editor in chief Professor Eich told me he was forwarding my letter to Lewandowsky and said he'd get back to me with his remarks. He never did.
The Recursive Fury paper, published around April 2013, was full of misquotes, which Brandon Shollenberger and I had already pointed out publicly months before. The paper was eventually withdrawn, accompanied by a note from the editors whose tortuous wording, devised with the help of Lewandowsky's lawyers, seemed to suggest, without quite saying so, that there was nothing wrong with the paper scientifically.
Between the first very public airing of the evidence that Lewandowsky was a serial liar and the eventual publication of “Moon Hoax” he was awarded a chair of Psychology at Bristol University, England, together with a medal from the Royal Society and a five figure sum (in sterling) of government money from a fund aimed at attracting scientific talent to Britain.
It is inconceivable that the authorities at Bristol University, the Royal Society, the Wellcome Foundation and other funding organisations didn't know of the accusations against him. Googling his name turns up dozens of articles at ClimateAudit, WattsUpWithThat and my blog clearly identifying him as a liar and a charlatan.

Reply
Mark Pawelek
1/6/2015 05:58:37 am

They don't seem very competent or caring people to me. They should've detected the fake outliers straight away. These professors may need to go back to school!

Reply
Russell link
1/6/2015 06:22:07 am

But Jose', don't you appreciate that the age-old conspiracy to make you look like a conspiracy theorist has salted the historical record with erstwhile conservatives sounding like generic fools on uncontroversial matters of science, and put some of them to bed with PR hacks and focus groups to compound the problem?

Advertising happens.

Reply
Barry Woods
1/6/2015 06:35:56 am

LOG12 (or is it Log13)

Let's not forget this other point-blank refusal by the Vice Chancellor of the University of Western Australia to provide data for LOG12

For the stated purpose of submitting a comment to Psychological Science (which was suggested by the Chief Editor - Prof. Eric Eich)


From: Paul Johnson
Sent: Friday, March 28, 2014 8:08 AM
To: Barry Woods
Cc: Murray Maybery ; Kimberley Heitman
Subject: request for access to data

Mr B. Woods

Dear Mr Woods,

I refer to your emails of the 11th and 25th March directed to Professor Maybery, which repeat a request you made by email dated the 5th September 2013 to Professor Lewandowsky (copied to numerous recipients) in which you request access to Professor Lewandowsky’s data for the purpose of submitting a comment to the Journal of Psychological Science.

It is not the University’s practice to accede to such requests.

Yours faithfully,
Professor Paul Johnson,
Vice-Chancellor

http://climateaudit.org/2014/03/30/uwa-vice-chancellor-johnson-circles-the-wagons/

----------------------------------------

yes - I was surprised he put that in writing..

Disappointing for me? (yes)

Devastating for the credibility of the University, journal and the field of psychology?

I leave Lewandowsky et al out of that list as it appears that they can't help themselves (activists) but the academy should protect its own credibility?

My request, that prompted this response is here:
http://unsettledclimate.org/2014/04/05/i-requested-data-from-the-university-of-western-australia/


Reply
Barry Woods
1/6/2015 06:47:14 am

From the LOG12 paper (Psychological science)

"An additional 161 responses were eliminated because the respondent’s age was implausible (< 10 or > 95 years old), values for the consensus items were outside the range of the rating scale, or responses were incomplete. This left 1,145 complete records for analysis."

-----------------

thus, in the flagship journal of the APS, it seems that 10-17 yr olds are both plausible and appropriate.....

how many of the tiny number of respondents that believed in the conspiracy theories were minors?

don't know, Lewandowsky et al, UWA, APS, Psychological Science will not release the data (if they've lost it, well that is just as bad)

Reply
Doug Proctor
1/6/2015 06:56:31 am

The Weapons of Mass Destruction lie and its absolute lack of blowback demonstrated a long time ago that the politically useful untruth is immune from complaint.
What is most disturbing is the lack of professional care required these days to receive the social advantages of being considered a "professional". The days of snake-oil salesmen masquerading as doctors are with us still.

Reply
Barry Woods
1/6/2015 07:38:58 am

From PLOS One -

The journal proudly lists all the media coverage the paper has had! (following Jose's comment!)

---------
http://www.plosone.org/annotation/listThread.action?root=72775
PLOS One


The following articles represent some of the media coverage that has occurred for this paper:

Publication: Guardian Environment
Title: “Climate sceptics more likely to be conspiracy theorists and free market advocates, study claims”
http://www.theguardian.com/environment/planet-oz/2013/oct/02/climate-change-denial-skeptics-psychology-study-conspiracy-theories

Publication: NeuroLogica Blog
Title: “Politics, Science Rejection, and Conspiracy Thinking”
http://www.popsci.com/article/science/surprise-conspiracy-theorists-are-more-likely-disavow-vaccines-climate-science-and

Publication: Popular Science
Title: “Surprise! Conspiracy Theorists Are More Likely To Disavow Vaccines, Climate Science And GM Foods”
http://www.popsci.com/article/science/surprise-conspiracy-theorists-are-more-likely-disavow-vaccines-climate-science-and

Publication: Scientific American Blogs
Title: “Motivated reasoning: Fuel for controversies, conspiracy theories and science denialism alike”
http://blogs.scientificamerican.com/absolutely-maybe/2013/10/14/motivated-reasoning-fuel-for-controversies-conspiracy-theories-and-science-denialism-alike/

Publication: The Burrill Report
Title: “Princess Di, Faked Moon Landings, and the Denial of Science -”
http://www.burrillreport.com/article-princess_di_faked_moon_landings_and_the_denial_of_science.html

Publication: The Conversation
Title: “Right, left, wrong: people reject science because ...”
http://theconversation.com/right-left-wrong-people-reject-science-because-18789

Publication: Times Live
Title: “Conspiracy theorists, conservatives more likely to reject science”
http://www.timeslive.co.za/scitech/2013/10/14/conspiracy-theorists-conservatives-more-likely-to-reject-science


Publication: Vaccine Nation
Title: “Belief in a range of conspiracy theories predicts vaccine denial”
http://blogs.terrapinn.com/vaccinenation/2013/10/04/belief-conspiracy-theories-predicts-vaccine-denial/

-------------------

All comments to PLOS One here
http://www.plosone.org/article/comments/info%3Adoi%2F10.1371%2Fjournal.pone.0075637


How many thousands of people have read those media articles about this paper (especially the rather nasty Guardian article) and come away believing that it shows 'climate sceptics' are conspiracy theorists?

Reply
John M
1/8/2015 03:49:18 am

Well, that's pretty much the point. In fact, most people won't even read the whole article, never mind looking at the paper or data – just the headline and maybe the first few paragraphs. That's also why part of the game is to get a politically useful headline, and why articles bury the details and any mention of uncertainty or weaknesses near the bottom, where few people will bother to read.

If you want a great example of how it works, see this article by Robert F Kennedy, Jr. which claims, "Fox News will not be moving into Canada after all! The reason: Canada regulators announced last week they would reject efforts by Canada's right wing Prime Minister, Stephen Harper, to repeal a law that forbids lying on broadcast news." A left-of-center friend, who is by no means stupid, passed it along through Facebook, as did many others on the left:

http://www.huffingtonpost.com/robert-f-kennedy-jr/fox-news-will-not-be-moving-into-canada-after-all_b_829473.html

It took me all of a couple of minutes with Google to find out that Fox News had been in Canada since 2004. So how many people simply believed the article because it tickled their confirmation biases? Enough did and the meme continued to circulate that PolitiFact.com made an entry about it in 2014.

http://www.politifact.com/punditfact/statements/2014/jul/14/facebook-posts/fox-news-banned-canada/

This is all about political narratives trumping facts. And one strategy that you'll find in it is poisoning the well:

http://en.wikipedia.org/wiki/Poisoning_the_well

"Don't listen to Fox News. They lie." "Don't listen to Rush Limbaugh. He's bigoted." "Don't listen to Sarah Palin. She's stupid."
"Don't listen to climate deniers. They're tools of the oil industry."

It's all about controlling the narrative and making sure as many people as possible don't listen to anyone who challenges that narrative. And at some point, that Eye of Sauron is going to turn toward José Duarte to smear him, if he becomes enough of a threat to their narrative, just as it's turned toward Steve McIntyre, Roger Pielke, Judith Curry, Lennart Bengtsson, and others to smear and bully them.

Reply
DocBud
1/6/2015 07:58:41 am

José, you still have one instance of 35,757 in the post:

"Something is very wrong here. The authors should explain how the 35,757-year-old got into their data."

Reply
Joe Duarte
1/6/2015 11:17:03 am

Thanks. Fixed.

Reply
MikeInMinnesota
1/6/2015 08:17:52 am

I guess I just assumed the age problems were a result of a bug in the random number software they were using to generate their data.

Reply
geoff chambers link
1/6/2015 08:23:22 am

Doug Proctor: “The Weapons of Mass Destruction lie and it's absolute lack of blowback demonstrated a long time ago that the politically useful untruth is immune from complaint”
You're talking about the governments of the UK and the USA. Barry Woods and I are talking about a simple American citizen who has been shown to be a liar and a charlatan, but who continues to be defended by his Australian university (which has proudly proclaimed that it will continue to harbour his defamatory and retracted article) and who has been awarded a five figure sum by the world's oldest scientific society to come to England the better to propagate his lies.
This situation is not immune from complaint. I'm a UK citizen, but not a UK taxpayer. There are millions of people in Britain and in Australia who have good reason to complain that this charlatan continues to benefit from the fact that "scientists" are accorded a faith accorded to no other group.

Reply
Andy West link
1/6/2015 09:28:10 am

Geoff, I think you mean 'Australian citizen'.

Reply
Brad Keyes link
1/11/2015 10:37:42 pm

Andy,

Lewandowsky was whelped in the US. Only decades later did he inflict himself on the Antipodes. What passports he owns, I'm not sure.

John M
1/6/2015 10:01:32 am

Let me offer an explanation for the 32,757 value and how it wound up in the data (though this explanation in no way excuses it not being scrubbed from the data before it was analyzed). As this page explains...

http://en.wikipedia.org/wiki/Integer_%28computer_science%29

...a signed 16-bit integer can have a value from −32,768 to 32,767 (which corresponds to −2^15 to 2^15−1, since one of the 2^16 values is used for 0). That value represents the maximum value of a signed 16-bit integer variable in the software (an unsigned integer variable that cannot represent negative numbers ranges from 0 to 65,535, or 2^16 − 1, for the 16 bits that can be 0 or 1). So my guess would be that somewhere in their software gathering the data or along the way, either (A) the value got set to the maximum signed integer, (B) a NULL or absent value was represented as the maximum signed integer, or (C) someone was messing with the software to see what the maximum age they could squeeze out of it without it crashing or doing something nasty. But the reason why it's that particular unusual value is that it's near the maximum value that a 16-bit integer variable can hold.

Reply
Joe Duarte
1/6/2015 11:16:39 am

Interesting point. But a max 16-bit INT would be off by 10. It's 32,757, so the max minus 10?

It would be nuts for the NULL to be represented by a huge integer, though I've seen people use 999 or -999 and the like to replace NULL in stats software. That's always crazy, but it's normally done when the data is not itself an integer, like text strings.

Reply
John M
1/6/2015 11:32:55 am

If age was entered as an integer, it's possible it was a person seeing how high an integer they could enter and that's where they stopped. That it's off by 10 is interesting, but that it matches on all of the other digits suggests to me some relationship to that limit. If you use that website, maybe you could test a number field and see what the highest value you can enter is. It's possible the programmers set the interface limit to 10 less than the internal maximum. I'd also be curious to know what it does to negative ages.

Anon
1/6/2015 09:11:32 pm

If it was written to file as a signed integer, and -9 was used as a missing data flag, but then later read from the file as an unsigned integer, you might end up with that value.

dean_1230
1/12/2015 01:23:29 am

One other option, however remote this may be, is that they're using a spreadsheet and the "date" formatting and it's converting the date to the integer. In this case, 32757 in Excel is the integer form of the "short date" September 6th, 1989.

John M
1/6/2015 10:29:12 am

"Their behavior is beyond strange at this point."

I don't think it is. This is what happens when people have a political agenda to promote and winning is their only principle. There is plenty of agenda-based science out there promoting all sorts of things, both on the left and right, and confirmation biases.

I don't think the Lewandowsky studies are about serious science, because they know the vast majority of people are never going to actually read the paper, look at the data, or read a serious rebuttal. They are about grabbing sensational headlines in the mainstream media and on biased sites like the Huffington Post and io9.com, where they serve a dual purpose. The first purpose is to reassure their own side that they are right and good. The second goal is to convince the casual reader who isn't going to look beyond the headline and maybe the first few paragraphs that their side is righteous, good, and on the side of angels while the other side is insane, evil, and on the side of demons.

When things like the Lewandowsky study are posted on sites like io9.com, nobody looks at the paper or questions the findings. They clap like trained seals and have their Two Minutes Hate in the message boards about how this proves the other side is evil and terrible and maybe shouldn't be allowed to vote, determine how their children are educated, or (at the extremes) live. And these are often people who fancy themselves as being intelligent and caring deeply about getting science right.

That's what this is all about. It's not science. It's politics. And that's the third rail you are dancing on with your opinions.

Reply
Richard Tol link
1/6/2015 06:44:40 pm

Lew has now acknowledged the error on Twitter.

Reply
Jeremy Poynton link
1/6/2015 07:34:50 pm

That's good. Only another how many now for him to acknowledge?

Reply
Joe Duarte
1/7/2015 01:19:21 am

He should've acknowledged it in August when he was alerted to it. It doesn't matter to acknowledge it now. It should never have happened -- it's absolutely inexcusable to have a five-digit age and a bunch of minors in your published data. This is garbage.

What's most outrageous is how deceitful he is. I was stunned when I saw him emphasizing the p-value of a correlation between belief in the HIV-AIDS link and some other variable, maybe the moon business, on his website. He talked about how it was a one-in-a-million probability or something, pointing readers to the p-value when the correlation was completely meaningless because essentially zero people in the study rejected the HIV-AIDS link. And most of them were not in accord on the other variable.

He was getting meaningless correlations because almost everyone took the same position, all variance on the agree side of the scale, but he made it sound like rejecting the HIV-AIDS link was an actual phenomenon in the study and that it was related to something else, because look at this p-value. It was the most incredible example of misrepresentation by a social scientist that I've ever seen, aside from the scam paper itself. I consider that fraud, and so do a number of other people, especially when you don't disclose the distribution of scores -- I'm going to publish journal articles laying out why we need to be a lot more vigorous in how we define and deal with fraud. Established power structures in academia have a vested interest in keeping fraud narrowly defined, like focused on plagiarism. We need to be a lot more serious than that. What this guy does is despicable. He should have no place in the field.

I'm also struck by the massive difference in quality between professional pollsters like Gallup, Pew, ANES, et al, vs. psychology journals. Those organizations are in a different world of quality, standards, and professionalism. Lewandowsky is complete junk compared to them -- I'm not sure why we'd want to read junk research when we can get real research from professionals. Even if he fixes one of the errors, the whole study is junk, some of the composite variables are arbitrary and don't survive factor analysis, the items are broken and fail to meet basic psychometric standards, and we can't be confident in the validity of any of the scores if age itself was so corrupted. No one would pay for Lewandowsky's research -- no company would pay for market research of this quality. This is incredibly trivial, low-effort, low-quality research, and worst of all it's converted into misleading and false claims.

Reply
Barry Woods
1/7/2015 03:58:48 am

It serves his purpose...
... promoting 'the consensus' (which Lewandowsky advised Cook is what matters, to persuade the public), and painting people outside the consensus as beyond the pale, as mad, bad or stupidly crazy

(and of course, the actual object of the consensus, is never really defined, and/or is interchangeable)

these papers had the same purpose as Cook's 97% paper: 'research' in a peer-reviewed journal that politicians like Obama (or Ed Davey - Minister of State in the UK) can wave around to say science says this and ignore the critics..

nobody EVER actually reads the papers, asks for the dataset and sees if it is junk or not, nobody cares..

and the journals get to boast about high-impact (and relevant) science, quotes in the media and lots of citations..

and the activist academics build up whole careers, and the grants, awards, prizes and honours keep rolling in. Same for the universities: they get star academics, RS medal winners, impact in the media and with politicians, credibility and recognition, and of course funding for star performers and departments.

James
1/8/2015 04:35:23 pm

I'd support this comment. I've been a professional market researcher for over 30 years, and often looked askance at academic survey research. But Lew is in a league of his own, his work is unbelievably bad.

Sheri
1/7/2015 02:43:58 am

It seems likely that Lew believes as long as he acknowledges the error, says he's sorry, then no one will care and he can continue on his current track. That seems to work for much of the political arena now. Just be angry that something happened, say you're sorry or that you'll investigate and then pretend the whole thing did not happen. Lew seems quite political.

Reply
Barry Woods
1/7/2015 04:07:47 am

In the LOG12 (NASA faked the Moon landing, therefore (climate) science is a hoax) paper - it says less than 10 yr old and greater than 95 yr old responses were excluded.

Which allows minors between 10 and 17 to HAVE been included..
How many (of the very few) responses depend on minors for any of those conspiracy positives?

A partial dataset with age, gender, and responses to some questions – with ~300 records stripped out (and no IPs, no referring URLs) – is unofficially available (Lew emailed it to a complete stranger on request) and is posted on a few sceptical blogs

How many of them were minors? The University of Western Australia refuses to say, or to release any of the data officially. And the APS's flagship journal, Psychological Science, says there is nothing they can do about UWA (of course they could – retract it on principle: no data availability, no paper in their journal)

Gary
1/7/2015 04:01:46 am

I suspect that age was calculated from date of birth data entered by the survey participants. In the data I work with daily I frequently see incorrectly entered dates. The very first thing I do is locate and flag such errors. It's the peak of incompetence not to examine raw data before doing any analysis when the frequent occurrence of such errors is well known. Peer reviewers share equal blame with the author for not finding the problem.

Reply
osseo
1/7/2015 06:44:52 am

Gary, it was peer reviewed? How old were the reviewers?

Reply
classical_hero
1/10/2015 07:07:13 am

They must be dinosaurs, to allow an age that old.

RogueElement451
1/8/2015 12:39:46 am

It is absolutely incredible that such lame (can I say lame?), supposedly scientific endeavors are allowed the fresh air of publication.
When will the MSM get onside with the sceptics? The answer, I suppose, is never, since "nothing much happening here, move on" is not a great soundbite.
Perhaps we need to up the ante and let the MSM know that they have been told the BIGGEST LIES! That exaggeration was not only vital to the cause but necessary to promote left-wing ideology. THEY HAVE BEEN USED!
I do not think they will get it; the media are all about today and how noisy that headline is. Apologies get printed on page 5, sub-para 4, line 2.

Reply
RomanM
1/8/2015 02:57:19 am

As in his earlier paper, Lewandowsky failed to do even the simplest quality control checks on the remaining respondents who managed to notice the trick question "catch1". People will sometimes answer without even reading the questions or use patterns to quickly finish a survey. This appears to be the case for a substantial number of individuals in Lewandowsky's opus.

One obvious check is to look at the frequency of chosen answers for each person. However, the data given in the Excel file contained 13 questions for which the values had been reversed. These would need to be reversed back to their original form to reflect the actual choices made by the subject. There were 27 cases where the subject selected the same answer every time for the 39 questions. In 23 of those, the chosen answer was the non-committal middle value 3. The remaining four cases were all different, each one having a single answer 39 times as well.

There were a further 22 responders who limited their choices to only two unique responses. A majority of these had similar characteristics to the ones with a single choice in that there was one value chosen 33 or more times and the second value 6 or fewer times. All of these cases should at least have been checked for consistency and at worst removed from the data set completely. I might add that none of the 49 cases involved an age outside the range 18 to 67.

Here is a short R script to do the analysis described above:

### Function to calculate the frequency of each answer (1 to 5)
### for each subject (one row of dats per respondent).
qc.quests = function(dats, ans = 1:5) {
  outmat = matrix(NA, nrow(dats), length(ans))
  for (ind1 in 1:nrow(dats)) {
    xdat = dats[ind1, ]
    for (ind2 in 1:length(ans)) {
      outmat[ind1, ind2] = sum(xdat == ans[ind2])
    }
  }
  rownames(outmat) = 1:nrow(dats)
  colnames(outmat) = ans
  outmat
}

### lewdat = data frame of the 1001 surveys.
### Trick question "catch1" removed from the data.

### Un-reverse the 13 reverse-scored items so the counts reflect
### the answers the subjects actually chose.
rev.items = c(1, 5, 7, 9, 12, 14, 16, 19, 21, 22, 24, 26, 27)
lewdat.unrev = lewdat
lewdat.unrev[, rev.items] = 6 - lewdat.unrev[, rev.items]

summat.unrev = qc.quests(lewdat.unrev[, 1:39])

### Show all individuals with at most two distinct answers.
summat.unrev[which(rowSums(summat.unrev == 0) >= 3), ]   ### 49 x 5 matrix of counts

Reply
Richard Tol link
1/9/2015 04:36:07 am

unbelievable

that's one of the first things you check

Reply
geoff chambers link
1/9/2015 06:57:03 am

RomanM
I address the Lewandowsky question not as a statistician, but as an ex-market researcher with only the most basic grasp of statistics, and I ask you the same question I put to Steve McIntyre about the two evidently fake responses to the "Moon Hoax" survey: "By what right do you decide that a certain pattern of responses justifies exclusion from an opinion survey?"
In the comment at ClimateAudit I posited the case of a respondent who justified his choices by saying: “It's a principle of mine to always tick the right hand box in any survey”.
OK, he's crazy, and he's going to screw up your statistics, but from the moment you claim to represent what the public thinks, you have to accept all the public, and nothing but the public (not your choice of what should constitute the public). This is not a trivial point.
The only mainstream media journalist I found who made even the gentlest criticism of Lewandowsky mentioned a survey which found that 13% of Americans believed that Obama was the offspring of the Devil (this figure fell to 7% among Democrats, if I remember correctly).
These figures measure something (probably the seriousness with which people take your survey). By eliminating them, you are falsifying the result of your survey.
José Duarte in a comment above made a strong defence of commercial opinion polls, a position I entirely share, despite the fact that we are probably 180° apart politically. In the days I worked in market research, when interviews were face-to-face, you carried out the most rigorous checks on the researchers (the interviewers in the field), but the responses of informants were considered sacred.

Reply
RomanM
1/9/2015 11:42:13 pm

Geoff, you ask:

"By what right do you decide that a certain pattern of responses justifies exclusion from an opinion survey?

In the comment at ClimateAudit I posited the case of a respondent who justified his choices by saying: 'It's a principle of mine to always tick the right hand box in any survey'. OK, he's crazy, and he's going to screw up your statistics, but from the moment you claim to represent what the public thinks, you have to accept all the public, and nothing but the public (not your choice of what should constitute the public)."

The intent of collecting data, whether by survey or by any other means, is to gain information on a specific population or situation. Statisticians have spent a lot of time and effort developing methodology to optimally extract the information relating to the purpose for which the study has been done. If the data set has been contaminated by spurious information which is unrelated to the matter at hand, why would you not wish to remove that information from consideration in order to prevent it from adversely influencing the results of the analysis?

It is doubtful that anyone would object to correcting typographical entry errors on the basis of the "sanctity" of the survey process. There can be other cases of inappropriate "data" which also do not carry proper relevant information on the topic under consideration – e.g., responses which reflect no thought process beyond completing the survey quickly, or the intentional creation of a pattern of answers intended to skew the results in a specific direction. These can be as deleterious to the quality of the information contained in the data set as an age of 32757 years. Unless the aim of a survey is to determine how many people are "crazy" in the way they respond to surveys, you should not include the respondent in your example in your study if you were certain that they had done what you suggest.

There are no simple fixed procedures for evaluating and/or removing responses from the data set. Each case must be looked at individually and the reasons for removal given in reporting the results. Consistency checks (e.g. which items were reversed in the Lewandowsky data in the questionable responses, as suggested by the blog host) and any available external information can help in the assessment process. Sensitivity tests can be run to compare the effect of the exclusion of the suspect data. Rejected data should be made available along with the data that had been included in the study for evaluating the quality of the results.

As a matter of interest, here are the responses in which exactly two values were selected in the unreversed survey form. I put it into HTML table format. Hope it works.


            RAW                     REVERSE
Subject     1   2   3   4   5       1   2   3   4   5
25          1   0  38   0   0       0   0  38   0   1
200         0   0  38   1   0       0   0  38   1   0
230         6   0  33   0   0       4   0  33   0   2
353         0   0  35   0   4       2   0  35   0   2
422         0   0   0  [the rest of the table was cut off]

RomanM
1/9/2015 11:46:07 pm

Looks like HTML formatting doesn't work and that the number of lines is restricted. I'll try posting the table elsewhere and link to it.

RomanM
1/10/2015 12:49:01 am

Table available here:

http://statpad.wordpress.com/2015/01/10/681/

Joe Duarte
1/10/2015 01:26:28 pm

Yes, sorry Roman, HTML does not work in the comments on Weebly-hosted websites apparently. It will work on the new website, which will go live any day now.

Joe Duarte
1/9/2015 07:28:14 am

Good work sir.

I would be very cautious about throwing out or "cleaning" data in a survey study, as would most social psychologists. However, the same response to 39 questions would probably qualify as bad data, especially if those 39 questions included numerous reverse-scored items and the participant still gave the same numerical response to those.

The participants who gave one of two answers to every item are a tougher call, and researchers will have different positions -- by default I would keep them. Those who gave the same answer to at least 33 items, and a single different choice to the remaining six or fewer items, are a tough call. I think many researchers would remove them, but there isn't a settled best practice there.

I wonder what happens to the effects if the 27 uniform responders are removed. Well, it's probably not an interesting question given the context of the study's various flaws. The study doesn't have much epistemic standing given the data quality and the low-quality items. It would be hard to treat any findings as interesting or generalizable.

Reply
Jeff Id
1/8/2015 11:23:25 am

Jose,

Thank you for the hours of studious dissection required to debunk this nightmare work. One cannot screw up as much work as Lew has unintentionally. At this point I believe it is clear that Lewandowsky is a fra** and this statement isn't any news to him.

He knows his work is completely invalid and he doesn't care any more than a religious terrorist cares about their victims because it is for the cause.

Roman's comment is even more statistically telling than the paleo-participant, although less exciting for the masses. It shows a large-scale bias in the data, whereas a biased researcher such as Lewandowsky can pretend to hand-wave away even a single point that biases the average.

Reply
Stephen Pruett
1/9/2015 07:42:28 am

This is particularly disheartening and disgusting to me as a department head for a biomedical-research oriented department. We spend untold hours on ethics training, responsible conduct of research training, IRB approvals, IACUC approvals, Biosafety approvals, lab safety inspections, fire safety inspections, etc., etc., etc. These have all been presented to me as absolutely mandatory, and I have been told that failure to comply is potentially a career ending act. Yet, we have in this post a description of one person blatantly ignoring numerous ethical and compliance regulations at two different institutions with complete impunity. I sincerely hope that your post is wrong and that this is not related to the researcher's political leanings (e.g., left is always fine but right has to actually follow the rules), but in the absence of any action by the author, the institutions, the societies, or the journals, it is difficult to conclude otherwise. I don't think the scientific community as a whole realizes that this could literally be the first story in future history books in the chapter describing the demise of science in the 21st century.

I am particularly angry, because I have published in Plos One and had to meet rigorous peer review standards to get the articles accepted. I will be copying this post to the Editors and letting them know that I will never send anything else to them until this is addressed.

Reply
Joe Duarte
1/9/2015 10:25:44 am

Stephen, thank you for your comment. It's a gust of fresh air and sanity in this troubling situation, and it's exactly how I'd expect any credible scientist to feel.

I'm confident PLOS ONE will get it right. I think they're the wave of the future as far as scientific journals are concerned, as long as journals can manage the potential conflict-of-interest in being paid by the authors of the papers they publish. There are various ways humans can manage these sorts of conflicts, so I don't see it as critical.

I think one of the issues here, or with the other scams, is deeply epistemological. I think journals and scientific bodies are relying too much on social proof and standing or age in order to signal whether something is true, instead of just evaluating claims and evidence. Given the conflict of interest in having university officials investigate fraud hosted by the same university, this presents some problems in how journals process information about fraud, falsity, and such. We don't seem to have a credible system in place yet.

Reply
Barry Woods
1/11/2015 03:18:27 am

very true - University of Western Australia gave Prof Lewandowsky, Cook and Marriot a clean ethics bill of health for the Recursive Fury paper - Frontiers in Psychology. With UWA investigating the failings of its own ethics department..

Frontiers just looked at the evidence and retracted it anyway.
Maybe PLOS One will do the right thing.. (but why so slow?)

But it appears Psychological Science and the APS will not

Brad Keyes link
1/11/2015 11:09:20 pm

Joe,

I think one of the issues in Naomi Oreskes' argument-from-consensus scam is deeply epistemological. I think journals and scientific bodies are relying too much on social proof and majoritarianism in order to signal whether something is true, instead of just evaluating claims and evidence.

Social proof is a pre-scientific system of reasoning. It may or may not work when browsing iTunes for new songs, it may or may not work as a page-rank algorithm, but it is completely useless – worse than random – for seekers of knowledge about the material world ("scientists," as they're now called).

The modern scientific method repudiates social proof entirely. (Entirely, Joe. I want to stress that adverb: entirely.)

Consensus carries zero weight in scientific epistemology. Anyone who tries to pass it off as a reason why you should agree with them is a science denier. They have waived the right to be employed as, referred to as or treated as a scientist. They probably don't even deserve to partake in the fruits of science, such as medical care. Screw science deniers.

geoff chambers link
1/11/2015 01:26:11 am

RomanM
Thanks for the full and thoughtful reply. I'm sure that in terms of good statistical practice, your argument is perfectly sound. But I look at this from the point of view of the average reader of an unbiased media article on the subject (ok, such a thing doesn't exist...) who wouldn't be impressed by the idea that you could disprove the findings by deciding on theoretical grounds who was or wasn't a serious respondent.
In Lewandowsky's world, it's perfectly logical to believe that there are people who believe every conspiracy theory going. To deny this possibility is to counter one circular argument with another.
As José notes, the survey has so many flaws that it's hardly worth devoting too much attention to any single one. The fact that there was no “don't know” option, and failure to answer a question disqualified the respondent, means that Lewandowsky surveyed 1300 readers of anti-sceptic blogs who consider themselves omniscient – an odd sample if ever there was one.

Reply
nightspore
1/11/2015 04:31:05 am

Does anyone know whether Elizabeth Loftus has expressed any misgivings about having appeared on the masthead with this guy? (I'm thinking of the "Subterranean war on science" paper.)

It's also worth pointing out that Lewandowsky is currently a department chair, according to Wikipedia. Finally, I recall seeing the name listed as an editor of some volume in cognitive psychology published a while back (although I couldn't bring it up just now when I tried searching for it on the Web). So he's not exactly some fly-by-nighter who's just touched down in the AGW arena. I think that has implications, which makes this whole case even more disturbing.

Reply
RomanM
1/11/2015 11:11:03 pm

chrisB, there is nothing strange about the distribution of the last digit of the age data.

A chi-square test in R for uniformity of the counts that you used (which excluded the 5-year-old and Methuselah):

Digit:    0    1    2    3    4    5    6    7    8    9
Count:  108  114  109   94  115   90   87   92   88  102

produces a chi-square value of 10.8398 with 9 df corresponding to a p-value of 0.2868. There is nothing unusual here whatsoever.
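A minimal R sketch reproducing this from the digit counts above (the object name is mine, purely illustrative):

    # Last-digit counts of the age variable (5-year-old and 32,757-year-old excluded)
    digit_counts <- c(108, 114, 109, 94, 115, 90, 87, 92, 88, 102)
    names(digit_counts) <- 0:9

    # Goodness-of-fit test against a uniform distribution over the ten digits
    chisq.test(digit_counts)
    # X-squared = 10.84, df = 9, p-value = 0.2868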

Reply
ChrisB
1/12/2015 02:57:25 am

I was wondering if you had applied Yates's correction for continuity. With it, I get p < 0.005.

Reply
RomanM
1/12/2015 03:39:15 am

Yates correction is used in 2x2 contingency tables or in a 2x1 goodness-of-fit test for a simple binomial. The net effect of applying the correction is to reduce the magnitude of the chi-square statistic and correspondingly increase the p-value. It would not be used here with a multinomial distribution with 10 categories.

If you are selecting specific categories on the basis of frequency size and then testing them either individually or as a group with a chi-square test, this would be completely inappropriate and the numbers obtained would not be meaningful.
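As a minimal R illustration of that point (the 2x2 counts below are made up purely to show the mechanics, not taken from the survey data):

    # Hypothetical 2x2 contingency table with illustrative counts
    tab <- matrix(c(12, 8, 5, 15), nrow = 2)

    chisq.test(tab)                   # default: Yates's continuity correction for 2x2 tables
    chisq.test(tab, correct = FALSE)  # uncorrected: larger chi-square, smaller p-value

The correction does not apply to a ten-category goodness-of-fit test like the one above.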

Paul Matthews link
1/12/2015 02:48:18 am

"Throughout this whole saga, it was laypeople who upheld basic scientific, ethical, and epistemic standards."

Another related example of this today.
A paper by 5 climate scientists/SKS activists made an elementary error, multiplying by something rather than dividing.

http://rankexploits.com/musings/2015/yes-some-things-are-obvious/#comment-134430

The basic error escaped the notice of all five authors and the peer reviewers, but was noticed by amateur Nic Lewis.

Unsurprisingly, the effect of their error was to push the result in the direction they wanted it to go (higher climate sensitivity).

There is a lot of scope for some social science/psychology research here: how such basic mistakes can be made by people who have pre-determined in their heads what they want the answer to be ("climate sceptics are conspiracy theorists", "right-wingers are anti-science", "climate sensitivity is high"...).

Reply
Richard Tol link
1/12/2015 07:32:30 pm

Paul: Not sure we'd need much extra research here. The result is codified as "extraordinary claims need extraordinary evidence" which implies, of course, that we lower our guard when finding what we had expected to find.

Reply
Brandon Shollenberger
1/12/2015 03:05:47 am

I've noticed a couple people above are talking about suspicious responses where people give the same answer(s) to every question. That reminds me of an issue I brought up in my comment which first pointed out the problematic age values in this data set.

There were hundreds of entries in this survey which were inconsistent. Hundreds of people claimed to hold sets of beliefs which are seemingly impossible to hold. For instance, 77 people claimed human CO2 emissions don't cause global warming yet also said greenhouse gas emissions were responsible for most of the observed warming or have caused serious damage to the planet's climate. It's difficult to see how one can use responses which deny the greenhouse effect in regard to CO2 yet claim the greenhouse effect is real/causing damage.

An even more direct example: another 195 people said they were unsure whether human CO2 emissions cause climate change, yet agreed humans cause most (or dangerous) global warming. Stephan Lewandowsky hand-waved this issue away by saying we expect inconsistent responses, but I find that difficult to accept. If people can't answer five questions on an issue in a consistent manner, how can we possibly hope to know what their belief on the issue actually is?

Incidentally, I still think it's funny Lewandowsky used reverse-coded questions in this. The stated reason for doing so is to address bias in answer order. The problem with that is there are unequal numbers of questions in each direction. If you have three questions with one order and two questions with another, you can't simply combine them and say the bias has vanished. At a minimum, you need to compare the distribution of responses (which, as I recall, raises significant concerns with one of the first questions about beliefs on global warming).
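To make the kind of cross-check described above concrete, here is a minimal R sketch (the file name and column names are hypothetical placeholders, not the variable labels in the released data set; the thresholds simply split a four-point agreement scale):

    # Hypothetical column names standing in for two of the climate items:
    #   co2_warms  - agreement that human CO2 emissions cause warming (1 = strongly disagree ... 4 = strongly agree)
    #   ghg_damage - agreement that GHG emissions have seriously damaged the climate (same scale)
    dat <- read.csv("survey_data.csv")  # hypothetical file name

    # Flag respondents who reject CO2-driven warming yet affirm serious GHG damage
    inconsistent <- dat$co2_warms <= 2 & dat$ghg_damage >= 3
    sum(inconsistent)  # count of seemingly contradictory response pairs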

Reply
geoff chambers link
1/13/2015 05:55:12 am

Brandon
I'd like to raise the same kind of epistemological question with you that I raised with RomanM above. Just as I defended the right of respondents to be absurdly consistent in their madness, I would defend their right to contradict themselves.
The problem lies not in any defect of reasoning in the minds of respondents, but in a psychological blindness in the social scientists who pose these questions. A proper appreciation of the perversion of science inherent in current practices may help answer José's question as to why commercial survey firms are better at applying normal scientific methodology than many social scientists.
When I started looking at current research in the social sciences, as cited by Lewandowsky in his bibliographies, I was struck by the use of batteries of tests where the same question is rephrased several times. This provides the researcher with a rich source of potentially statistically significant data, using factor analysis or one of its cousins, while persuading the man in the street that the social scientist is an inquisitor out to trap him into giving the wrong answer. Lewandowsky did this in “Moon Hoax” where, instead of asking people simple questions like: “are you Democrat/Republican?” “Do you believe in Global Warming Yes/No?” he went round the houses asking the same question in half a dozen different ways.
A normal person, faced with an inquisitor who insists on posing what appears to be essentially the same question phrased in different ways, will end up by contradicting himself. Reverse coding, which seems a logical response to possible problems of bias, only compounds the problem, by producing questions of the form: “Do you agree that it is not true that..?”
In criticising Lewandowsky's interpretation of his own research you say: “If people can't answer five questions on an issue in a consistent manner, how can we possibly hope to know what their belief on the issue actually is?” Agreed. We can't, if we insist on asking the same question five times, because people don't like being treated as idiots, or suspects. They may never have heard of the Inquisition or the Enlightenment, but, whether they know it or not, they've several centuries of political and social emancipation behind them, and it will show up in their responses.
[The latter thought is more or less borrowed from the Marxist thinkers I admire, like E. P. Thompson. And in this week when every Pope and Imam on the planet is speaking up in favour of the atheist anarchists of Charlie Hebdo, I appreciate the possibility of being able to express it here, particularly as I am banned from expressing it at the leftwing Guardian or the Marxist LeftFootForward.]
We may have here a response to the question posed by José above, as to why commercial companies are better at social research than university researchers in the social sciences. It's because their clients want answers to questions, while too often academic researchers want confirmation of the conclusions they've already arrived at.
A battery of stupid questions on what respondents feel may be more effective in providing a statistically significant response than a simple list of percentages replying yes/no to a series of questions on how people vote, or what they think.
I'm a rare bird, a defender of social science and the urgent need to understand how our society works in the tiny world of climate sceptics, where most people assimilate social science to socialism and reject it out of hand. Understanding what's wrong with Lewandowsky goes far beyond the failings of one person. Sociology (and maybe social science in general) has become part of the problem to which it should be the solution. This site, and this thread, may be part of the solution to that problem.

Reply
srp
1/13/2015 09:08:13 am

I am not a psychologist, but I believe there is a lot of empirical work backing up the practice of asking subjects the same question different ways, e.g. in things like the MMPI. In the case of the questions Brandon cited, though, these are NOT the same thing--each is a different aspect of a chain of arguments. So the Chambers critique would not apply even were it valid.

Dan
1/15/2015 02:25:05 am

I can think of two possible explanations for why someone may be unsure about CO2 being the driver of climate change, yet be convinced that humans cause global warming.

The first, rigorous explanation is that they may believe that a different greenhouse gas, be it methane, chlorofluorocarbons, sulfur oxides, etc., is the true cause of the problem.

The second is not rigorous, but it is perfectly explainable for a person not to understand the mechanism yet accept the conclusion, in much the same way that you can operate a motor vehicle without understanding the Otto cycle, or believe inbreeding is bad without understanding Mendelian genetics.

In any opinion survey, you have to allow for the fact that an individual may have inconsistent, arbitrary, or unfounded opinions on any subject without denying that those opinions exist.

Reply
Mark Blumler
1/13/2015 12:03:30 am

Really interesting and informed discussion! I only wish to emphasize that it is human nature to make errors that skew the data so as to conform with one's beliefs. Unless we clearly distinguish that from fraud, friends of the researcher in question will be reluctant to point out errors once they are published. And yes, Lewandowsky does seem to be a fraud.

Reply
geoff chambers link
1/13/2015 05:24:31 pm

Srp
“..each is a different aspect of a chain of arguments. So the Chambers critique would not apply even were it valid.”
Try thinking of the respondent to a survey as a complex human being living in a specific complex society, instead of simply as a source of data.
Responding to an on-line survey is a pretty weird thing to do when you think about it, but people do it because they assimilate it to the familiar experience of being stopped in the street by a nice lady who interrupts your thoughts about what you're going to have for lunch to ask you your opinions about the state of the planet or whatever. This also is a pretty weird thing to happen, unthinkable even sixty years ago, but we humans are infinitely adaptable and we've got used to it.
Surveys are useful things. Ask people who killed Kennedy or what happened on 9/11 and you discover that a significant proportion of the population of the Western world doesn't believe what the government tells them. That's interesting.
Ask people if they think they live on a fragile planet and you'll find that a large majority agree with a meaningless assertion which sums up a vague idea that's around. Then ask the same question fifteen times in slightly different ways (“mankind has a destructive effect on his environment” etc.) and you have a thing called the New Environmental Paradigm which is not only your meal ticket to fame and fortune, but a way of influencing the way the world is run.
Asking the same question fifteen times in slightly different ways is insane. Only policemen do that when interrogating people suspected of serious crimes.
And sociologists. José is asking why. That's real social science.

Reply
TinyCO2
1/15/2015 04:10:55 am

I think that questionnaires about climate change are a minefield for unreliable answers. People just don't know enough about the subject, and that very much includes Dr Lewandowsky. The questions tend to lack the subtlety of the issues. I'm aware that when I've filled out climate questionnaires, my answers paint me as a warmist. In reality I don't even consider myself a lukewarmer, since that suggests I don't have much of a protest against the science and its solutions. If a question is worded badly, my answers may appear conflicted. Questions that change direction from positive to negative are easy to miss when people are in a hurry or confused, so again they may give the wrong answer. Words like 'significant' mean very different things to different people. Most members of the public would not class a few tenths of a degree of warming as significant, but may be sure that hurricanes and tornadoes have increased significantly. Both views would be wrong. Many are unaware where movie hype stops and science begins. Their responses are not a measure of educated opinion, merely of climate change advertising.

And that brings me to another point. What possible scientific justification is there for this study? I can understand that policy makers want to know more about those who oppose action on AGW, but what practical use could they make of a potential link between the prevalence of conspiracy ideation and opinion on AGW? Apart from trying to use it politically as a smear tactic? Is that ethical? I can think of a million better questions that might help the authorities 'sell' AGW to a largely ignorant public, and I'm on the opposite team.

For people who want to understand how people tick, tick boxes are a poor substitute for live study. With the advent of the internet, researchers don’t even have to leave their safe little desk to observe homo scepticus. What a pity Dr Lew preferred to create the Piltdown Climate Sceptic instead.

Reply
RO
1/14/2015 12:04:43 am

Geoff, slight changes to your interesting post above:

"Ask people who killed Kennedy or what happened on 9/11 and you discover that a significant proportion of the population of the Western world *say they don't* believe what the government tells them."

"Ask people if they think they live on a fragile planet and you'll find that a large majority *say they* agree with a meaningless assertion which sums up a vague idea that's around."

Reply
Paul Matthews link
1/15/2015 11:25:06 pm

There is now a brief Corrigendum posted at the PlosOne comment site

http://www.plosone.org/annotation/listThread.action?root=85195


It says "The Age variable contained two outliers that represent typos or software errors in otherwise correct records"

Reply
John M
1/16/2015 06:41:49 am

"In fact, I wasn't the first person to point any of the major issues with Lewandowsky's recent publications. It was people outside of the field – laypeople, bloggers and the like, who discovered the lack of moon hoax believers in the moon hoax paper, who pointed out that the participants were recruited from leftist blogs, that they could be anyone from any country of any age, none of which has been disclosed, etc. Throughout this whole saga, it was laypeople who upheld basic scientific, ethical, and epistemic standards. The scientists, scholars, editors, and authorities have been silent or complicit in the malicious smearing of climate skeptics, free marketeers, and other insignificant victims. I was late to the party."

I want to add that this is why attempts to silence criticism and skepticism and promote unquestioning trust in experts in various fields of science should be so troubling.

When those on the political left were outside of the political mainstream and were not the establishment, they strongly supported questioning authority, skepticism, free speech, voices outside the mainstream, transparency, and opposition to censorship and the blacklisting of those with opposing viewpoints. In doing so, they were able to push for many changes that are now widely considered good things across most of the political spectrum (opposition to racism, opposition to sexism, environmentalism, regulation of business abuses, etc.).

But now that those on the left feel they are the authorities and the establishment, they've shifted towards defending the establishment and using its power to silence critics in exactly the same way they were once attacked by the establishment they opposed. They are now advocating blind obedience to authority, attacking skeptics, promoting speech codes on college campuses, mocking voices outside of the mainstream, defending opacity and secrecy, trying to censor their critics, and blacklisting their enemies. While I do think there are those on the left for whom those ideals were simply a means toward an end rather than true principles, I think it's a very strong indication that many on the left now comfortably feel that they've become the authorities and establishment in the media, academia, some sciences, and parts of the government, and they are now acting in the same authoritarian ways to consolidate and hold power that they once fought against as outsiders. It is what Orwell talked about in Animal Farm, when "Four legs good, two legs bad" transformed into "Four legs good, two legs better."

I encourage everyone to read Carl Sagan's Baloney Detection Kit from his book The Demon-Haunted World: Science as a Candle in the Dark published in 1995 and apply it to the behavior of the current mainstream of climate science toward skepticism:

https://www.andrew.cmu.edu/user/jksadegh/A%20Good%20Atheist%20Secularist%20Skeptical%20Book%20Collection/Carl_Sagan_The_Fine_Art_of_Baloney_Detection_sec.pdf

Reply
John M
1/17/2015 05:46:46 am

There is apparently another guy named Duarte who is making himself a thorn in the side of those trying to create alarm over climate change on the opposite side of the planet, via Watts Up With That:

http://wattsupwiththat.com/2015/01/16/calamities-oversold/

Reply
johanna
1/17/2015 11:48:14 am

Geoff Chambers, thanks for your comment. But for me it is like the Curate's Egg.

I've worked in both government agencies and private-sector organisations that have done surveys, and have overseen them. I studied it (briefly) at university. I'm not an expert, but far from an amateur.

Firstly, not all surveys are equal. The techniques that are used for different purposes are different. For example, I managed a survey of people with a particular disability about their use of current services. There was no need for tricky questions there.

Asking people about their opinions is a completely different animal.

While opinion polls are often rigorously designed, the fact is that they are often wrong when the chips are down and the votes are counted. People (and I am one of them) sometimes quail when faced with the ballot paper, whatever they may have said previously. In voluntary voting jurisdictions, they may not vote at all.

However, I must agree with your broader point. Commercial pollsters are pretty good at what they do, and not because of any dedication to academic wistfulness. If they don't deliver, they die.

It was interesting to see comments from people who had real life experience of polling in response to Lew's work. Mine even made it into Table B of his subsequent disaster, something which I will wear as a medal forever.

Polling is a very subtle and careful science, if you are looking for the truth.

Reply
geoff chambers link
1/17/2015 09:29:12 pm

Johanna
I agree entirely. RO's point above about the difference between believing something and saying you believe it touches a major flaw in much opinion research. I see no effort on the part of social scientists to distinguish between opinions, beliefs, attitudes, and stuff people say.
In the course of investigating Lewandowsky I looked at dozens of the papers he cites and discovered that there's nothing unusual about this kind of thing in modern academic research. In “Moon Hoax” he cites six times a psychologist who turns up as the principal peer reviewer of his follow up paper “Recursive Fury”. This guy turns out peer reviewed papers at the rate of one a fortnight on subjects such as anti-semitism among ethnic Malays, preferred size of the female bottom, and the sex appeal of dentists. There's nothing wrong with people making their living in weird ways, but afterwards you get the likes of Lew saying that peer review is the sole criterion for truth, and anything not in a peer-reviewed paper can be ignored.
JohnM
The attitude of many left-wing academics in the humanities and the social sciences is indeed scandalous (and I count myself as a leftist, as is at least one of the other lay bloggers who have been active in revealing this). Though Orwell described it best in his satire on communism, it's not a left-right thing so much as a general tendency to groupthink in a social body which regards itself as a superior caste.
The French demographer Emmanuel Todd makes the point very well when he attributes this Brahminist tendency to the rise of university education. When the highly educated formed a tiny percentage of the population concentrated in a small number of professions, they inevitably shared their world view and culture with the people around them. When 20-30% of the population goes to university, they form a critical mass in society and start looking round for an ideology to distinguish them from the ignorant hoi polloi. There's a big satisfaction to be had from defending the planet from its destructive population, whether you're a hippy buddhist vegan, a leftwing academic or a millionaire looking for a worthy cause to support and a fast buck to make.

Reply


