This is so refreshing. The Journal of Basic and Applied Social Psychology has banned p-values.
And confidence intervals. Their reasons are extremely good reasons, and well-articulated. I forgot to stress this point explicitly in the recent essay on the Lewandowsky fraud, but descriptive statistics always trump inferential. A p-value has no inherent substantive meaning, nor does the underlying statistic (this is especially true of a linear correlation on scale items where the participants' actual placement on the items is undisclosed, as it was in the LOG12 paper – a situation that was apparently fully satisfying to Eric Eich, Psychological Science, and APS.) The way we use inferential statistics is often barbaric, though rarely fraudulent.

This is a great first step. I think Trafimow and Marks' explanation will be informative to a lot of readers and scientists. I think a lot of us have the wrong idea of what a confidence interval represents.

This will be interesting long-term for its impact on our use of terms like "effect". A low p-value doesn't mean there's an effect. An inferential statistic doesn't mean there's an effect. Other data, including descriptive data, is needed to know if we have an effect. We're especially vulnerable to false or overstated effects when we use student samples, because such samples artificially reduce noise (student samples are extremely homogeneous on various dimensions – dimensions that will contribute noise or unexplained variance in community samples.) A quick simulation of the p-value point is below.
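Here's a minimal sketch of why a p-value alone can't tell us whether we have a meaningful effect (simulated, made-up numbers – just an illustration):

```python
# With a large enough sample, a descriptively trivial difference
# (0.03 SD here, made up for illustration) yields a very low p-value.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
a = rng.normal(5.00, 1.0, 50_000)
b = rng.normal(5.03, 1.0, 50_000)

t, p = ttest_ind(a, b)
print(f"p = {p:.2g}")                                  # "significant"
print(f"mean difference = {b.mean() - a.mean():.3f}")  # descriptively trivial
```

The p-value says "significant"; the descriptives say there's nothing there worth talking about. You need the descriptives.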
I don't think I fully processed the implications of the Inbar and Lammers data.
37.5% of social psychologists in their survey explicitly reported a willingness to discriminate against conservative job candidates. These are people who chose the midpoint or higher, where the midpoint was Somewhat inclined to favor a liberal vs. conservative candidate. The 37.5% figure might be an understatement, given that the lower values on the scale still represent some inclination to discriminate.

For some reason I'd been working with the much smaller ballpark figure of 20%, which is closer to the figures on discriminating in paper and grant reviews. I thought a 20% base rate might be enough to confer herd immunity given the academic career and hiring model. 37.5% is enormous. I think if we plugged it into a good model, it would be catastrophic for the careers of incoming conservatives (if they could be identified – if they were open with their views, had written op-eds in the school newspaper, maintained a blog, were affiliated with Young Americans for Freedom, and so forth.)

I'm curious what base rates we'd see for racial and gender discrimination in the private sector, maybe going back to the 1960s or so. I'm curious at two levels: self-reported willingness to discriminate, as in these figures, and some sort of estimate of the actual base rate of discrimination from hiring managers. I think it's generally reasonable to assume that the actual base rate of discrimination will be higher than the self-reported rate. Also, given that academic hiring is committee-driven, at any given base rate there will be more exposure to discrimination than in the private sector, where one person might make the decision. The 37.5% figure means virtually every hiring committee will include a discriminator (some quick arithmetic at the end of this post), which is partly why I think this figure might be catastrophic. It will depend on how these committees work and a few other variables.

I've had a couple of conservative RAs, and I'm not sure what we should tell them in general. I'm not sure it makes sense for a conservative to go to graduate school given the level of discrimination in the field. It seems quite unlikely that they'll be able to have a career. In our paper, we relate the following story of a graduate student in a top social psychology program:

“I can’t begin to tell you how difficult it was for me in graduate school because I am not a liberal Democrat. As one example, following Bush’s defeat of Kerry, one of my professors would email me every time a soldier’s death in Iraq made the headlines; he would call me out, publicly blaming me for not supporting Kerry in the election. I was a reasonably successful graduate student, but the political ecology became too uncomfortable for me. Instead of seeking the professorship that I once worked toward, I am now leaving academia for a job in industry.”

That's incredibly vicious and unprofessional behavior, and it's not unusual in academia. (It's also intellectually vapid and provincial, given that the next Democratic President stepped up the drone strikes and killed far more people with them than Bush did – I assume that we're not just caring about the lives of American soldiers.) In fact, I'm not sure I would've gone to graduate school had I known that 37.5% of social psychologists explicitly report a willingness to discriminate. It's hard to say. I think libertarians will fare at least slightly better, but we'll often be encoded as conservatives. Note that the student above isn't necessarily a conservative – they merely say that they are not a liberal Democrat.
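Back to the committee-exposure point – some quick arithmetic (a sketch assuming the 37.5% base rate and independence across committee members, both strong assumptions) shows why the figure might be catastrophic:

```python
# Probability that a hiring committee of k members includes at least one
# person willing to discriminate, at a 37.5% base rate (independence assumed).
base_rate = 0.375
for k in (3, 5, 7):
    p_at_least_one = 1 - (1 - base_rate) ** k
    print(f"committee of {k}: {p_at_least_one:.1%}")
# committee of 3: 75.6%; committee of 5: 90.5%; committee of 7: 96.3%
```

Even a three-person committee has a three-in-four chance of including at least one discriminator; at five or seven members it approaches certainty.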
I think the core issue is that academic liberals think liberalism is true, and conservatism is false and malicious. This is tautological, but I think it's very important to linger on the fact that they think liberalism is true, all the way down, and what the implications are. And they often equate conservatism with creationism and various anti-science proclivities. We see that in some of the commentaries – people think the intellectual landscape consists of 1) the left, and 2) creationists/religious zealots. It seems implausible that anyone would think that's the intellectual landscape, but it's somewhat common in academia. If you think liberalism is completely true, and conservatism is anti-science, it makes no sense to bring conservatives into the field. Calls for diversity would have no appeal, and would be nonsensical.

So I think the core issue is exposing the breadth of the intellectual landscape, and the inherent potential breadth and nuance of human scholarship. I think people are granting far too much primacy to the intellectual and political landscapes of our day. We're a small part of the grand sweep, one that extends thousands of years into the past, and will extend thousands of years into the future. We just happened to be born here – we could have been born in any other era, with any other landscape. I think academic liberals take for granted that they are on the side of history, that their values and aims are inherently progressive, hence they co-opted that term. They might be right. I certainly agree with some of their major positions. But there is a lot of apparent baggage that comes with contemporary academic liberalism, theories and neuroticisms that I don't think we should be confident will endure through the ages. And core features of the framework could certainly be wrong.

One potential danger is that it is so untested, so unpressured, given the intellectual homogeneity of the academy. That would always worry me. For example, while I see some merit in environmentalism, it's the most untested, unexamined, unpressured ideology I've ever seen. It has no symmetric opposite (people who hate trees and savor pollution), but alternative schools wouldn't grant its premises, so we wouldn't expect a symmetric opposite. No one is deconstructing it. I can imagine lots of refutations and alternatives to environmentalism, but we don't see that work in academia because everyone seems to embrace it. That's dangerous. Even if we thought liberalism was completely true, it's unreasonable to demand that people embrace the one true philosophy at age 22 or whenever they'd apply to a social psychology program, or at age 28 when they look for a job.

Another concern is that the academic left has some distinctive antibodies and defense mechanisms for handling dissent, almost a taxonomy. Dissent is precategorized and marginalized as privilege, racism, sexism, and assorted "motives". That can be a powerful shield against substantive disagreement, and installs begging-the-question as a chronic fallacy. Maybe all ideologies have these sorts of immune systems. I know Scientology has a distinctive lexicon for people who speak out against the church, maybe a specific term for former Scientologists who speak out. In any case, I think assuming one's ideology and philosophy is so true that discrimination is justified is just begging to be featured in the wrong chapters of history.

My empirical work focuses on envy, particularly its benign and malicious forms.
I was inspired by van de Ven, Zeelenberg, and Pieters (2009). The Dutch language has separate words for benign and malicious envy, which helped frame their research.
Malicious envy: hostility toward the target, appraisal that the disparity is unfair, low confidence in one's ability to close the disparity.

Benign envy: still an unpleasant, negative emotion (not admiration), but focused on self-improvement, greater effort, closing the gap.

They characterized malicious envy as leveling down, bringing the target down, while benign envy is leveling up, bringing yourself up to their level of achievement or whatever it is.

Something I've been thinking about lately is perhaps a more generalized malice or hostility toward people who stand out, speak out, or invoke certain ideals or ethical principles. That last bit seems to be a forceful induction. Here's what I'm talking about:

1. Judith Curry says climate scientists need to better communicate the uncertainty, need to be more responsible, and so forth.
2. A scientist responds with "so you think you're the one to drive this" or "so you think you're better than us."

It's that last part, the "you think you're better than us", that has always struck me. Judy's case wasn't the first time I had seen that kind of thing. Some people respond to idealists, at least outspoken idealists (who are the only kind they're going to know about), with an instant social comparison. And it's not just social comparison. There's a bit of extra content in there – it's not just "this person is better than me", but "this person thinks they're better than me." It doesn't necessarily concede that the idealist is "better", just that they think they are. That response has always struck me, that element of the other person thinking they're better than oneself. It's not obvious to me why we would respond that way to an idealist or crusader. They generally aren't saying anything about being better than anyone. I suppose it's easy to draw out that implication, though it's optional.

I've seen it in graduate school, especially with female students toward an especially attractive female student. How women treat women is worthy of a lot of research in itself. I once heard a grad student say "She thinks she's better than me" based on the person's way of standing. I was so puzzled. I asked, why do you think she thinks she's better than you? That seems so specific. And a lot to infer from a stance. The whole "X thinks he/she is better than me" framework puzzles me. I never think someone thinks they're better than me. I routinely think someone is actually better than me in some specific skill or domain, but I don't really have a concept of comprehensive betterness as a person, unless we're comparing regular good people to bank robbers. It doesn't occur to me that someone thinks that.

The social comparison aspect reminds me of what Sonja Lyubomirsky wrote in her book about how she and Lee Ross hypothesized that happier people engage in downward social comparison – that this would be a reason why they were happier. To their surprise, they found that happier people tend not to engage in social comparison at all.

But what triggers the hostility to idealists? It looks somewhat like malicious envy, but I don't know that it satisfies the definition of envy. There isn't anything obvious that the other person has, unless it's about fame or attention, though in many of these cases there isn't a lot of that. Stepping back, envy makes sense as a signal, even an evolved mechanism, because it carries important information – that it is possible to be doing better than one is doing.
It's possible because here is a person who proves it, someone with more resources, more acclaim, more money, meat, fur, whatever. This might be the most efficient way for a human to learn that it is possible to achieve more on some consequential dimension. I suppose an idealist could signal to me that it's possible to be a better person, maybe that I should have attended to the things they're talking about, that I have failed to be a good person. And that appraisal gets converted to "they think they're better than me". That seems pretty coarse – I'll have to think about it more.

Clearly lots of people respond to idealists with support, even worship depending on the context. Yet some people seem to dispositionally respond with malice, and the social comparison seems to be a part of that process. I'd really like to dig into that process. I don't think there's a lot of research on people's responses to idealists, virtues in others, etc. Feel free to jump ahead of me on the data collection. One of the things that may matter is that some idealists are seen as overly preachy, insufferable, while some aren't. So characteristics of the target might moderate effects.

I've seen similar responses to my efforts, but I think I've seen a lot more directed at Judy. I'm pretty sure some of the abuse she gets is due to the fact that she's a woman, so I may never experience quite as much. A physicist who had said something similar to her asked me "So you're the one to change social science?" or something along those lines. I was so puzzled by that mentality. What if the answer is yes? What if it's no? Is it even answerable? It seemed to be meant as "so you think you're better than us", except it was accompanied by "do your peers agree with you?" That focus on peers and what other people think was also very strange to me. We wouldn't be able to answer that question meaningfully in any way that would predict the validity or truth of someone's work at any arbitrary time point. It would be much better to just evaluate their work. Science as a cliquish, peer-focused culture is obviously going to have some problems and dysfunctions compared to a scientific culture that prizes independent thought, integrity, and good epistemology.

Those responses, that non-engagement with substance, just defaulting to social comparison and looking around for what other people think, seem like they might be related. If we iterate that and extend it, it would bring every outspoken person down, pull in every outlier, because they will always be in the minority in the beginning, for some arbitrary time period. It reminds me of the Hawaiian saying about crabs in a bucket. It looks like the leveling down process, but without a concrete object of envy. In the streets, it's just called being a hater, but I don't think we have a formal conception of it in social psychology. It might be too rare to document, or perhaps not. There are signs of malice toward achievers all over the culture. That's achievers, but I think there might be a more specific response and process regarding people in a moral domain – idealists, crusaders, outspoken reformers, and so forth. The moral domain might have particular power as an induction, relative to just achieving a lot of success. We'll see.

Separately, on the issue of gender-specific phenomena, a female clinical researcher shared a story with me, as something that could be the basis of organizational research. In a large company, there was a fast-rising woman.
She was very smart and very qualified, with a law degree and maybe an MBA. And she was very beautiful, which the researcher thought was the key element. When this thoroughly qualified woman rose to a top position, other women in the company attributed her rise to sleeping her way to the top, or to her beauty. Anecdotally, there seems to be a lot going on in how older women see younger women, especially if they're beautiful. (Evolutionary psychologists can easily build a narrative to explain this.) Female executives have told me that the male mentorship model (in the private sector) is different from the female mentorship model. In particular, the hypothesis is that older men see a young man as a way to leave a legacy, while older women see a younger woman as a threat. This will have to be explored empirically, but it makes sense.

I love SPSP, the conference of the Society for Personality and Social Psychology, the flagship professional organization for social psychology.
It always messes me up, in a good way. Conferences and colloquia are pleasantly unpleasant for me. They're extremely generative. Whatever a researcher is talking about always sparks a cascade of ideas for me. This might be partly driven by the fact that when you're sitting in a talk, there is nothing else to do but think, or listen and think. It's a very focusing setting.

Jon Krosnick and Lee Jussim organized a symposium on scientific integrity. Hart Blanton is one of the speakers, and I'm eager to see how he unpacks this: "(1) misidentified measurement and causal models, (2) treatment of arbitrary psychological metrics as if they are meaningful..." I'm especially curious about what he means by "arbitrary" metrics. It could be something I haven't thought of, and it would be thrilling if Hart is miles ahead of me on methodological considerations. The model identification issues will be interesting, but there I'm guessing I know what he means. In any case, it's great to see people take a deep interest in methodological issues.

Most of the talks will be on empirical research, which is as it should be. I've focused on methodological issues lately because 1) I think they're extremely interesting, and 2) I think I can have a bigger impact right now on the methodological side than the empirical side, given the methods I have access to. I'm on record as saying that data is sometimes overrated in our field – data flowing from invalid methods is not going to give us much insight into human nature or behavior. I don't yet have access to the kinds of empirical methods I want to use. I like MTurk, and I use it, but there are hard constraints on what I can do with it. I've got piles of data to write up, but in general I think my effects are only about 70% likely to be true. That's not terrible, but I want to supplement it with other methods, mostly field work and organizational samples. I stopped using student samples a couple of years ago, because I think the probability of an effect being true, valid, or interesting falls sharply if it's based on student samples. Others will disagree, and it's unlikely to be a simple matter, but in general I think the burden is on us to show that student samples are valid, not on others to show that they're not. For anything beyond sensory-perception or basic cognitive processes, I'd want broader samples.

There might be a dance battle at SPSP if certain people are amenable.

Latent Class Modeling: I'm pretty sure this is the future. It's a much more powerful, higher-resolution method than linear correlation and its derivatives. The whole issue of frog jumping I talked about in an earlier post is eliminated by attending to different profiles of participants, which LCM does. A toy sketch is below.
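Here's a toy sketch of the basic idea – recovering distinct participant profiles instead of assuming one homogeneous population. The data is hypothetical, and I'm using scikit-learn's GaussianMixture as a stand-in for dedicated latent class software (real LCM on categorical items would use something like poLCA in R):

```python
# Toy latent-profile example: two made-up participant profiles on three items.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
profile_a = rng.normal([2.0, 6.0, 3.0], 0.7, size=(300, 3))  # hypothetical profile
profile_b = rng.normal([6.0, 2.0, 5.5], 0.7, size=(150, 3))  # hypothetical profile
X = np.vstack([profile_a, profile_b])

# Compare class solutions by BIC rather than assuming a single population.
for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"{k} class(es): BIC = {gm.bic(X):.0f}")

labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
print(np.bincount(labels))  # recovers the two groups, roughly 300 and 150
```

A linear correlation run on the pooled data would smear these two profiles into one line; the profile view keeps them distinct.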
Non-narcissistic self-esteem: It would be extremely cool if someone built a reliable self-esteem measure that doesn't also carry narcissism. The RSE (Rosenberg Self-Esteem Scale) is going to carry both high self-esteem and high narcissism (or narcissistic self-esteem and non-narcissistic self-esteem.) So you have to use supplemental measures, like the NPI-16, to tease them apart, or cross explicit and implicit self-esteem (which isn't as valid, probably.) Kernis was heading in that general direction before he passed away.

Implicit self-esteem is tough. I know that Diener and colleagues used initial letter liking as an implicit measure. I think it might have done some work in predicting life satisfaction. (Letter liking is presenting people with their own initials and capturing their attitudes – like J for me, or D. I'm not sure how much validation we have on that.) Rosenberg also bothers me a bit because its wording is dated, and probably comes off as awkward to a lot of contemporary participants. This is likely to be another source of noise. I'll probably try out some new measures. I also love Simine Vazire's method of repeating questions on personality measures – basically, Are you sure this is the right answer? It correlates better with peer ratings, and thus seems more accurate.

These two new papers in JPSP are extremely interesting:
Finkel, Eastwick, and Reis: "Best research practices in psychology: Illustrating epistemological and pragmatic considerations with the case of relationship science."

In recent years, a robust movement has emerged within psychology to increase the evidentiary value of our science. This movement, which has analogs throughout the empirical sciences, is broad and diverse, but its primary emphasis has been on the reduction of statistical false positives. The present article addresses epistemological and pragmatic issues that we, as a field, must consider as we seek to maximize the scientific value of this movement. Regarding epistemology, this article contrasts the false-positives-reduction (FPR) approach with an alternative, the error balance (EB) approach, which argues that any serious consideration of optimal scientific practice must contend simultaneously with both false-positive and false-negative errors. Regarding pragmatics, the movement has devoted a great deal of attention to issues that frequently arise in laboratory experiments and one-shot survey studies, but it has devoted less attention to issues that frequently arise in intensive and/or longitudinal studies. We illustrate these epistemological and pragmatic considerations with the case of relationship science, one of the many research domains that frequently employ intensive and/or longitudinal methods. Specifically, we examine 6 research prescriptions that can help to reduce false-positive rates: preregistration, prepublication sharing of materials, postpublication sharing of data, close replication, avoiding piecemeal publication, and increasing sample size. For each, we offer concrete guidance not only regarding how researchers can improve their research practices and balance the risk of false-positive and false-negative errors, but also how the movement can capitalize upon insights from research practices within relationship science to make the movement stronger and more inclusive.

Waytz, Hershfield, and Tamir: "Mental simulation and the meaning of life."

Mental simulation, the process of self-projection into alternate temporal, spatial, social, or hypothetical realities, is a distinctively human capacity. Numerous lines of research also suggest that the tendency for mental simulation is associated with enhanced meaning. The present research tests this association specifically examining the relationship between two forms of simulation (temporal and spatial) and meaning in life. Study 1 uses neuroimaging to demonstrate that enhanced connectivity in the medial temporal lobe network, a subnetwork of the brain’s default network implicated in prospection and retrospection, correlates with self-reported meaning in life. Study 2 demonstrates that experimentally inducing people to think about the past or future versus the present enhances self-reported meaning in life, through the generation of more meaningful events. Study 3 demonstrates that experimentally inducing people to think specifically versus generally about the past or future enhances self-reported meaning in life. Study 4 turns to spatial simulation to demonstrate that experimentally inducing people to think specifically about an alternate spatial location (from the present location) increases meaning derived from this simulation compared to thinking generally about another location or specifically about one’s present location.
Study 5 demonstrates that experimentally inducing people to think about an alternate spatial location versus one’s present location enhances meaning in life, through meaning derived from this simulation. Study 6 demonstrates that simply asking people to imagine completing a measure of meaning in life in an alternate location compared with asking them to do so in their present location enhances reports of meaning. This research sheds light on an important determinant of meaning in life and suggests that undirected mental simulation benefits psychological well-being.

I've never had to think about scientific fraud until the last few months. It's been an interesting journey. I've got a paper coming soon that focuses on this issue. I'm of two minds about the need to think about it. It raises interesting ethical and epistemological questions, and I'm deeply interested in both ethics and epistemology, but it also feels parasitical, an exquisite way to distract people from their research. So on that count, part of me resents even having to think about fraud, write about fraud, or deal with people who engage in fraud.
I've talked to a number of people about the recent cases, and I want to highlight something. A couple of people seemed to think that fraud = data fabrication, that they're isomorphic. In that scenario, fraud is inherently hidden, and can only be uncovered by authorities opening up a lab freezer and pulling out the stem cells or doing forensics on someone's data. The definition of scientific fraud is itself a focus of ongoing scholarly inquiry and discussion. There are lots of definitions in the literature. I'm not aware of any that define fraud so narrowly as to restrict it to fabrication. Data fabrication, or what I sometimes call "spreadsheet fraud", is but one class of fraud. We probably tend to think of recent salient cases of fraud and treat them as the prototypical form of fraud, perhaps the only form, in a manner similar to the availability heuristic.

The most famous of the recent cases in social psychology is probably that of Diederik Stapel, who admitted fabricating research. My former program chair at UNC, Larry Sanna, evidently engaged in similar misconduct, though much less information has been made publicly available there than in the Stapel case. One thing we know for sure is that he destroyed the careers of his graduate students – and my friends – before they had even begun. That is really something.

Lewandowsky said this: "NASA Faked the Moon Landing—Therefore (Climate) Science Is a Hoax: An Anatomy of the Motivated Rejection of Science" ...when only three participants out of 1145 in his blog-posted web survey held those two beliefs.

He also said: "Endorsement of free markets also predicted the rejection of other established scientific findings, such as the facts that HIV causes AIDS and that smoking causes lung cancer." ...when 95% and 96% of free market endorsers agreed that HIV causes AIDS and that smoking causes lung cancer, respectively.

I think that's going to have to be fraud any day of the week. The key element of the concept of scientific fraud – and of any definition we'd find – is deception or misrepresentation. Note that the definitions commonly include falsification, in addition to and distinct from fabrication. The definition of falsification commonly includes misrepresenting the results of statistical analyses, or omitting data or results such that the findings are misrepresented or the reader misled. For an example of the former element in a definition, see here. At no point did Lewandowsky et al. inform the reader that there were only three moon-climate hoax people in a sample of 1145, or that only ten people endorsed the moon hoax to begin with. At no point did the authors disclose that only 16 of 1145 people disputed that HIV causes AIDS, that 11 disputed the smoking–lung cancer link, or that only 5% and 4% of free market endorsers disputed those facts. Such trivial numbers cannot be used to draw inferences or run correlations – these could be errant keystrokes, sticky keys, or a few felines. There was no data in this study to support the authors' claims. (Also, the effect proclaimed in the title is not reported in the paper.)

As you can see, in the Lewandowsky case, we don't need to open up anyone's lab freezer. We don't need a committee or authority to tell us this is fraud. We don't need any further information – the numbers I gave above are not in dispute. Anyone can just look at the (stripped) data file the authors released. Our mortal eyes and brains are sufficient to validate my claims or any other claims about the data.
It's just an Excel spreadsheet with some survey responses, one of the simplest datasets we'll ever see. The epistemological structure here is that Lewandowsky et al. made claims A and B. I then argue that claims A and B are false by reference to their own data (and have the potential to harm millions of innocent people who would be linked to beliefs they clearly do not hold.) I point to the data, produce the numbers. I further argue that this kind of conduct – making false claims of the sort they made – comfortably fits within the definition of scientific fraud: not just mine, but many extant definitions, and almost certainly the common person's definition. I take it to be a safe assumption that the authors knew their data, had looked at it and so forth. They knew there were only three people who could possibly fit the effect they asserted in their title, meaning there was no effect. They certainly knew this for at least a year before I stumbled on this case, and they've done nothing to correct the record, nor have they retracted the paper. If anything, they've done the opposite, and Lewandowsky's explanation for his title belongs in a museum of scientific fraud.

The nature of my claim is such that it is based on their claims and data – no frozen tissue samples or university officials will be relevant, logically or epistemically, to what I am saying. Again, the numbers are not in dispute. My claim rests on those numbers and the claims the authors made. There is no other evidence that is relevant or necessary to my particular argument – nothing needs to be uncovered. In that respect, this is an unusual fraud case. People might not be used to cases of this sort, with this structure, but we don't need a novel definition of fraud to enfold cases like these. Existing definitions seem quite adequate, though social psychology as a field hasn't had rigorous conversations about how fraud should be defined, nor do we have institutions or mechanisms to deal with it.

I assumed we did, and I realize now that I drew the wrong inference from the Stapel, Smeesters, and Hauser cases. In every one of those cases, which are about the only cases I knew of, it looked as though the host universities did an admirable job of thoroughly investigating the fraud or misconduct (I'm not clear on what the Hauser case turned out to be.) The Levelt Committee at Tilburg University did a commendable job of investigating Stapel, and produced an excellent report. I now realize that all those cases started with a member of the university community walking into an ethics office or whatever and reporting either fraud or that something wasn't right. As a social psychologist, things are starting to make a bit more sense now. It's much easier for officials and institutions to ignore e-mails from an outsider alleging that one of their researchers committed fraud than it is to ignore a member of their own community sitting in front of them. I should have seen the implications of these factors long ago. In fact, Nature has repeatedly discussed how difficult it can be to get universities to investigate fraud. That's not an excuse for institutions like the University of Western Australia or Queensland. Their officials should be energetic about investigating such cases, and should have a fundamentally different orientation toward outsiders reporting any kind of misconduct.
The people in those positions should be selected for their eagerness to pursue such cases, for their integrity, and for their non-investment in maintaining the university's public image. Such people exist, but I think it's hard to find them if you're not looking very specifically for them. However, these institutions' behavior is not as strange as I used to think, folding in the above factors, just from a descriptive social psychology standpoint. (I never actually contacted Queensland, given that they tried to hide or destroy the evidence of the 97% fraud by sending legal threat letters to the whistleblower who released said evidence. In a move I've never heard of, they even told him that he could not divulge receipt of the letters, claiming that the legal threat letters were themselves copyrighted.) I hope that clears some things up.

There seems to be a subculture in science that carries a sense of extraordinary entitlement and privilege with respect to fraud accusations, a degree of entitlement that would not be found outside of academia. I think Australian philosopher Brian Martin is right in arguing that scientific elites have a strong vested interest in defining fraud as narrowly as possible. I think it's clear that we're going to have a conflict of interest, as a vocation, in how vigorously we want to define fraud. I think many scientists will agree with this – it's an almost trivial insight given everything we know about human nature, and this reality does not require that most scientists be fraudsters, or even that ten percent of them are. And I think it's certainly reasonable to want to err on the side of false negatives rather than false positives (this assumes that our options truly map to differentiable rates of false negatives and false positives – I'm not sure that's true.)

In the Lewandowsky case, some dispute might come down to a researcher's right to say X predicts Y1 when there is no one at Y1, but there is a Pearson correlation between X and Y with a sufficiently low p-value. In this case, I think it's pretty clear-cut, because the HIV-AIDS variable was an opposite variable – there were two levels of disagreement and two levels of agreement. Anytime we have that kind of scale, where one side is the substantive opposite of the other, we know that we can get a positive or negative correlation even if scores are clustered exclusively on one side of the scale. For example, we can easily have a negative correlation between free market endorsement and an HIV-AIDS item even if no one disputes that HIV causes AIDS. A negative correlation does not at all imply that free market views predict rejection of that basic scientific fact – not to anyone who is familiar with the formula for correlation. Neither positive nor negative correlations imply any particular placement on the scale. This phenomenon is made possible by having a scale with more than two points, and is made more and more possible the richer the scale (e.g. a seven-point scale.) This is basic stats. And disagreeing with me on our license to link people to views they do not hold, simply because we have an inferential statistic driven by variance on the other side of the scale, will do nothing to justify the title – I think you'd have your work cut out for you.
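A quick simulation of the opposite-scale point (made-up responses, not the LOG12 data – just the structure): everyone agrees that HIV causes AIDS, and we still get a negative correlation with free market endorsement.

```python
# 4-point opposite scales: 1-2 = disagreement side, 3-4 = agreement side.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 1000
free_market = rng.integers(1, 5, size=n)   # 1..4, made-up endorsement scores

# "HIV causes AIDS": everyone is at 3 (Agree) or 4 (Absolutely Agree).
# Higher free-market scorers agree slightly less emphatically - more 3s.
p_strong = 0.9 - 0.1 * (free_market - 1)   # P(Absolutely Agree): .9 down to .6
hiv_item = np.where(rng.random(n) < p_strong, 4, 3)

r, p = pearsonr(free_market, hiv_item)
print(f"r = {r:.2f}, p = {p:.2g}")                    # negative r, tiny p
print("disagreement-side responses:", int(np.sum(hiv_item <= 2)))  # 0
```

Not one simulated participant disputes the item, yet the correlation is negative and "significant." Frog jump that into "free market endorsement predicts rejection of the HIV-AIDS link" and you've tied people to a view that no one in the data holds.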
You'd also have to argue that we can make any inferences about any of these variables with data from a survey posted on environmentalist blogs, open to anyone in the world, including fakers, where the authors either stripped or never collected age, gender, and nationality.

There's a core logic problem with any notion that fraud should only be defined as data fabrication – that nothing we say can be fraud, that fraud only relates to numbers and not words. If we can say anything we want, proclaim any effect in our titles and papers regardless of whether we have data to back them up, then no one would need to fabricate data. If words can't be fraud, then all future fraud can be shifted from data to words.

My outrage in this case – and it really is outrage – is that millions of people were falsely linked to beliefs that could be incredibly damaging to them. We can never take it back. It's out there now, because one of our own put it out there. It could harm people for years to come. One day OkCupid and similar services might be able to quantify the harm – would you want to date someone, to have sex with someone, who disputes that HIV causes AIDS? Is there any doubt that having read in the NYT that free market endorsement predicts rejection of the HIV-AIDS link might bias someone – even non-consciously – against the conservatives, libertarians, and economists they see on a dating site or meet in day-to-day life? This case is probably the most vivid consequence I've seen of the political homogeneity and bias of the field. I doubt it will ever affect me personally – that anyone would think someone with a PhD and several books to his name might dispute that HIV causes AIDS (in some future scenario where I found myself single.) But it could clearly affect millions of others. I've never understood social psychology to be a vehicle of mass harm. All our vaunted research ethics and IRBs count for nothing if we can't manage to police people, editors, journals, and bodies like APS when they plant false and unbelievably harmful links that could impact millions of innocent people. That breaks my heart. It really breaks my heart that anyone could do this, that I could be associated with a field that does things like this. It will further break my heart if I come to find that other social psychologists' hearts don't bleed upon processing what happened here.

Fake Rolexes

A couple of people have e-mailed me trying to get me to accuse Cook or Lewandowsky of mundane economic/criminal fraud, perhaps to expose me to a lawsuit. One of these people identified himself as Brendan Montague, a person who has a reputation as some sort of environmentalist operative (try Google.) In both cases, these people, having failed to hook me, have argued that we simply cannot use the concept of scientific fraud, because any use of the word fraud must mean economic criminal fraud, like selling fake Rolexes. This is easily the most specious, witless, and sinister argument I've ever heard. The idea that we simply cannot speak of such a well-established, ancient concept as scientific fraud is the only use I have ever had for the word execrable. Scientific fraud is, as I mentioned, a topic of explicit scholarship and inquiry. There are countless journal articles contributing definitions, examining causes, estimating rates, and so on. There are countless cases of it in the news, all over the world. When we speak of scientific fraud, no one thinks we're talking about selling someone a pound of beef that weighs eleven ounces.
Not being able to speak of scientific fraud is tantamount to not being able to speak, and I think we might be facing a rather broad assault on freedom of speech from the modern left. Concepts like "hate speech" and "offensive" are thrown all over the place, and I'm not sure how they're defined, whether their application consistently fits any coherent definition, or whether they are categorically destructive concepts to have in a theory of free speech.

Note that the concept of journalistic fraud is very similar to scientific fraud, well-established, and widely used. When the story broke that Jayson Blair had fabricated all those stories at the New York Times, no one would have taken the topic to be economic criminal fraud. As far as I recall, no one was talking about criminal charges at all. Scientists and journalists are somewhat privileged in that respect – fraud generally has no criminal justice implications.

I don't know much about the Michael Mann lawsuit, but if this is an actual plaintiff's argument in that case, we might want to pay more attention to the case. It seems revolutionary that we'd be banned from using well-established concepts if they happen to have a different use in criminal law. I'm not sure we've ever heard such an argument. Perhaps this is why all those groups across the political spectrum filed amicus briefs siding with the defendants (I forget who all the defendants are.) Gavin Schmidt made a ridiculous argument on Twitter in support of Mann, something to the effect that a charge of (scientific) fraud is defamation per se, which rests on the assumption that we're talking about criminal fraud, an assumption we can dismiss with prejudice. It also rests on the assumption that any such allegation is false, a core feature of defamation. It's so self-serving for scientists to try to silence anyone who would call out scientific fraud, to maintain some sort of aristocratic privilege.

(Tone edits above: I'm struggling to find the appropriate tone with some of these posts. It disappoints me that I still drop to words like stupid and idiotic. There are uses for those words, but I don't think they accomplish much. The idea of not being able to use concepts like scientific or journalistic fraud is so ludicrous and dangerous that I get real snappy. Nothing bothers me more than these assaults on freedom of speech. Without that, I'm not sure what we'd have left. Also edited the "If this is a plaintiff's argument, Michael Mann is a threat to freedom of speech..." clause, since someone rightly pointed out it was based on a conditional, and we probably shouldn't make sweeping statements of that sort based on conditionals. I think I have a tone problem. On the one hand, I think these issues merit a forceful tone, but on the other hand, "forceful" is a continuum, and sometimes my tone seems excessive even to me. I'll have to think about this some more.)

Moreover, I think this new theory of non-contextual word use is incredibly dark. Can you imagine a reality where we were not allowed to speak of scientific fraud unless some authority told us we could? (Or journalistic fraud.) We know that universities and these "fact-finding" bodies have profound conflicts of interest. Ask Nature. It's so strange to place fraud investigations in the hands of such conflicted parties – we don't see that kind of rigged setup outside of academia. We know that we lack the institutions, mechanisms, and anti-corruption measures we would need to reliably police fraud.
We don't even have independent investigators. (FYI, I used to help companies comply with the Sarbanes-Oxley Act.) We know that the scientific community is likely to have some conflicts of interest here. We know that academia in our era is intensely politicized, and that one ideology dominates and discriminates. Model in your head what would happen to the public's trust in science if the term scientific fraud were legally removed from our vocabulary, where fraud-fighters were silenced. If you think the public doesn't trust science now, you might want to get a stopwatch to see how long it takes for taxpayers to defund science in that scenario.

Imagine we were talking about plagiarism, and we said someone stole someone else's work. Would Schmidt and other climate activists say that we must be accusing the plagiarist of grand theft auto? What other words are we not allowed to use? What other words are used to mean different things in other walks of life? Should we be banned from using any word that is used in criminal law, no matter how well-established it is in our non-criminal-law domain? This is as intellectually bankrupt as anything we're going to see in a normal human lifespan. These people definitely need to check their privilege.

(I'm not sure how to adjudicate falsity where a political commentator is assessing a scientist's work, as in the Mann case, especially given that the scientist has clearly done some sloppy work. There might be issues of satire or similar license. There might be some presumption of the writer's right to interpret and judge scientists' work, some range of reasonable and expected divergence. No one should be beholden to investigations by institutions that have such obvious conflicts of interest, or really to any authorities or powers that be. It's demonstrably the case that authorities cannot be consistently relied upon to investigate fraud. People should be free to see, think, and speak for themselves. I don't know the details of what Steyn said, though I remember something about the word "bogus", in addition to "fraud". I can't imagine a universe, certainly not one that features the First Amendment, where writers aren't allowed to assess someone's work as bogus. I don't think Mann has ever been accused of doing anything remotely like what Lewandowsky did (I'm not sure anyone has ever done what Lewandowsky did), but I haven't followed his career. He's the last climate scientist I would trust, given his awful behavior, the way he treats Judith Curry, and how much he politicizes climate science.)

In Conclusion

People seem to be worried about my career, discrimination and so forth. I know full well that all this is extremely poorly timed with respect to my academic career – that calling out fraud, advocating a more credible and scientifically consensual definition of fraud, or even just using the word fraud will lead some people to discriminate against me in the job market. I know it could kill my academic career. But if people seriously think that this would deter me, or that it ought to deter me, I think they have failed to build in themselves the classic scientific virtues. I also think they're heartless with respect to the potential victims of this garbage. It would never occur to me to sacrifice basic principles of integrity, ethics, or beneficence in order to get an academic job, or tenure. The thought is alien to me. I don't know how to live like that.
I'm going to be discriminated against anyway: 37.5% of social psychologists explicitly stated that they would discriminate against a conservative job candidate (maybe libertarians would face less discrimination – I'm not sure.) 44.1% said they thought their departmental colleagues would discriminate. These numbers might be catastrophic for the career prospects of anyone like me. They might be more than sufficient, given some math on opportunities, iterations, and network effects, to lock people out. They might, in effect, confer herd immunity for a politically biased and tribal field. The discrimination will touch people when they submit to journals, apply for grants, and so forth. The math just might be catastrophic. Perhaps there's literature on the math of discrimination and the thresholds necessary to confer herd immunity.

My timescale seems to be different from the hecklers'. I don't care about the short-term. I'm happy to be right fifty or a hundred years from now, in terms of what other people think, and I think it's pretty obvious that these cases are only going to look worse and worse as time passes. Still, I think it's unrealistic to expect that these cases are going to turn out well for the researchers even in the short-term. I used to help companies comply with Sarbanes-Oxley. I know what the standards are in the world at large, certainly in America at large.

Presumably, calling out political bias as an issue will spark even more discrimination, as will disclosing past experiences of discrimination, such as the fact that the University of Arizona social psychology program denied me admission after probing my views of Jimmy Carter, of all people. I decided to disclose that after hearing some horror stories and seeing some new data. The discrimination in social psychology is really a disgrace, and these "scholars" don't seem to have any grasp of the complexity of human thought, the range of perspectives a thinker might take or even create. They seem to have no sense of time and place, don't understand our small place in the grand sweep, and give far too much primacy to the political identities of our day. People who would discriminate against someone for their momentary views of a distant President do not understand the nature of the enterprise. Add my anti-fraud activism against old, tenured white men to the basket, and the fact that I'm a first-generation American, first-generation college graduate, English-as-a-second-language Mexican-American from a copper-mining town with some nice pubs might not save me. I am not unaware of this fact.

But I'll be a social scientist regardless of whether American academia chooses to discriminate against me again. It will look terrible for me to be out there, locked out of academic social psychology or philosophy of science, as the publications and research pile up and up and up. It will be ridiculous, and will make social psychology and academia look absurdly unethical and discriminatory. Be that as it may, it's never been easier to be a social scientist outside of academia, what with the internet, crowdfunding of research, the fact that social science has never been expensive, etc. I'm not going to look the other way on this garbage, not when we're talking about "findings" that can be so harmful to so many people. I've read the books about Enron, and I'm, well, a social psychologist. I know what bias does, what tribalism does. Whatever happens, happens. This is simply the universe as I find it.
Here is a tiny preview of upcoming methodological work relevant to recent events:
The Frog Jump

You have two survey variables:

Variable 1: AB
Variable 2: CD

AB: A and B are opposites sharing a 4-point scale (2 pts for A; 2 pts for B)
CD: C and D are opposites sharing a 4-point scale (2 pts for C; 2 pts for D)

What do I mean by opposites? We have lots of Opposite Scales. Canonical forms are:

Disagreement -- Agreement
Dislike -- Like
Unhappy -- Happy

They can vary in how stark the flip is, but in general one side expresses the substantive opposite of the other, which has profound implications for how we describe things like correlations – or whether it's even appropriate for a scientist to use correlations on such measures. In our example, it's a stark flip, a virtually dichotomous four-point scale: Absolutely Disagree, Disagree, Agree, and Absolutely Agree. There's not even mild or moderate disagreement or agreement, nor is there a neutral or ambivalent midpoint, nor is there a separate unscored No Opinion option (which I strongly advocate.)

Descriptives: 99% of the sample is at the D side of CD.
- 99% of people at A are at D (A is the large majority of the sample)
- 95% of people at B are at D (B is a small fraction of the sample)

Still, for some reason you decide to run a Pearson correlation between AB and CD. Negative correlation, nice mystical, meaningless p-value; sacrifice a goat to the gods. Write it up and say "B predicts C." (Instead of saying AB is negatively correlated with CD, which will be hard because A and B are opposites, and C and D are opposites. Choose to talk only about B and C, even though no one is at C.) (And don't tell anyone that there's no one at C, or that 95% of people at B are at D.)

Part of the dark magic here is using variance within D – remember, D includes two points of the scale (Absolutely Agree and Agree) – to power a negative correlation between AB and CD (there won't always be variance on one side to power a correlation – it just happened to go down that way here), then frog jumping it to C when you write it up. Even though everyone's at D, the unscrupulous researcher refers only to C, which is the substantive opposite of D.

There are several root causes here. One is that disagreement and agreement scales are wrongly treated as continuous variables. This is why we can't use a single letter for each variable in the example – there are really two variables in each variable. One is disagreement and one is agreement, and they are substantive opposites (and there was no midpoint here.) This will be less bad if you've got variance across both sides, not a 99%-on-one-side situation. But if all your participants are on one side of a substantively dichotomous scale, and you treat that scale as continuous, run a correlation on it, and then frog jump to the other side of the variable (where no one is) when you write it up, that's dark, dark business. It's wildly irresponsible, and the severity of the misconduct rises with the harm people could suffer from being falsely linked to the empty side of the opposite variable. In this case, C was disagreement with "HIV causes AIDS." (D was agreement, where 99% were.)

Running linear correlations on substantively dichotomous variables opens up the opportunity to frog jump from one side to the other when researchers write up the results. It mistakes direction (+ or – correlation) for destination (C, when everybody's at D), and creates enormous opportunities for bias and fraud. A toy simulation of this setup is below.
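Here's a toy reconstruction (hypothetical counts chosen to match the proportions above – not anyone's actual data) showing the negative correlation alongside the descriptive check the frog jump omits:

```python
# AB and CD are 4-point opposite scales: 1-2 = A / C side, 3-4 = B / D side.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_a, n_b = 1100, 45  # A is the large majority of the sample; B is tiny

ab = np.concatenate([rng.choice([1, 2], n_a), rng.choice([3, 4], n_b)])

# ~99% of A and ~95% of B are at D; A leans Absolutely Agree (4),
# B leans Agree (3). Variance within D is what powers the correlation.
cd_a = rng.choice([2, 3, 4], n_a, p=[0.01, 0.14, 0.85])
cd_b = rng.choice([2, 3, 4], n_b, p=[0.05, 0.60, 0.35])
cd = np.concatenate([cd_a, cd_b])

r, p = pearsonr(ab, cd)
print(f"r = {r:.2f}, p = {p:.2g}")  # negative correlation, low p-value

# The descriptive check the write-up omits: who is actually at C?
print(pd.crosstab(np.where(ab >= 3, "B side", "A side"),
                  np.where(cd >= 3, "D side", "C side")))
```

The crosstab tells the truth in two seconds: virtually everyone, including the people at B, is at D. The Pearson r, reported alone, is what lets a writer say "B predicts C."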
A researcher could use a correlation to proclaim the opposite of the truth – in this case "B predicts C", when in fact almost all B are at D, not C. The word "predict" is a common way to describe correlations and regressions, but using it as it was used in this case is false ("B predicts C".) If you wanted to find out whether B in fact predicts C, a linear correlation between AB and CD will not give you the answer. Completely different analyses are needed, which will require real scientists. This might be underexposed, but linear correlation is not an inherently valid methodological decision. An r statistic with a desirable p-value is not inherently meaningful. Nothing is. This is especially true when the variables are opposite variables. We can't use variance on one side of an opposite variable to say anything about the other side. People who would frog jump on something like "HIV causes AIDS" and falsely link millions of innocent people to denial of that fact should turn in their lab coats. Editors who allow this should edit no more. Journals with the word "science" in their titles who publish this incredibly harmful, wildly unethical malpractice should vigorously reform and reconnect with their calling.

Let's say you want to find out what climate skeptics think and why they think it. What's a better method?
1. Find a registry on the blogosphere where climate skeptics have laid out their views in some detail, and how they came to those views, as well as their educational and professional backgrounds. Read these entries and report their content in some sort of organized fashion in a journal article.

2. Post a survey on environmentalist, anti-skeptic blogs, open to anyone in the world of any age, including fakers. Ask participants if the moon landings were a hoax and if HIV causes AIDS, along with climate questions. Notice that about 2% of self-identified climate skeptics in your survey endorse the moon-landing hoax. Report that climate skepticism springs from belief in the moon-landing hoax.

1 or 2 – you be the judge. When the story of Method 2 is told, in magazines, books, television specials, and so forth – wow. I want to see the looks on people's faces. It's going to be fun. The University of Western Australia and Bristol might want to get ahead of this thing before they end up under it. Eric Eich's role in all of this is fascinating, as is APS's. It's actually sparked some scientific curiosity, in myself and others, about the cognitive processes involved – the strange combination of the exquisiteness of the human eye and its willful non-use. Blind people should be offended by this. All these good eyes going to waste.

In any case, regarding Method 1, here it is. Nice work by applied mathematics Professor Paul Matthews of Nottingham, a gentleman and a scholar. Such men were once the rule in the academy. My hat is off to you, sir.

For years I've thought it's ridiculous that women are smaller than men. There's no reason why we should just accept this.
Embodied cognition is an endlessly interesting area of research. I don't know what's been done on the condition of chronic smallness with respect to other beings, mates, authorities, and so forth. The fact that women are smaller than men is a profound state of nature that could easily shape hundreds of downstream consequences, and it can certainly be a factor in the cloak of silence that is all too familiar to many women. We evolved this way for a variety of reasons, and I don't remember all the factors. I haven't thought about sexual dimorphism in a long time either.

Evolution is a bitch though. It staggers me, takes my breath away, to think about how we evolved, how humans emerged from all of this. It staggers me to think of how a conceptual consciousness emerged. That is not a normal thing, not an ever-before thing. It staggers me to think of the evolution of human language, which is tightly bound up in our conceptual consciousness. Evolution sprouted beings who appear to have free will at important levels of analysis, beings who do art and airplanes and archaeology. Out of the blood and death and relentless math of fitness value, the punctuated equilibria, genetic bottlenecks and asteroid reboots, we got rappers, Frederick Douglass, the Magna Carta, telescopes, Mighty Mouse, and I Know What You Did Last Summer. And I can see evolution as granular, ruthless, arbitrary, and charmless. Our adaptations are often orthogonal to our current values and aims, or even incompatible with them. Separately, I think it's perfectly reasonable for someone to not understand evolution, to find it unintuitive, and to be unmoved by it. I think screaming at people who aren't charmed by evolution is a fascinating strategy, and an unimpressive scholarly pastime.

Evolutionary psychologists like to point out that we're still evolving. True dat. But this observation misses an important fact. We are clearly going to take the reins on a much shorter timescale than it will take for any ongoing adaptive processes to yield noteworthy changes in who we are. It's 2015. We know we're going to improve ourselves in the future. It will probably start with disease prevention. China is already looking into boosting average IQ. I think we know far too little about IQ, intelligence, human cognitive processes that bear on both intelligence and aptitude, the facets of intelligence, the trade-offs, or the meaning of life to be mucking around with genetic engineering aimed at producing more talented engineers or better clay for tiger moms anytime soon. It's too early. Gattaca was a warning. But we know it's inevitable that we will improve ourselves – "improve" by our own standards, which are the only ones that matter. Within a hundred years, we will surely see some movement there, and probably within fifty (I think Kurzweil overestimates the speed of progress.)

At some point we'll be able to sort out this whole chronic smallness situation, this fundamental fact that shapes the way women and men interact. Our size difference shapes the nature and relation of men and women in more ways than I will ever be able to identify. It's not going to be simple. I might be missing something. Perhaps we would lose something important, something more important than the benefits of equalization. I doubt it. It's ridiculous that women are smaller than men.