This is a popular idea that has gotten good coverage, most recently in FastCo. Leaf Van Boven has been doing great work on this for years. (He was Gilovich's graduate student – the FastCo article quotes Gilovich a lot because it's based on a new review article by Gilovich & Kumar.)
One of the things I want to nail in my book is the limited relevance of main effects. I mean statements like "X makes people happier", "Y is good for you", "Z is bad for you", "Self-esteem predicts bullying", and of course the FastCo title: "The Science of Why You Should Spend Your Money on Experiences, Not Things." There's a systematic problem with the coarseness and limited utility of main effects in social science, and in a lot of biomedical work. There are many different issues at play, at different levels of analysis, that make the above statements problematic (the epistemic and methodological problems with "X makes people happier" are different from the problems with claims like "Self-esteem predicts bullying".) But let's focus on this story about the science of why you should spend your money on experiences rather than things... 34% of you should not do this. Let's backtrack to the studies. This is often measured by asking people what made them happier in retrospect, which is probably a valid way of getting at the question (there are probably several valid approaches.) Do you think 100% of people reported that experiences made them happier than purchases? Probably not. We never get universality in social psychology or positive psychology research. I think science writers should linger on these sorts of questions and should get the actual percentages for the effects they report (my new website will make these sorts of issues more salient and not dependent on a particular post.) In one of the flagship studies on this (Van Boven & Gilovich, 2003), 57% of participants reported that an experience made them happier than a purchase (they were asked to think of one of each, where their motivation for the purchase or experience had been to increase their happiness.) 34% rated the purchase as making them happier than the experience. (The remainder were Not Sure or declined to answer. Also, I'm using "purchase" to mean the purchase of a thing or object.) 
So, if you read this article at FastCo – or similar articles at the NYT or Slate – and you act on it, spending your money on experiences instead of buying something you would have treasured, 34% of you will be less happy than you would've been if you had bought whatever you wanted to buy. We might even say 34% of you will be less happy than if you had never read the article, if it changed your decision. That's interesting. It may not matter much, but it could. Qualifiers include: -- "Less happy" is not necessarily unhappy. What was measured in this study was people's choice of which made them happier. In many cases, both the experience and the purchase might have made them happy; one just made them happier than the other. (The mean rating of how happy a purchase made them was above the midpoint at 6.62 on a 1 – 9 scale, and experiences were at 7.51. Different study, same paper.) -- This is a recalled, reflective judgement of what made people happier. I don't know much about the literature on the validity or nature of such retrospective happiness ratings. I assume they're not as unreliable as our predictions of what will make us happy (see Dan Gilbert's excellent work there.) -- There are likely to be all sorts of contextual effects and circumstances that shape one's choices. Happiness as a concept may not be commensurable across individuals, and it certainly doesn't have to be the most important consideration. I think this reality sometimes gets lost in positive psychology – people don't have to care a lot about happiness. They might care about meaning or some other value more than happiness. They might care about the happiness of other people, like their spouses or children, more than their own (though making them happy would presumably make oneself happy in that case.) In fact, suffering can be very valuable. There seems to be an intolerance for suffering implied in positive psychology and in modern politics. I think suffering might be getting short shrift. 
Obviously, this can all be very complicated, and the right decisions for you might be heavily and consciously influenced by your philosophy, values, or faith. Certainly what is happening when people answer a question like “When you think about these two purchases, which makes you happier?” might be very complicated. But say we keep it simple. The headline and article only apply to 57% of people, if we take everything at face value, assume that readers of the article are very similar on average to the samples used in the studies, and so forth. (This was an excellent sample, as it was a random digit dialing survey by professionals at Harris Interactive, so we don't have to worry about people making claims about human nature based on research using college students. It was conducted in 2000, so random digit dialing was quite valid – landlines were still common.) This phenomenon of the media reporting an effect as though it applies to everyone is recurrent. And a lot of times, the percentage of the population to whom it applies is not very large, maybe a slim majority like 57%. Sometimes, it will be a minority – a minority of the sample can drive an effect, whether it's a correlation or a difference between experimental groups. In this case, it could easily have been that 40% were happier from experiences, 34% were happier from purchases, and the rest were don't know/no opinion. That's a very normal outcome. Note that the researchers dig into the factors that shape the effects – Gilovich and Kumar address the heterogeneity, and Van Boven has done lots of work getting at moderators. I'm focused on how findings are communicated and simplified by the media. This isn't a scandal or anything – lots of people might live better lives because they read about this research. Some people might be worse off if they make decisions based on what they think science says they should do. It's complicated, and I don't have any easy answers. 
Broader tangent: Part of the story here is hedonic adaptation – how we can be happy for a time with some new circumstance or purchase, but over time our happiness returns to baseline, and the thing that made us happier doesn't do much for us anymore. Researchers have noted that we don't seem to be getting happier across generations, and the implication is that we should be (because our objective standard of living is much higher.) I've always found that strange, and I discuss it in my chapter on measuring well-being. For one thing, the baseline is happiness – most people report being slightly or somewhat happy, and mean happiness is above the midpoint. Given the way we measure happiness, there isn't a lot of room to capture movement upward, for a number of reasons. Second, it's not clear to me why we would want or need more happiness. I certainly wouldn't think it would be wonderful if people were maximally happy all the time. We have negative emotions for a reason – lots of reasons, really. The environment is not neutral with respect to our well-being, and it's certainly not the kind of place where we could be super happy all the time and still function as organisms. Mild happiness might be optimal for a civilization – I don't have a strong position there, but assuming that the mean level of happiness should be some value X, or in some range R, different from the status quo is a pretty big assumption. I think it would need some justification.
2014 was the hottest year on record by some hundredths of a degree. It was not significantly hotter than 2005 or 2010. See the Berkeley BEST lab for details. Global surface warming paused or slowed down after 1998 – there is some dispute about whether to call it a pause or a slowdown. We'll treat it as a pause or a plateau because that is the least favorable assumption for the point I make below (treating it as slow warming makes the points below even stronger.)
The "hottest year on record" got lots of hype, and people made false inferences about it with respect to global warming, the pause, etc. I saw deeply misleading stories in the New York Times and the Washington Post, which worries me. They're supposed to be the best. When you have a rise in a variable, followed by a plateau, any given data point during the plateau has a decent chance of being the highest on record. You're on top of a rise. Think of your weight in your 20s – any given year has a decent chance of being your heaviest on record up to that point, since you've spent most of your life growing and gaining weight and you're now sitting on top of that rise. The probability of any given year being the hottest on record is 1/n, where n is the number of years in the plateau. (That calculation assumes that variance during the plateau stays within the margin of the elbow of the rise, an assumption that is satisfied if you look at global temperature data.) Justin Gillis wrote an incredibly misleading article at the New York Times. He does that a lot, and I think he's ethically obligated to disclose his political ideology to his readers when he writes about politically charged topics. In general, it's irresponsible for science writers who are also environmentalists to conceal this from readers when they write about climate science. This is especially true if they relate to environmentalism as a religion, as the recently resigned IPCC chair did – he should have told us it was his religion before he ever took the job. Gillis seems unaware of what it means to be on top of a rise, and that any given year has a decent chance of being the hottest on record. We can also blame the climate scientists he quoted. 
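The 1/n point can be sketched in a few lines of simulation. This is a toy model under the assumption stated above – the plateau's variance sits above the elbow of the rise, so only plateau years compete for the record – and the numbers (17 years, the mean and noise levels) are illustrative, not real temperature data:

```python
import random

random.seed(1)

# Toy model: n plateau years (think 1998-2014, so n = 17) drawn from the
# same distribution, sitting above the earlier rise so that only plateau
# years compete for the record. The hottest year is then equally likely
# to be any of the n years, so any given year -- including the latest --
# has a 1/n chance of being the hottest on record.
def last_year_is_record(n_plateau=17):
    plateau = [random.gauss(0.6, 0.05) for _ in range(n_plateau)]
    return max(plateau) == plateau[-1]

runs = 20_000
share = sum(last_year_is_record() for _ in range(runs)) / runs
print(share)  # close to 1/17, i.e. about 0.059
```

In other words, a roughly 6% chance per year of a new record during a 17-year flat stretch – entirely without any further warming.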
Stefan Rahmstorf said: “However, the fact that the warmest years on record are 2014, 2010 and 2005 clearly indicates that global warming has not ‘stopped in 1998,’ as some like to falsely claim.” The fact that 2014, 2010, and 2005 are the hottest years on record is another way of describing a pause or plateau in a universe where variables have variability. It is definitely not evidence of continued warming, and is fully consistent with warming having stopped. That's what "stopped" looks like. If we peaked at 1998 or whenever, and see random variance from that year, several of the subsequent years will be the hottest on record. (That these years were the hottest on record is also consistent with warming, but we'd need more info to know if significant warming has occurred.) Michael Mann said: “It is exceptionally unlikely that we would be witnessing a record year of warmth, during a record-warm decade, during a several decades-long period of warmth..." What is going on here? Why would a scientist ever say something like this? It is exceptionally likely that we'd be witnessing a record year of warmth during a record-warm decade. This is precisely when we'd expect to see it. This is also another way of describing a pause or plateau. Gavin Schmidt said: "Why do we keep getting so many record-warm years?” Because the earth warmed. If the earth warms and it does not subsequently cool, we will get a number of record-warm years. This is another way of describing a plateau, pause, or a question on a high school statistics test. This worries me. What the hell are these people talking about? Why don't they know basic probability? Why is no one pointing out that when you're on top of a rise, any given year has a decent chance of being the hottest on record? This is basic stuff. There was also a lot of nonsense in the media about a 1 in 27 million chance that 2014 was natural. 
Peter Gleick, a propagandist employed by the same bizarre journal that published the 97% fraud, even tweeted something to this effect. Years are not randomly drawn from a hat, and yearly temperature averages are not independent data points. It's not meaningful to compare 2014 to hundreds or thousands of other years and calculate odds or probabilities. 2014 followed 2013, and 2012, and so forth. Its temperatures are deeply influenced and constrained by the state of the earth's climate in prior years. It's not as though the earth hits a reset button as the clock strikes midnight in Times Square. None of this says anything about future warming or model projections – my point is that as a basic mathematical and statistical fact, there's a decent chance any given year will be the hottest on record even if we assume no actual long-term warming. Gillis seemed to think the proposition "Global warming has paused" is contested by the observation that 2014 is the hottest year on record by hundredths of degrees. That is simply incorrect. There's no logical intersection between the two claims. This stuff worries me, and I get grumpy about it, because this is simple math and probability. Our civilization seems extremely vulnerable to misinformation and innumeracy. It makes me uneasy that we can't get basic stats right in 2015. I feel like we're going to do something stupid, something harmful. Not necessarily on climate change, but something – if we can't get basic logic or probability, basic stats awareness, from the New York Times or the Washington Post, then I don't know where the public is going to get it. They're supposed to be the best. The science writers there are supposed to be the best. They have an ethical responsibility to not mislead the public, and when they stumble, they have an ethical obligation to correct their misinformation. I sent those newspapers a fuller version of this a month ago, when it was fresh, and they wouldn't publish it. 
I'm sure I was one of a sea of submissions – what's important is that they need to publish someone who knows basic math and statistics, who won't make such big errors. They need to be truthful and valid in how they report science. Alternative ways of understanding or expressing the above: -- Having a hottest year on record around now is consistent with both a pause and actual long-term warming. A pause after decades of warming will include some number of hottest years on record. -- Variability around a flat line means that some of the data points will be above that line. If the flat line appears after a long rise, those above-line data points will be the highest on record. In a comment on my post on Significance, I discovered that our rate of published Type-1 error in science would probably be higher if humans had eight fingers instead of ten. Type-1 error is when we wrongly reject the null hypothesis – when our studies seem to give us evidence of an effect or link that in reality is false in the population at large. It's a false positive, a finding that isn't a true finding. Setting our threshold of statistical significance at p = 0.05 is one way we try to reduce Type-1 error.
People have had trouble understanding that comment – it's too brief and skips some steps. I'll lay it out more clearly here. In the post on Significance, I pointed out that one reason we use the p = 0.05 threshold for statistical significance in many of our tests is that humans have ten fingers. Because we have ten fingers, we use a base-10 number system, and we tend to prefer numbers that are multiples of ten or five for many purposes. It's probably intuitive for most of you that scientists would have been unlikely to choose 0.04 or 0.061 as our significance threshold. We needed something in that ballpark, something sufficiently stringent, and it's not surprising that we chose 0.05 – nor would it be surprising if we had chosen 0.10 or 0.01. We see those numbers as nice, clean, "round" numbers, unlike 0.03 or 0.04. I noted that if we had eight fingers, and thus had ultimately settled on a base-8 number system, we might use 0.04 as our threshold for significance. Holding everything else about human nature and psychology constant, it seems likely that in that scenario we'd prefer numbers that were multiples of eight and four, just as we prefer tens and fives in our universe. In a comment, Jonathan Jones pointed out that 0.04 in base-8 is actually 0.0625 in base-10. What does that mean? It means that the numeral – the string of symbols 0.04 – represents a different value in a base-8 number system than it does in base-10. When people in a base-8 civilization write or say 0.04, it represents a different quantity of stuff (or of probability) than it does when people in our society write or say 0.04. It's difficult to think in different number bases, because we're so conditioned to certain symbols corresponding to certain values. It's similar to the Stroop task, where you might have to identify the color of the word green when it's displayed in orange-colored text (naming either the written word or the displayed color, depending on the task.) 
To understand different bases, it's helpful to distinguish between a numerical value and the symbols we use to represent it. You can get this just by noting that we could use any symbol we wanted to represent the number 4 – e.g. we could treat @ as 4 – but we've long settled on the symbol 4. A base-10 system has ten numerical digits, ten unique graphemes that represent the first ten integers (including zero): 0 1 2 3 4 5 6 7 8 9. A grapheme is an elemental visual symbol of a written language, what in computing we might call a character (see Unicode). Every letter you're currently reading is a grapheme – i.e. the letters of the alphabet are graphemes. The symbols for numerical digits are also graphemes. (Note that the word digit comes from the Latin digitus, meaning finger or toe, which helps illustrate how our number system is based on our finger count.) There is no single digit or grapheme to represent the value ten (10) because we've used one of the ten digits to represent zero. Once we hit ten, we need multiple digits to represent numbers. A base-8 system has eight numerical digits, eight unique graphemes that represent the first eight integers (including zero): 0 through 7. The integers 0 1 2 3 4 5 6 7 – just those single-digit integers – represent the same values in base-8 and base-10. Things change once we get past the number 7, or into multiple digits. The numerals 8 and 9 do not exist in a base-8 system. Once we get past 7, we need multiple digits to represent numbers, just as in base-10 we need multiple digits to go higher than 9. In base-8, the value 8 is represented as 10. The value 9 is represented as 11. And 0.04 in base-8 represents the value 0.0625 in base-10. Why? How do we convert fractional values from one system to the other? Let's start with 0.1. In positional notation, the first place after the point represents one part in the base, so 0.1 means one-tenth in our base-10 system and one-eighth in base-8. Then let's try 0.01. 
This second place represents one part in the base squared. So in base-10, 0.01 means one-hundredth (1/10²), and in base-8 it means one-sixty-fourth (1/8²). Therefore, in base-8, 0.04 is four sixty-fourths, or 4/64, which is 0.0625 in base-10. Since a p-value of 0.0625 or lower is easier to obtain (and literally more likely) than a p-value of 0.05 or lower, more Type-1 errors would be published. If we had eight fingers, it's quite plausible our threshold would be 0.0625 (base-10), which we would call 0.04, and we'd have a slightly more error-prone scientific culture. That's interesting. This assumes the same scientific ecology, where significant findings are favored over marginally significant and non-significant findings – it assumes we're holding everything else constant, which seems right. It also assumes that a different objective threshold doesn't impact the quality of the research or have any other tricky dynamic effects. A lot of the rules of thumb and threshold values we use in our civilization are ultimately grounded in the fact that we have ten fingers. This reminds me of the book I'll write on evolutionary psychology and exobiology in ten years. I think it would be very fruitful if evolutionary psychologists (and biologists) zoomed back a bit and framed human evolution by thinking about how various background factors would be different on other planets, and what the implications are. It goes much further than the number of fingers we have. For example, think about fire and its many impacts and implications, and consider that fire will be atmospherically impossible on many planets (even those in habitable zones), how that would impact the course of life compared to Earth, the kinds of organisms that are possible and not possible, and the consequences ten or twenty steps deep into the model. This is so refreshing: the Journal of Basic and Applied Social Psychology has banned p-values.
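Going back to the eight-fingers arithmetic for a moment, both pieces are easy to check in code. This is a minimal sketch: the conversion is exact, and the simulation assumes a true null hypothesis, under which p-values are uniform on [0, 1], so the published false-positive rate is simply whatever threshold a civilization picks:

```python
import random

# The numeral "0.04" read in base 8 means 0/8 + 4/64 = 0.0625 in base 10.
def frac_value(digits, base):
    """Value of the fractional digits of a numeral in the given base."""
    return sum(int(d) / base ** (i + 1) for i, d in enumerate(digits))

assert frac_value("04", 8) == 0.0625   # "0.04" in base 8
assert frac_value("05", 10) == 0.05    # "0.05" in base 10

# Under a true null, p-values are uniform, so the Type-1 rate at a
# threshold t is just t. Simulate to see the eight-fingered penalty.
random.seed(0)
trials = 100_000
p_values = [random.random() for _ in range(trials)]
rate_base10 = sum(p < 0.05 for p in p_values) / trials    # ten-fingered threshold
rate_base8 = sum(p < 0.0625 for p in p_values) / trials   # eight-fingered threshold
print(rate_base10, rate_base8)
```

The ratio 0.0625/0.05 = 1.25 is the point: same science, different finger count, a 25% relative increase in published false positives.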
They banned confidence intervals too. Their reasons are very good, and well articulated. I forgot to stress this point explicitly in the recent essay on the Lewandowsky fraud, but descriptive statistics always trump inferential statistics. A p-value has no inherent substantive meaning, nor does the underlying statistic (this is especially true of a linear correlation on scale items where the participants' actual placement on the items is undisclosed, as it was in the LOG12 paper – a situation that was apparently fully satisfying to Eric Eich, Psychological Science, and APS.) The way we use inferential statistics is often barbaric, though rarely fraudulent. This is a great first step. I think Trafimow and Marks' explanation will be informative to a lot of readers and scientists. I think a lot of us have the wrong idea of what a confidence interval represents. This will be interesting long-term for its impact on our use of terms like "effect". A low p-value doesn't mean there's an effect. An inferential statistic doesn't mean there's an effect. Other data, including descriptive data, are needed to know whether we have an effect. We're especially vulnerable to false or overstated effects when we use student samples, because such samples artificially reduce noise (student samples are extremely homogeneous on various dimensions, dimensions that would contribute noise or unexplained variance in community samples.) I don't think I fully processed the implications of the Inbar and Lammers data.
37.5% of social psychologists in their survey explicitly reported a willingness to discriminate against conservative job candidates. These are people who chose the midpoint or higher, where the midpoint was Somewhat inclined to favor a liberal vs. conservative candidate. The 37.5% figure might be an understatement, given that the lower values on the scale still represent some inclination to discriminate. For some reason I'd been working with the much smaller ballpark figure of 20%, which is closer to the figures on discriminating in paper and grant reviews. I thought a 20% base rate might be enough to confer herd immunity given the academic career and hiring model. 37.5% is enormous. I think if we plugged it into a good model, it would be catastrophic for the careers of incoming conservatives (if they could be identified, if they were open with their views, had written op-eds in the school newspaper, maintained a blog, or were affiliated with Young Americans for Freedom, and so forth.) I'm curious what base rates we'd see for racial and gender discrimination in the private sector, maybe going back to the 1960s or so. I'm curious at two levels: self-reported willingness to discriminate, as in these figures, and some sort of estimate of the actual base rate of discrimination from hiring managers. I think it's generally reasonable to assume that the actual base rate of discrimination will be higher than the self-reported rate. Also, given that academic hiring is committee-driven, at any given base rate there will be more exposure to discrimination than in the private sector, where one person might make the decision. The 37.5% figure means virtually every hiring committee will include a discriminator, which is partly why I think this figure might be catastrophic. It will depend on how these committees work and a few other variables. I've had a couple of conservative RAs, and I'm not sure what we should tell them in general. 
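The committee-exposure point can be made concrete with a little arithmetic. This is a sketch under simplifying assumptions: committee members drawn independently from the surveyed population, and the committee sizes are hypothetical, not from the survey:

```python
# If 37.5% of the field reports willingness to discriminate, the chance
# that a randomly composed hiring committee of size k contains at least
# one such member is 1 - (1 - 0.375)^k.
base_rate = 0.375
for k in (3, 5, 7):
    p_at_least_one = 1 - (1 - base_rate) ** k
    print(k, round(p_at_least_one, 3))
# k=3 -> 0.756, k=5 -> 0.905, k=7 -> 0.963
```

Even at the smaller 20% base rate I'd been assuming, a five-person committee has a 1 − 0.8⁵ ≈ 67% chance of containing at least one discriminator.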
I'm not sure it makes sense for a conservative to go to graduate school given the level of discrimination in the field. It seems quite unlikely that they'll be able to have a career. In our paper, we relate the following story of a graduate student in a top social psychology program: “I can’t begin to tell you how difficult it was for me in graduate school because I am not a liberal Democrat. As one example, following Bush’s defeat of Kerry, one of my professors would email me every time a soldier’s death in Iraq made the headlines; he would call me out, publicly blaming me for not supporting Kerry in the election. I was a reasonably successful graduate student, but the political ecology became too uncomfortable for me. Instead of seeking the professorship that I once worked toward, I am now leaving academia for a job in industry.” That's incredibly vicious and unprofessional behavior, and it's not unusual in academia. (It's also intellectually vapid and provincial, given that the next Democratic President stepped up the drone strikes and killed far more people with them than Bush did – I assume we're not just caring about the lives of American soldiers.) In fact, I'm not sure I would've gone to graduate school had I known that 37.5% of social psychologists explicitly report a willingness to discriminate. It's hard to say. I think libertarians will fare at least slightly better, but we'll often be encoded as conservatives. Note that the student above isn't necessarily a conservative – they merely say that they are not a liberal Democrat. I think the core issue is that academic liberals think liberalism is true, and conservatism is false and malicious. This sounds tautological, but I think it's very important to linger on the fact that they think liberalism is true, all the way down, and on what the implications are. And they often equate conservatism with creationism and various anti-science proclivities. 
We see that in some of the commentaries – people think the intellectual landscape consists of 1) the left, and 2) creationists/religious zealots. It seems implausible that anyone would think that's the intellectual landscape, but it's somewhat common in academia. If you think liberalism is completely true, and conservatism is anti-science, it makes no sense to bring conservatives into the field. Calls for diversity would have no appeal, and would be nonsensical. So I think the core issue is exposing the breadth of the intellectual landscape, and the inherent potential breadth and nuance of human scholarship. I think people are granting far too much primacy to the intellectual and political landscapes of our day. We're a small part of the grand sweep, one that extends thousands of years into the past and will extend thousands of years into the future. We just happened to be born here – we could have been born in any other era, with any other landscape. I think academic liberals take for granted that they are on the right side of history, that their values and aims are inherently progressive – hence they co-opted that term. They might be right. I certainly agree with some of their major positions. But there is a lot of apparent baggage that comes with contemporary academic liberalism, theories and neuroticisms that I don't think we should be confident will endure through the ages. And core features of the framework could certainly be wrong. One potential danger is that it is so untested, so unpressured, given the intellectual homogeneity of the academy. That would always worry me. For example, while I see some merit in environmentalism, it's the most untested, unexamined, unpressured ideology I've ever seen. It has no symmetric opposite (people who hate trees and savor pollution), though alternative schools wouldn't grant its premises, so we wouldn't expect a symmetric opposite. No one is deconstructing it. 
I can imagine lots of refutations of and alternatives to environmentalism, but we don't see that work in academia because everyone seems to embrace it. That's dangerous. Even if we thought liberalism was completely true, it's unreasonable to demand that people embrace the one true philosophy at age 22 or whenever they'd apply to a social psychology program, or at age 28 when they look for a job. Another concern is that the academic left has some distinctive antibodies and defense mechanisms for handling dissent, almost a taxonomy. Dissent is precategorized and marginalized as privilege, racism, sexism, and assorted "motives". That can be a powerful shield against substantive disagreement, and it installs begging-the-question as a chronic fallacy. Maybe all ideologies have these sorts of immune systems. I know Scientology has a distinctive lexicon for people who speak out against the church, maybe a specific term for former Scientologists who speak out. In any case, I think assuming one's ideology and philosophy is so true that discrimination is justified is just begging to be featured in the wrong chapters of history. My empirical work focuses on envy, particularly its benign and malicious forms. I was inspired by van de Ven, Zeelenberg, and Pieters (2009). The Dutch language has separate words for benign and malicious envy, which helped frame their research.
Malicious envy: hostility toward the target, an appraisal that the disparity is unfair, and low confidence in one's ability to close the disparity. Benign envy: still an unpleasant, negative emotion (not admiration), but focused on self-improvement, greater effort, and closing the gap. They characterized malicious envy as leveling down – bringing the target down – while benign envy is leveling up, bringing yourself up to their level of achievement or whatever it is. Something I've been thinking about lately is perhaps a more generalized malice or hostility toward people who stand out, speak out, or invoke certain ideals or ethical principles. That last bit seems to be a forceful induction. Here's what I'm talking about: 1. Judith Curry says climate scientists need to better communicate uncertainty, need to be more responsible, and so forth. 2. A scientist responds with "so you think you're the one to drive this" or "so you think you're better than us." It's that last part, the "you think you're better than us", that has always struck me. Judy's case wasn't the first time I had seen that kind of thing. Some people respond to idealists, at least outspoken idealists (who are the only kind they're going to know about), with an instant social comparison. And it's not just social comparison. There's a bit of extra content in there – it's not just "this person is better than me", but "this person thinks they're better than me." It doesn't necessarily concede that the idealist is "better", just that they think they are. That response has always struck me, that element of the other person thinking they're better than oneself. It's not obvious to me why we would respond that way to an idealist or crusader. They generally aren't saying anything about being better than anyone. I suppose it's easy to draw out that implication, though it's optional. I've seen it in graduate school, especially from female students toward an especially attractive female student. 
How women treat women is worthy of a lot of research in itself. I once heard a grad student say "She thinks she's better than me" based on the person's way of standing. I was so puzzled. I asked, "Why do you think she thinks she's better than you?" That seems so specific, and a lot to infer from a stance. The whole "X thinks he/she is better than me" framework puzzles me. I never think someone thinks they're better than me. I routinely think someone is actually better than me at some specific skill or domain, but I don't really have a concept of comprehensive betterness as a person, unless we're comparing regular good people to bank robbers. It doesn't occur to me that someone thinks that. The social comparison aspect reminds me of what Sonja Lyubomirsky wrote in her book about how she and Lee Ross hypothesized that happier people engage in downward social comparison – that this would be a reason why they were happier. To their surprise, they found that happier people tend not to engage in social comparison at all. But what triggers the hostility to idealists? It looks somewhat like malicious envy, but I don't know that it satisfies the definition of envy. There isn't anything obvious that the other person has, unless it's about fame or attention, though in many of these cases there isn't a lot of that. Stepping back, envy makes sense as a signal, even an evolved mechanism, because it carries important information – that it is possible to be doing better than one is doing. It's possible because here is a person who proves it, someone with more resources, more acclaim, more money, meat, fur, whatever. This might be the most efficient way for a human to learn that it is possible to achieve more on some consequential dimension. I suppose an idealist could signal to me that it's possible to be a better person, maybe that I should have attended to the things they're talking about, that I have failed to be a good person. 
And that appraisal gets converted to "they think they're better than me". That seems pretty coarse – I'll have to think about it more. Clearly lots of people respond to idealists with support, even worship depending on the context. Yet some people seem to dispositionally respond with malice, and the social comparison seems to be a part of that process. I'd really like to dig into that process. I don't think there's a lot of research on people's responses to idealists, virtues in others, etc. Feel free to jump ahead of me on the data collection. One of the things that may matter is that some idealists are seen as overly preachy, insufferable, while some aren't. So characteristics of the target might moderate effects. I've seen similar responses to my efforts, but I think I've seen a lot more directed at Judy. I'm pretty sure some of the abuse she gets is due to the fact that she's a woman, so I may never experience quite as much. A physicist who had said something similar to her asked me "So you're the one to change social science?" or something along those lines. I was so puzzled by that mentality. What if the answer is yes? What if it's no? Is it even answerable? It seemed to be meant as "so you think you're better than us" except it was accompanied by "do your peers agree with you?" That focus on peers and what other people think was also very strange to me. We wouldn't be able to answer that question meaningfully in any way that would predict the validity or truth of someone's work at any arbitrary time point. It would be much better to just evaluate their work. Science as a cliquish peer-focused culture is obviously going to have some problems and dysfunctions compared to a scientific culture that prizes independent thought, integrity, and good epistemology. Those responses, that non-engagement with substance, just defaulting to social comparison and looking around for what other people think, seem like they might be related. 
If we iterate that and extend it, it would bring every outspoken person down, pull in every outlier, because they will always be in the minority in the beginning, for some arbitrary time period. It reminds me of the Hawaiian saying about crabs in a bucket. It looks like the leveling down process, but without a concrete object of envy. In the streets, it's just called being a hater, but I don't think we have a formal conception of it in social psychology. It might be too rare to document, or perhaps not. There are signs of malice toward achievers all over the culture. That's achievers, but I think there might be a more specific response and process regarding people in a moral domain, idealists, crusaders, outspoken reformers, and so forth. The moral domain might have particular power as an induction, relative to just achieving a lot of success. We'll see. Separately, on the issue of gender-specific phenomena, a female clinical researcher shared a story with me, as something that could be the basis of organizational research. In a large company, there was a fast-rising woman. She was very smart and very qualified, with a law degree and maybe an MBA. And she was very beautiful, which the researcher thought was the key element. When this thoroughly qualified woman rose to a top position, other women in the company attributed her rise to sleeping her way to the top, or to her beauty. Anecdotally, there seems to be a lot going on in how older women see younger women, especially if they're beautiful. (Evolutionary psychologists can easily build a narrative to explain this.) Female executives have told me that the male mentorship model (in the private sector) is different from the female mentorship model. In particular, the hypothesis is that older men see a young man as a way to leave a legacy, while older women see a younger woman as a threat. This will have to be explored empirically, but it makes sense. 
I love SPSP, the conference for the Society of Personality and Social Psychology, the flagship professional organization for social psychology.
It always messes me up, in a good way. Conferences and colloquia are pleasantly unpleasant for me. They're extremely generative. Whatever a researcher is talking about always sparks a cascade of ideas for me. This might be partly driven by the fact that when you're sitting in a talk, there is nothing else to do but think, or listen and think. It's a very focusing setting. Jon Krosnick and Lee Jussim organized a symposium on scientific integrity. Hart Blanton is one of the speakers and I'm eager to see how he unpacks this: "(1) misidentified measurement and causal models, (2) treatment of arbitrary psychological metrics as if they are meaningful..." I'm especially curious about what he means by "arbitrary" metrics. It could be something I haven't thought of, and it would be thrilling if Hart is miles ahead of me on methodological considerations. The model identification issues will be interesting, but there I'm guessing I know what he means. In any case, it's great to see people take a deep interest in methodological issues. Most of the talks will be on empirical research, which is as it should be. I've focused on methodological issues lately because 1) I think they're extremely interesting, and 2) I think I can have a bigger impact right now on the methodological side than the empirical side, given the methods I have access to. I'm on record as saying that data is sometimes overrated in our field, that data flowing from invalid methods is not going to give us much insight into human nature or behavior. I don't yet have access to the kinds of empirical methods I want to use. I like Mturk, and I use it, but there are hard constraints on what I can do with it. I've got piles of data to write up, but in general I think my effects are only about 70% likely to be true. That's not terrible, but I want to supplement it with other methods, mostly field work and organizational samples. 
I stopped using student samples a couple of years ago, because I think the probability of an effect being true, valid, or interesting falls sharply if it's based on student samples. Others will disagree, and it's unlikely to be a simple matter, but in general I think the burden is on us to show that student samples are valid, not on others to show that they're not valid. For anything beyond sensory-perception or basic cognitive processes, I'd want broader samples. There might be a dance battle at SPSP if certain people are amenable.

Latent Class Modeling: I'm pretty sure this is the future. It's a much more powerful, higher resolution method than linear correlation and its derivatives. The whole issue of frog jumping I talked about in an earlier post is eliminated by attending to different profiles of participants, which LCM does.
Non-narcissistic self-esteem: It would be extremely cool if someone built a reliable self-esteem measure that doesn't also carry narcissism. RSE is going to carry both high self-esteem and high narcissism (or narcissistic self-esteem and non-narcissistic self-esteem.) So you have to use supplemental measures, like NPI-16, to tease them apart, or cross explicit and implicit self-esteem (which isn't as valid, probably.) Kernis was heading in that general direction before he passed away. Implicit self-esteem is tough. I know that Diener and colleagues used initial letter liking as an implicit measure. I think it might have done some work in predicting life satisfaction. (Letter liking is presenting people with their own initials and capturing their attitudes, like J for me, or D. I'm not sure how much validation we have on that.) Rosenberg also bothers me a bit because its wording is dated, and probably comes off as awkward to a lot of contemporary participants. This is likely to be another source of noise. I'll probably try out some new measures. I also love Simine Vazire's method of repeating questions on personality measures, basically "Are you sure this is the right answer?" It correlates better with peer ratings, and thus seems more accurate. These two new papers in JPSP are extremely interesting:
Finkel, Eastwick, and Reis: "Best research practices in psychology: Illustrating epistemological and pragmatic considerations with the case of relationship science." In recent years, a robust movement has emerged within psychology to increase the evidentiary value of our science. This movement, which has analogs throughout the empirical sciences, is broad and diverse, but its primary emphasis has been on the reduction of statistical false positives. The present article addresses epistemological and pragmatic issues that we, as a field, must consider as we seek to maximize the scientific value of this movement. Regarding epistemology, this article contrasts the false-positives-reduction (FPR) approach with an alternative, the error balance (EB) approach, which argues that any serious consideration of optimal scientific practice must contend simultaneously with both false-positive and false-negative errors. Regarding pragmatics, the movement has devoted a great deal of attention to issues that frequently arise in laboratory experiments and one-shot survey studies, but it has devoted less attention to issues that frequently arise in intensive and/or longitudinal studies. We illustrate these epistemological and pragmatic considerations with the case of relationship science, one of the many research domains that frequently employ intensive and/or longitudinal methods. Specifically, we examine 6 research prescriptions that can help to reduce false-positive rates: preregistration, prepublication sharing of materials, postpublication sharing of data, close replication, avoiding piecemeal publication, and increasing sample size. For each, we offer concrete guidance not only regarding how researchers can improve their research practices and balance the risk of false-positive and false-negative errors, but also how the movement can capitalize upon insights from research practices within relationship science to make the movement stronger and more inclusive. 
Waytz, Hirshfield, and Tamir: "Mental simulation and the meaning of life." Mental simulation, the process of self-projection into alternate temporal, spatial, social, or hypothetical realities is a distinctively human capacity. Numerous lines of research also suggest that the tendency for mental simulation is associated with enhanced meaning. The present research tests this association specifically examining the relationship between two forms of simulation (temporal and spatial) and meaning in life. Study 1 uses neuroimaging to demonstrate that enhanced connectivity in the medial temporal lobe network, a subnetwork of the brain’s default network implicated in prospection and retrospection, correlates with self-reported meaning in life. Study 2 demonstrates that experimentally inducing people to think about the past or future versus the present enhances self-reported meaning in life, through the generation of more meaningful events. Study 3 demonstrates that experimentally inducing people to think specifically versus generally about the past or future enhances self-reported meaning in life. Study 4 turns to spatial simulation to demonstrate that experimentally inducing people to think specifically about an alternate spatial location (from the present location) increases meaning derived from this simulation compared to thinking generally about another location or specifically about one’s present location. Study 5 demonstrates that experimentally inducing people to think about an alternate spatial location versus one’s present location enhances meaning in life, through meaning derived from this simulation. Study 6 demonstrates that simply asking people to imagine completing a measure of meaning in life in an alternate location compared with asking them to do so in their present location enhances reports of meaning. This research sheds light on an important determinant of meaning in life and suggests that undirected mental simulation benefits psychological well-being. 
I've never had to think about scientific fraud until the last few months. It's been an interesting journey. I've got a paper coming soon that focuses on this issue. I'm of two minds about the need to think about it. It raises interesting ethical and epistemological questions, and I'm deeply interested in both ethics and epistemology, but it also feels parasitical, an exquisite way to distract people from their research. So on that count, part of me resents even having to think about fraud, write about fraud, or deal with people who engage in fraud.
I've talked to a number of people about the recent cases, and I want to highlight something. A couple of people seemed to think that fraud = data fabrication, that they're isomorphic. In that scenario, fraud is inherently hidden, and can only be uncovered by authorities opening up a lab freezer and pulling out the stem cells or doing forensics on someone's data. The definition of scientific fraud is itself a focus of ongoing scholarly inquiry and discussion. There are lots of definitions in the literature. I'm not aware of any that define fraud so narrowly as to restrict it to fabrication. Data fabrication, or what I sometimes call "spreadsheet fraud", is but one class of fraud. We probably tend to think of recent salient cases of fraud and treat them as the prototypical form of fraud, perhaps the only form, in a manner similar to the availability heuristic. The most famous of the recent cases in social psychology is probably that of Diederik Stapel, who admitted fabricating research. My former program chair at UNC, Larry Sanna, evidently engaged in similar misconduct, though much less information has been made publicly available there than in the Stapel case. One thing we know for sure is that he destroyed the careers of his graduate students – and my friends – before they had even begun. That is really something. Lewandowsky said this: "NASA Faked the Moon Landing—Therefore (Climate) Science Is a Hoax: An Anatomy of the Motivated Rejection of Science" ...when only three participants out of 1145 in his blog-posted web survey held those two beliefs. He also said: "Endorsement of free markets also predicted the rejection of other established scientific findings, such as the facts that HIV causes AIDS and that smoking causes lung cancer." ...when 95% and 96% of free market endorsers agreed that HIV causes AIDS, and that smoking causes lung cancer, respectively. I think that's going to have to be fraud any day of the week. 
The key element of the concept of scientific fraud – and any definition we'd find – is deception or misrepresentation. Note that the definitions commonly include falsification, in addition to and distinct from fabrication. The definition of falsification commonly includes misrepresenting the results of statistical analyses or omitting data or results such that the findings are misrepresented or the reader misled. For an example of the former element in a definition, see here. At no point did Lewandowsky et al. inform the reader that there were only three moon-climate hoax people in a sample of 1145, or that only ten people endorsed the moon hoax to begin with. At no point did the authors disclose that only 16 of 1145 people disputed that HIV causes AIDS, 11 disputed the smoking–lung cancer link, or that only 5% and 4% of free market endorsers disputed those facts. Such trivial numbers cannot be used to draw inferences or run correlations – these could be errant keystrokes, sticky keys, or a few felines. There was no data in this study to support the authors' claims. (Also, the effect proclaimed in the title is not reported in the paper.) As you can see, in the Lewandowsky case, we don't need to open up anyone's lab freezer. We don't need a committee or authority to tell us this is fraud. We don't need any further information – the numbers I gave above are not in dispute. Anyone can just look at the (stripped) data file the authors released. Our mortal eyes and brains are sufficient to validate my claims or any other claims about the data. It's just an Excel spreadsheet with some survey responses, one of the simplest datasets we'll ever see. The epistemological structure here is that Lewandowsky et al made claims A and B. I then argue that claims A and B are false by reference to their own data (and have the potential to harm millions of innocent people who would be linked to beliefs they clearly do not hold.) I point to the data, produce the numbers. 
I further argue this kind of conduct – making false claims of the sort they made – comfortably fits within the definition of scientific fraud, not just mine, but many extant definitions and almost certainly the common person's definition. I take it to be a safe assumption that the authors knew their data, had looked at it and so forth. They knew there were only three people who could possibly fit the effect they asserted in their title, meaning there was no effect. They certainly knew this for at least a year before I stumbled on this case, and they've done nothing to correct the record, nor have they retracted the paper. If anything, they've done the opposite, and Lewandowsky's explanation for his title belongs in a museum of scientific fraud. The nature of my claim is such that it is based on their claims and data – no frozen tissue samples or university officials will be relevant, logically or epistemically, to what I am saying. Again, the numbers are not in dispute. My claim rests on those numbers and the claims the authors made. There is no other evidence that is relevant or necessary to my particular argument – nothing needs to be uncovered. In that respect, this is an unusual fraud case. People might not be used to cases of this sort, with this structure, but we don't need a novel definition of fraud to enfold cases like these. Existing definitions seem quite adequate, though social psychology as a field hasn't had rigorous conversations about how fraud should be defined, nor do we have institutions or mechanisms to deal with it. I assumed we did, and I realize now that I took the wrong inference from the Stapel, Smeesters, and Hauser cases. In every one of those cases, which are about the only cases I knew of, it looked as though the host universities did an admirable job of thoroughly investigating the fraud or misconduct (I'm not clear on what the Hauser case turned out to be.) 
The Levelt Committee at the University of Tilburg did a commendable job of investigating Stapel, and produced an excellent report. I now realize that all those cases started with a member of the university community walking into an ethics office or whatever and reporting either fraud or that something wasn't right. As a social psychologist, things are starting to make a bit more sense now. It's much easier for officials and institutions to ignore e-mails from an outsider alleging that one of their researchers committed fraud than it is to ignore a member of their own community sitting in front of them. I should have seen the implications of these factors long ago. In fact, Nature has repeatedly discussed how difficult it can be to get universities to investigate fraud. That's not an excuse for institutions like the University of Western Australia or Queensland. Their officials should be energetic about investigating such cases, should have a fundamentally different orientation toward outsiders reporting any kind of misconduct. The people in those positions should be selected for their eagerness to pursue such cases, for their integrity, and their non-investment in maintaining the university's public image. Such people exist, but I think it's hard to find them if you're not looking very specifically for them. However, these institutions' behavior is not as strange as I used to think, folding in the above factors, just from a descriptive social psychology standpoint. (I never actually contacted Queensland given that they tried to hide or destroy the evidence of the 97% fraud by sending legal threat letters to the whistleblower who released said evidence. In a move I've never heard of, they even told him that he could not divulge receipt of the letters, claiming that the legal threat letters were themselves copyrighted.) I hope that clears some things up. 
There seems to be a subculture in science that carries a sense of extraordinary entitlement and privilege with respect to fraud accusations, a degree of entitlement that would not be found outside of academia. I think Australian philosopher Brian Martin is right in arguing that scientific elites have a strong vested interest in defining fraud as narrowly as possible. I think it's clear that we're going to have a conflict of interest, as a vocation, in how vigorously we want to define fraud. I think many scientists will agree with this – it's an almost trivial insight given everything we know about human nature, and this reality does not require that most scientists be fraudsters, or that even ten percent of them are. And I think it's certainly reasonable to want to err on the side of false negatives, rather than false positives (this assumes that our options truly map to differentiable rates of false negatives and false positives – I'm not sure that's true.) In the Lewandowsky case, some dispute might come down to a researcher's right to say X predicts Y1 when there is no one at Y1, but there is a Pearson correlation between X and Y with a sufficiently low p-value. In this case, I think it's pretty clear-cut because the HIV-AIDS variable was an opposite variable – there were two levels of disagreement and two levels of agreement. Anytime we have that kind of scale, where one side is the substantive opposite of the other, we know that we can get a positive or negative correlation even if scores are clustered exclusively on one side of the scale. For example, we can easily have a negative correlation between free market endorsement and an HIV-AIDS item even if no one disputes that HIV causes AIDS. A negative correlation does not at all imply that free market views predict rejection of that basic scientific fact – not to anyone who is familiar with the formula for correlation. Neither positive nor negative correlations imply any particular placement on the scale. 
This phenomenon is made possible by having a scale with more than two points, and becomes easier the richer the scale (e.g. a seven-point scale.) This is basic stats. And disagreeing with me on our license to link people to views they do not hold simply because we have an inferential statistic driven by variance on the other side of the scale will do nothing to justify the title, so I think you'd have your work cut out for you. You'd also have to argue that we can make any inferences about any of these variables with data from a survey posted on environmentalist blogs, open to anyone in the world including fakers, and where the authors either stripped or never collected age, gender, and nationality. There's a core logic problem with any notion that fraud should only be defined as data fabrication, that nothing we say can be fraud, that fraud only relates to numbers and not words. If we can say anything we want, proclaim any effect in our titles and papers regardless of whether we have data to back them up, then no one would need to fabricate data. If words can't be fraud, then all future fraud can be shifted from data to words. My outrage in this case – and it really is outrage – is that millions of people were falsely linked to beliefs that could be incredibly damaging to them. We can never take it back. It's out there now, because one of our own put it out there. It could harm people for years to come. One day Okcupid and similar services might be able to quantify the harm – would you want to date someone, to have sex with someone, who disputes that HIV causes AIDS? Is there any doubt that having read in the NYT that free market endorsement predicts rejection of the HIV-AIDS link might bias someone – even non-consciously – against the conservatives, libertarians, and economists they see on a dating site or meet in day-to-day life? This case is probably the most vivid consequence I've seen of the political homogeneity and bias of the field. 
I doubt it will ever affect me personally, that anyone would think someone with a PhD and several books to his name might dispute that HIV causes AIDS (in some future scenario where I found myself single.) But it could clearly affect millions of others. I've never understood social psychology to be a vehicle of mass harm. All our vaunted research ethics and IRBs count for nothing if we can't manage to police people, editors, journals, and bodies like APS when they plant false and unbelievably harmful links that could impact millions of innocent people. That breaks my heart. It really breaks my heart that anyone could do this, that I could be associated with a field that does things like this. It will further break my heart if I come to find that other social psychologists' hearts don't bleed upon processing what happened here.

Fake Rolexes

A couple of people have e-mailed me trying to get me to accuse Cook or Lewandowsky of mundane economic/criminal fraud, perhaps to expose me to a lawsuit. One of these people identified himself as Brendan Montague, a person who has a reputation as some sort of environmentalist operative (try Google.) In both cases, these people, having failed to hook me, have argued that we simply cannot use the concept of scientific fraud because any use of the word fraud must mean economic criminal fraud, like selling fake Rolexes. This is easily the most specious, witless, and sinister argument I've ever heard. The idea that we simply cannot speak of such a well-established, ancient concept as scientific fraud is the only use I have ever had for the word execrable. Scientific fraud is, as I mentioned, a topic of explicit scholarship and inquiry. There are countless journal articles contributing definitions, examining causes, estimating rates, and so on. There are countless cases of it in the news, all over the world. When we speak of scientific fraud, no one thinks we're talking about selling someone a pound of beef that weighs eleven ounces. 
Not being able to speak of scientific fraud is tantamount to not being able to speak, and I think we might be facing a rather broad assault on freedom of speech from the modern left. Concepts like "hate speech" and "offensive" are thrown all over the place, and I'm not sure how they're defined, whether their application consistently fits any coherent definition, or whether they are categorically destructive concepts to have in a theory of free speech. Note that the concept of journalistic fraud is very similar to scientific fraud, well-established, and widely used. When the story broke that Jayson Blair had fabricated all those stories at the New York Times, no one would have taken the topic to be economic criminal fraud. As far as I recall, no one was talking about criminal charges at all. Scientists and journalists are somewhat privileged in that respect – fraud generally has no criminal justice implications. I don't know much about the Michael Mann lawsuit, but if this is an actual plaintiff's argument in that case, we might want to pay more attention to the case. It seems revolutionary that we'd be banned from using well-established concepts if they happen to have a different use in criminal law. I'm not sure we've ever heard such an argument. Perhaps this is why all those groups across the political spectrum filed amicus briefs siding with the defendants (I forget who all the defendants are.) Gavin Schmidt made a ridiculous argument on Twitter in support of Mann, something to the effect that a charge of (scientific) fraud is defamation per se, which rests on the assumption that we're talking about criminal fraud, an assumption we can dismiss with prejudice. It also rests on the assumption that any such allegation is false, a core feature of defamation. It's so self-serving for scientists to try to silence anyone who would call out scientific fraud, to maintain some sort of aristocratic privilege. 
(Tone edits above: I'm struggling to find the appropriate tone with some of these posts. It disappoints me that I still drop to words like stupid and idiotic. There are uses for those words, but I don't think they accomplish much. The idea of not being able to use concepts like scientific or journalistic fraud is so ludicrous and dangerous that I get real snappy. Nothing bothers me more than these assaults on freedom of speech. Without that, I'm not sure what we'd have left. Also edited the "If this is a plaintiff's argument, Michael Mann is a threat to freedom of speech..." clause, since someone rightly pointed out it was based on a conditional, and we probably shouldn't make sweeping statements of that sort based on conditionals. I think I have a tone problem. On the one hand, I think these issues merit a forceful tone, but on the other hand, "forceful" is a continuum, and sometimes my tone seems excessive even to me. I'll have to think about this some more.) Moreover, I think this new theory of non-contextual word use is incredibly dark. Can you imagine a reality where we were not allowed to speak of scientific fraud unless some authority told us we could? (Or journalistic fraud.) We know that universities and these "fact-finding" bodies have profound conflicts of interest. Ask Nature. It's so strange to place fraud investigations in the hands of such conflicted parties – we don't see that kind of rigged setup outside of academia. We know that we lack the institutions, mechanisms, and anti-corruption measures we would need to reliably police fraud. We don't even have independent investigators. (FYI, I used to help companies comply with the Sarbanes-Oxley Act.) We know that the scientific community is likely to have some conflicts of interest here. We know that academia in our era is intensely politicized, and that one ideology dominates and discriminates. 
Model in your head what would happen to the public's trust in science if the term scientific fraud were legally removed from our vocabulary, where fraud-fighters were silenced. If you think the public doesn't trust science now, I think you might want to get a stopwatch to see how long it takes for taxpayers to defund science in that scenario. Imagine we were talking about plagiarism, and we said someone stole someone else's work. Would Schmidt and other climate activists say that we must be accusing the plagiarist of grand theft auto? What other words are we not allowed to use? What other words are used to mean different things in other walks of life? Should we be banned from using any word that is used in criminal law, no matter how well-established it is in our non-criminal-law domain? This is as intellectually bankrupt as anything we're going to see in a normal human lifespan. These people definitely need to check their privilege. (I'm not sure how to adjudicate falsity where a political commentator is assessing a scientist's work, as in the Mann case, especially given that the scientist has clearly done some sloppy work. There might be issues of satire or similar license. There might be some presumption of the writer's right to interpret and judge scientists' work, some range of reasonable and expected divergence. No one should be beholden to investigations by institutions that have such obvious conflicts of interest, or really to any authorities or powers that be. It's demonstrably the case that authorities cannot be consistently relied upon to investigate fraud. People should be free to see, think, and speak for themselves. I don't know the details of what Steyn said, though I remember something about the word "bogus", in addition to "fraud". I can't imagine a universe, certainly not one that features the First Amendment, where writers aren't allowed to assess someone's work as bogus. 
I don't think Mann has ever been accused of doing anything remotely like what Lewandowsky did (I'm not sure anyone has ever done what Lewandowsky did), but I haven't followed his career. He's the last climate scientist I would trust, given his awful behavior, the way he treats Judith Curry, and how much he politicizes climate science.)

In Conclusion

People seem to be worried about my career, discrimination, and so forth. I know full well that all of this is extremely poorly timed with respect to my academic career: calling out fraud, advocating a more credible and scientifically consensual definition of fraud, or even just using the word "fraud" will lead some people to discriminate against me in the job market. I know it could kill my academic career. But if people seriously think that this would deter me, or that it ought to deter me, I think they have failed to build in themselves the classic scientific virtues. I also think they're heartless with respect to the potential victims of this garbage. It would never occur to me to sacrifice basic principles of integrity, ethics, or beneficence in order to get an academic job, or tenure. The thought is alien to me. I don't know how to live like that. I'm going to be discriminated against anyway: 37.5% of social psychologists explicitly stated that they would discriminate against a conservative job candidate (maybe libertarians would face less discrimination; I'm not sure), and 44.1% said they thought their departmental colleagues would discriminate. These numbers might be catastrophic for the career prospects of anyone like me. They might be more than sufficient, given some math on opportunities, iterations, and network effects, to lock people out. They might, in effect, confer herd immunity for a politically biased and tribal field. The discrimination will touch people when they submit to journals, apply for grants, and so forth. The math just might be catastrophic.
Perhaps there's literature on the math of discrimination and the thresholds necessary to confer herd immunity. My timescale seems to be different from the hecklers'. I don't care about the short term. I'm happy to be right fifty or a hundred years from now, in terms of what other people think, and I think it's pretty obvious that these cases are only going to look worse and worse as time passes. Still, I think it's unrealistic to expect that these cases will turn out well for the researchers even in the short term. I used to help companies comply with Sarbanes-Oxley; I know what the standards are in the world at large, certainly in America at large. Presumably, calling out political bias as an issue will spark even more discrimination, as will disclosing past experiences of discrimination, such as the fact that the University of Arizona social psychology program denied me admission after probing my views of Jimmy Carter, of all people. I decided to disclose that after hearing some horror stories and seeing some new data. The discrimination in social psychology is a disgrace, and these "scholars" don't seem to have any grasp of the complexity of human thought, the range of perspectives a thinker might take or even create. They seem to have no sense of time and place, don't understand our small place in the grand sweep, and give far too much primacy to the political identities of our day. People who would discriminate against someone for their momentary views of a distant President do not understand the nature of the enterprise. Add my anti-fraud activism against old, tenured white men to the basket, and the fact that I'm a first-generation American, first-generation college graduate, English-as-a-second-language Mexican-American from a copper-mining town with some nice pubs might not save me. I am not unaware of this fact. But I'll be a social scientist regardless of whether American academia chooses to discriminate against me again.
It will look terrible for me to be out there, locked out of academic social psychology or philosophy of science, as the publications and research pile up and up and up. It will be ridiculous, and it will make social psychology and academia look absurdly unethical and discriminatory. Be that as it may, it has never been easier to be a social scientist outside of academia, what with the internet, crowdfunding of research, the fact that social science has never been expensive, and so on. I'm not going to look the other way on this garbage, not when we're talking about "findings" that can be so harmful to so many people. I've read the books about Enron, and I'm, well, a social psychologist. I know what bias does, what tribalism does. Whatever happens, happens. This is simply the universe as I find it.
José L. Duarte
Social Psychology, Scientific Validity, and Research Methods
February 2019