2014 was the hottest year on record by some hundredths of a degree. It was not significantly hotter than 2005 or 2010 – see the Berkeley Earth (BEST) lab for details. Global surface warming paused or slowed down after 1998; there is some dispute about whether to call it a pause or a slowdown. We'll treat it as a pause or plateau, because that is the least favorable assumption for the point I make below (treating it as slow warming makes the points below even stronger).
The "hottest year on record" got lots of hype, and people made false inferences about it with respect to global warming, the pause, etc. I saw deeply misleading stories in the New York Times and the Washington Post, which worries me. They're supposed to be the best. When you have a rise in a variable, followed by a plateau, any given data point during the plateau has a decent chance of being the highest on on record. You're on top of a rise. Think of your weight in your 20s – any given year has a decent chance of being your heaviest on record, up to that point, since you've spent most of your life growing and gaining weight and you're now sitting on top of that rise. The probability of any given year being the hottest on record is 1/n, where n is the number of years in the plateau. (That calculation assumes that variance during the plateau is within the margin of the elbow of the rise, an assumption that is satisfied if you look at global temp data.) Justin Gillis wrote an incredibly misleading article at the New York Times. He does that a lot, and I think he's ethically obligated to disclose his political ideology to his readers when he writes about politically-charged topics. In general, it's irresponsible for science writers who are also environmentalists to conceal this from readers when they write about climate science. This is especially true if they relate to environmentalism as a religion, as the recently resigned IPCC chair did – he should have told us it was his religion before he ever took the job. Gillis seems unaware of what it means to be on top of a rise, and that any given year has a decent chance of being the hottest on record. We can also blame climate scientists he quoted. Stefan Rahmstorf said: “However, the fact that the warmest years on record are 2014, 2010 and 2005 clearly indicates that global warming has not ‘stopped in 1998,’ as some like to falsely claim.” The fact that 2014, 2010, and 2005 are the hottest years on record is another way of describing a pause or plateau in a universe where variables have variability. It is definitely not evidence of continued warming, and is fully consistent with warming having stopped. That's what "stopped" looks like. If we peaked at 1998 or whenever, and see random variance from that year, several of the subsequent years will be the hottest on record. (That these years were the hottest on record is also consistent with warming, but we'd need more info to know if significant warming has occurred.) Michael Mann said: “It is exceptionally unlikely that we would be witnessing a record year of warmth, during a record-warm decade, during a several decades-long period of warmth..." What is going on here? Why would a scientist ever say something like this? It is exceptionally likely that we'd be witnessing a record year of warmth during a record-warm decade. This is precisely when we'd expect to see it. This is also another way of describing a pause or plateau. Gavin Schmidt said: "Why do we keep getting so many record-warm years?” Because the earth warmed. If the earth warms and it does not subsequently cool, we will get a number of record-warm years. This is another way of describing a plateau, pause, or a question on a high school statistics test. This worries me. What the hell are these people talking about? Why don't they know basic probability? Why is no one pointing out that when you're on top of a rise, any given year has a decent chance of being the hottest on record? This is basic stuff. 
There was also a lot of nonsense in the media about a 1 in 27 million chance that 2014 was natural. Peter Gleick, a propagandist employed by the same bizarre journal that published the 97% fraud, even tweeted something to this effect. But years are not randomly drawn from a hat, and yearly temperature averages are not independent data points. It's not meaningful to compare 2014 to hundreds or thousands of other years and calculate odds that way. 2014 followed 2013, and 2012, and so forth. Its temperatures are deeply influenced and constrained by the state of the earth's climate in prior years. It's not as though the earth hits a reset button as the clock strikes midnight in Times Square. (The simulation sketch after the list below illustrates how much persistence alone changes the odds of a recent record.)

None of this says anything about future warming or model projections – my point is that, as a basic mathematical and statistical fact, there's a decent chance any given year will be the hottest on record even if we assume no actual long-term warming. Gillis seemed to think the proposition "global warming has paused" is contested by the observation that 2014 was the hottest year on record by hundredths of a degree. That is simply incorrect. There's no logical intersection between the two claims.

This stuff worries me, and I get grumpy about it, because this is simple math and probability. Our civilization seems extremely vulnerable to misinformation and innumeracy. It makes me uneasy that we can't get basic stats right in 2015. I feel like we're going to do something stupid, something harmful – not necessarily on climate change, but something. If we can't get basic logic, probability, and statistical awareness from the New York Times or the Washington Post, I don't know where the public is going to get it. They're supposed to be the best; their science writers are supposed to be the best. They have an ethical responsibility not to mislead the public, and when they stumble, they have an ethical obligation to correct their misinformation. I sent those newspapers a fuller version of this a month ago, when it was fresh, and they wouldn't publish it. I'm sure mine was one of a sea of submissions – what's important is that they publish someone who knows basic math and statistics, who won't make such big errors. They need to be truthful and valid in how they report science.

Alternative ways of understanding or expressing the above:

-- Having a hottest year on record around now is consistent with both a pause and actual long-term warming. A pause after decades of warming will include some number of hottest years on record.

-- Variability around a flat line means that some of the data points will be above that line. If the flat line appears after a long rise, those above-line data points will be the highest on record.
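On the independence point, here's a minimal simulation sketch in Python. The persistence parameter and series length are illustrative assumptions, not estimates from climate data; the point is only qualitative. When each year is strongly tied to the previous one, the most recent year ends up being the hottest on record far more often than an independent-draws calculation would suggest, with no trend in the process at all:

```python
import numpy as np

rng = np.random.default_rng(0)

YEARS = 135       # roughly the length of the instrumental record (illustrative)
TRIALS = 50_000
PHI = 0.9         # assumed AR(1) persistence: each year carries over most of the last

def final_year_record_rate(phi):
    """Fraction of simulated series whose final year is the hottest on record."""
    noise = rng.normal(size=(TRIALS, YEARS))
    x = np.zeros((TRIALS, YEARS))            # year 0 starts at zero
    for t in range(1, YEARS):
        x[:, t] = phi * x[:, t - 1] + noise[:, t]   # no trend, just persistence
    return np.mean(x[:, -1] == x.max(axis=1))

print("independent years:", final_year_record_rate(0.0))  # ~1/135, about 0.007
print("persistent years :", final_year_record_rate(PHI))  # several times higher
```

A calculation that treats years like lottery balls gets the odds badly wrong once the years are allowed to remember each other, which is exactly why the 1-in-27-million figure is meaningless.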
In a comment on my post on Significance, I discovered that our rate of published Type-1 error in science would probably be higher if humans had eight fingers instead of ten. Type-1 error is when we wrongly reject the null hypothesis – when our studies seem to give us evidence of an effect or link that doesn't actually exist in the population at large. It's a false positive, a finding that isn't a true finding. Setting our threshold of statistical significance at p = 0.05 is one way we try to reduce Type-1 error.
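A quick way to see what the threshold does – a minimal sketch in Python using numpy and scipy (the sample sizes and number of simulated studies are arbitrary). It simulates many studies in which the null is true, so every "significant" result is a false positive by construction:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Simulate many studies where the null hypothesis is true: both groups are
# drawn from the same population, so any "significant" result is a false positive.
STUDIES, N = 10_000, 30
false_positives = 0
for _ in range(STUDIES):
    a = rng.normal(size=N)
    b = rng.normal(size=N)
    if ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(false_positives / STUDIES)  # ~0.05 -- the threshold is the Type-1 rate
```

Under a true null, p-values are uniformly distributed, so whatever threshold we pick is the long-run false-positive rate we accept. That's why the choice of threshold matters in what follows.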
People have had trouble understanding that comment – it's too brief and skips some steps. I'll lay it out more clearly here.

In the post on Significance, I pointed out that one reason we use the p = 0.05 threshold for statistical significance in many of our tests is that humans have ten fingers. Because we have ten fingers, we use a base-10 number system, and we tend to prefer numbers that are multiples of ten or five for many purposes. It's probably intuitive to most of you that scientists would have been unlikely to choose 0.04 or 0.061 as our significance threshold. We needed something in that ballpark, something sufficiently stringent, and it's not surprising that we chose 0.05 – nor would it be surprising if we had chosen 0.10 or 0.01. We see those as nice, clean, "round" numbers, unlike 0.03 or 0.04. I noted that if we had eight fingers, and thus had ultimately settled on a base-8 number system, we might use 0.04 as our threshold for significance. Holding everything else about human nature and psychology constant, it seems likely that in that scenario we'd prefer numbers that are multiples of eight and four, just as we prefer tens and fives in our universe.

In a comment, Jonathan Jones pointed out that 0.04 in base-8 is actually 0.0625 in base-10. What does that mean? It means that the numeral – the string of symbols 0.04 – would represent a different value in a base-8 number system than it does in base-10. When people in a base-8 civilization write or say 0.04, it represents a different quantity of stuff (or of probability) than it does when people in our society write or say 0.04.

It's difficult to think in different number bases, because we're so conditioned to certain symbols corresponding to certain values. It's similar to the Stroop task, where you might have to identify the color of the word green when it's displayed in orange-colored text (either naming the written word or the displayed color, depending on the task). To understand different bases, it helps to distinguish a numerical value from the symbols we use to represent it. You can get this just by noting that we could use any symbol we wanted to represent the number 4 – we could treat @ as 4 – but we long ago settled on the symbol 4.

A base-10 system has ten numerical digits, ten unique graphemes that represent the first ten integers (counting from zero): 0 1 2 3 4 5 6 7 8 9. A grapheme is an elemental visual symbol of a written language, what in computing we might call a character (see Unicode). Every letter you're currently reading is a grapheme – the letters of the alphabet are graphemes, and so are the symbols for numerical digits. (Note that the word digit comes from the Latin digitus, meaning finger or toe, which helps illustrate how our number system is based on our finger count.) There is no single digit or grapheme for the value ten, because we've used one of the ten digits to represent zero. Once we hit ten, we need multiple digits to represent numbers.

A base-8 system has eight numerical digits, eight unique graphemes that represent the first eight integers (counting from zero): 0 through 7. Those single-digit integers – 0 1 2 3 4 5 6 7 – represent the same values in base-8 and base-10. Things change once we get past the number 7, into multiple digits. The numerals 8 and 9 do not exist in a base-8 system. Once we get past 7, we need multiple digits to represent numbers, just as in base-10 we need multiple digits to go higher than 9.
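The integer side of this is easy to check with a quick sketch in Python, using the built-in oct() and int() functions (the sample values are arbitrary):

```python
# oct() writes an integer in base-8; int(s, 8) reads a base-8 string back.
for value in (7, 8, 9, 64):
    print(value, "is written in base-8 as", oct(value)[2:])  # strip the "0o" prefix
# -> 7, 10, 11, 100

assert int("10", 8) == 8 and int("11", 8) == 9
```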
So in base-8, the value 8 is represented as 10, and the value 9 as 11. And 0.04 in base-8 represents the value 0.0625 in base-10. Why? How do we convert fractional values from one system to the other?

Let's start with 0.1. In positional notation, the first digit after the point counts in units of one-bth, where b is the base. So in our base-10 system, 0.1 means one-tenth; in base-8, 0.1 means one-eighth. Now try 0.01: the second position counts in units of one-b²th. So in base-10, 0.01 means one-hundredth (1/10²); in base-8, it means one-sixty-fourth (1/8²). Therefore, in base-8, 0.04 is four sixty-fourths, or 4/64, which is 0.0625 in base-10. (There's an exact code version of this arithmetic at the end of the post.)

Since a p-value of 0.0625 or lower is easier to obtain (and literally more likely) than a p-value of 0.05 or lower, more Type-1 errors would be published. If we had eight fingers, it's quite plausible our threshold would be 0.0625 (in base-10 terms), which we would call 0.04, and we'd have a slightly more error-prone scientific culture. That's interesting. This assumes the same scientific ecology, where significant findings are favored over marginally significant and non-significant findings – it assumes we're holding everything else constant, which seems like the right way to frame it. It also assumes that a different objective threshold wouldn't impact the quality of the research, or have other tricky dynamic effects. A lot of the rules of thumb and threshold values we use in our civilization are ultimately grounded in the fact that we have ten fingers.

This reminds me of the book I'll write on evolutionary psychology and exobiology in ten years. I think it would be very fruitful if evolutionary psychologists (and biologists) zoomed back a bit and framed human evolution by thinking about how various background factors would differ on other planets, and what the implications would be. It goes much further than the number of fingers we have. For example, think about fire and its many impacts and implications, and consider that fire will be atmospherically impossible on many planets (even some in habitable zones), and how that would affect the course of life compared to Earth – the kinds of organisms that are possible and not possible, and the consequences ten or twenty steps deep into the model.
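Coming back to the threshold arithmetic for a moment: here's the conversion done exactly, a minimal sketch in Python using the standard-library fractions module (the helper function is my own, just for illustration):

```python
from fractions import Fraction

def fractional_digits_to_value(digits: str, base: int) -> Fraction:
    """Exact value of the digits after the point, e.g. '04' in base-8."""
    return sum(Fraction(int(d), base ** (i + 1)) for i, d in enumerate(digits))

threshold_base8 = fractional_digits_to_value("04", 8)   # 0/8 + 4/64
print(threshold_base8, float(threshold_base8))          # 1/16 0.0625

# Under a true null, p-values are uniform on [0, 1], so the long-run
# Type-1 error rate equals the threshold itself:
print(float(threshold_base8) / 0.05)                    # 1.25x the false positives
```

So a base-8 civilization with our psychology, rounding to its own "clean" numbers, would accept roughly 25% more false positives than we do, all else held constant.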