Investigation: Ultra-Processed Diets by Hall et al. (2019)

[This is Part One of a two-part analysis in collaboration with Nick Brown. Part Two is on Nick’s blog.]

I. 

Recently we came across a 2019 paper called Ultra-Processed Diets Cause Excess Calorie Intake and Weight Gain: An Inpatient Randomized Controlled Trial of Ad Libitum Food Intake, by Kevin D. Hall and colleagues. 

Briefly, Hall et al. (2019) is a metabolic ward study on the effects of “ultra-processed” foods on energy intake and weight gain. The participants were 20 adults with an average age of 31.2 years. They had a mean BMI of 27, so on average participants were slightly overweight, but not obese.

Participants were admitted to the metabolic ward and randomly assigned to one of two conditions. They either ate an ultra-processed diet for two weeks, immediately followed by an unprocessed diet for two weeks — or they ate an unprocessed diet for two weeks, immediately followed by an ultra-processed diet for two weeks. The study was ad libitum, so whether they were eating an unprocessed or an ultra-processed diet, participants were always allowed to eat as much as they wanted — in the words of the authors, “subjects were instructed to consume as much or as little as desired.”

The authors found that people ate more on the ultra-processed diet and gained a small amount of weight, compared to the unprocessed diet, where they ate less and lost a small amount of weight.

We’re not in the habit of re-analyzing published papers, but we decided to take a closer look at this study because a couple of things in the abstract struck us as surprising. Weight change is one main outcome of interest for this study, and several unusual things about this measure stand out immediately. First, the two groups report the same amount of change in body weight, the only difference being that one group gained weight and the other group lost it. In the ultra-processed diet group, people gained 0.9 ± 0.3 kg (p = 0.009), and in the unprocessed diet group, people lost 0.9 ± 0.3 kg (p = 0.007). (Those ± values are standard errors of the mean.) It’s pretty unlikely for the means of both groups to be identical, and it’s very unlikely that both the means and the standard errors would be identical.

It’s not impossible for these numbers to be the same (and in fact, they are not precisely equal in the raw data, though they are still pretty close), especially given that they’re rounded to one decimal place. But it is weird. We ran some simple simulations which suggest that this should only happen about 5% of the time — but this is assuming that the means and SDs of the two groups are both identical in the population, which itself is very unlikely.
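For the curious, here is roughly the kind of simulation we mean. This is a minimal sketch in Python; the population values (a true effect of ±0.9 kg and an SD chosen to give a standard error near 0.3 kg) are assumptions for illustration, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_matching(n=20, true_effect=0.9, sd=1.35, reps=100_000):
    """How often do two independent samples of n=20 produce the same rounded
    mean change (with opposite signs) AND the same rounded standard error?"""
    hits = 0
    for _ in range(reps):
        gain = rng.normal(true_effect, sd, n)    # simulated ultra-processed condition
        loss = rng.normal(-true_effect, sd, n)   # simulated unprocessed condition
        same_mean = round(gain.mean(), 1) == -round(loss.mean(), 1)
        same_sem = (round(gain.std(ddof=1) / np.sqrt(n), 1)
                    == round(loss.std(ddof=1) / np.sqrt(n), 1))
        hits += same_mean and same_sem
    return hits / reps

print(fraction_matching())  # roughly 0.05 under these assumptions
```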

Another test of interest reported in the abstract also seemed odd. They report that weight changes were highly correlated with energy intake (r = 0.8, p < 0.0001). This correlation coefficient struck us as surprising, because it’s pretty huge. There are very few measures that are correlated with one another at 0.8 — these are the types of correlations we tend to see between identical twins, or repeated measurements of the same person. As an example, in identical twins, BMI is correlated at about r = 0.8, and height at about r = 0.9.

We know that these points are pretty ticky-tacky stuff. By themselves, they’re not much, but they bothered us. Something already seemed weird, and we hadn’t even gotten past the abstract.

Even the authors found these results surprising, and have said so on a couple of occasions. As a result, we decided to take a closer look. Fortunately for us, the authors have followed best practices and all their data is available on the OSF.

To conduct this analysis, we teamed up with Nick Brown, with additional help from James Heathers. We focused on one particular dependent variable of this study, weight change, while Nick took a broader look at several elements of the paper.

II. 

Because we were most interested in weight change, we decided to begin by taking a close look at the file “deltabw”. In mathematics, delta usually means “change” or “the change in”, and “bw” here stands for “body weight”, so this title indicates that the file contains data for the change in participants’ body weights. On the OSF this is in the form of a SAS .sas7bdat file, but we converted it to a .csv file, which is a little easier to work with.
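If you want to follow along, pandas can read the SAS file directly; the file name below is the one used on the OSF, and the path is wherever you saved your copy.

```python
import pandas as pd

# Read the SAS data file downloaded from the OSF and save a CSV copy
deltabw = pd.read_sas("deltabw.sas7bdat")
deltabw.to_csv("deltabw.csv", index=False)
```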

Here’s a screenshot of what the deltabw file looks like:

In this spreadsheet, each row tells us about the weight for one participant on one day of the 4-week-long study. These daily body weight measurements were performed at 6am each morning, so we have one row for every day. 

Let’s also orient you to the columns. “StudyID” is the ID for each participant. Here we can see that in this screenshot we are looking just at participant ADL001, or participant 01 for short. The “Period” variable tells us whether the participant was eating an ultra-processed (PROC) or an unprocessed (UNPROC) diet on that day. Here we can see that participant 01 was part of the group who had an unprocessed diet for the first two weeks, before switching to the ultra-processed diet for the last two weeks. “Day” tells us which day in the 28-day study the measurement is from. Here we show only the first 20 days for participant 01. 

“BW” is the main variable of interest, as it is the participant’s measured weight, in kilograms, for that day of the study. “DayInPeriod” tells us which day they are on for that particular diet. Each participant goes 14 days on one diet then begins day 1 on the other diet. “BaseBW” is just their weight for day 1 on that period. Participant 01 was 94.87 kg on day one of the unprocessed diet, so this column holds that value as long as they’re on that diet. “DeltaBW” is the difference between their weight on that day and the weight they were at the beginning of that period. For example, participant 01 weighed 94.87 kg on day one and 94.07 kg on day nine, so the DeltaBW value for day nine is -0.80.

Finally, “DeltaDaily” is a variable that we added, which is just a simple calculation of how much the participant’s weight changed each day. If someone weighed 82.85 kg yesterday and they weigh 82.95 kg today, the DeltaDaily would be 0.10, because they gained 0.10 kg in the last 24 hours.
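In code, this added column is just a within-participant difference of consecutive daily weights. Here is a sketch using the column names visible in the screenshot:

```python
# One row per participant per day; diff() within each participant gives the
# day-to-day change in body weight
deltabw = deltabw.sort_values(["StudyID", "Day"])
deltabw["DeltaDaily"] = deltabw.groupby("StudyID")["BW"].diff().round(2)
```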

To begin with, we were able to replicate the authors’ main findings. When we don’t round to one decimal place, we see that participants on the ultra-processed diet gained an average of 0.9380 (± 0.3219) kg, and participants on the unprocessed diet lost an average of 0.9085 (± 0.3006) kg. That’s only a difference of 0.0295 kg in absolute values in the means, and 0.0213 kg for the standard errors, which we still find quite surprising. Note that this is different from the concern about standard errors raised by Drs. Mackerras and Blizzard. Many of the standard errors in this paper come from GLM analysis, which assumes homogeneity of variances and often leads to identical standard errors. But these are independently calculated standard errors of the mean for each condition, so it is still somewhat surprising that they are so similar (though not identical).  
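One way to get numbers like these from the deltabw file is to take each participant’s DeltaBW on the last day of each diet period and summarize by condition. This is a sketch, not necessarily the exact calculation behind the figures above:

```python
# Weight change over each two-week period: DeltaBW on day 14 of the period,
# summarized by diet condition (PROC vs. UNPROC)
final_day = deltabw[deltabw["DayInPeriod"] == 14]
print(final_day.groupby("Period")["DeltaBW"].agg(["mean", "sem", "count"]))
```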

On average these participants gained and lost impressive, but not shocking amounts of weight. A few of the participants, however, saw weight loss that was very concerning. One woman lost 4.3 kg in 14 days which, to quote Nick Brown, “is what I would expect if she had dysentery” (evocative though perhaps a little excessive). In fact, according to the data, she lost 2.39 kg in the first five days alone. We also notice that this patient was only 67.12 kg (about 148 lbs) to begin with, so such a huge loss is proportionally even more concerning. This is the most extreme case, of course, but not the only case of such intense weight change over such a short period.

The article tells us that participants were weighed on a Welch Allyn Scale-Tronix 5702 scale, which has a resolution of 0.1 lb or 100 grams (0.1 kg). This means it should only display data to one decimal place. Here’s the manufacturer’s specification sheet for that model. But participant weights in the file deltabw are all reported to two decimal places; that is, with a precision of 0.01 kg, as you can clearly see from the screenshot above. Of the 560 weight readings in the data file, only 55 end in zero. It is not clear how this is possible, since the scale apparently doesn’t display this much precision. 
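That count is easy to check directly by looking at the digit in the 0.01 place of each recorded weight:

```python
# Digit in the 0.01 place of each recorded body weight
hundredths = (deltabw["BW"] * 100).round().astype(int) % 10
print((hundredths == 0).sum(), "of", len(hundredths), "readings end in 0")
```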

To confirm this, we wrote to Welch Allyn’s customer support department, who confirmed that the model 5702 has 0.1 kg resolution.

We also considered the possibility that the researchers measured people’s weight in pounds and then converted to kilograms, in order to use the scale’s better precision of 0.1 pounds (45.4 grams) rather than 100 grams. However, in this case, one would expect to see that all of the changes in weight were multiples of (approximately) 0.045 kg, which is not what we observe.
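A quick way to see this is to tabulate the most common daily changes. Under the pound-conversion hypothesis they should cluster near multiples of roughly 0.045 kg (0.00, 0.05, 0.09, 0.14, …); in the data they cluster at multiples of 0.10 kg instead. A sketch:

```python
# Most common absolute daily changes; under the pounds hypothesis these should
# sit near multiples of 0.0454 kg rather than multiples of 0.10 kg
changes = deltabw["DeltaDaily"].dropna().abs()
print(changes.value_counts().head(10))
```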

III.

As we look closer at the numbers, things get even more confusing. 

As we noted, Hall et al. report participant weight to two decimal places in kilograms for every participant on every day. Kilograms to two decimal places should be pretty sensitive, but there are many cases where the exact same weight appears two or even three times in a row. For example, participant 21 is listed as having a weight of exactly 59.32 kg on days 12, 13, and 14, participant 13 is listed as having a weight of exactly 96.43 kg on days 10, 11, and 12, and participant 06 is listed as having a weight of exactly 49.54 kg on days 23, 24, and 25. 

Having the same weight for two or even three days in a row may not seem that strange, but it is very remarkable when the measurement is in kilograms precise to two decimal places. After all, 0.01 kg (10 grams) is not very much weight at all. A standard egg weighs about 0.05 kg (50 grams). A shot of liquor weighs a little less than that, usually a bit more than 0.03 kg (30 grams). A tablespoon of water is about 0.015 kg (15 grams). This suggests that people’s weights are varying by less than the weight of a tablespoon of water over the course of entire days, and sometimes over multiple days. This uncanny precision seems even more unusual when we note that body weight measurements were taken at 6 am every morning “after the first void”, which suggests that participants’ bodily functions were precise to 0.01 kg on certain days as well. 

The case of participant 06 is particularly confusing, as 49.54 kg is exactly one kilogram less, to two decimal places, than the baseline for this participant’s weight when they started, 50.54 kg. Furthermore, in the “unprocessed” period, participant 06 only ever seems to lose or gain weight in full increments of 0.10 kilograms. 

We see similar patterns in the data from other participants. Let’s take a look at the DeltaDaily variable. As a reminder, this variable is just the difference between a person’s weight on one day and the day before. These are nothing more than daily changes in weight. 

Because these numbers are calculated from the difference between two weight measurements, both of which are reported to two decimal places of accuracy, these numbers should have two places of accuracy as well. But surprisingly, we see that many of these weight changes are in full increments of 0.10.

Take a look at the histograms below. The top histogram is the distribution of weight changes by day. For example, a person might gain 0.10 kg between days 15 and 16, and that would be one of the observations in this histogram. 

You’ll see that these data have an extremely unnatural hair-comb pattern of spikes, with only a few observations in between. This is because the vast majority (~71%) of the weight changes are in exact multiples of 0.10, despite the fact that weights and weight changes are reported to two decimal places. That is to say, participants’ weights usually changed in increments like 0.20 kg, -0.10 kg, or 0.40 kg, and almost never in increments like -0.03 kg, 0.12 kg, or 0.28 kg. 

For comparison, on the bottom is a sample from a simulated normal distribution with identical n, mean, and standard deviation. You’ll see that there is no hair-comb pattern for these data.
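Here is a sketch of that comparison: the share of daily changes that land on exact multiples of 0.10 kg, in the real data and in a simulated normal sample with the same n, mean, and SD.

```python
import numpy as np

rng = np.random.default_rng(0)

changes = deltabw["DeltaDaily"].dropna()
simulated = rng.normal(changes.mean(), changes.std(ddof=1), len(changes)).round(2)

def share_on_tenths(values):
    """Fraction of values whose digit in the 0.01 place is zero."""
    return (np.round(np.abs(values) * 100).astype(int) % 10 == 0).mean()

print(f"real data: {share_on_tenths(changes):.0%}")    # about 71% in the study data
print(f"simulated: {share_on_tenths(simulated):.0%}")  # about 10%, as expected by chance
```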

As we mentioned earlier, there are several cases where a participant stays at the exact same weight for two or three days in a row. The distribution we see here is the cause. As you can see, the most common daily change is exactly zero. Now, it’s certainly possible to imagine why some values might end up being zero in a study like this. There might be a technical incident with the scale, a clerical error, or a mistake when recording handwritten data on the computer. A lazy lab assistant might lose their notes, resulting in the previous day’s value being used as the reasonable best estimate. But since a change of exactly zero is the modal response, a full 9% of all measurements, it’s hard to imagine that these are all omissions or technical errors.

In addition, there’s something very strange going on with the trailing digits:

On the top here we have the distribution of digits in the 0.1 place. For example, a measurement of 0.29 kg would appear as a 2 here. This is roughly the distribution we would expect, though there are a few more 1’s and fewer 0’s than usual. 

The bottom histogram is where things get weird. Here we have the distribution of digits in the 0.01 place. For example, a measurement of 0.29 kg would appear as a 9 here. As you can see, 382/540 of these observations have a 0 in their 0.01’s place — this is the same as that figure of 71% of measured changes being in full increments of 0.10 kg that we mentioned earlier. 

The rest of the distribution is also very strange. When the trailing digit is not a zero, it is almost always a 1 or a 9, sometimes a 2 or an 8, and hardly ever anything else. Of the 540 observed weight changes, only 3 have a trailing digit of 5.
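Both digit distributions can be tabulated directly from the DeltaDaily values; here is a sketch (digits taken from the absolute change):

```python
import numpy as np

changes = deltabw["DeltaDaily"].dropna()
cents = np.round(changes.abs() * 100).astype(int)   # each change in units of 0.01 kg

tenths_digit = (cents // 10) % 10                   # digit in the 0.1 place
hundredths_digit = cents % 10                       # digit in the 0.01 place

print(tenths_digit.value_counts().sort_index())
print(hundredths_digit.value_counts().sort_index())
```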

We can see that this is not what we would expect from (simulated) normally distributed data:

It’s also not what we would expect to see if they were measuring to one decimal place most of the time (~70%), but to two decimal places on occasion (~30%). As we’ve already mentioned, this doesn’t make sense from a methodological standpoint, because all daily weights are reported to two decimal places. But even if it somehow were a measurement accuracy issue, we would expect an equal distribution across all the other digits besides zero, like this:

This is certainly not what we see in the reported data. The fact that 1 and 9 are the most likely trailing digits after 0, and that 2 and 8 are the most likely after that, is especially strange.

IV. 

When we first started looking into this paper, we approached Retraction Watch, who said they considered it a potential story. After completing the analyses above, we shared an early version of this post with Retraction Watch, and with our permission they approached the authors for comment. The authors were kind enough to offer feedback on what we had found, and when we examined their explanation, we found that it clarified a number of our points of confusion. 

The first thing they shared with us was this erratum from October 2020, which we hadn’t seen before. The erratum reports that they noticed an error in the documented diet order of one participant. This is an important note but doesn’t affect the analyses we present here, which have very little to do with diet conditions.

Kevin Hall, the first author on this paper, also shared a clarification on how body weights were calculated:

I think I just discovered the likely explanation about the distribution of high-precision digits in the body weight measurements that are the main subject of one of the blogs. It’s kind of illustrative of how difficult it is to fully report experimental methods! It turns out that the body weight measurements were recorded to the 0.1 kg according to the scale precision. However, we subtracted the weight of the subject’s pajamas that were measured using a more precise balance at a single time point. We repeated subtracting the mass of the pajamas on all occasions when the subject wore those pajamas. See the example excerpted below from the original form from one subject who wore the same pajamas (PJs) for three days and then switched to a new set. Obviously, the repeating high precision digits are due to the constant PJs! 😉

This matches what is reported in the paper, where they state, “Subjects wore hospital-issued top and bottom pajamas which were pre-weighed and deducted from scale weight.” 

Kevin also included the following image, which shows part of how the data was recorded for one participant: 

If we understand this correctly, the first time a participant wore a set of pajamas, the pajamas were weighed to three decimals of precision. Then, that measurement was subtracted from the participant’s weight on the scale (“Patient Weight”) on every consecutive morning, to calculate the participant’s body weight. For an unclear reason, this was recorded to two decimals of precision, rather than the one decimal of precision given by the scale, or the three decimals of precision given by the PJ weights. When the participant switched to a new set of pajamas, the new set was weighed to three decimals of precision, and that number was used to calculate participant body weight until they switched to yet another new set of pajamas, etc.

We assume that the measurement for the pajamas is given in kilograms, even though they write “g” and “gm” (“qm”?) in the column. I wish my undergraduate lab TAs were as forgiving as the editors at Cell Metabolism.

This method does account for the fact that participant body weights were reported to two decimal places of precision, despite the fact that the scale only measures weight to one decimal place of precision. Even so, there were a couple of things that we still found confusing.

The variable that interests us most is DeltaDaily. We can easily calculate it for the example provided, like so:

We can see that whenever a participant doesn’t change their pajamas on consecutive days, there’s a trailing zero. In this way, the pajamas can account for the fact that 71% of the time, the trailing digits in the DeltaDaily variable were zeros. 

We also see that whenever the trailing digit is not zero, that lets us identify when a participant has changed their pajamas. Note of course that about ten percent of the time, a change in pajamas will also lead to a trailing digit of zero. So every trailing digit that isn’t zero is a pajama change, though a small number of the zeros will also be “hidden” pajama changes.

In any case, we can use this to make inferences about how often participants change their pajamas, which we find rather confusing. Participants often change their pajamas every day for multiple days in a row, or go long stretches without apparently changing their pajamas at all, and sometimes these are the same participants. It’s possible that these long stretches without any apparent change of pajamas are the result of the “hidden” changes we mentioned, because about 10% of the time changes would happen without the trailing digit changing, but it’s still surprising.

For example, participant 05 changes their pajamas on day 2, day 5, and day 10, and then apparently doesn’t change their pajamas again until day 28, going more than two weeks without a change in PJs. Participant 20, in contrast, changes pajamas at least 16 times over 28 days, including every day for the last four days of the study. The record for this, however, has to go to participant 03, who at one point appears to have switched pajamas every day for at least seven days in a row. Participant 03 then goes eight days in a row without changing pajamas before switching pajamas every day for three days in a row. 

Participant 08 (the participant from the image above) seems to change their pajamas only twice during the entire 28-day study, once on day 4 and again on day 28. Certainly this is possible, but it doesn’t look like the pajama-wearing habits we would expect. It’s true that some people probably want to change their pajamas more than others, but this doesn’t seem like it can be entirely attributed to personality, as some people don’t change pajamas at all for a long time, and then start to change them nearly every day, or vice-versa.

We were also unclear on whether the pajamas adjustment could account for the most confusing pattern we saw in the data for this article, the distribution of digits in the .01 place for the DeltaDaily variable:

The pajamas method can explain why there are so many zeros — any day a participant didn’t change their pajamas, there would be a zero, and it’s conceivable that participants only changed their pajamas on 30% of the days they were in the study. 

We weren’t sure if the pajamas method could explain the distribution of the other digits. For the trailing digits that aren’t zero, 42% of them are 1’s, 27% of them are 9’s, 9% of them are 2’s, 8% of them are 8’s, and the remaining digits account for only about 3% each. This seems very strange.

You’ll recall that the DeltaDaily values record the changes in participant weights between consecutive days. Because the scale is only precise to 0.1 kg, the digit in the 0.01 place carries information about the difference between the weights of two different pairs of pajamas. For illustration, in the example Kevin Hall provided, the participant switched between a pair of pajamas weighing 0.418 kg and a pair weighing 0.376 kg. These differ by 0.042 kg, so when rounded to two decimal places, the difference we see in the DeltaDaily has a trailing digit of 4. 

We wanted to know if the pajama adjustment could explain why the difference (for the digit in the 0.01’s place) between the weights of two pairs of pajamas are 14x more likely to be a 1 than a 6, or 9x more likely to be a 9 than a 3. 

Verbal arguments quickly got very confusing, so we decided to run some simulations. We simulated 20 participants, for 28 days each, just like the actual study. On day one, simulated participants were assigned a starting weight, which was a random integer between 40 and 100. Every day, their weight changed by an amount between -1.5 and 1.5 kg in increments of 0.1 (-1.5, -1.4, -1.3 … 1.4, 1.5), with each increment having an equal chance of occurring. 

The important part of the simulation was the pajamas, of course. Participants were assigned a pajama weight on day 1, and each day they had a 35% chance of changing pajamas and being assigned a new pajama weight. The real question was how to generate a reasonable distribution of pajama weights. We didn’t have much to go on, just the two values in the image that Kevin Hall shared with us. But we decided to give it a shot with just that information. Weights of 418 g and 376 g have a mean of just under 400 g and a standard deviation of about 30 g, so we decided to sample our pajama weights from a normal distribution with those parameters.
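Here is a minimal sketch of that simulation in Python. The pajama-weight distribution (normal with mean 400 g and SD 30 g), the 35% daily chance of a pajama change, and the other parameters are the assumptions described above, not values taken from the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

def trailing_digit_counts(pj_mean=0.400, pj_sd=0.030, p_change=0.35,
                          n_subjects=20, n_days=28):
    """Distribution of the 0.01-place digit of simulated daily weight changes."""
    digits = []
    for _ in range(n_subjects):
        scale = float(rng.integers(40, 101))            # starting scale weight, kg
        pajamas = rng.normal(pj_mean, pj_sd)            # current pajama weight, kg
        previous = round(scale - pajamas, 2)            # recorded body weight, two decimals
        for _ in range(n_days - 1):
            scale = round(scale + rng.integers(-15, 16) / 10, 1)  # scale reads to 0.1 kg
            if rng.random() < p_change:                 # chance of a fresh pair of pajamas
                pajamas = rng.normal(pj_mean, pj_sd)
            recorded = round(scale - pajamas, 2)
            digits.append(int(round(abs(recorded - previous) * 100)) % 10)
            previous = recorded
    return pd.Series(digits).value_counts().sort_index()

print(trailing_digit_counts())              # pajama SD = 30 g
print(trailing_digit_counts(pj_sd=0.015))   # a smaller pajama SD, discussed below
```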

When we ran this simulation, the digits in the 0.01 place didn’t show the same saddle-shaped distribution as the data from the paper:

We decided to run some additional simulations, just to be sure. To our surprise, when the SD of the pajamas is smaller, in the range of 10-20 g, you can sometimes get saddle-shaped distributions just like the ones we saw in data from the paper. Here’s an example of what the digits can look like when the SD of the pajamas is 15 g:

It’s hard for us to say whether a standard deviation of 15 g or of 30 g is more realistic for hospital pajamas, but it’s clear that under certain circumstances, pajama adjustments can create this kind of distribution (we propose calling it the “pajama distribution”).

While we find this distribution surprising, we conclude that it is possible given what we know about these data and how the weights were calculated.

V. 

When we took a close look at these data, we originally found a number of patterns that we were unable to explain. Having communicated with the authors, we now think that while there are some strange choices in their analysis, most of these patterns can be explained when we take into account the fact that pajama weights were deducted from scale weights, and the two weights had different levels of precision.

While these patterns can be explained by the pajama adjustment described by Kevin Hall, there are some important lessons here. The first, as Kevin notes in his comment, is that it can be very difficult to fully record one’s methods. It would have been better to include the full history of this variable in the data files, including the pajama weights, instead of recording the weights and performing the relevant comparisons by hand. 

The second is a lesson about combining data of different levels of precision. The hair-comb pattern that we observed in the distribution of DeltaDaily scores was truly bizarre, and was reason for serious concern. It turns out that this kind of distribution can occur when a measure with one decimal of precision is combined with another measure with three decimals of precision, and the result is rounded to two decimals of precision. In the future, researchers should avoid combining data in this way, so as not to create such artifacts. While it may not affect their conclusions, it is strange for the authors to claim that someone’s weight changed by (for example) 1.27 kg, when they have no way to measure the change to that level of precision.

There are some more minor points that this explanation does not address, however. We still find it surprising how consistent the weight change was in this study, and how extreme some of the weight changes were. We also remain somewhat confused by how often participants changed (or didn’t change) their pajamas. 

This post continues in Part Two over at Nick Brown’s blog, where he covers several other aspects of the study design and data.

Thanks again to Nick Brown for comparing notes with us on this analysis, to James Heathers for helpful comments, and to a couple of early readers who asked to remain anonymous. Special thanks to Kevin Hall and the other authors of the original paper, who have been extremely forthcoming and polite in their correspondence. We look forward to ongoing public discussion of these analyses, as we believe the open exchange of ideas can benefit the scientific community.

Statistics is an Excellent Servant and a Bad Master

I.

Imagine a universe where every cognitive scientist receives extensive training in how to deal with demand characteristics. 

(Demand characteristics describe any situation in a study where a participant either figures out what a study is about, or thinks they have, and changes what they do in response. If the participant is friendly and helpful, they may try to give answers that will make the researchers happy; if they have the opposite disposition, they might intentionally give nonsense answers to ruin the experiment. This is a big part of why most studies don’t tell participants what condition they’re in, and why some studies are run double-blind.)

In the real world, most students get one or two lessons about demand characteristics when they take their undergrad methods class. When researchers are talking about a study design, sometimes we mention demand, but only if it seems relevant.

Let’s return to our imaginary universe. Here, things are very different. Demand characteristics are no longer covered in undergraduate methods courses — instead, entire classes are exclusively dedicated to demand characteristics and how to deal with them. If you major in a cognitive science, you’re required to take two whole courses on demand — Introduction to Demand for the Psychological Sciences and Advanced Demand Characteristics.

Often there are advanced courses on specific forms of demand. You might take a course that spends a whole semester looking at the negative-participant role (also known as the “screw-you effect”), or a course on how to use deception to avoid various types of demand. 

If you apply to graduate school, how you did in these undergraduate courses will be a major factor determining whether they let you in. If you do get in, you still have to take graduate-level demand courses. These are pretty much the same as the undergrad courses, except they make you read some of the original papers and work through the reasoning for yourself. 

When presenting your research in a talk or conference, you can usually expect to get a couple of questions about how you accounted for demand in your design. Students are evaluated based on how well they can talk about demand and how advanced the techniques they use are.

Every journal requires you to include a section on demand characteristics in every paper you submit, and reviewers will often criticize your manuscript because you didn’t account for demand in the way they expected. When you go up for a job, people want to know that you’re qualified to deal with all kinds of demand characteristics. If you have training in dealing with an obscure subtype of demand, it will help you get hired.

It would be pretty crazy to devote such a laser focus to this one tiny aspect of the research process. Yet this is exactly what we do with statistics.

II. 

Science is all about alternative explanations. We design studies to rule out as many stories as we can. Whatever stories remain are possible explanations for our observations. Over time, we whittle this down to a small number of well-supported theories. 

There’s one alternative explanation that is always a concern. For any relationship we observe, there’s a chance that what we’re seeing is just noise. Statistics is a set of tools designed to deal with this problem. It holds a special place in science because “it was noise” is a concern for every study in every field, so we always want to make sure to rule it out.

But of course, there are many alternative explanations that we need to be concerned with. Whenever you’re dealing with human participants, demand characteristics will also be a possible alternative. Despite this, we don’t jump down people’s throats about demand. We only bring up these issues when we have a reason to suspect that it is a problem for the design we’re looking at.

There will always be more than one way to look at any set of results. We can never rule out every alternative explanation — the best we can do is account for the most important and most likely alternatives. We decide which ones to account for by using our judgement, by taking some time to think about what alternatives we (and our readers) will be most concerned about. 

The right answer will look different for different experiments. But the wrong answer is to blindly throw statistics at every single study. 

Statistics is useful when a finding looks like it could be the result of noise, but you’re not sure. For example, let’s say we’re testing a new treatment for a disease. We have a group of 100 patients who get the treatment and a control group of 100 people who don’t get the treatment. If 52/100 people recover when they get the treatment, compared to 42/100 recovering in the control group, does that mean the treatment helped? Or is the difference just noise? I can’t tell with just a glance, but a simple chi-squared test can tell me that p ≈ .16, meaning that noise alone would produce a difference at least this big about 16% of the time, so these numbers are not strong evidence that the treatment helped.

That’s helpful, but it would be pointless to run a statistical test if we saw 43/100 people recover with the treatment, compared to 42/100 in the control group. I can tell that this is very consistent with noise (p > .50) just by looking at it. And it would be pointless to run a statistical test if we saw 98/100 people recover with the treatment, compared to 42/100 in the control group. I can tell that this is very inconsistent with noise (p < .00000000000001) just by looking at it. If something passes the interocular trauma test (the conclusion hits you between the eyes), you don’t need to pull out another test.
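For what it’s worth, here is how those three hypothetical scenarios look if you do run the test (a quick sketch using scipy):

```python
from scipy.stats import chi2_contingency

for treated in (52, 43, 98):                 # recoveries out of 100 in the treatment group
    table = [[treated, 100 - treated],       # treatment group: recovered, not recovered
             [42, 58]]                       # control group:   recovered, not recovered
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"{treated}/100 vs 42/100: chi2 = {chi2:.2f}, p = {p:.2g}")
```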

This might sound outlandish today, but you can do perfectly good science without any statistics at all. After all, statistics is barely more than a hundred years old. Sir Francis Galton came up with the concept of the standard deviation in the 1860s, and the story with the ox didn’t happen until 1907. It took until the 1880s to dream up correlation. Karl Pearson was born in 1857 but didn’t do most of his statistics work until around the turn of the century. Fisher wasn’t even born until 1890. He introduced the term variance for the first time in 1918, but both that term and the ANOVA didn’t gain popularity until the publication of his book in 1925.

This means that Galileo, Newton, Kepler, Hooke, Pasteur, Mendel, Lavoisier, Maxwell, von Helmholtz, Mendeleev, etc. did their work without anything that resembled modern statistics, and that Einstein, Curie, Fermi, Bohr, Heisenberg, etc. etc. did their work in an age when statistics was still extremely rudimentary. We don’t need statistics to do good research.

This isn’t an original idea, or even a particularly new one. When statistics was young, people understood this point better. For an example, we can turn to Sir Austin Bradford Hill. He was trained by Karl Pearson (who, among other things, invented the chi-squared test we used earlier), was briefly president of the Royal Statistical Society, and was sometimes referred to as the world’s leading medical statistician. In the 1940s, he pioneered the introduction of the randomized clinical trial in medicine. As far as opinions on statistics go, the man was pretty qualified. 

While you may not know his name, you’re probably familiar with his work. He was one of the researchers who demonstrated the connection between cigarette smoking and lung cancer, and in 1965 he gave a speech about his work on the topic. Most of the speech was a discussion of how one can infer a causal relationship from largely correlational data, as he had done with the smoking-lung cancer connection, a set of considerations that came to be known as the Bradford Hill criteria.

But near the end of the speech, he turns to a discussion of tests of significance, as he calls them, and their limitations:

No formal tests of significance can answer [questions of cause and effect]. Such tests can, and should, remind us of the effects that the play of chance can create, and they will instruct us in the likely magnitude of those effects. Beyond that they contribute nothing to the ‘proof’ of our hypothesis. 

Nearly forty years ago, amongst the studies of occupational health that I made for the Industrial Health Research Board of the Medical Research Council was one that concerned the workers in the cotton-spinning mills of Lancashire (Hill 1930). … All this has rightly passed into the limbo of forgotten things. What interests me today is this: My results were set out for men and women separately and for half a dozen age groups in 36 tables. So there were plenty of sums. Yet I cannot find that anywhere I thought it necessary to use a test of significance. The evidence was so clear cut, the differences between the groups were mainly so large, the contrast between respiratory and non-respiratory causes of illness so specific, that no formal tests could really contribute anything of value to the argument. So why use them?

Would we think or act that way today? I rather doubt it. Between the two world wars there was a strong case for emphasizing to the clinician and other research workers the importance of not overlooking the effects of the play of chance upon their data. Perhaps too often generalities were based upon two men and a laboratory dog while the treatment of choice was deducted from a difference between two bedfuls of patients and might easily have no true meaning. It was therefore a useful corrective for statisticians to stress, and to teach the needs for, tests of significance merely to serve as guides to caution before drawing a conclusion, before inflating the particular to the general. 

I wonder whether the pendulum has not swung too far – not only with the attentive pupils but even with the statisticians themselves. To decline to draw conclusions without standard errors can surely be just as silly? Fortunately I believe we have not yet gone so far as our friends in the USA where, I am told, some editors of journals will return an article because tests of significance have not been applied. Yet there are innumerable situations in which they are totally unnecessary – because the difference is grotesquely obvious, because it is negligible, or because, whether it be formally significant or not, it is too small to be of any practical importance. What is worse, the glitter of the t-table diverts attention from the inadequacies of the fare. Only a tithe, and an unknown tithe, of the factory personnel volunteer for some procedure or interview, 20% of patients treated in some particular way are lost to sight, 30% of a randomly-drawn sample are never contacted. The sample may, indeed, be akin to that of the man who, according to Swift, ‘had a mind to sell his house and carried a piece of brick in his pocket, which he showed as a pattern to encourage purchasers.’ The writer, the editor and the reader are unmoved. The magic formulae are there. 

Of course I exaggerate. Yet too often I suspect we waste a deal of time, we grasp the shadow and lose the substance, we weaken our capacity to interpret the data and to take reasonable decisions whatever the value of P. And far too often we deduce ‘no difference’ from ‘no significant difference.’ Like fire, the chi-squared test is an excellent servant and a bad master.

III.

We grasp the shadow and lose the substance. 

As Dr. Hill notes, the blind use of statistical tests is a huge waste of time. Many designs don’t need them; many arguments don’t benefit from them. Despite this, we have long disagreements about which of two tests is most appropriate (even when both of them will be highly significant), we spend time crunching numbers when we already know what we will find, and we demand that manuscripts have their statistics arranged just so — even when it doesn’t matter.

This is an institutional waste of time as well as a personal one. It’s weird that students get so much training in statistics. Methods are almost certainly more important, but most students are forced to take multiple stats classes, while only one or two methods classes are even offered. This is also true at the graduate level. Methods and theory courses are rare in graduate course catalogs, but there is always plenty of statistics.

Some will say that this is because statistics is so much harder to learn than methods. Because it is a more difficult subject, it takes more time to master. Now, it’s true that students tend to take several courses in statistics and come out of them remembering nothing at all about statistics. But this isn’t because statistics is so much more difficult. 

We agree that statistical thinking is very important. What we take issue with is the neurotic focus on statistical tests, which are of minor use at best. The problem is that our statistics training spends multiple semesters on tests, while spending little to no time at all on statistical thinking. 

This also explains why students don’t learn anything in their statistics classes. Students can tell, even if only unconsciously, that the tests are unimportant, so they have a hard time taking them seriously. They would also do poorly if we asked them to memorize a phone book — so much more so if we asked them to memorize the same phone book for three semesters in a row.

The understanding of these tests is based on statistical thinking, but we don’t teach them that. We’ve become anxious around the tests, and so we devote more and more of the semester to them. But this is like becoming anxious about planes crashing and devoting more of your pilot training time to the procedure for making an emergency landing. If the pilots get less training in the basics, there will be more emergency landings, leading to more anxiety and more training, etc. — it’s a vicious cycle. If you just teach students statistical thinking to begin with, they can see why it’s useful and will be able to easily pick up the less-important tests later on, which is exactly what I found when I taught statistics this way.

The bigger problem is turning our thinking over to machines, especially ones as simple as statistical tests.

Your new overlord.

Sometimes a test is useful, sometimes it is not. We can have discussions about when a test is the right choice and when it is the wrong one. Researchers aren’t perfect, but we have our judgement and damn it, we should be expected to use it. We may be wrong sometimes, but that is better than letting the p-values call all the shots. 

We need to stop taking tests so seriously as a criterion for evaluating papers. There’s a reason, of course, that we are on such high alert about these tests — the concept of p-hacking is only a decade old, and questionable statistical practices are still being discovered all the time. 

But this focus on statistical issues tends to obscure deeper problems. We know that p-hacking is bad, but a paper with perfect statistics isn’t necessarily good — the methods and theory, even the basic logic, can be total garbage. In fact, this is part of how we got in the p-hacking situation in the first place: by using statistics as the main way of telling if a paper is any good or not! 

Putting statistics first is how we end up with studies with beautifully preregistered protocols and immaculate statistics, but deeply confounded methods, on topics that are unimportant and frankly uninteresting. This is what Hill meant when he said that “the glitter of the t-table diverts attention from the inadequacies of the fare”. Confounded methods can produce highly significant p-values without any p-hacking, but that doesn’t mean the results of such a study are of any value at all. 

This is why I find proposals to save science by revising statistics so laughable. Surrendering our judgement to Bayes factors instead of p-values won’t do anything to solve our problems. Changing the threshold of significance from .05 to .01, or .005, or even .001 won’t make for better research. We shouldn’t try to revise statistics; we should use it less often. 


Thanks to Adam Mastroianni, Grace Rosen, and Alexa Hubbard for reading drafts of this piece.

I Don’t Give a Shit About 5% More Damage

In The Witcher 3, some abilities increase your fast attack damage by 5%. Boooooring. When I kill that cockatrice, I have no idea if the 5% extra damage saved my ass, but my guess is that it didn’t make the difference between success and failure.

Game designers often have the natural instinct to push their systems to be more granular, i.e. using smaller and smaller pieces. It seems to offer a way to meet several design goals. Sometimes we want to capture aspects of real life that other systems have left out. RPGs are abstractions of real life, and because it’s so easy to notice what is missing, it can be tempting to write those parts back in. But this can get ridiculous pretty quickly. “D&D is ok, but it’s strange how getting a really good night’s sleep doesn’t improve performance, the way it does in the real world. I want my characters to get a bonus if they’ve slept well the night before. A +1 bonus on a d20 roll is too high. Wait! What if I made it d100? Then I could give them a +1 for each two hours of sleep, up to +5!”

This has two major problems. First, keeping track of systems like this is unbearable. “Wait! I forgot to add my bonus from using my whetstone every night, and that other bonus for fighting opponents who are using bludgeoning weapons. Can I re-roll that last attack?” Actually doing the calculations is even worse.

You can partially avoid this if a computer keeps track of all the bonuses and does all the math for you. This is why video games, like The Witcher 3, are more likely to use granular systems and more likely to be successful when they do so. But of course, I was just complaining about The Witcher. Fixing the math doesn’t make granularity work.

The second problem is that small changes are, well, small. There’s a flavor aspect — double damage feels like a much bigger deal than 5% more damage — but flavor isn’t the main issue. The real concern is that small effects don’t make for a strategic system. If an ability doubles my damage, then I want to take that into account, and so do my enemies. If an ability increases my damage by 5%, I will do the same thing I was going to do in the first place. I’ll deal a little more damage, which probably won’t come close to tipping the balance of the battle, and it will be a huge headache keeping track of it all.

(There are some secondary problems as well — a system like this is much more likely to end up with game-breaking hacks that come from combining modifiers, and you probably won’t be able to iron all of them out before release.) 

The solution is to make your game less granular, by using the biggest pieces possible. Players don’t care about multipliers below x2 — and neither should you. Toss them out.

Even better are effects that go beyond multipliers. In The Witcher 3, Geralt can also unlock an ability that lets him deflect arrows and crossbow bolts. That change is more fundamental, because it actually changes how the mechanics work.

Agency is what makes games enjoyable — I did that, I made it happen. My choices were relevant. To make attributions of agency, you need to be able to easily determine causality — if I can’t point to the major factor(s) responsible for an outcome, I can’t tell if my choices mattered. That ruins the game.

Links for December 2020

There are 37 official editions of Scrabble, each of which has its own distribution of letter tiles. There are also many unofficial versions, including Anglo-Saxon, Bambara, Klingon, and L33tspeak.

During a series of diplomatic talks in 1958, Mao invited Khrushchev to his private pool. Khrushchev couldn’t swim and was forced to use a flotation device (which Henry Kissinger described as “water wings”) in order to accept his host’s invitation to join him in the water.

Basketball is back, which means an unending stream of bickering about who is the GOAT. Only one man, however, has performed the double-double to end all double-doubles. In 1921 William Howard Taft became the Chief Justice of the United States after serving as the President from 1909-1913. Take that, LeBron.

According to nature, crabs are the most perfect form. You may not like it, but 🦀 is what peak performance looks like. This sacred knowledge inspired us so much, we even made a meme. We think this is the first step in the process of memes themselves evolving to be more crab-like. 

We’re still thinking about the Lieutenant Governor of Pennsylvania, his wife, and their rad house. They have my vote. 

Looking to spice things up in the bedroom? Why not try history’s most mysterious sex position, first described in Aristophanes’ classic comedy Lysistrata in 411 B.C. “The women are very reluctant, but the deal is sealed with a solemn oath around a wine bowl, Lysistrata choosing the words and Calonice repeating them on behalf of the other women. It is a long and detailed oath, in which the women abjure all their sexual pleasures, including the Lioness on the Cheese Grater (a sexual position).”

It May Surprise You to Learn the Senate is a Beacon of Liberal Politics

The Senate gets a lot of grief these days. Vox wants you to know that the Senate is a much bigger problem than the Electoral College. GQ makes the case for abolishing the Senate. Someone at the New Yorker tries to answer the question “how broken is the Senate?”, but in our opinion spends far more time than is necessary comparing the senators to various zoo animals. We can accept that Senator John Thune bears a certain resemblance to a gazelle, but Senator Jim Bunning really doesn’t look anything like a “maddened grizzly”.

The basic argument against the Senate is that it’s undemocratic. Senators aren’t elected proportionally, and so some senators represent more people than others. If democracy is all about giving a voice to the people, it seems pretty perverse to give more of a voice to some people than to others. 

But it turns out that disproportionate representation isn’t just compatible with democracy, it’s one of the most important safeguards of a liberal society.

It’s not just that every person deserves a vote. Liberalism also says that every way of life deserves to exist, as long as it doesn’t infringe on someone else’s way of life (e.g. no cannibals). After all, America isn’t a melting pot, it’s more of a patchwork quilt. I’m not into Lutheranism, extreme body modification, or small yappy dogs, but I think that people who are into these things deserve to be able to live how they want and celebrate these aspects of their lifestyle.

The basic argument against democracy is the old saying that democracy is two wolves and a sheep voting on what’s for dinner (no, it wasn’t Benjamin Franklin). With 1/3 of the vote, the sheep always gets eaten. In a country with 49 sheep and 51 wolves, as long as we have strict proportional representation, all the sheep still get eaten. If the vice president is a wolf, even a 50/50 split isn’t safe.

If the sheep all live in Sheepsylvania, however, they have a better chance to stand up for themselves. They may be outnumbered, but they still get two votes in the Senate. If they also have friends in Elkowa, Beavermont, and Llamassouri, that provides even more protection. It may not be enough to save them, but they will still do a lot better than they would with proportional representation. Disproportionate representation allows them to protect themselves even when they are enormously outnumbered.

States don’t correspond perfectly to different ways of life, and this is a fair criticism of the system. Disproportionate representation might work even better if we explicitly tied representation to specific minority groups. But states do have some correspondence to different ways of life. 

Most people these days think about disproportionate representation in terms of liberal versus conservative. But really, the differences in disproportionate representation today are urban versus rural. It happens to be that most rural states are also conservative, but low population density comes from being rural, not from voting Republican. There are plenty of rural voters who are very liberal but still prefer to live in the woods.

It’s not hard to imagine that urban voters — who are already more privileged in terms of wealth and education — might accidentally or even intentionally pass laws that would destroy a rural way of life for millions of people. For just one example, consider how decisions made in major cities can impact rural schooling. It’s important to have a political system that allows minorities to protect themselves.

Your state doesn’t even have to be all that rural to begin with. The Senate benefits the interests of pretty much anyone not living in California (11.9% of the population), Texas (8.7%), Florida (6.5%), or New York (5.8%). If you’re from Virginia, Hawaii, Iowa, Louisiana, Maryland, etc., and you don’t want California and Texas telling you how to live your life, then the Senate is acting in your favor.

We’d like to take this opportunity to remind you of Bernie Sanders. 

The state of Vermont has a very unusual but, we think, excellent way of life. Anarcho-socialist-libertarian-progressivism isn’t a way of life shared by most Americans, but we think it has a lot going for it. If representation were proportionate, we could maybe send Bernie, as an independent, to the House of Representatives, where he would be just one voice among 435. But with disproportionate representation, we’ve sent Bernie to the US Senate, where we can punch above our weight. Bernie can work to protect our way of life, and he can help to bring our values (flannel, maple syrup, and Ben & Jerry’s) to the rest of the country. You’re welcome, America.

Film Concept: Gangsters, Thugs, and Local Government

I. 

People who are decently well-off usually don’t appreciate how thin the line between “organized crime” and “local government” can be for the very poor.

In the Great Depression, notorious mobster Al Capone organized soup kitchens in Chicago. More recently, Brazilian gangs, in response to government failure to take action against the pandemic, declared a unilateral quarantine order in Rio de Janeiro, saying “If the government won’t do the right thing, organized crime will”. In some parts of Japan, the yakuza really are the de facto local government, and in the wake of natural disasters like earthquakes, they’re often faster to provide aid than the Japanese government is.

When you’re poor, the sad truth is that the de jure government probably doesn’t care about you much. There probably aren’t a lot of legitimate jobs in your area; you can’t afford to move away; even if you’re very talented, someone with better connections or a fancier-sounding degree will probably beat you out when competing for the few good jobs available.

This is especially true for marginalized groups, in particular when they’re targeted by law enforcement. In a legal system like ours, there are so many pointless and mutually contradictory laws that everyone is guilty of something. If the police watch you for long enough, they will eventually find something that they can arrest you for. (Obviously it’s even worse if they’re willing to lie or plant evidence, but the point is that it can happen even without this.) 

Even if they only put you away for a few weeks, a criminal record will probably kill your chances of getting a legitimate job in the future. If you want to serve your community, or even just put food on the table, your only choice may be an illegal job. 

But “criminal” doesn’t mean “evil”. Modern governments criminalize lots of things they really shouldn’t. If I couldn’t get a legal job, I would be pretty happy selling weed. I don’t think weed should be illegal, and there would be plenty of satisfied customers, so I would be open to sticking my thumb in the government’s eye over this issue if I didn’t have any other option. A similar argument can be made for other drugs, prescription medications, etc. — even giving medical care without a license. If none of these work for you, then remember that during Prohibition, the government criminalized alcohol. Ask yourself how guilty you would feel selling booze in the 1920s, if you had no other job prospects.

Since criminal activity is often the only way for the very poor to make their way in the world, criminal organizations are often the only local institutions around. And because the official government doesn’t really care about these neighborhoods (they may even be actively antagonistic), criminal organizations often end up being the only thing protecting the poor.

The affluent have a hard time understanding all of this, and for many people, a reasoned argument can’t shake the scary image of the criminal or gang member as an uncultured, unreasoning thug.

The good news is that this is what art is for. Fiction can give us, even if only distantly, the sense of what life is like for people who are different from us.

II.

So let’s imagine what a movie to flip this script would look like. Our hero is a young black man who grew up in a poor but respectable suburb of a major American city. He’s talented, but there aren’t many opportunities in his hometown. Like many young men with few options, he joins the Army. He’s quickly recognized as a crack shot and natural leader, gets recruited to the Green Berets, and receives multiple commendations. He also makes some very close friends. Once he returns to civilian life, one of his best friends from the Army runs for mayor, wins, and the protagonist spends the next few years helping his friend try to make things better in their city. He makes some money, starts dating a woman from a well-respected family, and begins thinking of settling down.

But when war is declared in the Middle East, his friend the mayor returns to military service to serve his country, and our hero joins him. They’re shipped overseas, and see a few years of intense combat. His friend the mayor goes missing in battle, presumed captured; our hero is injured and, after recuperating, is honorably discharged from service.

He’s sent home, only to find that things are worse than ever. The new mayor neglects social services in favor of pursuing a “tough on crime” agenda popular with the middle class. The police are encouraged to make lots of high-profile arrests, and they quickly grow fat on civil forfeiture. Constant harassment by the police leads anyone with the means to try to leave the poor parts of town. As money flows out of the neighborhood, so do most businesses, taking with them the last few legitimate jobs.

Soon, almost no one can make a living without turning to some kind of crime. Often this is only opening a hairdresser’s without a license, or running a restaurant in your living room, but the cops crack down on these businesses just the same — and legally, they’re in the right.

The first thing our hero sees when he gets home is the police arresting a kid who tried to steal food from a gas station. He steps in to try to help, but the cops pull their guns on him. With his Special Forces training, he’s able to disarm the cops, free the kid, and make his escape, all without hurting anyone. What he doesn’t know is that he’s made a powerful enemy. One of the cops he embarrassed was the new sheriff, a close ally of the corrupt mayor, who recognizes him. The sheriff puts out an APB, and soon our hero finds that cops are crawling all over the city looking for him, and no one will take him in.

He eventually finds shelter with a preacher at a local church who has seen enough of police brutality. The preacher had shut down the church and begun turning to drink, but seeing someone stand up to the cops and get away with it gives him new hope for the future.

It’s not just the preacher who takes note. Our hero starts attracting followers. First it’s his young cousin, a flashy dresser and accomplished boxer, hot-headed but idealistic. Next it’s a real beast of a man, a former bouncer who’s out of work and goes by “Lil’ Jon” or something, who impresses our hero by first beating him in a fight, and then throwing him off a bridge and into the river. Soon, more than a dozen people are hanging out in the basement of the abandoned church. 

Our hero can’t get legal work — in the eyes of the law, he’s a wanted criminal, who assisted the escape of a thief, resisted arrest, and assaulted officers. For one reason or another, neither can any of his followers. Even if he turned himself in, now that he knows he personally embarrassed the sheriff, he’s not confident he would survive to make it to trial.

But just because he can’t get legal work doesn’t mean he can’t make a difference for his community. The cops have been stealing property, cars, even cash from anyone they want, so he decides to steal it back. Under cover of night, the new band of friends break into the impound lot and take as many cars as they can drive away, leaving the guards hogtied but otherwise unharmed. 

With this success under their belt, the group grows bolder. They find the location of a multi-millionaire CEO’s summer home up in the hills, break in, and take everything of value. Next they knock over an armored car on its way to a bank, taking everything and even recruiting the driver to join their cause. With so much money on hand, the preacher helps them launder it, distributing the money to those in need and making it appear to come from the church.

Their fame, or maybe infamy, grows. When the cops try to arrest people on trumped-up charges, our hero intervenes, and many of the people he saves (now considered criminals, whether they like it or not) decide to join him. His fiancée escapes from her controlling parents and finds him hiding in the urban jungle. When bounty hunters are sent to track him down, more often than not, they end up being convinced by his cause and joining him instead. Even some of the cops on the force throw away their badges and turn outlaw. The sheriff and the mayor stop calling him a “violent wanted criminal” and start calling him a “notorious gang leader”.

The rest of the movie is dedicated to all of the tricks they pull. They place a call on an anonymous tip line, “revealing” that the gang headquarters is in an abandoned mall. Half of the cop cars in the city converge on the mall, leaving the gang to heist a shipment of insulin, which they distribute for free to the needy. Our hero disguises himself and poses as a bounty hunter, joining the hunt for his own gang. He crashes a fundraiser at the mayor’s house and tells the rich what he really thinks of them. He gets captured and the rest of the gang has to break him out of jail. Eventually, his friend the mayor is released in a prisoner-of-war exchange, comes back, wins election once again, and pardons them all. The wicked mayor and the sheriff are exposed for their crimes and held accountable, and our hero finally marries his sweetheart. And of course, you’d call it Hood Robin.

(This idea isn’t even quite as out there as it sounds.)

You Make My Head Hurt

“Catastrophic failure [of the unhelmeted skull] during testing…experiencing a maximum load of 520 pounds of force,” says the Journal of Neurosurgery: Pediatrics.

According to NASA, the average push strength of an adult male is about 220 lbs of force, with a standard deviation of 68 lbs. If Gregor Clegane were three standard deviations above the mean (roughly the top 0.1%), he would be able to produce about 424 lbs of force, which is not quite enough. He would need to be about 4.4 standard deviations above average to crush a skull with his bare hands.

This is pretty extreme, but if strength is normally distributed in Westeros, Gregor would only be about 1 in 195,000. Another way of saying this is that if one baby were born every day, a man as strong as this would come around about every 530 years. Since birth rates are much higher than that, it’s not impossible.

This is also consistent with what we know about Gregor in general. He’s described as being nearly eight feet tall, or 96 inches. The average height of men in the United States is about 70 inches, with a standard deviation of 4 inches. This means that Gregor is about 6.5 standard deviations taller than average. It seems likely that he would be similarly above average in terms of his strength.
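
For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The 220 lb mean and 68 lb standard deviation are the NASA push-strength figures above, the 520 lb load is the skull-failure figure, and the height numbers are the US ones; treating strength and height as normally distributed is the back-of-the-envelope assumption, not something NASA claims.

```python
# Back-of-the-envelope check of the numbers above. Assumes strength and
# height are normally distributed; the 220/68 lb and 70/4 in figures are
# the ones cited in the text, and 520 lbs is the skull-failure load.
from scipy.stats import norm

strength_mean, strength_sd = 220, 68    # NASA push strength (lbs)
skull_load = 520                        # load that crushed the unhelmeted skull (lbs)

z = (skull_load - strength_mean) / strength_sd   # ~4.4 SDs above average
rarity = 1 / norm.sf(z)                          # one man in roughly this many
years = rarity / 365.25                          # at one birth per day

print(f"needs to be {z:.2f} SDs above average")
print(f"about 1 in {rarity:,.0f} men, or one every {years:.0f} years at a birth a day")

height_mean, height_sd = 70, 4          # US adult male height (inches)
gregor_height = 96                      # "nearly eight feet"
print(f"Gregor is {(gregor_height - height_mean) / height_sd:.1f} SDs taller than average")
```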

Verdict – it is statistically possible that someone strong enough to crush a skull with their hands exists in Westeros, and Ser Gregor is a good candidate for the role.

Mulch Ado About Nothing

See part one here

Well, it’s been more than a month, and still the fabled glories of slime mold cultivation evade me. The first serious snow of the season fell this week. Despite daily drenchings of the prescribed amount, nothing has grown. Nor have any been spotted casually growing on rotting wood around the forest. It may just be a bad year for slime molds, due to the exceptionally dry summer.

Either we will try again next year or we will cave and buy a culture kit.

Links for October

Looking for a new cocktail to juice up your fall? Why not try Torpedo Juice, a mixture of pineapple juice and the 180-proof grain alcohol used as fuel in Navy torpedo motors? On second thought, this sounds more like a summer beverage.

During the first year of his presidency, George H.W. Bush’s love of pork rinds drove a 31% increase in sales of this non-kosher snack. 

Lithium’s psychiatric effects were originally “discovered” by taking urine from patients at the Bundoora Repatriation Mental Hospital (near Melbourne, Australia) and injecting it into the abdominal cavities of guinea pigs. Early pregnancy tests involved injecting women’s pee into mice, rabbits, or frogs (note: Piss Prophets is an amazing name for a feminist punk rock band if you’re looking to start the next Pussy Riot). Maybe if we want faster biomedical research, we need to try injecting more kinds of pee into more kinds of animals.

The Department of Defense represents 77% of the federal government’s energy consumption. In Fiscal Year 2017, the DoD consumed over 82 times more BTUs than NASA. The federal agency with the second largest energy consumption is the Postal Service. 

Trump is famous for giving his political opponents cheeky nicknames, but his zingers pale in comparison to those of classical Chinese philosophers. Case in point: “Mozi criticized Confucians by saying they ‘behave like beggars; grasp food like hamsters, stare like he-goats, and walk around like castrated pigs. 是若人氣,鼸鼠藏,而羝羊視,賁彘起。(墨子·非儒下)’”

In his eulogy at Graham Chapman’s memorial service, John Cleese said, “I guess that we’re all thinking how sad it is that a man of such talent, of such capability and kindness, of such unusual intelligence, should now, so suddenly, be spirited away at the age of only forty-eight, before he’d achieved many of the things of which he was capable, and before he’d had enough fun. Well, I feel that I should say, Nonsense. Good riddance to him, the freeloading bastard, I hope he fries!” Amazingly, we have this on video.

Agent Orange was just one of a family of colorful chemicals put to use by the United States Military. During the Vietnam War, Project AGILE deployed 9 types of Rainbow Herbicides (Agent Pink, Agent Green, Agent Purple, Agent Blue, Agent White, and 4 types of Agent Orange) across Southeast Asia between 1961 and 1971. I feel like I should lighten the mood but all of the rainbow jokes I can think of are extremely offensive and/or distasteful.

Slime Mold Part I

It’s late September, which for me means seasonal hankerings for autumnal coffee beverages and questionable slime mold studies. I finally got a Pumpkin Cream Cold Brew at Starbucks yesterday. Unfortunately, no plasmodia have presented themselves as willing partners to help me solve mazes or design efficient transit systems.

I emailed a local expert (someone who wrote an article about slime molds and has a doctorate in a relevant field) a few weeks ago. They advised me to wait until cooler temps and rain naturally rouse the spores that lie dormant through the summer. We’ve now had several nights below freezing, but very little fall rain, and I worry my ideal cultivation window is rapidly closing.

It’s time to take fate into my own hands. 

I’ve picked out six locations near my house with slightly different elevations, exposures, and soil types to serve as cultivation grounds. At each I have placed two piles of brown mulch. I am using this type of mulch because it is what I had at the house. One pile will serve as the site control, while the second will be watered daily. I used a 32 oz. can to scoop the mulch to ensure roughly equal amounts in each condition.

Figure 1: Freshly strewn mulch piles at sites 1-6.

Watered piles will get two scoops of water from a little red plastic scoop that was originally used for protein powder. I chose this because its vibrant hue will make it easier to find when I inevitably misplace it. Also, it was the first thing I found.

Figure 2: The 32 oz. can used for creating mulch piles and carrying water, and the red scoop used for dosing.

I’m using tap water because I really don’t think using distilled water will make a difference — the water at our house comes from a well and is probably more similar to rainwater than treated water would be. The budget for this project is also $0. 

A seventh location with 4 mulch piles will examine a wider range of watering conditions: the treatment levels are 1 scoop, 2 scoops, 4 scoops, and 6+ scoops.

Figure 3: Mulch piles at site 7.

The 1 scoop pile isn’t a real control, but there are already a bunch of no-water piles around at the other sites, so having another one here seemed boring. I was going to make the highest level 8 scoops, but I’m calling it 6+ because the can only holds about 14 scoops in total; if I water all four piles in one trip, the last pile ends up getting 7 scoops, which just doesn’t follow as nicely from the previous numbers as 6 or 8 would. Still, I want this one to be pretty wet, so on some days I might fill the can a second time and give it 8 (or more) scoops. I fully expect this to degenerate into a “just drench it” scenario within the week.
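
For anyone who wants the whole watering plan in one place, here is a minimal sketch of the design as a bit of Python. The site numbering and scoop counts are the ones described above; the structure and names are purely illustrative.

```python
# A minimal sketch of the mulch-pile watering design described above.
# Scoop counts are per day, using the little red protein-powder scoop.

# Sites 1-6: two piles each -- an unwatered control and a pile that
# gets two scoops of water daily.
sites_1_to_6 = {site: {"control": 0, "watered": 2} for site in range(1, 7)}

# Site 7: four piles spanning a wider range of watering levels.
# "6+" means at least 6 scoops, topped up to 8 or more on days when
# the can gets refilled.
site_7 = {"pile_1": 1, "pile_2": 2, "pile_3": 4, "pile_4": "6+"}

def print_daily_plan():
    """List how many scoops each pile gets today."""
    for site, piles in sites_1_to_6.items():
        for name, scoops in piles.items():
            print(f"site {site}, {name} pile: {scoops} scoops")
    for name, scoops in site_7.items():
        print(f"site 7, {name}: {scoops} scoops")

print_daily_plan()
```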