Philosophical Transactions: Lithium in Scottish Drinking Water with Al Hatfield


Al Hatfield is a wannabe rationalist (his words) from the UK who sent us some data about water sources in Scotland. We had an interesting exchange with him about these data and, with Al’s permission, wanted to share it with all of you! Here it is:


I know you’re not that keen on correlations and I actually stopped working on this a few months ago when you mentioned that in the last A Chemical Hunger post, but after reading your post today I wanted to share it anyway, just in case it does help you at all. 

It’s a while since I read all of A Chemical Hunger but I think this data about Scottish water may support a few things you said:

– The amount of Lithium in Scottish water is in the top 4 correlations I found with obesity (out of about 40 substances measured in the water)

– I recall you predicted the top correlation would be about 0.5; the data I have implies it’s 0.55, so about right.

– I recall you said more than one substance in the water may contribute to obesity, my data suggested 4 substances/factors had correlations of more than 0.46 with obesity levels and 6 were more than 0.41.


– Scottish Water test and record how much of up to 43 substances is in each reservoir/water source in Scotland

– their data is in pdf format but I converted it to Excel

– Scottish Water don’t publish Lithium levels online but I did a Freedom of Information request and they emailed it to me and I added it to the spreadsheet.

– I used the website to get the water quality data for a reservoir for every city/big town in Scotland and lined it up in the spreadsheet.

– I used Scottish Health Survey – Local Area Level data to find out what percentage of people are obese in each area of Scotland and then matched it as well as I could to a reservoir/water source.

– I then used the Data Analytics add-on in Excel to work out the correlations between the substances in the water and obesity.

Correlations with obesity (also in attachment)

Conductivity 0.55

Chloride 0.52

Boron 0.47

Lithium 0.47

Total Trihalomethanes 0.42

Sodium 0.42

Sulphate 0.38

Fluoride 0.37

Colony Counts After 3 Days At 22°C 0.34

Antimony 0.33

Gross Beta Activity 0.33

Total organic carbon 0.31

Gross Alpha Activity 0.30

Cyanide 0.26

Iron 0.26

Residual Disinfectant – Free 0.23

Arsenic 0.23

Pesticides – Total Substances 0.23

Coliform Bacteria (Total coliforms) 0.23

Copper 0.19

PAH – Sum Of 4 Substances 0.19

Nitrite 0.17

Colony Counts After 48 Hours At 37°C 0.16

Nickel 0.13

Nitrite/Nitrate formula 0.13

Nitrate 0.12

Cadmium 0.11

Turbidity 0.08

Bromate 0.08

Colour 0.06

Lead -0.10

Manganese -0.12

Hydrogen ion (pH) -0.12

Aluminium -0.15

Chromium -0.15

Ammonium (total) -0.22

2,4-DB -0.25

Residual Disinfectant – Total -0.36

2,4-D -0.42

Dicamba -0.42

MCPB -0.42

MCPP(Mecoprop) -0.42

Scottish Water definition of Conductivity

Conductivity is proportional to the dissolved solids content of the water and is often used as an indication of the presence of dissolved minerals, such as calcium, magnesium and sodium.

Anyway, not sure if that’s any help to you at all but I enjoy your blog and thought I would send it in. Let me know if you have any questions.



Hi Al,

Wow, thanks for this! We’ll take a look and do a little more analysis if that’s all right, and get back to you shortly. 

Do you know the units for the different measurements here, especially for the lithium? We’d be interested in seeing the original PDFs as well if that’s not too much hassle.




You’re welcome! That’s great if you can analyse it as I am very much an amateur. 

The units for the Lithium measurements are µgLi/l. I’ve attached the Lithium levels Scottish Water sent me. I think they cover every water source they test in Scotland (though my analysis only covered about 15 water sources).

Sorry I don’t have access to the original pdfs as they’re on my other computer and I’m away at the moment. But I have downloaded a couple of pdfs online. Unfortunately the online versions have been updated since I did my analysis in late November, but hopefully you can get the idea from them and see what measurements Scottish Water use.

Let me know if you’d like anything else.



Hey Al,

So we’ve taken a closer look at the data and while everything is encouraging, we don’t feel that we’re able to draw any strong conclusions.

We also get a correlation of 0.47 between obesity and lithium levels in the water. The problem is, this relationship isn’t significant, p = 0.078. Basically this means that the data are consistent with a correlation anywhere between -0.06 and 0.79, and since that includes zero (no relationship), we say that it’s not significant.

This still looks relatively good for the hypothesis — most of the confidence interval is positive, and these data are in theory consistent with a correlation as high as 0.79. But on the whole it’s weak evidence, and doesn’t meet the accepted standards.

The main reason this isn’t significant is that there are only 15 towns in the dataset. As far as sample sizes go, this is very small, and it’s just not much information to work with. For similar reasons, we haven’t done any more complicated analyses, because we wouldn’t be able to find much with such a small sample to work with. 
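If you want to check numbers like these yourself, the p-value and confidence interval for a Pearson correlation can be computed from just r and n. Here’s a minimal sketch in Python (using scipy; the function names are our own):

```python
import numpy as np
from scipy import stats

def pearson_ci(r, n, alpha=0.05):
    """Fisher-z confidence interval for a Pearson correlation."""
    z = np.arctanh(r)                      # Fisher z-transform of r
    se = 1.0 / np.sqrt(n - 3)              # standard error of z
    crit = stats.norm.ppf(1 - alpha / 2)   # 1.96 for a 95% interval
    return tuple(np.tanh([z - crit * se, z + crit * se]))

def pearson_p(r, n):
    """Two-sided p-value for r under H0: no correlation (t with n-2 df)."""
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
    return 2 * stats.t.sf(abs(t), df=n - 2)

lo, hi = pearson_ci(0.47, 15)              # r = 0.47 with only 15 towns
print(f"95% CI: ({lo:.2f}, {hi:.2f})")     # roughly (-0.06, 0.79)
print(f"p = {pearson_p(0.47, 15):.3f}")    # roughly 0.078
```

Nothing magic is happening here; with n = 15 the standard error is so large that even r = 0.47 can’t be told apart from zero.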

Another problem is that correlation is designed to work with bivariate normal distributions — two variables, both of them approximately normally distributed, like so: 

Usually this doesn’t matter a ton. Even if you’re looking at a correlation where the two variables aren’t really normally distributed, it’s usually ok. And sometimes you can use transformations to make the data more normal before doing your analysis. But in this case, the distribution doesn’t look like a bivariate normal at all:  

Only four towns in the dataset have seriously elevated lithium levels, and those are the four fattest towns in the dataset. So this is definitely consistent with the hypothesis.

But the distribution is very strange and very extreme. In our opinion, you can’t really interpret a correlation you get from data that looks like this, because while you can calculate a correlation coefficient, correlation was never intended to describe data that are distributed like this.

On the other hand, we asked a friend about this and he said that he thinks a correlation is fine as long as the residuals are normal (we won’t get into that here), and they pretty much are normal, so maybe a correlation is fine in this case? 
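For the curious, that residuals check is easy to sketch. The data below are made up to mimic the shape of the Scottish sample (most towns near zero lithium, a few elevated); everything here, including the choice of the Shapiro-Wilk test as the normality check, is our own stand-in and not part of Al’s analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical stand-in data: 11 towns near zero lithium, 4 elevated (µg/l)
lithium = np.array([0.5] * 11 + [5.0, 5.5, 6.0, 7.0]) + rng.normal(0, 0.2, 15)
obesity = 25 + 0.8 * lithium + rng.normal(0, 1.5, 15)   # obesity in %

# Fit a line, then test whether the residuals look normal
res = stats.linregress(lithium, obesity)
residuals = obesity - (res.intercept + res.slope * lithium)
stat, p_norm = stats.shapiro(residuals)   # H0: residuals are normally distributed
print(f"Shapiro-Wilk p = {p_norm:.3f}")   # a large p means no evidence against normality
```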

A possible way around this problem is nonparametric correlation tests, which don’t assume a bivariate normal distribution in the first place. Theoretically these should be kosher to use in this scenario because none of their assumptions are violated, though we admit we don’t use nonparametric methods very often. 

Anyways, both of the nonparametric correlation tests we tried were statistically significant — Kendall rank correlation was significant (tau = 0.53, p = .015), and so was the Spearman rank correlation (rho = 0.64, p = .011). Per these tests, obesity and lithium levels are positively correlated in this dataset. The friend we talked to said that in his opinion, nonparametric tests are the more conservative option, so the fact that these are significant does seem suggestive. 
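Both tests are one-liners in scipy if you want to try them on your own data. The fifteen towns below are invented for illustration (lithium in µg/l, obesity in %), so the coefficients come out stronger than the ones we report above:

```python
from scipy import stats

# Hypothetical illustrative data for 15 towns, not Al's actual dataset
lithium = [0.2, 0.5, 0.8, 1.0, 1.3, 1.5, 1.8, 2.0, 2.4, 2.8, 3.1, 5.5, 6.0, 6.5, 7.0]
obesity = [24, 27, 25, 26, 29, 28, 27, 30, 29, 31, 30, 33, 35, 34, 36]

tau, p_tau = stats.kendalltau(lithium, obesity)   # rank-based, counts concordant pairs
rho, p_rho = stats.spearmanr(lithium, obesity)    # Pearson r computed on the ranks
print(f"Kendall tau = {tau:.2f}, p = {p_tau:.4f}")
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```

Because they only use ranks, these tests don’t care that the lithium values are bunched up near zero with a few high values, which is exactly why they’re a reasonable fallback here.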

We’re still hesitant to draw any strong conclusions here. Even if the correlations are significant, we’re working with only 15 observations. The lithium levels only go up to 7 ppb in these data, which is still pretty low, at least compared to lithium levels in many other areas. So overall, our conclusion is that this is certainly in line with the lithium hypothesis, but not terribly strong evidence either way.

A larger dataset of more than 15 towns would give us a bit more flexibility in terms of analysis. But we’re not sure it would be worth your time to put it together. It would be interesting if the correlation were still significant with 30 or 40 towns, and we could account for some of the other variables like Boron and Chloride. But, as we’ve mentioned before, in this case there are several reasons that a correlation might appear to be much smaller than it actually is. And in general, we think it can sometimes be misleading to use correlation outside the limited set of problems it was designed for (for example, in homeostatic systems).

That said, if you do decide to expand the dataset to more towns, we’d be happy to do more analysis. And above all else, thank you for sharing this with us!


[Addendum: In case anyone is interested in the distribution in the full lithium dataset, here’s a quick plot of lithium levels by Scottish Unitary Authority: 


Thanks so much for looking at it. Sounds like I need to brush up on my statistics! Depending how bored I get I may extend it to 40 towns some time, but for now I’ll stick with experimenting with a water filter.

All the best,


The Only True Wisdom is Knowing that You Can’t Draw a Bicycle


Early on in science, there could never even have been a replication crisis or anything, because everyone was just trying all the stuff. They were writing letters to each other with directions, trying each other’s studies, and seeing what they could confirm for themselves.  

Today, scientists would tell you that replicating someone else’s work takes decades of specialized training, because most findings are too subtle and finicky to be reproduced by just anyone. For example, consider this story from Harvard psychology professor Jason Mitchell, about how directions depend on implicit knowledge, and how it’s impossible to fully explain your procedure to anyone:

I have a particular cookbook that I love, and even though I follow the recipes as closely as I can, the food somehow never quite looks as good as it does in the photos. Does this mean that the recipes are deficient, perhaps even that the authors have misrepresented the quality of their food? Or could it be that there is more to great cooking than just following what’s printed in a recipe? I do wish the authors would specify how many millimeters constitutes a “thinly” sliced onion, or the maximum torque allowed when “fluffing” rice, or even just the acceptable range in degrees Fahrenheit for “medium” heat. They don’t, because they assume that I share tacit knowledge of certain culinary conventions and techniques; they also do not tell me that the onion needs to be peeled and that the chicken should be plucked free of feathers before browning. … Likewise, there is more to being a successful experimenter than merely following what’s printed in a method section. Experimenters develop a sense, honed over many years, of how to use a method successfully. Much of this knowledge is implicit.

Mitchell believes in a world where findings are so fragile that only extreme insiders, close collaborators of the original team, could possibly hope to reproduce their findings. The implicit message here is something like, “don’t bother replicating ever; please take my word for my findings.” 

The general understanding of replication is slightly less extreme. To most researchers, replication is when one group of scientists at a major university reproduces the work of another group of scientists at a different major university. There’s also a minority position that replications should be done by many labs, that replication is an internal process of double-checking: “take the community’s word”. 

But this doesn’t seem quite right to us either. If a finding can’t be confirmed by outsiders like you — if you can’t see it for yourself — it doesn’t really “count” as replication. This used to be the standard of evidence (confirm it for yourself or don’t feel bound to take it seriously) and we think this is a better standard to hold ourselves to.

It’s not that Mitchell is wrong — he’s right, there is a lot of implicit knowledge involved in doing anything worth doing. Sometimes science is really subtle and hard to replicate at home; other times, it isn’t. But whether a particular study is easy or hard to replicate is a dodge. This argument is a load of crap because the whole point of doing research in the first place is the fight against received wisdom.

The motto of the Royal Society, one of the first scientific societies, was and still is nullius in verba. Roughly translated, this means, “take no one’s word” or “don’t take anyone’s word for it”. We think this is a great motto. It’s a good summary of the kind of spirit you need to investigate the world. You have the right to see for yourself and make up your own mind; you shouldn’t have to take someone’s word. If you can take someone else’s word for it — a king, maybe — then why bother? 

In the early 1670s, Antonie van Leeuwenhoek started writing to the Royal Society, talking about all the “little animals” he was seeing in drops of pond water when he examined them under his new microscopes. Long particles with green streaks, wound about like serpents, or the copper tubing in a distillery. Animals fashioned like tiny bells with long tails. Animals spinning like tops, or shooting through the water like pikes. “Little creatures,” he said, “above a thousand times smaller than the smallest ones I have ever yet seen upon the rind of cheese.”

Wee beasties

Naturally, the Royal Society found these reports a little hard to believe. They had published some of van Leeuwenhoek’s letters before, so they had some sense of who the guy was, but this was almost too much:

Christiaan Huygens (son of Constantijn), then in Paris, remained sceptical, as was his wont: ‘I should greatly like to know how much credence our Mr Leeuwenhoek’s observations obtain among you. He resolves everything into little globules; but for my part, after vainly trying to see some of the things which he sees, I much misdoubt me whether they be not illusions of his sight’. The Royal Society tasked Nehemiah Grew, the botanist, to reproduce Leeuwenhoek’s work, but Grew failed; so in 1677, on succeeding Grew as Secretary, Hooke himself turned his mind back to microscopy. Hooke too initially failed, but on his third attempt to reproduce Leeuwenhoek’s findings with pepper-water (and other infusions), Hooke did succeed in seeing the animalcules—‘some of these so exceeding small that millions of millions might be contained in one drop of water’ 

People were skeptical and didn’t take van Leeuwenhoek at his word alone. They tried to get the same results, to see these little animals for themselves, and for a number of years they failed. They got no further help from van Leeuwenhoek, who refused to share his methods, or the secrets of how he made his superior microscopes. Yet even without a precise recipe, Hooke was eventually able to see the tiny, wonderful creatures for himself. And when he did, van Leeuwenhoek became a scientific celebrity almost overnight. 

If something is the truth about how the world works, the truth will come out, even if it takes Robert Hooke a few years to confirm your crazy stories about the little animals you saw in your spit. Yes, research is very exacting, and can demand great care and precision. Yes, there is a lot of implicit knowledge involved. The people who want to see for themselves might have to work for it. But if you think what you found is the real McCoy, then you should expect that other people should be able to go out and see it for themselves. And assuming you are more helpful than van Leeuwenhoek, you should be happy to help them do it. If you don’t think people will be able to replicate it at their own bench, are you sure you think you’ve discovered something?

Fast forward to the early 1900s. Famous French physicist Prosper-René Blondlot is studying X-rays, which had first been described by Wilhelm Röntgen in 1895. This was an exciting time for rays of all stripes — several forms of invisible radiation had just been discovered, not only X-rays but ultraviolet light, gamma rays, and cathode rays. 

Also he looked like a wizard

So Blondlot was excited, but not all that surprised, when he discovered yet another new form of radiation. He was firing X-rays through a quartz prism and noticed that a detector was glowing when it shouldn’t be. He performed more experiments and in 1903 he announced the discovery of: N-rays!  

Blondlot was a famous physicist at a big university in France, so everyone took this seriously and they were all very excited. Soon other scientists had replicated his work in their own labs and were publishing scores of papers on the subject. They began documenting the many strange properties of N-rays. The new rays would pass right through many substances that blocked light, like wood and aluminum, but were obstructed by water, clouds, and salt. They were emitted by the sun and by human bodies (especially flexed muscles and certain areas of the brain), as well as by rocks that had been left in the sun and been allowed to “soak up” the N-rays from sunlight. 

The procedure for detecting these rays wasn’t easy. You had to do everything just right — you had to use phosphorescent screens as detectors, you had to stay in perfect darkness for a half hour so your eyes could acclimate, etc. Fortunately Blondlot was extremely forthcoming and always went out of his way to help provide these implicit details he might not have been able to fit in his reports. And he was vindicated, because with his help, labs all over the place were able to reproduce and extend his findings.

Well, all over France. Some physicists outside France, including some very famous ones, weren’t able to reproduce Blondlot’s findings at all. But as before, Blondlot was very forthcoming and did his best to answer everyone’s questions. 

Even so, over time some of the foreigners began to get a little suspicious. Eventually some of them convinced an American physicist, Robert W. Wood, to go visit Blondlot in France to see if he could figure out what was going on. 

What a dude. Classic American

Blondlot took Wood in and gave him several demonstrations. To make a long story short (you can read Wood’s full account here; it’s pretty interesting), Wood found a number of problems with Blondlot’s experiments. The game was really up when Wood secretly removed a critical prism from one of the experiments, and Blondlot continued reporting the same results as if nothing had happened. Wood concluded that N-rays and all the reports had been the work of self-deception, calling them “purely imaginary”. Within a couple of years, no one believed in N-rays anymore, and today they’re seen as a cautionary tale. 

So much for the subtlety and implicit knowledge needed to do cutting-edge work. Maybe your results are hard to get right, but maybe if other people can’t reproduce your findings, they shouldn’t take your word for it.

This is the point of all those chemistry sets your parents (or cool uncle) gave you when you were a kid. This is the point of all those tedious lab classes in high school. They were poorly executed and all, but this was the idea. If whatever Röntgen or Pasteur or Millikan or whoever found is for real, you should be able to reproduce the same thing for yourself in your high school with only the stoner kid for a lab assistant (joke’s on you, stoners make great chemists — they’re highly motivated).

Some people will scoff. After all, what kind of teenager can replicate the projects reported in a major scientific journal? Well, as just one example, take Dennis Gabor: “during his childhood in Budapest, Gabor showed an advanced aptitude for science; in their home laboratory, he and his brother would often duplicate the experiments they read about in scientific journals.”

Clearly some studies will be so complicated that Hungarian teenagers won’t be able to replicate them, or may require equipment they don’t have access to. And of course the Gabor brothers were not your average teenagers. But it used to be possible, and it should be made possible whenever possible. Because otherwise you are asking the majority of people to take your claims on faith. If a scientist is choosing between two lines of work of equal importance, one that requires a nuclear reactor and the other that her neighbor’s kids can do in their basement, she should go with the basement.

It’s good if one big lab can recreate what another big lab claims to have found. But YOU are under no obligation to believe it unless you can replicate it for yourself.

You can of course CHOOSE to trust the big lab, look at their report and decide for yourself. But that’s not really replication. It’s taking someone’s word for something. 

There’s nothing wrong with taking someone’s word; you do it all the time. Some things you can’t look into for yourself; and even if you could, you don’t have enough time to look into everything. So we are all practical people and take the word of people we trust for lots of things. But that’s not replication.

Something that you personally can replicate is replication. Watching someone else do it is also pretty close, since you still get to see it for yourself. Something that a big lab would be able to replicate is not really replication. It’s nice to have confirmation from a second lab, but now you’re just taking two people’s word for it instead of one person’s. Something that can in principle be replicated, but isn’t practical for anyone to actually attempt, is not replication at all.

If it cannot be replicated even in principle, then what exactly do you think you’re doing? What exactly do you think you’ve discovered here? 

What ever happened to all the public science demonstrations

We find it kind of concerning that “does replicate” or “doesn’t replicate” have come to be used as synonyms of “true” and “untrue”. It’s not enough to say that things replicate or not. Blondlot’s N-ray experiments were replicated hundreds of times around France, until all of a sudden they weren’t; van Leeuwenhoek’s observations of tiny critters in pond water weren’t replicated for years, until they were. The modern take on replication (lots of replications from big labs = good) would have gotten both of these wrong. 


If knowing the truth about some result is important to you, don’t just take someone’s word for it. Don’t leave it up to the rest of the world to do this work; we’re all bunglers, you should know that. If you can, you should try it for yourself.

So let’s look at some examples of REAL replication. We’ll take our examples from psychology, since as we saw earlier, they’re in the thick of the modern fight over replication.

We also want to take a minute to defend the psychologists, at least on the topic of replication (psychology has other sins, but that’s a subject for another time). Psychology has gotten a lot of heat for being the epicenter of the replication crisis. Lots of psychology studies haven’t replicated under scrutiny. There have been many high-profile disputes and attacks. Lots of famous findings seem to be made out of straw.

Some people have taken this as a sign that psychology is all bunkum. They couldn’t be more wrong — it’s more like this. One family in town gets worried and hires someone to take a look at their house. The specialist shows up and sure enough, their house has termites. Some of the walls are unsafe; parts of the structure are compromised. The family is very worried but they start fumigating and replacing boards that the termites have damaged to keep their house standing. All the other families in town laugh at them and assume that their house is the most likely to fall down. But the opposite is true. No other family has even checked their home for termites; but if termites are in one house in town, they are in other houses for sure. The first family to check is embarrassed, yes, but they’re also the only family who is working to repair the damage.

The same thing is going on in psychology. It’s very embarrassing for the field to have its big mistakes aired in public; but psychology is also distinct for being the first field willing to take a long hard look at itself and make a serious effort to change for the better. They haven’t done a great job, but they’re one of the only fields that is even trying. We won’t name names but you can bet that other fields have just as many problems with p-hacking — the only difference is that those fields are doing a worse job rooting it out. 

The worst thing you can say about psychology is that it is still a very young field. But try looking at physics or chemistry when they were only 100 years old, and see how well they were doing. From this perspective, psychology is doing pretty ok. 

Despite setbacks, there has been some real progress in psychology. So here are a few examples of psychological findings that can actually be replicated, by any independent researcher in an afternoon. You don’t have to take our word or anyone else’s word for these findings if you don’t want to. Try it for yourself! Please do try this at home, that’s the point.

Are these the most important psychology findings? Probably not — we picked them because they’re easy to replicate, and you should be able to confirm their results from your sofa (disclaimer: for some of them, you may have to leave your sofa). But all of them are things we didn’t know about 150 years ago, so they represent a real advance in what we know about the mind.

For most of these you will need a small group of people, because most of these are statistically true results, not guaranteed to work in every case. But as long as you have a dozen people or so, they should be pretty reliable.

Draw a Bicycle — Here’s a tricky one you can do all on your own. You’ve seen a bicycle before, right? You know what they look like? Ok, draw one. 

Unless you’re a bicycle mechanic, chances are you’ll be really rubbish at this — most people are. While you can recognize a bicycle no problem, you don’t actually know what one looks like. Most people produce drawings that look something like this:

Needless to say, that’s not a good representation of the average bicycle.

Seriously, try this one yourself right now. Don’t look up what a bicycle looks like; draw it as best you can from memory and see what you get. We’ll put a picture of what a bicycle actually looks like at the end of this post. 

Then, tweet your bicycle drawings at us at @mold_time on twitter

(A similar example: which of the images below shows what a penny looks like?)

Wisdom of the Crowd — Wisdom of the crowd refers to the fact that people tend to make pretty good guesses on average even when their individual guesses aren’t that good. 

You can do this by having a group of people guess how many jellybeans are in a jar of jellybeans, or how much an ox weighs. If you average all the guesses together, most of the time it will be pretty close to the right answer. But we’ve found it’s more fun to stand up there and ask everyone to guess your age.

We’ve had some fun doing this one ourselves, it’s a nice trick, though you need a group of people who don’t know you all that well. It works pretty well in a classroom. 

This only works if everyone makes their judgments independently. To make sure they don’t influence each other’s guesses, have them all write down their guesses on a piece of paper before anyone says a number out loud. 

Individual answers are often comically wrong — sometimes off by up to a decade in either direction — but we’ve been very impressed. In our experience the average of all the guesses is very accurate, often to within a couple of months. But give it a try for yourself.
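If you can’t find a willing classroom, the averaging effect is easy to see in a quick simulation. The sketch below assumes guessers who are individually noisy but unbiased, which is the key condition for the trick to work:

```python
import numpy as np

rng = np.random.default_rng(42)
true_age = 34
# 30 hypothetical guessers, each off by a few years in a random direction
guesses = true_age + rng.normal(0, 5, size=30)

crowd_guess = guesses.mean()
avg_individual_error = np.abs(guesses - true_age).mean()
crowd_error = abs(crowd_guess - true_age)
print(f"average individual error: {avg_individual_error:.1f} years")
print(f"error of the crowd average: {crowd_error:.1f} years")
```

The independent errors mostly cancel out when averaged. If everyone shares the same bias (say, someone blurts out “40” before the papers are collected), the errors no longer cancel, which is why the guesses have to be made independently.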

Emotion in the Face — You look at someone’s face to see how they’re feeling, right? Well, maybe. There’s a neat paper from a few years ago that has an interesting demonstration of how this isn’t always true. 

They took photos of tennis players who had just won a point or who had just lost a point, and cut apart their faces and bodies (in the photos; no tennis pros were harmed, etc.). Then they showed people just the bodies or just the faces and asked them to rate how positively or negatively the person was feeling:

They found that people could usually tell that a winning body was someone who was feeling good, and a losing body was someone feeling bad. But with just the faces, they couldn’t tell at all. Just look above – for just the bodies, which guy just won a point? How about for the faces, who won there?

Then they pushed it a step further by putting winning faces on losing bodies, and losing faces on winning bodies, like so:

Again, the faces didn’t seem to matter. People thought chimeras with winning bodies felt better than chimeras with losing bodies, and seemed to ignore the faces.

This one should be pretty easy to test for yourself. Go find some tennis videos on the internet, and take screenshots of the players when they win or lose a point. Cut out the faces and bodies and show them to a couple friends, and ask them to rate how happy/sad each of the bodies and faces seems, or to guess which have just won a point and which have just lost. You could do this one in an afternoon. 

Anchoring — This one is a little dicey, and you’ll need a decent-sized group to have a good chance of seeing it. 

Ask a room of people to write down some number that will be different for each of them — like the last four digits of their cell phone number, or the last two digits of their student ID or something. Don’t ask for part of their social security number or something that should be kept private. 

Let’s assume it’s a classroom. Everyone takes out their student ID and writes down the last two digits of their ID number. If your student ID number is 28568734, you write down “34”.

Now ask everyone to guess how old Mahatma Gandhi was when he died, and write that down too. If this question bores you, you can ask them something else — the average temperature in Antarctica, the average number of floors in buildings in Manhattan, whatever you like.

Then ask everyone to share their answers with you, and write them on the board. You should see that people who have higher numbers as the last two digits of their student ID number (e.g. 78 rather than 22) will guess higher numbers for the second question, even though the two numbers are unrelated. They call this anchoring. You can plot the student ID digits and the estimates of Gandhi’s age on a scatterplot if you like, or even calculate the correlation. It should come out positive.
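If you collect the numbers, the correlation takes a couple of lines to check. The classroom data below is invented to show a strong anchoring pattern; real classroom data will be much noisier:

```python
from scipy import stats

# Hypothetical classroom results: last two ID digits, then guesses of Gandhi's age at death
id_digits = [12, 22, 34, 45, 51, 63, 71, 78, 85, 96]
age_guess = [55, 60, 58, 65, 70, 68, 75, 72, 80, 79]

r, p = stats.pearsonr(id_digits, age_guess)
print(f"r = {r:.2f}, p = {p:.4f}")   # positive r: higher anchors, higher guesses
```

(Gandhi died at 78, if your class wants to know who was closest.)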

Inattentional Blindness — If you’ve taken an intro psych class, then you’re familiar with the “Invisible Gorilla” (for everyone else, sorry for spoiling). In the biz they call this “inattentional blindness” — when you aren’t paying attention, or your attention is focused on one task, you miss a lot of stuff.

Turns out this is super easy to replicate, especially a variant called “change blindness”, where you change something but people don’t notice. You can swap out whole people and about half the time, no one picks up on it.

Because it’s so easy, people love to replicate this effect. Like this replication from NOVA, or this British replication, or this replication from National Geographic. You can probably find a couple more on YouTube if you dig around a bit. 

This one isn’t all that easy to do at home, but if you can find a couple accomplices and you’re willing to play a prank on some strangers, you should be able to pull it off. 

(Or you can replicate it in yourself by playing I’m on Observation Duty.)

False Memory — For this task you need a small group of people. Have them put away their phones and writing tools; no notes. Tell them you’re doing a memory task — you’ll show them a list of words for 30 seconds, and you want them to remember as many words as possible. 

Then, show them the following list of words for 30 seconds or so: 

After 30 seconds, hide or take down the list. 

Then, wait a while for the second half of the task. If you’re doing this in a classroom, do the first step at the beginning of class, and the second half near the end.

Anyways, after waiting at least 10 minutes, show them these words and ask them, which of the words was on the original list? 

Most people will incorrectly remember “sleep” as being on the original list, even though, if you go back and check, it’s not. What’s going on here? Well, all of the words on the original list are related to sleep — sleep adjectives, sleep sounds, sleep paraphernalia — and this leads to a false memory that “sleep” was on the list as well. 

You can do the same thing for other words if you want — showing people a list of words like “sour”, “candy”, and “sugar” should lead to false memories of the word “sweet”. You can also read the list of words aloud instead of showing it on a screen for 30 seconds; you should get the same result either way. 


Draw your own conclusions about what this tells us about memory, but the effect should be pretty easy to reproduce for yourself.

We don’t think all false memory findings in psychology bear out. We think some of them aren’t true, like the famous Loftus & Palmer (1974) study, which we think is probably bullshit. But we do think it’s clear that it’s easy to create false memories under the right circumstances, and you can do it in the classroom using the approach we describe above.

You can even use something like the inattentional blindness paradigms above to give people false memories about their political opinions. A little on the tricky side but you should also be able to replicate this one if you can get the magic trick right. And if this seems incredible, ridiculous, unbelievable — try it for yourself! 

Oh yeah, and here’s that bicycle: 

Three Angles on Erik Hoel’s Aristocratic Tutoring

Erik Hoel, concerned that we’re not getting our fair share of geniuses, suggests that aristocratic tutoring is what’s missing:

Let us call this past form aristocratic tutoring, to distinguish it from a tutor you meet in a coffeeshop to go over SAT math problems while the clock ticks down. It’s also different than “tiger parenting,” which is specifically focused around the resume padding that’s needed for kids to meet the impossible requirements for high-tier colleges. Aristocratic tutoring was not focused on measurables. Historically, it usually involved a paid adult tutor, who was an expert in the field, spending significant time with a young child or teenager, instructing them but also engaging them in discussions, often in a live-in capacity, fostering both knowledge but also engagement with intellectual subjects and fields.

“Aristocratic tutoring” is not how we would describe it, but otherwise this sounds about right. We think Erik is right that historical tutoring was better than education today. But we don’t think being aristocratic is what made it better. So here are three other angles on the same idea:


It’s no secret that school sux. It’s not that tutoring is good, it’s that mechanized schooling is really bad. If we got rid of formal 20th century K-12 education, and did homeschooling / unschooling / let kids work at the costco, we would get most of the benefits of tutoring without all the overhead and inequality.

Our personal educational philosophy is that, for the most part, the most important thing you can do for your students is expose them to things they wouldn’t have encountered otherwise. Sort of in the spirit of, you can lead a horse to water, but you can’t make him drink. So K-12 education gums up the works by making bad recommendations, having students spend a lot of time on mediocre stuff, and keeping them so busy they can’t follow up on the better recommendations from friends and family. 

From this perspective, mechanized schooling is actually a net negative — it is worse than nothing, and if we just let kids run around hitting each other with sticks or whatever, we would get more geniuses. 

Future Geniuses

But another possibility is that mechanized schooling is net neutral, and the problem is that we’ve lost some active ingredient that makes tutoring effective.


Education no longer includes moral instruction. Back in the day, a proper education taught you more than “the mitochondria is the powerhouse of the cell” — it taught you to take your character as seriously as your scholarship, to lead and to serve, and to understand your moral responsibilities. Tutoring worked because tutors inspired their pupils. Modern education is a lot of things, but “inspiring” ain’t one of them.

Back when formal education could still be inspiring, it still produced brilliant individuals. People have pointed out that the Manhattan Project was led by a group of strangely brilliant Hungarian scientists. Not only did most of them come from Budapest, many of them went to the same high school, and some of them had the same math teacher, László Rátz. Eugene Wigner, a Nobel Laureate in physics and one of Rátz’s pupils, had this to say:

… there were many superb teachers at the Lutheran gymnasium. But the greatest was my mathematics teacher László Rátz. Rátz was known not only throughout our gymnasium but also by the church and government hierarchy and among many of the teachers in the country schools. I still keep a photograph of Rátz in my workroom because he had every quality of a miraculous teacher: He loved teaching. He knew the subject and how to kindle interest in it. He imparted the very deepest understanding. Many gymnasium teachers had great skill, but no one could evoke the beauty of the subject like Rátz.

Rátz may or may not have been responsible for Wigner’s success, and he didn’t teach everyone involved in the Manhattan Project; our point is just that these Hungarians lived in a time when high school math teachers could still inspire former students to describe them as “miraculous”. This seems to be an aspect of the educational system that we have lost.

If this is right, then we don’t need to worry about tutoring being aristocratic. You shouldn’t need tutors or even miraculous Hungarian math teachers. Other things that are also inspiring / socially encouraging would work just as well — see for example the amazing progress of the speedrunning community, a bunch of teenage nerds bootstrapping a scene by inspiring one another to insane degrees of precision.

Erik hints at this by mentioning the social element. “For humans,” he says, “engagement is a social phenomenon; particularly for children, this requires interactions with adults who can not just give them individual attention, but also model for them what serious intellectual engagement looks like.” Individual attention is good, but we also think kids are good at teaching themselves. The active ingredient to us is showing kids “what serious intellectual engagement looks like”, and most kids today don’t see that until college (if ever).


The real problem is segregating children. Tutoring worked because you exposed children to people practicing a real skill (even if it’s only speaking their native language), or working in an actual profession. Modern education exposes them only to teachers.

At the end of your German tutelage you can speak to people you wouldn’t have been able to speak to before, read books and poems you wouldn’t have been able to read. At the end of your taxidermy tutelage you can take samples and stuff birds, and could theoretically make a living at it. Meanwhile at the end of high school you can write a five-point essay, a “skill” that you will never use again as long as you live.

So the problem is not the lack of tutoring per se, as much as the lack of giving children any sense of the real world at all. Today, children have to be sent to guidance counselors to be advised on what is out there. Teenagers dream of being youtubers and influencers. This isn’t their fault — these are some of the only professions where they actually understand what is involved. It’s the fault of adults, for not letting children see any of the many ways they could actually go out and exercise their powers in the world.

But tutoring isn’t the only way to expose children to real skills. Working in the family business did the same thing, and so did apprenticeships. Writing about why nerds are unpopular, Paul Graham says: 

I’m suspicious of this theory that thirteen-year-old kids are intrinsically messed up. If it’s physiological, it should be universal. Are Mongol nomads all nihilists at thirteen? I’ve read a lot of history, and I have not seen a single reference to this supposedly universal fact before the twentieth century. Teenage apprentices in the Renaissance seem to have been cheerful and eager. They got in fights and played tricks on one another of course (Michelangelo had his nose broken by a bully), but they weren’t crazy.

As far as I can tell, the concept of the hormone-crazed teenager is coeval with suburbia. I don’t think this is a coincidence. I think teenagers are driven crazy by the life they’re made to lead. Teenage apprentices in the Renaissance were working dogs. Teenagers now are neurotic lapdogs. Their craziness is the craziness of the idle everywhere.

Paul is right; in many parts of the world, useful apprenticeship was the historical norm. As anthropologist David Graeber writes:  

Feudal society was a vast system of service… the form of service that had the most important and pervasive influence on most people’s lives was not feudal service but what historical sociologists have called “life-cycle” service. Essentially, almost everyone was expected to spend roughly the first seven to fifteen years of his or her working life as a servant in someone else’s household. Most of us are familiar with how this worked itself out within craft guilds, where teenagers would first be assigned to master craftsmen as apprentices, and then become journeymen… In fact, the system was in no sense limited to artisans. Even peasants normally expected to spend their teenage years onward as “servants in husbandry” in another farm household, typically, that of someone just slightly better off. Service was expected equally of girls and boys (that’s what milkmaids were: daughters of peasants during their years of service), and was usually expected even of the elite. The most familiar example here would be pages, who were apprentice knights, but even noblewomen, unless they were at the very top of the hierarchy, were expected to spend their adolescence as ladies-in-waiting—that is, servants who would “wait upon” a married noblewoman of slightly higher rank, attending to her privy chamber, toilette, meals, and so forth, even as they were also “waiting” for such time as they, too, were in a position to marry and become the lady of an aristocratic household themselves.

Service was especially pervasive in England. “Few are born who are exempted from this fate,” wrote a Venetian visitor around 1500, “for everyone, however rich he may be, sends away his children into the houses of others, whilst he, in return, receives those of strangers into his own.”

Even just having your children around adults and being a part of adult conversations will go a long way. For what it’s worth, this is how we were raised, i.e. mostly around adults.

This may be another element common to the cases Erik mentions — most of the geniuses he names seem to have had very little contact with children outside their immediate family. Whether or not this is good for children psychologically is a separate question, but it does seem to lead to very skilled adults.

In fact, the number of children in a family might also be a factor. There was a time when most families were pretty large, so a lot of children had several older siblings. If you have five older brothers, you get both benefits — other children to play with, and a more direct line to adulthood through your older siblings. Erik mentions the example of Bertrand Russell, and we wonder if this might be more representative than he realizes:

When Bertrand Russell’s older brother introduced him to geometry at the age of 11, Russell later wrote in his autobiography that it was: “… one of the great events of my life, as dazzling as first love.” Is that really solely his innate genetic facility, or was mathematics colored by the love of his older brother?

It’s easy to come up with other examples (though of course this is not universal). Charles Darwin was the fifth of six children. The Polgár sisters are all chess prodigies, and were intentionally raised to be geniuses, but the youngest daughter Judit is the best of the three. Jane Austen had five older brothers and an older sister. Her eldest brother James wrote prologues and epilogues for plays the family staged and it seems as though this moved Jane to try her hand at something similar.

So part of the success of tutoring might simply be exposing a child to subjects “before they are ready”, and one way to reliably do that is to have them overhear the lessons of their older siblings, who they are ready to imitate.

This ties neatly into the social/moral element we mention above. Children may be moved by a passionate tutor, or a beloved uncle, or a cousin, or a medical student who lives in the spare room. But they will always be influenced by older siblings, and the more older siblings there are, the more gates to adult influence will be opened. Maybe if we want more geniuses, people need to start having larger families.

Control and Correlation


A thermostat is a simple example of a control system. A basic model has only a few parts: some kind of sensor for detecting the temperature within the house, and some way of changing the temperature. Usually this means it has the ability to turn the furnace off and on, but it might also be able to control the air conditioning. 

The thermostat uses these abilities to keep the house at whatever temperature a human sets it to — maybe 72 degrees. Assuming no major disturbances, the control system can keep a house at this temperature indefinitely.
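As a sketch, the whole control loop fits in a few lines. This is a toy “bang-bang” thermostat with invented numbers, not a model of any real device:

```python
import random

def thermostat_step(temp, setpoint=72.0, band=0.5):
    """Return heating/cooling output in degrees per tick:
    positive = furnace on, negative = A/C on, zero = idle."""
    if temp < setpoint - band:
        return 1.0    # too cold: run the furnace
    if temp > setpoint + band:
        return -1.0   # too hot: run the A/C
    return 0.0        # inside the comfort band: do nothing

random.seed(0)
temp = 65.0  # a cold house on a winter morning
for _ in range(200):
    outside = random.uniform(-0.8, 0.8)  # sun, wind, doors left open
    temp += outside + thermostat_step(temp)

print(round(temp, 1))
```

Despite the constant disturbances, the loop drags the temperature up to the setpoint and then holds it there indefinitely, which is all a basic thermostat does.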

In the real world, control systems are all over the place.

Imagine that a car is being driven across a hilly landscape.

A man is operating this car. Let’s call him Frank. Now, Frank is a real stickler about being a law-abiding citizen, and he always makes sure to go exactly the speed limit. 

On this road, the speed limit is 35 mph. So Frank uses the gas pedal and the brake pedal to keep the car going the speed limit. He uses the gas to keep from slowing down when the road slopes up, and to keep the car going a constant speed on straightaways. He uses the brake to keep from speeding up when the road slopes down.

The road is hilly enough that frequent use of the gas and brake is necessary. But it’s well within Frank’s ability, and he successfully keeps the needle on 35 mph the whole time. 

Together, Frank and the car form a control system, just like a thermostat, that keeps the car at a constant speed. You could also replace Frank’s brain with the car’s built-in cruise control function, if it has one, and that might provide an even more precise form of control. But whatever is doing the calculations, the entire system functions more or less the same way. 

Surprisingly, if you graph all the variables at play here — the angle of the road, the gas, the brake, and the speed of the car at each time point — speed will not be correlated with any of the other variables. Despite the fact that the speed is almost entirely the result of the combination of gas, brake, and slope (plus small factors like wind and friction), there will be no apparent correlation, because the control system keeps the car at a constant 35 mph. 

High precision technical diagram

Similarly, if you took snapshots of many different Franks, driving on many different roads at different times, there would be no correlation between gas and speed in this dataset either.

We understand something about the causal system that is Frank and his car, and how this system responds to local traffic regulations, so we understand that gas and brake and angle of the road ARE causally responsible for that speed of 35 mph. But if an alien were looking at a readout of the data from a bunch of cars, their different speeds, and the use of various drivers’ implements as they rattle along, it would be hard pressed to figure out that the gas makes the car speed up and the brake makes it slow down. 


We see that despite being causally related, gas and brake aren’t correlated with speed at all.
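If you don’t believe it, this is easy to check in simulation. Here’s a toy version of Frank — invented physics, and a proportional controller that also anticipates the hills he can see through the windshield, so control is near-perfect. We log slope, gas, brake, and speed, then compute the correlation matrix:

```python
import math
import random

import numpy as np

random.seed(1)
target, dt = 35.0, 0.1  # mph, time step
speed = target
log = {"slope": [], "gas": [], "brake": [], "speed": []}

for t in range(5000):
    slope = 5 * math.sin(t * 0.01)  # gentle rolling hills (made up)
    # Frank: push harder when below 35, brake when above, and
    # lean on the gas a bit extra when he sees a hill coming.
    effort = 2.0 * (target - speed) + 0.2 * slope
    gas, brake = max(effort, 0.0), max(-effort, 0.0)
    # Toy dynamics: pedals accelerate/decelerate the car, gravity
    # pulls along the slope, plus a little noise (wind, friction).
    accel = gas - brake - 0.2 * slope + random.gauss(0, 0.05)
    speed += accel * dt
    for key, val in (("slope", slope), ("gas", gas),
                     ("brake", brake), ("speed", speed)):
        log[key].append(val)

corr = np.corrcoef([log["slope"], log["gas"], log["brake"], log["speed"]])
print(np.round(corr, 2))
# Expect: slope strongly related to gas (+) and brake (-),
# while speed is correlated with nothing.
```

Slope, gas, and brake all move together, while speed, the one variable they jointly cause, shows near-zero correlation with each of them.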

This is a well-understood, if somewhat understated, problem in causal inference. We’ve all heard that correlation does not imply causation, but most of us assume that when one thing causes another thing, those two things will be correlated. Hotter temperatures cause ice cream sales; and they’re correlated. Fertilizer use causes bigger plants; correlated. Parental height causes child height; you’d better believe it, they’re correlated. 

But things that are causally related are not always correlated. Here’s another example, from a textbook on causal inference:

Weirdly enough, sometimes there are causal relationships between two things and yet no observable correlation. Now that is definitely strange. How can one thing cause another thing without any discernible correlation between the two things? Consider this example, which is illustrated in Figure 1.1. A sailor is sailing her boat across the lake on a windy day. As the wind blows, she counters by turning the rudder in such a way so as to exactly offset the force of the wind. Back and forth she moves the rudder, yet the boat follows a straight line across the lake. A kindhearted yet naive person with no knowledge of wind or boats might look at this woman and say, “Someone get this sailor a new rudder! Hers is broken!” He thinks this because he cannot see any relationship between the movement of the rudder and the direction of the boat.

Let’s look at one more example, from the same textbook: 

[The boat] sounds like a silly example, but in fact there are more serious versions of it. Consider a central bank reading tea leaves to discern when a recessionary wave is forming. Seeing evidence that a recession is emerging, the bank enters into open-market operations, buying bonds and pumping liquidity into the economy. Insofar as these actions are done optimally, these open-market operations will show no relationship whatsoever with actual output. In fact, in the ideal, banks may engage in aggressive trading in order to stop a recession, and we would be unable to see any evidence that it was working even though it was!


There’s something interesting that all of these examples — Frank driving the car, the sailor steering her boat, the central bank preventing a recession — have in common. They’re all examples of control systems.

Like we emphasized at the start, Frank and his car form a system for controlling the car’s speed. He goes up and down hills, but his speed stays at a constant 35 mph. If his control is good enough, there will be no detectable variation in the speed at all. 

The sailor and her rudder are acting as a control system in the face of disturbances introduced by the wind. Just like Frank and his car, this control system is so good that to an external observer, there appears to be no change at all in the variable being controlled.

The central bank is doing something a little more complicated, but it is also acting as a control system. Trying to prevent a recession amounts to controlling something like the growth rate of the economy. In this example, the economy keeps growing at about the same rate because of the central bank’s canny use of open-market operations, bonds, liquidity, etc. in response to some external shock that would otherwise cause growth to stall or plummet — that is, cause a recession. And “insofar as these actions are done optimally, these open-market operations will show no relationship whatsoever with actual output.”

The same thing will happen with a good enough thermostat, especially if it has access to both heating and cooling / air conditioning. The thermostat will operate its different interventions in response to external disturbances in temperature (from the sun, wind, doors being left open, etc.), and the internal temperature of the house will remain at 72 degrees, or whatever you set it at.

If you looked at the data, there would be no correlation between the house’s temperature and the methods used to control that temperature (furnace, A/C, etc.), and if you didn’t know what was going on, it would be hard to tell what was causing what.

In fact, we think this is the case for any control system. If a control system is working right, the target — the speed of Frank’s car, the direction of the boat, the rate of growth in the economy, the temperature of the house — will remain about the same no matter what. Depending on how sensitive your instruments are, you may not be able to detect any change at all. 

If control is perfect — if Frank’s car stays at exactly 35 mph — then the system is leaking literally no information to the outside world. You can’t learn anything about how the system works because any other variable plotted against MPH, even one like gas or brake, will look something like this: 

This is true even though gas and brake have a direct causal influence on speed. In any control system that is functioning properly, the methods used to control a signal won’t be correlated with the signal they’re controlling. 

Worse, there will be several variables that DO show relationships, and may give the wrong impression. You’re looking at variables A, B, C, and D. You see that when A goes up, so does B. When A goes down, C goes up. D never changes and isn’t related to anything else — must not be important, certainly not related to the rest of the system. But of course, A is the angle of the road, B is the gas pedal, C is the brake pedal, and D is the speed of the car. 

If control isn’t perfect, or your instruments are sensitive enough to detect when Frank speeds up or slows down by fractions of an mph, then some information will be let through. But this doesn’t mean that you’ll be able to get a correlation. You may be able to notice that the car speeds up a little on the approach to inclines and slows down when it goes downhill, and you may even be able to tie this to the gas and brake. But it shouldn’t show up as a correlation — you would have to use some other analysis technique, but we’re not sure if such a technique exists.

And if you don’t understand the rest of the environment, you’ll be hard pressed to tell which variation in speed is leaked from the control system and which is just noise from other sources — from differences in friction across the surface of the road, from going around curves, from imperfections in the engine, from Frank being distracted by birds, etc.


This seems like it might be a big problem, because control systems are found all over biology, medicine, and psychology.

Biology is all about homeostasis — maintaining stability against constant outside disturbances. Lots of the systems inside living things are designed to maintain homeostatic control over some important variable, because if you don’t have enough salt or oxygen or whatever, you die. But figuring out what controls what can be kind of complicated. 

(If you’re getting ready to lecture us on the difference between allostasis and homeostasis, go jump in a pond instead.)

Medicine is the applied study of one area of biology (i.e. human biology, for the most part), so it faces all the same problems biology does. The human body works to control all sorts of variables important to our survival, which is good. But if you look at a signal relevant to human health, and want to figure out what controls that signal, chances are it won’t be correlated with its causes. That’s… confusing. 

Lots of people forget that psychology is biological, but it obviously is. The brain is an organ too; it is made up of cells; it works by homeostatic principles. This is an under-appreciated perspective within psychology itself but some people are coming around; see for example this recent paper.

If you were to ask us what field our book A Chemical Hunger falls under, we would say cognitive science. Hunger is pretty clearly regulated in the brain as a cognitive-computational process and it’s pretty clearly part of a number of complicated homeostatic systems, systems that are controlling things like body weight and energy. So in a way, this is psychology too.

It’s important to remember that statistics was largely developed in fields like astronomy, demography, population genetics, and agriculture, which almost never deal with control systems. Correlation as you know it was introduced by Karl Pearson (incidentally, also a big racist; and worse, a Sorrows of Young Werther fan), whose work was wide-ranging but largely focused on genetic inheritance. While correlation was developed to understand things like barley yields, and can do that pretty well, it just wasn’t designed with control systems in mind. It may be unhelpful, or even misleading, if you point it at the wrong problem.

For a mathematical concept, correlation is not even that old, barely 140 years. So while correlation has captured the modern imagination, it’s not surprising that it isn’t always suited to scientific problems outside the ones it was invented to tackle.

Links for February 2022

all aboard

Lady Wonder “was a mare some claimed to have psychic abilities and be able to perform intellectually demanding tasks such as arithmetic and spelling. … Lady was said to have predicted the outcome of boxing fights and political elections, and was consulted by the police in criminal investigations.”

Did you ever spend time in… middle school? If so, you may recognize some of these urban legends about drugs. Who can forget such classics as “Bananadine” or “Man permanently thinks he is an orange and is terrified of being turned into a glass of orange juice.” We love that Wikipedia has an article on this. 

Monte Testaccio is an artificial hill in Rome over 100 feet high, and 1 km in circumference, composed of fragments of broken ancient Roman pottery dating from the time of the Roman Empire. Gotta go back to Rome so I can look at this friggin’ bing.

Also per Wikipedia: Albert Einstein loved the children’s puppet show Time for Beany. “On one occasion, the physicist interrupted a high-level conference by announcing, ‘You will have to excuse me, gentlemen. It’s Time for Beany.’”

Possible good news about PFAS

Beautiful houses in Oman.

Our predictions for 2050 are already coming to pass in small ways. Delivery robots are so common in some cities (e.g. Milton Keynes in the UK) that there are already delivery robot traffic jams. (Also reminded of the time a delivery robot caught fire on Berkeley campus and students made a memorial for it.) This furry did the Moderna vaccine. We told you science was gonna get weirder and cooler.

Alex Wellerstein writes a retrospective on 10 years of NUKEMAP. “Historians should not be surprised by the passing of time, but people are, and historians are people, so, well, here I am, continually surprised.” Relatedly, if you ever think nuclear war is about to occur, consider taking a 90-day trip to New Zealand.

Other explosions: According to Fire in the sky — a history of meteoritics, there are a lot more documented cases of asteroid impacts than we realized! It’s only a matter of time before an asteroid wipes out a town — and THIS time, we’ll capture it on video.

Or maybe Russia will crash-land the International Space Station in our backyard, who knows.

In animation, Worthikids is the guy to watch. Here’s a good interview with him about his process.

Breastfeeding by humans of animals — much more common than you might think! “The reasons for this are varied: to feed young animals, to drain a woman’s breasts, to promote lactation, to harden the nipples before a baby is born, to prevent conception, and so on. … In far northern Japan, the Ainu people are noted for holding an annual bear festival at which a captured bear, raised and suckled by the women, is sacrificed.”

Best in Blogging this month: 

  • Adam at Experimental History describes bureaucratic psychosis. “The best way I’ve found to keep it at bay is to simply excuse myself from other people’s Renaissance Fair realities and go play somewhere else. Let the obtuse administrators, sadistic gatekeepers, and conmen consultants rule their blob-land; I am happy sharing a little corner of the world with people who see me as a person.”
  • Applied Divinity Studies put out a two-part series on the purported shoplifting wave in San Francisco (Part 1, Part 2). We recommend reading it in full, but to summarize, ADS thinks that this supposed crime spree is a complete fantasy, driven by selective reporting and “an abject failure to do even the bare minimum of background research”. Seriously chilling implications for how much you can trust reporting and for our political landscape. “If you stick though this series, you’ll get to hear… how we ended up in this weird and wacky world where libertarian VCs somehow end up agreeing with liberals like Nancy Pelosi and London Breed, and where the stance they all agree on is that we should be tough on a crime, a stance historically antithetical to both parties’ platforms.”
  • If you’re still concerned about the downfall of civilization, consider this series (Part I, Part II, Part III) from A Collection of Unmitigated Pedantry on the question, “how bad was the fall of Rome (in the West)?” Choice quote from the ending: 

The collapse of the Roman Empire in the West is a complex sequence of events and one that often resists easy answers, but it is a useful one to think about, particularly as we now sit atop our own fragile clockwork economic mechanism, suspended not a few feet but many miles above the grinding poverty of pre-industrial life and often with our own arsonists, who are convinced that the system is durable and stable because they cannot imagine it ever vanishing.

Until it does.

Independent hacker P4x fucks up North Korea.

Check out this “glitch art object”! We want one. Actually, here’s a build log.

Edward Snowden: “it’s not VR if i can’t get into a fistfight with kermit the frog”

Philosophical Transactions: JP Callaghan on Lithium Pharmacokinetics

In the beginning, scientific articles were just letters. Scholars wrote to each other about whatever they were working on, celebrating their discoveries or arguing over minutiae, and ended up with great stacks of the things. People started bringing interesting letters to meetings of the Royal Society to read aloud, then scientists started addressing their letters to the Royal Society directly, and eventually Henry Oldenburg started pulling some of these letters together and printing them as the Philosophical Transactions of the Royal Society, the first scientific journal.

In continuance of this hallowed tradition, in this blog post we are publishing some philosophical transactions of our own: correspondence with JP Callaghan, an MD/PhD student at a large Northeast research university going into anesthesia. He has expertise in protein statistical mechanics and kinetic modeling, and he reached out to us with several ideas and enlightened criticisms.

With JP Callaghan’s help we have lightly edited the correspondence for clarity, turning the multi-threaded format of the email exchange into something more linear. We found the conversation very informative, and we hope you do as well! So without further ado: 

JP Callaghan:  Hi guys, great work on A Chemical Hunger

I’m sure someone already suggested this, but the Fulbright program executes the “move abroad” experiment every year. In fact, they do the reverse experiment as well, paying foreigners to move to the US. The Philippines Fulbright program seems especially active.

(The Peace Corps is already doing this experiment as well, but that’s probably more confounded since people are often living in pretty rustic locations.)

You could pretty easily imagine paying these folks a little extra money to send you their weight once a month or whatever.

SLIME MOLD TIME MOLD:  Thank you! Yeah, we’ve been trying to figure out the best way to pursue this one, using existing data if possible. Fulbright is a good idea, especially US <–––> Philippines, and especially because we suspect young people will show weight changes faster. We’ve also thought about trying to collect a sample of expats, possibly on reddit, since there are a lot of anecdotes of weight loss in those communities.

The tricky thing is finding someone who has an in with one of these groups. We probably can’t just cold call Fulbright and ask how much all their scholars weigh, though we’ll start asking around. 

JPC: Unfortunately my connection with the Fulbright was brief, superficial, and many years ago. I can ask around at my university, though. I’m not filled with unmitigated optimism, but the worst they can do is say no/ignore me.

Also, I wanted to mention that lithium levels are extremely common measurements in clinical practice, used to monitor therapeutic lithium (e.g., for bipolar folks). (Although I will concede they are usually measuring 0.5–1.5 mmol/L, which would be way higher than serum levels due to contamination.) Also, it’s interesting that the early pharmacokinetic studies measured urine lithium as well (see e.g. Barbara Ehrlich’s seminal 1980 paper), so there’s precedent for that. I’m led to understand from my lab medicine colleagues that it’s a relatively straightforward (aka cheap) electrochemical assay, at least in common clinical practice.

SMTM:  We’ve looked into measurement a bit. We’re concerned that serum levels aren’t worth measuring, since lithium seems to accumulate in the brain and we suspect that would be the mechanism (a commenter suggested it might also be accumulation in bone). But if we were to do clinical measurements, we’d probably measure lithium in urine or maybe even in saliva, since there’s evidence they’re good proxies for one another and for the levels in serum, and they’re easier to collect. Urine might be especially important if lithium clearance rate ends up being a piece of the puzzle, which it seems like it might. 

JPC: It is definitely true that lithium accumulates inside cells (definitely rat neurons and human RBCs, probably human neurons, but maybe not human muscle; see e.g. that Ehrlich paper I mentioned). The thing is, lithium kinetics seem to be pretty fast. Since it’s an ion, it doesn’t partition into fat the way other long-lasting medications and toxins do, and so it’s eliminated fairly quickly by the kidneys. (THC is a classic example of a hydrophobic “contaminant”; this same physical chemistry explains why a long-time pothead will test positive for THC for months, but you can stop using cocaine and, 72 hours later, screen negative.)

It might be worth your time to look at some of the lithium washout experiments that have been done over the years (e.g. Hunter, 1988 where they see lithium levels rapidly decline after stopping lithium therapy that had been going on for a month).

I suppose, though, that I’m not aware of any data that specifically excludes the possibility that there is a very slow “third compartment” where lithium can deposit (such as, as your commenter suggested, bone; although I don’t know much about whether or not lithium can incorporate into the hydroxyapatite matrix in bone. It’s mostly calcium phosphate and I’m not sure if lithium could “find a place” in that crystalline matrix).

Anyway, though, my understanding is that lithium kinetics in the brain are relatively fast. (For instance, see Ebadi, et al where they measure [Li] in rat brains over time.) So even if you have a highly accumulated slow bone compartment, the levels of lithium you’d get in the brain would still be super low, because it equilibrates with the blood quickly and therefore is subject to rapid elimination by the kidneys.

However, I don’t think you need to posit accumulation for your hypothesis. If you’re exposed to constant, low levels of lithium, you reach an equilibrium. There’s some super low serum concentration, some rather-higher intracellular concentration, and it’s all held in steady state by the constant intake via the GI tract (say, in the water) and constant elimination by the kidneys. Perhaps this is what you’re getting at when you say the rate of elimination might be very important?

Instead, consider some interesting pharmacodynamics: low-level (or maybe widely fluctuating, since lithium is also quickly cleared?) exposure to lithium messes with the lipostat. This process is probably really slow, maybe because weight change is slow or maybe because of some kind of brain adaptation process or whatever. We have good reason to suspect low-level lithium has neurological effects already anyway through some of the population-level suicide data I’m sure you’re aware of.

Urine and serum levels of lithium are only good proxies for one another at steady state. I really strongly suggest you guys look at that Ehrlich paper. She measures serum, intra-RBC, and urine [Li] after a dose of lithium carbonate (the most common delayed-release preparation of pharmaceutical lithium).

Another good one is Gaillot et al which demonstrates how important the form of lithium (lithium carbonate vs LiCl) is to the kinetics. (As an aside, this might be a reason for lithium grease to be so bad; lithium grease is apparently some kind of weird soap complex with fatty acids, maybe it gets trapped in the GI tract or something.)

SMTM: The rat studies are interesting but don’t rats seem like a bad comparison for determining something like rate of clearance? Besides just not being human, their metabolisms are something like 6-8x faster than ours and their lifespans are about 20 times shorter. Also human brains are huge. What do you think?

JPC: Certainly I agree that rats are not people and are bad models in many ways. I think that renal function is the key parameter you’d want to compare. The most basic measure of kidney function is the GFR (glomerular filtration rate), which basically measures how much fluid gets pushed through the “kidney filter” per unit time. Unfortunately, in people we measure it in volume/time/body surface area and in rats in volume/time/mass, which makes a comparison less obvious than I was hoping. To be honest, I am not sure how comparable rat kidney function and human kidney function are. (Definitely more comparable than live and dead human kidney function, though 😉.)

What do you mean by “their metabolisms are something like 6-8x faster than ours”? Like, calories/mass/time? Usually when I think about “metabolic rate” I am thinking of energy usage. When we think about drug elimination, the main things that matter are 1) liver function (for drugs that are hepatically metabolized), 2) various tissue enzyme function (e.g. plasma esterases for something like esmolol), and 3) renal function. I don’t generally think about basal metabolic rate as being a pertinent factor, really, except perhaps in cases where it’s a proxy for hepatic metabolism.

Lithium is eliminated (“cleared”) almost exclusively by the kidney and it undergoes no metabolic transformations, so I wouldn’t worry about anything but kidney function for its clearance.

You’re right, though, the 20x lifespan difference could be an issue. If we are worried about accumulation on the timescale of years, then obviously a shorter rat life is a problem. But (if I read your blog posts right) rats as experimental animals are also getting fatter so presumably the effect extends to them on the timescale of their life? (Did you have data in rats? I don’t remember.)

Indeed, if it’s actually just that there’s a constant low-level “infusion” of lithium via tapwater, grease exposure at work, etc. giving rise to a low steady-state lithium (rather than actual bioaccumulation), this would explain why the effect does extend to these short-lived experimental animals.

SMTM: You make good points about laboratory animals. There are data on rats and they do seem to be getting heavier. Let’s stick a pin in this one for now; you may find this next bit is relevant to the same questions:

In your opinion, are the studies you cite consistent or inconsistent with the findings of Amdisen et al. 1974 and Shoepfer et al. 2021? Also potentially relevant is Amdisen 1977. We describe their findings near the end of this section — basically they seem to suggest that Li accumulates preferentially in the bones, thyroid, and parts of the brain. The total sample size is small but it seems suggestive. We agree accumulation may not be essential to the theory, but doesn’t this look like evidence of accumulation? We’ve attached copies of Amdisen et al. 1974 and Amdisen 1977 as PDFs in case you want to take a closer look. [SMTM’s Note: If anyone else wants to see these papers, you can email us.]

Especially interesting that Ebadi et al. say, “it has been shown that sodium intake exerts a significant influence on the renal elimination of lithium (Schou, 1958b)”, somewhat in line with our speculation here. We’ll have to look into that. 


JPC: Thanks for the papers. As you predicted, I’m finding them super interesting.

Shoepfer et al, 2021 is a lovely, very interesting paper (complete with some adorable Deutsch-English). I was aware of it but had not taken the time to read it yet.

By my read, it is primarily seeking to establish this new, nuclear-fission-based approach to measuring lithium in pathology tissue. After spending some time with it, I don’t really know how to interpret their findings. The main reason I am not sure what to do with this paper is that the results are in dead people’s brains. Indeed, they specifically note in their ‘limitations’ section: “The lithium distribution patterns so far obtained with the NIK method, thus in no way contradicting given literature references, are based on post mortem tissue.” The reason this is pertinent is that there is a lot of active transport of other monovalent cations (K, Na), and so I would worry that this is true for lithium as well and (obviously) this is almost certainly disrupted in dead people.

The second thing is that the tissue was fixed in (presumably) formalin and stained with hematoxylin and eosin before measuring lithium, which then comes out in units of mass/mass. Obviously in living tissue there’s lots of water and whatnot, and the mass-density of water and formalin is going to be pretty different.

So, as the authors say, I would say it’s neither consistent nor inconsistent with other data.

SMTM: It’s true that all the brain samples we have in humans are dead brain tissue, but this seems like an unavoidable issue, right? Looking at dead tissue is the only way to get even a rough estimate of how much lithium is in the brain, since as far as we know there’s no way to test the levels in a living human brain, or if there is, no one has taken those measurements and it’s outside our current budget.

In any case, the most relevant findings from these studies, at least in our opinion, are 1) that lithium definitely reaches brain tissue and sticks around for a while, and 2) regardless of absolute levels, there seems to be relatively more lithium in parts of the brain that regulate appetite and weight gain. These conclusions seem likely to hold even given all the reasonable concerns about dead tissue. What do you think?  

JPC: I agree. In my mind, the main question is whether or not lithium persists in the brain after cessation of lithium therapy. Put more rigorously, what is the rate of exchange between the “brain compartment” and (probably) the “serum compartment.” (I guess it could also be eliminated by CSF too maybe? Or “glymphatics”? idk I guess nobody really understands the brain.)

The main issue I have is this: if you’re exposed, say, to 20 ppb lithium and your serum has 20 ppb lithium and so does the cytoplasm in your neurons, this is actually the null hypothesis (that lithium is an inert substance that just flows down its concentration gradient). It’s obviously false (we know lithium concentrates in RBCs of healthy subjects, for instance), but this paper doesn’t help me decide if lithium 1) passively diffuses throughout the body 2) is actively concentrated in neurons, or even 3) is actively cleared from cells, simply because I don’t really know what to do with the number.

The second issue is the preparation. Maybe formalin fixation washes lithium away, or when it fixes cell membranes maybe the lithium is allowed to diffuse out. Maybe it poorly penetrates myelin sheaths, and has a tendency to concentrate the lithium inside cells by making the extracellular environment more hydrophobic (nature abhors an unsolvated ion).

Another reason I am so skeptical of the “slow lithium kinetics” hypothesis is just the physical chemistry of lithium. It’s a tiny, charged particle. Keeping these sorts of ions from moving around and distributing evenly is actually really hard in most cases. There are a few cases of ionic solids in the human body (various types of kidney stones, bones, gallstones), but for the most part these involve much less soluble ions than lithium, and everything is dissolved and flows around at its whim except where it’s actively pumped.

SMTM: This is a good point, and in addition, the fact that tourists and expats seem to lose weight quickly does seem to be a point in favor of fast lithium over slow lithium. If those anecdotes bear out in some kind of more systematic study, “slow lithium kinetics” starts looking really unlikely. Another possibility, though, is that young people are the only ones who lose weight quickly on foreign trips, and there’s something like a “weight gain in the brain, reservoir in the bone” system where people remain dosed for a long time once enough has built up in their bones (or some other reservoir).

JPC: Very possible. Also young people generally have better renal function. There are tons of people walking around with their kidneys at like 50% or worse who don’t even know it.

A third and distant issue is what I mentioned about the active transport of Na and K that happens in neurons (IIRC something like 1/3 of your calories are spent doing this) ceasing when you’re dead. This is also a fairly big deal, though, since there are various cation leak channels in cell membranes (for electrical excitability reasons, I think; ask an electrical engineer or a different kind of biophysicist) through which Li might also escape. (Since, after all, a reasonable hypothesis for the mechanism of action is that Li uses Na channels.)

Between these three difficulties, I do actually see this as borderline insurmountable for ascertaining how much lithium is in an alive brain based on these data. Basically, it comes down to “I don’t know how much lithium I should expect there to be in these experiments.”

However, “relatively more lithium in parts of the brain that regulate appetite and weight gain” is a good point. I think that this is something you actually can reasonably say: it seems like there is more lithium in these areas than other areas. The within-experiment comparisons definitely seem more sound. It would also be consistent with the onset of hunger/appetite symptoms below traditionally-accepted therapeutic ranges.

I do also want to clarify what I mean by “no accumulation.” There is of course a sort of accumulation for all things at all times. You take a dose of some enteral medication, it leaches into your bloodstream from your gut, accumulating first in the serum. It then is distributed throughout the body and accumulates in other compartments (brain, liver, kidney, bone, whatever). Assuming linear pharmacokinetics, there’s some rate that the drug goes in to and out of each of these compartments. 

If you keep taking the drug and the influx rate (from the serum into a compartment) is higher than the efflux rate (back to the serum from the compartment), the steady state in the compartment will be higher than the serum at steady state. In some sense, this could be called “accumulation.” But in another sense, if both these rates are fast, your accumulation is transient and quickly relaxes to zero if you clear the serum compartment of drug (which we know happens in normal individuals in the case of lithium). Although the concentration in the third compartment is indeed higher than in the serum, if you stop taking the drug, it will wash out (first from the serum then, more slowly, from the accumulating compartment).

SMTM: Thanks, this clarification is helpful. To make sure we understand, “accumulation” to you means that a contaminant goes to a part of the body, stays there, and basically never leaves. But you’re open to “a sort of accumulation” where 50 units go into the brain every day and only 10 units are cleared, leading to a more-or-less perpetual increase in the levels. Is that right? 

JPC: Yes. I would frame this in terms of rates, though. So 5 x brain concentration units go into the brain and 1 x brain concentration units go out of the brain per unit time, such that you get a steady-state concentration difference between the serum and the brain of in_rate / out_rate (5x, in this case).

You guys seem mathy so I’ll add: for an arbitrary number of compartments this is just a first-order ODE. You can represent this situation as rate matrix K where element i, j represents the rate (1/time) that material flows from compartment i to j (or maybe j to i, I can never remember). Anyway this usually just boils down to something looking like an eigenvector problem to get the stationary distribution of things. (Obviously things get more complicated when you have pulsatile influx.)
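[SMTM’s Note: for readers who want to poke at this, here’s a minimal sketch of the kind of linear compartment model JPC is describing. All the rate constants are invented for illustration, not fitted to any lithium data; the point is just that the brain-to-serum ratio at steady state falls out of the in/out rates, and washout speed falls out of the eigenvalues of the rate matrix.]

```python
import numpy as np

# Hypothetical two-compartment sketch: serum <-> brain, with renal
# elimination from serum and a constant low-level intake. All rate
# constants (per hour) are invented for illustration.
k_sb = 0.5   # serum -> brain
k_bs = 0.1   # brain -> serum (slower, so the brain concentrates)
k_el = 0.4   # renal elimination from serum

# dc/dt = K @ c + u, with c = [serum, brain] and u = constant intake
K = np.array([[-(k_sb + k_el), k_bs],
              [k_sb,          -k_bs]])
u = np.array([1.0, 0.0])

# Steady state: K @ c_ss + u = 0
serum_ss, brain_ss = np.linalg.solve(K, -u)
# brain_ss / serum_ss = k_sb / k_bs = 5: a higher-than-serum steady
# state, but not perpetual accumulation.

# Once intake stops, everything decays at rates set by the eigenvalues
# of K; the slowest mode gives the terminal half-life.
slowest = max(np.linalg.eigvals(K).real)
t_half = np.log(2) / -slowest   # hours, for these made-up rates
```

With these made-up numbers the brain sits at 5x serum and the terminal half-life comes out around 17 hours: “accumulation” in JPC’s transient sense, not the perpetual kind.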

The key question, though, is what effect does this high concentration in the accumulating compartment have on the actual physiology? If we have slowly-resolving, high concentration in the brain, then I think we could call this clinical (ie neuropharmacologically significant) accumulation. However, I think the case in the brain is that you have higher-than-serum concentrations, but that these concentrations quickly resolve after cessation of lithium therapy. My reasoning for this is that lithium pharmacokinetics are classically well-modeled with two- and three-compartment models, which mostly have pretty fast kinetics (rate parameters with half lives in the hours range).

SMTM: This is interesting because our sense is sort of the opposite! Specifically, our understanding is that most people who go off clinical doses of lithium do not lose much weight and tend to keep most of the weight they gained as a side effect (correct us if we’re wrong, we haven’t seen great documentation of this). 

This seems at least suggestive that relatively high levels of lithium persist in the brain for a long time. On the other hand, clinical doses are really, really huge compared to trace doses, so maybe there is just so much in the brain compartment that it sometimes takes decades to clear. Ok we may not actually disagree, but it seemed like an interesting minor point of departure that might be worth considering.

JPC: I don’t know about this! I agree that slower (months to years) kinetics of lithium in the brain could explain this. An alternative (relatively parsimonious) explanation would be that, as Guyenet proposes, there simply is no mechanism for shedding excess adiposity. So if you gain weight as the result of any circumstance, if it stays on long enough for the lipostat to habituate to it, you just have a new, higher adiposity setpoint and have great difficulty eliminating that weight. That is, not being able to get the weight off after lithium-related weight gain might just be normal physiology.

The idea that clinical doses are just huge is sort of interesting. Normally, we think of the movement of ions in these kinetics models as having first-order kinetics (i.e. flux is proportional to concentration), but if you have truly shitboats of lithium in the brain, you could imagine that efflux might saturate (i.e. there are only so many transporters for the lithium to get out, since I imagine the cell membrane itself is impenetrable to Li+). This could be interesting. Not sure how you’d investigate it though. Probably patch-clamp type studies in ex vivo neurons? These are unfortunately expensive and extremely technical.
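[SMTM’s Note: a toy sketch of the saturating-efflux idea, for the curious. The Michaelis-Menten-style form and the vmax/km values are our invention for illustration, not anything measured for lithium transporters.]

```python
# Saturable efflux: flux rises ~linearly at low concentration
# (first-order kinetics) but pins at vmax when transporters saturate
# (zero-order kinetics). vmax and km are arbitrary illustration values.
def mm_efflux(c, vmax=10.0, km=1.0):
    return vmax * c / (km + c)

# Low concentration: flux/concentration is ~constant (slope ~ vmax/km),
# i.e. flux is proportional to concentration.
low_slope = mm_efflux(0.01) / 0.01   # close to vmax / km = 10

# Very high concentration: flux is stuck near vmax no matter how much
# more lithium piles up, so a very full compartment drains slowly in
# relative terms.
high_flux = mm_efflux(1000.0)        # close to vmax = 10
```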


JPC: I see Amdisen et al. 1974 describes a fatal dose of lithium, which is very different pharmacokinetically from therapeutic doses. Above about 2.0 mmol/L (~2x therapeutic levels), lithium kinetics become nonlinear—that is, the pharmacokinetics are no longer fixed and the drug begins to influence its own clearance. In the case of lithium, high doses of lithium reduce clearance, leading to a vicious cycle of toxicity. This is a big deal clinically, often leading to the need for emergent hemodialysis.

So this is consistent with the papers I mentioned earlier (Ehrlich et al, Gaillot et al) in the sense that they cannot really conflict, because they are reporting on two very different pharmacokinetic regimes.

You can’t directly compare the lithium kinetics in this patient to those in healthy people. You can see in figure 1 that the patient’s “urea” (I assume what we’d call BUN today?) explodes, which is a result of renal failure. It sounds like the patient wasn’t making any urine, i.e. has zero lithium clearance.

Figure 1 from Amdisen et al. 1974

SMTM: True, it’s hard to tell. But FWIW lithium also seems to be cleared through other routes like sweat, so even renal failure doesn’t mean zero lithium clearance, just severely reduced. (Though we’re not sure of the split. 50% through urine? 80%? 99%?)

JPC: Yes this is true, of course. My intuition would be that it’s closer to 99% or even like 99.9%. The kidney’s “function” (I guess you have to be a bit careful not to anthropomorphize/be teleological about the kidney here, but you know what I mean) is to eliminate stuff from the blood via urine, which it does very well, whereas sweat and other excreta have other functions.

Let’s assume for a second that lithium and sodium are the same and that the body doesn’t distinguish (obviously false; all models are wrong but some are useful) and let’s do some math.

In the ICU we routinely track “ins and outs” very carefully. Generally normal urine output is 0.5 – 1.5 mL/kg body weight/hr. In a 70 kg adult call it >800 mL/day. But because we also know how much fluid is going in, we know how much we lose to evaporation (sweat, spitting, coughing up gunk, etc), which we call “insensible losses.” This is usually 40-800 mL/day.

A normal sweat chloride (which we use to check for cystic fibrosis) is <29 mM. Because sweat doesn’t carry a net charge, we know there’s some positive counterion. Let’s assume it’s all sodium. So call it 30 mM NaCl, and calculate 800 mL x 30 mM = 24 mmol NaCl and 40 mL x 30 mM = 1.2 mmol. These are collected using (I think) topical pilocarpine to stimulate sweat production, so this would probably be an upper bound. It’s pretty close to what they find here in athletes during training (full disclosure: I didn’t read the whole thing), which seems like it would be similar to the pilocarpine case (i.e. unlikely to be sustained throughout the day).

We also measure 24-hour sodium elimination when investigating disorders of the kidney. A first-reasonable-Google-hit normal range is 40-220 mmol Na/24 hours. (Of course, this is usually done when fluid-restricting the patient, so this would be on the low end of normal. If you go to Shake Shack and eat a giant salty burger, your urine urea and Na are going to skyrocket. If you’re in a desert, your urine will be WAY concentrated, but maybe lower volume. It’s hard to generalize, so this is at best a Fermi-estimation type of deal.)

Anyhow, we’re looking at somewhere between roughly 2x and 180x more sodium eliminated in the urine than in sweat. Again my guess is that we’d be closer to the 180x number and not the 2x number for some of the reasons I mention above. Also I worry you can’t just multiply insensible losses * sweat [Na], because as water evaporates it gets drawn out of the body as free water to re-hydrate the Na, or something.
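[SMTM’s Note: JPC’s arithmetic, spelled out as a back-of-envelope script. With his numbers the urine-to-sweat sodium ratio comes out between roughly 2x and 180x.]

```python
# Back-of-envelope: how much more sodium leaves in urine than in sweat?
sweat_na = 30.0               # assumed sweat [Na+], mmol/L (upper bound)
insensible = (0.040, 0.800)   # insensible losses, L/day
urine_na = (40.0, 220.0)      # normal 24-hour urinary Na+, mmol

sweat_lo = insensible[0] * sweat_na   # ~1.2 mmol/day
sweat_hi = insensible[1] * sweat_na   # ~24 mmol/day

ratio_lo = urine_na[0] / sweat_hi     # ~1.7x (urine low, sweat high)
ratio_hi = urine_na[1] / sweat_lo     # ~183x (urine high, sweat low)
```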

In writing this up, I also found this paper, which does some interesting quantification of sweat electrolytes (again we get a mean sweat [Na] of 37 and [Cl] of 34), but in some of the later plots (Figure 2) we can see that [Na] and [Cl] go way low, and that the average seems to be pulled up by a long tail of high sweat electrolytes.

So not sure what to take away from that but I thought I’d share my work anyway. 🙂


JPC: In the case of bone, however, there might be something here! You could imagine the bone being a large but slowly-exchanging depot of lithium. I’d be interested to see if anyone has measured bone lithium levels in folks who were, say, on chronic therapeutic lithium. I’m not aware of anything like that.

SMTM: It seems to fit Amdisen et al. 1974. That case study is of a woman who was on clinical levels of lithium for three years, and had relatively high concentrations in her bones. Like you say, a fatal dose of lithium is very different pharmacokinetically from therapeutic doses, but the rate at which lithium deposits in bone is presumably (?) much slower than for other tissues, so this may be a reasonable estimate of how much had made it into her bones from three years of clinical treatment. Sample size of one, etc., but like you say there doesn’t seem to be any other data on lithium in bones. 

JPC: I think it’s hard to say for sure if the high concentration in her bones is due to the chronic therapy or the overdose. However, they note higher levels (0.77 vs 0.59 mmol/kg) in dense bone (iliac crest) than in spongy bone (vertebral body; there’s a better name than spongy… maybe cancellous? I don’t remember.). That’s interesting because it suggests to me (assuming that the error in the measurement is << 0.77-0.59) there is more concentrating effect in mineralized bone than in all the cellular components (osteoclasts, osteoblasts, hematopoietic cells, etc).

Anyway it’s suggestive that maybe there is deposition in bone. I wouldn’t hang my hat on it, but it is definitely consistent with it. I also agree that bone mineralization/incorporation seems like it ought to be on a longer timescale than cellular transport, so that is consistent as well. Obviously n=1, etc etc, but it’s kind of cute.

SMTM: Maybe we should see if we could do a study, there must be someone out there with a… skeleton bank? What do you call that? 

JPC: A cadaver lab? I think most medical schools have them (ours does). In an academic medical setting, I would just get an IRB to collect bone samples from all the cadavers or maybe everyone who gets an autopsy that’s sufficiently extensive to make it easy to collect some bone. This would be a convenience sample, of course, but it would be interesting. Correlate age, zip code, renal function if known?

Because the patient is dead, there’s no risk of harm, and because they’re already doing the autopsy/dissection/whatever it should be relatively straightforward to collect in most cases (I mean, they remove organs and stuff to weigh and examine them so grabbing a bit of bone is easy). Unfortunately all these people got sick and died so you have a little bit of a problem there. For example, if someone had cancer and was cachectic, what can you learn from that? Idk.

In vivo bone biopsies are also a relatively common procedure done by interventional radiology under CT guidance (it’s SUPER COOL). You also have the problem that people are getting their biopsies for a reason, and usually the reason boils down to “we think that this bone looks weird,” so your samples would be almost by definition abnormal.

SMTM: Great! Maybe we can find someone with a cadaver lab and see if we can make it happen. This is a very cool idea.

Control Systems

SMTM: Earlier you mentioned the idea that the body’s set point can only be raised, but it seems really unlikely to us that there’s no mechanism for shedding excess adiposity. 

JPC: Hmm. You guys are definitely better read on this subject than I am, but I do fear I have oversimplified the Guyenet hypothesis somewhat. My recollection is that it is more that there’s no driving force for the lipostat setpoint to return to a healthy level if it has habituated to a higher level of adiposity.

I like the analogy to iron. (I don’t think that Guyenet makes this connection, but I read The Hungry Brain years ago so I’m not sure.) It turns out that the body has no way of directly eliminating iron, so when iron levels get high, the body just turns off the “get more iron” system. Eventually, iron slowly makes its way out of the body because bleeding, entropy, etc etc and the iron-absorption system clicks back on. (This is relevant because patients who receive frequent transfusions, such as those with sickle cell, get iron overload due to their inability to eliminate the extra iron.)

I guess, by analogy, it would be that the mechanism for shedding adiposity would be “turn off the big hunger cues.” It’s not no mechanism, it’s just a crappy, passive, poorly-optimized mechanism. (Presumably because, like how nobody got transfusions prior to the 20th century, there was never an unending excess of trivially-accessible and highly palatable food in our evolutionary history.)

SMTM: Well, overfeeding studies raise people’s weights temporarily but they quickly go back to where they were before. Anecdotally, a lot of people who visit lean countries lose decent amounts of weight in just a few weeks. And occasionally people drop a couple hundred pounds for no apparent reason (if the contamination hypothesis is correct, this probably happens in rare cases where a person serendipitously eliminates most of their contamination load all at once). And people do have outlets like fidgeting that seem to be a mechanism beyond just “turn off the big hunger cues.” All this seems to suggest that weight is controlled in both directions.

JPC: Proponents of the above hypothesis would explain this by saying that the lipostat doesn’t have time to habituate to the new setpoint during the timescale of an overfeeding study, and so they lose the weight by having their “acute hunger cues” turned off. Whereas as weight creeps up year after year, the lipostat slowly follows the weight up. You do bring up a good point about fidgeting, though.

My thought was that bolus-dosed lithium (in food or elsewhere) might serve the function of repeated overfeeding episodes, each one pushing the lipostat up some small amount, leading to overall slow weight gain. 

I think combining the idea that the brain concentrates lithium with an “up only” lipostat might give you this effect? If we say 1) lithium probably concentrates first in areas controlling hunger and thirst, leading to an effect at lower-than-therapeutic serum concentrations, then you might see weeks of weight-gain effect from a bolus, and 2) we know that weight gain can occur on this timescale and then not revert (see the observation, which I read about in Guyenet, that most weight is gained between Thanksgiving and NYE). What do you think?

SMTM: To get a little more into the weeds on this (because you may find it interesting), William Powers says in some of his writing (can’t recall where) that control systems built using neurons will have separate systems for “push up” and “push down” control. If he’s right, then there are separate “up lipostats” and “down lipostats”, and presumably they function or fail largely separately. This suggests that a contaminant that breaks one probably doesn’t break the other, and also suggests that the obesity epidemic would probably be the result of two or more contaminants.

JPC: Yes! Super interesting. There are lots of places in the brain where this kind of push-pull system is used. I remember very clearly a neuroscience professor saying, while aggressively waving his hands, that “engineers love this kind of thing and that’s probably why the brain does it too.” I wonder if he was thinking of Powers’ work when he said that.

SMTM: Let’s say that contaminant A raises the set point of the “down lipostat”, and contaminant B raises the set point of the “up lipostat”. Someone exposed to just A doesn’t necessarily get fatter, but they can drift up to the new set point if they overeat. At the same time, with exercise and calorie restriction, there’s nothing keeping them from pushing their weight down again. 

Someone exposed to both A and B does necessarily get fatter, because they are being pushed up, and they have to fight the up lipostat to lose any weight, which is close to impossible. (This might explain why calorie restriction seems to work as a diet for some people but doesn’t work generally.) 

Someone exposed to just B, or who has a paradoxical reaction to A, sees their up and down lipostats get in a fight, which looks like cycles of binging and purging and intense stress. This might possibly present as bulimia.

There isn’t enough evidence to tell at this level of detail, but a plausible read from this theoretical perspective is something like: lithium raises the set point of the down lipostat, PFAS raise the set point of the up lipostat, and you only get really obese if you’re exposed to high doses of both. 

JPC: Very interesting! It’s definitely appealing on a theoretical level. (See: your recent post on beauty in science.) I just don’t know anything about the state of the evidence in the systems neuroscience of obesity to say if it’s consistent or inconsistent with the data. (Same is of course true of the lipostat-creep hypothesis above.)

I’m not sure why you think the two systems would function separately? Certainly, for us to see a change, there would have to be a preferential failure of one population or the other, but I’m not sure why this would be less common than one effect or the other. They’d likely be anatomical neighbors, and perhaps even developmentally related. I guess it would all depend on the actual physiology. I’m thinking, for instance, of how the eye creates center-surround receptive fields using the same photoreceptors in combination with some (I think) inhibitory interneurons (neural NOT gates). The same photoreceptor, hooked up a different way, acts to activate or inhibit different retinal ganglion cells (the cells that make up the optic nerve… I think. It’s been a while.). Another example might be the basal ganglia, which (allegedly) function to select between different actions, but mostly our drugs act to “do more actions” by being pro-dopaminergic (for instance to treat Parkinson’s) or “do fewer actions” by being antidopaminergic (as in antipsychotics like haloperidol).

SMTM: Yeah good points and good question! We have reasons to believe that these systems (and other paired systems) do function more or less separately, but it might be too long to get into here. Long story short we think they are computationally separate but probably share a lot of underlying hardware. 


SMTM: What do you think of a model based on peak lithium exposure? Our concern is that most sources of exposure are going to be lognormally distributed. Most of the time you get small doses, but very rarely you get a really really large dose. Most food contains no lithium grease, but every so often some grease gets on your hamburger during transport and you eat a big glob of it by accident. 

Lognormal Distribution

Or even more concerning: you live downriver from a coal power plant, and you get your drinking water from the river. Most of the time the river contains only 10-20 ppb Li+, nothing all that impressive. But every few months they dump a new load of coal ash in the ash pond, which leaches lithium into the river, and for the next couple of days you’re drinking 10,000 ppb of lithium in every glass. This leads to a huge influx, and your compartments are filled with lithium. 

This will deplete over time as your drinking water goes back to 10 ppb, but if it happens frequently enough, influx will be net greater than efflux over the long term and the general lithium levels in your compartments will go up and up. But anyone who comes to town to test your drinking water or your serum will find that levels in both are pretty low, unless they happen to show up on one of the very rare peak exposure days. So unless you did exhaustive testing or happened to be there on the right day, everything would look normal.

JPC: I totally vibe with the prediction that intake would be lognormally distributed. From a classic pharmacokinetic perspective, I would expect lognormally-distributed lithium boluses to actually be buffered by the fact that renal clearance eliminates lithium in proportion to its serum concentration; that is, elimination gets faster as lithium concentrations go up.

But I’m a big believer that you should shut up and calculate, so I coded up a three-compartment model (gut -> serum <-> tissue) and made up some parameters* that seemed reasonable and gave the qualitative behavior I expected. Then I gave the model either 300 mg lithium carbonate three times a day (a low-ish dose of the preparation given clinically), or three-times-a-day doses drawn from a lognormal distribution with two parameter sets (µ=1.5 and σ=1.5 or σ=2.5; this corresponds to a median dose of about 4.4 mg lithium carbonate in both cases, since the long tail doesn’t influence the median very much).

* k_gut->serum = 0.01 per minute

* k_serum->brain = 0.01 per minute

* k_brain->serum = 0.0025 per minute

* k_serum->urine = 0.001 per minute

* V_d,serum = 16 L
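JPC’s actual code isn’t reproduced here, but a minimal sketch of this kind of three-compartment simulation, using the made-up parameters above, might look like the following. The one-minute Euler steps, dosing times, and 60-day horizon are our own assumptions, not specified in the exchange:

```python
import numpy as np

# Rate constants and volume from the (made-up) footnote parameters above
K_GUT_SERUM = 0.01      # per minute
K_SERUM_BRAIN = 0.01    # per minute
K_BRAIN_SERUM = 0.0025  # per minute
K_SERUM_URINE = 0.001   # per minute
V_D_SERUM = 16.0        # litres

MIN_PER_DAY = 24 * 60

def simulate(next_dose_mg, days=60):
    """Euler-integrate gut -> serum <-> brain with one-minute steps.

    next_dose_mg: callable returning the lithium carbonate dose (mg)
    for each of the three daily dosing events.
    Returns per-minute serum concentration (mg/L) and brain amount (mg).
    """
    n = days * MIN_PER_DAY
    gut = serum = brain = 0.0
    serum_conc = np.empty(n)
    brain_amt = np.empty(n)
    dose_minutes = {0, 8 * 60, 16 * 60}  # three times a day
    for t in range(n):
        if t % MIN_PER_DAY in dose_minutes:
            gut += next_dose_mg()
        # Fluxes evaluated at the state at the start of the step
        absorbed = K_GUT_SERUM * gut
        to_brain = K_SERUM_BRAIN * serum
        to_serum = K_BRAIN_SERUM * brain
        excreted = K_SERUM_URINE * serum
        gut -= absorbed
        serum += absorbed + to_serum - to_brain - excreted
        brain += to_brain - to_serum
        serum_conc[t] = serum / V_D_SERUM
        brain_amt[t] = brain
    return serum_conc, brain_amt

rng = np.random.default_rng(0)
# Clinical-style constant dosing vs. spiky "environmental" dosing;
# the lognormal's median is exp(1.5) ≈ 4.5 mg, close to the ~4.4 mg above
clinical_serum, clinical_brain = simulate(lambda: 300.0)
spiky_serum, spiky_brain = simulate(lambda: rng.lognormal(1.5, 2.5))
```

Plotting `spiky_serum` and `spiky_brain` should reproduce the qualitative behavior described below: rare serum excursions, smoothed into sustained elevations in the slow compartment.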

In my opinion, this gives us the following hypothesis: lognormally distributed doses of lithium with sufficient variability should create transient excursions of serum lithium into the therapeutic range.

Because this model includes that slow third compartment, we can also ask what the amount of lithium in that compartment is:

My interpretation of this is that the third compartment smooths the very spiky serum levels: you get nearly therapeutic levels of lithium in that third compartment for whole weeks (days ~35-40) after these spikes, especially if you get two spikes back to back. (Which it seems to me would be likely if you have, like, a coal ash spill or it’s wolfberry season or whatever.)

There clearly are a ton of limitations here: the parameters are made up by me, real kinetics are more like two slow compartments (this has one), lithium carbonate is a delayed preparation that almost certainly has different kinetics from food-based lithium, and I have no idea how realistic my lognormal parameters are, to name a few. However, I think the general principle holds: the slow compartment “smooths” the spikes, and in doing so seems able to sustain fairly high [Li] even while the kidney is clearing it, by feasting when Li is plentiful and drawing down during famine periods.

I’m not sure if this supports your hypothesis or not (do you need sustained brain [Li] above some threshold to get weight gain? I don’t think anyone knows…) but I thought the kinetics were interesting and best discussed with actual numbers and pictures rather than words. What do you guys think? Is this what you expected?

SMTM: Yes! Obviously the specifics of the dynamics matter a lot, but this seems to be a pretty clear demonstration of what we expected — that it’s theoretically possible to get therapeutic levels in the second compartment (serum) and sometimes in the third compartment (brain?), even if the median dose is much much lower than a therapeutic dose. 

And because of the lognormal distribution, most samples of food or serum would have low levels of lithium — you would have to do a pretty exhaustive search to have a good chance of finding any of the spikes. So if something like this is what’s happening, it would make sense that no one has noticed. 

It would be interesting to make a version of this model that also includes low-level constant exposure from drinking water (closer to 0.1 mg per day) and looks at dynamics over multiple years, getting an impression of what lifetime accumulation might look like, but that sounds like a project for another time.


JPC: Another thought is that thyroid concentrations may also matter. If lithium induces a slightly hypothyroid state, people will gain weight that way too, since common (even classic) symptoms of hypothyroidism are weight gain and decreased activity. (This also suggests an immediate hypothesis [look at T3 vs TSH] and intervention [give people just a whiff of levothyroxine and see if it helps].) There’s also some thought that lithium may impact thirst (full disclosure: I haven’t read this article beyond the abstract).

SMTM: Also a good note, and yes, we do see signs that lithium concentrates in the thyroid. Some sort of thyroid sample would also be less invasive than a brain sample, right? 

JPC: Yes. We routinely biopsy thyroid under ultrasound guidance for the evaluation of thyroid nodules (i.e. malignant vs benign). These biopsies might be a source of tissue you could test for lithium, but I’m not sure. The pathologists may or may not need all the tissue they get for the diagnosis. Doing it on healthy people might be hard because it’s expensive (you need a well-trained operator) and more importantly it’s not a risk free procedure: the thyroid is highly vascular and if you goof you can hit a blood vessel, and “brisk bleeding into the neck” is a pretty bad problem (if rare).

That said, it is definitely less invasive than a brain biopsy, and actually safer than the very low bar of “less invasive than a brain biopsy” implies.


SMTM: Do you have clinical experience with lithium? 

JPC: Minimal but non-zero. I had a couple of patients on lithium during my psychiatry rotation and I think one case of lithium toxicity on my toxicology rotation. I do know a lot of doctors, though, so I could ask around if they’re simple questions.

SMTM: Great! So, trace doses might be the whole story, but we’re also concerned about possible lithium accumulation in food (like we saw in the wolfberries in the Gila River Valley). We wonder if people are getting subclinical or even clinical doses from their food. We do plan to test for lithium in food, but it also occurred to us that a sign of this might be cases of undiagnosed lithium toxicity. 

Let’s make up some rough numbers as an example. Let’s say that a clinical dose is 600,000 µg and lithium toxicity happens at 800,000 µg. Let’s also say that corn is the only major crop that concentrates lithium, and that corn products can contain up to 200,000 µg, though most contain less. Most of the time you eat fewer than four of these products a day and get a subclinical dose of something like 50,000 – 300,000 µg. But one day you eat five corn products that all happen to be high in lithium, and you suddenly get 1,000,000 µg. You’ve just had an overdose. If common foods concentrate lithium to a high enough level, this should happen, at least on occasion. 
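This rare-stacking logic can be sketched as a toy Monte Carlo. Every number here is either one of the made-up figures from the paragraph above or our own assumption (in particular, the lognormal shape of per-product lithium content is assumed, not measured):

```python
import random

random.seed(1)

TOXIC_UG = 800_000  # made-up toxicity threshold from the text
CAP_UG = 200_000    # made-up per-product maximum from the text

def product_li_ug():
    # Assumed shape: most products carry little lithium, rare ones
    # approach the cap (the lognormal parameters are pure invention)
    return min(random.lognormvariate(10, 1.5), CAP_UG)

def daily_dose_ug():
    # One to five corn products a day, per the sketch above
    return sum(product_li_ug() for _ in range(random.randint(1, 5)))

days = 200_000
overdose_days = sum(daily_dose_ug() > TOXIC_UG for _ in range(days))
# Overdose days, if any, are a vanishingly small fraction of all days
```

Under these assumptions overdoses require several near-cap products on the same day, which is why they would be rare enough to escape notice but not impossible.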

If someone presents at the ER with vomiting, dizziness, and confusion, how many docs are going to suspect lithium toxicity, especially if the person isn’t on prescription lithium for bipolar? Same for tremor, ataxia, nystagmus, etc. We assume (?) no one is routinely checking the blood lithium levels of these patients, that no one would think to order this blood test. Even if they did, there’s a pretty narrow time window for blood levels detecting this spike, as far as we understand. 

So our question is something like, if normal people are occasionally presenting with lithium toxicity, would the medical system even notice? Or would these cases be misdiagnosed as heavy metal exposure / dementia / ischemic stroke / etc.? If so, is there any way we can follow up on this? Ask some ER docs to start ordering lithium tests in any mystery cases they see? Curious to know what you think, if this seems at all plausible or useful.

JPC: I have a close friend who is an ED doc! She and I talked about it and here’s our vibe:

With a presentation as nonspecific as vomiting, dizziness, and confusion, my impression is that most ED docs would be unlikely to check a lithium level, especially if the patient is well enough to say convincingly “no I didn’t take any pills and no I don’t take lithium.” At some point, you might send off a lithium level as a hail-Mary, but there are so many things that cause this that a very plausible story would be: patient comes to ED with nausea/vomiting, dizziness, and altered mental status. The ED gives maybe fluids, checks some basic labs, does an initial workup, and doesn’t find anything. Admits the patient. The next day the admitting team does some more stuff, checks some other things, and comes up empty. The patient gets better after maybe 24-48h, nobody ever thinks to check a lithium level, and since the patient is feeling better they’re discharged without ever knowing why.

Another version would go: patient is super sick, maybe their vomiting and diarrhea get them super dehydrated and give them an AKI (basically temporary kidney failure). People think “wow maybe it’s really bad gastritis or some kind of primary GI problem or something?” The patient is admitted to the ICU with some kind of gross electrolyte imbalance because they’re in kidney failure and they pooped out all their potassium, someone decides they need hemodialysis, and this clears the lithium. Again the patient gets better, and everyone is none the wiser.

Tremor, ataxia, nystagmus, etc. are more focal signs, and our impression is that people would be more likely to check a lithium level in these cases, even if the person doesn’t have a history of lithium use. But we also think it wouldn’t always happen. Even in classic presentations of lithium toxicity, sometimes people miss the diagnosis. (Emergency medicine is hard; people aren’t like routers that blink the link light red when the motherboard is fried or turn the power light orange if the AC is under voltage. Things are often vague and complicated and mysterious.)

Something you’d have to explain is how this isn’t happening CONSTANTLY to people with really borderline kidney function. Perhaps one explanation might be that acute lithium intoxication (i.e. not against a background of existing lithium therapy) generally presents late with the neuro stuff (or so I hear).

We think that this is plausible if it is relatively uncommon or almost always pretty mild. If we were having an epidemic of this kind of thing (like on the scale of the obesity epidemic) I think it would be weird that nobody has noticed. Unless of course it’s a pretty mild, self-resolving thing. Then, who knows! AFAIK still nobody really knows why sideaches happen—figuring it out just isn’t a priority.

On occasion, the medical-scientific community also has big misses. There’s an old line that “half of what you learn in medical school is false, you just don’t know which half.” We were convinced until 1982 that ulcers were caused by lifestyle and “too much acid”; turns out that’s completely wrong and actually it’s bacteria. I saw a paper recently that argued that pretty much all MS might be due to EBV infection (no idea if it’s any good).

I think you could theoretically “add on” a lithium level to anybody that’s getting a head CT with the indication being “altered mental status.” “Add on” means the lab takes the blood they already have from the patient and runs additional testing, if they have enough in the right kind of tube. The logic is that patients with new-onset, dramatic, and unexplained mental status changes often get head CTs to rule out a bleed or other intracranial badness, so a head CT ordered this way could be a sign that the ordering doc may be feeling stumped.

If you wanted to get fancy, you could try to come up with a lab signature of “nausea/vomiting/diarrhea of unclear origin” (maybe certain labs being ordered that look like a fishing expedition) and add on a lithium there as well. 

SMTM: Good point, but, isn’t it possible that it IS happening constantly to people with really borderline kidney function? The symptoms of loss of kidney function have some overlap with the symptoms of lithium intoxication, maybe people with reduced kidney function really do have this happen to one degree or another whenever they draw the short straw on dietary lithium exposure for the day. Lots of people have mysterious ailments that lead to symptoms like nausea and dizziness, seemingly at random.

Or we could look at it from the other angle — lithium can cause kidney damage, kidney disease is (very roughly) correlated with obesity at the state level, and as far as we can tell, rates of kidney disease are going up, right? Is it possible that many cases interpreted as chronic kidney disease are “actually” chronic lithium intoxication?

JPC: I guess it’s definitely possible. The “canonical” explanation to this would be that diabetes (which is obviously linked to obesity) destroys your kidneys. But, if it’s all correlated together as a vicious cycle (lithium → obesity → CKD → lithium) that’s kind of appealing too. I bet a lot is known about the obesity-diabetes-kidney disease link though and my bet without looking into it would be that there’s some problem with that hypothesis.

My thought here was that if people with marginal/no kidney function are getting mild cases, I would expect people with normal kidney function to be basically immune. Or, if people with normal kidney function get mild cases, people with marginal kidneys should get raging cases. This is because serum levels of stuff are related to the inverse of clearance. The classic example is creatinine, which is filtered by the kidney and used as a (rough) proxy for renal function.
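The inverse relationship JPC describes is the standard steady-state relation C_ss = intake rate / clearance. A one-liner makes the point, with purely hypothetical numbers:

```python
def steady_state_conc(intake_mg_per_day, clearance_L_per_day):
    # Standard pharmacokinetic relation: C_ss = rate in / clearance
    return intake_mg_per_day / clearance_L_per_day

normal = steady_state_conc(10, 120)   # hypothetical healthy kidneys
impaired = steady_state_conc(10, 60)  # hypothetical 50% clearance
# Halving clearance exactly doubles the steady-state level
```

This is why, if dietary lithium were causing mild cases in people with marginal kidneys, you would expect dramatically worse cases as clearance falls further.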

SMTM: This is super fascinating/helpful. For a long time now we’ve been looking for a “silver bullet” on the lithium hypothesis — something which, if the hypothesis is correct, should be possible and would bring us from “plausible” to “pretty likely” or even “that’s probably what’s going on”. For a long time we thought the only silver bullet would be actually curing obesity in a sample population by making sure they weren’t consuming any lithium, but that’s a pretty tall order for a variety of reasons, not least because (as we’ve been discussing) the kinetics remain unclear! But recently we’ve realized there might be other silver bullets. One would be finding high levels of lithium in food products, but there are a lot of different kinds of foods out there, and since the levels are probably lognormally distributed you might need an exhaustive search. 

But now we think that finding people admitted to the ER with vague symptoms and high serum lithium, despite not taking it clinically, could be a silver bullet too. Even a single case study would be pretty compelling, and we could use any cases we found to try to narrow down which foods we should look at more closely. Or if we can’t find any of these cases, a study of lithium levels in thyroid or in bone could potentially be another silver bullet, especially if levels were correlated with BMI or something. 

JPC: I’m always hesitant to describe any single experiment as a silver bullet, but I agree that even a single case report, under the right conditions, of high serum lithium in someone not taking lithium would be pretty suspicious. You’d have to rule out foul play and primary/secondary gain (i.e. lying) but it would definitely be interesting. As far as finding lithium in bone or thyroid (of someone not taking lithium), I’d want to see some kind of evidence that it’s doing something, but again it’d definitely be supportive.

SMTM: Absolutely. We also don’t really believe in definitive experiments. The goal at this stage is to look for places where there might be evidence that could promote this idea from “plausible” to “likely”.

Charter Houses


There are a lot of really big houses (6+ bedrooms) on the market for around a million dollars, or sometimes less. Like this three-story, seven-bedroom house just outside Albany, NY, which recently sold for $1,181,300. Or this ten-bedroom house just south of Boston, MA, which recently sold for $990,000. Or this eight-bedroom house in Cincinnati, OH, which recently sold for $455,000. Some of these places are old bed & breakfasts; some are intended as rental properties; some of them are just big. Some of them are FRICKIN MISSILE SILOS.

Big investments generate quite a lot of money — you can draw off about 4% of an investment every year without depleting the principal, because you get back that much or more in interest. Even if you did nothing but stick the money in an S&P 500 index fund, the historical average is about 10% per year. That’s not guaranteed, but it’s pretty damn good.

If we assume 4% annually, a $3 million endowment would generate $120,000 a year, or $10,000 a month, indefinitely. A $2.5 million endowment would generate $100,000 a year, or $8,333.33 a month. Even a measly $2 million endowment would generate $80,000 a year, or $6,666.67 a month. 
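The arithmetic above is just endowment × rate ÷ 12; as a quick check:

```python
def monthly_draw(endowment, annual_rate=0.04):
    # 4% of the endowment per year, split into twelve monthly draws
    return endowment * annual_rate / 12

print(round(monthly_draw(3_000_000), 2))   # 10000.0
print(round(monthly_draw(2_500_000), 2))   # 8333.33
print(round(monthly_draw(2_000_000), 2))   # 6666.67
```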

Hail Satan

Any of these amounts would be enough to purchase a big 6-to-10 bedroom house in many areas, with some endowment left to generate interest each month. If you sink $1 million of a $3 million endowment into a house, you still have the remaining interest from $2 million every month.

Once you’ve bought a house, you could use that interest to support a houseful of people. Exact numbers vary by location, but the interest should be enough to keep the house in good repair, pay property taxes, pay for utilities and internet access, feed everyone, buy a junky car, and even give them all a small stipend. 

The big gains are in rent — getting a decent room can easily cost you $1000 a month these days, so eight people seeking out individual lodgings would be in for $8000 a month collectively, or $96,000 a year! But if they all live in an 8-bedroom house with a mortgage of $3000, that’s only $36,000 a year, and you save $60,000 annually. (And if you purchase the house outright, then of course there’s no rent at all.) 
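And the rent arithmetic, using the same example figures from this paragraph:

```python
people, rent_each = 8, 1_000
separate_yearly = people * rent_each * 12    # $96,000: renting alone
mortgage_yearly = 3_000 * 12                 # $36,000: one shared mortgage
savings = separate_yearly - mortgage_yearly  # $60,000 saved per year
```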

We could throw together a bunch of examples of different houses you could buy, in different places all around the country. Or we could just give you a link to the Charter Houseulator, and you can take it for a spin yourself! Find a nice 5+ bedroom house somewhere using Zillow (big houses in Nova Scotia are pleasantly affordable!) and plug in your best estimates for all the variables.

You’ll notice a few things. It’s clear that a mortgage is the wrong choice here. You won’t come out ahead until 30 years down the road when the mortgage is finally paid off. If you have the money, buy the house outright.

Healthcare is the big stumbling block — in a lot of scenarios, you just won’t have enough to pay for everyone’s insurance. Residents might qualify for some kind of reduced rates depending on income, but this seems to vary a lot by state.

Even in the best-case scenarios, it’s hard to end up with enough to give your residents much of a stipend. This still isn’t such a bad deal — they get their rent, their food, and maybe their health insurance all covered. They even get access to a junky car. What more could you want? 

vroom vroom

The situation improves a lot if you start with an endowment of more than $3 million, of course, or if you assume you can get more than 4% interest per year. But even within these constraints, you can get pretty decent living conditions for 5-8 people if you choose a house in the right place and give them a shitty enough car. Go ahead and mess around with the values in the Houseulator and find out!


Charter houses could be used to fill all sorts of weird niches.

Young People

Maybe you think college is a waste of time (and really who doesn’t these days). Or maybe you just think we should make it easy for young people to take big risks, and work on moonshot projects that will take years to pan out.

In that case, a charter house could be an accelerator for young people. Lots of high school or college graduates would love an opportunity to not think about paying rent and focus on their passion projects for the next several years. 

In general, young people don’t mind a slightly marginal existence, so this setup fits them pretty well. The average 30-year-old would have a hard time accepting a tiny stipend, even if rent was covered and there were no strings attached. The average 30-year-old also probably has better options, where they can make a lot more money, even if they have to work for it. But the average 22-year-old would jump at the opportunity to [checks notes] get paid to not pay rent, and most 22-year-olds don’t have access to a better deal than this. This is even more true for the average 18-year-old, especially one that doesn’t want to bother with college. 

Young people! Never going to amount to anything am I right?

You might be concerned that young people would like their charter house so much they would stay forever, but this is where the very small stipend becomes an advantage. From the ages of 18 to 24 or so, survival alone is pretty enticing. But as they grow up, most of your residents will begin to dream of more than a $500 a month stipend and free rent. Soon they will hunger for more space, or nicer equipment, or a car that doesn’t have holes in the floor. They’ll find a job or some other way of making money and graduate, moving out on their own. If some of them do decide to become long-haulers, that’s ok too, since it gives your house more institutional memory.

This level of security helps people figure out their comparative advantage, and lets them found more small businesses and startups, because they don’t need to make the same kind of money right out of school. Obviously that’s good for innovation.

Research and Scholarship

Here’s a question: what’s the minimum form of scholarly institution? Existing universities are huge, but every university is made up of schools and departments, and in many cases these function almost as independent entities. How small can you go and still call it an institution? 

A charter house could be an interesting experiment in marginal scholarship. Charter houses could serve as a replacement for academic departments, possibly with a mentoring component (e.g. half of the residents are students, with mandatory turnover after a couple years). You buy a house and give it an endowment, and recruit a bunch of biologists or linguists or computer scientists, and see what kind of scholarship they produce. We don’t know if it will be good, but we’re sure it will be different.

The kind of biologists who would show up to live in an abandoned church in Oak Creek, Wisconsin or an old Victorian mansion in Normal, Illinois would be a very different kind of biologist than the kind who would take an academic job at your local university. But we think this is an advantage.

The Manhattan Project

Extrainstitutional Support

There’s an ongoing conversation about how we as a society can support people who have important but hard-to-compensate roles (see for example this twitter thread). There are lots of roles, especially in open source software but also in other areas, where the work is critical but no one is willing to pony up to pay for it.

These roles don’t fit within normal funding structures — they’re too small for a business to hire the person on, too small to form a nonprofit around them, and too big to be supported through individual donations. And beyond this, there are even more projects that someone should do, and which might attract support retrospectively, but no business or nonprofit would be willing to support prospectively.

Charter houses could solve this problem neatly. A charter house or two could easily be set up with positions offered to people who are filling these roles, providing them with a minimum of support — at the very least, free rent and free high-speed internet. These people are professionals, so this may not be enough for them — but there’s no reason they can’t get support from the charter house and make additional money in other ways. They can supplement that support by consulting, getting a real job, being a bounty hunter, etc. 

In fact, since people in this position might also have a part-time consulting gig or something, a charter house targeted at them might be able to survive on a much smaller endowment, only paying for their rent, and not covering their food and healthcare, for example. 

There are a lot of projects that would have no prospective support because they’re super high risk. But if we have an ecosystem for encouraging lots of high risk projects, we will eventually get a lot of crazy successful moonshots. Our society already does this a bit for open source software — we should do it for other important avenues of progress as well. Like apenwarr says: “The best part of free software is it sometimes produces stuff you never would have been willing to pay to develop (Linux), and sometimes at quality levels too high to be rational for the market to provide (sqlite).”

Unprincipled Mixture

You could also allow a totally unprincipled combination of all of these approaches, and we think that would work pretty damn well. It’s fine if you have three engineers working on a startup on the ground floor, an essayist sharing a bunk bed with a painter in a room above the garage, and two biologists in the attic. 

A mix of approaches is good and healthy. If you fill a house with biologists, they will all be competing with each other. They may even end up at each other’s throats — they are too similar. But mix in a little diversity, a few chemists and physicists, some experts in East Asian literature, and a Turkish math whiz who speaks almost no English, and things will work very well indeed.

It’s tempting to make each charter house alike in scope and subject — one house for the college dropouts, one house for the physicists, one house for the painters, one house for the startup accelerator, one house for the mystics, etc. But siloing people in this way is going to be counterproductive. Young people will benefit from sitting across the dinner table from old people; old people from young people. Biologists will benefit from playing video games in the living room with art historians. Philosophers will benefit from going grocery shopping with blacksmiths. Electrical engineers will benefit from fixing windows with clowns. Bartenders will benefit from cooking dinner with astronomers.

From Left: Child, Wizard, Hatter/Grandma. Not pictured: Fire Demon, Turnip

As Paul Graham says in his essay Hackers and Painters, “I’ve found that the best sources of ideas are not the other fields that have the word ‘computer’ in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation.” 

So it’s ok, even ideal, to have a charter house where most of the residents are college dropouts, and there’s one 60-year-old living in the basement maintaining ‘runk’.


Charter houses capture a number of features of other successful programs.

They’re kind of like the Alaska Fellows Program. In this program, you stick a bunch of recent college grads in a house somewhere in Alaska, where they live together for about a year. Housing and utilities are covered, and everyone gets a monthly stipend of $1000 on top of that. We hear it works great. If young people sign up for this, you can bet they would also sign up for a program with more freedom and where they didn’t have to live through the polar night.

They’re also kind of like medieval guilds. A guild was an organization devoted to a specific kind of practical skill, and it saw to the training and organization of its members. They pooled funds and sometimes shared tools or workshops. The first universities started out as guilds of students, who banded together to hire tutors (the first professors) and for mutual protection. Other medieval examples include various religious orders, like the Franciscans or the Poor Clares. In these particular examples you personally owned no property, but you still had a place to stay. Religious orders often owned buildings (monasteries, convents, abbeys, etc.) and conducted various forms of scholarly work together. Gregor Mendel, the father of genetics, was an Augustinian friar and abbot. 

Something like charter houses already exists during college. Particular dorms will have a particular theme, or a subset of all the people in a club or frat will live together. When people graduate from college, it’s pretty common for them to share an apartment with friends for a couple years. It’s clear that people enjoy living together like this, as long as they get their own space.

This is pretty good evidence that, given the option, young people would try living in a charter house. And it seems like this is just straight-up competitive with college in almost every way. You have to pay for your housing in college, but a charter house pays you (in Soviet Russia, house pays YOU). In most colleges you have to share a tiny, cramped room with other people, but most charter houses would be big enough for everyone to have their own room. In college you have to study some predetermined topic and take classes, but in a charter house you can spend your time on projects that actually teach you what you need to know. In college you have to hide your drugs, but in a charter house, the chemist who lives in the walk-in closet is synthesizing LSD in the bathtub.

An example of a similar successful model is Hampshire College in Amherst, Massachusetts. At Hampshire, upperclassmen don’t live in dorms, they live in mods (“modular housing”) of 6-10 students, which are like medium-size apartment buildings. The mods are big — almost everyone gets a single, and the few doubles are huge. And you can work on whatever you think is important because of the traditional Hampshire package of no majors, no tests, and no grades (yes, really!). The only downside is that you still have to pay, but charter houses fix this. And we know this crazy system works — Hampshire has produced alumni like Ken Burns, Elliott Smith, Lupita Nyong’o, and Eugene Mirman, who voices Gene on Bob’s Burgers and delivered the best commencement speech of all time.

Ken Burns does an in-depth profile of fellow Hampshire alum Eugene Mirman  

The benefit of college is of course the fact that it’s large — there are lots of people you already have something in common with, which makes it easier to build community and a strong social network. A single charter house can’t compete with that, but if you put a bunch of houses in the same town, they can support each other in various ways.

We’re not just talking about community — they can share skills and resources. The charter house full of musicians is the only house with a grand piano, but residents of the other charter houses can visit to use it. The chemists sprang for a projector or a giant TV, so everyone comes to their place for movie nights. The videographers living in the garret of the old B&B help record the experiments the electrical engineers are doing in the charter house down the street, and put it all on YouTube.

Replicating the benefits of college without the headaches isn’t just for college-age kids. Most people who went to college don’t miss the exams or the food, but a lot of them miss the sense of community and the ability to casually hang out with interesting people. Charter houses could be designed to be attractive to almost any age group.


We’re not financial advisors, so we can’t advise on how to set up the institution behind a charter house. But we can advise a little on how we think you should organize it. 

In brief, we think a charter house should have very few rules. 

Certainly you do want some rules. You probably want to have one resident who is on all the paperwork, who can collect the interest from the endowment every month, and who is responsible for paying all the bills. You want some legal firm or something to oversee the endowment. You want rules about what happens if the endowment grossly underperforms or overperforms — what happens to a house if their $2 million endowment shrinks to $1 million, or grows to $4 million? You want rules about what happens if the house ends up being abandoned. 

(A growth rate of 4% per year does seem pretty conservative, so we support a rule that if a charter house’s endowment gets too big — if it ever reaches double the original endowment, if it breaks $5 million, something like that — it should be forced to split in half and spin off a sister house nearby.)
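To put rough numbers on that split-in-half rule, here is a small sketch (our own illustration, not from the post): at a net annual growth rate r, an untouched endowment doubles in log 2 / log(1 + r) years, so even the "conservative" 4% doubles a house's endowment in under two decades.

```python
import math

def years_to_double(rate):
    """Years for an endowment to double at `rate` net annual growth,
    assuming growth compounds and nothing extra is withdrawn."""
    return math.log(2) / math.log(1 + rate)

# A $2 million endowment hitting the hypothetical split threshold ($4M):
for rate in (0.04, 0.05, 0.07):
    print(f"{rate:.0%} net growth: doubles in {years_to_double(rate):.1f} years")
```

At 4% the doubling takes about 17.7 years, so under this rule a typical house would spin off a sister house roughly once a generation; faster markets would speed that up considerably.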

Other than that, we don’t think you want many rules at all.

There are many rules that do seem enticing at first glance. If your charter house is intended for biologists, you might want to make a rule that only biologists can live there. If your charter house is meant to be an accelerator for young people, you might want a rule that no one over 26 can live there. You might want a rule that no one can live there for more than 4 years, to encourage turnover and give lots of people a chance to live in the house. If the house itself has only eight rooms, you might want to make a rule that no more than 10 people can live there at a time. You might want to make sure at least a few people are living in the house at all times. Maybe you want to make a rule, “no girlfriends/boyfriends”, or at least “no families/kids”. And you would probably want some rule about how people are chosen to join the house.

We have only one rule in this house: don’t leave the window open.

These seem like good ideas, but we are against them for a simple reason: they are really hard to enforce. Who is going to go check that everyone living in the house is a biologist? If the guy playing guitar in the living room says “no I’m a biologist”, what are you going to do? If you try to enforce a maximum number of residents, how will you tell who is living there and who is just visiting? How long can someone visit for, before they count as living in the house? A week? A month? 

So our recommendation is, don’t make these rules and don’t waste time and effort on trying to enforce them. It’s fine to tell a house, “I set this up for chemists” or “this house is to support open software” or “I want to support young people, so try to graduate when you can.” But don’t try to enforce these rules — trying to enforce them will just lead to internal squabbles. 

Let the residents have friends over. Let them stay as long as they need. Let them decide how they’re going to pick their housemates. And let them learn to govern themselves. This teaches them 1) that they are capable of self-governance and 2) the specific tips and tricks of actually running a small organization/government. Pretty pro-democracy.

So we think charter houses should have as few rules as possible. On the other hand, they should definitely have traditions. Each house should have a name, house colors, maybe a crest. A motto if they can come up with one (maybe, “I am a beautiful animal! I am a destroyer of worlds!”). Perhaps an official song or chant. Traditions like a house movie (may we suggest WPDR) or a monthly poetry contest. And of course, a party every year on the day it was founded.

You can speculate and plan all you want, but you won’t know what works and what doesn’t until you give it a go. You really want someone to try it, to start some charter houses and see what they come up with, what problems they run into, and what solutions. 

You want to invite the people who will live in the house to be your co-conspirators. If you make up a bunch of rules, even good ones, and try to enforce them, your residents will resent you. But if you bring them on board, and let them tinker with it, they will surprise you.

“Let yourself be second guessed,” says Paul Graham. “When you make any tool, people use it in ways you didn’t intend, and this is especially true of a highly articulated tool like a programming language. Many a hacker will want to tweak your semantic model in a way that you never imagined. I say, let them; give the programmer access to as much internal stuff as you can without endangering runtime systems like the garbage collector.”

We feel the same way — let them get at everything except the metaphorical runtime systems. In hacking they call this the “Hands-On Imperative”, and while actual code may or may not be involved, the charter house is more than a bit of a hacking project. “Hackers can do almost anything and be a hacker,” said Burrell Smith, the designer of the Macintosh computer, at the first Hacker Conference. “You can be a hacker carpenter. It’s not necessarily high tech. I think it has to do with craftsmanship and caring about what you’re doing.”

You want these houses to be very different, and you want to use the power of evolution. Lack of diversity is so bad that in biology, they call it genetic erosion.

You want “speciation” — you want to release ideas into the world and get feedback from their success and failure. We’re going to continue with the Paul Graham quotes for a second, because charter houses are more than a little like a combination of startups and startup accelerators. “If you release a crude version 1 then iterate,” he says, “your solution can benefit from the imagination of nature, which, as Feynman pointed out, is more powerful than your own.”


Most plans to change the world require a lot of coordination. You have to argue with senators and NGOs and the university PR department, on and on and on. But anyone who can spare a couple million dollars can set up a charter house unilaterally.

Haha yes, “spare a few million dollars”, you laugh, but donations like this are made to nonprofits and universities all the time. We don’t want to take food out of the mouths of hungry children, but let’s just say that some of these donations are more inspiring than others. Like the $250 million gift from Charles B. Johnson to Yale in 2013. Or the $350 million gift from Michael Bloomberg to Johns Hopkins in 2013. Or the $400 million gift from John Paulson to Harvard in 2015.

We’re not even necessarily talking about bringing in new money — you could do a lot just by redirecting donations that are already being made. If Bloomberg wants to give several hundred million dollars to Johns Hopkins (estimated endowment: $8.8 billion), who are we to judge? But in a world where we can’t seem to stop talking about stagnation and academic decline, doesn’t it seem worth it to try a different model? How about you spend $10 million to set up three charter houses with endowments of $3.3 million each, and give Johns Hopkins a mere $340 million? Or set up ten charter houses with endowments of $5 million each, and see if Johns Hopkins can survive on $300 million?

Exhibit One: A real charity case

What’s gonna give you more bang for your buck, giving Stanford a shiny new engineering building and filling it with smartboards and swivel chairs and all of the engineering students who would have gone to Stanford whether it had a shiny new engineering building or not, or giving some of those engineers a house where they can work on stuff they think is cool, and enough food to keep them alive while they do it?

Another thing: those engineering students will take on like a hundred thousand dollars in debt if they go to Stanford! An advantage of charter houses is that nobody has to take out a loan.

So instead of giving $20 million to an institution that you are certain will muddle on in acceptable mediocrity, split that money up among several charter houses. Some will fizzle out; a couple may even explode. But others will become self-sustaining little critters that will spark and wriggle and lay plans of their own.

Ogilvy on Dating: The Consumer isn’t a Moron, She is [Hopefully] your Wife

Hello lonelyhearts. Happy Valentine’s Day.

Dating is really the oldest form of direct marketing. You’re marketing yourself — it’s a specialized form of advertising. So who better to get dating advice from than the King of Madison Avenue, David Ogilvy himself?

To this end, we have collected a number of direct quotes from Ogilvy’s writing, and combined them into the greatest dating/marketing manual of all time. Now you can use it to YOUR advantage!

To help get in the spirit of things, we have replaced a few words and phrases with their romantic equivalents. These amendments always appear in ALL CAPS. Aside from this, the section titles, a few clearly-marked notes, and the order in which the passages appear, everything is Ogilvy’s.


What really decides FOLKS to DATE or not to DATE is the content of your advertising, not its form. Your most important job is to decide what you are going to say about your YOU, what benefit you are going to promise. Two hundred years ago Dr. Johnson said, “Promise, large promise is the soul of an advertisement.” When he auctioned off the contents of the Anchor Brewery he made the following promise: “We are not here to sell boilers and vats, but the potentiality of growing rich beyond the dreams of avarice.”

Handling PARTNERS once you have got them is deadly serious business. You are spending other people’s TIME, and the fate of their RELATIONSHIP often rests in your hands. But I regard the hunt for new DATES as a sport. If you play it grimly, you will die of ulcers. If you play it with lighthearted gusto, you will survive your failures without losing sleep. Play to win, but enjoy the fun.

The function of most advertising is not to persuade people to DATE your YOU, but to persuade them to DATE YOU more often than the other SINGLES in their repertoire.

You cannot generalize. … The PLAYERS which are most successful in new DATES are those whose THEM show the most sensitive insight into the psychological make-up of the prospective CUTIE. Rigidity and salesmanship do not combine.

There is one stratagem which seems to work in almost every case: get the prospect to do most of the talking. The more you listen, the wiser he thinks you are.

I never accept A DATE unless I believe that I can do a conspicuously better job than the previous BOYFRIEND.

I have never wanted to get A GIRLFRIEND so big that I could not afford to lose HER. The day you do that, you commit yourself to living with fear. Frightened GUYS lose the courage to give candid advice; once you lose that you become a lackey.

A posture of enthusiasm is not always the one best calculated to succeed. Five or six times I have turned down SUITORS which did not meet MY qualifications, only to find that the act of rejection inflamed the SINGLE’S desire to SMOOCH. 

When a FELLA PICKS YOU, it is because he has decided that YOU ARE the best available to him. His advisers have reached this decision after making a thorough study of what YOU have to offer. But as time goes by, he acquires new advisers. Every time this happens, it is expedient for YOU to convince the new adviser that his predecessor was right in selecting YA GIRL.

This meme but unironically

The most important word in the vocabulary of DATING is “test”. If you pretest your PHOTOS with THE INTERNET, and pretest your PROFILE, you will do well in the marketplace. [SMTM’s Note: Perhaps with Photofeeler]

Twenty-four out of twenty-five new BACHELORS never get out of test markets. Manufacturers who don’t test-market their BACHELORS incur the colossal cost (and disgrace) of having their BACHELORS fail on a national scale, instead of dying inconspicuously and economically in test markets.

Test your promise. Test your media. Test your PROFILE and your PICS. Test the size of your DATE NIGHTS. Test your frequency. Test your level of expenditure. Test your MEMES. Never stop testing, and your DATING will never stop improving.


In the early days of DATING APPS, I made the mistake of relying on words to do the selling; I had been accustomed to radio, where there are no pictures. I now know that in APPS you must make your pictures tell the story; what you show is more important than what you say. Words and pictures must march together, reinforcing each other. The only function of the words is to explain what the pictures are showing.

MOST SINGLES think in terms of words, and devote little time to planning their HOT PICS. Yet the illustration often occupies more ATTENTION than the copy, and it should work just as hard to sell the YOU. It should telegraph the same promise that you make in your PROFILE.

Dr. Gallup has discovered that the kind of photographs which win awards from camera clubs—sensitive, subtle, and beautifully composed—don’t work in PROFILES. What do work are photographs which arouse the reader’s curiosity. He glances at the photograph and says to himself, “What goes on here?” Then he reads your PROFILE to find out. This is the trap to set.

Harold Rudolph called this magic element “story appeal,” and demonstrated that the more of it you inject into your photographs, the more people will look at your PROFILE.

Keep your PHOTOS as simple as possible, with the focus of interest on one person. Crowd scenes don’t pull. Avoid stereotyped situations like grinning housewives pointing fatuously into open refrigerators. [SMTM’s Note: the ‘60s female equivalent of the guy holding the big fish]


I belong to [a] school, which holds that a good PROFILE is one which sells the SINGLE without drawing attention to itself. It should rivet the reader’s attention on the SINGLE. Instead of saying, “What a clever PROFILE,” the reader says, “I never knew that before. I must DATE THE HELL OUT OF THEM.”

When you sit down to write your PROFILE, pretend that you are talking to the woman on your right at a dinner party. She has asked you, “I am thinking of SEDUCING a new BOYFRIEND. WHO would you recommend?” Write your PROFILE as if you were answering that question.

(1) Don’t beat about the bush—go straight to the point. 

(2) Avoid superlatives, generalizations, and platitudes. Be specific and factual. Be enthusiastic, friendly, and memorable. Don’t be a bore. Tell the truth, but make the truth fascinating.

We make PROFILES that people want to read. You can’t save souls in an empty church. 

Profile Facts

Very few PROFILES contain enough factual information to sell the SINGLE. There is a ludicrous tradition among MATCHMAKERS that consumers aren’t interested in facts. Nothing could be farther from the truth.

When I was a door-to-door VALENTINE I discovered that the more information I gave about my SELF, the more I sold. Claude Hopkins made the same discovery about advertising, fifty years ago. But most modern DUDES find it easier to write short, lazy PROFILES. Collecting facts is hard.

The consumer isn’t a moron; she is your FUTURE wife HOPEFULLY. You insult her intelligence if you assume that a mere slogan and a few vapid adjectives will persuade her to DATE anything. She wants all the information you can give her.

Competing BACHELORS are becoming more and more alike. The men who ARE them have access to the same scientific journals; they use the same production techniques; and they are guided by the same research. When faced with the inconvenient fact that their SELF is about the same as several others, most BACHELORS conclude that there is no point in telling the consumer what is common to all DUDES; so they confine themselves to some trivial point of difference. I hope that they will continue to make this mistake, because it enables YOU to pre-empt the truth for YOUR DATES.

Competing BACHELORS are becoming more and more alike. 

You cannot bore people into DATING. The average GIRL is now exposed to more than 1500 BLOKES a day. No wonder they have acquired a talent for skipping the DUDES in newspapers and magazines, and going to the bathroom during television PERSONALS.

The average woman now reads only four of the PERSONALS which appear in the average magazine. She glances at more, but one glance is enough to tell her that the PROFILE is too boring to read.

Competition for the SINGLE LADY’S attention is becoming more ferocious every year. She is being bombarded by a billion dollars’ worth of DUDES a month. Thirty thousand DUDES are competing for a place in her memory. If you want your voice to be heard above this ear-splitting barrage, your voice must be unique.

Profile Headlines

Keep your opening paragraph down to a maximum of eleven words. A long first paragraph frightens readers away. All your paragraphs should be as short as possible; long paragraphs are fatiguing. 

Include your DATING promise in your headline. This requires long headlines. When the New York University School of Retailing ran headline tests with the cooperation of a big department store, they found that headlines of ten words or longer, containing news and information, consistently sold more merchandise than short headlines.

People are more likely to read your PROFILE if your headline arouses their curiosity; so you should end your headline with a lure to read on.

Profile Focus

The most effective PROFILES are built around only one or two points, simply stated. A hodgepodge of many points leaves the viewer unmoved. That is why PROFILES should never be created in committee. Compromise has no place in advertising. Whatever you do, go the whole hog.

The purpose of a PROFILE is not to entertain the viewer, but to sell him.

Most PROFILES are too complicated. They reflect a long list of objectives, and try to reconcile the divergent views of too many FRIENDS. By attempting to cover too many things, they achieve nothing. Their PROFILES look like the minutes of a committee.

How do you decide what kind of image to build? There is no short answer. Research cannot help you much here. You have actually got to use judgment. (I notice increasing reluctance on the part of GUYS AND GALS to use judgment; they are coming to rely too much on research, and they use it as a drunkard uses a lamp post, for support rather than for illumination.)

Most SINGLES are reluctant to accept any limitation on the image of their SELF. They want it to be all things to all people. They want their SELF to be a male brand and a female brand. An upper-crust brand and a plebeian brand. They generally end up with a SELF which has no personality of any kind.

Ninety-five per cent of all the PROFILES now in circulation are being created without any reference to such long-term considerations. They are being created ad hoc. … Hence the lack of any coherent personality.

Research shows that it is dangerous to use negatives in PROFILES. If, for example, you write “our DATE contains no arsenic”, many readers will miss the negative and go away with the impression that you wrote “our DATE contains arsenic”.

It is a mistake to use highfalutin language when you advertise to uneducated people. I once used the word “obsolete” in a headline, only to discover that 43 per cent of housewives had no idea what it meant. In another headline, I used the word “ineffable”, only to discover that I didn’t know what it meant myself.

Some DATERS write tricky PROFILES—puns, literary allusions, and other obscurities. This is a sin. In the average APP your PROFILE has to compete for attention with UH, VERY MANY others. Research has shown that readers travel so fast through this jungle that they don’t stop to decipher the meaning of obscure PROFILES. Your PROFILE must telegraph what you want to say, and it must telegraph it in plain language. Don’t play games with the reader.

Can A GOOD PROFILE foist an inferior BACHELOR on the consumer? Bitter experience has taught me that it cannot. On those rare occasions when I have advertised BACHELORS which consumer tests found inferior to other BACHELORS in the same field, the results have been disastrous. If I try hard enough, I can write an advertisement which will persuade consumers to DATE an inferior BACHELOR, but only once—and most of my clients depend on repeat DATES for their ROMANCE. Phineas T. Barnum was the first to observe that “you may advertise a spurious article and induce many people to buy it once, but they will gradually denounce you as an impostor.” Alfred Politz and Howard Morgens believe that advertising can actually accelerate the demise of an inferior BACHELOR. Says Morgens, “The quickest way to EXPOSE a BACHELOR that is off in quality is to promote HIM aggressively. People find out about HIS poor quality just that much more quickly.”


There are certain universal rules. Dress quietly and shave well. Do not wear a bowler hat. Go to the back door (most DUDES go to the front door, a manoeuvre always resented by maid and mistress alike). Tell the person who opens the door frankly and briefly what you have come for; it will get her on your side. Never on any account get in on false pretences.

However thoroughly you investigate prospective DATES, it is almost impossible to find out whether they qualify on all these counts until you meet them face to face. You then find yourself in a delicate position, simultaneously selling your SELF and eliciting from the prospect enough information about himself and his SELF to decide whether you want his LOVIN’. It pays to listen more than you talk.

The worst fault a ROMEO can commit is to be a bore. Pretend to be vastly interested in any subject the prospect shows an interest in.

The more she talks the better, and if you can make her laugh you are several points up. Perhaps the most important thing of all is to avoid standardisation in your sales talk. If you find yourself one fine day saying the same things to a bishop and a trapezist, you are done for.

You must always be faced sooner or later with questions and objections, which may indeed be taken as a sign that the prospect’s brain is in working order, and that she is conscientiously considering YOU as a practical proposition for herself. 

Some DUDES expound their subject academically, so that at the end the prospect feels no more inclination to DATE than she would to SUCK FACE WITH the planet Jupiter after a broadcast from the Astronomer Royal. A talkative prospect is a good thing.

Try and avoid being drawn into discussing competitive makes of BOYFRIEND, as it introduces a negative and defensive atmosphere. On no account sling mud – it can carry very little weight, coming from you, and it will make the prospect distrust your integrity and dislike you.

The best way to tackle the problem is to find out all you possibly can about the merits, faults and sales arguments of competitors, and then keep quiet about them. Profound knowledge of other BLOKES will help you put your positive case for YOU more convincingly

Don’t sing your DATING message. DATING is a serious business. How would you react if you went into a Sears store to DATE a frying pan and the salesman started singing jingles at you?

Candor compels me to admit that I have no conclusive research to support my view that jingles are less persuasive than the spoken word. It is based on the difficulty I always experience in hearing the words in jingles, and on my experience as a door-to-door VALENTINE; I never sang to my prospects. The BACHELORS who believe in the DATING power of jingles have never had to DATE anything.

The more prospects you talk to, the more SINGLES you expose yourself to, the more DATES you will get. But never mistake quantity of DATES for quality of DATESmanship.

When the prospect tries to bring the interview to a close, go gracefully. It can only hurt you to be kicked out.

Most SINGLES and their FRIENDS spend too much time worrying about how to revive DATES which are in trouble, and too little time worrying about how to make successful DATES even more successful. In advertising, it is the mark of a brave man to look unfavorable test results in the face, cut your loss, and move on.

Concentrate your time, your brains, and your DATING money on your successes. Recognize success when it comes, and pour on the DATING.


The more your PARTNER knows about your SELF and your TASTES, the better job THEY will do for you. When General Foods hired our agency to advertise Maxwell House Coffee, they undertook to teach us the coffee business. Day after day we sat at the feet of their experts, being lectured about green coffee, and blending, and roasting, and pricing, and the arcane economics of the industry.

If you think that your SUGAR PIE is performing badly, or if you think that a particular DATE is feeble, don’t beat about the bush. Speak your mind, loud and clear. Disastrous consequences can arise when a MAN pussyfoots in his day-to-day dealings with his HONEY BUNCH.

I do not suggest that you should threaten. Don’t say, “You are an incompetent mucker, and I will get another GIRLFRIEND unless you come back tomorrow with a great DATE.” Such brutality will only paralyze the troops. It is better to say, “What you have just shown me is not up to your usual high standard. Please take another crack at it.”

At the same time you should explain exactly what you find inadequate about the submission; don’t leave your PARTNER to guess. This kind of candor will encourage your PARTNER to be equally candid with you. And no partnership can fructify without candor on both sides.

The Scientific Virtues

Science education usually starts with teaching students different tools and techniques, methods for conducting research. 

This is wrong. Science education should begin with the scientific virtues. 

Teaching someone painting techniques without teaching them composition will lead to lifeless paintings. Giving business advice to someone who lacks civic duty will lead to parasitic companies. Teaching generals strategy without teaching them honor gets you warlords. So teaching someone the methods of science without teaching them the virtues will lead to dull, pointless projects. Virtue is the key to happy, creative, important, meaningful research.

The scientific virtues are:

  • Stupidity
  • Arrogance
  • Laziness
  • Carefreeness
  • Beauty
  • Rebellion
  • Humor

These virtues are often the opposite of the popular image of what a scientist should look like. People think scientists should be intelligent. But while it’s helpful to be clever, it’s more important to be stupid. People think scientists are authority figures. Really, scientists have to defy authority — the best scientists are one step (or sometimes zero steps) away from being anarchists. People think scientists are arrogant, and this is true, but we worry that scientists are not arrogant enough.

Anyone who practices these virtues is a scientist, even if they work night shifts at the 7-11 and learned everything they know about statistics from twitter. Anyone who betrays these virtues is no scientist at all, even if they’ve got tenure at Princeton and have a list of publications long enough to run from Cambridge to New Haven.

Cultivating virtue is the most important way to become a better scientist. Many people want to be scientists but are worried that they are not smart enough, or not talented enough. It’s true that there is not much you can do to become smarter, and you are mostly stuck with the talents you were born with. But virtues can be cultivated infinitely — there is no limit to how good you can get at practicing them. Anyone can become a better scientist by practicing these virtues — maybe even a great scientist.


The great obstacle to discovering the shape of the earth, the continents, and the oceans was not ignorance, but the illusion of knowledge.

Daniel Boorstin

To a large extent, your skill as a researcher comes down to how well you understand how dumb you are, which is always “very”. Once you realize how stupid you are, you can start to make progress.

A different writer might say “humility” here rather than stupidity. But calling this virtue humility might make you feel smug and self-satisfied, which is not the right feeling at all. Instead, you should feel dumb. The virtue of stupidity is all about feeling like a tiny mote in a vast universe that you don’t understand even a little bit, and calling it humility doesn’t strike that note. 

Great scientists are not especially humble, as we shall see in just a minute. But they are stupid — they are practiced in ignorance. They have cultivated the virtue of saying and doing things that are just entirely boneheaded, because this is vital to the process of discovery, and more important, it is relaxing and fun.

It seems necessary to me, then, that all people at a session be willing to sound foolish and listen to others sound foolish.

Isaac Asimov

Stupidity is all about preparing you to admit when you’re facing a problem where you don’t know what is going on, which is always. This allows you to ask incredibly dumb questions at any time. 

People who don’t have experience asking stupid questions don’t understand how important they can be. Try asking more, and dumber, questions — lean into how stupid you are. You will find the world opening up to you. Ignorant questions are revealing!

I took mechanical drawing when I was in school, but I am not good at reading blueprints. So they unroll the stack of blueprints and start to explain it to me, thinking I am a genius. …

I’m completely dazed. Worse, I don’t know what the symbols on the blueprint mean! There is some kind of a thing that at first I think is a window. It’s a square with a little cross in the middle, all over the damn place. I think it’s a window, but no, it can’t be a window, because it isn’t always at the edge. I want to ask them what it is.

You must have been in a situation like this when you didn’t ask them right away. Right away it would have been OK. But now they’ve been talking a little bit too long. You hesitated too long. If you ask them now they’ll say, “What are you wasting my time all this time for?”

What am I going to do? I get an idea. Maybe it’s a valve. I take my finger and I put it down on one of the mysterious little crosses in the middle of one of the blueprints on page three, and I say, “What happens if this valve gets stuck?” — figuring they’re going to say, “That’s not a valve, sir, that’s a window.”

So one looks at the other and says, “Well, if that valve gets stuck –” and he goes up and down on the blueprint, up and down, the other guy goes up and down, back and forth, back and forth, and they both look at each other. They turn around to me and they open their mouths like astonished fish and say, “You’re absolutely right, sir.”

So they rolled up the blueprints and away they went and we walked out. And Mr. Zumwalt, who had been following me all the way through, said, “You’re a genius. I got the idea you were a genius when you went through the plant once and you could tell them about evaporator C-21 in building 90-207 the next morning,” he says, “but what you have just done is so fantastic I want to know how, how do you do that?”

I told him you try to find out whether it’s a valve or not.

Richard Feynman

Asking dumb questions was a particular favorite of Richard Feynman, who could not recommend it strongly enough:

That was for me: I can’t understand anything in general unless I’m carrying along in my mind a specific example and watching it go. Some people think in the beginning that I’m kind of slow and I don’t understand the problem, because I ask a lot of these “dumb” questions: “Is a cathode plus or minus? Is an anion this way, or that way?”

But later, when the guy’s in the middle of a bunch of equations, he’ll say something and I’ll say, “Wait a minute! There’s an error! That can’t be right!”

The guy looks at his equations, and sure enough, after a while, he finds the mistake and wonders, “How the hell did this guy, who hardly understood at the beginning, find that mistake in the mess of all these equations?”

Richard Feynman

Reading about the lives of talented researchers, ones who have been praised by their peers and made stunning discoveries, you pretty quickly notice that they are not afraid at all of seeming or being very dumb, or very ignorant. For example, we can consider Niels Bohr, who won the Nobel Prize in Physics in 1922 for his pioneering work in quantum mechanics:

It is practically impossible to describe Niels Bohr to a person who has never worked with him. Probably his most characteristic property was the slowness of his thinking and comprehension. … In the evening, when a handful of Bohr’s students were “working” in the Paa Blegdamsvejen Institute, discussing the latest problems of the quantum theory, or playing ping-pong on the library table with coffee cups placed on it to make the game more difficult, Bohr would appear, complaining that he was very tired, and would like to “do something.” To “do something” inevitably meant to go to the movies, and the only movies Bohr liked were those called The Gun Fight at the Lazy Gee Ranch or The Lone Ranger and a Sioux Girl. But it was hard to go with Bohr to the movies. He could not follow the plot, and was constantly asking us, to the great annoyance of the rest of the audience, questions like this: “Is that the sister of that cowboy who shot the Indian who tried to steal a herd of cattle belonging to her brother-in-law?” The same slowness of reaction was apparent at scientific meetings. Many a time, a visiting young physicist (most physicists visiting Copenhagen were young) would deliver a brilliant talk about his recent calculations on some intricate problem of the quantum theory. Everybody in the audience would understand the argument quite clearly, but Bohr wouldn’t. So everybody would start to explain to Bohr the simple point he had missed, and in the resulting turmoil everybody would stop understanding anything. Finally, after a considerable period of time, Bohr would begin to understand, and it would turn out that what he understood about the problem presented by the visitor was quite different from what the visitor meant, and was correct, while the visitor’s interpretation was wrong.

George Gamow on Niels Bohr 

Great scientists were generally quite stupid, though we admit that some of them may have been stupider than others. More notably, most of them seem to have known it! 

The first thing Bohr said to me was that it would only then be profitable to work with him if I understood that he was a dilettante. The only way I knew to react to this unexpected statement was with a polite smile of disbelief. But evidently Bohr was serious. He explained how he had to approach every new question from a starting point of total ignorance. It is perhaps better to say that Bohr’s strength lay in his formidable intuition and insight rather than erudition.

Abraham Pais

Some of this is about fear. If you accept your ignorance, you will be aware of how stupid you are. But being afraid of being stupid, or of seeming stupid, will lead you to mishandle your mistakes. You will be afraid to look for them; you will not double-check your work with the same level of care; you will be afraid that if people find out about your mistakes, they will laugh and think you are an idiot. Once you have accepted in full confidence that you, along with all other scientists, are in fact idiots, you will no longer be worried about this. You will notice your own mistakes, or others will notice them for you, and you will laugh it off. “I’m so glad someone caught this!” you will say.

You see, one thing is, I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things, but I’m not absolutely sure of anything and there are many things I don’t know anything about, such as whether it means anything to ask why we’re here, and what the question might mean. I might think about it a little bit and if I can’t figure it out, then I go on to something else, but I don’t have to know an answer, I don’t feel frightened by not knowing things, by being lost in a mysterious universe without having any purpose, which is the way it really is so far as I can tell, possibly. It doesn’t frighten me.

Richard Feynman

Mistakes are inevitable! You are a dummy; you will sometimes be wrong. It is ok to be wrong. If you’re not willing to accept that sometimes you’re wrong, you will have a hard time ever being right. Be wrong with confidence.

Don’t worry too much about your intellectual gifts. Despite popular misconceptions, a lack of IQ won’t hold you back. If you are really dumb and know it, you have a leg up on the smart people who, on a cosmic scale, are still stupid, but haven’t realized it yet. 

Brains are nice to have, but many people who seem not to have great IQs have done great things. At Bell Telephone Laboratories Bill Pfann walked into my office one day with a problem in zone melting. He did not seem to me, then, to know much mathematics, to be articulate, or to have a lot of clever brains, but I had already learned brains come in many forms and flavors, and to beware of ignoring any chance I got to work with a good man. I first did a little analytical work on his equations, and soon realized what he needed was computing. I checked up on him by asking around in his department, and I found they had a low opinion of him and his idea for zone melting. But that is not the first time a person has not been appreciated locally, and I was not about to lose my chance of working with a great idea—which is what zone melting seemed to me, though not to his own department!

Richard Hamming

Stupidity can also be part of the inspiration behind the virtue of rebellion, a scientist’s ability to defy authority figures. If you’re stupid, you don’t realize when you should keep your mouth shut, so you say what you really think. Feynman again:

The last time he was there, Bohr said to his son, “Remember the name of that little fellow in the back over there? He’s the only guy who’s not afraid of me, and will say when I’ve got a crazy idea. So next time when we want to discuss ideas, we’re not going to be able to do it with these guys who say everything is yes, yes, Dr. Bohr.

Get that guy and we’ll talk with him first.” I was always dumb in that way. I never knew who I was talking to. 

Maybe more important is that accepting your stupidity helps you cultivate the virtue of being carefree. If you think you have a great mind, you will feel a lot of pressure to work on things that are “challenging” and “important”. But you will never get anything done if you stress out about this kind of thing, and more seriously, you will never have any fun.

Perhaps one of the most interesting things that I ever heard him say was when, after describing to me an experiment in which he had placed under a bell-jar some pollen from a male flower, together with an unfertilized female flower, in order to see whether, when kept at a distance but under the same jar, the one would act in any way on the other, he remarked:—”That’s a fool’s experiment. But I love fools’ experiments. I am always making them.”

E. Ray Lankester, recalling Charles Darwin


My goal is simple. It is a complete understanding of the universe, why it is as it is and why it exists at all.

Stephen Hawking

Arrogance is the complement of stupidity, the yang to stupidity’s yin. Being stupid is all about recognizing that you know nothing about everything, and in fact you have little chance of ever understanding much about anything. Having accepted such complete ignorance, you must then be extraordinarily arrogant to think that you could ever make an original discovery, let alone solve a problem that has baffled people for generations. But this is exactly what we aim to do. To complement their stupidity, a scientist must also be arrogant beyond all measure.

No one else knows anything either, so when it comes to figuring something out for the first time, you have as good a shot at it as anyone else does! Why not go for it, after all? 

The condition of matter I have dignified by the term Electronic, THE ELECTRONIC STATE. What do you think of that? Am I not a bold man, ignorant as I am, to coin words?

Michael Faraday

Most people have the good sense to know what is realistic and practical, and to laugh at people who think they can do the impossible. So you have to be very dumb indeed to be arrogant enough to think that you can change the world!

Who would not have been laughed at if he had said in 1800 that metals could be extracted from their ores by electricity or that portraits could be drawn by chemistry.

Michael Faraday

A great gap in research separates people who try things from people who sit around wondering whether to try things. Truly, aiming low is a dead end. Aiming low is boring.

Confidence in yourself, then, is an essential property. Or, if you want to, you can call it “courage.” Shannon had courage. Who else but a man with almost infinite courage would ever think of averaging over all random codes and expect the average code would be good? He knew what he was doing was important and pursued it intensely. Courage, or confidence, is a property to develop in yourself. Look at your successes, and pay less attention to failures than you are usually advised to do in the expression, “Learn from your mistakes.” While playing chess Shannon would often advance his queen boldly into the fray and say, “I ain’t scared of nothing.”

Richard Hamming

You will not always be right. Often you will be wrong. This is why stupidity comes before arrogance: you have to be prepared to make lots of dumb mistakes. If you are prepared to make dumb mistakes, you can act with confidence. You will put ideas out there that you think might be wrong. But sometimes you will surprise yourself.

Is it dangerous to claim that parents have no power at all (other than genetic) to shape their child’s personality, intelligence, or the way he or she behaves outside the family home? … A confession: When I first made this proposal ten years ago, I didn’t fully believe it myself. I took an extreme position, the null hypothesis of zero parental influence, for the sake of scientific clarity. Making myself an easy target, I invited the establishment — research psychologists in the academic world — to shoot me down. I didn’t think it would be all that difficult for them to do so. … The establishment’s failure to shoot me down has been nothing short of astonishing.

Judith Rich Harris for Edge

Like stupidity, arrogance is linked to the virtue of rebellion. If you think you are hot shit, you will not be afraid to go against the opinions of famous writers, ivy-league professors, public officials, or other great minds.

The idea that smashed the old orthodoxy got its start on Christmas 1910, as Wegener (the W is pronounced like a V) browsed through a friend’s new atlas. Others before him had noticed that the Atlantic coast of Brazil looked as if it might once have been tucked up against West Africa, like a couple spooning in bed. But no one had made much of it, and Wegener was hardly the logical choice to show what they had been missing. He was a lecturer at Marburg University, not merely untenured but unsalaried, and his specialties were meteorology and astronomy, not geology.

But Wegener was not timid about disciplinary boundaries, or much else. He was an Arctic explorer and a record-setting balloonist, and when his scientific mentor and future father-in-law advised him to be cautious in his theorizing, Wegener replied, “Why should we hesitate to toss the old views overboard?”

Richard Conniff for Smithsonian Magazine

You shouldn’t cultivate arrogance in a way that makes you an asshole, though some scientists have made this mistake. This virtue is not about thinking that you are better than other people. Forget about other people. It is about thinking that you have the potential to be really good — to be damn good. It is about moving with extreme confidence. You cultivate arrogance so that if someone says, “that’s very arrogant of you!” you respond, “so what?”


Study hard what interests you the most in the most undisciplined, irreverent and original manner possible. 

Richard Feynman

Everyone knows that research requires hard work. This is true, but your hard work has to be matched by a commitment to relaxation, slacking off, and fucking around when you “should” be working — that is, laziness.

Laziness is not optional — it is essential. Great work cannot be done without it. And it must be cultivated as a virtue, because a sinful world is always trying to push back against it.

Leonardo, knowing that the intellect of that Prince was acute and discerning, was pleased to discourse at large with the Duke on the subject… and he reasoned much with him about art, and made him understand that men of lofty genius sometimes accomplish the most when they work the least, seeking out inventions with the mind, and forming those perfect ideas which the hands afterwards express and reproduce from the images already conceived in the brain.

Giorgio Vasari

Hard work is needed to bring an idea to fruition, but you cannot work hard all the time, any more than a piston can be firing all the time or every piston in an engine can fire at once. Pistons are always moving up and down: a piston moves up, it fires, and then it moves back down and spends some time not firing. It would be foolish to complain that the piston is not firing constantly, yet this is effectively what people do when they try to work hard all the time. They are trying to hold the piston in the down position, not recognizing that this will stop it from ever firing again, and will damage the whole engine.

They would do better to cultivate the virtue of laziness, and go take a nap or stare at the clouds or play fetch with their dog or something. Taking a nap is just turning your brain off and then on again, which solves 90% of computer problems.

Albert Einstein once asked a friend of mine in Princeton, “Why is it I get my best ideas in the morning while I’m shaving?” My friend answered, as I have been trying to say here, that often the mind needs the relaxation of inner controls — needs to be freed in reveries or day dreaming — for the unaccustomed ideas to emerge.

Rollo May

Mathematicians are not exactly scientists, but they certainly have one of the best claims on pure idea work. So you might expect that for mathematicians, more time spent working would lead to more results. But apparently not. G.H. Hardy, one of the great British mathematicians of the 20th century, started his mornings by reading the cricket scores (or when cricket was not in season, the Australian cricket scores). He would work only from 9 to 1, after which he would eat lunch, play tennis, or (surprise) watch a game of cricket. His collaborator John Edensor Littlewood said:

You must also acquire the art of ‘thinking vaguely,’ an elusive idea I can’t elaborate in short form. After what I have said earlier, it is inevitable that I should stress the importance of giving the subconscious every chance. There should be relaxed periods during the working day, profitably, I say, spent in walking. … On days free from research, and apart from regular holidays, I recommend four hours [of work] a day or at most five, with breaks about every hour (for walks perhaps). If you don’t have breaks you unconsciously acquire the habit of slowing down. 

John Edensor Littlewood

Henri Poincaré is perhaps the best example. Primarily a mathematician, though he also worked in physics and engineering, he put in only around four hours of work a day. More than once, hard work failed to crack a problem for Poincaré while laziness or relaxation did the trick; for example, drinking coffee too late and messing up his sleep schedule:

For fifteen days I strove to prove that there could not be any functions like those I have since called Fuchsian functions. I was then very ignorant; every day I seated myself at my work table, stayed an hour or two, tried a great number of combinations and reached no results. One evening, contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination. By the next morning I had established the existence of a class of Fuchsian functions, those which come from the hypergeometric series; I had only to write out the results, which took but a few hours.

Henri Poincaré

Or, even more effortless, getting onto a bus:

I left Caen, where I was living, to go on a geological excursion under the auspices of the School of Mines. The incidents of the travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’s sake I verified the result at my leisure.

Then I turned my attention to the study of some arithmetical questions apparently without much success and without a suspicion of any connection with my preceding researches. Disgusted with my failure, I went to spend a few days at the seaside and thought of something else. One morning, while walking on the bluff, the idea came to me, with just the same characteristics of brevity, suddenness and immediate certainty, that the arithmetic transformations of indefinite ternary quadratic forms were identical with those of non-Euclidean geometry.

Henri Poincaré

(In fact there seems to be something about buses. If you are working on a problem you just can’t crack, maybe take a bus ride?)

In 1865, Kekulé himself came up with the answer. He related some years later that the vision of the benzene molecule came to him while he was riding on a bus and sunk in a reverie, half asleep. In his dream, chains of carbon atoms seemed to come alive and dance before his eyes, and then suddenly one coiled on itself like a snake. Kekulé awoke from his reverie with a start.

Isaac Asimov

Poincaré and Kekulé aren’t the only ones. For Linus Pauling, a head cold and pulpy detective novels seem to have done the trick:

In Oxford, it was April, I believe, I caught cold. I went to bed, and read detective stories for a day, and got bored, and thought why don’t I have a crack at that problem of alpha keratin.

Linus Pauling

This was one of the many achievements that led to his Nobel Prize in Chemistry in 1954. So next time you think, “I shouldn’t read detective stories until I get bored, I should be working,” please reconsider.

Insight comes suddenly and without warning, but rarely when you have your nose to the grindstone. So spend some time staring out your dormitory window. If you don’t learn to be lazy, you might miss it.


I lie on the beach like a crocodile and let myself be roasted by the sun. I never see a newspaper and don’t give a damn for what is called the world.

Albert Einstein, letter to Max Born

The hardest of the scientific virtues to cultivate may be the virtue of carefreeness. This is the virtue of not taking your work too seriously. If you try too hard, you get serious, you get worried, you’re not carefree anymore — you see, it’s a problem.

So I got this new attitude. Now that I am burned out and I’ll never accomplish anything, I’ve got this nice position at the university teaching classes which I rather enjoy, and just like I read the Arabian Nights for pleasure, I’m going to play with physics, whenever I want to, without worrying about any importance whatsoever.

Within a week I was in the cafeteria and some guy, fooling around, throws a plate in the air. As the plate went up in the air I saw it wobble, and I noticed the red medallion of Cornell on the plate going around. It was pretty obvious to me that the medallion went around faster than the wobbling.

I had nothing to do, so I start to figure out the motion of the rotating plate. I discover that when the angle is very slight, the medallion rotates twice as fast as the wobble rate — two to one. It came out of a complicated equation! Then I thought, “Is there some way I can see in a more fundamental way, by looking at the forces or the dynamics, why it’s two to one?”

I don’t remember how I did it, but I ultimately worked out what the motion of the mass particles is, and how all the accelerations balance to make it come out two to one.

I still remember going to Hans Bethe and saying, “Hey, Hans! I noticed something interesting. Here the plate goes around so, and the reason it’s two to one is…” and I showed him the accelerations.

He says, “Feynman, that’s pretty interesting, but what’s the importance of it? Why are you doing it?”

“Hah!” I say. “There’s no importance whatsoever. I’m just doing it for the fun of it.” His reaction didn’t discourage me; I had made up my mind I was going to enjoy physics and do whatever I liked.

I went on to work out equations of wobbles. Then I thought about how electron orbits start to move in relativity. Then there’s the Dirac Equation in electrodynamics. And then quantum electrodynamics. And before I knew it (it was a very short time) I was “playing” — working, really — with the same old problem that I loved so much, that I had stopped working on when I went to Los Alamos: my thesis-type problems; all those old-fashioned, wonderful things.

It was effortless. It was easy to play with these things. It was like uncorking a bottle: Everything flowed out effortlessly. I almost tried to resist it! There was no importance to what I was doing, but ultimately there was. The diagrams and the whole business that I got the Nobel Prize for came from that piddling around with the wobbling plate.

Richard Feynman

This is related to the scientific virtue of laziness — a carefree person will find it easier to take time off from their work, to relax, go sailing, play ping-pong, etc. But carefreeness is a higher virtue even than laziness. Being carefree means staying relaxed and unworried even when you are working very hard.

If you do not cultivate the sense of carefreeness, you will get all tangled up about not working on “important” problems. You will get all tangled up about working on the things you think you “should be” working on, instead of the things you want to be working on, the things you find fun and interesting.

If research starts to be a drag, it won’t matter how talented you are. Nothing will kill your spark faster than finding research dull. Nothing will wring you out more than working on things you hate but think are “important”.

This is tricky because there are many different ways you can lose your sense of carefreeness. There are a lot of things that can throw off your groove. The first is becoming attached to worldly rewards — cash, titles, fancy hats, etc.

I am happy because I want nothing from anyone. I do not care about money. Decorations, titles or distinctions mean nothing to me. I do not crave praise. The only thing that gives me pleasure, apart from my work, my violin, and my sailboat, is the appreciation of my fellow workers.

Albert Einstein

When you start seeking these rewards, or even thinking about them too much, the whole research enterprise falls apart. Sometimes this can happen overnight. 

You might say, “well surely someone has to think about these practical problems.” It’s true that some people should think about worldly things, but we don’t exactly see a shortage of that. What cannot be forced, and can only be cultivated, are free minds pursuing things that no one else thinks are interesting problems, for no good reason at all.

We must not forget that when radium was discovered no one knew that it would prove useful in hospitals. The work was one of pure science. And this is a proof that scientific work must not be considered from the point of view of the direct usefulness of it. It must be done for itself, for the beauty of science, and then there is always the chance that a scientific discovery may become like the radium a benefit for humanity.

Marie Curie

The best ideas are almost certainly going to be ones that seem insane or stupid — if they seemed like good ideas, someone would have tried them already. How can there possibly be a market for such ideas? They are left to people who are carefree enough in their spirit to pursue these dumb ideas anyway. Most great advances are preceded by announcements that they are impossible, and you need to be ready and willing to ignore that stuff:

The whole procedure [of shooting rockets into space]… presents difficulties of so fundamental a nature, that we are forced to dismiss the notion as essentially impracticable, in spite of the author’s insistent appeal to put aside prejudice and to recollect the supposed impossibility of heavier-than-air flight before it was actually accomplished.

Sir Richard van der Riet Woolley, British astronomer, reviewing P.E. Cleator’s “Rockets in Space”, Nature, March 14, 1936

Some people are ok at resisting money and fame, but most find it harder to avoid being swayed by praise. It is easy to want to impress people, and to want them to like you. But if you start worrying about praise, two things will happen. First, you will be worrying, which will cloud your head. Second, if you are chasing praise, you will work on problems that are popular. Popular problems are fine, but you have to know that they will be seductive. You should pay more attention to topics you like that aren’t popular.

Focusing on unpopular problems you find fascinating is a good sign that you’re making use of your particular talents. Following praise is a sign you are being led away from your gifts! Taste is really important — follow what you find interesting.

…my work, which I’ve done for a long time, was not pursued in order to gain the praise I now enjoy, but chiefly from a craving after knowledge, which I notice resides in me more than in most other men. And therewithal, whenever I found out anything remarkable, I have thought it my duty to put down my discovery on paper, so that all ingenious people might be informed thereof.

Antonie van Leeuwenhoek, Letter of June 12, 1716

Another trap is worrying about being an “expert”: keeping up with the field, staying aware of the latest publications, et cetera. Staying carefree means being happy to ignore these things (if you feel like it).

You can tell really good science because it stays carefree even when the stakes are very high:

I remember a friend of mine who worked with me, Paul Olum, a mathematician, came up to me afterwards and said, “When they make a moving picture about this, they’ll have the guy coming back from Chicago to make his report to the Princeton men about the bomb. He’ll be wearing a suit and carrying a briefcase and so on — and here you’re in dirty shirtsleeves and just telling us all about it, in spite of its being such a serious and dramatic thing.”

Richard Feynman

Staying carefree is how you keep in touch with what really interests you. It is how you practice going with your gut. It is how you make sure you are still having fun. 

No one is doing great work when they are bent over their lab bench thinking, “gee I wish I were doing something else!” Great work doesn’t come from banging your head against your keyboard a little harder. 

Alan Turing’s celebrated paper of 1935, which was to provide the foundation of modern computer theory, was originally written as a speculative exploration for mathematical logicians. The war gave him and others the occasion to translate theory into the beginnings of practice for the purpose of code-breaking, but when it appeared nobody except a handful of mathematicians even read, let alone took notice of Turing’s paper.

Eric Hobsbawm on Alan Turing

We cannot emphasize enough that great work almost always comes from things that at the time seemed like pointless nonsense. Those scientists did it anyway, because it interested them. But to do that you will have to be ready to stand against the world, against people telling you that you should be using your gifts on something more productive, that you are wasting your talents! Cultivating this carefreeness will help you ignore them.

A large part of mathematics which becomes useful developed with absolutely no desire to be useful, and in a situation where nobody could possibly know in what area it would become useful; and there were no general indications that it ever would be so. By and large it is uniformly true in mathematics that there is a time lapse between a mathematical discovery and the moment when it is useful; and that this lapse of time can be anything from 30 to 100 years, in some cases even more; and that the whole system seems to function without any direction, without any reference to usefulness, and without any desire to do things which are useful.

John von Neumann

Not every pointless idea ends up being a great discovery — most of them do not. But a feature you will see over and over again in great scientists is a complete lack of fear when it comes to pursuing ideas that seem like (or truly are) nonsense. You might have to look into 100 dumb ideas before you find one that is any good — in fact, maybe you should start right now.

I’ve noticed that my dog can correctly tell which way I’ve gone in the house, especially if I’m barefoot, by smelling my footprints. So I tried to do that: I crawled around the rug on my hands and knees, sniffing, to see if I could tell the difference between where I walked and where I didn’t, and I found it impossible. So the dog is much better than I am.

Richard Feynman

Most people find it hard to stay carefree all the time. When you choke, and start worrying about things — are you working on the right stuff, are you wasting your life, etc. — cultivating the virtue of carefreeness is the way to get back on top.


I am among those who think that science has great beauty. A scientist in his laboratory is not only a technician: he is also a child placed before natural phenomena which impress him like a fairy tale. We should not allow it to be believed that all scientific progress can be reduced to mechanisms, machines, gearings, even though such machinery also has its beauty.

Neither do I believe that the spirit of adventure runs any risk of disappearing in our world. If I see anything vital around me, it is precisely that spirit of adventure, which seems indestructible and is akin to curiosity.

Marie Curie

The fifth virtue that a scientist must cultivate is an appreciation for beauty. There are practical reasons to do science, but in the moment, great research is done simply because the work is beautiful, and great researchers let themselves enjoy that beauty.

This eye for beauty is not optional! It is, like all the scientific virtues, essential for doing any kind of original research.

The scientist does not study nature because it is useful; he studies it because it pleases him, and it pleases him because it is beautiful. Were nature not beautiful, it would not be worth knowing, life would not be worth living.

Henri Poincaré

Every scientist is limited by their appreciation for beauty. If you have developed an eye for it, your work will benefit. Without a sense for it, your work will suffer. It does not matter if your taste is for poetry, pinwheels, or cricket plays. You can have an obsession with video game music, or be an amateur baker. You must be able to see the beauty in something — it is practice for seeing the beauty and the harmony of nature. The more kinds of beauty you learn to appreciate, the better your work will become.

The mathematician’s patterns, like the painter’s or the poet’s, must be beautiful; the ideas, like the colours or the words, must fit together in a harmonious way. Beauty is the first test: there is no permanent place in this world for ugly mathematics. 

G. H. Hardy

To many people, a scientist will seem obsessive. This is true, but obsession is not by itself a virtue. The obsession you see in many researchers comes from their sense of beauty — they know what it should look like. They have an intense need to get it right. They cannot let it alone when they know it is wrong — it keeps calling them back. Only when it is right will it be beautiful.

Copernicus’ aesthetic objections to [equants] provided one essential motive for his rejection of the Ptolemaic system.

Thomas Kuhn, The Copernican Revolution

This is why we cultivate an appreciation for aesthetics, rather than cultivating obsession itself. Pure obsession will lead you to pursue any project anywhere, even if it leads you up a tree. Cultivating aesthetics, you will only follow projects if they lead you up the trunks of particularly beautiful trees.

This builds on itself. Building an aesthetic sense leads you to become a better researcher. Practicing this sense in your work becomes another way to develop this virtue. Having developed the virtue, you can now appreciate the beauty in more things. This develops your aesthetic sense further, your work improves, the virtue reaches a higher stage of refinement, etc.

I have a friend who’s an artist, and he sometimes takes a view which I don’t agree with. He’ll hold up a flower and say, “Look how beautiful it is,” and I’ll agree. But then he’ll say, “I, as an artist, can see how beautiful a flower is. But you, as a scientist, take it all apart and it becomes dull.” I think he’s kind of nutty. … There are all kinds of interesting questions that come from a knowledge of science, which only adds to the excitement and mystery and awe of a flower. It only adds. I don’t understand how it subtracts.

Richard Feynman

Part of what is called beauty could simply be called fun. If you don’t know how to have fun, you will not be able to appreciate the beauty around you — you will not have a good time.

McClintock was motivated by the intrinsic rewards that she experienced from the work itself. She was rewarded every day by the joy she felt in the endeavor. She loved posing questions, finding answers, solving problems. She loved working in her garden and in her laboratory. She recalled later, “I was doing what I wanted to do, and there was absolutely no thought of a career. I was just having a marvelous time.” 

Upon hearing that she had been named for the Nobel Prize, McClintock told reporters, “The prize is such an extraordinary honor. It might seem unfair, however, to reward a person for having so much pleasure, over the years, asking the maize to solve specific problems and then watching its response.” When asked if she was bitter about the lateness of the recognition, she said simply, “If you know you’re right, you don’t care. You know that sooner or later, it will come out in the wash.”

— Abigail Lipson on Barbara McClintock

Given all this, perhaps it’s not surprising that many scientists are also talented artists and musicians.

If I was not a physicist, I would probably be a musician. I often think in music. I live my daydreams in music. I see my life in terms of music. … I cannot tell if I would have done any creative work of importance in music, but I do know that I get most joy in life out of my violin.

Albert Einstein

Just how good a violinist was Einstein? One time, a confused music critic in Berlin thought Einstein was a famous violinist rather than a famous physicist, and said, “Einstein’s playing is excellent, but he does not deserve world fame; there are many others just as good.”

Leonardo da Vinci is famous for his painting and drawing, of course, but what you may not know is that he was also something like the 15th-century equivalent of a heavy metal virtuoso:

In the year 1494, Leonardo was summoned to Milan in great repute to the Duke, who took much delight in the sound of the lyre, to the end that he might play it: and Leonardo took with him that instrument which he had made with his own hands, in great part of silver, in the form of a horse’s skull—a thing bizarre and new—in order that the harmony might be of greater volume and more sonorous in tone; with which he surpassed all the musicians who had come together there to play. Besides this, he was the best improviser in verse of his day.

Giorgio Vasari

Richard Feynman (Nobel Prize in Physics, 1965) was famous for playing bongos, and briefly played the frigideira in a Brazilian samba band. He also made some progress as a portrait artist, to the point where he sold several pieces and even had a small exhibit. 

Barbara McClintock (Nobel Prize in Physiology or Medicine, 1983) played tenor banjo in a jazz combo for years, but in the end she had to give it up because it kept her up too late at night. 

Santiago Ramón y Cajal (Nobel Prize in Physiology or Medicine, 1906) ranks up there almost with Da Vinci in terms of the incredible breadth of his artistic pursuits:

Santiago Ramón y Cajal (1852–1934) is one of the more fascinating personalities in science. Above all he was the most important neuroanatomist since Andreas Vesalius, the Renaissance founder of modern biology. However, Cajal was also a thoughtful and inspired teacher, he made several lasting contributions to Spanish literature (his autobiography, a popular book of aphorisms, and reflections on old age), and he wrote one of the early books on the theory and practice of color photography. Furthermore, he was an exceptional artist, perhaps the best ever to draw the circuits of the brain, which he could never photograph to his satisfaction.

Larry W. Swanson, foreword to Cajal’s book Advice for a Young Investigator

We can add to this list that Cajal also wrote a number of science-fiction stories that were considered too scandalous for publication. Five were eventually published under the pseudonym “Dr. Bacteria” (yes, really), but the rest were considered too offensive to be published even at this remove, and they have since been lost.

This was also true for many of the old masters. James Clerk Maxwell was fascinated by color, and helped invent color photography. Robert Hooke was apprenticed to a painter as a young man, and proved pretty good at it. He did all his own illustrations for his book Micrographia, which to this day remain impressive. Sir Isaac Newton also seemed to have quite the knack for illustration:

Mr. Clark, aforementioned now apothecary, & surgeon in Grantham, tells me, that he himself likewise lodg’d, whilst a youth, in that same garret in the old house where Sr. Isaac had done. he says, the walls, & ceelings were full of drawings, which he had made with charcole. there were birds, beasts, men, ships, plants, mathematical figures, circles, & triangles. that the drawings were very well done. & scarce a board in the partitions about the room, without Isaac Newton cut upon it. … Sr Isaac when a lad here at School, was not only expert at his mechanical tools, but equally so with his pen. for he busyed himself very much in drawing, which he took from his own inclination; & as in every thing else, improv’d it by a careful observation of nature.

— William Stukeley on Isaac Newton

This list is incomplete — not every talented scientist is also a musician or artist. But a scientist’s success depends on the cultivation of their aesthetic sense, and this sense of beauty is essential to every researcher.

I am no poet, but if you think for yourselves, as I proceed, the facts will form a poem in your minds. 

Michael Faraday


… a reaction I learned from my father: Have no respect whatsoever for authority; forget who said it and instead look what he starts with, where he ends up, and ask yourself, “Is it reasonable?” 

Richard Feynman

To do research you must be free. Free to question. Free to doubt. Free to come up with new perspectives and new approaches. Free to challenge the old ways of doing things, or worse, ignore them. Free to try to solve problems where everyone thinks they know the answer. Free to not spend all your time hunched over your workbench and let your mind wander. Free to tinker with pointless ideas. Free to turn over rocks and look at the bugs underneath. 

The world must be free and open as well. You need to be free to meet and discuss things with anyone you want. You must have free access to books, libraries, journals, the internet. You must be free to try things and build things for yourself. 

But not everyone shares these values. And, because we are social creatures and we were brought up in societies that are less than totally free, we carry around an inner authoritarian in our heads. We cultivate the virtue of rebellion to free us from inner and outer attempts to suppress our freedom of thought and expression. 

There must be no barriers to freedom of inquiry … There is no place for dogma in science. The scientist is free, and must be free to ask any question, to doubt any assertion, to seek for any evidence, to correct any errors. Our political life is also predicated on openness. We know that the only way to avoid error is to detect it and that the only way to detect it is to be free to inquire. And we know that as long as men are free to ask what they must, free to say what they think, free to think what they will, freedom can never be lost, and science can never regress.

J. Robert Oppenheimer

Spitting in the eye of authority isn’t easy — it doesn’t come naturally to most people. So rebellion must be cultivated in small ways every day. You may not have to actively rebel very often, but the material for raising hell should always be kept in readiness.

To do science you have to be ready to pick at the idea that something might be wrong. The most important new ideas are going to be most at odds with what we believe right now. Having a mind free enough to think thoughts that have never been thought before is absolutely necessary.

The vibe of rebellion is, “the prevailing order is wrong — but some other order might be right.” Things could be fundamentally different than they are now; everything you take for granted could be ungranted. 

It’s not that this is true 100% of the time — sometimes the usual way of thinking is right — just that it won’t be obvious unless you’re questioning what you “know”. To some degree, rebellion is basically just acknowledging that the status quo can lead you astray.

Not everyone likes the idea of turning the current order upside down, so you may have to fight for it, or even for the right to speculate about it. But it’s important because making the world a better place is worth it. 

Research depends on cultivating the skill of looking at something and thinking — gee, this could be better. This instrument could be better. This theory could be better. Our understanding of this question could be better. This leads to the cultivation of the virtue of rebellion, where you look at how things are today, and think, you know what, they could be better.

I won’t stop at being Robin Hood. I feel more like a revolutionary because the final goal is not only to download all the articles and books and give open access to them, but to change legislation in such a way that free distribution of research papers will not face any legal obstacles.

Alexandra Elbakyan

Rebellion is one of the highest scientific virtues. It is supported by stupidity — because you have to be pretty dumb to bet against the status quo and think you can win. It is supported by arrogance — in that you must be pretty arrogant to think you know better than the experts. It is supported by aesthetics — because seeing the possibility for a more beautiful experiment, a more beautiful theory, a more beautiful world is needed to inspire your rebellion. It is supported by carefreeness — not worrying about whether you win or lose makes the struggle against authority that much easier. Whenever possible, rebellion should be fun.

Rebellion is also egalitarian — it means focusing on people’s arguments, not their credentials. If their arguments are solid, then it doesn’t matter if they are, in fact, a soccer mom. If their arguments are so full of holes you can see them from a mile away, then it doesn’t matter where their PhD is from, or what university gave them tenure.

If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong. That is all there is to it.

Richard Feynman

The virtue of rebellion means cultivating in yourself the ability to stand up to anyone on the planet, to question them as an equal, and to not take anything they say on authority alone. But rebellion is not about getting in fights for no reason — be strategic.

John Tukey almost always dressed very casually. He would go into an important office and it would take a long time before the other fellow realized that this is a first-class man and he had better listen. For a long time John has had to overcome this kind of hostility. It’s wasted effort! I didn’t say you should conform; I said “The appearance of conforming gets you a long way.” If you choose to assert your ego in any number of ways, “I am going to do it my way,” you pay a small steady price throughout the whole of your professional career. And this, over a whole lifetime, adds up to an enormous amount of needless trouble.

Richard Hamming 

This virtue extends outside of the research world, because nature does not stop at the laboratory door! Practicing rebellion has to extend to every part of your life. 

It’s easy to parrot experts. Even just saying “I don’t understand” is an act of rebellion. If you want to be free to be confused, to doubt, to ask dumb questions, you need to be prepared to be a rebel.

Every valuable human being must be a radical and a rebel, for what he must aim at is to make things better than they are.

Niels Bohr

You need to cultivate rebellion because people won’t always understand the value of that weird thing you are doing. You have to be ready to do it anyway. One reviewer of Charles Darwin’s book On The Origin of Species suggested that “Mr. D” re-write the book to focus on his observations of pigeons. “Every body is interested in pigeons,” they said. “The book would be reviewed in every journal in the kingdom, & would soon be on every table. … The book on pigeons would be at any rate a delightful commencement.” Barbara McClintock’s parents were against her research because they didn’t think there was any value in genetics!

The world in general disapproves of creativity, and to be creative in public is particularly bad. Even to speculate in public is rather worrisome.

Isaac Asimov

Similarly, if you have cultivated this virtue, you will also be ok with other people doing research that you don’t understand. Anyone doing really first-rate work must be doing something you don’t get — because if you understood it, it couldn’t possibly be all that original. So when you see a project that makes you scratch your head, think — it might be nothing, but let’s see where it goes, it could be a big deal.

In addition, exercising your rebellious thinking on social issues is good practice for rebellious thinking on scientific issues.

Unthinking respect for authority is the greatest enemy of truth.

Albert Einstein

Many people are open-minded. But some have a hard time imagining society changing in any way, even for the better; the idea makes them uncomfortable. So you need to be ready to try anyway, even in the face of this discouragement. 

I used to cut vegetables in the kitchen. String beans had to be cut into one-inch pieces. The way you were supposed to do it was: You hold two beans in one hand, the knife in the other, and you press the knife against the beans and your thumb, almost cutting yourself. It was a slow process. So I put my mind to it, and I got a pretty good idea. I sat down at the wooden table outside the kitchen, put a bowl in my lap, and stuck a very sharp knife into the table at a forty-five-degree angle away from me. Then I put a pile of the string beans on each side, and I’d pick out a bean, one in each hand, and bring it towards me with enough speed that it would slice, and the pieces would slide into the bowl that was in my lap.

So I’m slicing beans one after the other — chig, chig, chig, chig, chig — and everybody’s giving me the beans, and I’m going like sixty when the boss comes by and says, “What are you doing?”

I say, “Look at the way I have of cutting beans!” — and just at that moment I put a finger through instead of a bean. Blood came out and went on the beans, and there was a big excitement: “Look at how many beans you spoiled! What a stupid way to do things!” and so on. So I was never able to make any improvement, which would have been easy — with a guard, or something — but no, there was no chance for improvement.

Richard Feynman

This puts you at odds with authority. Kings, princes, and network executives do not want revolutionary new ideas. They generally like the current system, because they are used to it, and this system has given them positions of respect and power. They are going to do what they can to encourage people to accept how things are, or at least accept that for any problems that do exist, qualified people are taking care of it.

The scientist has a lot of experience with ignorance and doubt and uncertainty, and this experience is of very great importance, I think. When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty darn sure of what the result is going to be, he is still in some doubt. We have found it of paramount importance that in order to progress we must recognize our ignorance and leave room for doubt. Scientific knowledge is a body of statements of varying degrees of certainty – some most unsure, some nearly sure, but none absolutely certain. Now, we scientists are used to this, and we take it for granted that it is perfectly consistent to be unsure, that it is possible to live and not know. But I don’t know whether everyone realizes this is true. Our freedom to doubt was born out of a struggle against authority in the early days of science. It was a very deep and strong struggle: permit us to question – to doubt – to not be sure. I think that it is important that we do not forget this struggle and thus perhaps lose what we have gained.

Richard Feynman

It is not enough to simply question the wisdom of experts, or to not listen to authority yourself. You have to cultivate ACTIVE REBELLION. Authority will constantly be telling you that things are understood, that they cannot be improved, that you cannot run in the halls. You need to actively undermine this — by finding ways that the world is not understood, by trying to improve things, by organizing go-kart races during lunch period. 

Authority will tell you to wait until the time is right, or to wait for other people who are more qualified. But if you wait you will never get anywhere. You need to try small things right away, to try and fail and learn, to experiment and have a go at it.

Science as subversion has a long history. … Davis and Sakharov belong to an old tradition in science that goes all the way back to the rebels Benjamin Franklin and Joseph Priestley in the eighteenth century, to Galileo and Giordano Bruno in the seventeenth and sixteenth. If science ceases to be a rebellion against authority, then it does not deserve the talents of our brightest children. … We should try to introduce our children to science today as a rebellion against poverty and ugliness and militarism and economic injustice.

Freeman Dyson

This even puts you at odds with other scientists. Like other entrenched authorities, any change to the status quo threatens the position of scientists who have come before you. In fact it’s somewhat worse with other scientists, because the more famous they are, the bigger a target there is on their back. A good way to do great work is to tear down famous work by the previous generation, and you can imagine why the previous generation has a hard time feeling excited about this idea.

When an old and distinguished person speaks to you, listen to him carefully and with respect — but do not believe him. Never put your trust into anything but your own intellect. Your elder, no matter whether he has gray hair or has lost his hair, no matter whether he is a Nobel laureate — may be wrong.

Linus Pauling

Ideas can also have authority. A good idea in science tends to stick around until you barely notice it anymore. It’s not just that you see it as necessary; it starts to seem like part of the background, a totally reasonable assumption. You take it for granted. But questioning old ideas is even more important than questioning old people, and one of the highest exercises of rebellion is trying to tear down old ways of thinking, ways of thinking so old that you didn’t even realize you thought that way.

Concepts that have proven useful in ordering things easily achieve such authority over us that we forget their earthly origins and accept them as unalterable givens. Thus they might come to be stamped as “necessities of thought,” “a priori givens,” etc. The path of scientific progress is often made impassable for a long time by such errors. Therefore it is by no means an idle game if we become practiced in analysing long-held commonplace concepts and showing the circumstances on which their justification and usefulness depend, and how they have grown up, individually, out of the givens of experience. Thus their excessive authority will be broken. They will be removed if they cannot be properly legitimated, corrected if their correlation with given things be far too superfluous, or replaced if a new system can be established that we prefer for whatever reason.

Albert Einstein, Obituary for physicist and philosopher Ernst Mach (Nachruf auf Ernst Mach)

This is the great curse of success in science — it turns you into an authority figure. All of a sudden you, the little fringe weirdo that you are, are regarded as an expert. People start taking you seriously. People stop questioning your work, and start defending it! What’s worse, they defend your work on its reputation, rather than on how good it is. 

To punish me for my contempt of authority, Fate has made me an authority myself.

Albert Einstein

If you are so unlucky as to live to see this tragedy, you should try to see your status as an authority figure as a big joke. When it comes to these things, you need to have a sense of…


Good design is often slightly funny. … Gödel’s incompleteness theorem seems like a practical joke.

Paul Graham

The final, and perhaps most important, virtue is humor. We see over and over again that great scientists had wonderful, strange senses of humor.

Einstein in real life was not only a great politician and a great philosopher. He was also a great observer of the human comedy, with a robust sense of humor. … Lindemann took him to the school to meet one of the boys who was a family friend. The boy was living in Second Chamber, in an ancient building where the walls are ornamented with marble memorials to boys who occupied the rooms in past centuries. Einstein and Lindemann wandered by mistake into the adjoining First Chamber, which had been converted from a living room to a bathroom. In First Chamber, the marble memorials were preserved, but underneath them on the walls were hooks where boys had hung their smelly football clothes. Einstein surveyed the scene for a while in silence, and then said: “Now I understand: the spirits of the departed pass over into the trousers of the living.”

Freeman Dyson, “Einstein as a Jew and a Philosopher”, The New York Review of Books

A good sense of humor comes in many forms — wordplay, slapstick, poking fun at annoying colleagues…

It is said that the Prior of that place kept pressing Leonardo, in a most importunate manner, to finish the work … he complained of it to the Duke, and that so warmly, that he was constrained to send for Leonardo … [Leonardo explained] that two heads were still wanting for him to paint; that of Christ, which he did not wish to seek on earth; … Next, there was wanting that of Judas, which was also troubling him, not thinking himself capable of imagining features that should represent the countenance of him who, after so many benefits received, had a mind so cruel as to resolve to betray his Lord, the Creator of the world. However, he would seek out a model for the latter; but if in the end he could not find a better, he should not want that of the importunate and tactless Prior. This thing moved the Duke wondrously to laughter.

Giorgio Vasari

In On the Origin of Species, Darwin wrote that bumblebees are the only species that pollinates red clover. He discovered in 1862 that honeybees also pollinate red clover. Prompted by this discovery, he wrote to his friend John Lubbock, saying, “I hate myself, I hate clover, and I hate bees.” In a letter to W. D. Fox in October 1852, he wrote of his work on Cirripedia, “of which creatures I am wonderfully tired: I hate a Barnacle as no man ever did before, not even a Sailor in a slow-sailing ship.” Another time he wrote, “I am very poorly today and very stupid and hate everybody and everything.”

Many things conspire to make humor so important. One aspect of humor is noticing a pattern that almost everyone has missed, but which is undeniable once it’s been pointed out. Really good research does the same thing — you notice something that has always been there, and which is apparent in retrospect, but that no one has ever noticed before.

Once the cross-connection is made, it becomes obvious. Thomas H. Huxley is supposed to have exclaimed after reading On the Origin of Species, “How stupid of me not to have thought of this.”

Isaac Asimov

Making these little connections is an essential part of humor. If you train yourself to see and appreciate these little jokes in your everyday life, with friends, at the movies, etc., you will get better at seeing them in your work.

In spite of twenty-five years in Southern California, [Aldous Huxley] remains an English gentleman. The scientist’s habit of examining everything from every side and of turning everything upside down and inside out is also characteristic of Aldous. I remember him leafing through a copy of Transition, reading a poem in it, looking again at the title of the magazine, reflecting for a moment, then saying, “Backwards it spells NO IT ISN(T) ART.”

Igor Stravinsky, Dialogues

David Ogilvy wasn’t a scientist, but he was right when he said, “The best ideas come as jokes. Make your thinking as funny as possible.”

One of economist Tyler Cowen’s favorite questions to bug people with is, “‘What is it you do to train that is comparable to a pianist practicing scales?’ If you don’t know the answer to that one, maybe you are doing something wrong or not doing enough.” For scientists, the perfect practice is telling jokes. 

László Polgár believed that geniuses are made, not born, and set out to prove it. He kept his daughters on a strict educational schedule that included studying chess for up to six hours a day. There was also a twenty-minute period dedicated to telling jokes.

— Louisa Thomas on László Polgár

Having a sense of humor also helps keep things in perspective.

When I gave a lecture in Japan, I was asked not to mention the possible re-collapse of the universe, because it might affect the stock market. However, I can re-assure anyone who is nervous about their investments that it is a bit early to sell: even if the universe does come to an end, it won’t be for at least twenty billion years. By that time, maybe the GATT trade agreement will have come into effect.

Stephen Hawking

Humor keeps you from taking yourself too seriously.

The downside of my celebrity is that I cannot go anywhere in the world without being recognized. It is not enough for me to wear dark sunglasses and a wig. The wheelchair gives me away.

Stephen Hawking

Life is hard — sometimes the world is very dark. Research can be challenging. Pursuing an interest that few people understand, that sets you up against the authorities of your day, is often isolating. Scientists may discover things they would rather not have known. A sense of humor lessens the burden.

Schopenhauer’s saying, that “a man can do as he will, but not will as he will,” has been an inspiration to me since my youth up, and a continual consolation and unfailing well-spring of patience in the face of the hardships of life, my own and others’. This feeling mercifully mitigates the sense of responsibility which so easily becomes paralyzing, and it prevents us from taking ourselves and other people too seriously; it conduces to a view of life in which humor, above all, has its due place.

Albert Einstein

Another reason to cultivate humor is that nature is really weird. It will always be stranger and more amusing than you expect. The only way to keep up is to try to think in jokes. If you have a good sense of humor, you will end up closer to the truth. “Wouldn’t it be absurd if X were true?” you think, only to discover the next day that X is indeed true. 

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka” but “That’s funny…”

Isaac Asimov

Finally, science is very social. If you have a good sense of humor, people will like you. You will get along with them better; you will have more fun; probably you will do better work together! Humor is worth cultivating for this reason too.

Humor is generative. It attracts unusual people and ideas, the sort that wouldn’t otherwise end up in the same place together.

A deep sense of humor and an unusual ability for telling stories and jokes endeared Johnny even to casual acquaintances.

— Eugene Wigner, in “John von Neumann (1903 – 1957)”

Science is too important to be taken seriously. In the end, if you cannot have some fun out of your research, if you cannot see in some way how ridiculous the whole thing is — then what’s the point? 

When I was younger I was anti-culture, but my father had some good books around. One was a book with the old Greek play The Frogs in it, and I glanced at it one time and I saw in there that a frog talks. It was written as “brek, kek, kek.” I thought, “No frog ever made a sound like that; that’s a crazy way to describe it!” so I tried it, and after practicing it awhile, I realized that it’s very accurately what a frog says.

So my chance glance into a book by Aristophanes turned out to be useful, later on: I could make a good frog noise at the students’ ceremony for the Nobel-Prize-winners! And jumping backwards fit right in, too. So I liked that part of it; that ceremony went well.

Richard Feynman

Predictions for 1950

[Previously in this series: Predictions for 2050, A Few More Predictions for 2050]

Happy New Year to all! Welcome to 1922, and welcome back to our column, Slime of the Times.

Last year Professor Erik Hoel of Tufts University wrote in his column about all the things that will change between now and the far-distant future year of 1950. But this is no pulp-magazine tall tale; Professor Hoel says that the best way to predict the future of tomorrow is to extend the burgeoning trends of today.

While it sounds impossibly far off, the truth is that 1950 is a mere 28 years away. Making predictions for 1950 based on what we see to-day is just like sitting in the Gay Nineties and predicting what the world would look like in 1920 — no mean task, but not impossible either.

To us this seemed like jolly good fun. So without further ado, here is our set of predictions for the distant future that is 1950.

You Can’t Keep ‘Em Down on the Farm 

By now you all know the 1919 hit song of wild acclaim, “How Ya Gonna Keep ’em Down on the Farm (After They’ve Seen Paree)?” Probably you have heard the recording by legendary vaudeville darling Nora Bayes. And maybe you know the single from last year, “In a Cozy Kitchenette Apartment” from Irving Berlin’s Music Box Revue, rhapsodizing about urban living. 

Well, Miss Nora and all the other songbirds are right. The rural problem is here and it’s here to stay — city drift is the trend of the next 30 years and beyond. Already more than half of all Americans live in cities, and that is not stopping any time soon. The agrarian America we know and love is coming to an end. 

Metropolitan apartment living will soon be normal and even, in time, respectable. The home as you know it will soon be out of date, and instead of living in a pleasant frame dwelling with a front yard and crimson rambler roses over the porch, your son or daughter will live in a huge apartment building, where among hundreds of cell-like cubicles, they will be known to their neighbors not by name, but as “50A” or “32B”.

Young people in the cities will eat from “delicatessens” and “cafeterias” rather than from the kitchen. Already there are more than fifty delicatessens in Baltimore, and they are spreading in most every major city. Meanwhile the so-called cafeterias bring Chicago ideas of mechanical efficiency to the American dinner service, resembling nothing so much as a factory assembly line.

Some of you may think that delicatessens are the emblems of a declining civilization, the source of all our ills, and the destroyer of the home. But to our children and our children’s children even this scourge will become unremarkable, whatever the consequences may be to the American family.

Yes, our great-grandchildren will eat sandwiches, a natural by-product of modern machine civilization, and never know what they are missing of their heritage.

The Servant Problem

The movement to these “efficiency apartments” will be spurred by many things, but one is the gradual but ever-increasing decline in the role of the domestic servant. The servant problem comes and goes, and if you read tips on how to hire and clothe them in the magazines, you might be convinced it is simply a seasonal concern. But it gets harder to find servants every year, and it will get harder still, until the servant as we know her disappears.

The middle class will soon abandon servants almost entirely. The very well-to-do might employ a maid, but she will not attend the household day and night, and they will have no cooks and certainly no chauffeur. If they have a maid, they might even share her with other families. By 1950, only the very oldest, richest families will employ live-in servants. English butlers and Scotch maids, so common today in the households of your more fortunate relations, will be a thing almost entirely of the past.

Just imagine it. The silence of the once-great household. No more bustle on the streets every Sunday, when the maids and footmen take their weekly day off. No more fine, uniformed chauffeurs in front of estates. But where would you house them to begin with, in the tiny apartments of the far future? 

The Nation will be Powered over the Wires 

All of us can remember the time, not so long ago, when electric power was rare, even a novelty. But soon this wonder will be common-place in the homes of all. Indeed, it is already coming not only to private homes but to public buildings. President Benjamin Harrison was the first to benefit from electricity in the White House, all the way back in 1891. In 1906, Grand Central Terminal in New York City was electrified as well.

By the end of this year, almost four out of every ten US households will have electric wiring, and before 1930, more than half of households in the nation will be electrified. By 1950, every public building, and all but the meanest house, will have the full benefits of modern electrical systems. Even the most rural parts of our great nation will shine with electric light.

The posters might look something like this

The Finest Delicacies at Any Time of Year

Imagine a technology that captures freshness, abolishes the seasons, and erases the limitations of geography. Imagine food out of season; peaches from South Africa and strawberries picked green and shipped around the world. Imagine a midwestern housewife serving her family fresh filet of sole. 

These qualities represent the cutting edge of culinary modernity, and all will soon be made reality through the incredible power of refrigeration. Refrigerated railroad cars will bring delicacies long-distance from any locale. Refrigerated silos will store them year-round. Whatever regional delicacies you please, wherever you are.

Say good-bye to ice-harvesters and iceboxes! Forget about going down to the pond with a pair of tongs and bringing back a dribbling piece of ice. When foods and dishes reach your home, they will be stored in a fully electrified home refrigeration unit. You have probably heard of or even seen the clunky gas-powered household refrigeration unit produced by General Electric, or the more recent Kelvinator.

To be frank, these models are ugly and they are expensive — the Kelvinator will set you back as much as a new car! But everyone knows there is money in refrigeration. In the coming decade, dozens more models and companies will enter the fray. Some will be powered by gas, some by kerosine, but the ultimate winners will be those that run on electricity. Home refrigeration units will become more and more affordable. They will become compact and sleek, until they are admired as objects of modern beauty. These things will soon be so completely nifty to look at that merely to see one will be to have a passionate desire for one.

Advances in freezing foods will revolutionize American cuisine. Modern frozen foods are invariably soggy and lifeless, but scientific control over temperature will soon give us frozen dishes that preserve each food at the very peak of freshness. Peas, asparagus, and spinach, each as delicious as if it had just been bought from the farmer down the road, ready from the moment they are drawn from the freezer, with no preparation required, not even washing. Farm-fresh broilers, tender calves' liver, halibut, and even frozen goose — meats, poultry, vegetables, and fruit.

By 1950, futuristic markets equipped with refrigeration technology on a massive scale will be the norm. Enter any town market and choose from a huge variety of neatly stacked cartons of frozen fruits and vegetables, meats, and seafood, all of uniform quality, tastefully arranged in great refrigerated display cases that run the entire length of the store. 

There will be another Great War with the Hun

In Germany they are already concerned about the depreciation of the German mark. Each additional payment to France, England, and the United States brings a flood of paper currency and makes the depreciation of the mark greater. Yet the London ultimatum will be upheld, Germany will be destabilized, invigorating unhealthy parasitical elements within Germany itself, and within a generation there will be another great war.

This war will be, if it can be believed, even worse than the great war we just concluded. Already we have seen the evolution of aircraft, tools of peace, first into machines for reconnaissance, and then into “fighters” and bombers. In the next war, great flotillas of aircraft will level the jewels of Europe. New and terrible weapons will make even mustard gas seem as quaint as a musket.

This time the war will be truly great — a world war. China and Japan both fought in the last war, and have gotten a taste for it. Japan in particular grows hungry and bold after its victory over Russia. It desires nothing more than to be a great power, and will take advantage of any chaos to rival not only Russia but Germany, Great Britain, and perhaps even the United States.

However, the Ottomans will be gone, and will no longer be a major power. We would frankly be surprised if the Ottoman Empire lasts the rest of the year.

There will be another Great Depression

We all remember the hard times of the past two years, what will surely come to be called the Depression of 1920–1921. Many of you also remember the Panic of 1907, or Knickerbocker Crisis, when the breadlines in New York City grew to incredible lengths.

Now things seem to have stabilized, and the 1920s show every sign of being another long economic boom. Businesses are growing and factories are running full tilt, churning out line after line of dazzling new goods.

But we warn you that even in the world of tomorrow, expansion is followed by contraction, and we will see another Great Depression within a generation. It may even be worse — maybe this next downturn will be so bad that it will come to be called the Great Depression, and everyone will forget that there ever was a Great Depression of 1873.

We don’t remember this part of Teddy Roosevelt’s presidency, but we have to assume that the bears were part of a sound fiscal policy.

Business Girls

Many of us still carry in our minds psychological remnants of the age when the home and indeed the country was built upon masculine protection. But in reality, the world has already changed, and it is changing more rapidly all the time. A quarter of the American workforce is already staffed by women, working outside the home as typists, switchboard operators, stenographers, waitresses, store clerks, factory hands, and bookkeepers. 

Even now, there are some young couples where both the man and his bride hold down full-time jobs. (This is why they come to rely on the delicatessen.) When the next great war with the European powers comes — and come it will — more women will take on jobs left open by boys who are sent to the front. Old gentlemen may scoff, but the truth is that any woman who can use a kitchen mixer can learn to operate a drill. We will see women auxiliaries to our armed forces, women carpenters, perhaps even a woman riveter or some other such futuristic vision. 

The Nineteenth Amendment to the Constitution, in effect for only two years now, will change the face of American politics as much as the wars change the face of American labor. Before 1930 we shall see a woman senator, a woman representative, women mayors, and even women governors. Gradually women will enter the White House and serve in presidential cabinets. There shall be women diplomats. By 1950 Americans will have come to think nothing about a woman for the highest post in the land.

You Will Hear the Latest from New York and Chicago in the Comfort of your Drawing Room

It sounds like something out of a pulp magazine, but by 1950 there will be a radio in every home. Turning on the radio receiver will be as normal to your children as picking up a newspaper is to you.

You may already have heard of some of the early success stories, like KDKA in Pittsburgh, which you might know by its old call sign, 8XK. They have aired religious services, a speech by the great humanitarian Herbert Hoover, the Dempsey–Carpentier heavyweight boxing match, and just a few months ago, the first major league professional baseball game, the Pirates–Phillies game at Forbes Field. This is not the future — this is the present! You simply have not caught up yet to the incredible pace of advancements in radio.

Everything newspapers can do, radio will do better. And not only coverage of baseball games and boxing matches. Syndicated radio shows, like syndicated columns, but with voice and music. Radio plays, almost as good as going to the theater. News coverage, live from any city in the nation, or from around the world. Don’t read about the president’s speech in the paper; hear it in his own voice as if you were in Washington. 

The Newsroom of Tomorrow