Links for January 2022

There’s a new blog in town. All the old blogs are still here of course. Our small mining town is filling up, it’s shoulder-to-shoulder blogs in here. Blogs in the ditches, blogs in the pantry, blogs under the floorboards. Last night I went to my room and found three strange blogs asleep in my bed. Send help. Anyways, this new blog is Experimental History. The name comes from the author’s defining argument that psychology is best understood as the study of bothering people. “It’s nearly impossible to do psychology without bothering somebody,” he says, “and the moment you do, you’ve made experimental history.”

MichelangEmoji Bot is by far our favorite AI art project. In particular, it seems like a good argument against the idea that AI will remove human creativity from the process of creating art. The tools get weirder and weirder, but some genius still had to come up with templates like “EMOJI EMOJI – Landscape Series -” and “Portal to the EMOJI EMOJI Dimension”.

Our header image this month is Portal to the 💥💗 Dimension. Please enjoy this additional selection: 

And a similar idea: Horror design based on: 😂🎃🦈 Listen, uh… is she single?

Hmmm. Given all this, are emoji the hieroglyphs of the internet? Yes. There’s even an emoji Alethiometer.

Interesting two-part series about how (and when) the New York Times tests multiple headlines for a single article and what kinds of articles make it to the front page.

Speaking of which, we know that whoever wrote the headline for “Scientists fight crab for mysterious purple orb discovered in California deep” was doing it to get us to click, and listen, it worked.

Don’t believe everything you read on the internet of course, but this is big if true: autistic redditor claims that prescription oxytocin nasal spray temporarily suppresses his autism.

End of history disproven by awesome sink.

In Florida they don’t have snow days, but they do have Iguana Warnings.

Noah Smith: “There’s no stopping [the technology] bus. I can only promise you that it’s going to get weirder.” Strap in! 😀 

In other future-predicting news, expect cloaking devices on cars by 2030.

When Timothy Leary (the LSD guy, Harvard Psychology Professor, etc.) finally ended up in jail in 1970, this happened:

On January 21, 1970, Leary received a 10-year sentence for his 1968 offense, with a further 10 added later while in custody for a prior arrest in 1965, for a total of 20 years to be served consecutively. On his arrival in prison, he was given psychological tests used to assign inmates to appropriate work details. Having designed some of these tests himself (including the “Leary Interpersonal Behavior Inventory”), Leary answered them in such a way that he seemed to be a very conforming, conventional person with a great interest in forestry and gardening. As a result, he was assigned to work as a gardener in a lower-security prison from which he escaped in September 1970, saying that his non-violent escape was a humorous prank and leaving a challenging note for the authorities to find after he was gone.

If psychedelics are not your thing, consider the exciting and unusual world of hobby tunneling, where random people go full mole and sometimes dig elaborate series of tunnels beneath their house, yard, local public park, etc.

Notable examples include: 

Cray cray indeed: 

On twitter, Anton argues: what if we fixed San Francisco by doing crimes? “illegalism is a political philosophy which says that under unjust systems of control, doing crimes is a political act and revolutionaries ought to establish a parallel illegal economy” He also includes a list of “good crimes to do”, saying, “there’s a lot of really great crimes you can do if you have the resources and the will”. 

In related news, Applied Divinity Studies writes absolutely-not-medical-advice about how you can get fluvoxamine for less than $100 and a few minutes on a video call. We’re reminded of similar work by Scott Alexander, like Mental Health On A Budget and his “long post is long” post, Navigating And/Or Avoiding The Inpatient Mental Health System.

Also see this great thread by Maia Bittner, including tips like “literally no doctor cares if you’re tired or in pain or can’t work. *but* they do care about Activities of Daily Living (ADL). … Show up in kind of a decrepit state. Doctors are power freaks and they dismiss you if you’re put together. Super over-do your ADL- if you can’t put on socks, walk in barefoot. They need to see evidence of how your problem negatively affects your life when you walk in the room.” and “you know how every doctor gives you a 5 page form to fill out with all your family history of every possible disease? just skip it and give it back to them blank. they don’t even notice or look at it”. We’re not doctors so we don’t know if this is true, but either way, more stuff like this, please. Medical praxis seems like incredibly valuable low-hanging social good, and most of it isn’t even illegal! Someone should make a manual. In fact, if you want to make a manual, let us know.

Speaking of stoopid laws: The secret history of jaywalking

Surprising boondock etymology – “1910s during or around the Philippine–American War after the Spanish–American War, from Tagalog bundok (‘mountain’), adopted by occupying American soldiers serving in the mountains or rural countryside of the American-occupied Philippines under the United States Military Government of the Philippine Islands. The term was reinforced or re-adopted during World War II under the U.S. military, where terms like boondockers (‘shoes suited for rough terrain’) came originally in 1944 as U.S. services slang word for field boots. It was later shortened to boonies by 1964 originally among U.S. troops serving in the Vietnam War in reference to the rural areas of Vietnam, as opposed to Saigon.”  Can you guess why we looked up this etymology this month? 

Bad news for the West, friends — the world’s first classical Chinese programming language is out. It’s also beautiful: 

Pike squares were developed to fight cavalry. Slings could be used to panic war elephants. So too the answer to cheap modern drones may be motorcyclery — essentially, militarized biker gangs.  

If you are looking for new characters for your fiction, please please enjoy this list by Jess Nevins of the best characters from the pulps who were created in 1926 and thus fall into the public domain starting this year. Who among you will bring back the Crimson Clown: “The Crimson Clown is a ghastly frickin’ nightmare, of course—just *look* at that picture I posted two tweets back—it’s a clown with a gun, drinking Scotch! Pure nightmare fuel—which is why bringing him back and making him *really* scary would be a great idea.”

A video! Watch to the end.

Q: We’re going to ask this neighbor here, what do you think is happening in Telde?  What do you think is happening here? Hey, sir, I’m talking to you, don’t turn your back on me, man. What happens in your yard? Zapatero. What is your opinion on Zapatero’s government? What is your opinion on the Canary government? And of the city council of the island? Let’s ask him if he receives a subsidy from the government. Do you receive a subsidy from the government? Let’s see. What is going on with the mayors? What is happening with the president of the government and what is happening with the mayors of the island? What is happening with the city councilors? What’s happening…? Let’s see. Let’s see. Come here. Please. Don’t turn your back on me man, I want to talk to you. Look: What is going on with the mayor of Telde? What is going on with the mayors of the island?

A: WUALLLHHHHHHHH

“Excited to share a new study led by Shachar Givon & @MatanSamina w/ Ohad Ben Shahar,” begins this tweet. “Goldfish can learn to navigate a small robotic vehicle on land. We trained goldfish to drive a wheeled platform that reacts to the fish’s movement.” This is exciting, but it turns out home hackers have been letting fish control robot cars since at least 2014. Compare also to studies in teaching rats to drive tiny cars — researchers say the rats find this relaxing. Ok, new prediction for 2050: robot exoskeletons for small animals that let them navigate the human world, drive, take jobs, etc. Everyone was worried about automation, but no one was worried about the goldfish taking our jobs. Alternatively, if the self-driving cars don’t arrive on schedule, we can get rats or goldfish to do it.

“While recording the audiobook version of Charlotte’s Web, E.B. White needed 17 takes to read Charlotte’s death scene because he kept crying.”

Magawa, the landmine-sniffing hero rat, dies aged eight. Rest in peace hero 😢🥇🐀

Like a Lemon to a Lime, a Lime to a Lemon


We recently wrote a post about Maciej Cegłowski’s essay Scott And Scurvy, a fascinating account of how the cure for scurvy was discovered, lost, and then, by incredible chance, discovered again. At the time we said that this essay is one of the most interesting things we’ve ever read, and that we hoped to write more about it in the future. It was, we do, and here we go.

In the other post, we talked about what the history of scurvy can teach us about contradictory evidence — stuff that appears to disprove a theory, even though it doesn’t always. In this post, we want to talk about something different: the power of concepts.

First we’re gonna show you how bad it can be if you don’t have the concepts you need. Then we’re going to show you how bad it can be if you DO have concepts you DON’T need.

Diseases of Deficiency

As Cegłowski puts it:

There are several aspects of this ‘second coming’ of scurvy in the late 19th century that I find particularly striking … [one was] how difficult it was to correctly interpret the evidence without the concept of ‘vitamin’. Now that we understand scurvy as a deficiency disease, we can explain away the anomalous results that seem to contradict that theory (the failure of lime juice on polar expeditions, for example). But the evidence on its own did not point clearly at any solution. It was not clear which results were the anomalous ones that needed explaining away. The ptomaine theory made correct predictions (fresh meat will prevent scurvy) even though it was completely wrong.

We’re not quite sure if he’s right about the concept of “vitamin” — even James Lind seems to have thought the cure was something in certain foods, maybe the fact that they were so tart and acidic. More critical might be the problem of focusing on the noticeable aspect of citrus (they are very tart) and missing the hidden reason it actually cures scurvy (high in vitamin C). Not sure what advice we could give there except “don’t mistake flash for substance”, but that’s easier said than done.

But we do wonder about the concept of a deficiency disease in the first place. Even James Lind thought that scurvy was actually caused by damp air, and vegetable acids were just a way to fight back. Vegetable acids were thought to be cures, not essential nutrients. They were “antiscorbutic” like “antibiotic”. The concept of a deficiency disease doesn’t seem to have existed before the 1880s and got almost no mention until 1900, at least not under that name:

Without this concept, it does seem like doctors of the 19th century were missing an important tool in their mental toolbox for fighting disease. 

This reminds us of other problems in global medicine — maybe we should introduce the idea of a “contamination disease”. People are already familiar with this concept to a point — lead poisoning, arsenic poisoning, etc. — but people don’t look at a disease and think, maybe it’s from a contaminant. In fact, they often look at the symptoms of exposure to contaminants and think, that’s an (infectious) disease.

A good example is so-called Minamata disease. In 1956, in the city of Minamata, Japan, a pair of sisters presented with a set of disturbing symptoms, including convulsions. Soon the neighbors were showing signs as well. Doctors diagnosed an “epidemic of an unknown disease of the central nervous system”, which they called “Minamata disease”. They assumed it was contagious and took the necessary precautions. 

But soon they started hearing about mysterious cases of cats and birds showing similar symptoms, having convulsions or falling from the sky. Eventually they figured out “Minamata disease” was not contagious at all — the disease was methylmercury poisoning, the result of mercury compounds that the local Chisso chemical factory was leaking into the harbor.

You might say, “Well it was not a disease at all; they were poisoned. If SMTM are right, then obesity isn’t a disease either; everyone has just been low-grade poisoned all at once.” We think this highlights the need for a deeper discussion about our categories!

“Disease” really does come from just “dis” + “ease”. If you’re a doctor and someone comes to you, and they are not at ease, they are diseased, and that’s what you should care about. The disease might ultimately be bacterial, or viral, or an allergy, or a parasite, or the result of a deficiency, or the result of exposure to a harmful contaminant or poison, but it’s still a disease. For more discussion of this particular point, see here (also coincidentally about obesity; we didn’t stack the deck on this one, it’s from 2010).

(If we were being really strict, we would say that obesity is a symptom, because conditions like Cushing’s Syndrome and drugs like Haldol can cause it too. If one or more contaminants also cause obesity, then the result of that exposure is a contamination disease, with obesity as a symptom. For more discussion of THIS particular point, see here.)

Lemon Mold Lime Mold

One of the weirdest things Cegłowski describes is how back in the day, people used the words “lemon” and “lime” interchangeably to describe any citrus fruit, which they thought of as a single category:

The scheduled allowance for the sailors in the Navy was fixed at 1 oz. lemon juice with 1½ oz. sugar, served daily after 2 weeks at sea, the lemon juice being often called ‘lime juice’ and our sailors ‘lime juicers’. The consequences of this new regulation were startling and by the beginning of the nineteenth century scurvy may be said to have vanished from the British navy. In 1780, the admissions of scurvy cases to the Naval Hospital at Haslar were 1457; in the years from 1806 to 1810, they were two.

(As we’ll see, the confusion between lemons and limes would have serious repercussions.)

This ended up making a huge difference in the tale of the tragedy of scurvy cures:

When the Admiralty began to replace lemon juice with an ineffective substitute in 1860, it took a long time for anyone to notice. In that year, naval authorities switched procurement from Mediterranean lemons to West Indian limes. The motives for this were mainly colonial – it was better to buy from British plantations than to continue importing lemons from Europe. Confusion in naming didn’t help matters. Both “lemon” and “lime” were in use as a collective term for citrus, and though European lemons and sour limes are quite different fruits, their Latin names (citrus medica, var. limonica and citrus medica, var. acida) suggested that they were as closely related as green and red apples. Moreover, as there was a widespread belief that the antiscorbutic properties of lemons were due to their acidity, it made sense that the more acidic Caribbean limes would be even better at fighting the disease. 

In this, the Navy was deceived. Tests on animals would later show that fresh lime juice has a quarter of the scurvy-fighting power of fresh lemon juice. And the lime juice being served to sailors was not fresh, but had spent long periods of time in settling tanks open to the air, and had been pumped through copper tubing. A 1918 animal experiment using representative samples of lime juice from the navy and merchant marine showed that the ‘preventative’ often lacked any antiscorbutic power at all.

It’s worth focusing on one part of this passage in particular: “Both ‘lemon’ and ‘lime’ were in use as a collective term for citrus.” This seems to be the case. As far as we can tell, the word “citrus” wasn’t really used prior to 1880. It was probably introduced as a scientific term for the genus before slowly working its way into common usage. Before then, “lemon” dominated “citrus” in the conversation, and “lime” dominated ten times over:

Though note that many uses of “lime” probably refer to things like quicklime. 

Maybe it’s not surprising that it took the language a while to sort itself out, but it still seems surprising that your great-great-grandfather didn’t think to distinguish between two fruits that you can tell apart at a glance. Even so, we think there are a couple of reasons to be sympathetic.

The name stuff is confusing, but swapping out one citrus fruit for another seems understandable, even if it ended up being misguided. To Europeans at the time, the thing that stood out about limes AND lemons was how tart they were, so it’s not surprising that they thought that the incredible tartness of these fruits was critical to the role they played in treating scurvy. But sourness in citrus fruits generally comes from citric acid, not vitamin C / ascorbic acid (incidentally, this is ascorbic as in “antiscorbutic”). Unfortunately, they had no way of knowing that. 

The second reason to be sympathetic is this: People mixed up limes and lemons in the 1800s. You may laugh, but actually you are mixing up citrus right now.

The lemon is a single species of fruit, Citrus limon: a specific tree that gives a specific yellow fruit, high in citric acid and high in vitamin C. If you go to the store and buy a lemon, you know what you’re getting.

(Well, mostly. The Wikipedia page for lemons has a section called “other citrus called ‘lemons’”, which lists six other citrus fruits that are also called lemons, like the rough lemon and the Meyer lemon. But besides this, a lemon is a lemon.)

There’s also this kind of lemon, but the British Admiralty didn’t have access to these back in the age of sail.

In comparison, the Wikipedia article on limes says,

There are several species of citrus trees whose fruits are called limes, including the Key lime (Citrus aurantiifolia), Persian lime, Makrut lime, and desert lime. … Plants with fruit called “limes” have diverse genetic origins; limes do not form a monophyletic group.

The very first section of the article is called “plants known as ‘lime’”, which gives you a sense of how vague the name “lime” really is. The list they give includes the Persian lime, the Rangpur lime, the Philippine lime, the Makrut lime, the Key lime, four different Australian limes, and several things called “lime” that are not even citrus fruits, including the Spanish lime and two different plants called the wild lime. They also say:

The difficulty in identifying exactly which species of fruit are called lime in different parts of the English-speaking world (and the same problem applies to synonyms in other European languages) is increased by the botanical complexity of the citrus genus itself, to which the majority of limes belong. Species of this genus hybridise readily, and it is only recently that genetic studies have started to shed light on the structure of the genus. The majority of cultivated species are in reality hybrids, produced from the citron (Citrus medica), the mandarin orange (Citrus reticulata), the pomelo (Citrus maxima) and in particular with many lime varieties, the micrantha (Citrus hystrix var. micrantha).

This means there is not even a straight answer to a question like “how much vitamin C is in a lime?” — there are at least a dozen different fruits that are commonly called “limes”, they all contain different amounts of vitamin C, and many of them are not even related to each other.

On those remote pages it is written that limes are divided into (a) limes that belong to the Emperor, (b) embalmed limes, (c) limes that are trained, (d) suckling limes, (e) mermaid limes, (f) fabulous limes, (g) stray limes, (h) limes included in the present classification, (i) limes that tremble as if they were mad, (j) innumerable limes, (k) limes drawn with a very fine camel’s hair brush, (l) other limes, (m) limes that have just broken the flower vase, (n) limes which, from a distance, resemble flies.

The British Admiralty seems to have switched from lemons grown in Sicily to West Indian limes. You probably know these as Key limes, and in case the nomenclature isn’t complicated enough, they’re also called bartender’s limes, Omani limes, or Mexican limes. We’ll stick with “Key lime” because that’s probably the name you know, and because it makes us think of pie. Mmmm, pie.

The kind of limes you buy at the store, Persian limes, are a cross between Key limes and lemons. We can’t find any actual tests of the amount of vitamin C in Key limes, so we think all the published estimates of the amount of vitamin C in limes are probably from Persian limes.

We generally see numbers of about 50 mg/100 g vitamin C for lemons and about 30 mg/100 g for limes, presumably Persian limes. Since Persian limes are a cross between lemons and Key limes, Key limes probably have less than 30 mg/100 g vitamin C. Genetics isn’t this simple, but if we were to assume that Persian limes are the average of their forebears, then Key limes would contain about 10 mg/100 g vitamin C, less than a tomato. You need about 10 mg of vitamin C per day to keep from getting scurvy, so already we can see why this might be a problem.

Cooking reduces the vitamin C content of vegetables by about 40% (though of course this varies widely with specific conditions), so the 50 mg or so in a lemon would become about 30 mg after boiling, but the 30 mg in a lime would become about 18 mg after boiling. Lemon juice would be as good an antiscorbutic after boiling as Persian lime juice would be fresh, and Key limes seem like they would have less vitamin C than either, boiled or not.
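
If you want to double-check our arithmetic, here it is as a few lines of Python. Nothing new here, just the rough estimates from the last couple of paragraphs:

```python
# The back-of-envelope numbers from the paragraphs above, as code.
# These are rough published estimates, not measurements of our own.

LEMON = 50           # mg vitamin C per 100 g
PERSIAN_LIME = 30    # mg per 100 g (most published "lime" figures, we suspect)
COOKING_LOSS = 0.40  # rough fraction of vitamin C destroyed by boiling
SCURVY_DOSE = 10     # mg per day needed to keep scurvy away

# Pretend a Persian lime is the simple average of its parents
# (genetics isn't this simple, but as a guess):
key_lime = 2 * PERSIAN_LIME - LEMON  # solve (LEMON + key) / 2 = PERSIAN_LIME
print(key_lime, "vs daily need of", SCURVY_DOSE)  # 10 vs 10: cutting it close

print(LEMON * (1 - COOKING_LOSS))         # 30.0: boiled lemon ~ fresh Persian lime
print(PERSIAN_LIME * (1 - COOKING_LOSS))  # 18.0: boiled Persian lime
```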

Persian limes also turn yellow as they ripen — you only think of them as green because farmers pick them and send them to the grocery store before they change colors. And of course, lemons are green in their early stages of growth. So like, so much for the “limes are green and lemons are yellow” thing.

Lovely fresh limes. Yes you read that right.

All these issues pale in comparison to the fact that citrus taxonomy is insane. Not only are limes not limes, it seems like nothing is really anything, or maybe anything is everything.

You walk into a supermarket and you think you recognize a bunch of Platonic fruits — oranges, clementines, lemons, limes, grapefruits, and so on. But when you do a Google image search for “citrus genetics”, you get diagrams like:

And this diagram:

Apparently orange genetics are so insane that even the person who made this diagram just gave up. “The citron was crossed with a lemon to make a sour orange and then uhhhhhh some stuff happened! and you got a sweet orange.”

And this diagram:

We notice there are unlabeled spaces on this Venn diagram — apparently the citrus cartels are holding out on us. Where’s my micrantha x maxima hybrid???

And this diagram, in which the Bene Gesserit attempt to breed the Kumquat Haderach. No, really.

The written material on the subject is, if anything, even more disheartening. Let’s look at a couple passages from the Wikipedia article on citrus taxonomy:

Citrus taxonomy is complex and controversial. Cultivated citrus are derived from various citrus species found in the wild. Some are only selections of the original wild types, many others are hybrids between two or more original species, and some are backcrossed hybrids between a hybrid and one of the hybrid’s parent species. Citrus plants hybridize easily between species with completely different morphologies, and similar-looking citrus fruits may have quite different ancestries. … Conversely, different-looking varieties may be nearly genetically identical, and differ only by a bud mutation.

The same common names may be given to different species, citrus hybrids or mutations. For example, citrus with green fruit tend to be called ‘limes’ independent of their origin: Australian limes, musk limes, Key limes, kaffir limes, Rangpur limes, sweet limes and wild limes are all genetically distinct. Fruit with similar ancestry may be quite different in name and traits (e.g. grapefruit, common oranges, and ponkans, all pomelo-mandarin hybrids). Many traditional citrus groups, such as true sweet oranges and lemons, seem to be bud sports, clonal families of cultivars that have arisen from distinct spontaneous mutations of a single hybrid ancestor. Novel varieties, and in particular seedless or reduced-seed varieties, have also been generated from these unique hybrid ancestral lines using gamma irradiation of budwood to induce mutations.

For more on using radiation to make new fruit, please refer to these talking dinosaurs.

In case that isn’t weird enough for you, there’s even a graft chimera citrus called the Bizzaria (really), which produces fruits that look like this: 

We’re at the Florentine citron. We’re at the sour orange. We’re at the…

On limes in particular, this page says: 

Limes: A highly diverse group of hybrids go by this name. Rangpur limes, like rough lemons, arose from crosses between citron and mandarin. The sweet limes, so-called due to their low acid pulp and juice, come from crosses of citron with either sweet or sour oranges, while the Key lime arose from a cross between a citron and a micrantha.

All of these hybrids have in turn been bred back with their parent stocks or with other pure or hybrid citrus to form a broad array of fruits. Naming of these is inconsistent, with some bearing a variant of the name of one of the parents or simply another citrus with superficially-similar fruit, a distinct name, or a portmanteau of ancestral species.

While most other citrus are diploid, many of the Key lime hybrid progeny have unusual chromosome numbers. For example, the Persian lime is triploid, deriving from a diploid Key lime gamete and a haploid lemon ovule. A second group of Key lime hybrids, including the Tanepao lime and Madagascar lemon, are also triploid but instead seem to have arisen from a backcross of a diploid Key lime ovule with a citron haploid gamete. The “Giant Key lime” owes its increased size to a spontaneous duplication of the entire diploid Key lime genome to produce a tetraploid. [Editor’s note: uhhhhh]

Wikipedia tells us this is a “lumia”. W-what is that? We don’t know, but Wikipedia assures us that “like a citron, it can grow to a formidable size.”

Pretty much every citrus page on Wikipedia has shit like this, truly enough to drive a man mad. You wander onto Wikipedia trying to find out what in god’s name a lumia is, and soon you are reading this: “A recent genomic analysis of several species commonly called ‘lemons’ or ‘limes’ revealed that the various individual lumias have different genetic backgrounds. The ‘Hybride Fourny’ was found to be an F1 hybrid of a citron-pomelo cross, while the ‘Jaffa lemon’ was a more complex cross between the two species, perhaps an F2 hybrid. The Pomme d’Adam arose from a citron-micrantha cross, while two other lumias, the ‘Borneo’ and ‘Barum’ lemons, were found to be citron-pomelo-micrantha mixes.” Lovecraft, eat your heart out. 

Mr Lovecraft might also enjoy this lovely citro-AAAAAAH

This is the much deeper problem that the history of scurvy reveals. In science, you need tools you can trust. You need to have the right equipment, the right study design, and the right analysis techniques — but you also need the right concepts.

Most of us are trained to calibrate our equipment and to double-check our experimental designs, but how often do we reconcile our concepts? Back in the 1800s, they trusted the terms “lemon” and “lime” to be relevant, to be reliable, to be meaningful — and to be interchangeable, to mean the same thing as each other. But they were, all of them, deceived.

This will continue to be a problem forever. We distinguish between lemons and limes today, and we’re better off for it, but we aren’t safe and can’t afford to forget this problem. “Lime” is still considered a perfectly good tool, and if you were going to do a study on whether limes are good for your heart or something, no one except for citrus geneticists would think anything of it.

But “lime”, as we have hopefully convinced you today, is not a good category at all! It’s not a good tool. You can’t trust it. Yet the assumption that “lime” is a perfectly normal category is so deeply embedded that you never realized it was an assumption.

Evaluating simple propositions like “limes cure scurvy” depends on accepting that “limes”, “scurvy”, and even “cure” are coherent and meaningful concepts. But they may not be!

The TRUE way that reality is very weird is that words and concepts that you use every day and take entirely for granted may be just as incoherent as the term “lime”. Concepts you think of as normal may some day seem as crazy as using the words “lemon” and “lime” interchangeably for all citrus fruits. We can pretty much guarantee that this will happen for something.

In our last post we described “splitting” as the practice of coming up with weird special cases or new distinctions between categories in the face of contradictory evidence. Splitting concepts is especially risky, in part because concepts are so powerful. If there is a confusion of categories, then all the research up to that point will be hopelessly confused as well, entirely muddled.

But if you split the categories in a better way, you will suddenly be left facing nothing but low-hanging fruit — be they lemons, limes, other limes, grapefruits, other other limes, clementines, pomelos, lumias, etrogs, etc.

Newsletter Natural Selection

Apparently, Substack wants to destroy newspapers. And maybe that would be good — maybe it would be good for journalism to be democratized, for bloggers to inherit the earth. Of course we’re bloggers and not newspapers, so maybe we’re biased.

Obviously it would be great if someone came up with a set of blogging and newsletter tools that were just amazing, that were the clear front-runner, that outperformed every other platform. We’d love it if the technical problems were all solved and we just had a perfect set of blogging tools.

But if everyone ends up on the same platform, well, that’s kind of dangerous. If one company controls the whole news & blogging industry, they can blacklist whoever they want, and can squeeze users as much as they want.

Even if you think Substack has a good track record, there’s no way they can guarantee that they won’t squeeze their writers once they control the market. Even if you trust the current management, at some point they will all retire, or all die, or the company will be bought by wesqueezeusers.biz, and then you’re shit outta luck.

Substack just can’t make a credible commitment that makes it impossible for them to abuse their power if they get a monopoly. You have to take them at their word. But since management can change, you can’t even really do that. They just can’t bind their hands convincingly.

But there may be some very unusual business models that would fix this problem. 

On the Origin of Substacks

Imagine there’s a “Substack” company that commits itself to breaking in half every time it gets 100,000 users (or something), creating two child companies. Each company ends up with 50,000 users. All the blogs with even-numbered IDs go to Substack A, and all the blogs with odd-numbered IDs go to Substack B. The staff gets split among these two companies, and half of them move to a new office. Both companies retain the same policy of breaking in half once they hit that milestone again — an inherited, auto-trust-busting mechanism.

(Splitting into exactly two companies wouldn’t have to be a part of the commitment. They could equally choose to break up into Substack Red, Substack Blue, and Substack Yellow: Special Pikachu Edition.)

In addition, a core part of the product would be high-quality, deeply integrated tools to switch from one of these branches to another. Probably this would involve an easy way to export all your posts and a list of your subscribers to some neutral file format (maybe a folder full of markdown, css, and csv files), and to import them from the same format into a new blog. If you end up in Substack B and you want to be in Substack A instead (your favorite developer works there or something), the product would make it very easy to switch, maybe to the point of being able to switch at the push of a button.
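
To make that concrete, here’s a sketch of what exporting to a neutral format might look like. To be clear, this is entirely made up; none of it is Substack’s (or anyone’s) actual API:

```python
# Hypothetical "export everything" sketch. The function, file names, and
# layout are our invention, purely for illustration.
import csv
import pathlib

def export_blog(posts, subscribers, out_dir="my-blog-export"):
    """Dump a blog to a plain folder: one markdown file per post, plus a
    subscribers.csv that any competing branch could import."""
    out = pathlib.Path(out_dir)
    (out / "posts").mkdir(parents=True, exist_ok=True)
    for slug, (title, body_md) in posts.items():
        (out / "posts" / f"{slug}.md").write_text(f"# {title}\n\n{body_md}\n")
    with open(out / "subscribers.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["email", "subscribed_on"])
        writer.writerows(subscribers)

export_blog(
    posts={"lemon-lime": ("Like a Lemon to a Lime", "Citrus taxonomy is insane…")},
    subscribers=[("reader@example.com", "2022-01-01")],
)
```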

To help with this, the third and final commitment of the company, and all child companies, would be to keep the software open-source. Unlike biological evolution, software evolution isn’t siloed. If Substack Air implements a great feature, and the team over at Substack Earth likes it too, they can just go to the open-source code of their sister company, snip that AAAGTCTGAC, and copy it over to their branch.

Each of these child companies would go on to develop their tool from the same starting point, but of course the companies would speciate over time. More competitive branches would get to 100,000 users first and would split again, so there would be more descendants of successful branches. Bad branches would die off or just never grow enough to speciate. 
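
If you want to see how these dynamics might play out, here’s a toy simulation. Every parameter in it (the growth rates, the thresholds, the mutation size) is arbitrary, but it shows the basic evolutionary logic: good branches split more often and leave more descendants:

```python
# A toy model of the auto-trust-busting ecosystem described above.
# Every number here is arbitrary; only the mechanism matters.
import random

random.seed(0)
SPLIT_AT = 100_000   # users at which a company must break in half
DIE_BELOW = 5_000    # branches that shrink this far fold

# Each branch is (users, yearly growth factor). Better products grow faster.
branches = [(50_000, random.uniform(0.8, 1.5)) for _ in range(3)]

for year in range(1, 11):
    next_gen = []
    for users, growth in branches:
        users = int(users * growth)
        if users < DIE_BELOW:
            continue                      # extinction
        if users >= SPLIT_AT:
            for _ in range(2):            # mitosis: two half-size children
                # children inherit the parent's quality, with some mutation
                next_gen.append((users // 2, growth * random.uniform(0.9, 1.1)))
        else:
            next_gen.append((users, growth))
    branches = next_gen
    print(f"year {year}: {len(branches)} branches, "
          f"{sum(u for u, _ in branches):,} total users")
```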

Because it’s easy to switch, branches that make a bad decision will also face an exodus of users to different branches that don’t suck. Of course, one species of Substack might choose to remove the feature that allows you to switch easily, but this seems like evolutionary suicide. Faced with the prospect of being locked in, most users will switch away if there’s any hint of removing this feature. It’s ok if people decide to stay, of course, things might just get weird.

After several generations of isolation from the main line, bloggers will look like this.

Many branches would die — nature is red in tooth and claw, after all — but many companies die off in the normal course of the economy anyways. And it’s reassuring that there would be an ecosystem of similar, related companies that would be ready to hire on any deserving refugees.

Ghosts Undergoing Mitosis

This would keep one company from taking over the blogging ecosystem and imposing terrible conditions. Or rather, if one lineage did dominate the blogging ecosystem, that could be a good thing, not a danger to free thought and free expression. That lineage would be split up across multiple companies with different leadership styles and different values, and would lack the kind of monopoly that tempts men to evil.

If Substack were our company, we would not only implement this idea, we would emphasize it a lot in our marketing and recruitment — not least because the target demo, bloggers, are smart and paranoid. They want this kind of freedom, ownership, and control, and they’re worried about the fact that current platforms sometimes seem a little power-hungry.

It would take Substack a minute to make this pivot, but other companies could do it right now. In fact, the web publishing platform Ghost is already planning to do something along these lines.

Ghost is already open-source, which is the big requirement to get started immediately. If they developed some quality Ghost-to-Ghost migration tools (uhhh… G2G tools?) and started branching, they could do this tomorrow. But probably they don’t want to follow our plan, for in an amusing display of convergent evolution, they have come up with a very similar plan of their own: they plan to stop growing at 50 employees and let other companies take on the growth from there.

(For more about Ghost’s fascinating business model, see here.) 

John O’Nolan, the founder of Ghost, who apparently lives on a sailboat (mad props), was interviewed by the Indie Hackers Podcast, where he said (go to 41:49 or so):

Interviewer: I think with you, we were just talking about this, I think, a few months ago, you have this other arbitrary constraint, I’m not sure why you have it, it might be like a side effect of the fact that you’re a nonprofit, but you can’t hire more than fifty people, was it? At which point you’re constrained and you have to figure out how to grow and become bigger and better without hiring a single more person than fifty. So where’s that constraint come from? 

John: I love that you brought this up, because it’s something I think more and more about nowadays. We’re coming up on, I think 27 people, so more than halfway there, and the rate at which we’re hiring is increasing, so the kind of fifty-sixty number is very much on the horizon, it’s within sight. And the constraint comes from, I have never worked at a company bigger than that which didn’t have office politics, or disconnection from the mission, or where things kind of stopped being fun. And from all the people we’ve hired over the years, there’s a remarkable amount of refugees, who were at startups, they passed the kind of sixty, seventy person mark, things stopped being fun, middle management came in, the founders sort of left the early team behind, and started pursuing growth goals at the cost of people, and everything just sort of like *sigh* lost what made the journey special, around about that point. 

And there are just so many people who have the exact same story, at a certain point we just said, ok, well, what if we just don’t grow bigger than that, we’ll just stick like not bigger than fifty-ish. Fifty, sixty, somewhere around there. Not going to like, say, be really belligerent about a fixed number, but around that point, what if we just put a line, say, “ok no more”. And… what will that do? 

So, first of all, the same as what I was talking about earlier, it keeps Ghost as a company I’m happy to be stuck with. I want to have a group of fifty or sixty people where I know every single person well — not a large group of strangers who are all just working to a common economic incentive, but a team, a group of people who really know each other deeply and meaningfully, which I think you can still achieve around that size. 

But then, the logical question that follows is, ok, what are the goals of the company once you have fifty or sixty people and you still have ambition? How do you fulfill whatever goals you have that kind of don’t fit into the model of that size of company? And the answer is, you have to change your ambition, or you have to change the model with which you approach your goals.

So, a lot of how I think about Ghost now is less about growing one company — one centralized company — and more about growing a large, decentralized ecosystem. So whereas many-slash-most companies will try to grow bigger, and absorb smaller companies, and kind of be this big blob, consuming more and more of the market to become the holy grail of what everyone wants to become, which is a monopoly that dominates a market, kind of think about the opposite, how can we make Ghost, the products, a really strong and stable core, and then spin off all the other things for which there is demand from the market, but that we don’t have a big enough team to build. 

So maybe that’s community features, or maybe it’s video and media that integrates with Ghost really well, or maybe there’s an enterprise hosting option of people who DO love to get those emails from large companies with a big procurement process and close those deals. If we can have our smaller team make a tight core that enables lots of businesses to exist around Ghost, and around that open-source core, then an ecosystem will evolve around it of multiple economic dependents, and it will probably function similarly to a large company, except that I won’t control all of it, and that’s actually very appealing to me, I don’t want to control all of it, I don’t want to have the final say in how everything should evolve. 

This sounds a lot like the speciation idea we describe above. He even uses the term “ecosystem”! 

There are a few core differences. Limiting the company by the number of employees rather than the number of users might be the better way to go. So in a different version of our proposal, the company could be organized into several teams and the teams could become separate companies once the company has hit 60 employees or something. 

O’Nolan envisions an ecosystem more like Darwin’s finches — related companies that spread out to fill different niches, one for blogging, one for comments, one for video, one for different hosting models, etc. This seems like it would be relatively easy to do, and you can see how a successful company would draw related companies into existence, like a coral reef.

In contrast, we imagine an ecosystem of different companies competing (hopefully friendly competition, but still competition) for the same major niche, like birds and mice all competing for the same nuts and seeds. This seems good because competition will lead to better products, especially given built-in features that let bloggers vote with their feet. It also seems uniquely good in that, if Ghost or Substack or anyone does come to dominate the blogging world, this system will keep them from monopolizing it. 

So we think Ghost should consider not stopping at 50 employees, but undergoing mitosis instead, and splitting into Ghost Day and Ghost Night; or Ghost Sweet and Ghost Sour; or Ghost To and Ghost Fro; or Ghost Claw and Ghost Fang; or Ghost Sound and Ghost Fury; or Ghost Charm and Ghost Strange; or Ghost Video and Ghost Radio; or Ghost Milk and Ghost Honey; or Ghost Rosencrantz and Ghost Guildenstern; or Ghost Migi and Ghost Hidari; or Ghost Ale and Ghost Lager (and Ghost Lambic); or X-Mas Ghosts Past, Present, and Future; or 

*cane reaches out from the wings and pulls us off stage*


Special thanks to our friend Uri Bram for enlightening discussions about the world of online publishing.

TODOs from Paper Systems

I want to start by talking about the emotional experience of working with a todo list. 

The biggest hurdle todo faces is that the emotion you generally experience when you look at your todo list is shame. This is bad because it makes you uncomfortable with the tool and makes you want to avoid it — you don’t want to look at it because engaging with it makes you feel bad, so you don’t use it, or you wait until it’s too late, you avoid it, etc. etc. 

The key to fixing this is coming up with a todo system where the emotional experience is pride. This way you often want to look at your todo list, you enjoy the experience of working with it, you approach it, you seek it out, etc.

To help describe ways we can do this, I’m going to go over some of the pen-and-paper todo systems I’ve used and describe how I think they fulfill the goal of making the emotional experience of working with your todo list one of pride rather than of shame.

Example #1: Digital Painting Calendar

Back in 2017, I was trying to learn digital painting. I was enjoying keeping up with it, and so I set a soft goal for myself of trying to do some digital painting (even if only 10 minutes) almost every day.

To keep track of this, I printed out a single-page calendar for the year, and simply marked off every day where I did some digital painting. By the end of the year, I was so proud of this that I saved an image of the calendar, reproduced below. I think I have the hard copy somewhere, even. This was such a positive experience that looking at it STILL makes me proud, even years later: 

Why did this work so well? I think there are a couple reasons.

First of all, the goal was low-commitment (any painting at all). This encouraged me to start painting often, and 10 minutes often turned into 3 hours. But this is a feature of the goal, not the todo system.

The goal was also simple. This helped the todo system, because it made it very easy to determine whether I had “earned” the right to check off each day. 

The scope of this todo system also has some great features. Because the scope is a year long, as soon as I missed one day, I knew that there wasn’t a chance of me getting 100% on the full year, which made it feel lower-stakes, while still being important. Early failures made the stakes not feel catastrophic, which decreased the threat and sense of shame. Incidentally, I think this is a strong argument against the use of daily streaks in todo apps. Streaks are a threat, not encouragement.

The calendar is also naturally split up into sub-units. There are 12 months and about 52 weeks. This means each week and month could also stand alone for success (or failure), keeping the local stakes high enough to be engaging while still avoiding feeling catastrophic. You’ll see that I completed several weeks perfectly. I also tried (and failed) to do every day in April, then tried (and succeeded) to do every day in October. Having these “local stakes” increased the chances for feeling proud of an accomplishment while keeping the total stakes low in terms of failure. I feel good that I 100%’ed October, but I don’t feel bad at all that I missed a bunch of days in December.

I also think this system works well because it covers just ONE of the tasks that I had on my mind then. I did other things in 2017, but this document doesn’t even try to cover those aspects of my life. I wasn’t overwhelmed when looking at it, and it forms a nice historical document that isn’t cluttered by unnecessary context.

Finally, I think this system works well because it pushes back against what I’ll call “the Tetris Problem”. This is something we will come back to again and again. Namely, the Tetris Problem is that, as in Tetris, errors pile up and accomplishments disappear.

In this todo system, however, both errors and accomplishments are equally and fairly presented. There’s also some value in the fact that they are presented as fact (did I do painting this day or not), rather than as a judgment.

Robert Caro’s “Planning Calendar,” 1971. He shoots for 1,000 words a day — each day is marked with how many words he wrote, with excuses in parentheses (“Lazy,” “sick,” etc.). Source

Example #2: Trello

I mostly don’t like Trello, but one thing it gets right is the combo of cards and checklists. 

When you have a checklist of 10 things, as you burn through it, the checklist fills up. One day 3/10, then 5/10, then 9/10. When you hit 10/10, it gets bold or changes color or something, I don’t remember. 

The important thing is that this turns Tetris on its head. In this system, accomplishments pile up, and errors are nowhere to be seen. 

Another nice feature is that accomplishments pile up at multiple levels. Completed checkboxes pile up in a list. Completed lists pile up in a card. Then, when the card is all finished, you get a final rush when you drag it to the “completed” pile. 

Importantly, the accomplishments don’t disappear until you manually choose to put them behind you. This is a critical difference from Tetris! With Trello, you bask in your accomplishments for as long as you want — until you say, “this was good but I’m ready to put this chapter of my life behind me, let’s move on to some new projects!”
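
Just for fun, here’s a minimal sketch of that multi-level pile-up in code. This is our own toy model, not Trello’s actual data model:

```python
# Toy model of "accomplishments pile up at multiple levels".
# Our invention for illustration, not how Trello actually stores things.
from dataclasses import dataclass, field

@dataclass
class Checklist:
    name: str
    items: dict = field(default_factory=dict)  # task -> done?

    def progress(self):
        # The pile grows and stays visible: 2/3, then 3/3, and so on.
        return f"{sum(self.items.values())}/{len(self.items)}"

@dataclass
class Card:
    title: str
    checklists: list = field(default_factory=list)

    def complete(self):
        # A card is finished only when every checklist is full --
        # then you get the final rush of dragging it to "completed".
        return all(all(cl.items.values()) for cl in self.checklists)

grading = Checklist("grading", {"exam 1": True, "exam 2": True, "exam 3": False})
card = Card("Spring semester", [grading])
print(grading.progress())  # 2/3: partial credit, visibly accumulating
print(card.complete())     # False: not yet trophy-case material
```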

Though this makes me wonder, should a system have a “trophy case” rather than a “completed” deck? A good point of comparison might be the “run complete” screens from recent hit video game Hades. Every time you successfully escape from hell, the game shows you a bunch of stats about your last run and you can bask in the success as long as you want. This seems like a nice feature.

(Not one of our runs)

Example #2.5: Trello Mimic on Paper

I copied this approach a little back when I was teaching, except I used a pen-and-paper approach rather than Trello. 

Unfortunately I don’t have pictures, but the general idea was this. I pinned an 8.5 x 11 piece of paper to the wall in front of my desk, where I could easily see it every day. Then I scoped out all my teaching duties for the semester in a bunch of vertical checklists. I don’t remember the exact numbers, but it was something like: 12 weeks of lectures, 5 major assignments, 3 exams, etc. Each got a checkbox, and as I hit each milestone, I would check it off — one week down, one exam graded, etc.

This wasn’t quite as exciting as Trello for some reason. It didn’t make me feel proud, but it certainly didn’t make me feel shame. I didn’t have any problem looking at that list, and it gave me a good sense of progress as I slogged through some of the dumb-ass grading they made me do lol. I think I would have liked it more if I had felt better about the classes at [SCHOOL REDACTED], but I did not!

I notice that like the digital painting calendar, the scope of this was pretty long-term. I think that is part of what I mean when I focus on accomplishments piling up — it’s not enough for them to just pile up, they need to stick around for a while. It’s also useful if accomplishments produce ephemera, like the calendar, or like these checklists.

A semester may not even be a long enough scope! As a teenager, I mostly thought about tasks in terms of weeks and quarters. When you’re young, your life is explicitly structured around these short-term horizons. But as an adult, I am already starting to think about progress in terms of years, even decades. 

Compare also to the traffic stats interface provided by WordPress for this here blog. Normally we look at traffic per day, but we can immediately zoom out to look at weeks, months (seen below), or even years at the click of a button. With this, we can appreciate a greater scope whenever we want, and it can be nice to see how far we’ve come.

Example #3: Post-its in College

A long-term sense of accomplishment is important, but when we talk about todo, we also need day-to-day elements.

The best todo system I ever used in my life was in college. At the beginning of every week, I took seven post-it notes, one for each day, and wrote out my major milestones for that week. As I went through the week, I would check each off in turn, adding and removing tasks as needed. 

I don’t have any photos, but here’s an artist’s impression: 

At the end of the week, I would pull all seven off the wall and replace them, which was always incredibly satisfying. I felt like I had slaughtered the week every time. 

I do worry that a digital system will never be as satisfying as physically checking boxes and peeling post-its off my dorm room walls. But Trello, for all its failings, did give me some of that, so I’m optimistic. Probably the thing to do here is to look to the world of game design, to the concept of game feel, AKA “juice”. Need that screen shake on my checkboxes!!! 😛 

If this system was so great, why don’t I still do it today? Strangely enough, I think it comes down to a few simple factors. In college, I always had only one desk, which was in my room. Ever since then, I’ve generally had one home desk and one work desk, and even that small amount of separation is enough to kill this system. In college, my desk always faced a blank white wall, perfect for hanging post-its. These days, my desk generally faces a window, to reduce eye strain. Trade-offs, man!

There’s also the fact that, when most of my todos were clearly tied to classes and student groups, it was easier to plan a whole week in advance. These days, my schedule is actually a bit too flexible.

Either way, this was a great system and I think there are a lot of lessons here.

The first thing you’ll notice is that, as before, accomplishments pile up. Every week, I knew what I had accomplished so far that week. Even if I missed a task on Monday, if I managed to get to it on Tuesday, I could go back and check it off Monday’s list.

Planning for the week helped keep me from carrying a todo from day to day. These days I still use post-its, but only one at a time. If I don’t finish a task today, I add it to the post-it for the next day. But this is a bad habit, and stressful too. It encourages me to carry many tasks in my working memory (and/or the paper equivalent), rather than spacing them out across seven post-its.

With the old system, I would have put the task at the point in the week when I thought I would be able to accomplish it. If I didn’t get to it that day (which did happen sometimes), I would be able to see that it was overdue. This helped give it a naturally higher priority, and made for a clear indicator of just how overdue it was.

It also helped conflate personal and professional accomplishments. Now you may say, why would I want to conflate these? Isn’t it better to treat them differently? Well, I worry that too many people try to keep their work and their personal accomplishments separate, when both are controlled by the same limited resource — time. Having “get groceries” on the same list as “finish term paper” was a nice structural acknowledgement of the fact that both tasks trade on the same resource. I think it kept me from feeling bad when I didn’t get any “work” done in a given day. Hey, those personal chores were important! They were on the list!

It also helped that post-its are small. This reflects the limited time in a day and kept my ambition focused. I could only ever list a few tasks, so I figured out what I really needed to finish each day. It encouraged me to break up big projects into reasonable pieces, each only a couple of hours long, so I could check off a piece of the project on a given day. 

There’s another element which is also critical, but harder to explain. Nonetheless, I strongly believe it to be true. Having these limited post-its 1) encouraged me to do everything on my list as soon as possible, and 2) filled me with energy and a feeling of freedom afterwards. The same experience is described by Sasha Chapin:

And echoed in the responses: 

Whatever the reason, this is definitely a real phenomenon. In college, I churned through my requirements with astonishing speed — and then continued working really hard at whatever I was interested in.

This may have something to do with what Scott Alexander has called infinite debt (see also here). Your school/work/personal/whatever obligations — your schedule obligations — are in some sense infinite. You can always come up with new things to do. Like the moral equivalent, this can make your todo list feel really bad and overwhelming. This makes for bad designs — you don’t want to look at your todo list and feel like “your work will never be done, you’ll never be good enough.” Ouch.

In contrast, by scheduling finite goals for each day, you can give yourself the sense of being on track — not discharging all of your schedule debt, but discharging all your schedule debt for that day. After that point, you’re free for the rest of the day! 

This works even better with my post-its-for-the-week approach. By scheduling out the major milestones for the week, when you finish your tasks for a day, you’re not just on track for the day, you’re on track for the week! 

This can even give rise to a feeling that is so powerful and vicious I can only describe it as “bloodthirsty”. Since your schedule debt is effectively infinite, you normally have no chance of catching up, let alone getting ahead. Scheduling out the day is better because you can catch up and be on track, but you still can’t get ahead. But if you mark out your milestones for the week, you can actually get ahead of schedule. If you finish your work for the day, and you feel energized (which you often will!), you can get that bloodthirst and chew through the tasks for later in the week! That makes you feel more accomplished, and it also gives you more free time later in the week, leading you to get even further ahead — it’s a positive feedback loop! 

The trick here is making each day’s set of tasks accomplishable in the 24 hours you have. But you should be doing that already. If you do this right, you feel great, you’re more productive than ever, and you get “bonus time”!

Example #4: The Modern Hybrid

Right now I am using something that is kind of like the pen-and-paper Trello checklist approach described above, but I’ve added a few features that I think are important. 

This fills a different niche from the post-its (and you could probably use both). Rather than daily organization, this covers a near-term scope of 1-3 months or so. 

There are two long-running goals I’ve had for my todo organization, which I’ve struggled with, but I think these todo lists are starting to approach them nicely.

The first I’ll call “families”. This is simply a recognition that, while all tasks trade on time, different tasks belong to different classes or families. You have your personal list, your chores list, your work list, your hobby list, etc. Personally I find it very disorienting if I don’t keep track of which task goes in which family — or worse, if I don’t know how many families of tasks I have going on at all! This makes my todo list feel infinite, and as we covered before, infinite bad! 

So on the subject of families, you’ll see that my checkboxes are broken up into different sections, so I know how many families I have and what task belongs to which. I think any todo list worth its salt will break things up visually — possibly by color or shape, but even better is to be broken up spatially.

My ideal software would let me slide around tasks and families on the page much like I do when laying it out with pen and paper. This is another thing Trello approaches with its spatial organization, but you could certainly go a step or two further.

Two examples

Families also serve my second goal, which is a clear representation of dependencies. Tasks within a family often have a clear priority structure and sometimes even have literal dependencies, where one thing has to come before another. 

I’ve always really wanted a good way of representing dependencies, but actual graphs/connections and so on never worked for me. But in this notebook system, simple layout alone seems to work pretty well. In my first two passes (above), dependency is roughly indicated by a combination of left-to-right and top-to-bottom, like English reading direction. Things lower on the page and further to the right are generally lower priority and/or depend on things above and to the left of them.

Below is my most recent version, which instead uses top-to-bottom alone to indicate dependency. Each column is a family, and vertical order approximately indicates priority and dependency, with items higher in the list being higher priority and being requirements for lower items. 

I say “approximately” because it turns out, you don’t always need to indicate dependency explicitly. A todo list is a memory aid, not a memory replacement. I can remember what the dependencies are — the vertical organization just makes it easy for me to think about it, compare across families, not worry about tasks I haven’t reached yet, and so on. 

Having a quick visual shorthand for dependencies is useful and saves time. Actually bothering to map out all the dependencies tends to look cluttered, and does not save time at all.
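To make the layout concrete, here is a little Python sketch of the same idea; the family names and tasks are invented for illustration, and this is an analogy for the paper notebook rather than software I actually use. Each family is a named column, and vertical order stands in for priority and dependency:

```python
# Hypothetical sketch: each family is a column; higher items come first.
families = {
    "Work":   ["outline report", "write report", "present report"],
    "Chores": ["buy paint", "paint fence"],
    "Hobby":  ["practice scales", "learn new piece"],
}

def render(families: dict[str, list[str]]) -> str:
    """Lay families out side by side, like columns on a notebook page."""
    width = max(len(t) for col in families.values() for t in col) + 6
    rows = max(len(col) for col in families.values())
    lines = ["".join(f"{name:<{width}}" for name in families)]
    for i in range(rows):
        cells = [f"[ ] {col[i]}" if i < len(col) else ""
                 for col in families.values()]
        lines.append("".join(f"{c:<{width}}" for c in cells))
    return "\n".join(lines)

print(render(families))
```

The point is the data shape, not the code: a handful of named, ordered lists is enough to capture families and rough dependencies, no explicit graph required.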

In conclusion:

To make you feel pride rather than stress or shame, the ideal features of a todo system are something like:

  1. Accomplishments accumulate
  2. Long-term scope to see the arc of your success
  3. Multiple levels of scope to get a sense of reward at multiple scales
  4. Recognize that tasks and events all compete for one resource — time 
  5. Limit your daily tasks and get “Bonus Time”
  6. Clear visual families & dependencies, probably through spatial organization

A Few More Predictions for 2050

This is an extension of our earlier set of Predictions for 2050.

Assistive Technology Meets in the Middle

Early hearing aids sucked. Your options were pretty much limited to asking people to shout, or using one of those giant ear trumpets. The first major advancement seems to have been smaller ear trumpets, shaped like seashells and worn on a headband:

Digital hearing aids started appearing in the 1980s, though at first you still had to wear a bulky processor strapped to your chest. But things slowly got better and better with behind-the-ear devices and eventually in-canal hearing aids.

One of our family members wears hearing aids full-time, and modern hearing aids, while still expensive, are pretty impressive. They’re almost invisible, and they sit deep in the ear — you can use them to boost or block out certain frequencies, and if you turn them down, they essentially function as earplugs. They have bluetooth, so you can listen to music or have your phone calls go straight to your ears. They’re basically just a slightly fancier kind of earbud. 

Apple introduced AirPods in 2016. While AirPods are still much cheaper than a high-quality pair of hearing aids, that gap won’t last forever. Eventually these two devices will meet in the middle, and it won’t take until 2050.

This is an especially clear case, but the same thing will happen with lots of assistive technology. Inventions that are meant to restore our senses or abilities will begin to surpass them, and then everyone will benefit from using them, not just people with disabilities. It will happen with hearing aids first, but it’s easy to imagine a world where AR glasses become better than unassisted eyesight, or robotic leg braces end up better than your knees. You can already buy basic assistive exoskeletons for about $900; it’s coming.

If you want a picture of the future, imagine a girl with sick robot boots – for ever. (source)
I for one welcome our new cyborg maid overlords (source)

Medical Science Realizes that Women are People too

More women’s health problems will be solved, and this will lead to greater understanding of how the human body works in general, since women and men are basically the same except for small differences in the amounts/ratios of their hormones. An obvious example is the role of hormones in thermoregulation — women usually feel colder than men, despite having slightly higher core temperatures and slightly more body fat, and hot flashes are the stereotypical symptom of menopause. This seems kind of weird, but everyone just takes it for granted.

(For what it’s worth, men have hormonal cycles too, which are if anything even less well understood.)

Certainly this covers anything about menopause and hormonal rhythms, but women are also more likely to have IBS, arthritis, and celiac disease, and twice as likely to have migraines. About 2/3 of Americans with Alzheimer’s are women. Figuring out why women are more likely to have these diseases will help us treat everyone more effectively, and lead to medical breakthroughs.

Everything Will be on Video

For a long time no one really knew what a tsunami looked like. They strike rarely and without warning, so there isn’t much time for you to send your local landscape painter or a camera crew to the scene. They don’t tend to leave a ton of eyewitnesses — if you’re close enough to get a good look at what’s happening, you’re probably dead. So for a long time, most people imagined a huge cresting wave like the ones you see at a surfing beach, just ten or a hundred times bigger. 

But it turns out they were wrong. We’ll let XKCD describe it:

The real picture is slightly more complicated (Randall goes into more detail here) but in general he’s right. Do a Google image search for “tsunami” and you’ll see a lot of photoshopped images of giant cresting waves rising up above major cities. 

But video from the 2004 tsunami showed that a tsunami isn’t a cresting wave at all — the water level just goes up 20 feet all at once, which is really, really bad all on its own. Since then, every major tsunami has been captured on video. And why not? Even in the developing world, nearly everyone has a video camera in their pocket at all times.

Giant squid have long been monsters of legend, but the whole 20th century came and went without anyone photographing a giant squid alive. All this changed in (also) 2004, when a Japanese team managed to capture a photograph of a giant squid using a lure. Not long after that, we had video — first on the surface in 2006, and then in its natural habitat in 2012.

The 2020 Beirut explosion caught everyone by surprise. But there were still multiple videos and images available immediately, within minutes, to anyone on twitter:

You probably heard about the recent volcanic eruption near Tonga. As with Beirut, we had multiple videos within hours. Unlike with Beirut, some of these were satellite videos. Partially we point this out to say, you can see this shit from space. But partially we want to emphasize that even satellite video now ends up on twitter and reddit in a matter of hours, if not minutes. 

This is the world we’re living in. Almost everyone has a video camera in their pocket at all times. This isn’t entirely true in the developing world, but it’s getting more true there all the time. And when the event is something that can’t be captured on your cellphone, like a volcanic eruption visible from space, the footage will make its way to twitter in a few minutes anyways.

From here on out, anything interesting will be captured on video, and usually that video will be publicly available. When we were looking into the leanest and fattest cities in the US, and learned about the explosion at the Chemtool lithium grease factory in Rockton, IL, we were able to find not one but several videos of the explosion publicly available on the internet. We didn’t even have to look that hard.

“Never seen anything like this” is right, and that’s the byword of the next several decades. This will probably be humdrum by 2050, but between now and then there will be a lot of firsts. Like the first (decent) video of a tsunami in 2004, and the first video of a giant squid in 2006, there will soon be the first video of Halley’s Comet, maybe the first video of an asteroid impact, and of course the first video of the Loch Ness Monster.  

So unless we have a total civilizational collapse, from now on expect that all important historical events will be captured on video. By 2050, expect them all from multiple angles, in glaring HD. If Napoleon is brought back to life through the power of cloning, and marches across Europe in 2034, expect to be able to count the pores on his nose in the newsreel footage.  

Double Book Review: Confessions of an Advertising Man & The Way of the General

I.

David Ogilvy’s Confessions of an Advertising Man opens:

As a child I lived in Lewis Carroll’s house in Guildford. My father, whom I adored, was a Gaelic-speaking Highlander, a classical scholar and a bigoted agnostic. One day he discovered that I had started going to church secretly.

“My dear old son, how can you swallow that mumbo-jumbo? It is all very well for servants but not for educated people. You don’t have to be a Christian to behave like a gentleman!”

My mother was a beautiful and eccentric Irishwoman. She disinherited me on the ground that I was likely to acquire more money than was good for me without any help from her. I could not disagree.

For those of you who are just tuning in, David Ogilvy was a copywriter who made his way to advertising stardom. He founded the advertising firm Ogilvy & Mather (now known simply as “Ogilvy”), and in 1962, Time Magazine called him “the most sought-after wizard in today’s advertising industry”. People still call him “the Father of Advertising” and “the King of Madison Avenue” to this day. Wikipedia describes him simply as a “British advertising tycoon”. 

It’s immediately obvious that Ogilvy is an engaging writer. He knows this, because he’s cultivated it. From the start he’s talking about the value of writing, and he never strays too far from the topic. You can tell it’s important to him. “We like reports and correspondence to be well-written, easy to read – and short,” he says. “We are revolted by pseudo-academic jargon.” Later he says, “American businessmen are not taught that it is a sin to bore your fellow creatures.”

The writing shines brightest in his personal narratives — his statistics training at Princeton, his time as a door-to-door salesman, dropping out of Oxford to go to work as an apprentice chef at the Hotel Majestic in Paris, trying to avoid the storm of forty-seven raw eggs thrown across the kitchen at his head (“scoring nine direct hits”) by the Hotel’s chef potager who had grown impatient with Ogilvy’s constant “raids on his stock pot in search of bones for the poodles of an important client” — and so on.

The Hotel Majestic, now known as The Peninsula Paris

But his business advice is equally gripping — hiring and firing, how to get clients, how to keep clients, how to be a good client, how to write ads for television, and so on. This is striking, because most business advice is tedious and bad. 

His advice escapes the usual clichés partly because it is delivered in the writing style he recommends — easy to read, short, and direct. But another part of it is that his advice has something of a timeless quality to it. So after the quality of the writing, the second thing we noticed is that Ogilvy strongly reminds us of the third-century Chinese statesman, mystic, and military strategist Zhuge Liang.

II.  

Zhuge Liang, also known by his courtesy name Kongming, or his nickname Wolong (meaning “Crouching Dragon”), was born in 181 CE, in eastern China. He grew to become a scholar so highly regarded that his surname alone is synonymous with intelligence. In China, calling someone “Zhuge” is like calling someone “Einstein” in the west, except less likely to be sarcastic. 

Zhuge’s parents died when he was very young, and he was raised by one of his father’s cousins. This was during the extremely unstable years leading up to the Three Kingdoms period, when war was tearing the empire apart, and famines were so extreme that whole provinces resorted to cannibalism. While Zhuge was still a teenager, he was forced to move to a town in central China.

There he grew into a man of great insight and intelligence. Eventually he was discovered by Liu Bei, a distant relation of the Emperor and one of the great men of the age. Liu Bei was an accomplished general, but he had a reputation for being direct and honorable to a fault. Zhuge, on the other hand, already had a reputation for trickery and cunning. He shared with Liu Bei an idea that came to be known as the Longzhong Plan, a plan which eventually led to Liu Bei being crowned emperor of the new state of Shu Han. Zhuge is a central character in the massive historical classic Romance of the Three Kingdoms, and shrines in his honor still dot China 1,800 years later.

The parallels between Ogilvy and Zhuge are surprisingly strong. Both were extremely well-read in a wide variety of topics, but neither was a snob. Zhuge could quote classics like the Analects of Confucius and Sun Tzu’s The Art of War, but also enjoyed reciting folk songs from his hometown. In his book, Ogilvy references the ancient Greek orator Demosthenes and quotes statesmen like Winston Churchill, but also quotes a stanza sung by The Pirate King from Gilbert and Sullivan’s Pirates of Penzance.

David Ogilvy

When Liu Bei recruited Zhuge Liang, Zhuge was working as a subsistence farmer in Longzhong valley. Fifteen years later, he was appointed Regent to Liu Bei’s son, the young Emperor of Shu Han, when Liu Bei died. 

“Fifteen years ago,” writes Ogilvy at the beginning of Chapter Two, “I was an obscure tobacco farmer in Pennsylvania. Today I preside over one of the best advertising agencies in the United States, with billings of $55,000,000 a year, a payroll of $5,000,000, and offices in New York, Chicago, Los Angeles, San Francisco, and Toronto.”

Farming wasn’t the only profession they shared. After finishing his book, we were surprised to learn that Ogilvy also worked as a military strategist. In World War II he served with British Intelligence, where he applied the insights he had gained from studying polling (with George Gallup himself) to secret intelligence and propaganda.

Takeshi Kaneshiro as Zhuge Liang, in John Woo’s Red Cliff

Zhuge Liang has a couple surviving works to his name. His longest work is called The Way of the General, so that’s the main book we draw on today. We also consider his two memorials known as the Chu Shi Biao, as well as a letter he wrote to his son, called Admonition to His Son. Finally, as The Way of the General is sometimes considered to be a sort of commentary on Sun Tzu’s The Art of War, we will occasionally reference that work as well. 

Similarly, Ogilvy has not only Confessions of an Advertising Man, but also a fascinating manual, The Theory and Practice of Selling the AGA Cooker, which Fortune magazine called “the finest sales instruction manual ever written.” With an endorsement like that, you know we will be referring to this piece.

III.

Let’s start with the writing. The two men have a very similar style. Both books are clearly written. But while the language they use is normally plain, both men have an occasional tendency to dip into wild metaphors. 

Ogilvy describes founders who get rich and let their creative fires go out as “extinct volcanoes”, and refers to his set of techniques for writing great campaigns as “my magic lantern.” Meanwhile, Zhuge opens his book with the following imagery: “If the general can hold the authority of the military and operate its power, he oversees his subordinates like a fierce tiger with wings, flying over the four seas, going into action whenever there is an encounter.” On the other hand: “If the general loses his authority and cannot control the power, he is like a dragon cast into a lake.” 

“Those who would be military leaders must have loyal hearts, eyes and ears, claws and fangs. Without people loyal to them, they are like someone walking at night, not knowing where to step. Without eyes and ears, they are as though in the dark, not knowing how to proceed. Without claws and fangs, they are like hungry men eating poisoned food, inevitably to die,” says Zhuge, while Ogilvy says, “I prefer the discipline of knowledge to the anarchy of ignorance. We pursue knowledge the way a pig pursues truffles. A blind pig can sometimes find truffles, but it helps to know that they grow in oak forests.”

Sometimes these metaphors veer into the farcical. “Advertising is a business of words,” writes Ogilvy, “but advertising agencies are infested with men and women who cannot write. They cannot write advertisements, and they cannot write plans. They are helpless as deaf mutes on the stage of the Metropolitan Opera.” Zhuge strikes a similar note in writing, “If the rulership does not give [generals] the power to reward and punish, this is like tying up a monkey and trying to make it cavort around, or like gluing someone’s eyes shut and asking him to distinguish colors.”

Both of them make a lot of lists. Zhuge has lists of five skills, four desires, fifteen avenues of order, and eight kinds of decadence in generalship (“Seventh is to be a malicious liar with a cowardly heart.”). Ogilvy has lists of ten criteria for accounts, fourteen devices to use when you need to use very long copy, and twenty-two commandments for advertising food products (“The larger your food illustration, the more appetite appeal.”). 

These lists are good enough that you could easily turn them into a series of Buzzfeed-style listicles: “8 Kinds of Decadence in Generalship – Number 7 will SHOCK YOU”

Both men sometimes use little parables to drive home their points. In one section, Zhuge lists a number of ancient kings and their approaches to winning wars with the least possible violence. Ogilvy sometimes combines a parable with one of his vivid metaphors, and ends up sounding rather a lot like a Chinese courtier himself:  

When Arthur Houghton asked us to do the advertising for Steuben, he gave me a crystal-clear directive: “We make the best glass in the world. Your job is to make the best advertising.”

I replied, “Making perfect glass is very difficult. Even the Steuben craftsmen produce some imperfect pieces. Your inspectors break them. Making perfect advertisements is equally difficult.”

Six weeks later I showed him the proof of our first Steuben advertisement. It was in color, and the plates, which had cost $1,200, were imperfect. Without demur, Arthur agreed to let me break them and make a new set. For such enlightened clients it is impossible to do shoddy work.

Both books hit their key themes over and over, in slightly different guises each time. They look at the same few ideas repeatedly, from different perspectives. Continuous focus on the fundamentals highlights what really matters, and maybe this is why much of their advice ends up sounding so similar.

IV.

Ambition

For these two men, the root of their advice, and probably the root of their similarity, is that both of them are enormously ambitious. “Aspirations should remain lofty and far-sighted,” writes Zhuge. Despite his Scottish blood, Ogilvy sounds very American when he says, “Don’t bunt. Aim out of the park.” Then he sounds kind of like Zhuge again, when he finishes with, “Aim for the company of immortals.”

Ambition gets a bad rap these days, but these two aren’t talking about accumulating piles of money, or being as big or as famous as humanly possible. Ambition means doing something meaningful with your life. “I have no ambition to preside over a vast bureaucracy,” says our Ad Man. “That is why we have only nineteen clients. The pursuit of excellence is less profitable than the pursuit of bigness, but it can be more satisfying.” 

Zhuge goes out of his way to specifically mention fighting injustice. “If your will is not strong,” he says, “if your thought does not oppose injustice, you will fritter away your life stuck in the commonplace, silently submitting to the bonds of emotion, forever cowering before mediocrities, never escaping the downward flow.”

And this is the other side of ambition, maybe the side that really matters: freedom from fear. Zhuge says, “The years run off with the hours, aspirations flee with the years. Eventually one ages and collapses. What good will it do to lament over poverty?” You only get one life and it’s going to end someday. You’re going to lose it all no matter what, so why not be ambitious? The alternative is cowering before mediocrity.

Many people are afraid of failing, or worse, the embarrassment that they imagine comes with failure. We say “imagine” because, once you try it, you’ll find that most of the time, the embarrassment never comes. And you can’t fight injustice, let alone make excellent ads, if you’re hung up on the idea of failing.

Hard Work & Relaxation

To the short-sighted, effort and relaxation seem like opposites. It’s easy to think there are two categories of people: those who work very hard for very long hours (and presumably burn out) and those who are slackers (and presumably go nowhere). In certain rare cases people talk about aiming for “work-life balance”, a sort of purgatorial or limbo-like concept that combines the worst of both worlds — the inability to get anything done at work with the inability to have anything more than the most superficial personal life.

Ogilvy and Zhuge understand that this isn’t how it works. Work and rest are complements, and they advocate a life where you both work extremely hard and place a high premium on relaxation. 

Maybe it’s not surprising to hear that a Madison Avenue executive worked long hours, but Ogilvy really did work some long hours. He reminisces about his time working for the head chef at the Parisian Hotel Majestic, who worked seventy-seven hours a week, and says, “That is about my schedule today.” When describing what he admires, Ogilvy comes right out and says, “It is more fun to be overworked than to be underworked.” Elsewhere he says, “I believe in the Scottish proverb: ‘Hard work never killed a man.’ Men die of boredom, psychological conflict and disease. They never die of hard work.”

Zhuge mentions some long hours himself. “One who rises early in the morning and retires late at night,” he says, “is the leader of a hundred men.” He kind of makes a point of it. “Generals do not say they are thirsty before the soldiers have drawn from the well,” he says. “Generals do not say they are hungry before the soldiers’ food is cooked; generals do not say they are cold before the soldiers’ fires are kindled; generals do not say they are hot before the soldiers’ canopies are drawn.” 

These are grueling requirements, but much of it seems to spring from the noble desire to not expect anything from others that you wouldn’t do yourself. Zhuge says, “Lead them into battle personally, and soldiers will be brave.” In explaining his own long hours, Ogilvy says, “I figure that my staff will be less reluctant to work overtime if I work longer hours than they do.”

This seems like more than hustle culture. It’s closely related to the drive for excellence. “From morning to night we sweated and shouted and cursed and cooked,” says Ogilvy of his time at the Hotel Majestic. “Every man jack was inspired by one ambition: to cook better than any chef had ever cooked before.”

In warfare, excellence can save thousands of lives. It is somewhat more prosaic in advertising, but we think Ogilvy is sincere when he promises his employees, “I try to make sufficient profits to keep you all from penury in old age,” and excellence in advertising helps him make good on that promise.

The commitment to hard work is important in part because hard work is how you make something look easy. The height of woodworking is when you cannot see the seams, and the height of advertising is when you cannot see the ad:

A good advertisement is one which sells the product without drawing attention to itself. It should rivet the reader’s attention on the product. Instead of saying “What a clever advertisement”, the reader says “I never knew that before. I must try this product.”

It is the professional duty of the advertising agent to conceal his artifice. When Aeschines spoke, they said, “How well he speaks.” But when Demosthenes spoke, they said, “Let us march against Philip.” I’m for Demosthenes.

To our ear, this sounds almost exactly like the following passage from The Art of War:

To see victory only when it is within the ken of the common herd is not the acme of excellence. Neither is it the acme of excellence if you fight and conquer and the whole Empire says, “Well done!” To lift an autumn hair is no sign of great strength; to see the sun and moon is no sign of sharp sight; to hear the noise of thunder is no sign of a quick ear.

What the ancients called a clever fighter is one who not only wins, but excels in winning with ease. Hence his victories bring him neither reputation for wisdom nor credit for courage. He wins his battles by making no mistakes. Making no mistakes is what establishes the certainty of victory, for it means conquering an enemy that is already defeated. 

While we think Ogilvy is more like Zhuge Liang than Sun Tzu, Confessions of an Advertising Man might be more like The Art of War than The Way of the General. Both are about the same length. The physical books are about the same size. Both are divided up into a modest number of chapters — 11 chapters for Confessions, and 13 for The Art of War. In both books, each chapter is devoted to a specific topic, like “How to Keep Clients”, “Variation in Tactics”, “How to Rise to the Top of the Tree”, “Laying Plans”, “How to Build Great Campaigns”, “The Use of Spies”, and “Attack by Fire”.

Zhuge and Ogilvy both stress the importance of relaxation as an explicit complement to their focus on hard work and long hours. In a letter to his son where he warns against being lazy, Zhuge also says:

The practice of a cultivated man is to refine himself by quietude and develop virtue by frugality. Without detachment, there is no way to clarify the will; without serenity, there is no way to get far.

Study requires calm, talent requires study. Without study there is no way to expand talent; without calm there is no way to accomplish study.

Ogilvy also likes to study, but he tends to think of it as “homework”. His true love is vacations, which he describes like so:

I hear a great deal of music. I am on friendly terms with John Barleycorn. I take long hot baths. I garden. I go into retreat among the Amish. I watch birds. I go for long walks in the country. And I take frequent vacations, so that my brain can lie fallow—no golf, no cocktail parties, no tennis, no bridge, no concentration; only a bicycle.

Zhuge makes it clear that calm is needed for study, so that you can increase your talents. Ogilvy is equally clear that he takes vacations because he needs them to be creative:

The creative process requires more than reason. … I am almost incapable of logical thought, but I have developed techniques for keeping open the telephone line to my unconscious, in case that disorderly repository has anything to tell me. …

While thus employed in doing nothing [on vacation], I receive a constant stream of telegrams from my unconscious, and these become the raw material for my advertisements.

Both men emphasize relaxation because they believe it will help them be more productive. You may see this as dysfunctional; if so, it’s telling that Ogilvy agrees with you. “If you prefer to spend all your spare time growing roses or playing with your children, I like you better,” he says, “but do not complain that you are not being promoted fast enough.”

But there’s also an interesting point to be made. Even if productivity is the only thing you care about (let’s hope it’s not, but even so), you still need lots of calm and rest to make it happen. Working long hours can be fine if that’s what you want, but people who work all the time are doing it wrong. 

It’s also worth noting that the two of them think about creativity in much the same terms: 

Creative people are especially observant, and they value accurate observation (telling themselves the truth) more than other people do. They often express part-truths, but this they do vividly; the part they express is the generally unrecognized; by displacement of accent and apparent disproportion in statement they seek to point to the usually unobserved. They see things as others do, but also as others do not.

And:

An observant and perceptive government is one that looks at subtle phenomena and listens to small voices. When phenomena are subtle they are not seen, and when voices are small they are not heard; therefore an enlightened leader looks closely at the subtle and listens for the importance of the small voice. This harmonizes the outside with the inside, and harmonizes the inside with the outside; so the Way of government involves the effort to see and hear much.

Recruiting Great People

Zhuge and Ogilvy had different sorts of ambitions. Ogilvy wanted to be a great chef, then he wanted to make the best advertisements. Somewhere in between he wanted to be a tobacco farmer. Zhuge wanted to fight injustice, lower the people’s taxes, prevent government corruption, and (depending on the version of the story) embarrass Zhou Yu.

But despite these differences in focus, both of them agree that the highest form of ambition is to work with great people. Even so, the trouble with amazing people is, how do you find them? This question is at least as old as Zhuge’s time, probably much older, and both authors take it very seriously.

Ogilvy tells us that he has talked to some psychologists who have been working on the problem of creativity. But, he tells us, they have not yet caught up to his approach:

While I wait for Dr. Barron and his colleagues to synthesize their clinical observations into formal psychometric tests, I have to rely on more old-fashioned and empirical techniques for spotting creative dynamos. Whenever I see a remarkable advertisement or television commercial, I find out who wrote it. Then I call the writer on the telephone and congratulate him on his work. A poll has shown that creative people would rather work at Ogilvy, Benson & Mather than at any other agency, so my telephone call often produces an application for a job.

I then ask the candidate to send me the six best advertisements and commercials he has ever written. This reveals, among other things, whether he can recognize a good advertisement when he sees one, or is only the instrument of an able supervisor. Sometimes I call on my victim at home; ten minutes after crossing the threshold I can tell whether he has a richly furnished mind, what kind of taste he has, and whether he is happy enough to sustain pressure.

Zhuge has similar tricks. “Hard though it be to know people,” says Zhuge, “there are ways.” He doesn’t recommend visiting your prospective hires at home; instead, he suggests other situations you can put them in, to test their personalities. In characteristic fashion, he gives us a list:

First is to question them concerning right and wrong, to observe their ideas.

Second is to exhaust all their arguments, to see how they change.

Third is to consult with them about strategy, to see how perceptive they are.

Fourth is to announce that there is trouble, to see how brave they are.

Fifth is to present them with the prospect of gain, to see how modest they are.

Sixth is to give them a task to do within a specific time, to see how trustworthy they are.

Ogilvy goes a step further — not only does he give advice on how ad agencies can take the measure of potential employees, he lays out advice on how clients (that is, businesses) can take the measure of a potential ad agency! In spelling it out, he practically reiterates Zhuge’s list:

Invite the chief executive from each of the leading contenders to bring two of his key men to dine at your house. Loosen their tongues. Find out if they are discreet about the secrets of their present clients. Find out if they have the spine to disagree when you say something stupid. Observe their relationship with each other; are they professional colleagues or quarrelsome politicians? Do they promise you results which are obviously exaggerated? Do they sound like extinct volcanoes, or are they alive? Are they good listeners? Are they intellectually honest?

Above all, find out if you like them; the relationship between client and agency has to be an intimate one, and it can be hell if the personal chemistry is sour.

The most specific piece of advice the two authors agree on is where to find great people. “We receive hundreds of job applications every year,” Ogilvy admits. “I am particularly interested in those which come from the Middle West. I would rather hire an ambitious young man from Des Moines than a high-priced fugitive from a fashionable agency on Madison Avenue.” 

They agree that great people usually come from obscurity. “For strong pillars you need straight trees; for wise public servants you need upright people,” says Zhuge. “Straight trees are found in remote forests; upright people come from the humble masses. Therefore when rulers are going to make appointments they need to look in obscure places.” And apparently, this practice goes back pretty far. “Ancient kings are known to have hired unknowns and nobodies,” says Zhuge, “finding in them the human qualities whereby they were able to bring peace.”

Maybe these authors both feel this way because both of them started out in obscurity. But then again, here we are reading their books approximately 60 and 1,800 years later, so maybe they’re right. 

This is how Zhuge describes himself:

I was of humble origin, and used to lead the life of a peasant in Nanyang. In those days, I only hoped to survive in such a chaotic era. I did not aspire to become famous among nobles and aristocrats. The Late Emperor did not look down on me because of my background. He lowered himself and visited me thrice in the thatched cottage, where he consulted me on the affairs of our time. I was so deeply touched that I promised to do my best for him. 

Driving the point home is this memo Ogilvy sent to one of his partners in 1981:

Will Any Agency Hire This Man? 

He is 38, and unemployed. He dropped out of college. 

He has been a cook, a salesman, a diplomatist and a farmer. 

He knows nothing about marketing and had never written any copy. 

He professes to be interested in advertising as a career (at the age of 38!) and is ready to go to work for $5,000 a year. 

I doubt if any American agency will hire him.

However, a London agency did hire him. Three years later he became the most famous copywriter in the world, and in due course built the tenth biggest agency in the world. 

The moral: it sometimes pays an agency to be imaginative and unorthodox in hiring.

In case you can’t tell, he is describing himself.

Integrity

When Zhuge and Ogilvy talk about greatness, they’re not just talking about skill. In fact, skill comes second, and a distant second at that! Without integrity, without virtue, skill means nothing. 

“I admire people with first-class brains, because you cannot run a great advertising agency without brainy people,” says Ogilvy. “But brains are not enough unless they are combined with intellectual honesty.” Zhuge quotes Confucius as saying, “People may have the finest talents, but if they are arrogant and stingy, their other qualities are not worthy of consideration.”

Ogilvy doesn’t pull his punches, here or indeed ever. “I despise toadies who suck up to their bosses,” he says. “They are generally the same people who bully their subordinates. … I admire people who hire subordinates who are good enough to succeed them. I pity people who are so insecure that they feel compelled to hire inferiors as their subordinates.” 

A good leader looks to their team for counsel — these people were recruited for a reason! “Those who consider themselves lacking when they see the wise, who go along with good advice like following a current, who are magnanimous yet able to be firm, who are uncomplicated yet have many strategies,” says Zhuge, “are called great generals.”

You don’t expect much personal virtue from Madison Avenue, but Ogilvy really seems to feel strongly about this one:

I admire people who build up their subordinates, because this is the only way we can promote from within the ranks. I detest having to go outside to fill important jobs, and I look forward to the day when that will never be necessary.

I admire people with gentle manners who treat other people as human beings. I abhor quarrelsome people. I abhor people who wage paper-warfare. The best way to keep the peace is to be candid. 

Integrity is especially important in leadership — “for what is done by those above,” says Zhuge, “is observed by those below.” Here especially, the two leaders exhibit their belief that they should not expect anything of others that they are not prepared to demonstrate themselves. “To indulge oneself yet instruct others is contrary to proper government,” says Zhuge. “To correct oneself and then teach others is in accord with proper government. … If [leaders] are not upright themselves, their directives will not be followed, resulting in disorder.”

Ogilvy gives more detail. “I try to be fair and to be firm,” he says, “to make unpopular decisions without cowardice, to create an atmosphere of stability, and to listen more than I talk.” This is in some ways a very Confucian perspective, that a leader owes their subordinates exemplary behavior. “A policy of instruction and direction means those above educate those below,” says Zhuge, “not saying anything that is unlawful and not doing anything that is immoral.”

Exceptional integrity means understanding that you have a commitment to the people who work for you. Not the same commitment as they have to you — more of a commitment.  

Zhuge paraphrases Confucius as saying, “an enlightened ruler does not worry about people not knowing him, he worries about not knowing people. He worries not about outsiders not knowing insiders, but about insiders not knowing outsiders. He worries not about subordinates not knowing superiors, but about superiors not knowing subordinates. He worries not about the lower classes not knowing the upper classes, but about the upper classes not knowing the lower classes.”

“In the early days of our agency I worked cheek by jowl with every employee; communication and affection were easy,” says Ogilvy. “But as our brigade grows bigger I find it more difficult. How can I be a father figure to people who don’t even know me by sight?” If Confucius was right, we suppose this makes Ogilvy an enlightened ruler.

“It is important to admit your mistakes,” Ogilvy tells us, “and do so before you are charged with them. Many clients are surrounded by buckpassers who make a fine art of blaming the agency for their own failures. I seize the earliest opportunity to assume the blame.” 

But it’s not all tactics — you also want to earn the respect of the people you work with. “If you are brave about admitting your mistakes to your clients and your colleagues, you will earn their respect. Candor, objectivity and intellectual honesty are a sine qua non for the advertising careerist.” 

Being respected does happen to be good for business, but it’s also important for your self-worth as a person. Ogilvy offers a few conspicuous cases where he decided to act honorably, even though it was against his business interests:

Several times I have advised manufacturers who wanted to hire our agency to stay where they were. For example, when the head of Hallmark Cards sent emissaries to sound me out, I said to them, “Your agency has contributed much to your fortunes. It would be an act of gross ingratitude to appoint another agency. Tell them exactly what it is about their service which you now find unsatisfactory. I am sure they will put it right. Stay where you are.” Hallmark took my advice.

When one of the can companies invited us to solicit their account, I said, “Your agency has been giving you superb service, in circumstances of notorious difficulty. I happen to know that they lose money on your account. Instead of firing them, reward them.”

Exceptional integrity means exceptional humanity. “One whose humanitarian care extends to all under his command, whose trustworthiness and justice win the allegiance of neighboring nations, who understands the signs of the sky above, the patterns of the earth below, and the affairs of humanity in between, and who regards all people as his family,” says Zhuge, “is a world-class leader, one who cannot be opposed.”

Exceptional humanity in advertising — in 1963 no less! — looks like this:

Some of our people spend their entire working lives in our agency. We do our damnedest to make it a nice place to work. 

We treat our people like human beings. We help them when they are in trouble — with their jobs, with illness, with alcoholism, and so on.

We help our people make the best of their talents, investing an awful lot of time and money in training–like a teaching hospital. 

Our system of management is singularly democratic. We don’t like hierarchical bureaucracy or rigid pecking orders.

We give our executives an extraordinary degree of freedom and independence. 

We like people with gentle manners. Our New York office gives an annual award for “professionalism combined with civility.” 

We like people who are honest in argument, honest with clients, and above all, honest with consumers.

We admire people who work hard, who are objective and thorough.

We despise office politicians, toadies, bullies and pompous asses. We abhor ruthlessness.

The way up the ladder is open to everybody. We are free from prejudice of any kind — religious prejudice, racial prejudice or sexual prejudice. 

We detest nepotism and every other form of favouritism. In promoting people to top jobs, we are influenced as much by their character as anything else.

And in case that isn’t scrupulous enough for you, there’s at least one product that Ogilvy entirely refuses to advertise: politicians. “The use of advertising to sell statesmen,” he says, “is the ultimate vulgarity.”

V.

Zhuge and Ogilvy focus on different things. Zhuge has a section on grieving for the dead, Ogilvy has a chapter on writing television commercials. But these differences are superficial. Both men are animated by the same spirit. Both of them are infinitely ambitious — but it’s not a callous ambition. Their ambition is to be honest, relaxed, creative, and humane. 

We think these men would have been good friends. It’s tragic that they were born 1,730 years and several thousand miles apart. But it’s to our advantage that we get to read both books and see that these two authors are drawing from the same well. The best wisdom is timeless.

Reality is Very Weird and You Need to be Prepared for That

I. 

Maciej Cegłowski’s essay Scott and Scurvy is one of the most interesting things we’ve ever read. We keep coming back to it — and we hope to write more about it in the future — but today we want to start with just how weird the whole thing is.

Scott and Scurvy tells the true history of scurvy, a horrible and dangerous disease. Scurvy is the result of a vitamin C deficiency — if you’re a sailor or something, eating preserved food for months on end, you eventually run out of vitamin C and many horrible things start happening to your body. If this continues long enough, you die. But at any point, consuming even a small amount of vitamin C, present in most fresh foods, will cure you almost immediately. 

We can’t do the full story justice (read the original essay, seriously), but just briefly: The cure was repeatedly discovered and lost by different crews of sailors at different points in time. Then in 1747, James Lind tried a bunch of treatments and found that citrus was more or less a miracle cure for the disease. Even so, it took until 1795, nearly 50 years, for citrus juice to become a staple in the Royal Navy. 

Instead of diagrams depicting the horrifying symptoms of scurvy, please enjoy this picture of James Lind shoving a whole lemon into some unfortunate sailor’s mouth.

Originally, the Royal Navy was given lemon juice, which works well because it contains a lot of vitamin C. But at some point between 1795 and 1870, someone switched out lemons for limes, which contain a lot less vitamin C. Worse, the lime juice was pumped through copper tubing as part of its processing, which destroyed what little vitamin C it had to begin with. 

This ended up being fine, because ships were so much faster at this point that no one had time to develop scurvy. So everything was all right until 1875, when a British Arctic expedition set out on an attempt to reach the North Pole. They had plenty of lime juice and thought they were prepared — but they all got scurvy. 

The same thing happened a few more times on other polar voyages, and this was enough to convince everyone that citrus juice doesn’t cure scurvy. The bacterial theory of disease was the hot new thing at the time, so from the 1870s on, people played around with a theory that a bacteria-produced substance called “ptomaine” in preserved meat was the cause of scurvy instead. 

This theory was wrong, so it didn’t work very well. Everyone kept getting scurvy on polar expeditions. This lasted decades, and could have lasted longer, except that two Norwegians happened to stumble on the answer entirely by accident: 

It was pure luck that led to the actual discovery of vitamin C. Axel Holst and Theodor Frolich had been studying beriberi (another deficiency disease) in pigeons, and when they decided to switch to a mammal model, they serendipitously chose guinea pigs, the one animal besides human beings and monkeys that requires vitamin C in its diet. Fed a diet of pure grain, the animals showed no signs of beriberi, but quickly sickened and died of something that closely resembled human scurvy.

No one had seen scurvy in animals before. With a simple animal model for the disease in hand, it became a matter of running the correct experiments, and it was quickly established that scurvy was a deficiency disease after all. Very quickly the compound that prevents the disease was identified as a small molecule present in cabbage, lemon juice, and many other foods, and in 1932 Szent-Györgyi definitively isolated ascorbic acid.

Even in retrospect, the story is pretty complicated. But we worry that it would have looked even messier from the inside.

II.

Holst and Frolich also ran a version of the study with dogs. But the dogs were fine. They never developed scurvy, because unlike humans and guinea pigs, they don’t need vitamin C in their diet. Almost any other animal would also have been fine — guinea pigs and a few species of primates just happen to be really weird about vitamin C. So what would this have looked like if Holst and Frolich just never got around to replicating their dog research on guinea pigs? What if the guinea pigs had gotten lost in the mail?

Three of Theodore Roosevelt’s children posing in a photo with one of their five guinea pigs. Kermit Roosevelt is holding the pig.

Let’s imagine a version of history where the guinea pigs did indeed get lost in the Norwegian mail, so Holst and Frolich only tested dogs, and found no sign of scurvy. Let’s further imagine that Frolich has been struck by inspiration, and through pure intuition has figured out exactly what is going on. 

Frolich: You know Holst, I think old James Lind was right. I think scurvy really is a disease of deficiency, that there’s something in citrus fruits and cabbages that the human body needs, and that you can’t go too long without. 

Holst: Frolich, what are you talking about? That doesn’t make any sense.

Frolich: No, I think it makes very good sense. People who have scurvy and eat citrus, or potatoes, or many other foods, are always cured.

Holst: Look, we know that can’t be right. George Nares had plenty of lime juice when he led his expedition to the North Pole, but they all got scurvy in a couple weeks. The same thing happened in the Expedition to Franz-Josef Land in 1894. They had high-quality lime juice, everyone took their doses, but everyone got scurvy. It can’t be citrus.

Frolich: Maybe some citrus fruits contain the antiscorbutic [scurvy-curing] property and others don’t. Maybe the British Royal Navy used one kind of lime back when Lind did his research but gave a different kind of lime to Nares and the others on their Arctic expeditions. Or maybe they did something to the lime juice that removed the antiscorbutic property. Maybe they boiled it, or ran it through copper piping or something, and that ruined it.

Holst: Two different kinds of limes? Frolich, you gotta get a hold of yourself. Besides, the polar explorers found that fresh meat also cures scurvy. They would kill a polar bear or some seals, have the meat for dinner, and then they would be fine. You expect me to believe that this antiscorbutic property is found in both polar bear meat AND some kinds of citrus fruits, but not in other kinds of citrus?

Frolich: You have to agree that it’s possible. Why can’t the property be in some foods and not others? 

Holst: It’s possible, but it seems really unlikely. Different varieties of limes are way more similar to one another than they are to polar bear meat. I guess what you describe fits the evidence, but it really sounds like you made it up just to save your favorite theory. 

Frolich: Look, it’s still consistent with what we know. It would also explain why Lind says that citrus cures scurvy, even though it clearly didn’t cure scurvy in the polar expeditions. All you need is different kinds of citrus, or something in the preparation that ruined it — or both! 

Holst: What about our research? We fed those dogs nothing but grain for weeks. They didn’t like it, but they didn’t get scurvy. We know that grain isn’t enough to keep sailors from getting scurvy, so if scurvy is about not getting enough of something in your diet, those dogs should have gotten scurvy too.

Frolich: Maybe only a few kinds of animals need the antiscorbutic property in their food. Maybe humans need it, but dogs don’t. I bet if those guinea pigs hadn’t gotten lost in the mail, and we had run our study on guinea pigs instead of dogs, the guinea pigs would have developed scurvy.

Holst: Let me get this straight, you think there’s this magical ingredient, totally essential to human life, but other animals don’t need it at all? That we would have seen something entirely different if we had used guinea pigs or rats or squirrels or bats or beavers?

Frolich: Yeah basically. I bet most animals don’t need this “ingredient”, but humans do, and maybe a few others. So we won’t see scurvy in our studies unless we happen to choose the right animal, and we just picked the wrong animal when we decided to study dogs. If we had gotten those guinea pigs, things would have turned out different.

III.

Frolich is entirely right on every point. He also sounds totally insane. 

Maybe there are different kinds of citrus. Maybe some animals need this mystery ingredient and others don’t. Maybe polar bear meat is, medically speaking, more like citrus fruit from Sicily than like citrus fruit from the West Indies. Really???

This looks a lot like special pleading, but in this case, the apparent double standard is correct. All of these weird exceptions he suggests were actually weird exceptions. And while our hypothetical version of Frolich wouldn’t have any way of knowing, these were the right distinctions to make. 

Reality is very weird, and you need to be prepared for that. Like the hypothetical Holst, most of us would be tempted to dismiss this argument out of hand. But this weird argument is correct, because reality itself is very weird. Looking at this “contradictory” evidence and responding with these weird bespoke splitting arguments turns out to be the right move, at least in this case. 

Real explanations will sometimes sound weird, crazy, or too complicated because reality itself is often weird, crazy, or too complicated. 

It’s unfortunate, but scurvy is really the BEST CASE SCENARIO. The answer ended up being almost comically simple: it’s just a disease of deficiency, eat one of these foods containing this vitamin and be instantly cured. But the path to get to that answer was confusing and complicated. Think about all the things in the world that have a more complicated answer than scurvy, i.e. almost everything. Those things will have even weirder and more confusing stories to untangle.

This story has a couple of lessons for us. The first is just, don’t discard an explanation just because it’s weird or complicated. 

Focus on explanations that are consistent with all the evidence. Frolich’s harebrained different-citrus, different-animals explanation from above does sound crazy, but at least it’s consistent with everything they knew at the time. If some kinds of citrus cured scurvy and other kinds didn’t, that would explain why it worked for Lind and for the early sailors but not for the polar explorers after 1870. And in fact, that does explain it.  

It’s also testable, at least in principle. If you think there might be differences between different kinds of citrus fruits, you could go back and try to figure out the original source used by James Lind and the Royal Navy, and try to re-create those conditions as closely as possible.

FRUIT

We’re taught to see splitting — coming up with weird special cases or new distinctions between categories — as a tactic that people use to save their pet theories from contradictory evidence. You can salvage any theory just by saying that it only works sometimes and not others — it only happens at night, you need to use a special kind of wire, the vitamin D supplements from one supplier aren’t the same as from a different supplier, etc. Splitting has gotten a reputation as the sort of thing scientific cheats do to draw out the con as long as possible.

But as we see from the history of scurvy, sometimes splitting is the right answer! In fact, there were meaningful differences in different kinds of citrus, and meaningful differences in different animals. Making a splitting argument to save a theory — “maybe our supplier switched to a different kind of citrus, we should check that out” — is a reasonable thing to do, especially if the theory was relatively successful up to that point. 

Splitting is perfectly fair game, at least to an extent — doing it a few times is just prudent, though if you have gone down a dozen rabbitholes with no luck, then maybe it is time to start digging elsewhere.

Scurvy isn’t the only case where splitting was the right call. Maybe there’s more than one kind of fat. Maybe there are different kinds of air. Maybe there are different types of blood. It turns out, there are! So give splitting a chance.

Be more forgiving of contradictory evidence. These days people like to put a lot of focus on the idea of decisive experiments. While it’s true that some experiments are more decisive than others, no experiment can be entirely decisive either for or against a theory. We need to stop expecting knock-down studies that solve things forever.

Contradictory evidence can be wrong! The person making the observations might have been confused. They might have done the analysis wrong. The equipment may have malfunctioned. They might have used dogs instead of guinea pigs, or they might have used the wrong kind of hamster. The data might even be fabricated! Shit happens. 

Things change as contradictory evidence piles up, but even then, it doesn’t mean you should scrap the theory you started out with. Everyone back in the 1870s made a big mistake throwing out their perfectly good “disease of deficiency” theory as soon as there were a few contradictory stories from polar explorers.

Their mistake was thinking “maybe the theory is wrong”, instead of “maybe the real theory is more complicated”. When you see evidence that goes against a theory, it could mean that you’ve been barking up the wrong tree. Or it could just mean that there’s a small wrinkle you aren’t aware of.

If you have a theory that’s been working pretty well for a while — it made good predictions, it solved real problems, it explained a lot of mysteries — you should stick with it in the face of apparent contradictions, at least for a while. When you hit a snag with a reliable theory, think “maybe it’s complicated” instead of “oh it’s wrong”. It may still be wrong, but it’s good to check!

Be careful of purely verbal, syllogistic reasoning. We make these arguments in conversation all the time. They seem plain, convincing, and commonsensical, but in reality they’re pretty weak. It’s hard to get away from commonsensical, verbal arguments since that’s how we naturally think, but don’t take them too seriously. They’re ok as starting points, but keep in mind that they’re not actually evidence.

“Different kinds of citrus fruits are more like one another than they are like polar bear meat” sounds very reasonable, but in this case it was wrong. Sicilian lemons really ARE more like polar bear meat than they are like West Indian limes, at least for the purposes of treating scurvy.

One of these things is not like the others. That’s right — the limes!

“Dogs are about as similar to humans as guinea pigs are” also sounds very reasonable. The three species all belong to the same class (Mammalia) but to different orders (Carnivora, Primates, and Rodentia, respectively), so there seems to be some taxonomic evidence for it as well. But humans really are a lot more like guinea pigs than they are like dogs, or most other animals, at least for the purposes of getting scurvy.

IV.

We were tickled to see this paragraph near the end of Scott and Scurvy, for obvious reasons:

…one of the simplest of diseases managed to utterly confound us for so long, at the cost of millions of lives, even after we had stumbled across an unequivocal cure. It makes you wonder how many incurable ailments of the modern world—depression, autism, hypertension, obesity—will turn out to have equally simple solutions, once we are able to see them in the correct light. What will we be slapping our foreheads about sixty years from now, wondering how we missed something so obvious?

This is really good, and we think it’s reason to be optimistic. We might be closer than we think to cures for depression, hypertension, and yes, even obesity.

The answer to scurvy was just one thing, plus a few wrinkles — mostly “not all citrus has the antiscorbutic property” and “most animals can’t get scurvy”. This was only difficult because people weren’t prepared to deal with basic wrinkles, but we can do better by learning from their mistakes.

This means don’t give up easily. It suggests that there is lots of low-hanging fruit, because even simple explanations are easily missed.

Lots of theories have been tried, and lots of them have been given up because of something that looks like contradictory evidence. But the evidence might not actually be a contradiction — the real explanation might just be slightly more complicated than people realized. Go back and revisit scientific near-misses; maybe there’s a wrinkle they didn’t know how to iron out.

The Didactic Novel

Shōgun

James Clavell’s Shōgun is a historical novel about the English pilot John Blackthorne. The Dutch ship he’s piloting wrecks in Japan in the year 1600, and Blackthorne has to learn how to survive in what to him is a mad and totally alien culture.

All historical novels are somewhat educational, but Shōgun teaches you about more than just Japanese society at the beginning of the Tokugawa Shogunate. 

Blackthorne speaks a lot of different languages, and this is a big part of his identity. He speaks English natively and Dutch with his crew, but also Latin and Portuguese and even a little Spanish, which he uses to communicate with the few other Europeans he finds in Japan, mostly Catholic priests. This makes sense in the context of the novel — his ship is Dutch but their allies the English are the best pilots in the world, and they’re using stolen Portuguese documents to navigate strange waters, so he would need to speak that language too. 

So when Blackthorne finds himself stranded in Japan, he starts learning Japanese. At first this is hard because Blackthorne has only ever studied European languages before, and also because people keep trying to kill him. But he has a lot of experience learning foreign languages and little else to do, so he quickly starts picking it up.

What’s more surprising is that soon the reader is picking up some Japanese too. Linguistically, Clavell has put the reader in the very same situation as Blackthorne. The book starts out entirely in English, but suddenly you are confronted with words and phrases in a language you don’t understand. You end up learning many of these words and phrases just to follow along. 

Staged seppuku ritual, 1897

It seems like Clavell is doing this intentionally. The book is in English, but Blackthorne is the only English-speaking character in the novel. Except in the few cases where he’s talking to himself, all the dialogue is actually being carried on in other languages, but when the dialogue is in Dutch, or Portuguese, or even Latin, Clavell renders it all as English. When Japanese people are speaking Japanese to each other, he translates that into English too. But when Blackthorne encounters Japanese that he doesn’t understand, or just barely understands, it’s usually rendered as romanized Japanese. To follow these snippets you need to learn a little Japanese, so you do. And the interesting thing is, you learn this little bit of Japanese without any conscious effort.

It’s hard to read Shōgun all the way through and not learn at least a few words in Japanese. By the end of the first volume, most readers will know words like onna, kinjiru, wakarimasu, hai, ima, ikimasho, anjin, domo, isogi, and of course the omnipresent neh.

This isn’t a perfect language-learning tool. Shōgun is over 300,000 words long (and the original draft was considerably longer), but most of that is devoted to being a historical novel, an adventure story, and a romance, not teaching you Japanese. We love that there are lots of reasons to read it. But given the limited amount of space devoted to these basic Japanese lessons, it’s a very effective introduction.

Cryptonomicon

Cryptonomicon by Neal Stephenson is a dense novel that alternates between historical fiction and near-future sci-fi. 

There are two storylines. The first is set during World War II and follows a group of characters pioneering cryptography in an effort to win the war, inventing the computer along the way — among them are a fictionalized version of Alan Turing and his even-more-fictional German boyfriend, Rudolf “Rudy” von Hacklheber.

The second storyline focuses on the grandchildren of some of the WWII characters in the modern day, several of whom are putting together a startup in southeast Asia in an attempt to create an anonymous banking system using magic internet money. The novel was published in 1999 so yes, this seemed like an ambitiously futuristic scheme at the time. It also maybe helped create that future — Cryptonomicon was required reading during the early days of PayPal.

Unironically the best ad ever created

But implicitly, and at times explicitly, Cryptonomicon is a textbook on something like information theory. Chapter One includes a long discussion where Alan Turing and Rudy von Hacklheber teach Lawrence Pritchard Waterhouse (sort of the viewpoint character) about Russell and Whitehead, Gödel, the distinctions between mathematics and physics, how logic can be reduced to symbols, etc. If this sounds dry, it isn’t — you’ll probably learn more about philosophy of math in these 4000 words than you did during 4 years of college. Then Alan and Rudy give Lawrence a problem to go off and solve so the two of them can fuck. Sex comes up a lot in Cryptonomicon, possibly because sex itself is about the exchange of deeply encrypted source code, or possibly because Stephenson is just horny.

All that just in Chapter One. This is a book about cryptography, and so pretty much every other chapter has some lesson, implicit or explicit, about topics like symbols, languages, systems, inference, even actual algorithms or code snippets. Chapter 25 ends by walking you through the process of doing encryption and decryption with a one-time pad. There’s even information theory disguised (?) as small-business advice. It’s kind of Gödel, Escher, Bach in novel format, to the point that there are references to GEB hidden in a few places around the book. 
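If you’ve never seen a one-time pad in action, the mechanics are easy to sketch. Here’s a toy version in Python (our own illustration, to be clear, not Stephenson’s text): encryption shifts each message letter forward by a random pad letter, mod 26, and decryption shifts it back.

```python
# A toy one-time pad (our sketch, not code from the novel). If the pad
# is truly random, as long as the message, used once, and kept secret,
# the ciphertext carries no information about the plaintext at all.
import secrets
import string

A = ord("A")

def encrypt(message: str, pad: str) -> str:
    """Shift each message letter forward by its pad letter, mod 26."""
    return "".join(chr((ord(m) + ord(p) - 2 * A) % 26 + A)
                   for m, p in zip(message, pad))

def decrypt(ciphertext: str, pad: str) -> str:
    """Shift each ciphertext letter back by its pad letter, mod 26."""
    return "".join(chr((ord(c) - ord(p)) % 26 + A)
                   for c, p in zip(ciphertext, pad))

message = "ATTACKATDAWN"
pad = "".join(secrets.choice(string.ascii_uppercase) for _ in message)
print(encrypt(message, pad))  # looks like pure noise
assert decrypt(encrypt(message, pad), pad) == message
```

The whole cipher fits in your head, which is part of why it makes such a good teaching example: real spies did this same arithmetic by hand with printed pads of random letters.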

For the most part these lessons are subtle and deeply embedded:

One night, Benjamin received a message and spent some time deciphering it. He announced the news to Shaftoe: “The Germans know we’re here.”

“What do you mean, they know we’re here?”

“They know that for at least six months we have had an observation post overlooking the Bay of Naples,” Benjamin said.

“We’ve been here less than two weeks.”

“They’re going to begin searching this area tomorrow.”

“Well, then let’s get the fuck out of here,” Shaftoe said.

“Colonel Chattan orders you to wait,” Benjamin said, “until you know that the Germans know that we are here.”

“But I do know that the Germans know that we are here,” Shaftoe said, “you just told me.”

“No, no no no no,” Benjamin said, “wait until you would know that the Germans knew even if you didn’t know from being told by Colonel Chattan over the radio.”

“Are you fucking with me?”

“Orders,” Benjamin said, and handed Shaftoe the deciphered message as proof.

But in a few places he does come out and state the idea plainly:

It all comes to him, explosively, during the Battle of Midway, while he and his comrades are spending twenty-four hours a day down among those ETC machines, decrypting Yamamoto’s messages, telling Nimitz exactly where to find the Nip fleet.

What are the chances of Nimitz finding that fleet by accident? That’s what Yamamoto must be asking himself.

It is all a question (oddly enough!) of information theory.

If the action is one that could never have happened unless the Americans were breaking Indigo, then it will constitute proof, to the Nipponese, that the Americans have broken it. The existence of the source—the machine that Commander Schoen built—will be revealed.

Waterhouse trusts that no Americans will be that stupid. But what if it isn’t that clear-cut? What if the action is one that would merely be really improbable unless the Americans were breaking the code? What if the Americans, in the long run, are just too damn lucky?

And how closely can you play that game? A pair of loaded dice that comes up sevens every time is detected in a few throws. A pair that comes up sevens only one percent more frequently than a straight pair is harder to detect—you have to throw the dice many more times in order for your opponent to prove anything.

If the Nips keep getting ambushed—if they keep finding their own ambushes spoiled—if their merchant ships happen to cross paths with American subs more often than pure probability would suggest—how long until they figure it out?
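An aside from us, not the novel: you can put rough numbers on that dice intuition with a standard sample-size calculation. The 95% confidence and 80% power figures below are our own assumptions, chosen just to make the point.

```python
# Back-of-the-envelope: how many throws to catch a loaded pair of dice?
# Standard normal-approximation sample size for detecting a shifted
# proportion at 95% confidence and 80% power (our assumed settings).
from math import sqrt

def throws_needed(p_fair: float, p_loaded: float,
                  z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate number of throws to tell p_loaded apart from p_fair."""
    sigma = sqrt(p_fair * (1 - p_fair))
    delta = abs(p_loaded - p_fair)
    return int(((z_alpha + z_beta) * sigma / delta) ** 2)

p_sevens = 6 / 36                                # fair dice: 1-in-6 sevens
print(throws_needed(p_sevens, 1.0))              # sevens every time: a throw or two
print(throws_needed(p_sevens, p_sevens * 1.01))  # 1% bias: ~390,000 throws
```

Roughly 400,000 throws to catch a one percent bias. That’s exactly the game Waterhouse is worried about: keep the Americans’ “luck” just inside the noise, and the other side can never prove anything.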

The whole book is backwards and out-of-order — not only because the chapters set in 1942 are intermixed with the chapters set in 1997, but because internal storylines are intentionally disjointed. Effects come before causes, explanations come many chapters before or after the thing they are meant to explain, critical hints are brief and easily missed. But this is intentional. The whole book is a giant combination lock, the final exercise left for the reader, and deciphering it is part of the reading experience and part of the lesson.

In any case, it’s hard to read Cryptonomicon all the way through and not learn something about information theory. You won’t be an expert, but it’s a damn fine introductory textbook. And because Stephenson is such a master, the book is designed to give up more mysteries every time you re-read it. Each time you revisit, you’re struck by stuff you missed the last time around.

Writing novels that are secretly textbooks kind of seems to be Stephenson’s MO. Cryptonomicon has a prequel series called The Baroque Cycle. Just like Cryptonomicon deals with the invention of computing and information theory, these books deal with the invention of the scientific method, following historical characters like Sir Isaac Newton and Gottfried Wilhelm (von) Leibniz. It’s also about the invention of banking/modern currency, and it’s heavily implied that the two are connected — a true historical fact is that in addition to his work in physics, Isaac Newton was the Master of the Mint, in charge of all English currency, for thirty years. He even went out to taverns in disguise to personally catch counterfeiters. 

The perfect disguise

Stephenson also seems to be aware that this is what he’s doing. Maybe this is not surprising given his other novel The Diamond Age, a book about a book that teaches you things. The Diamond Age follows a similar model and tries to implicitly teach the reader about the basics of computer science and macroeconomics.

Harry Potter and the Methods of Rationality

Harry Potter and the Methods of Rationality (HPMoR) is a 660,000-word Harry Potter fanfic by Eliezer Yudkowsky.

Explicitly, HPMoR asks the question: what if Harry Potter were raised by an Oxford professor and was intensively homeschooled, instead of being raised in a closet by the Dursleys? Also explicitly, HPMoR is Yudkowsky’s attempt to teach the scientific method and “the methods of rationality” to a general audience.

Clavell and Stephenson seem somewhat aware that their novels are educational, but Yudkowsky is the only one of the three who comes right out and talks about how this is his goal, at least that we’ve seen. In a post on why he wrote the fanfic, he says:

But to answer your question, nonfiction writing conveys facts; fiction writing conveys *experiences*. I’m worried that my previous two years of nonfiction blogging haven’t produced nearly enough transfer of real cognitive skills. The hope is that writing about the inner experience of someone trying to be rational will convey things that I can’t easily convey with nonfiction blog posts.

Yudkowsky is referring to his other attempt to teach these skills as “The Sequences” on LessWrong. Elsewhere he says that these two attempts, fiction and nonfiction, don’t even communicate the same thought. But to editorialize a bit, it seems like HPMoR was more successful than the Sequences. It’s certainly reached a broad audience — among other things, it’s been reviewed in venues like Vice, Who Magazine, and The Hindustan Times.

(To editorialize a bit more, Yudkowsky’s writing on writing might be more interesting than either the Sequences or HPMoR. But of course we’re very interested in writing so we’re kind of biased.)

Yudkowsky describes his goal as teaching “real cognitive skills”, and he’s on the money with this one. Many skills are better taught through experience than presented as a block of facts — you’ll learn more Japanese from getting lost in Tokyo than you will from skimming a Japanese grammar. So for skills like these, a didactic novel is better than an explicit textbook, or at least a good complement.

HPMoR is spread a little thin — unlike Japanese or information theory, “rationality” is not really a single subject, so it’s a little less cohesive. But Yudkowsky does still have a lot of specific points he’s trying to make, and it’s hard to read HPMoR all the way through and not learn something about genetics, psychology, heuristics, game theory, tactics, and the scientific method.

The Didactic Novel

All three of these novels were extremely successful. All of them try to teach you something more concrete than the average novel does. And all of them at least somewhat succeed.

Some skills, like oil painting or bicycle repair, are hard to learn from just reading about them — you actually have to go out and try it for yourself. But in many skills, the basics can be picked up vicariously. You won’t be a great codebreaker after reading Cryptonomicon, but it gives you a very firm foundation to start from.

Novels are powerful teaching tools because they’re more fun than textbooks, and fun is good. Educational and entertaining are treated like foils, but they’re actually complementary. If something is entertaining, it holds your attention; if it holds your attention, you will be able to engage; if you engage you can learn something. If something is boring or tedious you will go look at twitter or pick your nose instead. Shōgun doesn’t teach you quite as much Japanese as you would get from a Japanese 101 course at the local university, but we guarantee it’s twice as fun and two hundred times easier to read Shōgun than it is to take all those quizzes. Japanese for Busy People is a pretty good textbook, but you don’t want to cuddle in with it on a snowy afternoon.

And frankly, fun sticks in your brain easier. 

Fiction is great. It engages. It inspires. Fiction led thousands of people to develop an intricate understanding of the history and politics of Westeros, including hundreds of characters and thousands of events and relationships. It led people to create detailed models of fictional castles in SketchUp. Fiction inspires people to scholarly discourse on the details of medieval sieges, or painstaking minecraft replicas of entire continents. Fiction leads people to totally overthink why an empire might destroy a province in a show of military might, or speculate in-depth about the project management that it would require. And yes, the power of fiction led to millions of words worth of Harry Potter fanfic from literally thousands of authors. Imagine if we harnessed even a little of that power.

Do you have strong opinions about which of these people you would invite to your birthday party? Which of them you would have an ale with? Which of them you would let look after your child? You do? FICTION

Language Learning

We think there should be lots more didactic novels — novels that try to teach you something concrete, like a skill. And we actually think that James Clavell got it right with Shōgun, that the best subject for a didactic novel is language learning. 

Shōgun is distracted by having many other priorities, but a novel that put language-learning first could be an engine of unimaginable education. Much like Clavell, you would start the story entirely in English, and introduce words in the new language one by one. Eventually you would start introducing basic grammar. The bits in the target language would start out on the level of “see spot run”, but would gradually become as complicated as the sections in English. As you move through the novel, the text would transition slowly from all-English to all-target-language. By the end, you would just be reading a novel in Swedish or Arabic or Cantonese or whatever.

This transition would have to be very slow for this to work, so the novel would have to be really long. But if you do it slowly enough, it won’t feel difficult for the reader at any point.

You might be worried that people won’t be willing to read such a massive story, but we don’t think it’s a problem. People already spend a lot of time on language-learning apps. Language-learning is a big market, and people are plenty happy to invest their time and money. As just one example, Duolingo is now worth more than $6 billion. And Duolingo isn’t even that great — it’s kind of bad. 

And while there’s a stereotype that people don’t like to read, or don’t like long books, the rumors of the death of our attention spans are greatly exaggerated. Shōgun itself is on Wikipedia’s list of the longest novels of all time, at over 300,000 words, and it sold six million copies in the first five years of publication. Jonathan Strange & Mr Norrell by Susanna Clarke, also about 300,000 words, was a smash hit and won a slate of awards. The entire Lord of the Rings series (minus The Hobbit) is about 500,000 words. Infinite Jest is about 550,000 words, all of them dense.

The entire Harry Potter series is more than 1,000,000 words long, and millions of pre-teens have wolfed it down without stopping for breath. If a school story with magic wands could inspire kids to do that, imagine how they would respond to a book that actually teaches them German, or any other language their parents don’t understand. Half the fun of any YA series is all the weird shibboleths you develop that adults can’t pierce. On this note, the web epic Homestuck was arguably even longer, and captured the minds of a generation, for good or for ill.

You really can engage 13-year-olds with 1,000,000+ words of arcane bullshit

Game of Thrones, the first book alone, is about 300,000 words long, and the whole A Song of Ice and Fire series is about 1,700,000 words so far. While most people have not read all the books, you can’t deny their impact. And it’s not like the sales have been lackluster or something; Martin is one of the highest-earning authors in the world.

You could make a pretty good case that Dune, almost 200,000 words long and with five sequels, is already a didactic novel about ecology, or maybe political science, or maybe the intersection of ecology and political science. I’m at the ecology. I’m at the political science. I’m at the intersection of ecology and political science. 

A Case Study

Since we think Clavell has done the best job so far, it’s worth taking a bit of a look at how he does it.

(Minor spoilers for Shōgun from here on.)

The prologue has no Japanese at all, since it’s set on a Dutch ship in immediate danger of going down with all hands. But Chapter 1 is different. Blackthorne wakes up in a strange room. A woman comes in and says something to him in Japanese — “Goshujinsama, gokibun wa ikaga desu ka?” It’s the very first page, and already we get a full sentence in Japanese.

A few pages later, we learn our first word. Blackthorne points at the woman to ask her her name. She says, “Onna”. But this is a misunderstanding — “onna” is just the Japanese word for “woman”. This will come back to bite Blackthorne in the ass, but not for a while.

A few pages later we learn the words “daimyo” (a type of Japanese noble) and “samurai” when Blackthorne talks to one of the local Catholic priests, who challenges him in Portuguese.

Then a samurai appears and says a phrase we don’t understand, “Nanigoto da,” three times. Then we get our second full sentence. The samurai, whose name is Omi, asks Blackthorne, “Onushi ittai doko kara kitanoda? Doko no kuni no monoda?” which the Portuguese priest translates as “Where do you come from and what’s your nationality?” He also explains that the Japanese use the suffix “-san” after a name as an honorific, like we use “Mr.” or “Dr.” before ours, so he should call the samurai Omi-san.

Clavell doesn’t give us the rest of the conversation in Japanese, but at the end Omi asks him, “Wakarimasu ka?” which the priest translates as “Do you understand?” Blackthorne is already itching to learn the language for himself, and asks how to say “yes” in Japanese. The priest tells him to say, “wakarimasu,” which is sort of correct. He also sees Omi behead a man and shout “Ikinasai!” twice. Most of what we hear at this point isn’t translated, but we’re already getting exposed to a lot of Japanese.

From the 1980 miniseries

Blackthorne talks to a few more samurai on his ship, and hears the phrases “Hotté oké!”, “Nan no yoda?”, and “Wakarimasen”, which astute readers might already notice is similar to “Wakarimasu ka?” and “wakarimasu” from before. When he uses signs to ask to go to his cabin, they say, “Ah, so desu! Kinjiru.” Based on how they threaten him when he tries to go inside, he correctly infers that “Kinjiru” means “forbidden”.

After spending a lot of time with his crew, he goes back to the house he woke up in. He hears “konbanwa” from the gardener, and while it’s not defined, context makes it clear that this is a greeting — in fact, it’s Japanese for “good evening”. 

Then he asks to see “Onna” and the joke set up at the start of the chapter comes full circle. He hears “hai” and “ikimasho” and “nanda”, not understanding, and then one of the women tries to get into bed with him, until the village headman, who speaks a little Portuguese, explains that “onna” means “woman”. We also see our first “neh”s.

And that’s all the Japanese in Chapter One. Blackthorne is taught the words onna, daimyo, and samurai, and is taught to use the suffix –san. He is sort of taught the word wakarimasu, and he correctly infers the meaning of kinjiru. He — along with the reader — is also exposed to several words that are not yet defined explicitly, and a few complete phrases, some of which get approximate translations. 

In Chapter 2, and forever onwards, daimyo and samurai are used as normal vocab, since these terms don’t have equivalents in English, and we see the suffix -san where appropriate. We also see one other full sentence in Japanese — “Ano mono wa nani o moshité oru?”, which isn’t translated — but that’s it. 

In Chapter 3, we learn the suffix -sama, meaning “lord”. We also learn that ronin are “landless or masterless peasant-soldiers or samurai.” But this chapter is also short, and we barely see Blackthorne at all, so both of these translations are provided by the narration.

In Chapter 4, we hear the word “isogi”, which is translated as “hurry up!” Then we hear it again. We also see “kinjiru” twice, with only the reminder that it’s “the word from the ship”, but context and the hint help recall the meaning. 

In Chapter 5, Blackthorne starts using Japanese himself, saying “kinjiru” twice to talk to a samurai.

In Chapter 6, the local priest tells him that the Japanese word for “yes” is “hai”. Blackthorne uses the word four times. We see the phrase, “wakarimasu ka” twice, which the priest translates the first time, but not the second time. We encounter the word “okiro” for the first time, translated as “you will get up.” We also learn the word “anjin”, which means “pilot”, when Omi tells Blackthorne that the Japanese can’t pronounce his name and will call him “Mr. Pilot”, or “Anjin-san”.

In Chapter 7, we learn the phrase “konnichi wa”, which they translate as “good day”. Blackthorne then uses the phrase six times to greet people, and we hear it once from someone else. We see the word “Anjin” at least a dozen times — Clavell wants us to get used to it, because it’s Blackthorne’s new name. We see “hai” twice, and “wakarimasu” and “wakarimasu ka” and “isogi” and “kinjiru” once each. 

During this chapter, Blackthorne also meets a Portuguese pilot (Rodrigues), who tells him that “ima” means “now”, and also uses the term “ikimasho”, a term we saw once in Chapter 1, but doesn’t define it. He also uses the term “ichi ban”, which he doesn’t explain, and throws around a bunch of “wakarimasu ka”, “kinjiru”, and “sama”. When he argues with some samurai, they say “gomen nasai”, which is translated as “so sorry”, and “iyé”, which isn’t translated but clearly means “no”. 

In Chapter 8, Blackthorne and the Portuguese pilot Rodrigues use “wakarimasu ka” and “hai” with one another, just as part of normal conversation. Blackthorne hears him use “isogi” again, asks what it means, and Rodrigues tells him it means “hurry up”. Blackthorne uses the word not long after when he takes control of the ship in a storm. We see “wakarimasu” twice and “hai” four times. We see a new term, “arigato goziemashita” (not the common spelling), which isn’t defined but is clearly in the context of someone thanking him. We also see “iyé” again, in a context where it clearly means “no”, confirming its meaning.

In Chapter 9, we see “hai” twice, and “isogi” once. We also see “iyé”, and again Clavell refuses to define it explicitly. But by now, the reader has seen it three times in contexts that all clearly mean “no”, and is probably starting to pick up on that. 

In Chapter 10, we see “konnichi wa”, “isogi”, and “wakarimasu ka” once each, and “hai” five times. None of them are translated, and the chapter doesn’t miss a beat. These are all just normal vocabulary in the novel at this point; the reader is expected to know what they mean.

At this point the novel takes a break from language education to spend a few chapters mostly focusing on plot, so we’ll stop here too. But already, you can see the pattern. 

Clavell mixes it up a lot, but the general formula goes like this (we sketch it in code right after the list):

  1. The first time you encounter a word, it isn’t defined and no one explains what it means, but there are often context clues.
  2. Soon after that, the word is used again and someone either tells you what it means, or Blackthorne guesses. 
  3. The next time you see the word, you get a little reminder either of the definition, or of the last time you saw the word.
  4. After a few more uses with clear context, the word becomes part of the general vocabulary. From then on, you are expected to know what it means!
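
If you like to think in code, the bookkeeping behind this formula is simple enough to sketch as a little state machine. This is our own illustration, with made-up names; Clavell presumably did all of this by feel:

```python
# Each word climbs through four stages as the reader encounters it,
# and its stage decides how much help the text gives (our sketch).
from dataclasses import dataclass

STAGES = ["unexplained", "defined", "reminded", "vocabulary"]

@dataclass
class Word:
    japanese: str
    english: str
    stage: int = 0  # index into STAGES

    def render(self) -> str:
        """How the word appears in the text at its current stage."""
        if self.stage == 0:
            return f"“{self.japanese}”"  # context clues only
        if self.stage == 1:
            return f"“{self.japanese}”, meaning “{self.english}”"
        if self.stage == 2:
            return f"“{self.japanese}” (the word from before)"
        return self.japanese  # plain vocab from here on

    def encounter(self) -> None:
        """Advance one stage per appearance, capped at plain vocabulary."""
        self.stage = min(self.stage + 1, len(STAGES) - 1)

kinjiru = Word("kinjiru", "forbidden")
for _ in STAGES:
    print(kinjiru.render())
    kinjiru.encounter()
```

Run it and you can watch kinjiru go from mystery word to ordinary vocabulary in four appearances, just like in the book.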

This is essentially how you learn words as a child, or how you would learn Japanese if you had to use it as part of your daily life. The first time you hear a word, you have no idea what it means. Eventually someone tells you what it means or it becomes clear from context. The next time you see or hear the word, you might need a reminder. But once you’ve used it a bit, it gets locked in. 

Examples

Let’s look at some examples. The word “hai” means “yes”. You hear it first in Chapter 1, with a little context that suggests what it might mean. We don’t see it again until Chapter 6, when the local priest tells us what it means. It’s then used a couple of times in Chapter 7. In Chapters 8-10, it’s just a normal word, fully integrated into the story, with no further reminders. 

The word “kinjiru” means “forbidden”. Blackthorne hears it first in Chapter 1, and guesses what it means from context. We see it again in Chapter 4 with a simple reminder (just “the word from the ship”), and Blackthorne uses it in Chapter 5, where context makes it clear what it means. From then on, it’s in the vocab.

We first encounter the word “isogi” in Chapter 4, where the narrator translates it for the reader as “Hurry up!” But Blackthorne doesn’t get the benefit of this translation. When it reappears in Chapter 7, he still doesn’t know what it means. It comes back in Chapter 8, Blackthorne asks what it means, and Rodrigues tells him. Later that chapter, Blackthorne is using the word himself. It’s the same principles, just slightly mixed up.

The approach Clavell is using is called spaced repetition, a memory technique that works by introducing new content and then bringing it back after a bit of a delay. This works because of something called the forgetting curve. When you’ve just learned something, it’s strong in your memory, but that trace gets weaker and weaker over time. If you’re asked to remember the thing right away, it’s still fresh in your mind and takes no effort — but if you wait too long, you’ve forgotten entirely. So the thing to do is wait until the memory has decayed just a bit, and then bring it back. This stresses the memory and reinforces it, sort of like how stressing a muscle builds strength.
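To make that concrete, here’s a toy model of the forgetting curve in Python. All the numbers are our own invented assumptions, not measurements: recall decays exponentially, a review fires whenever recall sags to 50%, and each successful review doubles the memory’s stability.

```python
# Toy spaced-repetition schedule (our sketch; every parameter invented).
# Recall probability decays as exp(-t / stability); we review whenever
# recall drops to `threshold`, and each review multiplies stability.
from math import log

def review_schedule(n_reviews: int, stability: float = 1.0,
                    threshold: float = 0.5, boost: float = 2.0):
    """Yield the day on which each review falls under this model."""
    day = 0.0
    for _ in range(n_reviews):
        day += -stability * log(threshold)  # time until recall hits threshold
        yield round(day, 1)
        stability *= boost

print(list(review_schedule(6)))
# [0.7, 2.1, 4.9, 10.4, 21.5, 43.7]: each gap is roughly double the last
```

Each gap is roughly twice the last: dense reminders at first, then longer and longer stretches until the word can stand on its own.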

Clavell is taking advantage of the fact that most people will not chug this 300,000-word novel in one sitting — most people will read it a few chapters at a time. This gives them time to partially forget many of these words between chapters, so that when they return to the book in a day or two and the words come up again, the meanings have to be dredged back out of memory, which reinforces them.

(Stephenson uses the same approach as a storytelling technique. Something called “Van Eck phreaking” is an important plot point near the end of Cryptonomicon, so Stephenson makes sure that it’s explained before it becomes important, and that it comes up a few times before it’s explained.)

This is how you should write your didactic novel too. Start with a character who doesn’t know the language at all, who is in the same position as the reader. Words and concepts are introduced in the background first, without any explanation. After the reader has seen the word a few times, a character comes out and tells the reader what it means, or else they guess what it means, or it’s used in a context that makes the meaning clear. Shortly afterwards, the word is used again, either in a context that helps reinforce the meaning, or with a gentle reminder. 

Use the word a few more times in situations where context helps make the meaning clear. After that, add the word to your “approved vocabulary” list, and use it wherever it’s appropriate in the novel — the reader is now expected to know what it means. If you teach people a couple words each chapter, you can outstrip the average language 101 class in a decent-length novel.

All you need to do is go harder than Clavell, and make language-learning your secondary focus. We say secondary and not primary because your primary focus is to make sure it’s an enjoyable read. The book won’t teach anything if no one gets through it!

Naturally, you can use all the same techniques if you’re writing a didactic novel about calculus or music theory. All the same ideas still apply — language learning just offers an exceptionally clear-cut example. 

A Narrative Addition

Clavell’s technique is similar to the hero’s journey. This is a template for writing and describing stories, where a person starts out in their comfort zone, is forced out by circumstance, confronts trials, gains knowledge, and returns to their comfort zone, but stronger than they were before.

Clavell doesn’t exactly use this technique, but you could easily combine the hero’s journey with his approach.

The hero’s journey can be as epic as a series of fantasy novels, or as unassuming as a man changing a tire in the rain:

Fade in on a meek-looking man driving a car. It’s raining. Boom. Flat tire. He struggles to keep the car from ditching. He pulls it to the side of the road and stops. He’s got fear on his face. He looks out his car window at the pounding rain… It doesn’t matter how small or large the scope of your story is, what matters is the amount of contrast between these worlds. In our story about the man changing his tire in the rain, up until now, he wasn’t changing a tire. He was inside a dry car. Now, he opens his car door and steps into the pouring rain. … Our stranded, rain soaked driver has finished emptying the contents of his trunk on the side of the road. He sees the spare tire and he lets out a very slight, very fast sound of relief. That’s all. This is a story about a man changing a tire. … When you realize that something is important, really important, to the point where it’s more important than YOU, you gain full control over your destiny. … You have become that which makes things happen. You have become a living God. Depending on the scope of your story, a “living God” might be a guy that can finish changing a tire in the rain. 

This is such an engrossing story format because it mirrors the process of self-improvement in the real world, which the reader can enjoy vicariously. You learn something unfamiliar, use it, and master it. But in the didactic novel, we can put the reader in nearly the same situation as the character, and have them go through the journey together.

This approach would work well with genres like adventure novels, police procedurals, sitcoms, detective dramas, or Monster of the Week shows, which lend themselves well to stories with explicit cycles. Anything super-pulpy should fit the bill, anything episodic or serialized. 

The American spy stranded in Russia needs to get home, but to survive for the moment, he needs to learn some Russian. He finds an old run-down garage where two old farts, who speak a little English, let him hide out. Each cycle goes like this: During the intro, Spy encounters some Russian that he doesn’t know, on the radio or in the newspaper or something. This is foreshadowing: phrases that will come up later in the cycle, planted just to embed them in the reader’s subconscious. Then he has a conversation with one of the old guys, who tells him some vocabulary or explains some part of Russian grammar to him.

After this, the spy goes out on a mission or a job or something — get some supplies, meet a contact, follow up on a lead, normal spy shit. During the climax he is in a real pinch, but he remembers the words the old guy taught him that morning, and he manages to fix things. He uses those words a few more times to really embed them in the reader’s mind, and then he goes back to his hideout. The words he learned today go in the vocab box, and the author will use them freely from now on, maybe making sure to give them a guest appearance next episode so they stay in the reader’s memory.

For obvious reasons, novels that want to teach a language will have an easier time if the novel is set in the past, because there were more places you could go where you’d have to learn the language to get by. For similar reasons, setting your story in a time before cell phones and the internet will generally help a didactic novel on any subject, since it lets you isolate your characters from textbooks and dictionaries. Post-apocalyptic, fantasy, and far-future settings would also work.

So if you decide to write a didactic novel (or other didactic fiction), give us a holler.

Predictions for 2050

Erik Hoel makes a list of predictions for 2050.

This may seem like the far-flung future, but as Hoel points out, it’s only 28 years away. Making predictions for 2050 based on what we see today is just like sitting in the early ‘90s and predicting what the world will look like in the 2020s.

Hoel makes his predictions based on a simple insight: change is incremental, and the minor trends of today are the institutional changes of tomorrow. If you want to know what 2050 will look like, think about the nascent trends of the early 2020s and project them into the future:

If you want to predict the future accurately, you should be an incrementalist and accept that human nature doesn’t change along most axes. … To see what I mean more specifically: 2050, that super futuristic year, is only 29 years out, so it is exactly the same as predicting what the world would look like today back in 1992. … what was most impactful from 1992 were technologies or trends already in their nascent phases, and it was simply a matter of choosing what to extrapolate. For instance, cellular phones, personal computers, and the internet all existed back in 1992, although in comparatively inchoate stages of development. … The central social and political ideas of our culture were established in the 1960s and 70s and took a slow half-century to climb from obscure academic monographs to Super Bowl ads. So here are my predictions for 2050. They are all based on current trends.

We think this approach is really smart. In fact, we like it so much that we wanted to take it for a test drive. In this post, we make our own set of predictions for 2050, using Hoel’s method of picking out trends that we suspect will go on to shape the 2020s, 2030s, and 2040s.

Projects are more fun when you do them with friends, so we invited a bunch of other bloggers to make their own predictions for 2050, using the same approach of extrapolating trends that they think are important today. So far we have predictions from:

Here at SMTM, we’re going to add something to Hoel’s original method of extrapolating “technologies or trends already in their nascent phases”: regression to the mean. What we mean by this is, well — the 20th century was very unusual in many different ways.

A lot of things that we take for granted are really, really new — like 401ks (invented in 1978), Traditional (1974) and Roth (1997) IRAs, and modern credit scores (1989). Indexes like the Dow Jones (1896) and the S&P 500 (1957) have longer pedigrees, but index funds that track them only appeared in 1972. In 1940, only 5% of US adults over 25 had a college degree and only 25% had a high school diploma. Even income tax wasn’t a permanent part of the US tax system until 1913 — we had to do a whole amendment to the Constitution to make it happen.

Some of these may be here to stay, but looking back from 2050, a lot of 20th century “institutions” will look like a flash in the pan. The trends that are holding will probably hold, but any 20th century abnormalities that seem to be reversing are likely to go back to the way they were for most of human history. A nascent trend that looks like regression to the historical mean is much more likely to be a trend that will continue on to 2050.

Hoel’s Predictions

We agree with a lot of Hoel’s predictions. A Martian colony, or crewed missions to Mars at least, are looking pretty likely as the price of space travel drops (and he’s not the only one predicting this). We’re also reminded of the recent increasing interest in charter cities and Georgism — Mars would be a great location for your wacky new city and it’s the closest we’re going to get to making all-new land, at least any time soon. 

Hoel is clearly right that we will move away from stores, but this might also look like more business done out of people’s homes (as was done historically) or like more business done in something like a marketplace with semi-permanent stalls (as was done historically).

Genetic engineering of embryos to avoid disease is already being done and it does seem like this will happen more and more. Similarly, anti-aging technology is already here and will just keep getting better and cheaper, especially given that Peter Thiel is involved. But this is sort of hard to square with Hoel’s final prediction, that 2050 will be “the winter of my life”, at the age of only 62. It seems a little pessimistic on Hoel’s part. Didn’t you hear that in 2050, 62 will be the new 25?  

Sometimes we agree with the general picture, but not with the details. Education will indeed be mostly online (again, it already is), but it will look more radical than what Hoel imagines. The real education giant today is not Harvard, or even MITx, but YouTube, and we will see more of THAT in the future. 

Hoel is right that AI will be impactful in day-to-day life, but we think this is true only in the obvious ways. You will still have Siri and Alexa, but you won’t have Data, or even Bender. We might have better image classifiers and even decent chatbots. Strong AI may be a possibility by 2050 (a topic for another time), but by the “extrapolate the future from current trends” technique, in 2050 many classifiers will still have a hard time telling the difference between a dragonfly and a bus. GPT-29 will be able to churn out a movie script as formulaic as that of the average Hollywood scriptwriter (and may well replace them) but it won’t be replacing writing that requires anything as complicated as “themes” or “meaning”. 

Hoel predicts wild changes in family structure — specifically, the decline of the family and the rise of the throuple. We agree family dynamics will change, but again we disagree on the specifics. More on this in a minute.

A few predictions we disagree with in general. Hoel predicts that the online mob will create an endless culture war, and that “the future really is female”. But the current culture war is amusingly soft compared to many cultural conflicts in living memory, and the fact that women get a majority of all degrees means very little when you believe that university degrees are worth less and less all the time!

Finally, we disagree that people and culture will become boring. Thanks for reading a pseudonymous mad science blog called SLIME MOLD TIME MOLD.

SMTM Predictions

Robotics

This first one is less an original prediction than an elaboration on Hoel, who says: 

Buzzing drones of all shapes and sizes will be common in the sky (last year Amazon won FAA approval for its delivery drone service, opening the door for this). Small robots will be everywhere, roving the streets in urban areas, mostly doing deliveries.

We agree. Robots will stay dumb but you will see a lot of them, possibly in delivery. It’s hard to look at work from Boston Dynamics and not expect that in 30 years we’ll have lots of quadrupedal robots trotting around our streets, carrying goods and generally acting as porters, footmen, and stevedores. 

If you get the price point low enough, small robots might even replace backpacks and suitcases. Boston Dynamics’ robot Spot currently costs $74,500, but 30 years of R&D can do a lot. Let’s take computers — in the early 90s, 1 GB of storage cost about $10,000. But these days you can get a 2 TB drive for about $50, which puts 1 GB at only a few cents. If the same thing happened to Spot, it would cost less than a dollar. We don’t expect anything this drastic, but similar forces could turn quadrupedal robots into household goods. Our bet is on robotic palanquins.
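For the record, here’s the arithmetic on that analogy (a playful extrapolation, obviously, not a forecast):

```python
# How far did storage prices fall, and what would the same decline
# do to Spot's price tag? (A playful extrapolation, not a forecast.)
cost_per_gb_early_90s = 10_000   # dollars per GB in the early 90s
cost_per_gb_today = 50 / 2_000   # $50 for a 2 TB drive
decline = cost_per_gb_early_90s / cost_per_gb_today
spot_price = 74_500              # Spot's current list price in dollars
print(f"{decline:,.0f}x cheaper")      # 400,000x cheaper
print(f"${spot_price / decline:.2f}")  # $0.19 for a Spot
```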

The Witch of the Waste: robotics thought leader

It also seems very likely that with 30 more years of R&D, we’ll have ironed out all the last problems with self-driving cars, so expect that kind of robot as well.

More Infectious Disease

For most of human history, infectious disease was a fact of life. As in so many things, the 20th century was an aberration. We developed antibiotics, improved hygiene, even eliminated some diseases altogether. But this pleasant moment in the sun is over. Someone writing in December 2019 might be forgiven for thinking that with our medical knowledge and scientific might, we could defeat any disease that might rise up. But evidently not.

This XKCD from 2015 aged kind of poorly. explainxkcd.com even says, “at the time of writing it was not readily apparent that the old dog still has some teeth”

This means more pandemics. Many will become endemic, as will probably happen with COVID. Some existing diseases will become resistant to our best antibiotics. If we’re really unlucky, we will see the return of smallpox or some horrible mystery plague released by the thawing permafrost. (Hoel is also concerned about this.) 

We still have germ theory, so we won’t be sent back to the state of things in 1854. But the future will look more like the past, and we’ll have to start paying attention to disease in the way our ancestors did. As historian Ada Palmer describes, “I have never read a full set of Renaissance letters which didn’t mention plague outbreaks and plague deaths, and Renaissance letters from mothers to their traveling sons regularly include, along with advice on etiquette and eating enough fennel, a list of which towns to avoid this season because there’s plague there.” Embrace tradition with this delicious recipe for Fenkel in soppes

Citizen Research

These days, big universities and medical centers and stuff are responsible for most research. But this is a big deviation from the historical norm. In the past, random haberdashers and architects and patent clerks and high school teachers, or just rich people with too much time on their hands, were the ones doing most of the cutting-edge research. 

There are already many signs of regression to the mean on this. Anonymous 4channers are publishing proofs to longstanding superpermutation problems on anime boards. The blog Astral Codex Ten (and predecessor blog Slate Star Codex, by the same author) publishes major reviews (“much more than you wanted to know”) on a wide variety of topics — disease seasonality, links between autism and intelligence, melatonin, you name it. Sometimes he even does empirical work — case studies on the effect of CO2 on cognition, large nootropics usage surveys, studies of SSRI usage, etc.

Pseudonymous internet besserwisser Gwern writes long articles on everything from Gaussian expected maximums to generating anime faces with neural networks. Wikipedia, the largest and most-read reference work in history, is written entirely by volunteers. And of course there’s us, Slime Mold Time Mold, creating a book-length original work where we argue for a new theory of the obesity epidemic. 

This is only going to speed up. The 2020s will see a lot more research from people who aren’t in the academy, and by 2050, most of the best scholarship will be done by laypeople.

Elective Chemistry

At some point in the near future, the trends of plastic surgery, nootropics, psychedelic legalization, trans hormone therapy, and bodybuilding will collide, with spectacular results. 

Doing things to reshape your body and mind is an idea as old as dirt, but with recent advances in technology, and breakdowns in cultural taboos, the practice of what could be called “elective chemistry” is going to take off, probably in the next 10 or 20 years. 

Why let nature be the only one who has any say over the chemicals affecting your mind and body? It’s already common to use caffeine, alcohol, and tobacco to reshape your mind. If you’re willing to go out of your way, you can get a psychiatrist to prescribe any number of mind-altering chemicals, and many people today are on lexapro or modafinil or adderall or wellbutrin full-time. And while this is easy enough to do legally, it’s even easier outside the law — many people use psychedelics, steroids, or hormone therapies illegally, to change their minds or bodies as they see fit.

This won’t just become more acceptable for people on the margins of society; it will become mainstream. Cis people are already the largest consumers of hormone therapy and other medical procedures normally associated with trans healthcare (largely because of base rates, but even so). Cis men sometimes go on androgen replacement therapy as they age, and cis women often go on hormone replacement therapy after menopause, which sometimes includes testosterone. And it’s equally easy to use them as mind-altering substances, since they have psychological effects as well as physical ones.

Working out, getting plastic surgery, and taking steroids or hormones are all just forms of body modification. We’ve already come to accept piercings and tattoos, to the point where they’re practically boring. In the near future, most forms of body modification will be unremarkable, in the literal sense that you cannot be bothered to remark on them.  

(This may be extended even further by the development of better prosthetics, like the extra thumb or connecting your brain directly to social media — wait that last one seems like a bad idea.)

Europe will become less important, regional politics will become more important, and the world will generally de-globalize

Europe was a technological and cultural backwater for most of history. Then, in the 16th century, Europe began a period of explosive growth and development, sometimes called the Great Divergence. There’s a lot of interesting debate as to why this happened, but it definitely did happen. 

It was also definitely a historical anomaly, and there are already signs of things going back to the way they were. There was a crunch in favor of Europa and her direct offshoots up to the middle of the 20th century, but since 1950 things have been turning around:

The fastest-growing economies in the world are all countries like Bangladesh, Ethiopia, Vietnam, Turkey, and Iran. Brazil is already the 13th-largest economy in the world, Indonesia the 16th, and Nigeria the 27th — all ahead of countries like Ireland (29th), Norway (31st), Denmark (37th), and Portugal (49th). It’s hard to predict who the big winners will be, but it’s clear that Europe will become less and less important, as countries in the rest of the world become major powers.

As wealth and power gets more distributed, supply chains will get shorter and less global. Measures of globalization used to increase year after year, but they sputtered in the financial crash of 2008 and never really recovered. COVID has provided another shock, a disruption that is far from over. There isn’t really a trend away from globalization yet, but the trend in favor of globalization has definitely stalled. 

There may also be regression to the mean in protectionism. Historically, many states have supported themselves largely through tariffs (see e.g. the history of tariffs in the US), and protectionism may be good for growing economies. If globalization really has stalled for the long term, and certainly if it starts to reverse, we may see more and more tariffs, even a shift in how governments fund themselves. Russia and India have already begun taking steps in that direction, and other countries may follow.

Non-nuclear families

Historically, most people lived in large extended families. The nuclear family, at least as we know it today, is largely an artifact of the unusual circumstances of the 20th century. As income inequality and the cost of buying a home increase, more people will live in large groups — be that group houses, “adult dorms”, or multigenerational homes. COVID has accelerated this trend. More young adults (18 to 29) are living at home now than they were at any point since 1900. The future doesn’t look like Leave it to Beaver, or even The Simpsons.

Part of this will be transitioning back to a system where familial wealth is more important than personal wealth. Historically if your family disowned you, you were screwed. This is why a mainstay of 19th century literature is killing your brother for an inheritance.

And as much as the “kill your brother for the inheritance” thing was a pattern of the upper classes, familial wealth was more important than personal wealth even for peasants (though for peasants, it was sort of more communal wealth than familial).

This is why we agree with Hoel’s prediction of major changes in family structure. We agree that “normal families” are on their way out. But we disagree on nearly all the specifics. We don’t expect to see lots of single-parent homes — we expect more multi-generational homes, group homes, or other arrangements, with lots of adults co-raising children. See e.g. Kelsey Piper’s experience, her main conclusions being “I have no idea how people with two parent households manage” and “I wish we had even more breastfeeding parents”. Put that on a t-shirt: Even More Breastfeeding Parents by 2050.

And instead of seeing a rise in throuples, we expect to see a return of that very old-fashioned arrangement, the Harem — where a person of means has multiple wives, one wife and multiple concubines, etc. etc. 

Wage labor becomes less common 

Tying yourself to a major employer is still the norm today, but this is changing. Some people will be paid on retainer (i.e. a salary), and some jobs where you really are being paid for your time (e.g. security guard) will still be hourly, but more and more people will be paid to complete specific tasks or deliver a particular result, with no questions asked as to how fast they did it.  

We think the gig economy is coming for the rest of the marketplace, but instead of everything being chopped up into little tasks and ruled by corporations (à la Uber), we expect it to look like more contract workers and fewer full-time employees. More people will be self-employed, or will form small companies to deliver goods or services on demand. 

We expect this is (mostly) a good thing. People benefit from being their own boss and being able to do the work however they want, as long as they get it done. Being paid to stand around and look busy isn’t good for anyone. 

To a historical person, wage labor would be one of the strangest things about the modern world, and the idea of a steady job with benefits would be even stranger. Most people were farmers and almost never had any reason to handle money. Even if you were a potter or a blacksmith, you were paid for your product, the actual bowl or knife you were selling, not for your labor or the hours you worked.

Antibodies to the Outrage Economy

Once upon a time, clickbait was a major annoyance, but it was mostly a problem because people fell for it. The term was invented in 2006, and clickbait was the scourge of the internet for a few years, but by 2014 the cultural immune response was in full swing. The Onion launched ClickHole that year, Facebook started taking steps to squash clickbait, blah blah blah.

Now, no one reads clickbait because we’ve learned better. People are learning again. Hoel is worried that “Social media will ensure an endless culture war and internal social upheaval.” But we’re not worried, because soon we will develop cultural antibodies to the outrage economy, just like we developed cultural antibodies to clickbait, or to evolution vs. creationism debates, or to whatever was blowing up the internet in the 1990s (arguments about Microsoft?).

In fact, we’re already getting there. There was a time when we used to click on outrageous political stories. Now we think, “They’re rifting us”, and move on without clicking. No one has written the definitive piece on it yet, but “don’t read the news” is a meme that’s gaining steam. We hear it from our friends all the time. People are waking up to the fact that the news will do almost anything to raise your blood pressure, and that freaking out about “the issues of the day” does no one any good.

There will always be some new brainworm that we have to develop cultural antibodies to. And it might be fun to speculate about which stupid argument will threaten to tear us apart next. But the outrage economy is on its way out, and the divisions of 2050 will look very, very different from the divisions we see today.

Identity and Anonymity Online

In the early days of the internet, everyone was anonymous — as the old saying goes, “on the Internet, nobody knows you’re a dog.”

But today the assumption is that everyone uses their real name. This is Mark Zuckerberg’s fault, for pushing real names on Facebook. “You have one identity,” he says in David Kirkpatrick’s 2010 book, The Facebook Effect. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly. Having two identities for yourself is an example of a lack of integrity.”

But this will be even more of a flash in the pan than fighting about politics online. Internet anonymity is already coming back into style (hello) and this trend will continue into the future. Most people will have a mix of public and private accounts, pseudonyms, alts, and pen names. As with many of our other “predictions”, this is pretty much true already — what will change is that there will eventually be widespread acknowledgement and acceptance.

This is also a regression to the historical mean. Public anonymity and pseudonymity have a long and esteemed history — just ask Voltaire, George Sand, Mark Twain, Lewis Carroll, George Orwell, or Dr. Seuss.

During the American Revolution, practically everyone was using a pseudonym. Many of these guys were already famous public figures, and ALSO writing pseudonymous letters. They were having it both ways — they had alts! 

Alexander Hamilton, James Madison, and John Jay wrote The Federalist Papers under the name “Publius”. But Ben Franklin was the real master of this — his pseudonyms included not only Richard Saunders of Poor Richard’s Almanack, but also “Silence Dogood”, “Caelia Shortface”, “Martha Careful”, “Anthony Afterwit”, “Miss Alice Addertongue”, “Polly Baker”, and “Benevolus”. We shudder to think what he would have done with even a dial-up connection.

Advances in crypto, VR, AR, and social networking will splinter the web, not unite it. More virtual locations mean more places for different identities to thrive — just like how your family group text is different from the Discord channel you have with your friends, is different from your reddit comments, is different from your LinkedIn profile, is different from the messages you send on Tinder. 

Loss of the distinction between “Lowbrow” and “Highbrow”

In 2018, Kendrick Lamar’s DAMN. became the first rap album to win a Pulitzer. Before this, the prize had only ever gone to classical or jazz. At the Tokyo 2020 Olympics (actually held in 2021), skateboarding made its debut, and most of the medals went to teenagers. The game Hades from Supergiant Games recently won a Hugo Award, in a special one-time Best Video Game category. It’s the first video game to do so, but it’s unlikely to be the last. 

In fact, the Hugo Awards themselves may be a good example of this trend. Back in 1953 when the Hugos started, fantasy and science fiction (and everything nerdy) were fringe stuff, totally marginal. Today, comic book superheroes dominate the box office and Targaryens are household names. 

This trend shows every sign of continuing. Things that are fringe, lowbrow, and popular will keep getting more and more official recognition, until we eventually lose the distinction between lowbrow and highbrow art altogether. Olympic fencing is already on the same plane as Olympic surfing, and soon there will be no social difference between comic games like Untitled Goose Game by House House and comic operas like Le nozze di Figaro by Wolfgang Amadeus Mozart. If that seems impossibly flippant, remember that Mozart once composed a canon in B-flat major called “Leck mich im Arsch” (“Lick me in the ass”).

High culture

This is part of why we’re not concerned that people and culture will become boring — cultural forces are constantly driving bizarre, fringe works toward the mainstream, and this trend shows no signs of stopping. Among other things, this will be really good for social mobility: if highbrow taste stops being a class marker, that’s one less gate keeping people out of elite circles.

Minorities as minorities

Saying that America will be a majority-minority country by 2050 is the wrong way of thinking about it. By 2050, we won’t think about minorities in the same way at all; we’ll give up the minority-nonminority framing entirely, in favor of something more specific.

The categories that are important now won’t be important in 30 years. Concepts that we take for granted — the idea of being Italian, or German, or even just European — didn’t exist until pretty recently. We expect a return to a sort of negative multiculturalism: everyone fighting with everyone else, like how all the cities on the Italian peninsula used to go at it without much sense of a shared Italian identity.

Legacy media struggles to keep up, but race and gender already compete with minority identities like subculture and political leaning. Your identity comes from being a goth or a furry, from wearing hiking clothes to the office, from wearing a $1,200 Canada Goose jacket on the New York subway in October, from your favorite sports team, from the websites you frequent, from which author or podcaster you won’t shut up about, from which YouTubers you reference, from being a progressive or a libertarian or an ACAB commie. In many contexts, your status as one of these minorities already matters more than your race, gender, or even sexuality — and online, your meatspace traits barely matter at all. 

The True Uses of the Internet Are Not Yet Known

Johannes Gutenberg invented the printing press in 1440, but Martin Luther didn’t publish his 95 Theses until 1517. If it takes a new technology 77 years to come into its own, we shouldn’t be surprised.

There are a number of dates we could choose for the invention of the internet — the first ARPANET connections in 1969, the TCP/IP standard in 1982, or the first web pages in 1991. Maybe 1993 is the right choice, being the year of Mosaic (the first popular web browser), the first draft HTML specification, and Eternal September, though formal standards for basics like URLs and HTTP didn’t come until a few years later! 

If we do go with 1993, then 77 years later would be the nice round 2070. Maybe the modern world moves a little bit faster than the Protestant Reformation, but anyone who thinks the internet hasn’t lived up to expectations in terms of changing the world should wait a minute. The cutting-edge developments of the early 2020s will come to seem like Jacobus de Voragine’s Legenda aurea — which you’ve probably never heard of; that’s the point. We haven’t seen the internet’s real face yet.
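
If you want to fiddle with the analogy yourself, here’s a toy sketch of the arithmetic (in Python, purely for illustration — the 77-year lag and the candidate start dates are just the ones named above, not anything more rigorous):

    # Toy sketch of the back-of-the-envelope arithmetic above.
    # The "maturation lag" is just Gutenberg-to-Luther; the candidate
    # birth years for the internet are the ones named in the text.
    PRINTING_PRESS = 1440      # Gutenberg
    NINETY_FIVE_THESES = 1517  # Luther
    lag = NINETY_FIVE_THESES - PRINTING_PRESS  # 77 years

    candidates = {
        "ARPANET": 1969,
        "TCP/IP": 1982,
        "first web pages": 1991,
        "Eternal September": 1993,
    }
    for label, year in candidates.items():
        print(f"{label} ({year}) + {lag} years -> {year + lag}")
    # The last line prints: Eternal September (1993) + 77 years -> 2070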