Constitution Review: John Brown’s Provisional Constitution

John Brown’s body lies a-mouldering in the grave, but his provisional constitution is available online.

John Brown is best known for fighting to end American slavery. Born in 1800 and raised around abolitionists, Brown saw ending slavery as a religious conviction, and came to believe that he was “an instrument of God” put on earth for this very purpose. From the 1820s on, Brown was seriously involved in the Underground Railroad, but in the 1840s he became frustrated by its lack of progress and formed his own, more militant version, the Subterranean Pass Way. By 1856, Brown and his sons were out in Kansas, killing pro-slavery border ruffians as part of Bleeding Kansas.

This all culminated in a daring raid on the federal armory at Harpers Ferry, Virginia in 1859. Brown’s plan was to take the armory and use the captured weapons to arm former slaves, carrying out future raids deeper and deeper into the South, freeing and arming more slaves every time. The plan originally called for 4,500 men to lead the attack, but on the day of the raid, Brown found himself with only 21. He went ahead with the plan anyway. 

The raid went well at first, but eventually the US Marines showed up (under the command of Robert E. Lee, of all people!) and took back the armory. John Brown was captured, tried, and hanged. He became a martyr to the abolitionist cause, and in the Civil War a few years later, Union soldiers marched to the new song John Brown’s Body, which eventually mutated into such forms as the Battle Hymn of the Republic — or if you are a schoolchild, The Burning of the School.

But before his untimely end, he put together a provisional constitution. 

It’s not really clear what the provisional constitution was for. Even at the time, people weren’t sure what to make of it. Brown’s lawyer introduced it at his trial as evidence that Brown must be insane, calling it “ridiculous nonsense–a wild, chimerical production” that “could only be produced by men of unsound minds.” Naturally, Brown disagreed.

Some people have suggested that it was intended to be the constitution of a new anti-slavery state in the Appalachian Mountains, roughly where West Virginia ended up. But the provisional constitution itself makes it pretty clear that it is intended for, or at least open to admitting, all citizens of the United States.

It might be a constitution for Brown’s new, more hardcore Underground Railroad, his Subterranean Pass Way. By some accounts it was written while Brown was a guest of Frederick Douglass in Rochester, New York (the two men had been uneasy friends for more than a decade). In May 1858 he met with Railroad leaders, including Harriet Tubman, in Chatham, Ontario, and it was there that he held a constitutional convention. But the provisional constitution describes the rules for a government, not a secret society. 

Most likely, the provisional constitution was meant as a stopgap solution to a major point of contention among abolitionists. Brown and other abolitionists were fervent, one might even say crazed patriots, and they loved America. But hating slavery and loving America had a problem, and that problem was the US Constitution. In the eyes of many abolitionists, at least, the US Constitution sanctioned slavery, and that was unacceptable.

Abolitionist William Lloyd Garrison took the most extreme position — he called the US Constitution “the most bloody and heaven-daring arrangement ever made by men for the continuance and protection of a system of the most atrocious villainy ever exhibited on earth” and promoted a philosophy sometimes called “no-governmentism”, which is about what it sounds like. This led to a schism in the abolitionist movement, between people who accepted the US Constitution mostly as it was, and people who thought it was a covenant with death. 

This may seem a little hysterical to modern ears, but it makes sense given what was going on at the time. In 1857, the Supreme Court made a decision in the case of Dred Scott v. Sandford, ruling quite explicitly that, because of the way the US Constitution was constructed, the descendants of slaves could not be US citizens. “A free negro of the African race,” reads the transcript, “whose ancestors were brought to this country and sold as slaves, is not a ‘citizen’ within the meaning of the Constitution of the United States. … The change in public opinion and feeling in relation to the African race, which has taken place since the adoption of the Constitution, cannot change its construction and meaning, and it must be construed and administered now according to its true meaning and intention when it was formed and adopted.”

The majority opinion in the case said:  

The question is simply this: Can a negro, whose ancestors were imported into this country, and sold as slaves, become a member of the political community formed and brought into existence by the Constitution of the United States, and as such become entitled to all of the rights, and privileges, and immunities, guarantied [sic] by that instrument to the citizen? … they are not included, and were not intended to be included, under the word ‘citizens’ in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States.

The two dissenting justices made strong cases that this argument was ahistorical and not, in fact, in line with constitutional law. But coming on the heels of this decision, it’s easy to see why many abolitionists couldn’t get behind the US Constitution. 

So one possibility is that Brown drew up his provisional constitution in an effort to bypass this schism, or because Garrison had convinced him that the US Constitution had to go. Garrison was firmly anti-violence, so the two men did not exactly see eye to eye, though Garrison sort of came around to Brown’s position in the end. And Thoreau wrote of Brown that “I should say that he was an old-fashioned man in his respect for the Constitution, and his faith in the permanence of this Union.” The hope might have been that even if the abolitionists could not all agree on the US Constitution, they could agree on a provisional one in the meantime, and put off the decision of whether or not to replace the US Constitution until after slavery was defeated.

(We think Thomas Jefferson would approve; he wanted the Constitution — and indeed, all laws — to automatically expire every 19 years.)

In any case, Brown’s provisional constitution gives us a glimpse into his thinking and what kind of political philosopher he was, so it’s worth taking a look. (Brown himself would like this — “I wish you would give that paper close attention,” he said of his provisional constitution during the questioning after his capture.)

Tip My Hat to the New Constitution

The provisional constitution is made up of a short preamble and 48 articles. The preamble starts by condemning slavery and ends in the second paragraph by declaring, “we, citizens of the United States, and the oppressed people who, by a recent decision of the Supreme Court, are declared to have no rights which the white man is bound to respect, together with all other people degraded by the laws thereof, do, for the time being, ordain and establish for ourselves the following Provisional Constitution and Ordinances, the better to protect our persons, property, lives, and liberties, and to govern our actions.”

The first few articles concern the design for a system of government, with the expected executive, judicial, and legislative branches. Compared to the US federal government, though, it seems quite small. Only one chamber of Congress, and that composed only of “not less than five nor more than ten members.” A Supreme Court of only five justices, chosen by direct election to three-year terms, just like the President and Vice-President. 

Articles 13 to 15 provide brief and explicit instructions for how to try and impeach any member of government, including Supreme Court justices. Perhaps Brown was thinking of the Dred Scott case.

It’s easy to see how the provisional constitution could be construed as treasonous, since it does provide for an entirely new form of government. But John Brown eventually gets around to addressing this explicitly in the third-to-last article:

ARTICLE XLVI.

These articles not for the overthrow of government.

The foregoing articles shall not be construed so as in any way to encourage the overthrow of any State government, or of the general government of the United States, and look to no dissolution of the Union, but simply to amendment and repeal. And our flag shall be the same that our fathers fought under in the Revolution.

On the other hand, the provisional constitution does mention “the limits secured by this organization”, suggesting that the organization would be taking territory from someone. And the provisional constitution does seem written with warfare against slavery particularly in mind. Several articles are devoted to the organization, rules, and duties of the military, including what to do with prisoners and with “all money, plate, watches, or jewelry captured by honorable warfare, found, taken, or confiscated, belonging to the enemy.” There are also these articles:

ARTICLE XXXIII.

Voluntaries.

All persons who may come forward, and shall voluntarily deliver up their slaves, and have their names registered on the books of the organization, shall, so long as they continue at peace, be entitled to the fullest protection of person and property, though not connected with this organization, and shall be treated as friends and not merely as persons neutral.

ARTICLE XXXIV.

Neutrals.

The persons and property of all non-slaveholders, who shall remain absolutely neutral, shall be respected so far as the circumstances can allow of it, but they shall not be entitled to any active protection.

At times, Brown gets a little bogged down in the weeds, especially on religious issues. He gets into it enough to devote several whole articles to forbidding behaviors which, we imagine, he must personally have had strong feelings about, the sorts of things not normally included in a constitution: 

ARTICLE XII.

Special duties.

It shall be the duty of Congress to provide for the instant removal of any civil officer or policeman, who becomes habitually intoxicated, or who is addicted to other immoral conduct, or to any neglect or unfaithfulness in the discharge of his official duties.

ARTICLE XXXV.

No needless waste.

The needless waste or destruction of any useful property or article by fire, throwing open of fences, fields, buildings, or needless killing of animals, or injury of either, shall not be tolerated at any time or place, but shall be promptly and properly punished.

ARTICLE XL.

Irregularities.

Profane swearing, filthy conversation, indecent behavior, or indecent exposure of the person, or intoxication or quarreling, shall not be allowed or tolerated, neither unlawful intercourse of the sexes.

ARTICLE XLII.

The marriage relation, schools, the Sabbath.

The marriage relation shall be at all times respected, and families kept together, as far as possible; and broken families encouraged to reunite, and intelligence offices established for that purpose. Schools and churches established, as soon as may be, for the purpose of religious and other instructions; for the first day of the week, regarded as a day of rest, and appropriated to moral and religious instruction and improvement, relief of the suffering, instruction of the young and ignorant, and the encouragement of personal cleanliness; nor shall any persons be required on that day to perform ordinary manual labor, unless in extremely urgent cases.

That said, sometimes getting into the weeds on specific issues is all right. Article 41, on “crimes”, actually lists only one crime, but it’s a strong choice. The entire text of the article is:

Persons convicted of the forcible violation of any female prisoner shall be put to death.

Brown’s reputation as a “.44 caliber abolitionist” seems pretty well-deserved in the light of the last few articles, where he encourages open carry for everyone, men and women alike:

ARTICLE XLIII.

Carry arms openly.

All persons known to be of good character and of sound mind and suitable age, who are connected with this organization, whether male or female, shall be encouraged to carry arms openly.

It’s especially interesting, though, that he stresses the open part of open carry. Concealed weapons were to be the exclusive domain of policemen, officers of the army, and… mail carriers:

ARTICLE XLIV.

No person to carry concealed weapons.

No person within the limits of the conquered territory, except regularly appointed policemen, express officers of the army, mail carriers, or other fully accredited messengers of the Congress, President, Vice President, members of the Supreme Court, or commissioned officers of the army-and those only under peculiar circumstances-shall be allowed at any time to carry concealed weapons; and any person not specially authorized so to do, who shall be found so doing, shall be deemed a suspicious person, and may at once be arrested by any officer, soldier, or citizen, without the formality of a complaint or warrant, and may at once be subjected to thorough search, and shall have his or her case thoroughly investigated, and be dealt with as circumstances on proof shall require.

On its own merits, the provisional constitution is not so radical, not even all that nutty. It provides for a normal state, with the normal branches of government, and most of the articles have to do with basic stuff like how people get elected and who is allowed to sign what treaties. The religious bent is a tad unusual, the violent abolition angle is pretty exciting/terrifying, but paragraph to paragraph it reads like any other constitution.

But in another way, the provisional constitution is affirming one of the deepest and most radical privileges inherent to being an American. Being an American entitles you to be an amateur political scientist, to speculate on strange new forms of government, and if needs be, to write your own damn constitution in defense of new forms of freedom, just like they did in 1776. 

Book Review: A Square Meal – Part II: Politics

The book is still A Square Meal: A Culinary History of the Great Depression, recommended to us by reader Phil Wagner, and Part I of the review is here.

Condescension, Means Testing, and Infinite Busybodies

The other big thing we learned from A Square Meal, besides the fact that food in the 1920s was bonkers, is that the Great Depression brought out the absolute worst in the American political machine: condescension to the unfortunate, constant means testing to make sure the needy are really as needy as they say, and infinite busybodies of every stripe.

Some of this was just standard government condescension. In WWI, the United States Food Administration tried to convince Americans that dried peas were a fine substitute for beefsteak, and that “Wheatless, Eggless, Butterless, Milkless, Sugarless Cake” was a food substance at all. Sure.

When the Great Depression hit, things got steadily worse. Homemakers were encouraged to turn anything and everything into casseroles, which had the benefit of making their contents indistinguishable under a layer of white sauce and bread crumbs. The housewife could serve unappetizing food or leftovers over and over again without her family catching on, or at least that was the idea. Among other things, this gives us an unusual early example of the overuse of the term “epic”:

Whatever the answer, [the casserole] is bound to be an epic if mother is up to the minute on the art of turning homely foods into culinary triumphs

Some people in government seemed confused — they appeared to think that the food issues facing the nation were a matter of presentation, that food just didn’t look appetizing enough. It’s hard to interpret some of these suggestions in any other light, like the idea that good advice for Americans in the Depression could be, “to impart a touch of elegance to a bowl of split pea soup, why not float a thin slice of lemon on top and sprinkle it with some bright red paprika and finely chopped parsley.” Or the suggestion that beans could be fried into croquettes to make them more appealing. Authorities trotted out “reassuring” announcements like:

The fact that a really good cook can serve better meals on a small budget than a poor cook can serve on the fat of the land suggests that the fault may be not in the food material itself but in the manner in which the food is prepared and served, and therein lays a tale!

But what really grinds our gears isn’t the condescension, it’s the means testing. The second half of the book is mostly depressing stories about the government refusing to provide basic aid to starving families, or screwing up one relief program after another with means testing, or other things along these lines.

American society at the time was firmly anti-charity. People thought that everyone on the breadline could be divided into the “deserving poor” and the “undeserving poor”, and it was their job to search out which was which. They seemed to believe that even one whiff of assistance would immediately turn any hardworking, self-respecting American into a total layabout. 

People in the 1930s are always saying things like, “if a man keeps beating a path to the welfare office to get a grocery order he will gradually learn the way and it will be pretty hard to get him off that path.” They really seemed to believe in the corrupting power of government support, to the point where they were often afraid to even use the word “charity”.

Before the Great Depression, there was very little history of big-picture welfare. The support that society did provide was administered locally, and not very well administered at all:

The poor laws combined guarded concern for needy Americans with suspicions that they were complicit in their own misfortune. Under the poor laws, the chronically jobless were removed from society and dispatched to county poorhouses, catchall institutions that were also home to the old, infirm, and mentally ill. Those who could ordinarily shift for themselves but were temporarily jobless applied to public officials, men with no special welfare training, for what was known as outdoor or home relief, assistance generally given in the form of food and coal. To discourage idlers, the welfare experience was made as unpleasant as possible. Before applying for help, the poor were made to wait until utterly penniless, and then declare it publicly. When granting relief, officers followed the old rule of thumb that families living “on the town” must never reach the comfort level of the poorest independent family. The weekly food allowance was a meager four dollars a week—and less in some areas—regardless of how many people it was supposed to feed. Finally, it was customary to give food and coal on alternate weeks, providing minimal nourishment and warmth, but never both at the same time.

Journalists called people who received government assistance “tax eaters”. When support from the town was forthcoming, it looked like this: 

…the board made all relief applicants fill out a detailed form with questions like: Do you own a radio? Do you own a car? Do you spend any money for movies and entertainment? Did you plant a garden? How many bushels of potatoes do you have? The board gave aid in the form of scrip, which now could only be used to purchase the “necessities of life” at local stores: “flour, potatoes, navy beans, corn meal, oatmeal, coffee, tea, sugar, rice, yeast cakes, baking soda, pepper, matches, butter, lard, canned milk, laundry soap, prunes, syrup, tomatoes, canned peas, salmon, salt, vinegar, eggs, kerosene.”

Manhattan breadlines are emblematic of the Great Depression, so we were sort of surprised at just how much people at the time, even very mainstream sources, hated them. You’d think that giving out bread to the starving would be one of the more defensible forms of charity, but people loathed it. And none more than the city’s social workers, described as the breadlines’ “harshest critics”:

Welfare professionals with a long-standing aversion to food charity, social workers condemned the breadlines as relief of the most haphazard and temporary variety, not much different from standing on a street corner and handing out nickels. The people who ran the breadlines, moreover, made no attempt to learn the first thing about the men they were trying to help, or to offer any form of “service” or counseling. The cause of more harm than good, the breadlines were humiliating and demoralizing and encouraged dependence, depriving able-bodied men of the impulse to fend for themselves. Social workers were adamant. Breadlines were the work of fumbling amateurs and “should be abolished entirely; if necessary by legal enactment.”

As the Depression dragged on and things became worse, more relief did come. But when it came, the relief was invasive. Housewives were told not only what to cook, but where to shop. Some women had to venture far outside their own neighborhoods to use food tickets. Social workers dropped in on schools to criticize the work of teachers, in particular the tendency of teachers to be overly “sentimental” or “solicitous”. They feared that schoolteachers lacked the “well-honed detective skills” required to distinguish between whining and genuine tales of woe. 

To ascertain that applicants were truly destitute, officials subjected them to a round of interviews. Candor was not assumed. Rather, all claims were verified through interviews with relatives and former employers, which was not only embarrassing but could hurt a man’s chances for employment in the future. More demeaning, however, were the home visits by TERA investigators to make sure the family’s situation was sufficiently desperate. Investigators came once a month, unannounced, anxious to catch welfare abusers. Any sign that the family’s finances had improved—a suspiciously new-looking dress or fresh set of window curtains—was grounds for cross-examination. If the man of the house was not at home—a suggestion that he might be out earning money—investigators asked for his whereabouts, collecting names and addresses for later verification. Finally, though instructed otherwise, investigators were known to reprimand women for becoming pregnant while on relief, the ultimate intrusion.

Families lived in dread of these monthly visits, terrified they would be cut off if it was discovered that one of the kids had a paper route or some similar infraction.

In some areas, including New York City, “pantry snoopers” accompanied women to the market to confirm that all parties (both shopper and shopkeeper) were complying with TERA’s marketing guidelines. More prying took place in the kitchen itself, where investigators lifted pot covers and peered into iceboxes on the lookout for dietary violations.

Relief was often inadequate. Public officials were sometimes able to set relief levels at whatever amount they saw fit, regardless of state or federal guidance. Some of them assumed that poor families would be able to provide their own farm goods, but often this was not the case. In some places officials reasoned that poor workers would be easier to push around, and kept food allowances low to keep them in line. There was also just straight-up racism: 

Six Eyetalians will live like kings for two weeks if you send in twenty pounds of spaghetti, six cans of tomato paste and a dozen loaves of three-foot-long bread. But give them a food order like this [$13.50, state minimum for six persons for half a month], and they will still live like kings and put five bucks in the bank. Now you ought to give a colored boy more. He likes his pork chops and half a fried chicken. Needs them, too, to keep up his strength. Let him have a chicken now and then and maybe he’ll go out and find himself a job. But a good meal of meat would kill an Eyetalian on account of he ain’t used to it.

Families on relief who asked for seasonings on their food, like vinegar or mustard, were refused on the grounds that they might “begin to feel too much like other families”. Officials who were afraid that cash handouts to the poor might encourage dependence instead used that money to hire a resident home economist to help the poor make better use of what little they had.

As with modern means testing, this seems heartbreakingly callous. All these “supervisory” jobs intended to keep poor people from getting too much relief look suspiciously like a method for taking money that’s meant to help starving families and using it to pay the middle class to snoop on their less fortunate neighbors. Everyone loves giving middle-class busybodies jobs in “charity” work; no one seems to worry all that much about getting food to malnourished children.

To be fair, no one expected that the relief would have to go on for years. Everyone thought that the panic was temporary, that it would all be over in a couple of months. This doesn’t make it much better, but it does explain some of the reluctance.

Another surprising villain in all this is, of all things, the American Red Cross. Over and over again, the Red Cross either refused to provide aid or gave only the smallest amounts, even when people were literally starving to death. They sent officials to areas stricken by drought and flood, who reported back that there was not “evidence of malnutrition more than exists in normal times” or brought back stories about an old man complaining that the Red Cross was feeding him too well. Meanwhile, actual Red Cross workers were reporting circumstances like this, from Kentucky: 

We have filled up the poor farm. We have carted our children to orphanages for the sake of feeding them. There is no more room. Our people in the country are starving and freezing.

The Red Cross’s reasoning was the same as everyone else’s in government: “If people get it into their heads that when they have made a little cotton crop and tried to make a corn crop and failed and then expected charity to feed them for five months, then the Red Cross had defeated the very thing that it should have promoted, self-reliance and initiative.” Actually this statement is on the friendlier side of things; another Red Cross official, after touring Kentucky, wrote: “There is a feeling among the better farmers in Boyd County that the drought is providential; that God intended the dumb ones should be wiped out; and that it is a mistake to feed them.”

Was Hoover the Villain? 

This brings us to a major question — namely, was Herbert Hoover the real villain of the Great Depression? 

At first glance it certainly looks that way. Hoover consistently blocked relief bills in Congress. He had a clear no-relief policy, and he stuck to it throughout his time as president. And he did send the US Army to drive a bunch of poor veterans out of DC.

(He also lived in incredible opulence during his time in the White House. Always black tie dinners, always a table awash in gold, always fancy gourmet foreign food, always a row of butlers all exactly the same height. As they say, “not a good look”.)

But when you start looking at things in more detail, it becomes more complicated. It may seem naïve, but Hoover really thought that Americans would come together and take care of each other without the need for government assistance. He seemed to oppose relief because he thought that the federal government stepping in would make things worse. In one speech, he promised that “no man, woman or child shall go hungry or unsheltered through the coming winter” and emphasized that voluntary relief organizations would make sure that everyone was taken care of. He might have been wrong, but this doesn’t look villainous. (He also said, “This is, I trust, the last winter of this great calamity.” It was 1932.)

In a pretty bizarre state of affairs, he also seems to have been thwarted by the Red Cross at every turn, especially by its chairman, John Barton Payne. It makes sense that Hoover approved of the Red Cross, since it was one of the voluntary organizations he liked so much. What’s strange is how frequently the Red Cross just didn’t do jack, even when asked.

For example: In 1930, Hoover pressured the Red Cross to help out with drought relief in the Mississippi delta. The Red Cross agreed to give out $5 million in aid, but by the end of the year they had spent less than $500,000, mostly on handing out seed to farmers. 

Another time, Hoover went to the Red Cross to help provide relief to striking miners. This time the Red Cross refused, though again they offered seed to the miners (it’s unclear if there was even arable land near the mines). So Hoover went to a Quaker relief organization instead, and the Quakers agreed to help feed hungry children in the mining areas. Hoover struck a deal where the Red Cross would provide $200,000 to help the Quakers out. The Quakers waited two months, only for the Red Cross to refuse again. So the Quakers went ahead without them.

Somehow Hoover never turned his back on the Red Cross. Maybe he just liked the idea of aid organizations too much to realize that this one kept undermining him in times of crisis. 

But the other thing to understand about Hoover is that, despite his gruff no-handouts exterior, inside he was a bit of a softie. He stuck to his guns on the subject of cash relief, but he usually found a way to help without breaking his own rules. When the Red Cross refused to help the Quakers, Hoover rooted around and found $225,000 in an idle account belonging to a World War I relief organization, and sent that instead. In the flood of 1927 (when he was secretary of commerce), he refused to allocate federal funds directly, but he did have the U.S. Army distribute rations, tents, and blankets; he organized local governments to rebuild roads and bridges; and he got the Red Cross to distribute seeds and farm implements (the only thing they seemed comfortable with). This was a huge success, and a big part of what won him the 1928 presidential election!

The reason Hoover believed in the no-relief approach was simple — he had used it many times before, and it had always worked. He had a long track record of dealing with this kind of crisis. Before he was president, his nickname was “The Great Humanitarian” for the relief work he had done in Europe during World War I. People saw him as an omnicompetent engineering genius, and the reputation is at least partially deserved. It’s hard to overstate just how popular he was before the Depression: He won the election of 1928 with 444 electoral votes to 87, a total landslide.

Hoover thinks that attitude is the key to fighting financial panics, because this is exactly what he saw in 1921. There was a big stock market panic, which Hoover recognized was at least partially psychological. So he put out a bunch of reassuring press releases, told everyone that the panic was over, and sure enough, the market recovered.

So when the same thing happens in 1929, he figures the same approach will work. He does everything he can to project confidence and a sense of business as usual, and tries not to do anything that will start a bigger panic. This includes no federal relief — because if the federal government starts handing out money, that must mean things are REALLY bad. It makes a certain amount of sense, and it did work for him in the past, but for some reason it doesn’t work this time around. Maybe blame the Red Cross.

FDR definitely jumped in to take advantage of the confusion. As then-governor of New York, he started implementing the kind of relief plans that Hoover refused to consider. He gave direct food relief. He used the word “charity”. And when he ran for president, he made it very clear that he thought the federal government should cover what the states could not, and make sure that no one would starve.

This did win him the election. But afterwards, he started looking a lot more like Hoover and some of his cronies. In his 1935 state of the union address, he said, “continued dependence upon relief induces a spiritual and moral disintegration fundamentally destructive to the national fiber. To dole out relief in this way is to administer a narcotic, a subtle destroyer of the human spirit. It is inimical to the dictates of sound policy. It is in violation of the traditions of America.” We’re right back to where we started. 

FDR rolled out the Works Progress Administration, another program that tied relief to a person’s work. But it wasn’t administered much better than the Red Cross’s efforts. Many workers couldn’t live on the low wages the WPA offered. Even when they could, the jobs lasted only as long as the projects did, so workers often went months without jobs between different programs.

The Civilian Conservation Corps was for a long time the most popular of these programs, but in 1936, Roosevelt decided to focus on balancing the budget instead. He slashed the program from 300,000 people to about 1,400. Over time, most of the relief burden fell back on state and city governments, many of which descended back into cronyism.

Some of the programs, for migrants and “transients”, were worse, nearly Orwellian:

Before the New Deal, transients were the last group to receive relief under the old poor laws. Now the FTP funded a separate system of transient centers guided by federal regulations meant to guard against local governments’ ingrained cultural biases against drifters and migrant job seekers. In rural areas, transients would be gathered into federal “concentration camps” (a term that had not yet gained its ominous connotations) designed for long-term stays.

As waves of agricultural migrants spread across the United States, by 1940 the FSA had opened 56 camps around the country, 18 of them in California, each accommodating up to 350 families. Administrators nevertheless continued to keep costs as low as possible, following the “rehabilitation rather than relief” rule handed down by President Roosevelt. Rather than give migrants food, the camps taught home economics–style classes on nutrition and food budgeting.

By 1937 everything seems to have fallen apart again, and the authors suggest that the second half of the 1930s was as bad or worse than the first half. In 1938, Roosevelt refused to give any further direct food relief from the WPA coffers. The stories from 1939 are kind of harrowing. 

By 1939, the problems of unemployment and what to do with millions of jobless Americans seemed intractable. The economy continued to sputter along; real prosperity remained an elusive goal; and Americans were losing compassion for the destitute and hungry.  

A Houston, Texas, reporter lived for a week on the city’s $1.20 weekly food handout, eating mostly oatmeal, potatoes, stewed tomatoes, and cabbage, and lost nearly ten pounds. In Chicago, a family of four received $36.50 a month, meant to cover food, clothing, fuel, rent, and everything else. But fuel in the cold Chicago winters was expensive; families had no choice but to cut back on food. In Ohio, the governor again refused to give aid to Cleveland, which ran out of money for nearly a month—called the “Hunger Weeks”—at the end of 1939. The city was reduced to feeding its poor with flour and apples as desperate families combed garbage bins for anything edible. Adults lost as much as fifteen pounds, while children had to stay home, too weak from hunger to attend school. Doctors saw a jump in cases of pneumonia, influenza, pleurisy, tuberculosis, heart disease, suicide attempts, and mental breakdowns.

So whatever Hoover did wrong, he doesn’t deserve all the blame, and the WPA certainly did not end the Great Depression.

Book Review: A Square Meal – Part I: Foods of the ‘20s and ‘30s

[Content warning: Food, culture shock, milk]

They say that the past is a foreign country, and nowhere is this more true than with food. 

The book is A Square Meal: A Culinary History of the Great Depression by Jane Ziegelman and Andrew Coe, recommended to us by reader Phil Wagner. This book is, no pun intended, just what it says on the tin, a history of food during the 1920s and 1930s. Both decades are covered because you need to understand what food was like in the 1920s to understand what changed when the Great Depression battered the world in the ‘30s. 

Home is where the lard-based diet is

We read this book and were like, “what are you eating? I would never eat this.” 

The book picks up at the end of World War I, and the weird food anecdotes begin immediately:

Their greeting back in American waters—even before they landed—was rapturous. Local governments, newspapers, and anybody else who could chartered boats to race out to meet the arriving ships. When the Mauretania, carrying 3,999 troops, steamed into New York Harbor late in 1918, a police boat carrying the mayor’s welcoming committee pulled alongside. After city dignitaries shouted greetings to them through megaphones, the troops who crowded the deck and hung from every porthole bellowed en masse: “When do we eat?!” It became a custom for greeting parties to hire professional baseball pitchers to hurl California oranges at the troops—some soldiers sustained concussions from the barrage—to give them their first taste of fresh American produce in more than a year.

Not that the soldiers weren’t also well-fed at the front lines: 

Despite the privations they had undergone, the Americans held one great advantage over both the German enemy and the soldiers of their French and British allies. They were by far the best-fed troops of World War I.

The U.S. Army field ration in France varied according to circumstances, but the core of the soldiers’ daily diet was twenty ounces of fresh beef (or sixteen ounces of canned meat or twelve ounces of bacon), twenty ounces of potatoes, and eighteen ounces of bread, hard or soft. American troops were always proud that they enjoyed white bread, while all the other armies had to subsist on dark breads of various sorts. This ration was supplemented with coffee, sugar, salt, pepper, dried fruit, and jam. If supply lines were running, a soldier could eat almost four pounds of food, or 5,000 calories, a day. American generals believed that this was the best diet for building bone, muscle, tissue, and endurance. British and French troops consumed closer to 4,000 calories, while in the last months of the war the Germans were barely receiving enough rations to sustain themselves.

The overall food landscape of the 1920s is almost unrecognizable. The term “salad” at the time referred to “assemblages made from canned fruit, cream cheese, gelatin, and mayonnaise,” which the authors note FDR especially hated [1]. Any dish that contained tomatoes was called “Spanish” (a tradition that today survives only in the dish Spanish rice). And whatever the circumstances, there was ALWAYS dessert — even in the quasi-military CCC camps, even in the government-issued guides to balanced meals, even in school lunch programs that were barely scraping by. 

This book also has some interesting reminders that constipation used to be the disease of civilization. In fact, they mention constipation being called “civilization’s curse”. This is why we have the stereotype of old people being obsessed with fiber and regularity, even though that stereotype is about a generation old now, and refers to a generation that has largely passed.

In the countryside, farm diets were enormous and overwhelmingly delicious: 

In midwestern kitchens, the lard-based diet achieved its apotheosis in a dish called salt pork with milk gravy, here served with a typical side of boiled potatoes:

On a great platter lay two dozen or more pieces of fried salt pork, crisp in their shells of browned flour, and fit for a king. On one side of the platter was a heaping dish of steaming potatoes. A knife had been drawn once around each, just to give it a chance to expand and show mealy white between the gaping circles that covered its bulk. At the other side was a boat of milk gravy, which had followed the pork into the frying-pan and had come forth fit company for the boiled potatoes.

The first volume of their oral history, Feeding Our Families, describes the Indiana farmhouse diet from season to season and meal to meal. In the early decades of the century, the Hoosier breakfast was a proper sit-down feast featuring fried eggs and fried “meat,” which throughout much of rural America meant bacon, ham, or some other form of pork. In the nineteenth century, large tracts of Indiana had been settled by Germans, who left their mark on the local food culture. A common breakfast item among their descendants was pon haus, a relative of scrapple, made from pork scraps and cornmeal cooked into mush, molded into loaf pans and left to solidify. For breakfast, it was cut and fried. Toward fall, as the pork barrel emptied, the women replaced meat with slices of fried apples or potatoes. The required accompaniment was biscuits dressed with butter, jam, jelly, sorghum syrup, or fruit butter made from apples, peaches, or plums. A final possibility—country biscuits were never served naked—was milk gravy thickened with a flour roux.

Where farmhouse breakfasts were ample, lunch was more so, especially in summer when workdays were long and appetites pushed to their highest register. With the kitchen garden at full production, the midday meal often included stewed beets, stewed tomatoes, long-simmered green beans, boiled corn, and potatoes fried in salt pork, all cooked to maximum tenderness. At the center of the table often stood a pot of chicken and dumplings, with cushiony slices of white bread to sop up the cooking broth. The gaps between the plates were filled with jars of chow-chow; onion relish; and pickled peaches, cauliflower, and watermelon rinds. The midday meal concluded with a solid wedge of pie. Like bread, pies were baked in bulk, up to a dozen at a time, and could be consumed at breakfast, lunch, and dinner.

Ingredients were prepared in ways that sound pretty strange to a modern ear. Whole onions were baked in tomato sauce and then eaten for lunch. Whole tomatoes were scalloped on their own. 

Organ meats were considered perfectly normal, if somewhat tricky to cook. The book mentions how food columnists had to teach urban housewives to remove the “transparent casing” that brains naturally come in, the membrane from kidneys, and the arteries and veins from hearts — not the sort of thing you would expect from a modern food columnist. On hog-killing day, an annual event all over the rural United States:

The most perishable parts of the animal were consumed by the assembled crowd, the brains scrambled with eggs, the heart and liver fried up and eaten with biscuits and gravy. Even bladders were put to good use—though it wasn’t culinary. Rather, they were given to the children, who inflated them, filled them with beans, and used them as rattles.

There are a lot of fascinating recipes in this book, but perhaps our favorite is this recipe that appears in a section on the many uses of pork lard: 

Appalachian farm women prepared a springtime specialty called “killed lettuce,” made from pokeweed, dandelion, and other wild greens drizzled with hot bacon grease that “killed,” or wilted, the tender, new leaves. The final touch to this fat-slicked salad was a welcome dose of vinegar.

You might expect the urban food situation to be more modern, seeing as it involves less hog-killing. But if anything, it’s stranger. 

To start with, ice cream delicacies were considered normal lunch fare: 

The most typical soda fountain concoction was the ice cream soda, which was defined as “a measured quantity of ice cream added to the mixture of syrup and carbonated water.” From there, the imaginations of soda jerks were given free range. Trade manuals such as The Dispenser’s Formulary or Soda Water Guide contained more than three thousand soda fountain recipes for concoctions like the Garden Sass Sundae (made with rhubarb) and the Cherry Suey (topped with chopped fruit, nuts, and cherry syrup). … From relatively austere malted milks to the most elaborate sundaes, all of these sweet confections were considered perfectly acceptable as a main course for lunch, particularly by women. In fact, American sugar consumption spiked during the 1920s. This was in part thanks to Prohibition—deprived of alcohol, Americans turned to anything sweet for a quick, satisfying rush.

Delicatessens and cafeterias, which we take for granted today, were strange new forms of dining. The reaction to these new eateries can only be described as apocalyptic. Delicatessens were described as “emblems of a declining civilization, the source of all our ills, the promoter of equal suffrage, the permitter of business and professional women, the destroyer of the home.” The world of the 1920s demanded an entirely new vocabulary for many new social ills springing up — “cafeteria brides” and “delicatessen husbands” facing down the possibility of that new phenomenon, the “delicatessen divorce.” The fear was that your flapper wife, unable to make a meal in her tiny city kitchenette, or out all day with a self-supporting career, would feed you food that she got from the delicatessen, instead of a home-cooked and hearty meal. 

In all of these cases, the idea was that new ways of eating would destroy the kitchen-centric American way of life — which, to be fair, they did. Calling a deli “the destroyer of the home” seems comical to us, but they were concerned that these new conveniences would destroy the social structures that they knew and loved, and they were right. We think our way of life is an improvement, of course, but you can hardly fault the accuracy of their forecasting.

Really, people found these new eateries equal parts wonderful and terrifying — like any major change, they had their songs of praise as well as their fiery condemnations (hot take: delicatessens were the TikTok of the 1920s). For a stirring example from the praise section, take a look at this lyrical excerpt from the June 18, 1922 edition of the New York Tribune:

Spices of the Orient render delectable the fruits of the Occident. Peach perches on peach and pineapple, slice on slice, within graceful glass jars. Candies are there and exhibits of the manifold things that can be pickled in one way or another. Chickens, hams and sausages are ready to slice, having already been taken through the preliminaries on the range. There are cheeses, fearful and wonderful, and all the pretty bottles are seen, as enticing looking as ever, although they are but the fraction of their former selves [i.e., under Prohibition].

CHEESES FEARFUL AND WONDERFUL

Sandwiches were not only strange and new, but practically futuristic. “Before the 1920s, sandwiches were largely confined to picnics and free lunches in saloons,” they tell us, “and, with their crusts cut off, delicate accompaniments to afternoon tea.” The writer George Jean Nathan claimed that before the 1920s, there existed only eight basic sandwich types: Swiss cheese, ham, sardine, liverwurst, egg, corned beef, roast beef, and tongue (yes). But by 1926, he “claimed that he had counted 946 different sandwich varieties stuffed with fillings such as watermelon and pimento, peanut butter, fried oyster, Bermuda onion and parsley, fruit salad, aspic of foie gras, spaghetti, red snapper roe, salmi of duck, bacon and fried egg, lettuce and tomato, spiced beef, chow-chow, pickled herring, asparagus tips, deep sea scallops, and so on ad infinitum.”

As with the delicatessen, Americans were not going to take this sandwich thing lying down. Nor would they take it at all calmly! Boston writer Joseph Dinneen described sandwiches as “a natural by-product of modern machine civilization.”

Make your own “biggest thing since sliced bread” joke here, but this sandwich craze actually led directly first to the invention of special sandwich-shaped loaves with flattened tops, and then to sliced bread, which hit the market in 1928.

Frozen foods had also just been invented (they come out soggy and tasteless unless you freeze them really fast; Clarence Birdseye figured out quick freezing after watching fish freeze solid during an ice fishing trip in Labrador) and were considered a novelty. Yet somehow the brand name Jell-O dates all the way back to 1897.

Many new foods didn’t fit squarely within existing categories. This is sort of like how squid ice cream seems normal in Japan. We have rules about what you can put in an ice cream — mint ice cream makes sense, but onion ice cream is right out — but the Japanese don’t care what we think the ice cream rules are. In the 1920s and 1930s many foods were unfamiliar or actually brand new, so no one had any expectations of what to do with them. For example, the banana, which you know as a fruit, was new enough to Americans that they were still figuring out how the thing should be served.

Does seem guaranteed to start conversation! 

We’re sure bananas would be fine served as a vegetable, or with bacon, but this is certainly not the role we would assign to them today.

When the Depression hit, grapefruit somehow found its way into food relief boxes in huge quantities; “so much grapefruit that people didn’t know what to do with it.” Soon the newspapers were coming up with imaginative serving suggestions, like in this piece from the Atlanta Constitution:

It may open the meal, served as a fruit cocktail, in halves with a spoonful of mint jelly in the center or sprinkled with a snow of powdered sugar. It bobs up in a fruit cup, or in a delicious ice. It may be served broiled with meat, appear in a fruit salad or in a grapefruit soufflé pie. Broiled grapefruit slices, seasoned with chili sauce, make an unusual and delightful accompaniment for broiled fish, baked fish or chops.

Some of these sound pretty good; but still, unusual.

Vitamins

The other really strange and exciting thing about this period is that they had just discovered vitamins.

As we’ve covered previously, this was not as easy as you might think. It’s simple to think in terms of vitamins when you’re raised with the idea, but it took literally centuries for people to come up with the concept of a disease of deficiency, even with the totally obvious problem of scurvy staring everyone right in the face. 

Scurvy isn’t just a problem for polar explorers and sailors in the Royal Navy. Farm families living through the winter on preserved foods from their cellar tended to develop “spring fever” just before the frost broke, which the authors of this book think was probably scurvy. Farmwives treated it with “blood tonics” like sassafras tea or sulfured molasses, or the first-sprouted dandelions and onions of spring.

But just around the turn of the century, and with the help of cosmic accidents involving guinea pigs, people finally started to get this vitamin thing right. So the 1920s and 30s paint an interesting picture of what cutting-edge nutrition research looks like when it’s so new that it’s still totally bumbling and incompetent. 

In 1894, Wilbur Olin Atwater established America’s first dietary standards. Unfortunately, Atwater’s recommendations didn’t make much sense. For example, in this system men with more strenuous jobs were assigned more food than men with less strenuous jobs — a carpenter would get more calories than a clerk. This makes some sense, but Atwater then used each man’s food levels to calculate the amount of food required for his wife and kids. The children of men with desk jobs sometimes got half as much food as the children of manual laborers! The idea of treating each member of the family as their own person, nutritionally speaking, was radical in the early 1900s, but the observation that some children were “kept alive in a state of semi-starvation” had begun to attract attention.
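To make the structural flaw concrete, here is a minimal sketch, in Python, of how a ration scheme behaves when every family member’s allowance is pegged to the breadwinner’s job, as Atwater’s was. The calorie figures and the wife/child multipliers below are hypothetical placeholders, not Atwater’s actual tables; the point is just that under a scheme like this, a clerk’s children automatically get less food than a carpenter’s children, whatever their actual needs.

```python
# A minimal sketch of an Atwater-style household ration, where everyone's
# allowance is scaled off the male breadwinner's. All numbers are
# illustrative assumptions, not Atwater's real figures.

# Hypothetical daily calorie allotments by occupation.
WORK_CALORIES = {
    "carpenter": 3500,  # strenuous job -> larger ration
    "clerk": 2500,      # desk job -> smaller ration
}

# Hypothetical multipliers: other family members are treated as a fraction
# of the man's ration rather than being assessed on their own needs.
WIFE_FRACTION = 0.8
CHILD_FRACTION = 0.5


def household_ration(job: str, num_children: int) -> dict[str, float]:
    """Return daily calories under the pegged-to-the-breadwinner scheme."""
    base = WORK_CALORIES[job]
    return {
        "father": base,
        "wife": base * WIFE_FRACTION,
        "each_child": base * CHILD_FRACTION,
        "household_total": base * (1 + WIFE_FRACTION + CHILD_FRACTION * num_children),
    }


if __name__ == "__main__":
    for job in ("carpenter", "clerk"):
        print(job, household_ration(job, num_children=2))
    # The flaw: the clerk's children get proportionally less food than the
    # carpenter's children, even though a child's needs don't depend on the
    # father's job -- the problem the next generation set out to fix.
```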

People knew they could do better, so following Atwater’s death in 1907, the next generation got to work on coming up with a better system. Atwater had assumed that basically all fats were the same, as were all carbohydrates, all protein, etc. But Dr. Elmer V. McCollum, “a Kansas farm boy turned biochemist”, was on the case investigating fats. 

We really want to emphasize that they had no system at this point, no idea what they were doing. Medical science was young, and nutritional science was barely a decade old. Back then they were still just making things up. These days “guinea pig” and “lab rat” are clichés, but these clichés hadn’t been invented back in 1907. Just like how Holst and Frolich seem to have picked guinea pigs more or less at random to study scurvy, and how Karl Koller’s lab used big frogs to test new anesthetics, McCollum was one of the first researchers to use rats as test subjects.

Anyways, McCollum tried feeding his rats different kinds of fats to see if, as Atwater claimed, all fats had the same nutritional value. He found that rats that ate lots of butterfat “grew strong and reproduced, while those that ate the olive oil did not”. He teamed up with a volunteer, Marguerite Davis, and they discovered a factor that was needed for growth and present not only in milk, but eggs, organ meat, and alfalfa leaves. This factor was later renamed vitamin A (as the first to be discovered), and the age of the vitamins had begun. Soon McCollum and Davis were on the trail of a second vitamin, which they naturally called vitamin B.

The public went absolutely bananas for vitamins. It’s not clear if this was a totally natural public reaction, or if it was in response to fears drummed up by… home economists. Yes, home economics, the most lackluster class of all of middle school, represents the last lingering influence of what was once a terrible force in American politics:

More than anything else, women were afraid of the “hidden hunger” caused by undetectable vitamin deficiencies that could well be injuring their children. … Home economists leveraged those fears. To ensure compliance, bureau food guides came with stark admonitions, warning mothers that poor nutrition in childhood could handicap a person for life. Women were left with the impression that one false move on their part meant their children would grow up with night blindness and bowed knees.

Whatever the cause, vitamins took America by storm. The makers of any food found to be high in one vitamin or another quickly turned that finding to advertising purposes. Quaker Oats, found to be high in vitamin B, advertised to kids with a campaign that “teamed up with Little Orphan Annie and her new pal, a soldier named Captain Sparks, who could perform his daring rescues because he had eaten his vitamins.” For adults, they implied that vitamin B would help make you vigorous in bed:

…a snappy new advertising campaign: “I eat Quaker Oats for that wonderful extra energy ‘spark-plug.’ Jim thinks I have ‘Oomph!’ but I know it’s just that I have plenty of vitality and the kind of disposition a man likes to live with.” What she did with her extra “oomph” was unspecified, but the graphic showed a young couple nose to nose, smiling into each other’s eyes.

Vitamins continued to have this weird grip over the imagination for a long time. As late as the 1940s, American food experts worried that the Nazis had developed some kind of super-nutritional supplement, a “magical Buck Rogers pill,” to keep their army tireless and efficient (there probably was such a pill, but that pill was methamphetamine). In response, Roosevelt convened a 900-person National Nutrition Conference for Defense, a full quarter of them home economists, to tackle malnutrition as part of the war effort.

Maybe it’s not surprising that vitamins had such a hold on the popular imagination. It’s hard for us to imagine growing up in a world where scurvy, beriberi, and rickets were a real and even terrifying danger, not just funny-sounding words you might encounter in a Dickens novel. But for people living in the 1920s, they were no joke. Look at your local five-year-old and think how they will never understand the real importance of the internet, and what life was like before. You’re the same way about vitamins.

Milk

The final thing we learned is that people from the 1920s and 1930s had an intense, almost deranged love for milk.

Milk was always mentioned first and usually mentioned often. It was on every menu. Good Housekeeping’s 1926 article, Guide Posts to Balanced Meals, included “One pint of milk a day as either a beverage or partly in soups, sauces or desserts” as guidepost #1. Pamphlets from the USDA’s Bureau of Home Economics suggested that one fifth of a family’s food budget should be spent on milk. Milk was served at every meal in the schoolhouse, with milk and crackers at recess, the target being a quart of milk for every child, every day.

Milk was on every relief list. Food relief in NYC in 1930, a very strict beans-and-potatoes affair, still made sure to include a pound of evaporated milk for every family. Even for those on microscopic fifty-cent-a-day menus, milk was recommended at every meal, “one pint for breakfast, some for lunch, and then another pint for supper.” One father struggling to adjust to the Depression said, “We had trouble learning to live within the food allowance allotted us. We learned it meant oleomargarine instead of butter. It meant one quart of milk a day for the children instead of three.” Even the tightest-fisted relief lists included a pint of milk a day for adults, and a quart a day for children. The most restrictive diets of all were bread and — you guessed it — milk.

Milk was the measure of destitution. Descriptions of people eating “whatever they could get” sound like this: “inferior qualities of food and less of it; less milk; loose milk instead of bottled milk, coffee for children who previously drank milk.” When describing the plight of West Virginia mining families, a state union leader said, “Their diet is potatoes, bread, beans, oleomargarine, but not meat, except sow-belly two or three times a week. The company won’t let the miners keep cows or pigs and the children almost never have fresh milk. Only a few get even canned milk.”

There’s no question — milk was the best food. The government sent McCollum, the guy who discovered vitamins, on lecture tours around the country, where he said:

Who are the peoples who have achieved, who have become large, strong, vigorous people, who have reduced their infant mortality, who have the best trades in the world, who have an appreciation for art and literature and music, who are progressive in science and every activity of the human intellect? They are the people who have patronized the dairy industry.

Normal milk wasn’t enough for these people, so in 1933 home economists at Cornell developed a line of “wonder foods” around the idea of combining milk with different kinds of cereals. They called them Milkorno, Milkwheato, and Milkoat. These products are about what you would expect, but the reception was feverish:

With great fanfare, Rose introduced Milkorno, the first of the cereals, at Cornell’s February 1933 Farm & Home Week, where the assembled dignitaries—including Eleanor Roosevelt, wife of the president-elect—were fed a budget meal that included a Milkorno polenta with tomato sauce. The price tag per person was 6½ cents. FERA chose Milkwheato (manufactured under the Cornell Research Foundation’s patent) to add to its shipments of surplus foods, contracting with the Grange League Federation and the Ralston Purina Company to manufacture it. … Milkwheato and its sister cereals represented the pinnacle of scientifically enlightened eating. Forerunners to our own protein bars and nutritional shakes, they were high in nutrients, inexpensive, and nonperishable. White in color and with no pronounced flavor of their own, they were versatile too. Easily adapted to a variety of culinary applications, they boosted the nutritional value of whatever dish they touched. They could be baked into muffins, cookies, biscuits, and breads; stirred into chowders and chili con carne; mixed into meat loaf; and even used in place of noodles in Chinese chop suey.

We had always assumed that the American obsession with milk was the result of the dairy lobby trying to push more calcium on us than we really need. And maybe this is partially true. But public opinion of dairy has fallen so far from the rabid heights of the 1930s that now we wonder if milk might actually be underestimated. Is the dairy lobby asleep at the wheel? Still resting on their laurels? Anyways, if you want to eat the way your ancestors ate back in the 1920s, the authentic way to start your day off right is by drinking a nice tall pint of milk.

[1] : There might be a class element here? The authors say, “FDR recoiled from the plebeian food foisted on him as president; perhaps no dish was more off-putting to him than what home economists referred to as ‘salads,’ assemblages made from canned fruit, cream cheese, gelatin, and mayonnaise.”


PART II HERE

The Only True Wisdom is Knowing that You Can’t Draw a Bicycle

I. 

Early on in science, there never even could have been a replication crisis, because everyone was just trying all the stuff. They were writing letters to each other with directions, trying each other’s studies, and seeing what they could confirm for themselves.

Today, scientists would tell you that replicating someone else’s work takes decades of specialized training, because most findings are too subtle and finicky to be reproduced by just anyone. For example, consider this story from Harvard psychology professor Jason Mitchell, about how directions depend on implicit knowledge, and how it’s impossible to fully explain your procedure to anyone:

I have a particular cookbook that I love, and even though I follow the recipes as closely as I can, the food somehow never quite looks as good as it does in the photos. Does this mean that the recipes are deficient, perhaps even that the authors have misrepresented the quality of their food? Or could it be that there is more to great cooking than just following what’s printed in a recipe? I do wish the authors would specify how many millimeters constitutes a “thinly” sliced onion, or the maximum torque allowed when “fluffing” rice, or even just the acceptable range in degrees Fahrenheit for “medium” heat. They don’t, because they assume that I share tacit knowledge of certain culinary conventions and techniques; they also do not tell me that the onion needs to be peeled and that the chicken should be plucked free of feathers before browning. … Likewise, there is more to being a successful experimenter than merely following what’s printed in a method section. Experimenters develop a sense, honed over many years, of how to use a method successfully. Much of this knowledge is implicit.

Mitchell believes in a world where findings are so fragile that only extreme insiders, close collaborators of the original team, could possibly hope to reproduce their findings. The implicit message here is something like, “don’t bother replicating ever; please take my word for my findings.” 

The general understanding of replication is slightly less extreme. To most researchers, replication is when one group of scientists at a major university reproduces the work of another group of scientists at a different major university. There’s also a minority position that replications should be done by many labs. Either way, replication is treated as an internal process of double-checking: “take the community’s word”.

But this doesn’t seem quite right to us either. If a finding can’t be confirmed by outsiders like you — if you can’t see it for yourself — it doesn’t really “count” as replication. This used to be the standard of evidence (confirm it for yourself or don’t feel bound to take it seriously) and we think this is a better standard to hold ourselves to.

It’s not that Mitchell is wrong — he’s right, there is a lot of implicit knowledge involved in doing anything worth doing. Sometimes science is really subtle and hard to replicate at home; other times, it isn’t. But whether a particular study is easy or hard to replicate is beside the point; the argument is a dodge. It’s a load of crap because the whole reason to do research in the first place is to fight against received wisdom.

The motto of the Royal Society, one of the first scientific societies, was and still is nullius in verba. Roughly translated, this means, “take no one’s word” or “don’t take anyone’s word for it”. We think this is a great motto. It’s a good summary of the kind of spirit you need to investigate the world. You have the right to see for yourself and make up your own mind; you shouldn’t have to take someone’s word. If you can take someone else’s word for it — a king, maybe — then why bother? 

In the early 1670s, Antonie van Leeuwenhoek started writing to the Royal Society, talking about all the “little animals” he was seeing in drops of pond water when he examined them under his new microscopes. Long particles with green streaks, wound about like serpents, or the copper tubing in a distillery. Animals fashioned like tiny bells with long tails. Animals spinning like tops, or shooting through the water like pikes. “Little creatures,” he said, “above a thousand times smaller than the smallest ones I have ever yet seen upon the rind of cheese.”

Wee beasties

Naturally, the Royal Society found these reports a little hard to believe. They had published some of van Leeuwenhoek’s letters before, so they had some sense of who the guy was, but this was almost too much:

Christiaan Huygens (son of Constanijn), then in Paris, who at that time remained sceptical, as was his wont: ‘I should greatly like to know how much credence our Mr Leeuwenhoek’s observations obtain among you. He resolves everything into little globules; but for my part, after vainly trying to see some of the things which he sees, I much misdoubt me whether they be not illusions of his sight’. The Royal Society tasked Nehemiah Grew, the botanist, to reproduce Leeuwenhoek’s work, but Grew failed; so in 1677, on succeeding Grew as Secretary, Hooke himself turned his mind back to microscopy. Hooke too initially failed, but on his third attempt to reproduce Leeuwenhoek’s findings with pepper-water (and other infusions), Hooke did succeed in seeing the animalcules—‘some of these so exceeding small that millions of millions might be contained in one drop of water’ 

People were skeptical and didn’t take van Leeuwenhoek at his word alone. They tried to get the same results, to see these little animals for themselves, and for a number of years they failed. They got no further help from van Leeuwenhoek, who refused to share his methods, or the secrets of how he made his superior microscopes. Yet even without a precise recipe, Hooke was eventually able to see the tiny, wonderful creatures for himself. And when he did, van Leeuwenhoek became a scientific celebrity almost overnight.

If something is the truth about how the world works, the truth will come out, even if it takes Robert Hooke a few years to confirm your crazy stories about the little animals you saw in your spit. Yes, research is very exacting, and can demand great care and precision. Yes, there is a lot of implicit knowledge involved. The people who want to see for themselves might have to work for it. But if you think what you found is the real McCoy, then you should expect that other people will be able to go out and see it for themselves. And assuming you are more helpful than van Leeuwenhoek, you should be happy to help them do it. If you don’t think people will be able to replicate it at their own bench, are you sure you think you’ve discovered something?

Fast forward to the early 1900s. The famous French physicist Prosper-René Blondlot is studying X-rays, which had first been described by Wilhelm Röntgen in 1895. This was an exciting time for rays of all stripes — several forms of invisible radiation were under intense investigation, not only X-rays but ultraviolet light, gamma rays, and cathode rays.

Also he looked like a wizard

So Blondlot was excited, but not all that surprised, when he discovered yet another new form of radiation. He was firing X-rays through a quartz prism and noticed that a detector was glowing when it shouldn’t be. He performed more experiments and in 1903 he announced the discovery of: N-rays!  

Blondlot was a famous physicist at a big university in France, so everyone took this seriously and they were all very excited. Soon other scientists had replicated his work in their own labs and were publishing scores of papers on the subject. They began documenting the many strange properties of N-rays. The new radiation would pass right through many substances that blocked light, like wood and aluminum, but was obstructed by water, clouds, and salt. N-rays were emitted by the sun and by human bodies (especially flexed muscles and certain areas of the brain), as well as by rocks that had been left in the sun and been allowed to “soak up” the N-rays from sunlight.

The procedure for detecting these rays wasn’t easy. You had to do everything just right — you had to use phosphorescent screens as detectors, you had to stay in perfect darkness for a half hour so your eyes could acclimate, etc. Fortunately Blondlot was extremely forthcoming and always went out of his way to help provide these implicit details he might not have been able to fit in his reports. And he was vindicated, because with his help, labs all over the place were able to reproduce and extend his findings.

Well, all over France. Some physicists outside France, including some very famous ones, weren’t able to reproduce Blondlot’s findings at all. But as before, Blondlot was very forthcoming and did his best to answer everyone’s questions. 

Even so, over time some of the foreigners began to get a little suspicious. Eventually some of them convinced an American physicist, Robert W. Wood, to go visit Blondlot in France to see if he could figure out what was going on. 

What a dude. Classic American

Blondlot took Wood in and gave him several demonstrations. To make a long story short (you can read Wood’s full account here; it’s pretty interesting), Wood found a number of problems with Blondlot’s experiments. The game was really up when Wood secretly removed a critical prism from one of the experiments, and Blondlot continued reporting the same results as if nothing had happened. Wood concluded that N-rays and all the reports had been the work of self-deception, calling them “purely imaginary”. Within a couple of years, no one believed in N-rays anymore, and today they’re seen as a cautionary tale. 

So much for the subtlety and implicit knowledge needed to do cutting-edge work. Maybe your results really are hard to get right; but if other people can’t reproduce your findings, maybe they shouldn’t take your word for it.

This is the point of all those chemistry sets your parents (or cool uncle) gave you when you were a kid. This is the point of all those tedious lab classes in high school. They were poorly executed and all, but this was the idea. If whatever Röntgen or Pasteur or Millikan or whoever found is for real, you should be able to reproduce the same thing for yourself in your high school with only the stoner kid for a lab assistant (joke’s on you, stoners make great chemists — they’re highly motivated).

Some people will scoff. After all, what kind of teenager can replicate the projects reported in a major scientific journal? Well, as just one example, take Dennis Gabor: “during his childhood in Budapest, Gabor showed an advanced aptitude for science; in their home laboratory, he and his brother would often duplicate the experiments they read about in scientific journals.”

Clearly some studies will be so complicated that Hungarian teenagers won’t be able to replicate them, or may require equipment they don’t have access to. And of course the Gabor brothers were not your average teenagers. But it used to be possible, and it should be made possible whenever possible. Because otherwise you are asking the majority of people to take your claims on faith. If a scientist is choosing between two lines of work of equal importance, one that requires a nuclear reactor and the other that her neighbor’s kids can do in their basement, she should go with the basement.

It’s good if one big lab can recreate what another big lab claims to have found. But YOU are under no obligation to believe it unless you can replicate it for yourself.

You can of course CHOOSE to trust the big lab, look at their report and decide for yourself. But that’s not really replication. It’s taking someone’s word for something. 

There’s nothing wrong with taking someone’s word; you do it all the time. Some things you can’t look into for yourself; and even if you could, you don’t have enough time to look into everything. So we are all practical people and take the word of people we trust for lots of things. But that’s not replication.

Something that you personally can replicate is replication. Watching someone else do it is also pretty close, since you still get to see it for yourself. Something that a big lab would be able to replicate is not really replication. It’s nice to have confirmation from a second lab, but now you’re just taking two people’s word for it instead of one person’s. Something that can in principle be replicated, but isn’t practical for anyone to actually attempt, is not replication at all.

If it cannot be replicated even in principle, then what exactly do you think you’re doing? What exactly do you think you’ve discovered here? 

What ever happened to all the public science demonstrations

We find it kind of concerning that “does replicate” and “doesn’t replicate” have come to be used as synonyms for “true” and “untrue”. It’s not enough to say that things replicate or not. Blondlot’s N-ray experiments were replicated hundreds of times around France, until all of a sudden they weren’t; van Leeuwenhoek’s observations of tiny critters in pond water weren’t replicated for years, until they were. The modern take on replication (lots of replications from big labs = good) would have gotten both of these wrong.

II.

If knowing the truth about some result is important to you, don’t just take someone’s word for it. Don’t leave it up to the rest of the world to do this work; we’re all bunglers, you should know that. If you can, you should try it for yourself.

So let’s look at some examples of REAL replication. We’ll take our examples from psychology, since as we saw earlier, they’re in the thick of the modern fight over replication.

We also want to take a minute to defend the psychologists, at least on the topic of replication (psychology has other sins, but that’s a subject for another time). Psychology has gotten a lot of heat for being the epicenter of the replication crisis. Lots of psychology studies haven’t replicated under scrutiny. There have been many high-profile disputes and attacks. Lots of famous findings seem to be made out of straw.

Some people have taken this as a sign that psychology is all bunkum. They couldn’t be more wrong — it’s more like this. One family in town gets worried and hires someone to take a look at their house. The specialist shows up and sure enough, their house has termites. Some of the walls are unsafe; parts of the structure are compromised. The family is very worried but they start fumigating and replacing boards that the termites have damaged to keep their house standing. All the other families in town laugh at them and assume that their house is the most likely to fall down. But the opposite is true. No other family has even checked their home for termites; but if termites are in one house in town, they are in other houses for sure. The first family to check is embarrassed, yes, but they’re also the only family who is working to repair the damage.

The same thing is going on in psychology. It’s very embarrassing for the field to have its big mistakes aired in public; but psychology is also distinct for being the first field willing to take a long hard look at itself and make a serious effort to change for the better. It hasn’t done a great job, but it’s one of the only fields that is even trying. We won’t name names, but you can bet that other fields have just as many problems with p-hacking — the only difference is that those fields are doing a worse job rooting it out.

The worst thing you can say about psychology is that it is still a very young field. But try looking at physics or chemistry when they were only 100 years old, and see how well they were doing. From this perspective, psychology is doing pretty ok. 

Despite setbacks, there has been some real progress in psychology. So here are a few examples of psychological findings that can actually be replicated, by any independent researcher in an afternoon. You don’t have to take our word or anyone else’s word for these findings if you don’t want to. Try it for yourself! Please do try this at home, that’s the point.

Are these the most important psychology findings? Probably not — we picked them because they’re easy to replicate, and you should be able to confirm their results from your sofa (disclaimer: for some of them, you may have to leave your sofa). But all of them are things we didn’t know about 150 years ago, so they represent a real advance in what we know about the mind.

For most of these you will need a small group of people, because most of these findings are true statistically, not guaranteed to work in every single case. But as long as you have a dozen people or so, they should be pretty reliable.

Draw a Bicycle — Here’s a tricky one you can do all on your own. You’ve seen a bicycle before, right? You know what they look like? Ok, draw one. 

Unless you’re a bicycle mechanic, chances are you’ll be really rubbish at this — most people are. While you can recognize a bicycle no problem, you don’t actually know what one looks like. Most people produce drawings that look something like this:

Needless to say, that’s not a good representation of the average bicycle.

Seriously, try this one yourself right now. Don’t look up what a bicycle looks like; draw it as best you can from memory and see what you get. We’ll put a picture of what a bicycle actually looks like at the end of this post. 

Then, tweet your bicycle drawings at us at @mold_time on Twitter.

(A similar example: which of the images below shows what a penny looks like?)

Wisdom of the Crowd — Wisdom of the crowd refers to the fact that people tend to make pretty good guesses on average, even when their individual guesses aren’t that good.

You can do this by having a group of people guess how many jellybeans are in a jar of jellybeans, or how much an ox weighs. If you average all the guesses together, most of the time it will be pretty close to the right answer. But we’ve found it’s more fun to stand up there and ask everyone to guess your age.

We’ve had some fun doing this one ourselves; it’s a nice trick, though you need a group of people who don’t know you all that well. It works pretty well in a classroom.

This only works if everyone makes their judgments independently. To make sure they don’t influence each other’s guesses, have them all write down their guesses on a piece of paper before anyone says theirs out loud.

Individual answers are often comically wrong — sometimes off by as much as a decade in either direction — but we’ve been very impressed. In our experience the average of all the guesses is very accurate, often to within a couple of months. But give it a try for yourself.
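
If you want to check the numbers afterwards, the tally is just an average. Here’s a minimal sketch in Python of that step (the guesses and the “true age” below are invented for illustration, so swap in whatever your group actually writes down):

```python
# Minimal sketch of the wisdom-of-the-crowd tally.
# The guesses and true_age are invented for illustration;
# replace them with the numbers your group writes down.
guesses = [24, 31, 38, 27, 41, 29, 35, 22, 33, 36, 28, 30]
true_age = 32

mean_guess = sum(guesses) / len(guesses)
worst_miss = max(abs(g - true_age) for g in guesses)

print(f"average guess: {mean_guess:.1f} (true age: {true_age})")
print(f"worst individual miss: {worst_miss} years")
```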

Emotion in the Face — You look at someone’s face to see how they’re feeling, right? Well, maybe. There’s a neat paper from a few years ago that has an interesting demonstration of how this isn’t always true. 

They took photos of tennis players who had just won a point or who had just lost a point, and cut apart their faces and bodies (in the photos; no tennis pros were harmed, etc.). Then they showed people just the bodies or just the faces and asked them to rate how positively or negatively the person was feeling:

They found that people could usually tell that a winning body was someone who was feeling good, and a losing body was someone feeling bad. But with just the faces, they couldn’t tell at all. Just look above – for just the bodies, which guy just won a point? How about for the faces, who won there?

Then they pushed it a step further by putting winning faces on losing bodies, and losing faces on winning bodies, like so:

Again, the faces didn’t seem to matter. People thought chimeras with winning bodies felt better than chimeras with losing bodies, and seemed to ignore the faces.

This one should be pretty easy to test for yourself. Go find some tennis videos on the internet, and take screenshots of the players when they win or lose a point. Cut out the faces and bodies and show them to a couple friends, and ask them to rate how happy/sad each of the bodies and faces seems, or to guess which have just won a point and which have just lost. You could do this one in an afternoon. 

Anchoring — This one is a little dicey, and you’ll need a decent-sized group to have a good chance of seeing it. 

Ask a room of people to write down some number that will be different for each of them — like the last four digits of their cell phone number, or the last two digits of their student ID or something. Don’t ask for part of their social security number or something that should be kept private. 

Let’s assume it’s a classroom. Everyone takes out their student ID and writes down the last two digits of their ID number. If your student ID number is 28568734, you write down “34”.

Now ask everyone to guess how old Mahatma Gandhi was when he died, and write that down too. If this question bores you, you can ask them something else — the average temperature in Antarctica, the average number of floors in buildings in Manhattan, whatever you like.

Then ask everyone to share their answers with you, and write them on the board. You should see that people who have higher numbers as the last two digits of their student ID number (e.g. 78 rather than 22) will guess higher numbers for the second question, even though the two numbers are unrelated. They call this anchoring. You can plot the student ID digits and the estimates of Gandhi’s age on a scatterplot if you like, or even calculate the correlation. It should come out positive.
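
If you’d rather not just eyeball the board, here’s a minimal sketch in Python of that last analysis step. The two lists are invented for illustration; fill them in with the numbers your class actually reports. Note that statistics.correlation requires Python 3.10 or newer.

```python
# Minimal sketch of the anchoring analysis: do higher ID digits
# go along with higher guesses? The data are invented for illustration.
import statistics

id_digits   = [12, 85, 34, 67, 22, 90, 45, 78, 5, 56, 31, 73]   # last two digits of student ID
age_guesses = [66, 90, 82, 74, 70, 87, 79, 92, 71, 85, 68, 83]  # guesses for Gandhi's age at death

# Pearson correlation (Python 3.10+); anchoring predicts a positive value.
r = statistics.correlation(id_digits, age_guesses)
print(f"correlation between ID digits and age guesses: r = {r:.2f}")
```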

Inattentional Blindness — If you’ve taken an intro psych class, then you’re familiar with the “Invisible Gorilla” (for everyone else, sorry for spoiling). In the biz they call this “inattentional blindness” — when you aren’t paying attention, or your attention is focused on one task, you miss a lot of stuff.

Turns out this is super easy to replicate, especially a variant called “change blindness”, where you change something but people don’t notice. You can swap out whole people, and about half the time no one picks up on it.

Because it’s so easy, people love to replicate this effect. Like this replication from NOVA, or this British replication, or this replication from National Geographic. You can probably find a couple more on YouTube if you dig around a bit. 

This one isn’t all that easy to do at home, but if you can find a couple accomplices and you’re willing to play a prank on some strangers, you should be able to pull it off. 

(Or you can replicate it in yourself by playing I’m on Observation Duty.)

False Memory — For this task you need a small group of people. Have them put away their phones and writing tools; no notes. Tell them you’re doing a memory task — you’ll show them a list of words for 30 seconds, and you want them to remember as many words as possible. 

Then, show them the following list of words for 30 seconds or so: 

After 30 seconds, hide or take down the list. 

Then, wait a while for the second half of the task. If you’re doing this in a classroom, do the first step at the beginning of class, and the second half near the end.

Anyways, after waiting at least 10 minutes, show them these words and ask them, which of the words was on the original list? 

Most people will incorrectly remember “sleep” as being on the original list, even though, if you go back and check, it’s not. What’s going on here? Well, all of the words on the original list are related to sleep — sleep adjectives, sleep sounds, sleep paraphernalia — and this leads to a false memory that “sleep” was on the list as well. 

You can do the same thing for other words if you want — showing people a list of words like “sour”, “candy”, and “sugar” should lead to false memories of the word “sweet”. You can also read the list of words aloud instead of showing it on a screen for 30 seconds; you should get the same result either way.

Draw your own conclusions about what this tells us about memory, but the effect should be pretty easy to reproduce for yourself.

We don’t think all false memory findings in psychology bear out. We think some of them aren’t true, like the famous Loftus & Palmer (1974) study, which we think is probably bullshit. But we do think it’s clear that it’s easy to create false memories under the right circumstances, and you can do it in the classroom using the approach we describe above.

You can even use something like the inattentional blindness paradigms above to give people false memories about their political opinions. A little on the tricky side but you should also be able to replicate this one if you can get the magic trick right. And if this seems incredible, ridiculous, unbelievable — try it for yourself! 

Oh yeah, and here’s that bicycle: 

Predictions for 1950

[Previously in this series: Predictions for 2050, A Few More Predictions for 2050]

Happy New Year to all! Welcome to 1922, and welcome back to our column, Slime of the Times.

Last year Professor Erik Hoel of Tufts University wrote in his column about all the things that will change between now and the far-distant future year of 1950. But this is no pulp-magazine tall tale; Professor Hoel says that the best way to predict the future of tomorrow is to extend the burgeoning trends of today.

While it sounds impossibly far off, the truth is that 1950 is a mere 28 years away. Making predictions for 1950 based on what we see to-day is just like sitting in the Gay Nineties and predicting what the world would look like in 1920 — no mean task, but not impossible either.

To us this seemed like jolly good fun. So without further ado, here is our set of predictions for the distant future that is 1950.

You Can’t Keep ‘Em Down on the Farm 

By now you all know the 1919 hit song of wild acclaim, “How Ya Gonna Keep ’em Down on the Farm (After They’ve Seen Paree)?” Probably you have heard the recording by legendary vaudeville darling Nora Bayes. And maybe you know the single from last year, “In a Cozy Kitchenette Apartment” from Irving Berlin’s Music Box Revue, rhapsodizing about urban living. 

Well, Miss Nora and all the other songbirds are right. The rural problem is here and it’s here to stay — city drift is the trend of the next 30 years and beyond. Already more than half of all Americans live in cities, and that is not stopping any time soon. The agrarian America we know and love is coming to an end. 

Metropolitan apartment living will soon be normal and even, in time, respectable. The home as you know it will soon be out of date, and instead of living in a pleasant frame dwelling with a front yard and crimson rambler roses over the porch, your son or daughter will live in a huge apartment building, where among hundreds of cell-like cubicles, they will be known to their neighbors not by name, but as “50A” or “32B”.

Young people in the cities will eat from “delicatessens” and “cafeterias” rather than from the kitchen. Already there are more than fifty delicatessens in Baltimore, and they are spreading in most every major city. Meanwhile the so-called cafeterias bring Chicago ideas of mechanical efficiency to the American dinner service, resembling nothing so much as a factory assembly line.

Some of you may think that delicatessens are the emblems of a declining civilization, the source of all our ills, and the destroyer of the home. But to our children and our children’s children even this scourge will become unremarkable, whatever the consequences may be to the American family.

Yes, our great-grandchildren will eat sandwiches, a natural by-product of modern machine civilization, and never know what they are missing of their heritage.

The Servant Problem

The movement to these “efficiency apartments” will be spurred by many things, but one is the gradual but ever-increasing decline in the role of the domestic servant. The servant problem comes and goes, and if you read tips on how to hire and clothe them in the magazines, you might be convinced it is simply a seasonal concern. But it gets harder to find servants every year, and it will get harder still, until the servant as we know her disappears.

The middle class will soon abandon servants almost entirely. The very well-to-do might employ a maid, but she will not attend the household day and night, and they will have no cooks and certainly no chauffeur. If they have a maid, they might even share her with other families. By 1950, only the very oldest, richest families will employ live-in servants. English butlers and Scotch maids, so common today in the households of your more fortunate relations, will be a thing almost entirely of the past.

Just imagine it. The silence of the once-great household. No more bustle on the streets every Sunday, when the maids and footmen take their weekly day off. No more fine, uniformed chauffeurs in front of estates. But where would you house them to begin with, in the tiny apartments of the far future? 

The Nation will be Powered over the Wires 

All of us can remember the time, not so long ago, when electric power was rare, even a novelty. But soon this wonder will be common-place in the homes of all. Indeed, it is already coming not only to private homes but to public buildings. President Benjamin Harrison was the first to benefit from electricity in the White House, all the way back in 1891. In 1906, Grand Central Terminal in New York City was electrified as well.

By the end of this year, almost four out of every ten US households will have electric wiring, and before 1930, more than half of households in the nation will be electrified. By 1950, every public building, and all but the meanest house, will have the full benefits of modern electrical systems. Even the most rural parts of our great nation will shine with electric light.

The posters might look something like this

The Finest Delicacies at Any Time of Year

Imagine a technology that captures freshness, abolishes the seasons, and erases the limitations of geography. Imagine food out of season; peaches from South Africa and strawberries picked green and shipped around the world. Imagine a midwestern housewife serving her family fresh filet of sole. 

These qualities represent the cutting edge of culinary modernity, and all will soon be made reality through the incredible power of refrigeration. Refrigerated railroad cars will bring delicacies long-distance from any locale. Refrigerated silos will store them year-round. Whatever regional delicacies you please, wherever you are.

Say good-bye to ice-harvesters and iceboxes! Forget about going down to the pond with a pair of tongs and bringing back a dribbling piece of ice. When foods and dishes reach your home, they will be stored in a fully electrified home refrigeration unit. You have probably heard of or even seen the clunky gas-powered household refrigeration unit produced by General Electric, or the more recent Kelvinator put on the market by General Motors. 

To be frank, these models are ugly and they are expensive — the Kelvinator will set you back as much as a new car! But everyone knows there is money in refrigeration. In the coming decade, dozens more models and companies will enter the fray. Some will be powered by gas, some by kerosene, but the ultimate winners will be those that run on electricity. Home refrigeration units will become more and more affordable. They will become compact and sleek, until they are admired as objects of modern beauty. These things will soon be so completely nifty to look at, that merely to see one will be to have a passionate desire for one.

Advances in freezing foods will revolutionize American cuisine. Modern frozen foods are invariably soggy and lifeless, but scientific control over temperature will soon give us frozen dishes that preserve each food at the very peak of freshness. Peas, asparagus, and spinach, each vegetable as delicious as if they had just been bought from the farmer down the road, ready from the moment they are drawn from the freezer, with no preparation required, not even washing. Farm-fresh broilers, tender calves liver, halibut, and even frozen goose — meats, poultry, vegetables, and fruit.

By 1950, futuristic markets equipped with refrigeration technology on a massive scale will be the norm. Enter any town market and choose from a huge variety of neatly stacked cartons of frozen fruits and vegetables, meats, and seafood, all of uniform quality, tastefully arranged in great refrigerated display cases that run the entire length of the store. 

There will be another Great War with the Hun

In Germany they are already concerned about the depreciation of the German mark. Each additional payment to France, England, and the United States brings a flood of paper currency and makes the depreciation of the mark greater. Yet the London ultimatum shall be upheld, Germany will be destabilized, invigorating unhealthy, parasitical elements in Germany itself, and within a generation there will be another great war.

This war will be, if it can be believed, even worse than the great war we just concluded. Already we have seen the evolution of aircraft, tools of peace, into first machines for reconnaissance, and then into “fighters” and bombers. In the next war, great flotillas of aircraft will level the jewels of Europe. New and terrible weapons will make even the mustard gas seem as quaint as a musket.

This time the war will be truly great — a world war. China and Japan both fought in the last war, and have gotten a taste for it. Japan in particular grows hungry and bold after its victories over Russia. It desires nothing more than to be a great power, and will take advantage of any chaos to rival not only Russia but Germany, Great Britain, and perhaps even the United States.

However, the Ottomans will be gone, and will no longer be a major power. We would frankly be surprised if the Ottoman Empire lasts the rest of the year.

There will be another Great Depression

We all remember the hard times of the past two years, what will surely come to be called the Depression of 1920–1921. Many of you also remember the Panic of 1907 or Knickerbocker Crisis, when the breadlines in New York City grew to incredible lengths.

Now things seem to have stabilized, and the 1920s show every sign of being another long economic boom. Businesses are growing and factories are running full tilt, churning out line after line of dazzling new goods.

But we warn you that even in the world of tomorrow, expansion is followed by contraction, and we will see another Great Depression within a generation. It may even be worse — maybe this next downturn will be so bad that it will come to be called the Great Depression, and everyone will forget that there ever was a Great Depression of 1873.

We don’t remember this part of Teddy Roosevelt’s presidency, but we have to assume that the bears were part of a sound fiscal policy.

Business Girls

Many of us still carry in our minds psychological remnants of the age when the home and indeed the country was built upon masculine protection. But in reality, the world has already changed, and it is changing more rapidly all the time. A quarter of the American workforce is already made up of women, working outside the home as typists, switchboard operators, stenographers, waitresses, store clerks, factory hands, and bookkeepers.

Even now, there are some young couples where both the man and his bride hold down full-time jobs. (This is why they come to rely on the delicatessen.) When the next great war with the European powers comes — and come it will — more women will take on jobs left open by boys who are sent to the front. Old gentlemen may scoff, but the truth is that any woman who can use a kitchen mixer can learn to operate a drill. We will see women auxiliaries to our armed forces, women carpenters, perhaps even a woman riveter or some other such futuristic vision. 

The Nineteenth Amendment to the Constitution, in effect for only two years now, will change the face of American politics as much as the wars change the face of American labor. Before 1930 we shall see a woman senator, a woman representative, women mayors, and even women governors. Gradually women will enter the White House and serve in presidential cabinets. There shall be women diplomats. By 1950 Americans will have come to think nothing of a woman in the highest post in the land.

You Will Hear the Latest from New York and Chicago in the Comfort of your Drawing Room

It sounds like something out of a pulp magazine, but by 1950 there will be a radio in every home. Turning on the radio receiver will be as normal to your children as picking up a newspaper is to you.

You may already have heard of some of the early success stories, like KDKA in Pittsburgh, which you might know by its old call sign, 8XK. They have aired religious services, a speech by the great humanitarian Herbert Hoover, the Dempsey–Carpentier heavyweight boxing match, and just a few months ago, the first radio broadcast of a major league baseball game, the Pirates–Phillies game at Forbes Field. This is not the future — this is the present! You simply have not caught up yet to the incredible pace of advancements in radio.

Everything newspapers can do, radio will do better. And not only coverage of baseball games and boxing matches. Syndicated radio shows, like syndicated columns, but with voice and music. Radio plays, almost as good as going to the theater. News coverage, live from any city in the nation, or from around the world. Don’t read about the president’s speech in the paper; hear it in his own voice as if you were in Washington. 

The Newsroom of Tomorrow

Like a Lemon to a Lime, a Lime to a Lemon


We recently wrote a post about Maciej Cegłowski’s essay Scott And Scurvy, a fascinating account of how the cure for scurvy was discovered, lost, and then by incredible chance, discovered again. At the time we said that this essay is one of the most interesting things we’ve ever read, and that we hoped to write more about it in the future. It was, we do, and here we go.

In the other post, we talked about what the history of scurvy can teach us about contradictory evidence — stuff that appears to disprove a theory, even though it doesn’t always. In this post, we want to talk about something different: the power of concepts.

First we’re gonna show you how bad it can be if you don’t have concepts you need. Then we’re going to show you how bad it can be if you DO have concepts you DON’T need.

Diseases of Deficiency

As Cegłowski puts it:

There are several aspects of this ‘second coming’ of scurvy in the late 19th century that I find particularly striking … [one was] how difficult it was to correctly interpret the evidence without the concept of ‘vitamin’. Now that we understand scurvy as a deficiency disease, we can explain away the anomalous results that seem to contradict that theory (the failure of lime juice on polar expeditions, for example). But the evidence on its own did not point clearly at any solution. It was not clear which results were the anomalous ones that needed explaining away. The ptomaine theory made correct predictions (fresh meat will prevent scurvy) even though it was completely wrong.

We’re not quite sure if he’s right about the concept of “vitamin” — even James Lind seems to have thought the cure was something in certain foods, maybe the fact that they were so tart and acidic. More critical might be the problem of focusing on the noticeable aspect of citrus (it’s very tart) and missing the hidden reason it actually cures scurvy (it’s high in vitamin C). Not sure what advice we could give there except “don’t mistake flash for substance”, but that’s easier said than done.

But we do wonder about the concept of a deficiency disease in the first place. Even James Lind thought that scurvy was actually caused by damp air, and vegetable acids were just a way to fight back. Vegetable acids were thought to be cures, not essential nutrients. They were “antiscorbutic” like “antibiotic”. The concept of a deficiency disease doesn’t seem to have existed before the 1880s and got almost no mention until 1900, at least not under that name:

Without this concept, it does seem like doctors of the 19th century were missing an important tool in their mental toolbox for fighting disease. 

This reminds us of other problems in global medicine — maybe we should introduce the idea of a “contamination disease”. People are already familiar with this concept to a point — lead poisoning, arsenic poisoning, etc. — but people don’t look at a disease and think, maybe it’s from a contaminant. In fact, they often look at the symptoms of exposure to contaminants and think, that’s an (infectious) disease.

A good example is so-called Minamata disease. In 1956, in the city of Minamata, Japan, a pair of sisters presented with a set of disturbing symptoms, including convulsions. Soon the neighbors were showing signs as well. Doctors diagnosed an “epidemic of an unknown disease of the central nervous system”, which they called “Minamata disease”. They assumed it was contagious and took the necessary precautions. 

But soon they started hearing about mysterious cases of cats and birds showing similar symptoms, having convulsions or falling from the sky. Eventually they figured out “Minamata disease” was not contagious at all — the disease was methylmercury poisoning, the result of mercury compounds a local Chisso chemical factory was leaking into the harbor.

You might say, “Well it was not a disease at all; they were poisoned. If SMTM are right, then obesity isn’t a disease either; everyone has just been low-grade poisoned all at once.” We think this highlights the need for a deeper discussion about our categories!

“Disease” really does come from just “dis” “ease”. If you’re a doctor and someone comes to you, and they are not at ease, they are diseased, and that’s what you should care about. The disease might ultimately be bacterial, or viral, or an allergy, or a parasite, or the result of a deficiency, or the result of exposure to a harmful contaminant or poison, but it’s still a disease. For more discussion of this particular point, see here (also coincidentally about obesity; we didn’t stack the deck on this one, it’s from 2010).

(If we were being really strict, we would say that obesity is a symptom, because conditions like Cushing’s Syndrome and drugs like Haldol can cause it too. If one or more contaminants also cause obesity, then the result of that exposure is a contamination disease, with obesity as a symptom. For more discussion of THIS particular point, see here.)

Lemon Mold Lime Mold

One of the weirdest things Cegłowski describes is how back in the day, people used the words “lemon” and “lime” interchangeably to describe any citrus fruit, which they thought of as a single category:

The scheduled allowance for the sailors in the Navy was fixed at 1 oz. lemon juice with 1½ oz. sugar, served daily after 2 weeks at sea, the lemon juice being often called ‘lime juice’ and our sailors ‘lime juicers’. The consequences of this new regulation were startling and by the beginning of the nineteenth century scurvy may be said to have vanished from the British navy. In 1780, the admissions of scurvy cases to the Naval Hospital at Haslar were 1457; in the years from 1806 to 1810, they were two.

(As we’ll see, the confusion between lemons and limes would have serious repercussions.)

This ended up making a huge difference in the tale of the tragedy of scurvy cures:

When the Admiralty began to replace lemon juice with an ineffective substitute in 1860, it took a long time for anyone to notice. In that year, naval authorities switched procurement from Mediterranean lemons to West Indian limes. The motives for this were mainly colonial – it was better to buy from British plantations than to continue importing lemons from Europe. Confusion in naming didn’t help matters. Both “lemon” and “lime” were in use as a collective term for citrus, and though European lemons and sour limes are quite different fruits, their Latin names (citrus medica, var. limonica and citrus medica, var. acida) suggested that they were as closely related as green and red apples. Moreover, as there was a widespread belief that the antiscorbutic properties of lemons were due to their acidity, it made sense that the more acidic Caribbean limes would be even better at fighting the disease. 

In this, the Navy was deceived. Tests on animals would later show that fresh lime juice has a quarter of the scurvy-fighting power of fresh lemon juice. And the lime juice being served to sailors was not fresh, but had spent long periods of time in settling tanks open to the air, and had been pumped through copper tubing. A 1918 animal experiment using representative samples of lime juice from the navy and merchant marine showed that the ‘preventative’ often lacked any antiscorbutic power at all.

It’s worth focusing on one part of this passage in particular: “Both ‘lemon’ and ‘lime’ were in use as a collective term for citrus.” This seems to be the case. As far as we can tell, the word “citrus” wasn’t really used prior to 1880. It was probably introduced as a scientific term for the genus before slowly working its way into common usage. Before then, “lemon” dominated the conversation, and “lime” dominated ten times over:

Though note that many uses of “lime” probably refer to things like quicklime. 

Maybe it’s not surprising that it took the language a while to sort itself out, but it still seems surprising that your great-great-grandfather didn’t think to distinguish between two fruits that you can tell apart at a glance. Even so, we think there are a couple of reasons to be sympathetic.

The name stuff is confusing, but swapping out one citrus fruit for another seems understandable, even if it ended up being misguided. To Europeans at the time, the thing that stood out about limes AND lemons was how tart they were, so it’s not surprising that they thought that the incredible tartness of these fruits was critical to the role they played in treating scurvy. But sourness in citrus fruits generally comes from citric acid, not vitamin C / ascorbic acid (incidentally, this is ascorbic as in “antiscorbutic”). Unfortunately, they had no way of knowing that. 

The second reason to be sympathetic is this: People mixed up limes and lemons in the 1800s. You may laugh but actually you are mixing up citrus right now.

The lemon is a single species of fruit, Citrus limon. It’s a specific species of tree that gives a specific yellow fruit that is high in citric acid and high in vitamin C. If you go to the store and buy a lemon, you know what you’re getting.

(Well, mostly. The Wikipedia page for lemons has a section called “other citrus called ‘lemons’”, which lists six other citrus fruits that are also called lemons, like the rough lemon and the Meyer lemon. But besides this, a lemon is a lemon.)

There’s also this kind of lemon, but the British Admiralty didn’t have access to these back in the age of sail.

In comparison, the Wikipedia article on limes says,

There are several species of citrus trees whose fruits are called limes, including the Key lime (Citrus aurantiifolia), Persian lime, Makrut lime, and desert lime. … Plants with fruit called “limes” have diverse genetic origins; limes do not form a monophyletic group.

The very first section of the article is called, “plants known as ‘lime’”, which gives you a sense of how vague the name “lime” really is. The list they give includes the Persian lime, the Rangpur lime, the Philippine lime, the Makrut lime, the Key lime, four different Australian limes, and several things called “lime” that are not even citrus fruits, including the Spanish lime and two different plants called the wild lime. They also say:

The difficulty in identifying exactly which species of fruit are called lime in different parts of the English-speaking world (and the same problem applies to synonyms in other European languages) is increased by the botanical complexity of the citrus genus itself, to which the majority of limes belong. Species of this genus hybridise readily, and it is only recently that genetic studies have started to shed light on the structure of the genus. The majority of cultivated species are in reality hybrids, produced from the citron (Citrus medica), the mandarin orange (Citrus reticulata), the pomelo (Citrus maxima) and in particular with many lime varieties, the micrantha (Citrus hystrix var. micrantha).

This means there is not even a straight answer to a question like “how much vitamin C is in a lime?” — there are at least a dozen different fruits that are commonly called “limes”, they all contain different amounts of vitamin C, and many of them are not even related to each other.

On those remote pages it is written that limes are divided into (a) limes that belong to the Emperor, (b) embalmed limes, (c) limes that are trained, (d) suckling limes, (e) mermaid limes, (f) fabulous limes, (g) stray limes, (h) limes included in the present classification, (i) limes that tremble as if they were mad, (j) innumerable limes, (k) limes drawn with a very fine camel’s hair brush, (l) other limes, (m) limes that have just broken the flower vase, (n) limes which, from a distance, resemble flies.

The British Admiralty seems to have switched from lemons grown in Sicily to West Indian limes. You probably know these as Key limes, and in case the nomenclature isn’t complicated enough, they’re also called bartender’s limes, Omani limes, or Mexican limes. We’ll stick with “Key lime” because that’s probably the name you know, and because it makes us think of pie. Mmmm, pie.

The kind of limes you buy at the store, Persian limes, are a cross between Key limes and lemons. We can’t find any actual tests of the amount of vitamin C in Key limes, so we think all the published estimates of the amount of vitamin C in limes are probably from Persian limes.

We generally see numbers of about 50 mg/100 g vitamin C for lemons and about 30 mg/100 g for limes, presumably Persian limes. Since Persian limes are a cross between lemons and Key limes, Key limes probably have less than 30 mg/100 g vitamin C. Genetics isn’t this simple, but if we were to assume that Persian limes are the average of their forebears, then Key limes would contain about 10 mg/100 g vitamin C, less than a tomato. You need about 10 mg of vitamin C per day to keep from getting scurvy, so already we can see why this might be a problem.

Cooking reduces the vitamin C content of vegetables by about 40% (though of course this varies widely with specific conditions), so the 50 mg or so in a lemon would become about 30 mg after boiling, and the 30 mg in a lime would become about 18 mg after boiling. Lemon juice would be as good an antiscorbutic after boiling as Persian lime juice would be fresh, and Key limes seem like they would have less vitamin C than either, boiled or not.
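
Here’s that back-of-the-envelope arithmetic as a quick Python sketch. The figures are the rough approximations quoted above, not lab measurements, and the “Key limes are whatever makes Persian limes the average of their parents” step is the same crude guess we made a couple of paragraphs ago:

```python
# Rough vitamin C arithmetic, using the approximate figures quoted in the text.
vit_c_mg_per_100g = {"lemon": 50, "Persian lime": 30}
cooking_loss = 0.40           # boiling cuts vitamin C by roughly 40%
scurvy_threshold_mg = 10      # rough daily minimum to stave off scurvy

# Crude guess: if a Persian lime were just the average of a lemon and a Key lime,
# the Key lime would come out around 10 mg/100 g.
key_lime = 2 * vit_c_mg_per_100g["Persian lime"] - vit_c_mg_per_100g["lemon"]
print(f"Key lime (crude estimate): {key_lime} mg/100 g fresh")

for fruit, mg in vit_c_mg_per_100g.items():
    boiled = mg * (1 - cooking_loss)
    print(f"{fruit}: {mg} mg/100 g fresh, ~{boiled:.0f} mg/100 g boiled")

print(f"daily dose needed to avoid scurvy: about {scurvy_threshold_mg} mg")
```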

Persian limes also turn yellow as they ripen — you only think of them as green because farmers pick them and send them to the grocery store before they change colors. And of course, lemons are green in their early stages of growth. So like, so much for the “limes are green and lemons are yellow” thing.

Lovely fresh limes. Yes you read that right.

All these issues pale in comparison to the fact that citrus taxonomy is insane. Not only are limes not limes, it seems like nothing is really anything, or maybe anything is everything.

You walk into a supermarket and you think you recognize a bunch of Platonic fruits — oranges, clementines, lemons, limes, grapefruits, and so on. But when you do a Google image search for “citrus genetics”, you get diagrams like:

And this diagram:

Apparently orange genetics are so insane that even the person who made this diagram just gave up. “The citron was crossed with a lemon to make a sour orange and then uhhhhhh some stuff happened! and you got a sweet orange.”

And this diagram:

We notice there are unlabeled spaces on this Venn diagram — apparently the citrus cartels are holding out on us. Where’s my micrantha x maxima hybrid???

And this diagram, in which the Bene Gesserit attempt to breed the Kumquat Haderach. No, really.

The written material on the subject is, if anything, even more disheartening. Let’s look at a couple passages from the Wikipedia article on citrus taxonomy:

Citrus taxonomy is complex and controversial. Cultivated citrus are derived from various citrus species found in the wild. Some are only selections of the original wild types, many others are hybrids between two or more original species, and some are backcrossed hybrids between a hybrid and one of the hybrid’s parent species. Citrus plants hybridize easily between species with completely different morphologies, and similar-looking citrus fruits may have quite different ancestries. … Conversely, different-looking varieties may be nearly genetically identical, and differ only by a bud mutation.

The same common names may be given to different species, citrus hybrids or mutations. For example, citrus with green fruit tend to be called ‘limes’ independent of their origin: Australian limes, musk limes, Key limes, kaffir limes, Rangpur limes, sweet limes and wild limes are all genetically distinct. Fruit with similar ancestry may be quite different in name and traits (e.g. grapefruit, common oranges, and ponkans, all pomelo-mandarin hybrids). Many traditional citrus groups, such as true sweet oranges and lemons, seem to be bud sports, clonal families of cultivars that have arisen from distinct spontaneous mutations of a single hybrid ancestor. Novel varieties, and in particular seedless or reduced-seed varieties, have also been generated from these unique hybrid ancestral lines using gamma irradiation of budwood to induce mutations.

For more on using radiation to make new fruit, please refer to these talking dinosaurs.

In case that isn’t weird enough for you, there’s even a graft chimera citrus called the Bizzaria (really), which produces fruits that look like this: 

We’re at the Florentine citron. We’re at the sour orange. We’re at the…

On limes in particular, the same page says:

Limes: A highly diverse group of hybrids go by this name. Rangpur limes, like rough lemons, arose from crosses between citron and mandarin. The sweet limes, so-called due to their low acid pulp and juice, come from crosses of citron with either sweet or sour oranges, while the Key lime arose from a cross between a citron and a micrantha.

All of these hybrids have in turn been bred back with their parent stocks or with other pure or hybrid citrus to form a broad array of fruits. Naming of these is inconsistent, with some bearing a variant of the name of one of the parents or simply another citrus with superficially-similar fruit, a distinct name, or a portmanteau of ancestral species.

While most other citrus are diploid, many of the Key lime hybrid progeny have unusual chromosome numbers. For example, the Persian lime is triploid, deriving from a diploid Key lime gamete and a haploid lemon ovule. A second group of Key lime hybrids, including the Tanepao lime and Madagascar lemon, are also triploid but instead seem to have arisen from a backcross of a diploid Key lime ovule with a citron haploid gamete. The “Giant Key lime” owes its increased size to a spontaneous duplication of the entire diploid Key lime genome to produce a tetraploid. [Editor’s note: uhhhhh]

Wikipedia tells us this is a “lumia”. W-what is that? We don’t know, but Wikipedia assures us that “like a citron, it can grow to a formidable size.”

Pretty much every citrus page on Wikipedia has shit like this, truly enough to drive a man mad. You wander onto Wikipedia trying to find out what in god’s name a lumia is, and soon you are reading this: “A recent genomic analysis of several species commonly called ‘lemons’ or ‘limes’ revealed that the various individual lumias have different genetic backgrounds. The ‘Hybride Fourny’ was found to be an F1 hybrid of a citron-pomelo cross, while the ‘Jaffa lemon’ was a more complex cross between the two species, perhaps an F2 hybrid. The Pomme d’Adam arose from a citron-micrantha cross, while two other lumias, the ‘Borneo’ and ‘Barum’ lemons, were found to be citron-pomelo-micrantha mixes.” Lovecraft, eat your heart out. 

Mr Lovecraft might also enjoy this lovely citro-AAAAAAH

This is the much deeper problem that the history of scurvy reveals. In science, you need tools you can trust. You need to have the right equipment, the right study design, and the right analysis techniques — but you also need the right concepts.

Most of us are trained to calibrate our equipment and to double-check our experimental designs, but how often do we reconcile our concepts? Back in the 1800s, they trusted the terms “lemon” and “lime” to be relevant, to be reliable, to be meaningful — and to be interchangeable, to mean the same thing as each other. But they were, all of them, deceived.

This will continue to be a problem forever. We distinguish between lemons and limes today, and we’re better off for it, but we aren’t safe and can’t afford to forget this problem. “Lime” is still considered a perfectly good tool, and if you were going to do a study on whether limes are good for your heart or something, no one except for citrus geneticists would think anything of it.

But “lime”, as we have hopefully convinced you today, is not a good category at all! It’s not a good tool. You can’t trust it. Yet the assumption that “lime” is a perfectly normal category is so deeply embedded that you never realized it was an assumption.

Evaluating simple propositions like “limes cure scurvy” depends on accepting that “limes”, “scurvy”, and even “cure” are coherent and meaningful concepts. But they may not be!

The TRUE sense in which reality is very weird is that words and concepts you use every day and take entirely for granted may be just as incoherent as the term “lime”. Concepts you think of as normal may some day seem as crazy as using the words “lemon” and “lime” interchangeably for all citrus fruits. We can pretty much guarantee that this will happen for something.

In our last post we described “splitting” as the practice of coming up with weird special cases or new distinctions between categories in the face of contradictory evidence. Splitting concepts is especially high-stakes, in part because concepts are so powerful. If there is a confusion of categories, then all the research built on those categories will be hopelessly confused as well.

But if you split the categories in a better way, you will suddenly be left facing nothing but low-hanging fruit — be they lemons, limes, other limes, grapefruits, other other limes, clementines, pomelos, lumias, etrogs, etc.

Double Book Review: Confessions of an Ad Man & The Way of the General

I.

David Ogilvy’s Confessions of an Advertising Man opens:

As a child I lived in Lewis Carroll’s house in Guildford. My father, whom I adored, was a Gaelic-speaking Highlander, a classical scholar and a bigoted agnostic. One day he discovered that I had started going to church secretly.

“My dear old son, how can you swallow that mumbo-jumbo? It is all very well for servants but not for educated people. You don’t have to be a Christian to behave like a gentleman!”

My mother was a beautiful and eccentric Irishwoman. She disinherited me on the ground that I was likely to acquire more money than was good for me without any help from her. I could not disagree.

For those of you who are just tuning in, David Ogilvy was a copywriter who made his way to advertising stardom. He founded the advertising firm Ogilvy & Mather (now known simply as “Ogilvy”), and in 1962, Time Magazine called him “the most sought-after wizard in today’s advertising industry”. People still call him “the Father of Advertising” and “the King of Madison Avenue” to this day. Wikipedia describes him simply as a “British advertising tycoon”. 

It’s immediately obvious that Ogilvy is an engaging writer. He knows this, because he’s cultivated it. From the start he’s talking about the value of writing, and he never strays too far from the topic. You can tell it’s important to him. “We like reports and correspondence to be well-written, easy to read – and short,” he says. “We are revolted by pseudo-academic jargon.” Later he says, “American businessmen are not taught that it is a sin to bore your fellow creatures.”

The writing shines brightest in his personal narratives — his statistics training at Princeton, his time as a door-to-door salesman, dropping out of Oxford to go to work as an apprentice chef at the Hotel Majestic in Paris, trying to avoid the storm of forty-seven raw eggs thrown across the kitchen at his head (“scoring nine direct hits”) by the Hotel’s chef potager who had grown impatient with Ogilvy’s constant “raids on his stock pot in search of bones for the poodles of an important client” — and so on.

The Hotel Majestic, now known as The Peninsula Paris

But his business advice is equally gripping — hiring and firing, how to get clients, how to keep clients, how to be a good client, how to write ads for television, and so on. This is striking, because most business advice is tedious and bad. 

His advice escapes the usual tedium partly because it is delivered in the writing style he recommends — easy to read, short, and direct. But another part of it is that his advice has something of a timeless quality to it. So after the quality of the writing, the second thing we noticed is that Ogilvy strongly reminds us of the 3rd-century Chinese statesman, mystic, and military strategist Zhuge Liang.

II.  

Zhuge Liang, also known by his courtesy name Kongming, or his nickname Wolong (meaning “Crouching Dragon”), was born in 181 CE, in eastern China. He grew to become a scholar so highly regarded that his surname alone is synonymous with intelligence. In China, calling someone “Zhuge” is like calling someone “Einstein” in the west, except less likely to be sarcastic. 

Zhuge’s parents died when he was very young, and he was raised by one of his father’s cousins. This was during the extremely unstable years leading up to the Three Kingdoms period, when war was tearing the empire apart, and famines were so extreme that whole provinces resorted to cannibalism. While Zhuge was still a teenager, he was forced to move to a town in central China.

There he grew into a man of great insight and intelligence. Eventually he was discovered by Liu Bei, a distant relation of the Emperor and one of the great men of the age. Liu Bei was an accomplished general, but he had a reputation for being direct and honorable to a fault. Zhuge, on the other hand, already had a reputation for trickery and cunning. He shared with Liu Bei an idea that came to be known as the Longzhong Plan, a plan which eventually led to Liu Bei being crowned emperor of the new state of Shu Han. Zhuge is a central character in the massive historical classic Romance of the Three Kingdoms, and shrines in his honor still dot China 1,800 years later.

The parallels between Ogilvy and Zhuge are surprisingly strong. Both were extremely well-read in a wide variety of topics, but neither of them were snobs. Zhuge could quote classics like the Analects of Confucius and Sun Tzu’s The Art of War, but also enjoyed reciting folk songs from his hometown. In his book, Ogilvy references the ancient Greek orator Demosthenes and quotes statesmen like Winston Churchill, but he also quotes a stanza sung by The Pirate King from Gilbert and Sullivan’s Pirates of Penzance.

David Ogilvy

When Liu Bei recruited Zhuge Liang, Zhuge was working as a subsistence farmer in Longzhong valley. Fifteen years later, he was appointed Regent to Liu Bei’s son, the young Emperor of Shu Han, when Liu Bei died. 

“Fifteen years ago,” writes Ogilvy at the beginning of Chapter Two, “I was an obscure tobacco farmer in Pennsylvania. Today I preside over one of the best advertising agencies in the United States, with billings of $55,000,000 a year, a payroll of $5,000,000, and offices in New York, Chicago, Los Angeles, San Francisco, and Toronto.”

Farming wasn’t the only profession they shared. After finishing his book, we were surprised to learn that Ogilvy also worked as a military strategist. In World War II he served with British Intelligence, where he applied the insights he had gained from studying polling (with George Gallup himself) to secret intelligence and propaganda.

Takeshi Kaneshiro as Zhuge Liang, in John Woo’s Red Cliff

Zhuge Liang has a couple surviving works to his name. His longest work is called The Way of the General, so that’s the main book we draw on today. We also consider his two memorials known as the Chu Shi Biao, as well as a letter he wrote to his son, called Admonition to His Son. Finally, as The Way of the General is sometimes considered to be a sort of commentary on Sun Tzu’s The Art of War, we will occasionally reference that work as well. 

Similarly, Ogilvy has not only Confessions of an Advertising Man, but also a fascinating manual, The Theory and Practice of Selling the AGA Cooker, which Fortune magazine called “the finest sales instruction manual ever written.” With an endorsement like that, you know we will be referring to this piece.

III.

Let’s start with the writing. The two men have a very similar style. Both books are clearly written. But while the language they use is normally plain, both men have an occasional tendency to dip into wild metaphors. 

Ogilvy describes founders who get rich and let their creative fires go out as “extinct volcanoes”, and refers to his set of techniques for writing great campaigns as “my magic lantern.” Meanwhile, Zhuge opens his book with the following imagery: “If the general can hold the authority of the military and operate its power, he oversees his subordinates like a fierce tiger with wings, flying over the four seas, going into action whenever there is an encounter.” On the other hand: “If the general loses his authority and cannot control the power, he is like a dragon cast into a lake.” 

“Those who would be military leaders must have loyal hearts, eyes and ears, claws and fangs. Without people loyal to them, they are like someone walking at night, not knowing where to step. Without eyes and ears, they are as though in the dark, not knowing how to proceed. Without claws and fangs, they are like hungry men eating poisoned food, inevitably to die,” says Zhuge, while Ogilvy says, “I prefer the discipline of knowledge to the anarchy of ignorance. We pursue knowledge the way a pig pursues truffles. A blind pig can sometimes find truffles, but it helps to know that they grow in oak forests.”

Sometimes these metaphors veer into the farcical. “Advertising is a business of words,” writes Ogilvy, “but advertising agencies are infested with men and women who cannot write. They cannot write advertisements, and they cannot write plans. They are helpless as deaf mutes on the stage of the Metropolitan Opera.” Zhuge strikes a similar note in writing, “If the rulership does not give [generals] the power to reward and punish, this is like tying up a monkey and trying to make it cavort around, or like gluing someone’s eyes shut and asking him to distinguish colors.”

Both of them make a lot of lists. Zhuge has lists of five skills, four desires, fifteen avenues of order, and eight kinds of decadence in generalship (“Seventh is to be a malicious liar with a cowardly heart.”). Ogilvy has lists of ten criteria for accounts, fourteen devices for when you need to use very long copy, and twenty-two commandments for advertising food products (“The larger your food illustration, the more appetite appeal.”).

These lists are good enough that you could easily turn them into a series of Buzzfeed-style listicles: “8 Kinds of Decadence in Generalship – Number 7 will SHOCK YOU”

Both men sometimes use little parables to drive home their points. In one section, Zhuge lists a number of ancient kings and their approaches to winning wars with the least possible violence. Ogilvy sometimes combines a parable with one of his vivid metaphors, and ends up sounding rather a lot like a Chinese courtier himself:  

When Arthur Houghton asked us to do the advertising for Steuben, he gave me a crystal-clear directive: “We make the best glass in the world. Your job is to make the best advertising.”

I replied, “Making perfect glass is very difficult. Even the Steuben craftsmen produce some imperfect pieces. Your inspectors break them. Making perfect advertisements is equally difficult.”

Six weeks later I showed him the proof of our first Steuben advertisement. It was in color, and the plates, which had cost $1,200, were imperfect. Without demur, Arthur agreed to let me break them and make a new set. For such enlightened clients it is impossible to do shoddy work.

Both books hit their key themes over and over, in slightly different guises each time. They look at the same few ideas repeatedly, from different perspectives. Continuous focus on the fundamentals highlights what really matters, and maybe this is why much of their advice ends up sounding so similar.

Ambition

For these two men, the root of their advice, and probably the root of their similarity, is that both of them are enormously ambitious. “Aspirations should remain lofty and far-sighted,” writes Zhuge. Despite his Scottish blood, Ogilvy sounds very American when he says, “Don’t bunt. Aim out of the park.” Then he sounds kind of like Zhuge again, when he finishes with, “Aim for the company of immortals.”

Ambition gets a bad rap these days, but these two aren’t talking about accumulating piles of money, or being as big or as famous as humanly possible. Ambition means doing something meaningful with your life. “I have no ambition to preside over a vast bureaucracy,” says our Ad Man. “That is why we have only nineteen clients. The pursuit of excellence is less profitable than the pursuit of bigness, but it can be more satisfying.”

Zhuge goes out of his way to specifically mention fighting injustice. “If your will is not strong,” he says, “if your thought does not oppose injustice, you will fritter away your life stuck in the commonplace, silently submitting to the bonds of emotion, forever cowering before mediocrities, never escaping the downward flow.”

And this is the other side of ambition, maybe the side that really matters: freedom from fear. Zhuge says, “The years run off with the hours, aspirations flee with the years. Eventually one ages and collapses. What good will it do to lament over poverty?” You only get one life and it’s going to end someday. You’re going to lose it all no matter what, so why not be ambitious? The alternative is cowering before mediocrity.

Many people are afraid of failing, or worse, the embarrassment that they imagine comes with failure. We say “imagine” because, once you try it, you’ll find that most of the time, the embarrassment never comes. And you can’t fight injustice, let alone make excellent ads, if you’re hung up on the idea of failing.

Hard Work & Relaxation

To the short-sighted, effort and relaxation seem like opposites. It’s easy to think there are two categories of people: those who work very hard for very long hours (and presumably burn out) and those who are slackers (and presumably go nowhere). In certain rare cases people talk about aiming for “work-life balance”, a sort of purgatorial or limbo-like concept that combines the worst of both worlds — the inability to get anything done at work with the inability to have anything more than the most superficial personal life.

Ogilvy and Zhuge understand that this isn’t how it works. Work and rest are complements, and they advocate a life where you both work extremely hard and place a high premium on relaxation. 

Maybe it’s not surprising to hear that a Madison Avenue executive worked long hours, but Ogilvy really did work some long hours. He reminisces about his time working for the head chef at the Parisian Hotel Majestic, who worked seventy-seven hours a week, and says, “That is about my schedule today.” When describing what he admires, Ogilvy comes right out and says, “It is more fun to be overworked than to be underworked.” Elsewhere he says, “I believe in the Scottish proverb: ‘Hard work never killed a man.’ Men die of boredom, psychological conflict and disease. They never die of hard work.”

Zhuge mentions some long hours himself. “One who rises early in the morning and retires late at night,” he says, “is the leader of a hundred men.” He kind of makes a point of it. “Generals do not say they are thirsty before the soldiers have drawn from the well,” he says. “Generals do not say they are hungry before the soldiers’ food is cooked; generals do not say they are cold before the soldiers’ fires are kindled; generals do not say they are hot before the soldiers’ canopies are drawn.”

These are grueling requirements, but much of it seems to spring from the noble desire to not expect anything from others that you wouldn’t do yourself. Zhuge says, “Lead them into battle personally, and soldiers will be brave.” In explaining his own long hours, Ogilvy says, “I figure that my staff will be less reluctant to work overtime if I work longer hours than they do.”

This seems like more than hustle culture. It’s closely related to the drive for excellence. “From morning to night we sweated and shouted and cursed and cooked,” says Ogilvy of his time at the Hotel Majestic. “Every man jack was inspired by one ambition: to cook better than any chef had ever cooked before.”

In warfare, excellence can save thousands of lives. It is somewhat more prosaic in advertising, but we think Ogilvy is sincere when he promises his employees, “I try to make sufficient profits to keep you all from penury in old age,” and excellence in advertising helps him make good on that promise.

The commitment to hard work is important in part because hard work is how you make something look easy. The height of woodworking is when you cannot see the seams, and the height of advertising is when you cannot see the ad:

A good advertisement is one which sells the product without drawing attention to itself. It should rivet the reader’s attention on the product. Instead of saying “What a clever advertisement”, the reader says “I never knew that before. I must try this product.”

It is the professional duty of the advertising agent to conceal his artifice. When Aeschines spoke, they said, “How well he speaks.” But when Demosthenes spoke, they said, “Let us march against Philip.” I’m for Demosthenes.

To our ear, this sounds almost exactly like the following passage from The Art of War:

To see victory only when it is within the ken of the common herd is not the acme of excellence. Neither is it the acme of excellence if you fight and conquer and the whole Empire says, “Well done!” To lift an autumn hair is no sign of great strength; to see the sun and moon is no sign of sharp sight; to hear the noise of thunder is no sign of a quick ear.

What the ancients called a clever fighter is one who not only wins, but excels in winning with ease. Hence his victories bring him neither reputation for wisdom nor credit for courage. He wins his battles by making no mistakes. Making no mistakes is what establishes the certainty of victory, for it means conquering an enemy that is already defeated. 

While we think Ogilvy is more like Zhuge Liang than Sun Tzu, Confessions of an Advertising Man might be more like The Art of War than The Way of the General. Both are about the same length. The physical books are about the same size. Both are divided up into a modest number of chapters — 11 chapters for Confessions, and 13 for The Art of War. In both books, each chapter is devoted to a specific topic, like “How to Keep Clients”, “Variation in Tactics”, “How to Rise to the Top of the Tree”, “Laying Plans”, “How to Build Great Campaigns”, “The Use of Spies”, and “Attack by Fire”.

Zhuge and Ogilvy both stress the importance of relaxation as an explicit complement to their focus on hard work and long hours. In a letter to his son where he warns against being lazy, Zhuge also says:

The practice of a cultivated man is to refine himself by quietude and develop virtue by frugality. Without detachment, there is no way to clarify the will; without serenity, there is no way to get far.

Study requires calm, talent requires study. Without study there is no way to expand talent; without calm there is no way to accomplish study.

Ogilvy also likes to study, but he tends to think of it as “homework”. His true love is vacations, which he describes like so:

I hear a great deal of music. I am on friendly terms with John Barleycorn. I take long hot baths. I garden. I go into retreat among the Amish. I watch birds. I go for long walks in the country. And I take frequent vacations, so that my brain can lie fallow—no golf, no cocktail parties, no tennis, no bridge, no concentration; only a bicycle.

Zhuge makes it clear that calm is needed for study, so that you can increase your talents. Ogilvy is equally clear that he takes vacations because he needs them to be creative:

The creative process requires more than reason. … I am almost incapable of logical thought, but I have developed techniques for keeping open the telephone line to my unconscious, in case that disorderly repository has anything to tell me. …

While thus employed in doing nothing [on vacation], I receive a constant stream of telegrams from my unconscious, and these become the raw material for my advertisements.

Both men emphasize relaxation because they believe it will help them be more productive. You may see this as dysfunctional; if so, it’s telling that Ogilvy agrees with you. “If you prefer to spend all your spare time growing roses or playing with your children, I like you better,” he says, “but do not complain that you are not being promoted fast enough.”

But there’s also an interesting point to be made. Even if productivity is the only thing you care about (let’s hope it’s not, but even so), you still need lots of calm and rest to make it happen. Working long hours can be fine if that’s what you want, but people who work all the time are doing it wrong. 

It’s also worth noting that the two of them think about creativity in much the same terms:

Creative people are especially observant, and they value accurate observation (telling themselves the truth) more than other people do. They often express part-truths, but this they do vividly; the part they express is the generally unrecognized; by displacement of accent and apparent disproportion in statement they seek to point to the usually unobserved. They see things as others do, but also as others do not.

And:

An observant and perceptive government is one that looks at subtle phenomena and listens to small voices. When phenomena are subtle they are not seen, and when voices are small they are not heard; therefore an enlightened leader looks closely at the subtle and listens for the importance of the small voice. This harmonizes the outside with the inside, and harmonizes the inside with the outside; so the Way of government involves the effort to see and hear much.

Recruiting Great People

Zhuge and Ogilvy had different sorts of ambitions. Ogilvy wanted to be a great chef, then he wanted to make the best advertisements. Somewhere in between he wanted to be a tobacco farmer. Zhuge wanted to fight injustice, lower the people’s taxes, prevent government corruption, and (depending on the version of the story) embarrass Zhou Yu.

But despite these differences in focus, both of them agree that the highest form of ambition is to work with great people. The trouble with amazing people, though, is: how do you find them? This question is at least as old as Zhuge’s time, probably much older, and both authors take it very seriously.

Ogilvy tells us that he has talked to some psychologists who have been working on the problem of creativity. But, he tells us, they have not yet caught up to his approach:

While I wait for Dr. Barron and his colleagues to synthesize their clinical observations into formal psychometric tests, I have to rely on more old-fashioned and empirical techniques for spotting creative dynamos. Whenever I see a remarkable advertisement or television commercial, I find out who wrote it. Then I call the writer on the telephone and congratulate him on his work. A poll has shown that creative people would rather work at Ogilvy, Benson & Mather than at any other agency, so my telephone call often produces an application for a job.

I then ask the candidate to send me the six best advertisements and commercials he has ever written. This reveals, among other things, whether he can recognize a good advertisement when he sees one, or is only the instrument of an able supervisor. Sometimes I call on my victim at home; ten minutes after crossing the threshold I can tell whether he has a richly furnished mind, what kind of taste he has, and whether he is happy enough to sustain pressure.

Zhuge has similar tricks. “Hard though it be to know people,” says Zhuge, “there are ways.” He doesn’t recommend visiting your prospective hires at home; instead, he suggests other situations you can put them in, to test their personalities. In characteristic fashion, he gives us a list:

First is to question them concerning right and wrong, to observe their ideas.

Second is to exhaust all their arguments, to see how they change.

Third is to consult with them about strategy, to see how perceptive they are.

Fourth is to announce that there is trouble, to see how brave they are.

Fifth is to present them with the prospect of gain, to see how modest they are.

Sixth is to give them a task to do within a specific time, to see how trustworthy they are.

Ogilvy goes a step further — not only does he give advice on how ad agencies can take the measure of potential employees, he lays out advice on how clients (that is, businesses) can take the measure of a potential ad agency! In spelling it out, he practically reiterates Zhuge’s list:

Invite the chief executive from each of the leading contenders to bring two of his key men to dine at your house. Loosen their tongues. Find out if they are discreet about the secrets of their present clients. Find out if they have the spine to disagree when you say something stupid. Observe their relationship with each other; are they professional colleagues or quarrelsome politicians? Do they promise you results which are obviously exaggerated? Do they sound like extinct volcanoes, or are they alive? Are they good listeners? Are they intellectually honest?

Above all, find out if you like them; the relationship between client and agency has to be an intimate one, and it can be hell if the personal chemistry is sour.

The most specific piece of advice the two authors agree on is where to find great people. “We receive hundreds of job applications every year,” Ogilvy admits. “I am particularly interested in those which come from the Middle West. I would rather hire an ambitious young man from Des Moines than a high-priced fugitive from a fashionable agency on Madison Avenue.” 

They agree that great people usually come from obscurity. “For strong pillars you need straight trees; for wise public servants you need upright people,” says Zhuge. “Straight trees are found in remote forests; upright people come from the humble masses. Therefore when rulers are going to make appointments they need to look in obscure places.” And apparently, this practice goes back pretty far. “Ancient kings are known to have hired unknowns and nobodies,” says Zhuge, “finding in them the human qualities whereby they were able to bring peace.”

Maybe these authors both feel this way because both of them started out in obscurity. But then again, here we are reading their books approximately 60 and 1,800 years later, so maybe they’re right. 

This is how Zhuge describes himself:

I was of humble origin, and used to lead the life of a peasant in Nanyang. In those days, I only hoped to survive in such a chaotic era. I did not aspire to become famous among nobles and aristocrats. The Late Emperor did not look down on me because of my background. He lowered himself and visited me thrice in the thatched cottage, where he consulted me on the affairs of our time. I was so deeply touched that I promised to do my best for him. 

Driving the point home is this memo Ogilvy sent to one of his partners in 1981:

Will Any Agency Hire This Man? 

He is 38, and unemployed. He dropped out of college. 

He has been a cook, a salesman, a diplomatist and a farmer. 

He knows nothing about marketing and has never written any copy.

He professes to be interested in advertising as a career (at the age of 38!) and is ready to go to work for $5,000 a year. 

I doubt if any American agency will hire him.

However, a London agency did hire him. Three years later he became the most famous copywriter in the world, and in due course built the tenth biggest agency in the world. 

The moral: it sometimes pays an agency to be imaginative and unorthodox in hiring.

In case you can’t tell, he is describing himself.

Integrity

When Zhuge and Ogilvy talk about greatness, they’re not just talking about skill. In fact, skill comes second, and a distant second at that! Without integrity, without virtue, skill means nothing. 

“I admire people with first-class brains, because you cannot run a great advertising agency without brainy people,” says Ogilvy. “But brains are not enough unless they are combined with intellectual honesty.” Zhuge quotes Confucius as saying, “People may have the finest talents, but if they are arrogant and stingy, their other qualities are not worthy of consideration.”

Ogilvy doesn’t pull his punches, here or indeed ever. “I despise toadies who suck up to their bosses,” he says. “They are generally the same people who bully their subordinates. … I admire people who hire subordinates who are good enough to succeed them. I pity people who are so insecure that they feel compelled to hire inferiors as their subordinates.” 

A good leader looks to their team for counsel — these people were recruited for a reason! “Those who consider themselves lacking when they see the wise, who go along with good advice like following a current, who are magnanimous yet able to be firm, who are uncomplicated yet have many strategies,” says Zhuge, “are called great generals.”

You don’t expect much personal virtue from Madison Avenue, but Ogilvy really seems to feel strongly about this one:

I admire people who build up their subordinates, because this is the only way we can promote from within the ranks. I detest having to go outside to fill important jobs, and I look forward to the day when that will never be necessary.

I admire people with gentle manners who treat other people as human beings. I abhor quarrelsome people. I abhor people who wage paper-warfare. The best way to keep the peace is to be candid. 

Integrity is especially important in leadership — “for what is done by those above,” says Zhuge, “is observed by those below.” Here especially, the two leaders exhibit their belief that they should not expect anything of others that they are not prepared to demonstrate themselves. “To indulge oneself yet instruct others is contrary to proper government,” says Zhuge. “To correct oneself and then teach others is in accord with proper government. … If [leaders] are not upright themselves, their directives will not be followed, resulting in disorder.”

Ogilvy gives more detail. “I try to be fair and to be firm,” he says, “to make unpopular decisions without cowardice, to create an atmosphere of stability, and to listen more than I talk.” This is in some ways a very Confucian perspective, that a leader owes their subordinates exemplary behavior. “A policy of instruction and direction means those above educate those below,” says Zhuge, “not saying anything that is unlawful and not doing anything that is immoral.”

Exceptional integrity means understanding that you have a commitment to the people who work for you. Not the same commitment they have to you — more of a commitment.

Zhuge paraphrases Confucius as saying, “an enlightened ruler does not worry about people not knowing him, he worries about not knowing people. He worries not about outsiders not knowing insiders, but about insiders not knowing outsiders. He worries not about subordinates not knowing superiors, but about superiors not knowing subordinates. He worries not about the lower classes not knowing the upper classes, but about the upper classes not knowing the lower classes.”

“In the early days of our agency I worked cheek by jowl with every employee; communication and affection were easy,” says Ogilvy. “But as our brigade grows bigger I find it more difficult. How can I be a father figure to people who don’t even know me by sight?” If Confucius was right, I guess this makes Ogilvy an enlightened ruler.

“It is important to admit your mistakes,” Ogilvy tells us, “and do so before you are charged with them. Many clients are surrounded by buckpassers who make a fine art of blaming the agency for their own failures. I seize the earliest opportunity to assume the blame.” 

But it’s not all tactics — you also want to earn the respect of the people you work with. “If you are brave about admitting your mistakes to your clients and your colleagues, you will earn their respect. Candor, objectivity and intellectual honesty are a sine qua non for the advertising careerist.” 

Being respected does happen to be good for business, but it’s also important for your self-worth as a person. Ogilvy offers a few conspicuous cases where he decided to act honorably, even though it was against his business interests:

Several times I have advised manufacturers who wanted to hire our agency to stay where they were. For example, when the head of Hallmark Cards sent emissaries to sound me out, I said to them, “Your agency has contributed much to your fortunes. It would be an act of gross ingratitude to appoint another agency. Tell them exactly what it is about their service which you now find unsatisfactory. I am sure they will put it right. Stay where you are.” Hallmark took my advice.

When one of the can companies invited us to solicit their account, I said, “Your agency has been giving you superb service, in circumstances of notorious difficulty. I happen to know that they lose money on your account. Instead of firing them, reward them.”

Exceptional integrity means exceptional humanity. “One whose humanitarian care extends to all under his command, whose trustworthiness and justice win the allegiance of neighboring nations, who understands the signs of the sky above, the patterns of the earth below, and the affairs of humanity in between, and who regards all people as his family,” says Zhuge, “is a world-class leader, one who cannot be opposed.”

Exceptional humanity in advertising — in 1963 no less! — looks like this:

Some of our people spend their entire working lives in our agency. We do our damnedest to make it a nice place to work. 

We treat our people like human beings. We help them when they are in trouble–with their jobs, with illness, with alcoholism, and so on.

We help our people make the best of their talents, investing an awful lot of time and money in training–like a teaching hospital. 

Our system of management is singularly democratic. We don’t like hierarchical bureaucracy or rigid pecking orders.

We give our executives an extraordinary degree of freedom and independence. 

We like people with gentle manners. Our New York office gives an annual award for “professionalism combined with civility.” 

We like people who are honest in argument, honest with clients, and above all, honest with consumers.

We admire people who work hard, who are objective and thorough.

We despise office politicians, toadies, bullies and pompous asses. We abhor ruthlessness.

The way up the ladder is open to everybody. We are free from prejudice of any kind — religious prejudice, racial prejudice or sexual prejudice. 

We detest nepotism and every other form of favouritism. In promoting people to top jobs, we are influenced as much by their character as anything else.

And in case that isn’t scrupulous enough for you, there’s at least one product that Ogilvy entirely refuses to advertise: politicians. “The use of advertising to sell statesmen,” he says, “is the ultimate vulgarity.”

V.

Zhuge and Ogilvy focus on different things. Zhuge has a section on grieving for the dead, Ogilvy has a chapter on writing television commercials. But these differences are superficial. Both men are animated by the same spirit. Both of them are infinitely ambitious — but it’s not a callous ambition. Their ambition is to be honest, relaxed, creative, and humane. 

We think these men would have been good friends. It’s tragic that they were born 1,730 years and several thousand miles apart. But it’s to our advantage that we get to read both books and see that these two authors are drawing from the same well. The best wisdom is timeless.

Reality is Very Weird and You Need to be Prepared for That

I. 

Maciej Cegłowski’s essay Scott And Scurvy is one of the most interesting things we’ve ever read. We keep coming back to it — and we hope to write more about it in the future — but today we want to start with just how weird the whole thing is.

Scott and Scurvy tells the true history of scurvy, a horrible and dangerous disease. Scurvy is the result of a vitamin C deficiency — if you’re a sailor or something, eating preserved food for months on end, you eventually run out of vitamin C and many horrible things start happening to your body. If this continues long enough, you die. But at any point, consuming even a small amount of vitamin C, present in most fresh foods, will cure you almost immediately. 

We can’t do the full story justice (read the original essay, seriously), but just briefly: The cure was repeatedly discovered and lost by different crews of sailors at different points in time. Then in 1747, James Lind tried a bunch of treatments and found that citrus was more or less a miracle cure for the disease. Even so, it took until 1799, more than 50 years, for citrus juice to become a staple in the Royal Navy. 

Instead of diagrams depicting the horrifying symptoms of scurvy, please enjoy this picture of James Lind shoving a whole lemon into some unfortunate sailor’s mouth.

Originally, the Royal Navy was given lemon juice, which works well because it contains a lot of vitamin C. But at some point between 1799 and 1870, someone switched out lemons for limes, which contain a lot less vitamin C. Worse, the lime juice was pumped through copper tubing as part of its processing, which destroyed the little vitamin C that it had to begin with. 

This ended up being fine, because ships were so much faster at this point that no one had time to develop scurvy. So everything was all right until 1875, when a British arctic expedition set out on an attempt to reach the North Pole. They had plenty of lime juice and thought they were prepared — but they all got scurvy. 

The same thing happened a few more times on other polar voyages, and this was enough to convince everyone that citrus juice doesn’t cure scurvy. The bacterial theory of disease was the hot new thing at the time, so from the 1870s on, people played around with a theory that a bacteria-produced substance called “ptomaine” in preserved meat was the cause of scurvy instead. 

This theory was wrong, so it didn’t work very well. Everyone kept getting scurvy on polar expeditions. This lasted decades, and could have lasted longer, except that two Norwegians happened to stumble on the answer entirely by accident: 

It was pure luck that led to the actual discovery of vitamin C. Axel Holst and Theodor Frolich had been studying beriberi (another deficiency disease) in pigeons, and when they decided to switch to a mammal model, they serendipitously chose guinea pigs, the one animal besides human beings and monkeys that requires vitamin C in its diet. Fed a diet of pure grain, the animals showed no signs of beriberi, but quickly sickened and died of something that closely resembled human scurvy.

No one had seen scurvy in animals before. With a simple animal model for the disease in hand, it became a matter of running the correct experiments, and it was quickly established that scurvy was a deficiency disease after all. Very quickly the compound that prevents the disease was identified as a small molecule present in cabbage, lemon juice, and many other foods, and in 1932 Szent-Györgyi definitively isolated ascorbic acid.

Even in retrospect, the story is pretty complicated. But we worry that it would have looked even messier from the inside.

II.

Holst and Frolich also ran a version of the study with dogs. But the dogs were fine. They never developed scurvy, because unlike humans and guinea pigs, they don’t need vitamin C in their diet. Almost any other animal would also have been fine — guinea pigs and a few species of primates just happen to be really weird about vitamin C. So what would this have looked like if Holst and Frolich just never got around to replicating their dog research on guinea pigs? What if the guinea pigs had gotten lost in the mail?

Three of Theodore Roosevelt’s children posing in a photo with one of their five guinea pigs. Kermit Roosevelt is holding the pig.

Let’s imagine a version of history where the guinea pigs did indeed get lost in the Norwegian mail, so Holst and Frolich only tested dogs, and found no sign of scurvy. Let’s further imagine that Frolich has been struck by inspiration, and through pure intuition has figured out exactly what is going on. 

Frolich: You know Holst, I think old James Lind was right. I think scurvy really is a disease of deficiency, that there’s something in citrus fruits and cabbages that the human body needs, and that you can’t go too long without. 

Holst: Frolich, what are you talking about? That doesn’t make any sense.

Frolich: No, I think it makes very good sense. People who have scurvy and eat citrus, or potatoes, or many other foods, are always cured.

Holst: Look, we know that can’t be right. George Nares had plenty of lime juice when he led his expedition to the North Pole, but they all got scurvy in a couple weeks. The same thing happened in the Expedition to Franz-Josef Land in 1894. They had high-quality lime juice, everyone took their doses, but everyone got scurvy. It can’t be citrus.

Frolich: Maybe some citrus fruits contain the antiscorbutic [scurvy-curing] property and others don’t. Maybe the British Royal Navy used one kind of lime back when Lind did his research but gave a different kind of lime to Nares and the others on their Arctic expeditions. Or maybe they did something to the lime juice that removed the antiscorbutic property. Maybe they boiled it, or ran it through copper piping or something, and that ruined it.

Holst: Two different kinds of limes? Frolich, you gotta get a hold of yourself. Besides, the polar explorers found that fresh meat also cures scurvy. They would kill a polar bear or some seals, have the meat for dinner, and then they would be fine. You expect me to believe that this antiscorbutic property is found in both polar bear meat AND some kinds of citrus fruits, but not in other kinds of citrus?

Frolich: You have to agree that it’s possible. Why can’t the property be in some foods and not others? 

Holst: It’s possible, but it seems really unlikely. Different varieties of limes are way more similar to one another than they are to polar bear meat. I guess what you describe fits the evidence, but it really sounds like you made it up just to save your favorite theory. 

Frolich: Look, it’s still consistent with what we know. It would also explain why Lind says that citrus cures scurvy, even though it clearly didn’t cure scurvy in the polar expeditions. All you need is different kinds of citrus, or something in the preparation that ruined it — or both! 

Holst: What about our research? We fed those dogs nothing but grain for weeks. They didn’t like it, but they didn’t get scurvy. We know that grain isn’t enough to keep sailors from getting scurvy, so if scurvy is about not getting enough of something in your diet, those dogs should have gotten scurvy too.

Frolich: Maybe only a few kinds of animals need the antiscorbutic property in their food. Maybe humans need it, but dogs don’t. I bet if those guinea pigs hadn’t gotten lost in the mail, and we had run our study on guinea pigs instead of dogs, the guinea pigs would have developed scurvy.

Holst: Let me get this straight, you think there’s this magical ingredient, totally essential to human life, but other animals don’t need it at all? That we would have seen something entirely different if we had used guinea pigs or rats or squirrels or bats or beavers?

Frolich: Yeah basically. I bet most animals don’t need this “ingredient”, but humans do, and maybe a few others. So we won’t see scurvy in our studies unless we happen to choose the right animal, and we just picked the wrong animal when we decided to study dogs. If we had gotten those guinea pigs, things would have turned out different.

III.

Frolich is entirely right on every point. He also sounds totally insane. 

Maybe there are different kinds of citrus. Maybe some animals need this mystery ingredient and others don’t. Maybe polar bear meat is, medically speaking, more like citrus fruit from Sicily than like citrus fruit from the West Indies. Really???

This looks a lot like special pleading, but in this case, the apparent double standard is correct. All of these weird exceptions he suggests were actually weird exceptions. And while our hypothetical version of Frolich wouldn’t have any way of knowing, these were the right distinctions to make. 

Reality is very weird, and you need to be prepared for that. Like the hypothetical Holst, most of us would be tempted to discard this argument entirely out of hand. But this weird argument is correct, because reality is itself very weird. Looking at this “contradictory” evidence and responding with these weird bespoke splitting arguments turns out to be the right move, at least in this case. 

Real explanations will sometimes sound weird, crazy, or too complicated because reality itself is often weird, crazy, or too complicated. 

It’s unfortunate, but scurvy is really the BEST CASE SCENARIO. The answer ended up being almost comically simple: it’s just a disease of deficiency, eat one of these foods containing this vitamin and be instantly cured. But the path to get to that answer was confusing and complicated. Think about all the things in the world that have a more complicated answer than scurvy, i.e. almost everything. Those things will have even weirder and more confusing stories to untangle.

This story has a couple of lessons for us. The first is simple: don’t discard an explanation just because it’s weird or complicated.

Focus on explanations that are consistent with all the evidence. Frolich’s harebrained different-citrus different-animals explanation from above does sound crazy, but at least it’s consistent with everything they knew at the time. If some kinds of citrus cured scurvy and other kinds didn’t, that would explain why it worked for Lind and for early sailors but not for the polar explorers after 1870. And in fact, that does explain it.

It’s also testable, at least in principle. If you think there might be differences between different kinds of citrus fruits, you could go back and try to figure out the original source used by James Lind and the Royal Navy, and try to re-create those conditions as closely as possible.

FRUIT

We’re taught to see splitting  — coming up with weird special cases or new distinctions between categories — as a tactic that people use to save their pet theories from contradictory evidence. You can salvage any theory just by saying that it only works sometimes and not others — it only happens at night, you need to use a special kind of wire, the vitamin D supplements from one supplier aren’t the same as from a different supplier, etc. Splitting has gotten a reputation as the sort of thing scientific cheats do to draw out the con as long as possible.

But as we see from the history of scurvy, sometimes splitting is the right answer! In fact, there were meaningful differences in different kinds of citrus, and meaningful differences in different animals. Making a splitting argument to save a theory — “maybe our supplier switched to a different kind of citrus, we should check that out” — is a reasonable thing to do, especially if the theory was relatively successful up to that point. 

Splitting is perfectly fair game, at least to an extent — doing it a few times is just prudent, though if you have gone down a dozen rabbitholes with no luck, then maybe it is time to start digging elsewhere.

Scurvy isn’t the only case where splitting was the right call. Maybe there’s more than one kind of fat. Maybe there are different kinds of air. Maybe there are different types of blood. It turns out, there are! So give splitting a chance.

Be more forgiving of contradictory evidence. These days people like to put a lot of focus on the idea of decisive experiments. While it’s true that some experiments are more decisive than others, no experiment can be entirely decisive either for or against a theory. We need to stop expecting knock-down studies that solve things forever.

Contradictory evidence can be wrong! The person making the observations might have been confused. They might have done the analysis wrong. The equipment may have malfunctioned. They might have used dogs instead of guinea pigs, or they might have used the wrong kind of hamster. The data might even be fabricated! Shit happens. 

Things change as contradictory evidence piles up, but even then, it doesn’t mean you should scrap the theory you started out with. Everyone back in the 1870s made a big mistake throwing out their perfectly good “disease of deficiency” theory as soon as there were a few contradictory stories from polar explorers.

Their mistake was thinking “maybe the theory is wrong”, instead of “maybe the real theory is more complicated”. When you see evidence that goes against a theory, it could mean that you’ve been barking up the wrong tree. Or it could just mean that there’s a small wrinkle you aren’t aware of.

If you have a theory that’s been working pretty well for a while — it made good predictions, it solved real problems, it explained a lot of mysteries — you should stick with it in the face of apparent contradictions, at least for a while. When you hit a snag with a reliable theory, think “maybe it’s complicated” instead of “oh it’s wrong”. It may still be wrong, but it’s good to check!

Be careful of purely verbal, syllogistic reasoning. We make these arguments in conversation all the time. They seem plain, convincing, and commonsensical, but in reality they’re pretty weak. It’s hard to get away from commonsensical, verbal arguments since that’s how we naturally think, but don’t take them too seriously. They’re ok as starting points, but keep in mind that they’re not actually evidence.

“Different kinds of citrus fruits are more like one another than they are like polar bear meat” sounds very reasonable, but in this case it was wrong. Sicilian lemons really ARE more like polar bear meat than they are like West Indian limes, at least for the purposes of treating scurvy.

One of these things is not like the others. That’s right — the limes!

“Dogs are about as similar to humans as guinea pigs are” also sounds very reasonable. The three species are all in the same class (Mammalia) but in different orders (Carnivora, Primates, and Rodentia, respectively), so there seems to be some taxonomic evidence as well. But humans really are a lot more like guinea pigs than they are like dogs, or most other animals, at least for the purposes of getting scurvy.

IV.

We were tickled to see this paragraph near the end of Scott and Scurvy, for obvious reasons:

…one of the simplest of diseases managed to utterly confound us for so long, at the cost of millions of lives, even after we had stumbled across an unequivocal cure. It makes you wonder how many incurable ailments of the modern world—depression, autism, hypertension, obesity—will turn out to have equally simple solutions, once we are able to see them in the correct light. What will we be slapping our foreheads about sixty years from now, wondering how we missed something so obvious?

This is really good, and we think it’s a reason to be optimistic. We might be closer than we think to cures for depression, hypertension, and yes, even obesity.

The answer to scurvy was just one thing, plus a few wrinkles — mostly “not all citrus has the antiscorbutic property” and “most animals can’t get scurvy”. This was only difficult because people weren’t prepared to deal with basic wrinkles, but we can do better by learning from their mistakes.

This means don’t give up easily. It suggests that there is lots of low-hanging fruit, because even simple explanations are easily missed.

Lots of theories have been tried, and lots of them have been given up because of something that looks like contradictory evidence. But the evidence might not actually be a contradiction — the real explanation might just be slightly more complicated than people realized. Go back and revisit scientific near-misses; maybe there’s a wrinkle they didn’t know how to iron out.

Higher than the Shoulders of Giants; Or, a Scientist’s History of Drugs


I. 

The United States used to introduce new constitutional amendments all the time. But after the 26th Amendment in 1971, we stopped coming up with new amendments and haven’t added any since. (The 27th Amendment doesn’t really count — while it was ratified in 1992, it was proposed all the way back in 1789. It’s also only one sentence long and really boring.)

Global GDP used to grow faster and faster all the time — the time it took the global economy to double in size kept getting shorter and shorter. This was the rule until about 1960-1980, when economic growth suddenly stagnated. Global GDP is still going up, but it’s now growing at a more or less constant rate, instead of accelerating.
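
(In case the distinction between “growing” and “merely accelerating-no-longer” is unclear: at a constant growth rate, the doubling time stays fixed; accelerating growth means each doubling arrives faster than the last. Here’s a minimal sketch; the growth rates are made up for illustration, not real GDP data.)

```python
import math

def doubling_time(annual_growth_rate):
    """Years for an economy to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Constant 3% growth: the economy doubles every ~23 years, forever.
print(round(doubling_time(0.03), 1))  # 23.4

# Accelerating growth (made-up rates rising across eras: 1% -> 2% -> 4%):
# each doubling arrives faster than the last.
for rate in (0.01, 0.02, 0.04):
    print(rate, round(doubling_time(rate), 1))  # 69.7, 35.0, 17.7 years
```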

Productivity and hourly wages used to be tightly linked — if you were creating more value for your employer, they would be willing to pay you more. However, around 1970, these two trends suddenly decoupled. You may have seen graphs like this:

There used to be less than 1 lawyer per 1000 Americans, though that number was slowly increasing. That is, until about 1971, when it suddenly shot up. Now there are about 4 lawyers for every 1000 Americans. In some parts of the country, the ratio can be as high as 10 per 1000. This is (unsurprisingly) true in New York but also unexpectedly true in our home state of Vermont, which has 5.8 lawyers per 1000 people. It’s ok though, I hear they can’t enter your home unless you invite them in. 

It used to be that about 100 out of every 100,000 people in the population were in prison. That is, until about 1971, when that rate started climbing. Now about 700 out of every 100,000 Americans are incarcerated.

There are even signs that scientific progress has been slowing down since — you guessed it! — about 1970 (see also this paper). 

This is only a small selection of the many things that seem to have gone terribly wrong since about 1970. For a more complete picture, check out the excellent Wake Up, You’ve Been Asleep for 50 Years and WTF Happened In 1971?, which are our sources for most of the trends described above. 

So yeah, what the F did happen in the early 1970s? When dozens of unexplained trends all seem to start in the same year, it seems like more than coincidence — you start wondering if there might be a monocausal explanation.

“The break point in America is exactly 1973,” says economist Tyler Cowen, “and we don’t know why this is the case.” One possible culprit is the 1973 oil embargo, because many of these trends have to do with energy. But Cowen doesn’t think this holds water. “Since that time, the price of oil in real terms has fallen a great deal,” he says, “and productivity has not bounded back.” 

Another possible culprit is the US going off the gold standard in 1971, part of the set of measures known as the Nixon shock (also the name of our new Heavy Metal band). This makes some sense because many of these trends have to do with the economy. But it’s not clear if this is a good explanation either, as many of these trends seem to be global, and most of the world is not on the US dollar. The US is admittedly a pretty big deal, but we’re not the only economy in the world.

But it’s also possible that all this comes from a different policy that Nixon signed into law the year before: the 1970 Controlled Substances Act.

II. 

The early history of coffee is shrouded in mystery. Legends of its discovery date as far back as the 9th century CE, but whenever it was discovered, it’s clear that it came from Africa and had reached the Middle East by 1400. The first coffeehouse in Istanbul opened around 1554, and word of coffee began reaching Europe in the middle 1500s. Even so, it took Europeans about a hundred more years to really take note — the first coffeehouse in Christendom didn’t open until 1645, when one popped up in Venice.

Only five years later, in 1650, the first coffeehouse in England opened in Oxford. There is nothing new under the sun, so unsurprisingly it was very popular with students and intellectuals. Early patrons included Christopher Wren and John Evelyn; later regulars included Hans Sloane, Edmund Halley, and Isaac Newton, who, by some accounts, “are said to have dissected a dolphin on a table in the coffeehouse before an amazed audience.” Coffee is a hell of a drug.

The first coffeehouse in London opened in 1652 in St. Michael’s Alley in Cornhill, operated by a Greek or Armenian (“a Ragusan youth”) man named Pasqua Rosée. The coffeehouse seems to have been named after Rosée as well, and used him as its logo — one friend who wrote him a poem addressed the verses, “To Pasqua Rosée, at the Sign of his own Head and half his Body in St. Michael’s Alley, next the first Coffee-Tent in London.”

The Royal Society, the oldest national scientific institution in the world, was founded in London on 28 November 1660. The founding took place at the original site of Gresham College, which as far as we can tell from Google Maps, was a mere three blocks from Rosée’s coffeehouse. Some accounts say that their preferred coffeehouse was in Devereux Court, though, which is strange as that is quite a bit further away. But this may be because Rosée’s coffeehouse was destroyed in the Great Fire of 1666.

In 1661, Robert Boyle published The Sceptical Chymist, which argues that matter is made up of tiny corpuscles, providing the foundations of modern chemistry. In 1665, Robert Hooke published Micrographia, full of spectacularly detailed illustrations of insects and plants as viewed through a microscope, which was the first scientific best-seller and coined the biological term cell. By 1675, there were more than 3,000 coffeehouses in England. In 1687, Newton published his Principia.

As the popular 1667 broadside News from the Coffe House put it: 

So great a Universitie

I think there ne’re was any;

In which you may a Schoolar be

For spending of a Penny.

This trend continued into the following centuries. As just one example, Voltaire (1694-1778) reportedly consumed a huge amount of coffee per day. No, REALLY huge. Most sources seem to suggest 40 to 50 cups, but The New York Times has it as “more than 50 cups a day.” Perhaps the cups were very small. Wikipedia says “50-72 times per day”, but we can’t tell where they got these numbers. I ask you, what kind of drugs would this man be on, if he were alive today?

Do we really think this mild stimulant could be responsible for the Scientific Revolution? Well to be entirely clear, we aren’t the first ones to make this argument. Here’s a Huffington Post article reviewing several books and essays on the same idea, including one by Malcolm Gladwell. And in Weinberg and Bealer’s The World of Caffeine, the authors tell us that the members of the Royal Society, “had something in common with Timothy Leary, the Harvard professor who experimented with LSD, in that they were dabbling in the use of a new and powerful drug unlike anything their countrymen had ever seen. Surviving recorded accounts confirm that the heavily reboiled sediment-ridden coffee of the day was not enjoyed for its taste, but was consumed exclusively for its pharmacological benefits.”

Today we tend to take coffee in stride, but this stimulant didn’t seem so mild at the time. In 1675, King Charles II briefly banned coffeehouses in London, claiming they had “very evil and dangerous effects.” We don’t know the exact details of the public response, but it was so negative that the king changed his mind after only eleven days! Ten years later, coffee houses were yielding so much tax revenue to the crown that banning them became totally out of the question. 

Merchants panicked over an imagined danger to the economy, one writing, “The growth of coffee-houses has greatly hindered the sale of oats, malt, wheat, and other home products. Our farmers are being ruined because they cannot sell their grain; and with them the landowners, because they can no longer collect their rents.” The owner of the second coffeehouse in London, James Farr, was prosecuted by his neighbors in 1657, “for making and selling a sort of liquor called coffe, as a great nuisance and prejudice to the neighborhood, etc.”

On the less official side of things, the 1674 anonymous WOMEN’S PETITION AGAINST COFFEE REPRESENTING TO PUBLICK CONSIDERATION THE Grand INCONVENIENCIES accruing to their SEX from the Excessive Use of that Drying, Enfeebling LIQUOR (which possibly deserves to be read in full, if only for the 1674 use of “cuckol’d” and “dildo’s”) declared, among other things:

Never did Men wear greater Breeches, or carry less in them of any Mettle whatsoever. There was a glorious Dispensation (’twas surely in the Golden Age) when Lusty Ladds of seven or eight hundred years old, Got Sons and Daughters; and we have read, how a Prince of Spain was forced to make a Law, that Men should not Repeat the Grand Kindness to their Wives, above NINE times in a night … the Excessive Use of that Newfangled, Abominable, Heathenish Liquor called COFFEE …has…Eunucht our Husbands, and Crippled our more kind Gallants, that they are become as Impotent, as Age, and as unfruitful as those Desarts [sic] whence that unhappy Berry is said to be brought.

It’s not like these concerns disappeared as people got used to it. As late as the early 1900s, physicians were still raving about the dangers of this terrible drug. As the wonderful (and sadly defunct) site History House reports:

In the spectacularly titled Morphinism and Narcomanias from Other Drugs (1902), one T. D. Crothers, M.D. tells a few tales of delirium induced by coffee consumption. He also remarks, not unlike analogies to marijuana made by current drug crusaders, that, “Often coffee drinkers, finding the drug to be unpleasant, turn to other narcotics, of which opium and alcohol are the most common.” Similarly, in A System of Medicine (1909), edited by the comically degreed Sir T. Clifford Allbutt (K.C.B., M.A., M.D., LL.D., D. Se., F.R.C.P., F.R.S., F.L.S., F.S.A., Regius Professor of Physic [Internal medicine] in the University of Cambridge), some contributors announce their distaste for caffeine: “We have seen several well-marked cases of coffee excess… the sufferer is tremulous, and loses his self-command… the speech may become vague and weak. By miseries such as these, the best years of life may be Spoilt.”

High doses of caffeine cause odd behavior in test animals. Rats will bite themselves enough to die from blood loss, prompting Consumers Union to observe, “Some readers may here be moved to protest that the bizarre behavior of rats fed massive doses of caffeine is irrelevant to the problems of human coffee drinkers, who are not very likely to bite themselves to death.”

Neither did the science-coffee connection disappear with Newton and Hooke. Researchers still reportedly consume more coffee than members of any other profession. The mathematician Alfréd Rényi quipped, “A mathematician is a machine for turning coffee into theorems,” and he and his colleagues, including Paul Erdős, drank copious amounts. At one point, when trying to explain why Hungary produces so many mathematicians, one of the reasons Erdős gave was, “in Hungary, many mathematicians drink strong coffee … At the mathematical institute they make particularly good coffee.”

The first webcam, great ancestor to all those Zoom calls you’ve been having, was developed by University of Cambridge computer scientists so they could watch the coffee pot without having to leave their desks.  

And while it’s very popular, coffee isn’t the only way to get your sweet, sweet caffeine fix. Consider the connection between Tea and the British Empire. [cue Rule, Britannia!]

Hey Jared we have a piping hot tip for you

Caffeine in one form or another continued to be the stimulant of choice until the middle of the 19th century, when the Germans made an even more exciting discovery.

III. 

When the Spanish arrived in South America, they noticed that some of the natives had the refreshing habit of chewing on the leaves of a local plant, “which make them go as they were out of their wittes.” At first the Spaniards were concerned but then they realized it was pretty great, and started using it themselves — for medicinal purposes, of course. 

Even so, chemistry was not fully developed in the 1600s (they needed to wait for the coffee to hit), so despite many attempts it took until 1855 for the active ingredient to be purified from coca leaves. This feat was accomplished by a German named Friedrich Georg Carl Gaedcke. With this success, another German chemist (Friedrich Wöhler) asked a German doctor who happened to be going on a round-the-world trip (Carl Scherzer) to bring him back more of these wonderful leaves. The doctor came back a few years later with a trunk full of them, which the second chemist passed on to yet a third German chemist, Albert Niemann, who developed a better way of purifying the new substance, which he published as his dissertation. (Sadly he never got to enjoy the substance himself, as he discovered mustard gas the same year and died the year after that, probably from working too closely with mustard gas.)

And with this series of developments, pure cocaine was injected directly into the German nervous system.

A typical example of the effects of cocaine on the German scientific body can be found in a man you might have heard of — Sigmund Freud, who has the same birthday as one of the authors. Having recently moved on from his earlier interest in trying to find the testicles and/or ovaries of eels (don’t laugh, it was a major scientific question of the day!), he found himself VERY EXCITED by the possibilities of this new treatment, which had just become available to physicians.

“Woe to you, my Princess, when I come,” wrote Sigmund Freud to his future wife, Martha Bernays, on June 2, 1884. “I will kiss you quite red and feed you till you are plump. And if you are forward, you shall see who is the stronger, a gentle little girl who doesn’t eat enough, or a big wild man who has cocaine in his body. In my last serious depression I took cocaine again and a small dose lifted me to the heights in a wonderful fashion. I am just now collecting the literature for a song of praise to this magical substance.”

He didn’t just use cocaine to intimidate (???) his fiancée, though. Freud also found that it had professional applications. “So I gave my lecture yesterday,” he wrote in a letter a few months earlier, “Despite lack of preparation, I spoke quite well and without hesitation, which I ascribe to the cocaine I had taken beforehand. I told about my discoveries in brain anatomy, all very difficult things that the audience certainly didn’t understand, but all that matters is that they get the impression that I understand it.” We see that not much has changed since the 1880s. 

Freud wasn’t the only one who was excited by this new discovery, of course. Only two years later, a bedridden Robert Louis Stevenson wrote The Strange Case of Dr Jekyll and Mr Hyde, a 30,000-word novella that he completed in about three days. Many accounts suggest that Stevenson was high on cocaine during this brief, incredibly productive period, possibly recreationally, or possibly because it was simply part of the medicine he was taking. This claim is somewhat contested, but we’re inclined to believe it — you try writing 30,000 words in three days, by hand, while bedridden, without the help of a rather good stimulant.

One Italian, Paolo Mantegazza, was so enthusiastic about the new substance that he actually developed a purification process of his own in 1859. Over the next several decades, he founded the first Museum of Anthropology in Italy, served in the Italian parliament, published a 1,200-page volume of his philosophical and social views, at least three novels, and several scientific books and papers (this paper from 2008 claims that he founded the field of sexual medicine), including one in which he wrote:

“I sneered at the poor mortals condemned to live in this valley of tears while I, carried on the wings of two leaves of coca, went flying through the spaces of 77,438 words, each one more splendid than the one before. An hour later, I was sufficiently calm to write these words in a steady hand: God is unjust because he made man incapable of sustaining the effect of coca all lifelong. I would rather have a life span of ten years with coca than one of 10,000,000,000,000,000,000,000 centuries without coca.”

We should note that while Mantegazza was very productive in these decades, he was also a vivisectionist and a racist. Clearly not everyone should have access to cocaine of this quality.

A different Italian looked at cocaine and saw not poor mortals condemned to live in the valley of tears, but economic opportunity. He happened to read a paper by Mantegazza on the substance, and was inspired. This man was Angelo Mariani, and in 1863 he “invented” cocawine, by which we mean he put cocaine in wine and then sold it. 

Apparently this was more than just a good idea. Cocaine.org, a reputable source if ever we’ve seen one, tells us, “If cocaine is consumed on its own, it yields two principal metabolites, ecgonine methyl ester and benzoyleconine. Neither compound has any discernible psychoactive effect. Cocaine co-administered with alcohol, however, yields a potent psychoactive metabolite, cocaethylene. Cocaethylene is very rewarding agent in its own right. Cocaethylene is formed in the liver by the replacement of the methyl ester of cocaine by the ethyl ester. It blocks the dopamine transporter and induces euphoria. Hence coca wine drinkers are effectively consuming three reinforcing drugs rather than one.”

Mariani is notable less for taking cocaine himself, and more for being possibly the most influential drug pusher of all time. His enticing product, called Vin Mariani, soon became a favorite of the rich, powerful, and highly productive, unleashing the creative potential of cocaine on the world.

YOU CAN TELL SHE’S VERY EXCITED ABOUT THIS WINE FOR SOME REASON

A good catalogue of its influence can be found in the literally thousands of celebrity endorsements it received, and which were proudly displayed in its ads. “Testimonials from eminent personages were so numerous that Mariani, as great a public relations man as he was a chemist, published them in handsome leather-bound volumes—replete with portraits and biographical sketches of the endorsers.” Many of these names and endorsements seem to have been lost to time, but here are a few you might recognize. 

Presumably you have heard of the Pope. Pope Leo XIII and Pope Pius X both enjoyed Vin Mariani, and Pope Leo XIII liked it so much that he often carried a hip flask of the wine. He even awarded Mariani a Vatican Gold Medal, “to testify again in a special manner his gratitude.” He also appeared on a poster advertisement endorsing the wine, and later called Mariani a “benefactor of humanity”. AP news reports that the chief rabbi of France liked it too. 

Sarah Bernhardt, famous actress and subject of the most entertaining Wikipedia entry of all time, said, “My health and vitality I owe to Vin Mariani. When at times unable to proceed, a few drops give me new life.” Jules Verne, one of the fathers of science fiction, wrote, “Vin Mariani, the wonderful tonic wine, has the effect of prolonging life.” Frédéric Auguste Bartholdi, who you will know as the sculptor of the Statue of Liberty, wrote, “this precious wine will give me the strength to carry out certain other projects already formed.” Alexandre Dumas is said to have enjoyed it as well, but we can’t find a quote.

In 1892, Thomas Edison contributed the almost maddeningly vague note, “Monsieur Mariani, I take pleasure in sending you one of my photographs for publication in your album.” Edison was already quite famous by this point, and it’s not clear how long he had been enjoying the effects of Vin Mariani, but we can make an educated guess. 

Vin Mariani was invented in 1863, and we know that by 1868, Edison had a reputation for working “at all hours, night or day”. His famous Menlo Park lab was built in 1876, and soon began producing inventions at a steady rate — the phonograph in 1877, his work on electric lights about 1880, motion picture devices in 1891, and so on. 

In 1887, one writer noted, “he scarcely sleeps at all, and is equally as irregular concerning his eating”. The same account quotes a “co-laborer” of Edison’s as saying, “he averaged eighteen hours [of work] a day. … I have worked with him for three consecutive months, all day and all night, except catching a little sleep between six and nine o’clock in the morning.” In 1889, when he was 42, he told Scientific American that he slept no more than four hours a night. Given that we know he enjoyed Vin Mariani, we think this is good evidence of just how much he must have been drinking. 

Mariani claimed to have collected over four thousand such endorsements from various celebrities. It’s only natural that he also collected endorsements from physicians. In one of his ads, he trots out the following: “In cases of morphinomania, Dr. Dujardin-Beaumetz has pointed out the advantage to be obtained with the Vin Mariani, and following him, Dr. Palmer, of Louisville, and Dr. Sigmaux Treaux [sic] of Vienna, have obtained excellent results with this therapeutic agent.” Yes, you saw that right — that last name there is a botched attempt to spell “Dr. Sigmund Freud”. Maybe Mariani was high on his own supply after all.

While Mariani deserves credit as the man who got cocaine to the masses, the Germans were the ones who first purified the cocaine, and the ones who undoubtedly put it to the best scientific and medical use.

[content warning for the next several paragraphs: descriptions of 19th-century medical experimentation]

It’s easy for a modern person to miss the fact that aside from alcohol and getting held down by surgical assistants, there were few anaesthetics at this point in history. Laughing gas (nitrous oxide) was discovered in 1776, but the Americans took a long time to figure out that it could be used for anything other than killing animals and getting high, and were still struggling with the idea that it might have medical applications. 

Furthermore, laughing gas is a general anaesthetic, not a local anaesthetic, and a weak one at that. It was totally unsuitable for delicate operations like eye surgery. 

People had already noticed that a dose of cocaine will numb your nose, lips, or tongue. Even so, it took the combined powers of Sigmaux Treaux, er, Sigmund Freud and his friend Karl Koller, an ophthalmology intern, to make this breakthrough. Koller was interested in finding a local anaesthetic for eye surgery, and he had already tried putting various chemicals, including morphine, into the eyes of laboratory animals, with no success. Separately, Freud was convinced that cocaine had many undiscovered uses. So in 1884, when Freud left to go pay a visit to Martha, he left Koller some cocaine and encouraged him to experiment with it.

While Freud was away, Koller made his discovery. Amazingly, in his papers Koller describes the exact moment when he made the connection:

Upon one occasion another colleague of mine, Dr. Engel, partook of some (cocaine) with me from the point of his penknife and remarked, “How that numbs the tongue.” I said, “Yes, that has been noticed by everyone that has eaten it.” And in the moment it flashed upon me that I was carrying in my pocket the local anesthetic for which I had searched some years earlier.

Dr. Gaertner, an assistant in the lab where Koller worked, continues the story in more detail:

One summer day in 1884, Dr. Koller, at that time a very young man … stepped into Professor Strickers laboratory, drew a small flask in which there was a trace of white powder from his pocket, and addressed me … in approximately the following words: “I hope, indeed I expect that this powder will anesthetize the eye.” 

“We’ll find out that right away”, I replied. A few grains of the substance were thereupon dissolved in a small quantity of distilled water, a large, lively frog was selected from the aquarium and held immobile in a cloth, and now a drop of the solution was trickled into one of the protruding eyes. At intervals of a few seconds the reflex of the cornea was tested by touching the eye with a needle… After about a minute came the great historic moment, I do not hesitate to designate it as such. The frog permitted his cornea to be touched and even injured without a trace of reflex action or attempt to protect himself, whereas the other eye responded with the usual reflex action to the slightest touch. The same tests were performed on a rabbit and a dog with equally good results. … 

Now it was necessary to go one step further and to repeat the experiment upon a human being. We trickled the solution under the upraised lids of each other’s eyes. Then we put a mirror before us, took a pin in hand, and tried to touch the cornea with its head. Almost simultaneously we could joyously assure ourselves, “I can’t feel a thing.” We could make a dent in the cornea without the slightest awareness of the touch, let alone any unpleasant sensation or reaction. With that the discovery of local anesthesia was completed. I rejoice that I was the first to congratulate Dr. Koller as a benefactor of mankind.

The final proof came on August 11, 1884, when Koller performed the first successful cocaine-aided cataract surgery. Koller was only 25 when he made this discovery, a Jewish medical student so poor that he had to ask a friend to present the findings for him, since he could not afford the train fare to go to the ophthalmology conference in Heidelberg that year. 

The finding was received with worldwide amazement and enthusiasm. “Within three months of this date,” says one paper, “every conceivable eye operation had been attempted using cocaine, in every part of the world.” The idea spread “not just into ophthalmology, but wherever mucous membranes required surgery—in gynecology, proctology, urology, and otolaryngology.”  Encyclopedia Britannica says that this finding “inaugurated the modern era of local anesthesia.”

In fact, cocaine got such an amazing reputation as a local anaesthetic that the suffix “-caine” was back-formed from its name and used to form the names of new local anaesthetics as they were discovered, like amylocaine, lidocaine, bupivacaine, prilocaine, and procaine (aka novocaine).

[content warning: more descriptions of 19th-century medical experimentation]

As the technique developed further, people started using cocaine as an anaesthetic in spinal operations. The first was an American named James Leonard Corning, who also happened to be a big fan of Vin Mariani. In 1885, he performed a spinal injection of cocaine on a dog (why?), and found that this left the dog temporarily unable to use its legs. 

Encouraged by this finding, he soon decided to give a similar injection to a patient who had recently been referred to him for “addiction to masturbation”. Corning gave the man cocaine as a spinal injection of some sort (there is scholarly debate over what sort!). After 20 minutes, he noticed that “application of [a wire brush] to the penis and scrotum caused neither pain nor reflex contraction.” Whether this was a successful treatment for the unfortunate patient is not recorded.

A German surgeon named August Bier independently came up with the idea in 1898. He and his assistant August Hildebrandt performed the procedure on several patients as part of routine surgeries, until one day in August 1898, when for reasons that remain unclear, they decided to experiment on each other.

“Hildebrandt was not a surgeon and his ham-fisted attempts to push the large needle through Bier’s dura proved very painful,” begins one account, not at all what you would expect from the rather dry-sounding volume Regional Anaesthesia, Stimulation, and Ultrasound Techniques. It continues, “The syringe of cocaine and needle did not fit well together and a large volume of Bier’s cerebrospinal fluid leaked out and he started to suffer a headache shortly after the procedure.” Probably because of the flawed injection, Bier was not anaesthetized at all.

Bier of course was a surgeon, and so when it was his turn to give Hildebrandt the injection, he performed it flawlessly. Soon Hildebrandt was very anaesthetized. To test it, reports Regional Anaesthesia, “Bier pinched Hildebrandt with his fingernails, hit his legs with a hammer, stubbed out a burning cigar on him, pulled out his pubic hair, and then firmly squeezed his testicles,” all to no effect. In a different account, this last step was described as “strong pressure and traction to the testicles”. They also pushed a large needle “in down to the thighbone without causing the slightest pain”, and tried “strong pinching of the nipples”, which could hardly be felt. They were thrilled. With apparently no bad blood over this series of trials, the two gentlemen celebrated that evening with wine and cigars, and woke up the next morning with the world’s biggest pair of headaches, which confined them to bed for 4 and 9 days, respectively. You can read the account in its thrilling original German here.

(Why genital flagellation has such a central role in the climax of both of these stories is anyone’s guess.)

Despite the wild tale of the discovery, this represented a major medical advancement, which made many new techniques and treatments a possibility. Spinal anaesthesia is now a common technique, used in everything from hip surgery to Caesarean sections. Soon Bier and others had developed various forms of regional anaesthesia, which made it safe to perform new and more delicate operations on the arms and legs.

A more prosaic discovery, but no less important, was made by Richard Willstätter in 1898. At the time there was some academic debate about the chemical structure of cocaine, and there were a couple of competing theories. Willstätter proved that they were both wrong, came up with the correct structure, and demonstrated that he was correct by synthesizing cocaine in the lab. This was not only the first artificial synthesis of cocaine, but also one of the earliest total syntheses of a complex natural alkaloid that we’re aware of.

We’re tempted to wink and ask why he was so motivated to develop a synthetic cocaine, but we’ve looked through Willstätter’s autobiography, and he very clearly states at one point, “although I always possessed cocaine from my youth on, I never knew the temptation to experience its peculiar effects myself.” Maybe this was because by 1894 they had discovered that cocaine had some side effects (even the diehard Freud was off it by 1904), or maybe because he was a nice Jewish boy who wouldn’t mess around with that sort of thing (though Dr. Karl “pins-in-the-eyes” Koller was also Jewish). In any case, his early fame was closely related to the rise of cocaine, and he went on to win the 1915 Nobel Prize for Chemistry.

Just as England was the center of learning in the Enlightenment, Germany was the center of scientific advancement in the second half of the 19th century, especially in the natural sciences. Anyone who wanted to study biology, chemistry, or physics had to learn German, because that’s the language all the best volumes and journals were printed in.

Around 1897, the great Spanish neuroscientist Santiago Ramón y Cajal wrote, “it must be admitted that Germany alone produces more new data than all the other nations combined when it comes to biology. … A knowledge of German is so essential that today there is probably not a single Italian, English, French, Russian, or Swedish investigator who is unable to read monographs published in it. And because the German work comes from a nation that may be viewed as the center of scientific production, it has the priceless advantage of containing extensive and timely historical and bibliographic information.”

“We can only speculate as to how twentieth century history would be different if the Germans had discovered marijuana instead of cocaine,” writes History House (they wrote about the history of drugs a lot, ok?).

This persisted until the two World Wars, when German scientific dominance ended. In a footnote to the 1923 edition of his book, Ramón y Cajal notes that other countries had begun, “competing with, and in many cases surpassing, the work of German universities, which for decades was incomparable.” 

One explanation is the obvious one: that the wars destroyed Germany’s ability to do good science. (Also they kicked out all the scientists who were Jewish, gay, communists, etc.) But another explanation is that America began to discover new drugs of her own. 

IV.

There were other drugs of course, to fill the gap between German scientific dominance and the third drug revolution of the 1950s and ’60s. Cocaine had already become illegal in the United States in 1914, so people were on the lookout for alternative highs.

In contrast to his rival Edison, Nikola Tesla didn’t drink cocaine wine. Tesla didn’t smoke — he didn’t even take tea or coffee. “I myself eschew all stimulants,” he once told Liberty magazine in 1935. “I am convinced that within a century coffee, tea, and tobacco will be no longer in vogue.” Perhaps this was because of his amazing, and apparently substance-unaided, ability to visualize designs in his mind’s eye. Tesla said elsewhere that when he first designed a device, he would let it run in his head for a few weeks to see which parts would begin to wear out first.

Tesla did, however, LOVE to drink. “Alcohol … will still be used,” he said. “It is not a stimulant but a veritable elixir of life.” When Prohibition came around in the United States, Tesla did break the habit, but he wrote that the law would, “subject a citizen to suffering, danger and possible loss of life,” and suggested that damages from the resulting lawsuits against the government would soon exhaust the treasury. 

(And what was the worst of these vices according to Tesla, the one more dangerous than rum, tobacco, or coffee? Nothing less than chewing gum, “which, by exhaustion of the salivary glands, puts many a foolish victim into an early grave.”)

Obviously Tesla was wrong about the cost of reparations from Prohibition. But is it a coincidence that Prohibition was the law of the land for the decade running up to the Great Depression? Was it a coincidence that the Great Depression began to turn around in March 1933, the same month that President Roosevelt signed the first law beginning the reversal of Prohibition? Probably it is, but you have to admit, it fits our case surprisingly well. 

While alcohol is a depressant, perhaps it stimulates the curious spirit in some number of our fellow creatures, as it seems to have done for Tesla. Again from History House:

Washington’s taste for Madeira wine shows up [in his accounts] with mindnumbing regularity: from September 1775 to March 1776, Washington spent over six thousand dollars on booze. … Revolutionary War-era persons drank a phenomenal amount. We have here an account of a gentleman’s average consumption: “Given cider and punch for lunch; rum and brandy before dinner; punch, Madeira, port and sherry at dinner; punch and liqueurs with the ladies; and wine, spirit and punch till bedtime, all in punchbowls big enough for a goose to swim in.”

The other drug as old as time has also been associated with scientific productivity. One contributor to the 1971 book Marihuana Reconsidered, who wrote under the pseudonym “Mr. X”, said that he often enjoyed cannabis, found that it improved his appreciation for art, and even made him a better scientist. In the late ‘90s, after his death, Mr. X was revealed to be Carl Sagan. On the topic of his professional skills, he said: 

What about my own scientific work? While I find a curious disinclination to think of my professional concerns when high – the attractive intellectual adventures always seem to be in every other area – I have made a conscious effort to think of a few particularly difficult current problems in my field when high. It works, at least to a degree. I find I can bring to bear, for example, a range of relevant experimental facts which appear to be mutually inconsistent. So far, so good. At least the recall works. Then in trying to conceive of a way of reconciling the disparate facts, I was able to come up with a very bizarre possibility, one that I’m sure I would never have thought of down. I’ve written a paper which mentions this idea in passing. I think it’s very unlikely to be true, but it has consequences which are experimentally testable, which is the hallmark of an acceptable theory.

Marijuana doesn’t help everyone be a better scientist — some people just get paranoid, or just fall asleep. But it’s especially interesting that Sagan found it hallucinogenic, because the third drug revolution was all about hallucinogens. 

The history of hallucinogens is pretty weird, even by the standards of how weird drug history normally is. Hallucinogens are relatively common, and in theory we could have discovered them at any point in the past several thousand years. But aside from occasional mishaps involving ergot poisoning, hallucinogens didn’t play much of a role in human history until the middle of the 20th century. 

Like the coca plant, psilocybin mushrooms (“shrooms”) grow in the dirt and have been around forever. Unlike the coca plant, they grow all over the world, and have always been readily available. Indigenous groups around the world have used them in ceremonies and rituals, but they weren’t used as a recreational drug until 1955.

Europeans certainly had access to these shrooms for thousands of years, but the first well-documented report of psilocybin consumption in Europe was a case described in the London Medical and Physical Journal in 1799, of a man who picked Psilocybe semilanceata (“liberty cap”) mushrooms in London’s Green Park and had them for breakfast with his four children. First the youngest child, “was attacked with fits of immoderate laughter, nor could the threats of his father or mother refrain him.” Then the father, “was attacked with vertigo, and complained that every thing appeared black, then wholly disappeared.” Soon all of them were affected. The doctor who made the report didn’t see this as a potential good time, or a way to expand the mind — he refers to the effect as “deleterious”.

While it has been enjoyed by many people, we can’t find much evidence of mercantile, economic, or scientific discoveries associated with the use of shrooms. This may not be the drug’s fault, since it was banned so soon after being brought to popular attention. 

But there is one major cultural development linked to psilocybin. In his book Mycelium Running: How Mushrooms Can Help Save the World, Paul Stamets describes a discussion he had with Frank Herbert, author of Dune, in the 1980s. Herbert showed him a new method he had developed for growing mushrooms on newly-planted trees, which at the time everyone thought was impossible. They kept talking, and:

Frank went on to tell me that much of the premise of Dune — the magic spice (spores) that allowed the bending of space (tripping), the giant worms (maggots digesting mushrooms), the eyes of the Fremen (the cerulean blue of Psilocybe mushrooms), the mysticism of the female spiritual warriors, the Bene Gesserits (influenced by tales of Maria Sabina and the sacred mushroom cults of Mexico) — came from his perception of the fungal life cycle, and his imagination was stimulated through his experiences with the use of magic mushrooms.

Dune is the best-selling science fiction novel of all time, winner of the Hugo and the very first Nebula award, and one of my personal favorites. Even if this were the only thing shrooms had inspired, it would be a pretty big deal.

The other major naturally-occurring hallucinogen seems to have had a wider impact, and has a laundry list of famous users and associated creations. This drug is mescaline, the active ingredient in peyote cactus. As with cocaine, the Germans were the first to discover mescaline, but unlike cocaine, they didn’t seem to do anything with it. Possibly this was because they thought of it as a poison. The chemist who first isolated it wrote, “mescaline is exclusively responsible for the major symptoms of peyote (mescal) poisoning.” Well, he was almost right.

The first recreational use of the drug we found was by Jean-Paul Sartre, who took mescaline in 1935. He had a bad trip, during which he hallucinated various sea creatures. When he came down, he found that the hallucinations persisted, though he didn’t seem to be very worried by this:

Yeah, after I took mescaline, I started seeing crabs around me all the time. They followed me in the streets, into class. I got used to them. I would wake up in the morning and say, “Good morning, my little ones, how did you sleep?” I would talk to them all the time. I would say, “O.K., guys, we’re going into class now, so we have to be still and quiet,” and they would be there, around my desk, absolutely still, until the bell rang.

[Interviewer asks: A lot of them?]

Actually, no, just three or four.

He eventually ended up getting treated for this by Jacques Lacan, who suggested the crabs represented loneliness. When he was feeling depressed, Sartre would instead get the “recurrent feeling, the delusion, that he was being pursued by a giant lobster, always just out of sight… perpetually about to arrive.”

This experience seems to have influenced Sartre’s work — for example, in his play “The Condemned of Altona,” one of the characters claims to communicate with people from the thirtieth century, who have become a race of crabs that sit in judgment of humanity. Is this a precursor to the Carcinization Meme?

are you feeling it now mr krabs

Other authors have had similar experiences, except more positive, and without the crustaceans. Aldous Huxley took mescaline in 1953, and wrote his book The Doors of Perception about the experience. From then on he was a proponent of psychedelics, and they came to influence his final book, Island, published in 1962. Sadly the mescaline cannot be responsible for his most famous novel, Brave New World, because it was published decades earlier, in 1932. It also can’t be held responsible for his 1940 screenplay adaptation of Pride and Prejudice.

But mescaline clearly deserves some credit for Ken Kesey’s 1962 book, One Flew Over the Cuckoo’s Nest, and for Ken Kesey in general. Kesey was working as an orderly at a psych hospital and decided to make some money on the side by testing drugs for the CIA as part of Project MKUltra, which gave him both mescaline and LSD (we’ll get to this drug in a second, don’t you worry). The combination of these drugs and his job as an orderly led him to write One Flew Over the Cuckoo’s Nest, which was an instant smash hit — there was a play the next year, with Gene Wilder in a major role, and the film adaptation in 1975 won five Oscars.

Ken Kesey went on to basically invent modern drug culture, hippie culture, and Bay Area California. Ken Kesey and his drugs were also largely responsible for Jerry Garcia and the Grateful Dead, and thus indirectly responsible for the Ben & Jerry’s flavor Cherry Garcia, “the first ice cream named for a rock legend”.

Mescaline was also a force behind Philip K. Dick’s 1974 Hugo- and Nebula-nominated novel, Flow My Tears, The Policeman Said. In a letter that is more than a little reminiscent of the cocaine-driven Robert Louis Stevenson, he says:

At one point in the writing I wrote 140 pages in 48 hours. I have high hopes for this. It is the first really new thing I’ve done since EYE IN THE SKY. The change is due to a change that overtook me from having taken mescalin [sic], a very large dose that completely unhinged me. I had enormous insights behind the drug, all having to do with those whom I loved. Love. Will love.

If you want to REALLY understand this story, you probably have to read Dick’s undelivered speech, How to Build a Universe That Doesn’t Fall Apart Two Days Later. It doesn’t mention the mescaline but it certainly captures… something. 

Most of his other famous works — The Man in the High Castle, Do Androids Dream of Electric Sheep? (aka Blade Runner), We Can Remember It for You Wholesale (aka Total Recall), Minority Report, etc. — were written before this, and so probably were not affected by mescaline. That’s ok though, because we know that up to 1970 Dick was on amphetamines nearly full-time.

And finally of course there is the great king of the psychedelics, LSD, which started to become prominent around the same time. LSD was actually invented some decades earlier. It was first synthesized in 1938 by Swiss (but notably, German-speaking) chemist Albert Hofmann. He was looking for a new respiratory and circulatory stimulant, but when he tested the new chemical in lab animals, it showed none of the desired effect — though the animals did become “restless” — and was abandoned for five years. 

But Hofmann had a “peculiar presentiment” that there might be more to LSD than met the eye, and so in 1943 he synthesized some more. On April 19th, he arranged to take what he thought would be a tiny dose, in case the substance was poisonous, a mere 250 micrograms. Instead, he went on the mother of all trips, and had his famous bicycle ride home. Subsequent tests showed that a fifth of that original dose was sufficient to produce strong trips in lab assistants — LSD had arrived.

The inventor had no question about what his discovery meant, or what it was for. In a speech on his 100th (!!!) birthday, Hofmann said, “I think that in human evolution it has never been as necessary to have this substance LSD. It is just a tool to turn us into what we are supposed to be.” Okie dokie.

For a drug that got only a couple decades in the sun, LSD has a pretty impressive track record. Francis Crick, one of the people who discovered the structure of DNA, probably took LSD and may have been tripping when he was doing some of his DNA work, though this isn’t well-attested. Douglas Engelbart, inventor of the mouse and the guy who did The Mother of All Demos, took LSD some time in the early ’60s. Time magazine wrote approvingly of LSD’s ability to treat mental illnesses as early as 1955.

The Beatles were already extremely popular before they first took acid in 1965, but it clearly influenced their music from then on. This in turn influenced much of the music made in the second half of the 20th century. You may be surprised to learn that they took it for the first time by accident; to be more precise, someone dosed them without their consent. You see…

In the spring of 1965, John Lennon and George Harrison, along with their wives Cynthia Lennon and Patti Boyd, were having dinner at their dentist’s house when they were first “dosed” with LSD.

Dentist John Riley and his girlfriend, Cyndy Bury, had just served the group a great meal, and urged their distinguished guests to stay for coffee, which they reluctantly did…

Riley wanted to be the first person to turn on the Beatles to acid, so the couples finished their coffee, and then Riley told Lennon that the sugar cubes they used contained LSD, a powerful new drug with incredible hallucinogenic effects.

Lennon said, “How dare you fucking do this to us!”

As George remembered, “The dentist said something to John, and John turned to me and said, ‘We’ve had LSD.’ I just thought, ‘Well, what’s that? So what? Let’s go!'”

Eventually they escaped their dentist and ended up at George’s house. John “was beginning to reconsider his attitude toward acid,” in part because he was excited that “George’s house seemed to be just like a big submarine.”

Once they came down, John and George decided the other two Beatles needed to try LSD as well. “John and I had decided that Paul and Ringo had to have acid,” said George Harrison, “because we couldn’t relate to them any more. Not just on the one level, we couldn’t relate to them on any level, because acid had changed us so much.” 

This was easier said than done — Paul didn’t want to try it — but they threw a big house party with Peter Fonda, David Crosby, and various others where they all (except Paul) dropped acid, George fell in the swimming pool, they watched Cat Ballou (with a laugh track), they all got in the shower and passed around a guitar, normal party stuff. Paul didn’t take LSD that night but he took it shortly after, at which point he said it “explained the mystery of life.” The resulting insights helped form their next albums: Revolver, and of course, Sgt. Pepper’s Lonely Hearts Club Band.

The Beatles are just one example, of course. Pink Floyd, the Doors, Jefferson Airplane, and many other bands were all trying out LSD at around the same time. Bob Dylan took LSD (“Who smokes pot any more?” he asked in 1965) and he went on to win a Nobel Prize. The new drug influenced culture in many ways. The real question here is, who has dinner at their dentist’s house?

Another question is, why didn’t we discover how to use psychedelics earlier? Shrooms, at least, have been available for a long time. Why weren’t Leibniz, Galileo, and Shakespeare all tripping out of their minds?

We think there might be two reasons. Unlike stimulants, which have a pretty reliable effect, hallucinogens often have different effects on different people. And also unlike stimulants, it seems you often have to use hallucinogens in just the right way in order to unlock their creative potential. Coffee or cocaine makes you more focused and more productive, even more creative, in the moment. But it’s very rare to be able to produce anything while high on psychedelics.

In an interview in 1960, Aldous Huxley said:

But you see (and this is the most significant thing about the experience), during the experience you’re really not interested in doing anything practical — even writing lyric poetry. If you were having a love affair with a woman, would you be interested in writing about it? Of course not. And during the experience you’re not particularly interested in words, because the experience transcends words and is quite inexpressible in terms of words. So the whole notion of conceptualizing what is happening seems very silly. After the event, it seems to me quite possible that it might be of great assistance: people would see the universe around them in a very different way and would be inspired, possibly, to write about it.

The same insight was discovered by the Beatles. “We found out very early,” said Ringo Starr, “that if you play it stoned or derelict in any way, it was really shitty music, so we would have the experiences and then bring that into the music later.”

LSD helped Doug Engelbart come up with the idea of the computer mouse, but he had the idea when he was down — the only thing he invented while actively tripping seems to have been a potty training tool.

Even CNN Business, the most unlikely of sources, says: “The last thing [a programmer should do] is take LSD and then code. It’s more subtle: ‘if you have issues in your life or anything, you’re going to think about them [while high], and think about them in a different perspective.’”

So much, so usual, right? “Drugs help you be creative” — you’ve heard this one before. By itself, it’s not very original as a thesis.

THEN CAME 1970

… and what can we say, but that science and the economy never recovered? 

The 1970 Controlled Substances Act invented five “schedules” or categories for regulating drugs. The most extreme level of regulation was Schedule I, for drugs that the feds decided had high potential for abuse, no accepted medical uses, and that were “not safe to use, even under medical supervision”. Into Schedule I went LSD, marijuana, mescaline, psilocybin, and many others. 

The next level of regulation was Schedule II, for drugs that the feds felt also had high potential for abuse, limited medical uses, and high risk of addiction. Into Schedule II went cocaine and amphetamines. 

Less exciting (for the most part) drugs went into Schedules III, IV, and V. 
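
(For the sake of having it all in one place, here’s the scheme just described as a small lookup table. This is only a summary of the paragraphs above; the drugs listed are the ones named in this essay, and the real schedules are much longer.)

```python
# Rough summary of the 1970 Controlled Substances Act schedules as described above.
# Only the drugs named in this essay are included; the actual schedules list many more.
SCHEDULES = {
    "I": {
        "criteria": "high potential for abuse, no accepted medical use, not safe even under medical supervision",
        "examples": ["LSD", "marijuana", "mescaline", "psilocybin"],
    },
    "II": {
        "criteria": "high potential for abuse, limited medical uses, high risk of addiction",
        "examples": ["cocaine", "amphetamines"],
    },
    # Schedules III, IV, and V hold the less exciting (for the most part) drugs.
}

def schedule_of(drug):
    """Return the schedule a drug falls under in this toy table, or None if it isn't listed."""
    for schedule, info in SCHEDULES.items():
        if drug in info["examples"]:
            return schedule
    return None

print(schedule_of("LSD"))      # I
print(schedule_of("cocaine"))  # II
```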

Leaving out caffeine and alcohol was the only thing that spared us from total economic collapse. Small amounts of progress still trickle through; drugs continue to inspire humanity. This mostly happens with LSD, it seems, probably because the potential of that drug has not been as exhausted as the potential of cocaine and coffee. 

Steve Jobs famously took LSD in the early ’70s, just as the crackdown was revving up. “Taking LSD was a profound experience, one of the most important things in my life,” he said. “LSD shows you that there’s another side to the coin, and you can’t remember it when it wears off, but you know it. It reinforced my sense of what was important — creating great things instead of making money, putting things back into the stream of history and of human consciousness as much as I could.”

Bill Gates has been more coy about his relationship with acid, but when an interviewer for Playboy asked him, “ever take LSD?” he pretty much admitted it. “My errant youth ended a long time ago,” he said in response to the question. “There were things I did under the age of 25 that I ended up not doing subsequently.”

So it seems like LSD had a small role in the lead-up to both Apple and Microsoft. These aren’t just two large companies — these are the two largest publicly-traded companies in the world. Apple is so big that its market capitalization is equal to almost 10% of the annual GDP of the United States (!!!), and it makes up about 7% of the value of the S&P 500. That is very big.
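
(For what it’s worth, here’s the back-of-the-envelope arithmetic behind those percentages. The dollar figures are our own rough assumptions for around the time of writing: roughly $2.1 trillion for Apple’s market cap, $21 trillion for annual US GDP, and $30 trillion for the total value of the S&P 500. And note that comparing a market cap to GDP is comparing a stock to a flow, so take the first ratio loosely.)

```python
# Rough, assumed figures (roughly 2020-2021); not sourced from the essay.
apple_market_cap = 2.1e12   # ~$2.1 trillion
us_annual_gdp    = 21e12    # ~$21 trillion per year
sp500_total_cap  = 30e12    # ~$30 trillion

print(f"{apple_market_cap / us_annual_gdp:.0%}")    # 10% -- "almost 10% of GDP"
print(f"{apple_market_cap / sp500_total_cap:.0%}")  # 7% of the value of the S&P 500
```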

Economic growth is not objectively good by itself. But part of the question here is, “what happened to economic growth around 1970?” When the companies in the global #1 and #2 positions were both founded by people who used LSD, it makes you want to pay attention. It makes you wonder what Jeff Bezos, Larry Page, and Sergey Brin might have tried (though it might not be LSD).

It isn’t just the guys at the top, of course. In 2006, Cisco engineer Kevin Herbert told WIRED magazine that he “solved his toughest technical problems while tripping to drum solos by the Grateful Dead.” According to WIRED, Herbert had enough influence at Cisco that he was able to keep them from drug testing their employees. “When I’m on LSD and hearing something that’s pure rhythm,” says Herbert, “it takes me to another world and into another brain state where I’ve stopped thinking and started knowing.” We’re not sure where he is now, but he was still giving interviews advocating for LSD in 2008.

This is all business, but the impacts are not strictly economic. The big scientific breakthrough made on LSD after the drug shutdown of 1970 is perhaps the most important one of all: Kary Mullis’s invention of polymerase chain reaction (PCR) in 1983.

PCR is basically the foundational method of all modern biochemistry/biomedicine. The New York Times called it, “highly original and significant, virtually dividing biology into the two epochs of before PCR and after PCR.” The scientific community agrees, and Mullis was awarded the Nobel Prize in Chemistry in 1993 for his invention, only seven years after he originally demonstrated the procedure.

Everyone knew that Mullis was big into psychedelics. “I knew he was a good chemist because he’d been synthesizing hallucinogenic drugs at Berkeley,” said one of his colleagues. And Mullis himself makes it pretty clear that LSD deserves a lot of the credit for his discovery. “Would I have invented PCR if I hadn’t taken LSD? I seriously doubt it,” said Mullis. “I could sit on a DNA molecule and watch the polymers go by. I learnt that partly on psychedelic drugs.” If this is even partially true, most progress in bioscience in the past 40 years was made possible by LSD. It may also have inspired Jurassic Park.

(We also want to mention that Mullis was really weird. In addition to being a psychology and sociology denialist, HIV/AIDS denialist, and global warming denialist, he also claims he was visited by a fluorescent “standard extraterrestrial raccoon”, which spoke to him and called him “doctor”. Maybe this is because the first time he took acid, he took a dose of 1,000 micrograms, four times Hofmann’s original monster dose of 250 micrograms and about 10-20 times a normal dose. It really is possible to take too much LSD.)

Drugs continue to influence culture as well, of course, but none of those impacts seem to be as big as the Beatles. Michael Cera is a good actor, but we don’t know if his taking mescaline on-camera for the film Crystal Fairy & the Magical Cactus counts as a major discovery. We do appreciate that they included a crab, however. 

V.

Some accounts of scientific progress suggest that it is driven by foundational technologies, sometimes called “General Purpose Technologies”. For example, Tyler Cowen and Ben Southwood say:

A General Purpose Technology (GPT), quite simply, is a technological breakthrough that many other subsequent breakthroughs can build upon. So for instance one perspective sees “fossil fuels,” or perhaps “fossil fuels plus powerful machines,” as the core breakthroughs behind the Industrial Revolution. Earlier GPTs may have been language, fire, mathematics, and the printing press. Following the introduction of a GPT, there may be a period of radical growth and further additional innovations, as for instance fossil fuels lead to electrification, the automobile, radio and television, and so on. After some point, however, the potential novel applications of the new GPT may decline, and growth rates may decline too. After America electrified virtually all of the nation, for instance, the next advance in heating and lighting probably won’t be as significant. Airplanes were a big advance, but over the last several decades commercial airliners have not been improving very much.

… [An] alternate perspective sees general technological improvement, even in such minor ways as ‘tinkering’, as more fundamental to the Industrial Revolution – and progress since then – as more important than any individual ‘general purpose’ breakthroughs. Or, if you like, the General Purpose Technology was not coal, but innovation itself.

So the foundational technologies driving innovation can be either literal technologies, new techniques and discoveries, or even perspectives like “innovation.”

When we cut off the supply and discovery of new drugs, it’s like outlawing the electric motor or the idea of a randomized controlled trial. Without drugs, modern people have stopped making scientific and economic progress. It’s not a dead stop, more like an awful crawl. You can get partway there by mixing Red Bull, alcohol, and sleep deprivation, but that only gets you so far.

There have been a few discoveries since 1970. But when we do develop new drugs, they get memory-holed. MDMA was originally discovered in 1912, but it didn’t start being used recreationally until about the mid-1970s. Because of this, it originally escaped the attention of the DEA, and for a while it was still legal. By 1985, the DEA made sure it was criminalized. 

Of course, people do still do drugs. But the question is who can do drugs, and who has access to them. When coffee was introduced, any student or lowlife in London could get a cup. Cocaine was more expensive, but doctors seem to have had relatively easy access, and Vin Mariani made the substance available to the masses. LSD has always been pretty cheap, and otherwise broke grad students seem to have had no trouble getting their hands on literally mindbending amounts. For a while, the CIA was paying people to take it!

Now that drugs are illegal, only a small percentage of the population really has reliable access to them — the rich and powerful. This is a problem because drugs only seem to unlock a great creative potential in a small number of people. “I don’t think there is any generalization one can make on this,” said Aldous Huxley. “Experience has shown that there’s an enormous variation in the way people respond to lysergic acid. Some people probably could get direct aesthetic inspiration for painting or poetry out of it. Others I don’t think could.” If we want drugs to help drive our economy and our scientific discovery, we need to make them widely available to everyone, so anyone who wants to can give them a try.

Not everyone needs drugs to have great breakthroughs. “I do not do drugs,” said Salvador Dalí, “I am drugs.” (Though Freud was one of his major influences, so drugs were in his lineage nonetheless.) Einstein doesn’t seem to have done drugs either, but like Dalí, he probably was drugs. 

But right now, we are losing the talent of people in whom drugs would unlock genius. A small number are still rich enough and privileged enough to both take drugs and get away with it. Anyone who has that potential, but who is currently too poor or too marginalized, will never get access to the drugs they need to change the world. Even the rich and well-connected may not be able to get the amount of drugs they need, or get them often enough, to finish their great works. Not everyone is Kary Mullis, able to synthesize their own LSD. Who knows what discoveries we have missed over the last 50 years.

We’ve heard a lot of moral and social arguments for legalizing drugs. Where are the scientific and economic arguments? Drugs are linked with great scientific productivity. Genome sequencing is the last big thing to happen in science, and it was built on PCR, which happened courtesy of LSD.

Drugs are also an enormous market. Commodity trading in drugs was so important to the origin of modern investing that today the ceiling of the New York Stock Exchange is decorated with gold tobacco leaves. Right now the markets for illegal drugs are not only unregulated, they’re untaxed. They’re probably immensely inefficient as well. We can more or less guarantee that your new cocawine startup will have a hard time getting VC backing. 

“It’s very hard for a small person to go into the drug importing business because our interdiction efforts essentially make it enormously costly,” said conservative economist Milton Friedman in 1991. “So, the only people who can survive in that business are these large Medellin cartel kind of people who have enough money so they can have fleets of airplanes, so they can have sophisticated methods, and so on. In addition to which, by keeping goods out and by arresting, let’s say, local marijuana growers, the government keeps the price of these products high. What more could a monopolist want? He’s got a government who makes it very hard for all his competitors and who keeps the price of his products high. It’s absolutely heaven.”

We’ll also note that America’s legal system is infamously slow and backed up. It’s easy to imagine that this is because the legal system is choking itself, trying to swallow all these drug cases, leaving no room to deal with anything else. In 1965, there were about 18,000 marijuana arrests annually. By 1970 the number had increased tenfold, to 180,000. By 2000 it was about 730,000 annually. As a result, we no longer have a functioning legal system.

So maybe things began to crawl in 1970, when we began to take the steam out of our engine of progress. The first big shock was the Controlled Substances Act, but it wasn’t the last. 

VI.

Above, we quoted economist Tyler Cowen on foundational technologies. “The break point in America is exactly 1973,” he says elsewhere, “and we don’t know why this is the case.” Well, we may not know for sure, but we have a pretty good guess: The Drug Enforcement Administration, or DEA, was founded on July 1, 1973.

Before the DEA, enforcement of drug laws was sort of jumbled. According to the DEA’s own history of the period, “Previous efforts had been fragmented by competing priorities, lack of communication, multiple authority, and limited resources.” Nixon called for “a more coordinated effort,” and a few years later the DEA was born. Now there was a central authority enforcing the new laws, so perhaps it is not surprising that 1973, rather than 1970, was the break point. 

What about other countries? The trends since 1970 are global, not limited to the US. It’s not like the DEA is running around the rest of the world enforcing our drug laws on other countries, right? Well, first of all, the DEA is running around the rest of the world enforcing our drug laws on other countries.

Perfectly normal US law enforcement agents in… Afghanistan

Second, the rest of the world has largely followed the United States in criminalizing recreational drug use. This is regulated by a number of United Nations treaties. As a result of these treaties, most of the drugs that are illegal in the US are also illegal in most members of the United Nations.

Cocaine is illegal in most countries, including Canada, New Zealand, China, India, Japan, and Thailand. In Saudi Arabia, you can be executed for it. In Singapore, importing or exporting many drugs carries a mandatory death sentence.  

Friendly Singapore warning card about the death penalty for drug traffickers!

LSD was made illegal by the 1971 UN Convention on Psychotropic Substances, and it remains illegal in all 184 states that are party to the convention. 

The Netherlands has a reputation for being very drug-friendly, but this is largely undeserved. While they do tolerate some drugs (a policy known as gedoogbeleid), most drugs technically remain illegal. “Soft drugs” like marijuana, hash, and “magic truffles” (NOT shrooms — apparently these are different) are tolerated. Note the exact wording from this government website, though: “Although the sale of soft drugs is a criminal offence, coffee shops selling small quantities of soft drugs will not be prosecuted.” 

“Hard drugs”, including cocaine, magic mushrooms, and LSD, are still very much illegal. Even for soft drugs like marijuana, however, you can’t possess more than a small amount for personal use. Producing any amount of any drug — including marijuana! — remains illegal. So even in this notorious drug haven, most drugs are still illegal and heavily restricted.

Any country that broke from this pact and really legalized drugs would see an explosion in its economy and soon, we expect, breakthroughs in its arts and sciences. But the UN wouldn’t like that, and you might wake up to find the DEA burning product in your backyard. So for now, with a small number of exceptions, these substances remain illegal.

VII.

We hear a lot of talk these days about decriminalizing marijuana. This is the right thing to do, but it won’t be enough. Legalizing marijuana is not going to cut it.

Legalizing other drugs is more like it. When asked how he thought America would change if drugs were legalized, Milton Friedman said:

I see America with half the number of prisons, half the number of prisoners, ten thousand fewer homicides a year, inner cities in which there’s a chance for these poor people to live without being afraid for their lives, citizens who might be respectable who are now addicts not being subject to becoming criminals in order to get their drug, being able to get drugs for which they’re sure of the quality. …

I have estimated statistically that the prohibition of drugs produces, on the average, ten thousand homicides a year. It’s a moral problem that the government is going around killing ten thousand people. It’s a moral problem that the government is making into criminals people, who may be doing something you and I don’t approve of, but who are doing something that hurts nobody else. 

Friedman was a conservative’s conservative. He was an advisor to Reagan and to Thatcher. You can hardly get more impeccable conservative credentials than that! But when he looks at drug prohibition, he literally calls it socialism.

Everyone knows that hippies love drugs and want to legalize them. That much is not surprising. What is surprising is that conservatives are so firmly against drugs. It just doesn’t make any sense. Judge Juan Torruella of the First Circuit U.S. Court of Appeals was appointed by Ronald Reagan in 1984. In 1996, he said:

Prohibition’s enforcement has had a devastating impact on the rights of the individual citizen. The control costs are seriously threatening the preservation of values that are central to our form of government. The war on drugs has contributed to the distortion of the Fourth Amendment wholly inconsistent with its basic purposes. …

I detect considerable public apathy regarding the upholding of rights which have been cherished since this land became a constitutional Republic, when it comes to those accused of drug violations. Now I will grant you that people that sell drugs to children and the like are not very nice people, and I do not stand here or anywhere in defense of such heinous conduct. However, we must remember that we do not, and cannot, have one constitution for the good guys and another for the bad ones.

Paul Craig Roberts, an economist who served as Assistant Secretary of the Treasury for Economic Policy under Reagan, said in The Washington Times in 2001:

The conservatives’ war on drugs is an example of good intentions that have had unfortunate consequences. As often happens with noble causes, the end justifies the means, and the means of the drug war are inconsistent with the U.S. Constitution and our civil liberties.

Think about it. In the name of what other cause would conservatives support unconstitutional property confiscations, unconstitutional searches, and Orwellian Big Brother invasions of privacy? …

It is a personal tragedy for a person to ruin his life with alcohol, drugs, gambling or any other vice. But it is a public tragedy when government ruins the lives of millions of its citizens simply because it disapproves of a product they consume.

The “war on drugs” is, in truth, a war on the Constitution, civil liberties, privacy, property, freedom and common sense. It must be stopped.

Legalizing drugs is the right thing to do — from a moral point of view, from an economic point of view, from a scientific point of view. But legalizing drugs won’t be enough. We need new drugs. We need to taste drugs that no one has ever heard of, mysterious new combinations of drugs that no one’s ever tried before. Scientific and economic progress — great discoveries and major companies — comes on the heels of drug discovery. 

Is the Controlled Substances Act really responsible for the general decline since 1970? We’re not sure, but what is clear is that drugs are foundational technologies, like the electric motor, the combustion engine, the semiconductor, or the concept of an experiment. New drugs lead to scientific revolutions. Some of those drugs, like coffee, continue to fuel fields like mathematics and computer science, even hundreds of years later. With apologies to Newton, “If I seem higher than other men, it is because I am standing on the shoulders of giants.”

Investigation: Were Polish Aristocrats in the 1890s really that Obese? by Budnik & Henneberg (2016)

I. 

A friend recently sent us a chapter by Alicja Budnik and Maciej Henneberg, The Appearance of a New Social Class of Wealthy Commoners in the 19th and the Early 20th Century Poland and Its Biological Consequences, which appeared in the 2016 volume Biological Implications of Human Mobility.

A better title would be, Were Polish Aristocrats in the 1890s really that Obese?, because the chapter makes a number of striking claims about rates of overweight and obesity in Poland around the turn of the century, especially among women, and especially especially among the upper classes.

Budnik & Henneberg draw on data from historical sources to estimate height and body mass for men and women in different classes. The data all come from people in Poland in the period 1887-1914, most of whom were from Warsaw. From the height and body mass estimates they can compute an average BMI for each of these groups. (For a quick refresher: BMI is body mass in kilograms divided by the square of height in meters; a value under 18.5 is underweight, over 25 is overweight, and over 30 is obese.)
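For concreteness, here’s a minimal sketch of that calculation in Python. The heights and masses are made-up round numbers chosen only to land near the group averages reported in the next paragraph; they are not figures from the chapter.

```python
# Illustrative only: BMI from body mass and height, with the standard cutoffs.
# The example heights and masses below are hypothetical, not the chapter's data.

def bmi(mass_kg: float, height_m: float) -> float:
    """Body mass index: mass in kilograms divided by height in meters, squared."""
    return mass_kg / height_m ** 2

def category(b: float) -> str:
    if b < 18.5:
        return "underweight"
    elif b < 25:
        return "normal"
    elif b < 30:
        return "overweight"
    return "obese"

print(category(bmi(62, 1.64)))  # ~23.1 -> "normal": roughly a peasant/working-class average
print(category(bmi(70, 1.66)))  # ~25.4 -> "overweight": roughly a middle-class/noble average
```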

They found that BMIs were rather high; somewhat high for every class but quite high for the middle class and nobility. Peasants and working class people had average BMIs of about 23, while the middle class and nobles had average BMIs of just over 25.

This immediately suggests that more than half of the nobles and middle class were overweight or obese. The authors also estimate the standard deviation for each group, which they use to estimate the percentage of each group that is overweight and obese. The relevant figure for obesity is this: 

As you can see, the figure suggests that rates of obesity were rather high. Many groups had rates of obesity around 10%, while about 20% of middle- and upper-class women were obese. 
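To see how a group mean and standard deviation turn into an obesity percentage, here is a minimal sketch that assumes BMI is normally distributed within each group (we don’t know exactly what distribution the authors assumed). The means and SDs below are made-up round numbers, not their estimates.

```python
# Sketch: how a mean and SD imply an obesity rate, assuming BMI is normally
# distributed within a group. The example numbers are hypothetical.
from scipy.stats import norm

def fraction_above(threshold: float, mean: float, sd: float) -> float:
    """Fraction of a normal(mean, sd) distribution lying above the threshold."""
    return norm.sf(threshold, loc=mean, scale=sd)

# With a mean BMI of 25 and an SD of 4, about 10.6% of the group is over BMI 30...
print(fraction_above(30, mean=25, sd=4))   # ~0.106
# ...and bumping the SD to 5 pushes that to ~15.9%. The obesity estimate is
# quite sensitive to the SD, which will matter later on.
print(fraction_above(30, mean=25, sd=5))   # ~0.159
```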

This is pretty striking. One in five Polish landladies and countesses were obese? Are you sure?

To begin with, it contradicts several other sources on what baseline human weight would be during this period. The first is a sample of Union Army veterans examined by the federal government between 1890 and 1900. The Civil War was several decades before, so these men were in their 40s, 50s, and 60s. This is almost exactly the same period, and this sample of veterans was Caucasian, just like the Polish sample, but the rate of obesity in this group was only about 3%.

Of course, the army veterans were all men, and not a random sample of the population. But we have data from hunter-gatherers of both genders that also suggests the baseline obesity rate should be very low. As just one example, the hunter-gatherers on Kitava live in what might be called a tropical paradise. They have more food than they could ever eat, including potatoes, yams, fruits, seafood, and coconuts, and don’t exercise much more than the average westerner. Their rate of obesity is 0%. It seems weird that Polish peasants, also eating lots of potatoes and engaged in backbreaking labor, would be so much more obese than these hunter-gatherers.

On the other hand, if this is true, it would be huge for our understanding of the history of obesity, so we want to check it out. 

Because this seems so weird, we decided to do a few basic sanity checks. For clarity, we refer to the Polish data as reported in the chapter by Budnik & Henneberg as the Warsaw data, since most (though not all) of these data come from Warsaw.

II.

The first sanity check is comparing the obesity rates in the Warsaw data to the obesity rates in modern Poland. Obesity rates have been rising since the 1890s [citation needed], so people should be more obese now than they were back then.

The Warsaw data suggests that men at the time were somewhere between 0% and 12.9% obese (mean of categories = 7.3%) and women at the time were between 8.8% and 20.9% obese (mean of categories = 16.2%). In comparison, in data from Poland in 1975, 7% of men were obese and 13% of women were obese. This suggests that obesity rates were flat (or perhaps even fell) between 1900 and 1975, which seems counterintuitive, and kinda weird. 

In data from Poland in 2016, 24% of men were obese and 22% of women were obese. This also seems weird. It took until 2016 for the average woman in Poland to be as obese as a middle-class Polish woman from 1900? This seems like a contradiction, and since the more recent data is probably more accurate, it may mean that the Warsaw data is incorrect.

There’s another sanity check we can make. Paintings and photographs from the time period in question provide a record of how heavy people were at the time. If the Warsaw data is correct, there should be lots of photographs and paintings of obese Poles from this era. We checked around to see if we could find any, focusing especially on trying to get images of Poles from Warsaw.

We found a few large group photographs and paintings, and some pictures of individuals, and no way are 20% of them obese.

We begin with Sokrates Starynkiewicz, who was president of Warsaw from 1875 to 1892. He looks like a very trim gentleman, and if we look at this photograph of his funeral from 1902, we see that most of the people involved look rather trim as well:

In addition, a photograph of a crowd from 1895:

And here’s a Warsaw street in 1905: 

People in these photographs do not look very obese. But most of the people in these photographs are men, and the Warsaw data suggests that rates of obesity for women were more than twice as high. 

We decided to look for more photographs of women from the period, and found this list from the Krakow Post of 100 Remarkable Women from Polish History, many of whom seem to have been decorated soldiers (note to self: do not mess with Polish women). We looked through all of the entries for individuals who were adults during the period 1887-1914. There are photographs and/or portraits for many of them, but none of them appear to be obese. Several of them were painters, but none of the subjects of their paintings appear obese either. (Unrelatedly, one of them dated Charlie Chaplin and also married a Count and a Prince.)

If rates of obesity were really 20% for middle and upper class women, then there should be photographic evidence, and we can’t find any. What we have found is evidence that Polish women are as beautiful as they are dangerous, which is to say, extremely.

Anna Iwaszkiewicz with a parrot in 1914

III.

If we’re skeptical of the Warsaw data, we have to wonder if there’s something that could explain this discrepancy. We can think of three possibilities. 

The first is clothing: we have a hard time imagining that whoever collected this data got all these 19th-century Poles to agree to be weighed totally naked. If they were wearing all of their clothes, or even some of their clothes, that could explain the whole thing. (It might also explain the large gender and class effects.)

Clothing weighed a lot back then. Just as one example, a lady’s dolman could weigh anywhere between 6 and 12 pounds, and a skirt could weigh another 12 pounds by itself. We found another source that suggested a lady’s entire outfit in the 1880s (though not Poland specifically) would weigh about 25 lbs.
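To get a feel for the size of that effect, here’s a quick back-of-the-envelope sketch. The unclothed weight and height are hypothetical; the 25 lbs of clothing is the figure from the source above.

```python
# Back-of-the-envelope: how much does ~25 lbs of clothing inflate BMI?
# The unclothed weight and height are hypothetical examples.
LB_TO_KG = 0.4536

height_m = 1.60
true_mass_kg = 60.0                               # hypothetical unclothed weight
clothed_mass_kg = true_mass_kg + 25 * LB_TO_KG    # ~11.3 kg of 1880s-style clothing

true_bmi = true_mass_kg / height_m ** 2
clothed_bmi = clothed_mass_kg / height_m ** 2
print(round(true_bmi, 1), round(clothed_bmi, 1))  # 23.4 vs 27.9
```

Four or five extra BMI points is easily enough to push a normal-weight average into the overweight range, and to inflate the apparent obesity rate.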

As far as we can tell, there’s no mention of clothes, clothing, garments, shoes, etc. in the chapter, so it’s quite possible they didn’t account for clothing at all. All the original documents seem to be in Polish and we don’t speak Polish, so it’s possible the original authors don’t mention it either. (If you speak Polish and are interested in helping unravel this, let us know!)

Also, how did you even weigh someone in 1890s Poland? Did they carry around a bathroom scale? We found one source that claims the first “bathroom” scale was introduced in 1910, but they must have been using something in 1890. 

Sir Francis Galton, who may have come up with the idea of weighing human beings, made some human body weight measurements in 1884 at London’s International Health Exhibition. He invited visitors to fill out a form, walk through his gallery, and have their measurements taken along a number of dimensions, including colour-sense, depth perception, sense of touch, breathing capacity, “swiftness of blow with fist”, strength of their hands, height, arm span, and weight. (Galton really wanted to measure the size of people’s heads as well, but wasn’t able to, because it would have required ladies to remove their bonnets.) In the end, they were given a souvenir including their measurements. To take people’s weights, Galton describes using “a simple commercial balance”.

Some of the “anthropometric instruments” Galton used.

Galton also specifically says, “Overcoats should be taken off, the weight required being that of ordinary indoor clothing.” This indicates he was weighing people in their everyday clothes (minus only overcoats), which suggests that the Polish data may also include clothing weight. “Stripping,” he elaborates, “was of course inadmissible.”

Card presented to each person examined. Note “WEIGHT in ordinary-in-door clothing in lbs.” in the lower righthand corner.

Also of interest may be Galton’s 1884 paper, The Weights of British Noblemen During the Last Three Generations, which we just discovered. “Messrs. Berry are the heads of an old-established firm of wine and coffee merchants,” he writes, “who keep two huge beam scales in their shop, one for their goods, and the other for the use and amusement of their customers. Upwards of 20,000 persons have been weighed in them since the middle of last century down to the present day, and the results are recorded in well-indexed ledgers. Some of those who had town houses have been weighed year after year during the Parliamentary season for the whole period of their adult lives.”

Naturally these British noblemen were not being weighed in a wine and coffee shop totally naked, and Galton confirms that the measurements should be, “accepted as weighings in ‘ordinary indoor clothing’.” This seems like further evidence that the Warsaw data likely included the weight of individuals’ clothes. 

Another explanation has to do with measurements and conversions. Poland didn’t switch to the metric system until after these measurements were made (various sources say 1918, 1919, 1925, etc.), so some sort of conversion from outdated units has to be involved. This chapter does recognize that, and mentions that body mass was “often measured in Russian tsar pounds (1 kg = 2.442 pounds).” 

We have a few concerns. First, if it was “often” measured in these units, what was it measured in the rest of the time? 

Second, what is a “Russian tsar pound”? We can’t find any other references for this term, or for “tsar pound”, but we think it refers to the Russian funt (фунт). We’ve confirmed that the conversion rate for the Russian funt matches the rate given in the chapter (409.5 g, which comes out to a rate of 2.442 in the opposite direction), which indicates this is probably the unit that they meant. 

But we’ve also found sources that say the funt used in Warsaw had a different weight, equivalent to 405.2 g. Another source gives the Polish funt as 405.5 g. In any case, the conversion rate they used may be wrong, and that could also account for some of the discrepancy.
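For a sense of scale, here’s a quick sketch of how much the choice of funt changes a converted body mass; the recorded weight is a made-up example.

```python
# How much does the choice of funt change a converted body mass?
# The recorded weight in funts is a hypothetical ledger entry.
RUSSIAN_FUNT_KG = 0.4095   # the value implied by the chapter's 2.442 rate
WARSAW_FUNT_KG = 0.4052    # the value some sources give for the funt used in Warsaw

recorded_funts = 170
print(round(recorded_funts * RUSSIAN_FUNT_KG, 1))  # ~69.6 kg
print(round(recorded_funts * WARSAW_FUNT_KG, 1))   # ~68.9 kg, about 1% lighter
```

A roughly 1% difference in body mass only moves BMI by a fraction of a point, so a bad funt conversion probably can’t explain the whole discrepancy on its own, but it could contribute at the margin.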

The height measurements might be further evidence of possible conversion issues. The authors remark on being surprised at how tall everyone was — “especially striking is the tallness of noble males” — and this could be the result of another conversion error. Or it could be another side effect of clothing, if they were measured with their shoes on, since men’s shoes at the time tended to have a small heel. (Galton measured height in shoes, then the height of the heel, and subtracted the one from the other, but we don’t know if the Polish anthropometers thought to do this.)

A third possibility is that the authors estimated the standard deviation of BMI incorrectly. To figure out how many people were obese, they needed not only the mean BMI of each group but also an estimate of how much variation there was. They describe their procedure for this estimation very briefly, saying “standard deviations were often calculated from grouped data distributions.” (There’s that vague “often” again.)

What is this technique? We don’t know. To support this they cite Jasicki et al. (1962), which is the book Zarys antropologii (“Outline of Anthropology”). While we see evidence this book exists, we can’t find the original document, and if we could, we wouldn’t be able to read it since we don’t speak Polish. As a result, we’re concerned they may have overestimated how much variation there was in body weights at the time.
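For what it’s worth, the textbook way to calculate a standard deviation from grouped data is to treat every observation as if it sat at its bin’s midpoint. We don’t know whether that is what Budnik & Henneberg (or Jasicki et al.) actually did; the bins and counts below are invented for illustration.

```python
# A standard textbook estimate of mean and SD from grouped (binned) data:
# treat every observation in a bin as if it sat at the bin midpoint.
# We don't know whether this matches the chapter's procedure; the BMI bins
# and counts below are made up for illustration.
import math

bins = [(18, 21, 40), (21, 24, 90), (24, 27, 70), (27, 30, 30), (30, 33, 10)]  # (low, high, count)

n = sum(count for _, _, count in bins)
mean = sum((low + high) / 2 * count for low, high, count in bins) / n
var = sum(count * ((low + high) / 2 - mean) ** 2 for low, high, count in bins) / n
sd = math.sqrt(var)
print(round(mean, 2), round(sd, 2))  # 24.0, 3.12 for these invented bins
```

One known issue with the midpoint approach is that it tends to overstate the variance slightly (Sheppard’s correction subtracts the squared bin width over 12 to compensate), and wide bins or crude handling of an open-ended top bin can overstate it further, which would inflate the estimated obesity rates.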

These three possibilities seem sufficient to explain the apparently high rates of obesity in the Warsaw data. We think the Warsaw data is probably wrong, and our best guess for obesity rates in the 1890s is still in the range of 3%, rather than 10-20%.