The Cathedral and the Bazaar is an essay (later a book) by Eric S. Raymond about how Linus Torvalds threw all the normal rules of software out the window when he wrote the operating system Linux.
Back in the day, people “knew” that the way to write good software was to assemble an elite team of expert coders and plan things out carefully from the very beginning. But instead of doing that, Linus just started working, put his code out on the internet, and took part-time help from whoever decided to drop by. Everyone was very surprised when this approach ended up putting out a solid operating system. The success has pretty much continued without stopping — Android is based on Linux, and over 90% of servers today run a Linux OS.
Before Linux, most people thought software had to be meticulously designed and implemented by a team of specialists, who could make sure all the parts came together properly, like a cathedral. But Linus showed that software could be created by inviting everyone to show up at roughly the same time and place and just letting them do their own thing, like an open-air market, a bazaar.
Let’s consider in particular Chapter 4, “Release Early, Release Often.” One really weird thing Linus did was that he kept putting out new versions of the software all the time, sometimes more than once a day. New versions would go out with the paint still wet, no matter how much of a mess they were.
People found this confusing. They thought putting out early versions was bad policy, “because early versions are almost by definition buggy versions and you don’t want to wear out the patience of your users.” Why the hell would you put out software if it were still crawling with bugs? Well,
Linus was behaving as though he believed something like this:
> Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
Or, less formally, “Given enough eyeballs, all bugs are shallow.” I dub this: “Linus’s Law”.
This bottom-up method benefits from two key advantages: the Delphi Effect and self-selection.
> More users find more bugs because adding more users adds more different ways of stressing the program. This effect is amplified when the users are co-developers. Each one approaches the task of bug characterization with a slightly different perceptual set and analytical toolkit, a different angle on the problem. The “Delphi effect” seems to work precisely because of this variation. In the specific context of debugging, the variation also tends to reduce duplication of effort.
>
> So adding more beta-testers may not reduce the complexity of the current “deepest” bug from the developer’s point of view, but it increases the probability that someone’s toolkit will be matched to the problem in such a way that the bug is shallow to that person.
>
> One special feature of the Linux situation that clearly helps along the Delphi effect is the fact that the contributors for any given project are self-selected. An early respondent pointed out that contributions are received not from a random sample, but from people who are interested enough to use the software, learn about how it works, attempt to find solutions to problems they encounter, and actually produce an apparently reasonable fix. Anyone who passes all these filters is highly likely to have something useful to contribute.
>
> Linus’s Law can be rephrased as “Debugging is parallelizable”. Although debugging requires debuggers to communicate with some coordinating developer, it doesn’t require significant coordination between debuggers. Thus it doesn’t fall prey to the same quadratic complexity and management costs that make adding developers problematic.
>
> In practice, the theoretical loss of efficiency due to duplication of work by debuggers almost never seems to be an issue in the Linux world. One effect of a “release early and often” policy is to minimize such duplication by propagating fed-back fixes quickly.
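Raymond’s claim has a simple probabilistic core that is worth sketching out. Here’s a toy model (the per-person detection probability and the head counts are numbers we made up for illustration, not anything from the essay):

```python
# Toy model: if each independent reviewer has probability p of spotting
# a given bug, the chance that at least one of n reviewers spots it is
# 1 - (1 - p) ** n.
def p_spotted(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Coordination cost, by contrast, grows quadratically: n people have
# n * (n - 1) / 2 possible communication channels (Brooks's Law).
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (1, 10, 100, 1000):
    print(n, round(p_spotted(0.01, n), 3), channels(n))
```

Coordination cost blows up quadratically while detection probability climbs toward 1, and the trick of the bazaar is that debuggers only need the coordinating developer, so it mostly pays for the second curve, not the first.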
Research is difficult because reality is complex and many things are confusing or mysterious. But with enough eyeballs, all research bugs are shallow too.
Without a huge research budget and dozens of managers, you won’t be able to coordinate a ton of researchers. But the good news is, you didn’t really want to coordinate everyone anyways. You can just open the gates and let people get to work. It works fine for software!
The best way to have troubleshooting happen is to let it happen in parallel. And the only way to make that possible is for everyone to release early and release often. If you sit on your work, you’re only robbing yourself of the debugging you could be getting for free from every interested rando in the world.
In the course of our obesity research, we’ve talked to water treatment engineers, social psychologists, software engineers, emeritus diabetes researchers, oncologists, biologists, someone who used to run a major primate lab, multiple economists, entrepreneurs, crypto enthusiasts, physicians from California, Germany, Austria, and Australia, an MD/PhD student, a retired anthropologist, a mouse neuroscientist, and
~~a partridge in a pear tree~~ a guy from Scotland.
Some of them contributed a little; some of them contributed a lot! Every one had a slightly different toolkit, a different angle on the problem. Bugs that were invisible to us were immediate and obvious to them, and each of them pointed out different things about the problem.
For example, in our post recruiting for the potato diet community trial, we originally said that we weren’t sure how Andrew Taylor went a year without supplementing vitamin A, and speculated that maybe there was enough in the hot sauces he was using. But u/alraban on reddit noticed that Andrew included sweet potatoes in his diet, which are high in vitamin A. We had totally missed this. Now we recommend that people either eat some sweet potato or supplement vitamin A. We wouldn’t have caught this one without alraban.
In another discussion on reddit, u/evocomp challenged us to consider the Pima, a small ethnic group in the American southwest that was about 50% obese well before 1980, totally bucking the global trend. “What’s the chance that [this] population … [is] highly sensitive and equally exposed to Lithium, PFAS, or whatever contaminants are in SPAM or white bread?” evocomp asked. This led us to discover that the Pima had in fact been exposed to abnormal levels of lithium very early on, about 50x the median American exposure in the early 1970s. Before this, lithium had been just one hypothesis among many, but evocomp’s challenge and the resulting discoveries promoted it to the point where we now think it is the best explanation for the obesity epidemic. Good thing the community is helping us debug!
> My original formulation was that every problem “will be transparent to somebody”. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. “Somebody finds the problem,” he says, “and somebody else understands it. And I’ll go on record as saying that finding it is the bigger challenge.”
This is a classic pattern in the history of science. One person notices something weird; then, 100 years later, someone else figures out what is going on.
Brownian motion was first described by the botanist Robert Brown in 1827. He was looking at bits of pollen in water and was startled to see them jittering all over the place, but he couldn’t figure out why they would do that. This bug sat unsolved for almost eighty years, until Einstein came up with a statistical explanation in 1905, in one of his four Annus Mirabilis papers. Bits of pollen jumping around in a glass of water doesn’t sound very interesting or mysterious, but this was a big deal: Einstein showed that Brownian motion is exactly what you would expect if the pollen were being bombarded from all sides by tiny water molecules. This was strong evidence for the idea that all matter is made up of tiny discrete particles, atoms and molecules, which was not yet well-established in 1905!
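Einstein’s statistical picture is easy to reproduce in miniature. In a toy 1-D random walk (step counts and trial counts picked arbitrarily), a particle kicked randomly each tick wanders away from the origin, and its mean squared displacement grows in proportion to elapsed time, which is the signature Einstein predicted:

```python
import random

def mean_sq_displacement(steps: int, trials: int = 2000) -> float:
    """Average squared distance from the origin after a 1-D random walk."""
    total = 0.0
    for _ in range(trials):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))  # one random molecular kick per tick
        total += x * x
    return total / trials

random.seed(0)
for steps in (10, 100, 1000):
    print(steps, mean_sq_displacement(steps))  # grows roughly in proportion to steps
```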
Or consider DNA. DNA was first isolated from pus and salmon sperm by the Swiss biologist Friedrich Miescher in 1869, but it took until 1953, when Watson and Crick worked out the double helix, before anyone figured out DNA’s structure.
> Complex multi-symptom errors also tend to have multiple trace paths from surface symptoms back to the actual bug. … each developer and tester samples a semi-random set of the program’s state space when looking for the etiology of a symptom. The more subtle and complex the bug, the less likely that skill will be able to guarantee the relevance of that sample.
>
> For simple and easily reproducible bugs, then, the accent will be on the “semi” rather than the “random”; debugging skill and intimacy with the code and its architecture will matter a lot. But for complex bugs, the accent will be on the “random”. Under these circumstances many people running traces will be much more effective than a few people running traces sequentially—even if the few have a much higher average skill level.
This is making an important point: if you want to catch a lot of bugs, a bunch of experts isn’t enough — you want as many people as possible. You do want experts, but you gain an additional level of scrutiny from having the whole fuckin’ world look at it.
Simple bugs can be caught by experts. But complex or subtle bugs are a different beast. For those bugs, the number of people looking at the problem matters much more than their average skill. This is a strong particular argument for putting things on the internet and making them super enjoyable and accessible, rather than putting them in places where only experts will see them.
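To see why numbers can beat skill on subtle bugs, here’s a quick simulation (all the numbers, the state-space size, sample sizes, and head counts, are invented for illustration):

```python
import random

STATES = 1000  # size of the program's state space; the bug hides in one state

def group_finds_bug(people: int, samples_each: int, trials: int = 2000) -> float:
    """Fraction of trials where at least one person's random sample hits the bug."""
    hits = 0
    for _ in range(trials):
        bug = random.randrange(STATES)
        if any(bug in random.sample(range(STATES), samples_each)
               for _ in range(people)):
            hits += 1
    return hits / trials

random.seed(0)
print("5 skilled tracers (50 states each): ", group_finds_bug(5, 50))
print("100 semi-random tracers (10 each):  ", group_finds_bug(100, 10))
```

With these made-up numbers the crowd samples four times as much of the state space in aggregate, so it wins even though each individual is far less thorough: the closed-form chances are about 23% for the experts and 63% for the crowd.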
Not that we need any more reasons, but this is also a strong argument for publishing your research on blogs and vlogs instead of in stuffy formal journals. If you notice something weird that you can’t figure out, you should get it in front of the scientifically-inclined public as soon as possible, because one of them has the best chance of spotting whatever you have missed. Back in the day, the fastest way to get an idea in front of the scientifically-inclined public was to send a manuscript to the closest guy with a printing press, who would put it in the next journal. (Or if possible, go to a conference and give a talk about it.)
But journals today only want complete packages. If you write to them about the tiny animals you found in your spit, they aren’t going to want to publish that. Times have changed. Now the fastest way to get out your findings is to use a blog, newsletter, twitter, etc.
One thought on “Every Bug is Shallow if One of Your Readers is an Entomologist”
Stanford prof Ron Davis worked on the Human Genome Project, which dropped their research online each night. He now runs the Open Medicine Foundation, which is trying to solve ME/CFS through a bazaar-style approach to data sharing. I wouldn’t say it’s *radically* open: patients don’t get to see data, and researchers often know they need publications to get tenure and grants, so they don’t share all their data. It’s butting up against the much more salient incentives in the world of universities. But the idea is sound.
Davis has some great quotes on peer review and its tendency to both fuck up *and* delay papers. I agree 100 eyes on a paper after publication would be better than 3 or 4 prior.
I have to say I hate nothing more than seeing “Submitted 2019-01-01, Published 2022-04-01” on top of some paper I care about. Whole human lifetimes are disappearing while reviewer 2 strokes their own ego.
Journal formats are also too restrictive. Labs should just publish their progress on their own website each week: videos of the rats, data as it comes in, blueprint of the new rat maze, plans for next week’s pipetting, etc. If someone sees you fucking up an experiment, better to find out now!