In 1980 a brand new Xerox 9700 printer was installed in an office of MIT’s computer science department. It often jammed. Several people in that department probably could have fixed it if not for the fact that it ran on proprietary Xerox software.
One of those people, Richard Stallman, tried to program it to notify the rest of his office when it jammed but wasn’t allowed to do that either. Whenever he and his colleagues needed a printout, they first had to check whether it had jammed. The printer was on a different floor.
Stallman asked a colleague with Xerox connections to share the code for the driver. Despite having access, the colleague was contractually obligated to say no.
This experience was a catalyst that solidified Stallman’s resolve to give free software a permanent place in the world. If he couldn’t make all software free, he could at least lay the groundwork for free alternatives. He had the skills to help the world preserve the possibility of fixing your own printer or asking a nerdy neighbor to do so rather than remaining at Xerox’s mercy. He would not tolerate a society where people were actively discouraged from helping each other by the fact that Xerox’s profits might suffer a little.
He soon left MIT to work on GNU, which stands for “GNU’s Not Unix.” Unix was an operating system built and owned by AT&T. Stallman’s goal with GNU was to build a completely free alternative that could run on any computer.
Crucially, when he did this, he created a new type of copyright license to protect it called the GNU General Public License. It’s also called a “copyleft” license because it inverts what a traditional copyright license does. The GPL states not just that anyone can modify or distribute the software but also that any modified versions they distribute must carry the same license. Corporations that wanted to build on top of GNU would have to be careful in how they did so lest their product become free software.
His vision was no pipe dream. In the early 1990s a Finnish student named Linus Torvalds combined what already existed of GNU with the one central piece it still lacked, the kernel, which he wrote himself. The GNU/Linux operating system was born. Stallman, Torvalds, and their globally distributed team showed the world that wildly successful software could be developed without corporate funding.
That such a project could be built this way feels obvious now, but even for many software engineers, it wasn’t obvious back then. In 1997 a hacker named Eric Raymond, inspired by his experiences working on GNU/Linux, published an essay called “The Cathedral and the Bazaar,” arguing that building software in this open, communal, bottom-up way, rather than the private and top-down way favored by most companies, yields inherently better results. Putting vastly more eyeballs on a project’s code, as it turns out, makes it vastly harder for bugs and security flaws to take root. It also doesn’t hurt when a lot of the contributors work on it out of pure interest and fun rather than economic need.
Many huge companies use GNU/Linux extensively (along with other tools developed by its creators, like the version control system Git) and owe much of their profit to it, including software giants like Google, Apple, Facebook, Amazon, and even Microsoft, despite Steve Ballmer once calling Linux a “cancer” because of how its license spreads. They’ve been careful, of course, to avoid doing so in a way that makes their products free.
In 1998 a browser company called Netscape did something crazy. Desperate to stop Microsoft’s Internet Explorer from cutting into their market share, and inspired by Raymond’s essay, they publicly shared the code for their latest web browser, inviting the world to build it with them. In the same initiative, with help from supportive hackers like Raymond himself, they also coined the term “open source.”
The nascent open source community ended up scrapping the browser and starting Mozilla.org instead, but the move heralded the future. Companies were now allowed and even encouraged to start projects where large chunks of development, along with distribution and branding, still happened in-house but where the whole world could contribute. (Projects like React continue to see success under this model.) From their perspective it makes perfect sense: if people are going to build free software anyway, why not benefit from it?
In giving companies a free pass to enter the “open source community,” however, certain hackers said “take what you want and give what you want” to a bunch of organizations built around maximizing the ratio of the former to the latter.
Here we are with Schrödinger’s open source, both broken and unbroken.
It’s unbroken in the sense that people still write terrific software because they can and want to, and the world still benefits from it. There is also, of course, a darker side.
The recent fiascos around log4j’s security flaws and the intentional bricking of the popular npm libraries “colors” and “faker” are merely the latest couple of dancers in a long conga line of problems stemming from the unholy marriage of the cathedral and the bazaar. Companies adopt an open source library without contributing; the library falls into disrepair because its maintainers can’t work for free forever; security vulnerabilities arise that the companies, busy building their cathedrals, don’t notice or address until irreparable damage has been done.
These companies often have the nerve — as Microsoft did in 1998 in response to Netscape’s “stunt” — to claim that the open source maintainers are the ones being risky, and that the cure is pouring even more trust and resources into their company, which definitely knows what it’s doing.
Again, though, they’re incentivized to be this way. Software consumers like you and me, instead of rallying around better free software, want and use their products. The collective demand this creates for good, cheap software falls directly on the maintainers of the free software underlying it.
Facing increasing pressure without proportionate rewards, maintainers can burn out. Even Guido van Rossum, “benevolent dictator for life” of Python, stepped down. Linus Torvalds took a break from Linux. Jacob Thornton, whose talk on the history of open source I’ve been paraphrasing large chunks of, references “cute puppy dog” syndrome: you start a project because it’s fun, but then it grows. It becomes a more thankless job. While it’s hard to pass off your baby to people who won’t care for it like you did, it’s often harder to take care of Clifford the Big Red Dog when everyone wants to play with him and no one wants to finance his kibble.
Filippo Valsorda, an engineer on the Go team at Google with extensive open source experience, recently suggested forcing companies to “professionalize” the maintainers of the open source projects they use, paying them through invoices rather than hiring them. I suspect this suggestion will grow in popularity; it lets companies part with some extra cash rather than with their fundamental assumptions about how freely shareable software should be, or how it should be used on people who never signed up for it. (Some have carried the torch of questioning these assumptions, like the late Aaron Swartz in his unfinished work about the Programmable Web, though they’re few and far between.)
In a 2001 speech where he told the Xerox printer story, Stallman noted that in the 1970s, the heady days of free software being the norm, none of this was an issue. The survival of your project didn’t depend on funding or software licenses. It didn’t really even depend on what model of governance your project had. People contributed work they cared about. They debated it carefully. They delegated decisions about specifics to people who clearly knew and cared about those specifics, probably because they’d already worked on them. There was also no insane reactionary erasure of all governance whatsoever, as some have found in “flat” organizations; there was no leaving people to grapple with the Tyranny of Structurelessness — informal cliques of power — and no driving people to burn out and disengage because they cared more than the higher-ups knew how to handle. (Or because they kept running into buggy proprietary software they weren’t allowed to fix.)
It was all a “do”-ocracy. People weren’t in it for power and clout; they were in it for fun and community. “Cooperation was our way of life,” Stallman said.
For a long time after the Renaissance, European culture actually stayed pretty authoritarian, hierarchical, and bound by strict puritanical codes of morality.
People like Jean-Jacques Rousseau who finally started touting “Enlightenment” values — individual liberty, questioning authority, rational debate, all that good stuff — often did so with traceable, sometimes even explicitly stated, influence from Native Americans, many of whose nuanced and well-reasoned critiques of the West (read up on Kondiaronk for a good example) found their way into widely read books written by New World explorers.
In fact, it was natives of the American northeast and Great Lakes regions who were already living these values. They prized and prioritized personal autonomy, doing virtually nothing they morally disagreed with, to an extent that would make any modern libertarian jealous.
They took these principles so far that a social safety net followed automatically. When you’re going hungry or otherwise in mortal danger, you don’t have much personal autonomy; you’re just trying to survive. The community would help you out. One could just as well say the reverse: in caring for each other, they protected personal autonomy. It doesn’t really matter which way you put it. This was just how they lived. Cooperation was their way of life.
There was no blind, reflexive, dogmatic reverence for the authority of the humans you lived with beyond their ability to morally and logically convince you. Words like “lord” and “commandment” were hard for early missionaries to translate into the natives’ languages. Rigorous notions of private ownership were not really a thing; in practice, these amount to telling everyone else in your community “you cannot access this, even if you find it helpful; it’s mine.” Amassing power over others wasn’t really a thing either. The mechanisms that enabled it in European society, like money and social status, were simply not that important, even where they existed. The European reflex to sanctify and protect ideas like these was alien, especially when it came at the cost of actual human life.
This had the interesting effect of translating directly into the kind of “equality under the law” that the Founding Fathers began enshrining. (If anything, the Greco-Roman traditions that my high school history books instead pointed to when explaining the Founders’ talk of “equality” merely kept it limited to property-owning white males.) The natives, in living this way, flouted the European cultural assumption that individual liberty and social cohesiveness were at odds. It was literally revolutionary.
In the centuries since, the tendency of Western historians and anthropologists, lulled by stereotypes of the “noble savage” (or simply the “savage”), has been to discount the direct shoutouts of people like Rousseau, as though they couldn’t possibly be accurate. These thinkers must have been trying to seem “exotic,” or were just joshing us, right? Indigenous cultures were too “simple,” “innocent,” and “primitive” to know what they were talking about when it came to statecraft, right?
The time has come to show my hand again. Much of what you’ve just read is a direct paraphrase of the beginning of the book The Dawn of Everything by David Graeber and David Wengrow. They go on:
What if the sort of people we like to imagine as simple and innocent are free of rulers, governments, bureaucracies, ruling classes and the like, not because they are lacking in imagination, but because they're actually more imaginative than we are? We find it difficult to picture what a truly free society would be like; perhaps they have no similar trouble picturing what arbitrary power and domination would be like. Perhaps they can not only imagine it, but consciously arrange their society in such a way as to avoid it.
If the natives’ way of life were fundamentally inferior, it should be difficult to find cases where people voluntarily chose it. It isn’t. Benjamin Franklin himself wrote puzzled accounts of people with years of experience living among both Europeans and indigenous people (via adoption, kidnapping, etc.) choosing to live among the Native Americans.
This may be anecdotal, but it points at an important truth. As Wengrow and Graeber write: “There is the security of knowing one has a statistically smaller chance of getting shot with an arrow. And then there’s the security of knowing that there are people in the world who will care deeply if one is.”
The type of world you live in comes from the way your society values and prioritizes different freedoms.
Everyone, by nature, is “free” to do anything, including murder; when we deny the freedom to murder, though, it protects our freedom to do much more. So while freedom is not always a zero-sum game, like when people write useful free software that enables greater autonomy for all, there can be tradeoffs.
The current trouble with open source comes from the fanatical worship and defense of one particular freedom to the exclusion of others. It is the freedom to own and hoard property — to keep software (or land or whatever else) both private and maximally profitable by all means necessary — even at the cost of our freedom to help each other, breathe clean air, enact a 4-day workweek that lets us write more free software, or not be wrongfully imprisoned by Chevron.
As those examples show, the freedom to hoard is tied up in a vicious cycle (well, virtuous for some) with the freedom to use money to influence laws. People from a certain class, like Billy McFarland of Fyre Festival infamy, get 6 years in prison for committing tens of millions of dollars in wire fraud, and that’s when they get caught and publicized; poor people routinely go to jail for life for stealing $50 of food.
As Stallman mentions in his 2001 speech, when something (like a computer, or a justice system) is broken, people stop caring as a defense mechanism:
You know if the computer is constantly frustrating to use, and people are using it, their lives are going to be frustrating, and if they're using it in their jobs, their jobs are going to be frustrating; they're going to hate their jobs. And you know, people protect themselves from frustration by deciding not to care. So you end up with people whose attitude is, "Well, I showed up for work today. That's all I have to do. If I can't make progress, that's not my problem; that's the boss's problem." And when this happens, it's bad for those people, and it's bad for society as a whole.
This has created a feedback loop. Prioritizing the protection of capital creates a justice system that does the same, which leaves many people in the dust; they see a broken system, become apathetic about changing it, and allow it to become even worse. The end result? We have all but enshrined greed and criminalized caring, labeling any efforts to change this fact “socialist.”
Our world depends on systems: software systems, energy systems, healthcare systems, and more. The legwork of keeping them going falls to workers. Instead of incentivizing caring for these people, we have incentivized the greed of the brand names that leech off of them, and many of our systems — increasingly exploited by those brand names for private profit — are falling apart.
In a great though unfortunately paywalled article called “Guardians of the Internet,” Charlie Warzel points out our growing awareness of this. There is a budding realization — not just among developers, but also among Kellogg’s workers and healthcare workers and others — that we need to force changes in how companies pay workers and invest in the systems that sustain them.
Workers in many industries, not just open source software, are increasingly overburdened and undercompensated, largely due to the same dynamics that hobble the justice system. The federal minimum wage, especially compared to inflation, remains a sad joke. Executives make record profits while paying historically low taxes, and with no hard data to show that they deserve this. Just as it is with software, companies take, and fail to give, as much as the law will allow. “Money as speech” means the law will allow a lot. They are always lobbying for it to allow even more.
Do you want your laws to enshrine the freedom to care only and solely for yourself, at the expense of the freedom to cooperate? Does prizing private property and its protection above the protection of all else seem smart, or do community and personal autonomy deserve a place in our laws, and maybe even — sounds crazy, I know — an even bigger one?
As the early free software community shows, living in a complex technological world doesn’t mean the latter is blocked. It can be harder, of course, given all the corporate resistance one may face, but it’s possible.
Since I value autonomy, I’ll respect your choice. I just find it a pretty obvious one.