Reflecting On Stable Totalitarianism
I used to think that I was writing dystopias. Reflecting on that term, what it actually means, and its relationship to totalitarianism taught me that this is not true.
When I found the website 80,000 Hours several years ago, I was immediately entranced. I am passionate about worldbuilding, and I love good worldbuilding.1 The organization behind the website aims to inspire young people to pursue mission-driven careers that can steer humanity in directions that will help us thrive, and it publishes a variety of analyses of the social, environmental, and material challenges that need lots of brain power to tackle. Eventually, I found my way to their page about the risk of stable totalitarianism, which is the inspiration for this post.
One of the implicit hypotheses in my worldbuilding project and my stories is that (most) people do not rebel if they have their basic needs met and can see a future for themselves. Any group that would like to seize power and keep it would need to craft society so that inequalities and grievances are minimized, so that individuals and groups feel as if they have agency and are empowered to make decisions even if their field of choice has been narrowed by the powers that be. The places where this cannot be maintained are where things break down.
80,000 Hours defines stable totalitarianism as “a totalitarian regime lasting many millennia, or even in perpetuity.” The page they built around it seems to link this risk to those related to AI, as they believe that scenario is far more likely than a dictator crafting a successful succession plan. I would counter that machines are easier to unplug than ideologies, and the ideology may remain even if there is a revolving door of leaders. In my work, though, the tesekhaira (“sort of immortal” people) have extreme personal continuity that can impact societies over millennia, especially those tesekhaira who are surrounded by collectives. So … there are agents who operate across long periods of time.
I never thought of this system as totalitarian even when I thought it was dystopian. However, one of the major multistory arcs comes out of the instabilities created when Ameisa lost 20% of its population and 80% of its factory capacity at the beginning of a period called the Blackout, which ran for about 5,000 years. Everything that happens, including the moral ruin of the tesekhaira architects of Ameisi societies, arises from that catastrophic event and the series of mistakes made in the decades that followed. Millennia of strife arising from careless historical blips are fairly normal.2 The system created to cope with those circumstances certainly resembles stable totalitarianism more than the systems on the other worlds I write about, but where is that line even drawn?
Again, machines are easier to unplug than ideologies.
To set up their argument about stable totalitarianism, from its existence as a threat to why they ranked it low, 80,000 Hours draws on historical regimes that burned hot and heavy like O stars before exploding into supernovas of shrapnel.
The Khmer Rouge ruled Cambodia for just four years, yet in that time they murdered about one-quarter of Cambodia’s population.
Even short-lived totalitarian regimes can inflict enormous harm. In the 20th century alone, they committed some of the most horrific crimes against humanity in history.
What follows the quoted segment above is a very short list of very intense atrocities.
One of the flaws in their thinking is assuming that ideologies that burn hot and fast through a population, leading societies to commit massive atrocities, are the types of ideologies that could serve as the ground floor for stable totalitarianism. That is, perhaps, why they had to choose AI as the vector through which such a thing could be enforced.
Stable totalitarianism, to most people living in such a regime, might not look that different from utopia seen through a distorted glass. If you’ve read The Village of Strong Branches, you’ll remember that my main character, Keð, seems perfectly calm about sentiment analysis: machine learning tools that read everyone’s private communications to infer how people are feeling and create anonymized dashboards for decision-makers. If you’ve read A Matter of Oracles, there is a moment towards the end of the novel that I hope you find unsettling.
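To make the mechanism Keð shrugs at a little more concrete, here is a minimal, purely illustrative sketch of that kind of pipeline in Python. Nothing in it comes from the stories or from any real deployment: the keyword lexicon, the district mapping, and every name in it are invented for this example, and a real system would use a trained model rather than word counts. The point is only the shape of the thing: individual messages go in, and only anonymized aggregates come out the other side.

```python
# Purely illustrative sketch, not anything from the books: a toy pipeline in which
# individual messages are scored for sentiment and only district-level aggregates
# survive to the "dashboard". The lexicon, district mapping, and function names are
# all invented for this example; a real system would use a trained model.
from collections import defaultdict
from statistics import mean

# Hypothetical keyword lexicon standing in for an actual sentiment model.
POSITIVE = {"glad", "calm", "hopeful", "content"}
NEGATIVE = {"angry", "afraid", "tired", "resentful"}

def score(text: str) -> float:
    """Crude sentiment score in [-1, 1] from keyword counts."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def dashboard(messages, district_of):
    """Aggregate mean sentiment per district; authors and raw texts are dropped."""
    by_district = defaultdict(list)
    for author, text in messages:
        by_district[district_of[author]].append(score(text))
    return {district: round(mean(scores), 2) for district, scores in by_district.items()}

if __name__ == "__main__":
    district_of = {"a1": "riverside", "a2": "riverside", "a3": "hillside"}
    messages = [
        ("a1", "I am glad and hopeful about the harvest"),
        ("a2", "tired and resentful of the new quotas"),
        ("a3", "calm, mostly content"),
    ]
    print(dashboard(messages, district_of))  # {'riverside': 0.0, 'hillside': 1.0}
```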
Most people in the worlds I write about have it good. All of their basic needs are met. Changing the way things are would require risks that most people are unwilling to take. It looks a lot less like stable totalitarianism, then, and more like a realistic depiction of a set of complex societal choices that are different from the ones we have made. People in the societies I have written expect all technology to be repairable and to last for decades, if not centuries. They expect everything to be recycled. Most people have never personally touched or handled plastics unless they’ve needed specific types of medical services or have received infrastructure job training. They expect the state to provide all of the basic amenities needed for housing, healthcare, and food, the ground floor for people to function. They would think that we’re living in a dystopian nightmare while overlooking some of the atrocities that have allowed them to live so beautifully, such as what happens to families that flout the wealth ceiling. But they would have a bit of a point: How much of a difference is there between, say, the setting of the widely read Hunger Games series and the exploited labor in factories and mines around the real world today? Even on Ameisa, which faces so many challenges, everyone has a place to sleep, healthcare, and food to eat.
As I said in the log line of this post, I used to think that I was writing dystopias. Then, I thought I was writing speculative planetary opera, and then I realized it was hieropoeic fiction and experimental, speculative scenario-building in a controlled environment. I still joked that “everything is secretly terrible” for a while, thinking back to the early worldbuilding days. Reflecting on dystopia, what the term actually means, and its relationship to totalitarianism makes me even more certain about these shifts in classification.
Perhaps I need a better intensifier here. I have been immersed in the creation of a set of worlds in my setting since I was young. As a middle schooler, I gravitated towards the heaps of National Geographic magazines in the school library and would open them, briefly forgetting everything else because the world and the variety of people, places, and ecosystems in it were so interesting. I spent my high school summers on early Wikipedia teaching myself linguistics so I could create realistic conlangs. I even justified paying attention in a college senior seminar on early modern women writers (it wasn’t my first choice) by telling myself it would be useful for realism if I ever chose to write in an epistolary style; it remains one of the most useful and memorable courses from college. Earlier this year, I spent a prolonged, multi-week period so anxious about plate tectonics, and about whether I had accounted for all of the tidal heating impacts on the tackiness of the crust in the Ameisa-Laseå binary planet system, that I consumed everything I could about the development of plate tectonics on Earth.
Would Constantine have made the decision he did if he had known about the rivers of blood it would lead to, or that we would ultimately face massive ecological collapse because of what would follow from his choice?