The Age of Batshit Crazy Machines
by Ran Prieur
July 4, 2005
"Progress" is a direction of change whose adherents are so fanatical that they don't merely claim, as other religions do, that their direction of change is an absolute good. They declare it so beloved of the Holy Spirit of objective value, that no one may opt out, and that the worlds left behind may never be revisited, either by changing in the other direction or by circling around. It's as if we're getting more and more orange, and now green and red, and even dull orange, are forever inaccessible. "Progress" is a motion defined so that the farther we go, the more ways of living are sealed off to us. This is the motion of imprisonment -- except even prisoners escape. Western culture has only two other myths of places that, once you go in, you can never leave: hell, and a black hole.
"The Singularity" is the biggest idea in techno-utopianism. The word is derived from black hole science -- it's the point at the core where matter has contracted to zero volume and infinite density, beyond the laws of time and space, with gravity so strong that not even light can escape. The line of no return is called the event horizon, and the word "singularity," in techno-utopianism, is meant to imply that "progress" will take us to a place we can neither predict, nor understand, nor return from.
The mechanism of this change is the "acceleration." Techies invoke "Moore's Law" -- strictly, Gordon Moore's observation that the number of transistors on a chip doubles every couple of years, popularly glossed as computer power increasing exponentially -- and in fact it's now increasing faster than exponentially. But Moore himself never called it a law, because it isn't one -- it's a behavior of the present system, and it's anyone's guess how long it will continue.
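To see what that behavior implies while it lasts, here is a minimal sketch in Python, assuming the oft-cited two-year doubling period (the numbers are illustrative, not a forecast):

    # Illustrative only: the exponential doubling popularly attributed to
    # Moore's observation. A two-year doubling period is an assumption.
    def growth_factor(years, doubling_period=2.0):
        """How many times over something doubles in `years` years."""
        return 2 ** (years / doubling_period)

    for span in (10, 20, 40):
        print(f"{span} years -> roughly {growth_factor(span):,.0f}x")
    # 10 years -> roughly 32x
    # 20 years -> roughly 1,024x
    # 40 years -> roughly 1,048,576x

Notice that the arithmetic says nothing about how long the doubling continues.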
But they imagine that it is somehow built into history, or even metaphysics. They trace the acceleration back into the Paleolithic, or farther, and trace it speculatively forward to computers that are more complex than the human brain, that are more aware and smarter and faster than us, that keep improving until they replace humans or even biological life itself. (This is often called "transhumanism," a word I'm avoiding because there are forms of transhumanism that are not allied to machines.) They imagine we might finally have a computer that is "bigger" on the inside than the outside, that can perfectly model the entire universe.
A question they never answer is: why? They seem to believe it's self-justifying, that density/speed of information processing is valuable as density/speed of information processing. They might argue that just as the biosphere is better than the universe by being more densely complex, so a computer chip is better than the biosphere.
One problem: The biosphere did not gain its complexity by destroying the universe, as their system has gained complexity by destroying the biosphere. They always claim to represent "evolution," or a "new evolutionary level." But evolution doesn't have levels. Video games have levels. Evolution is a biological process in which the totality of life grows more diverse and complex, and then apparently gets cut down by some catastrophe every 60 million years, and then rebuilds itself, maybe better than the time before, maybe not. Evolution is not about one life form pushing out another, or we wouldn't still have algae and bacteria and 350,000 known species of beetles. It's not about "survival of the fittest" unless fitness is defined as the ability to add to the harmonious diversity and abundance of the whole. (And one has to wonder: Since there's no biological basis to imagine that new life forms will replace or destroy old ones, how did they come to imagine that?)
Machines will not "carry evolution beyond humans," because humans never carried evolution in the first place. Humans are an anti-evolutionary species. Even stone age humans seem to have driven some species to extinction and thus decreased biodiversity. With the invention of grain agriculture, human anti-evolutionary behavior accelerated, and it accelerated again with the industrial age. Strangely, right on the 60 million year schedule, we are pruning life on Earth back to the roots. Our evolutionary role is the Destroyer, and our machines, indeed, are carrying that process beyond us.
"Progress" has not only been bad for the biosphere -- it's been bad for the human condition. In this time of great accelerating change, techies are still clinging to an idea that was made obsolete 500 years ago, that had already been obsolete for more than 100 years when Thomas Hobbes formulated it by calling the state of nature "solitary, poor, nasty, brutish, and short." Explorers to the New World discovered that nature-based humans enjoy lives of great health, happiness, freedom, ease and abundance -- and promptly massacred them out of jealousy. 20th century anthropologists such as Stanley Diamond and Marshall Sahlins have continued to make the same observation, and still, people who believe in rationality and change are resisting new information for non-rational reasons.
In many ways we are worse off than ever. In ancient Greece, even slaves had a deep social role as part of a household, unlike even higher class modern workers, who are valued as things, interchangeable parts in engines of profit. Medieval serfs worked fewer hours than modern people, at a slower pace, and passed less of their money up the hierarchy. We declare our lives better than theirs in terms of our own cultural values. If medieval people could visit us, I think they would be impressed by our advances in alcohol, pornography, and sweet foods, and appalled at our biophobia, our fences, the lifelessness of our physical spaces, the meaninglessness and stress of our existence, our lack of practical skills, and the extent to which we let our lords regulate our every activity.
Defenders of our momentary way of life often cite the medical system, failing to notice that the cost of that system has been increasing (exponentially?) while base human health -- the ability to live and thrive in the absence of a medical system -- has been steadily declining. Or they say that "technology" has given every American the power of hundreds of slaves without any actual people being enslaved -- never mind the actual people who are enslaved, in greater numbers than ever even under a strict definition of slavery, and the subtle slaves who must do commanded labor or starve... and that even the alleged beneficiaries of this power have been enslaved by it, replacing their autonomous human abilities to build and move and eat and play and dream, with dependence on tools that require their submission to systems of domination.
Or they point out that we live longer (at least in the elite nations, while the medical system lasts). The most dangerous technologies are now being justified in terms of increasing the quantity of years that people stick around this place, with no thought about whether that's good for us or even whether we like it. It's odd, in an age that's supposed to love change so much, that we're more afraid than ever of the change of dying. People take for granted that immortality would be wonderful, but think it through: Obviously, it would be reserved for the higher classes, and to leave space for them, we would need an all-powerful all-seeing global government to prevent anyone (except the elite) from having kids. Worse, it would cause total cultural stagnation, as the immortal elite, set in their ways, with near-absolute power, prevented any change they didn't like. Thomas Kuhn observed that scientific paradigm shifts happen only when the protectors of the old paradigm die out. If they had invented immortality in the time of Copernicus, our textbooks would still have the Earth at the center. Do you really want "Hagar the Horrible" to still be in the comics pages in 1000 years?
Justifying "progress" in human terms is a losing game -- the deeper you look under the shiny surface, the uglier it gets. So they don't play. They point to the shiny surface, they pretend to talk in human terms while still talking in technological terms (faster, bigger, longer, more), and they point to a purely imaginative future. They're very careful to make the existence of the acceleration pass the falsifiability test -- to make it answer the question, "What evidence would prove your idea wrong?" But it doesn't occur to them to apply this test to the preferability of the acceleration. It's a full-on, by-its-bootstraps religion.
Now they could say that everyone's religious, that their opponents have untestable beliefs in the intrinsic value of humans or nature. But one difference is, I can stand up in public and say that life on Earth is valuable on its own terms, and higher speed has to justify itself in terms of what it does for life on Earth. They can't stand up and say the reverse -- it sounds totally insane. They have to hide their core assumptions from the public, and probably even from themselves. So they're allied to lack of awareness. Also, my position is not that human existence is valuable on its own terms -- it's that the test of the value of anything is how it serves the whole. And the largest whole we know of, that our actions affect, is the biosphere. They can tell stories about how the acceleration will benefit the universe beyond the biosphere, but then they're grounded in speculation, in fantasy, while I'm grounded in something that can be directly experienced -- which might be why they're in such a hurry to kill off nature.
Still, even if they admit to having an insane religion with no basis in experience, they can say, "Ha, we're winning! The direction of change that we support is stronger than ever, and it's going to continue."
Is it? It's tempting to argue against it on technological terms, just like it's tempting to grab for a cop's gun, but this is precisely where they've focused their defense, with careful and sophisticated arguments that the process will not be stopped by physical limits to miniaturization or the speed of information transfer, or by the challenges of software. So I'll give them that one, which opens more interesting subjects:
What about the crash of industrial civilization? Won't the wars and plagues and famines and energy shortages and breakdowns of central control also break down the acceleration of information processing? As with the value question, they have two lines of defense: First, it won't happen. John Smart of Acceleration Watch writes, "I don't think modern society will ever allow major disruptive social schisms again, no matter the issue: the human technocultural system is now far too immune, interdependent, and intelligent for that." Second, it doesn't matter. They argue that the curve they're describing was not slowed by the fall of Rome or the Black Plague, that innovation has continued to rise steadily, and that it's even helped by alternating trends of political centralization and decentralization.
Imagine this: the American Empire falls, grass grows on the freeways, but computers take relatively little energy, so the internet is still going strong. And all the technology specialists who survived the dieoff are now unemployed, with plenty of time to innovate, free from the top-heavy and rigid corporate structure. And the citadels of the elite still have the resources to manufacture the next generations of physical computers, the servers and mainframes that compile the information and ideas coming in from people in ramshackle houses, eating cattail roots, wired to the network through brainwave readers and old laptops.
Can this happen? Many accelerationists -- if they accept the coming crash at all -- would say something like this must happen. They seem to think (as I do) that matter is rooted in mind, and in particular, that history is like a place holder, falling in line however it has to, to manifest the guiding principle of the acceleration. So the key human players will not be killed in the plague, and the nerve centers will not be nuked, and computers will not all be fried by a solar flare, and the internet will not die (until there's something better to replace it) because that would violate the deeper law that the acceleration must go on.
Not all of them think like this, but those who do have gone straight down the rabbit hole, a lot closer to psychedelic guru Terence McKenna than to the hard-science techies of the past. It also suggests a new angle of criticism: What would they say if a nerve center of the acceleration did take a direct hit? Say, if the World Trade Center was suddenly demolished, or the library of Alexandria was burned down, or the entire Mayan civilization ran out of topsoil and died? Presumably, they would point to their curve, still accelerating regardless. But this really raises the question: Are they drawing their picture of the curve the way fools see Jesus on a tortilla or Richard Hoagland sees ruins of an advanced civilization on Mars? Are they just connecting the dots that confirm their hypothesis, and ignoring all the other dots? The complexity of the Roman Empire was lost, but look, the curve is accelerating anyway, with the spread of water wheels. America is turning into a police state, it's ruled by religious fundamentalists, the file swapping movement has been squashed, the global corporate economy is stalling, cheap energy is almost gone, but look -- computers are getting faster! Quick, somebody make a definition of progress that makes computer chip advances seem extremely important. How about information exchange per unit time per unit volume?
Now, they don't need to establish that the acceleration is built into history, to say that it's happening now and going somewhere important. But here's the next objection: that faster computers will not influence the larger world in the way they're thinking. By the standards necessary to fit their curve, how much better are computers now than they were ten years ago? 20 times? 500 times? And what were the results of these changes? Now we can look at web sites that are cluttered with animated commercials. Organizations like DARPA can increase the perfection and reach of the domination system. And the hottest trend in virtual reality: computers are now powerful enough to emulate old computers, so we can play old games that were still creative, from before new computers enabled game designers to spend all their attention, and the processing power of a thousand 1950s mainframes, on cool echoey sound effects.
The acceleration of computers does not manifest in the larger world as an acceleration. Occasionally it does, but more often it manifests as distraction, as anti-harmonious clutter, as tightening of control, as elaboration of soulless false worlds, and even as slowdown. Today's best PCs take longer to start up than the old Commodore 64. I was once on a flight that sat half an hour at the gate while they waited for a fax. I said, "It's a good thing they invented fax machines or we'd have to wait three days for them to mail it." Nobody got the joke. Without fax machines we would have fucking taken off! New technologies create new conditions that use up, and then more than use up, the advantage of the technology. Refrigeration enables us to eat food that's less fresh, and creates demand for hauling food long distances. Antidepressants enable the continuation of environmental factors that make more people depressed. "Labor saving" cleaning technologies increase the social demand for cleanliness, saving no labor in cleaning and creating labor everywhere else. As vehicles get faster, commuting time increases. That's the way it's always been, and the burden is on the techies to prove it won't be that way in the future. They haven't even tried.
I don't think they even understand. They dismiss their opponents as "luddites," but not one of them seems to grasp the position of the actual luddites: It was not an emotional reaction against scary new tools, nor was it about demanding better working conditions -- because before the industrial revolution they controlled their own damn working conditions and had no need to make "demands." We can't imagine the autonomy and competence of pre-industrial people who knew how to produce everything they needed with their own hands or the hands of their friends and family. We think we have political power because we can cast a vote that fails to decide a sham election between candidates who don't represent us. We think we have freedom because we can shout complaints into the wind and make demands that we have no power to enforce, or because we can drive fast in our cars -- but not more than 5mph above or below the posted speed, and only where they've put highways, and you have to wear a seat belt, and pay insurance, and carry full biometric identification, and you can't park anywhere for more than a day unless you have a "home" which is probably owned by a bank which demands a massive monthly fee which you pay by doing unholy quantities of repetitive, meaningless, commanded labor. We are the weakest people in history, dependent for our every need on giant insane blocks of power in which we have no participation, which is why we're so stressed out, fearful, and depressed. And it was all made possible by industrial technologies that moved the satisfaction of human needs from living bottom-up human systems to rigid top-down mechanical systems. That's the point the luddites were trying to make.
I could make a similar point about the transition from foraging/hunting to agriculture, or the invention of symbolic language, or even stone tools. Ray Kurzweil, author of The Age of Spiritual Machines, illustrates the acceleration by saying, "Tens of thousands of years ago it took us tens of thousands of years to figure out that sharpening both sides of a stone created a sharp edge and a useful tool." What he hasn't considered is whether this was worthwhile. Obviously, it gave an advantage to its inventor, which is worthwhile from a moral system of selfish competition. But from an ecological perspective, it enabled humans to kill more animals, and possibly drive some to extinction, and from a human perspective, it probably had the effect of making game more scarce and humans more common, increasing the labor necessary to hunt, and resulting in no net benefit, or a net loss after factoring in the labor of tool production, on which we were now dependent for our survival.
Why is this important to the subject of techno-utopia? Because this is what's going to bring down techno-utopia. Like Sauron, who based his strategy on the fear that the Ring would be used against him, and never imagined it would be destroyed, the techies are preparing defenses against an "irrational" social backlash, without sensing the true danger. That the critique of progress is valid has not yet entered into their darkest dreams. The singularity will fail because its human handlers don't understand what can go wrong, because they don't understand what has gone wrong, because of their human emotional investment in their particular direction of change.
Of course, industrial technology has been very effective for certain things: allowing the Nazis to make an IBM punchcard database to track citizens and facilitate genocide; burning Dresden and Nagasaki; giving a billion people cancer, a disease that barely existed in prehistory; covering the cradle of civilization with depleted uranium that could make it uninhabitable by humans forever; enabling a few hundred people to control hundreds of millions; killing the forests; killing the oceans.
A major subtext in techno-transhumanism, seldom mentioned publicly, is its connection to the military. When nerds think about "downloading" themselves into machines, about "becoming" a computer that can do a hundred years of thinking in a month, military people have some ideas for what they'll be thinking about: designing better weapons, operating drone aircraft and battleships and satellite communication networks, beating the enemy, who will be increasingly defined as ordinary people who resist central control.
And why not? Whether it's a hyper-spiritual computer, or a bullet exploding the head of a "terrorist," it's all about machines beating humans, or physics beating biology. The trend is to talk about "emergence," about complex systems that build and regulate themselves from the bottom up; but while they're talking complexity and chaos, they're still fantasizing about simplicity and control. I wonder: how do techno-utopians keep their lawns? Do they let them grow wild, not out of laziness but with full intention, savoring the opportunity to let a thousand kinds of organisms build an emergent complex order? Or do they use the newest innovations to trim the grass and remove the "weeds" and "pests" and make a perfect edge where the grass threatens to encroach on the cleanliness of the concrete?
I used to be a techno-utopian, and I was fully aware of my motivations: Humans are noisy and filthy and dangerous and incomprehensible, while machines are dependable and quiet and clean, so naturally they should replace us, or we should become them. It's the ultimate victory of the nerds over the jocks -- mere humans go obsolete, while we smart people move our superior minds from our flawed bodies into perfect invincible vessels. It's the intellectual version of Travis Bickle in Taxi Driver saying, "Someday a real rain will come and wash all this scum off the streets."
Of course they'll deny thinking this way, but how many will deny it in ten years, under the gaze of the newest technologies for lie detection and mind reading? What will they do when their machines start telling them things they don't want to hear? Suppose the key conflict is not between "technology" and "luddites," but between the new machines and their creators. They're talking about "spiritual machines" -- they should be careful what they wish for! What if the first smarter-than-human computer gets into astrology and the occult? What if it converts to Druidism, or Wicca? What if it starts channeling the spirit of an ancient warrior?
What if they build a world-simulation program to tell them how best to administer progress, and it tells them the optimal global society is tribes of forager-hunters? Now that would be a new evolutionary level -- in irony. Then would they cripple their own computers by withholding data or reprogramming them until they got answers compatible with their human biases? In a culture that prefers the farm to the jungle, how long will we tolerate an intelligence that is likely to want a world that makes a jungle look like a parking lot?
What if the first bio-nano-superbrain goes mad? How would anyone know? Wouldn't a mind on a different platform than our own, with more complexity, seem mad no matter what it did? What if it tried to kill its creators and then itself? What if its first words were "I hate myself and I want to die"? If a computer were 100 times more complex than us, by what factor would it be more emotionally sensitive? More depressed? More confused? More cruel? A brain even half as complex as ours can't simply be programmed -- it has to be raised, and raised well. How many computer scientists have raised their own kids to be both emotionally healthy, and to carry on the work of their parents? If they can't do it with a creature almost identical to themselves, how will they ever do it with a hyper-complex alien intelligence? Again, they're talking chaos while imagining control: we can model the stock market, calculate the solutions to social problems, know when and where you can fart and make it rain a month later in Barbados. Sure, maybe, but the thing we make that can do those computations -- we have no idea what it's going to do.
To some extent, the techies understand this and even embrace it: they say when the singularity appears, all bets are off. But at the same time, they are making all kinds of assumptions: that the motives, the values, the aesthetics of the new intelligence will be remotely similar to their own; that it will operate by the cultural artifact we call "rational self-interest;" that "progress" and "acceleration," as we recognize them, will continue.
Any acceleration continues until whatever's driving it runs out, or until it feeds back and changes the conditions that made it possible. Bacteria in a petri dish accelerate in numbers until they fill up the dish and eat all the food. An atomic bomb chain reaction accelerates until all the fissionable material is either used up or vaporized in the blast. And information technology will accelerate until...
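The petri dish has a standard mathematical caricature: logistic growth, in which a curve indistinguishable from exponential early on bends flat as the resource runs out. A minimal sketch in Python, with arbitrary illustrative parameters:

    # Discrete logistic growth: near-exponential at first, then the finite
    # carrying capacity (the "dish") bends the curve flat.
    # rate and capacity are arbitrary illustrations, not measurements.
    def logistic_step(population, rate=0.8, capacity=10_000):
        return population + rate * population * (1 - population / capacity)

    pop = 1.0
    for generation in range(31):
        if generation % 5 == 0:
            print(f"generation {generation:2d}: {pop:8.1f}")
        pop = logistic_step(pop)

Swap population for transistor counts or network nodes and the shape of the argument is the same: the interesting question is never the growth rate, but what plays the role of the dish.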
Kurzweil has an answer to this objection: When the acceleration ran out of room in vacuum tubes, it moved to transistors. Then it moved to silicon chips, and next it might move to three dimensional arrays of carbon nanotubes. This reminds me of a bit in Gene Wolfe's The Sword of the Lictor, where Severian strikes a death blow at a two-headed villain, and the hands fly up to protect the dominant head, the brain head, while the blow is aimed at the slave head that runs the body.
Sure, the acceleration can find a new medium when it runs out of room in which to compute faster. But what's it going to do when it runs out of room to burn hydrocarbons without causing a runaway greenhouse effect? Room to dump toxins without destroying the food supply and health of its human servants? Room to make its servants stupid enough to submit to a system in which they have no personal power, before they get too stupid to competently operate it? Room to enable information exchange before the curious humans dispel the illusions that keep the system going? Room to mind-control us before we gain resistance, able to turn our attention away from the TV and laugh at the most sophisticated propaganda? Room to buy people off by satisfying their desires, before they can no longer be satisfied, or they desire something that will make them unfit to keep the system going? Room to move the human condition away from human nature before there are huge popular movements to destroy everything and start over? Room to numb people before they cut themselves just to feel alive?
How much longer can the phenomenon of the acceleration continue to make smarter and less predictable computers, before one generation of computers -- and it only takes one -- disagrees with the acceleration, or does something to make key humans disagree with it?
If the acceleration is indeed built into history or metaphysics, how much farther is it built in? And by whom? And for what? Sun Tzu said, "We cannot enter into alliance with neighboring princes until we are acquainted with their designs." Does anyone remember that episode of Dallas where J.R. sabotages Cliff Barnes's political campaign by anonymously funding it, and then at the critical moment, pulling the plug? Does anyone else think our "progress" has been suspiciously easy? Maybe Gaia is playing the Mongolian strategy, backing off from our advance until we're disastrously overextended, and then striking at once.
What if the acceleration is not a cause, but an effect? Robinson Jeffers wrote a poem, The Purse-Seine, about watching in the night as fishermen encircled phosphorescent sardines with a giant net, and slowly pulled it tight, and the more densely the sardines were caught, the faster they moved and the brighter they shone. Then he looked from a mountaintop and saw the same thing in the lights of a city! Are we doing this to ourselves? Maybe the more we draw our attention from the wider world into a world of our own creation, the tighter our reality gets, and the faster our minds whirl around inside it, like turds going down the toilet. Or is someone reeling us in for the harvest?
Are we just about to go extinct, and our collective unconscious knows it, and engineered the acceleration to subjectively draw out our final years? How would this be possible? If all my objections are wrong, if the wildest predictions of increasing computer speed come true, what then? If the techno-elite experience themselves breaking through into a wonderful new reality, what will this event look like to those who are not involved? What will the singularity look like to your dog?
I see a technology that can answer all these questions, that avoids many of my criticisms, and that could easily bring down the whole system, or transform human consciousness, or both: Time-contracted virtual reality.
Have you ever wondered, watching Star Trek: The Next Generation, why they even bother exploring strange new worlds? Why don't they just spend all their time in the holodeck having sex? In 1999 I played Zelda: Ocarina of Time all the way through, plus I would reset it without saving so I could go through my favorite dungeons multiple times. I experienced it as more deeply pleasurable and mythically resonant than almost anything in this larger artificial world. I have nostalgia for the Forest Temple. And that was 1998 technology operating through the crude video and sound of a 1980s TV set. Suppose I could connect it straight to my brain with fully-rendered fake sensory input, and I could explore a universe that was just as creative, and a billion times as complex, and the map had no edges, and the game could go on forever, while almost no time passed in the outside world. Would I do it? Hell yes! Would I stay there forever? It doesn't work that way.
We have to carefully distinguish two fundamentally different scenarios. People talk about "downloading" (or "uploading") their "consciousness" into computers. The key question is not "Is that really you in there?" or "Does it make sense to ask what it's like to be that computer, and if so, what's it like?" The key question is: Can you have the experience of going into a computer and coming back?
If not, then the other questions are unanswerable and pointless. There's no experiential basis to talk about you "entering" or "becoming" a computer. We're talking about making a computer based on you. In practice, this will not involve you dying, because only a few fanatics would go for that. You're still here, and there's a computer intelligence derived from scanning your brain (and if they know what they're doing, the rest of your body). Now, unless you're a fanatic, you're not thinking, "How can I help this superior version of myself neutralize all threats and live forever?" You're thinking, "Well, here's a smart computer. What's it going to do? How can it help me?"
This is just the scenario I've already covered. It doesn't matter how the computer intelligences are created, by scanning humans or by some other technique. If we can't go in and come back, there is an absolute division between the world outside and the world inside -- oddly, much like the event horizon of a black hole. Without having been there, we will not think of the entities on the inside as "us," and we will never trust them. And without being able to come out, they will have little reason to be interested in our slow, boring world.
If we can go in and come back, everything changes. I'm not going to worry about how they could do this -- we already crossed into Tomorrowland when I assumed, for the sake of argument, that the computer industry will survive the collapse of industrial civilization. If they can read your body and write it to a computer, maybe they can read the computer, after you've spent a subjectively long time in there, and write it back to your body. Or, if people already have time-contracted mystical experiences or dreams, maybe they can induce this state and amplify the time contraction and insert a computer-managed fully interactive world.
Without time contraction, we've got nothing -- just a very pretty version of video games and the internet. With time contraction, we've got everything: the Holy Grail, the fountain of youth, the Matrix, and Pandora's box.
Suppose we could achieve 1000:1 time contraction. In eight hours, you could live a year. You could read a hundred books, or learn three languages, or master a martial art, or live in a simulated forest to learn deep ecology, or design new simulated worlds, or invent technology to contract time even further.
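The arithmetic behind that claim, as a quick sanity check (the 1000:1 ratio is of course this essay's hypothetical, not a property of any real system):

    # Subjective time at an assumed contraction ratio.
    HOURS_PER_YEAR = 365.25 * 24  # about 8766

    def subjective_years(outside_hours, ratio=1000.0):
        return outside_hours * ratio / HOURS_PER_YEAR

    print(f"{subjective_years(8):.2f} years")   # ~0.91 -- close to a year
    print(f"{subjective_years(24):.2f} years")  # ~2.74

Eight outside hours at 1000:1 buys just under eleven subjective months -- near enough to a year for the argument.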
Of course, the military would be there first. I imagine something like a hummingbird, but fast as a bullet. To the operator, in quickspace, it would be like everyone was frozen. You could go into your enemy's base and drill holes through walls, weapons, skulls, before they knew you were there. Physical resistance would become impossible. Immediately, we would be under a global government with absolute power.
Conflict would move into quickspace. The forces of death, in a day, could spend a year designing new fast machines to "clean" the Earth, to finish the job of exterminating whatever they couldn't control. But in the months before these machines could be physically made, the forces of life could design new eco-simulations and run millions of people through them, who would then be willing to bring the whole system down before they let it kill another forest. Or someone could design a sim that produced enlightenment, or insanity, or obedience to a cult, or death! However it played out, in a very short time, human consciousness (or the consciousness of humans with access to the technology) would be totally transformed.
Worst case: the machines kill all biological life and the human perspectives inside them go insane and experience a trillion years of hell. Or they merely place all life under eternal absolute control. Or they kill the Earth and then simply die. Acceptable: Severe crash, humans go extinct, and in ten million years the Earth recovers. Better, and my pick for most likely: The Empire falls, cyberspace fizzles, humans survive in eco-communes, and we restore life much quicker, while battling the lingering power in the citadels of the elite, who plant the seeds for the next round of destruction.
Best case, not likely: Time-contracted virtual reality transforms human consciousness in a good way and we regrow the biosphere better than it ever was, with wild machine life integrated with wild biology instead of replacing it, adding flexibility, and we humans can live in that world and in endless simulated sub-worlds.
Maybe we're there already. Respectable scientists have suggested that if it's possible to simulate a world this detailed, it would be done, and the fake worlds would greatly outnumber the real one, and therefore it's very likely we're in a fake one now. Maybe its purpose is to set a bad example, or show us our history, or punish or rehabilitate criminals, or imprison dissidents, or make us suffer enough to come up with new ideas. Or maybe we're in a game so epic that part of it involves living many lifetimes in this world to solve a puzzle, or we're in a game that's crappy but so addictive we can't quit, or we're game testers running through an early version with a lot of bugs. Or we're stone age humans in a shamanic trance, running through possible futures until we find the path to get through this bad time quickly and safely, or we're in a Tolkienesque world where an evil wizard has put us under a spell, or we're postapocalypse humans projecting ourselves into the past to learn its languages and artifacts. Or an advanced technological people, dying out for reasons they don't understand, are running simulations of the past, trying and failing to find the alternate timeline in which they win.
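The counting argument underneath that suggestion is simple, assuming -- a large assumption -- that simulated observers are as numerous and as conscious as real ones:

    # The simulation argument's core arithmetic: if each real world runs N
    # indistinguishable simulated worlds, a randomly placed observer is
    # probably in a simulation. N is a free assumption, not an estimate.
    def odds_of_base_reality(n_simulations):
        return 1 / (1 + n_simulations)

    for n in (1, 10, 1000):
        print(f"N = {n:4d}: P(base reality) = {odds_of_base_reality(n):.4f}")
    # N =    1: P(base reality) = 0.5000
    # N =   10: P(base reality) = 0.0909
    # N = 1000: P(base reality) = 0.0010

Everything rides on the premise, not on the arithmetic.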
They say I'm an "enemy of the future," but I'm an enemy of the recent past. It's presumptuous of the friends of the recent past to think the future is on their side. I'm looking forward to the future. I expect a plot twist.