The future of AI is already written

Matthew Barnett, Tamay Besiroglu, Ege Erdil
October 6, 2025

Innovation often appears as a series of branching choices: what to invent, how to invent it, and when. We face such a choice today: should we create agents that fully automate entire jobs, or AI tools that merely assist humans with their work?

Upon closer examination, however, it becomes clear that this is a false choice. Autonomous agents that fully substitute for human labor will inevitably be created because they will provide immense utility that mere AI tools cannot. The only real choice is whether to hasten this technological revolution ourselves, or to wait for others to initiate it in our absence.

This lesson is not unique to AI. Humanity is often imagined to be like a ship captain, able to chart a course, navigate away from storms, and select a destination. Yet this view is wrong.

To a first approximation, the future course of civilization has already been fixed, predetermined by hard physical constraints combined with unavoidable economic incentives. Whether we like it or not, humanity will develop roughly the same technologies, in roughly the same order, in roughly the same way, regardless of what choices we make now.

Rather than being like a ship captain, humanity is more like a roaring stream flowing into a valley, following the path of least resistance. People may try to steer the stream by placing barriers in its path, banning certain technologies and aggressively pursuing others, yet these efforts will only delay the inevitable, not stop the stream from reaching the valley floor.

This may sound surprising. After all, we stopped human cloning worldwide and nearly ended nuclear power in the United States. Doesn’t that prove we can choose which technologies to develop?

In truth, we have far less control over our technological destiny than is often thought.

The tech tree is discovered, not forged

Technological progress occurs in a logical sequence. Each innovation rests on a foundation of prior discoveries, forming a dependency tree that constrains what we can develop, and when. You can’t invent the telescope before discovering how to grind optical lenses, or develop electric lighting before learning how to generate electricity.

We did not design this tech tree; it arose from forces outside our control. The evidence lies in two observations. First, technologies routinely emerge soon after they become possible, often discovered simultaneously by independent researchers who had never heard of each other. Second, isolated societies converge on the same fundamental technologies when facing similar problems and resource constraints.

Simultaneous discovery is common. The Hall–Héroult process now used to smelt the world’s supply of aluminum was discovered in the same year, 1886, by two people separated by an ocean: Charles Martin Hall in the United States and Paul Héroult in France. Both worked independently, using the same method of dissolving aluminum oxide in molten cryolite and extracting the metal by electrolysis. Neither knew of the other’s work.

The jet engine was demonstrated independently in the late 1930s by British engineer Frank Whittle and German engineer Hans von Ohain. Von Ohain’s design was the first to take flight (1939), but Whittle was the first to run his engine (1937).

Perhaps most strikingly, the two founding telephone filings, Alexander Graham Bell’s patent application and Elisha Gray’s caveat, reached the US Patent Office on the very same day, February 14, 1876, just hours apart.

These are not rare occurrences. As Robert K. Merton put it, “the pattern of independent multiple discoveries in science is in principle the dominant pattern rather than a subsidiary one. It is the singletons - discoveries made only once in the history of science - that are the residual cases, requiring special explanation.”

This pattern suggests that technologies emerge almost spontaneously once the necessary conditions are in place: when the prerequisites exist, invention follows quickly. Consider LLMs: based on NVIDIA’s revenue data, we can infer that the compute required to train a GPT-4 level model only became available around 2020. GPT-4 itself was trained just two years later, in 2022.
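For a rough sense of the timing argument, here is a back-of-envelope sketch in Python. Every number in it (per-chip throughput, utilization, GPU count, training duration) is an illustrative assumption rather than a sourced estimate:

```python
# Back-of-envelope: when did GPT-4-scale training compute become feasible?
# Every number below is an illustrative assumption, not a sourced figure.

A100_PEAK_FLOPS = 312e12   # assumed dense BF16 peak throughput per NVIDIA A100, FLOP/s
UTILIZATION = 0.35         # assumed fraction of peak achieved in real training runs
NUM_GPUS = 25_000          # assumed cluster size
TRAINING_DAYS = 95         # assumed training duration

seconds = TRAINING_DAYS * 24 * 3600
total_flop = A100_PEAK_FLOPS * UTILIZATION * NUM_GPUS * seconds

print(f"Total training compute: {total_flop:.1e} FLOP")  # ~2.2e25 FLOP
# This is around the scale commonly estimated for GPT-4's training run.
# The A100 began shipping in volume in 2020, so a cluster on this order
# was not buildable much earlier than that.
```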

Isolated civilizations converge on the same basic technologies. When Hernán Cortés arrived in the New World in 1519, he encountered a civilization that had been developing independently of his own for over 10,000 years. The Aztecs were unlike the Spanish in many ways: they did not speak the same language, practice the same cultural traditions, or worship the same gods.

Yet for all their differences, there were also many striking similarities. Both had independently developed intensive agriculture with irrigation and terracing. Both designed their cities using rectangular street grids centered on public plazas. Both utilized the concept of zero, wove cotton into dyed clothing, used currency for trade, and built monumental stone architecture. Both were hierarchical societies that featured a monarch at the top, a hereditary nobility, bureaucracies to administer taxation, and standing professional armies.

This pattern is normal, not unusual. Civilizations separated by vast distances and time independently developed metallurgy, the wheel, writing, and bureaucratic states. Specifics varied with local circumstances: writing emerged on clay tablets in Mesopotamia, papyrus in Egypt, bamboo strips in China, and bark paper in Mesoamerica. Each civilization worked within its local constraints, using the materials available to it. Yet each came to possess similar technologies when faced with similar problems.

These observations suggest that there is often only one efficient method of solving problems that societies face. Rather than having a free hand in how to develop, societies are fundamentally constrained by what works and what doesn’t. Certain technological and social structures must emerge at given developmental stages, regardless of specific cultural choices.

This principle parallels evolutionary biology, where different lineages frequently converge on the same methods to solve similar problems. The vertebrate eye and the cephalopod eye evolved completely independently, yet both converged on a remarkably similar camera-type design. Both have a cornea, a spherical lens for focusing light, an iris for controlling the amount of light that enters, a retina for resolving sharp details, eye muscles for tracking movement, and an optic nerve for transmitting visual information to their brains.

We do not control our technological trajectory

Nuclear energy is highly regulated, but this does not imply humanity has much control over technology in general. It is easy to constrain a technology when there are readily available substitutes that work about as well at nearly the same cost. Without nuclear energy, humans can still use coal, oil, natural gas, or hydroelectric power to heat their homes and run their devices.

The true test of whether humanity can control technology lies in its experience with technologies that provide unique, irreplaceable capabilities. Rather than looking at nuclear energy, we should look at nuclear weapons. Nuclear weapons are orders of magnitude more powerful than conventional alternatives, which helps explain why many countries developed and continued to stockpile them despite international efforts to limit nuclear proliferation.

History is replete with similar examples. In the 15th and 16th centuries, Catholic monarchies attempted to limit the printing press through licensing and censorship, but ultimately failed to curtail the growth of Protestantism. In the early 19th century, Britain made it illegal to export industrial machinery, but the designs were smuggled abroad anyway, enabling industries to emerge elsewhere in Europe and in the United States. The United States attempted to restrict strong encryption technology in the 1990s by classifying it as a munition, prosecuting developers, and promoting mandatory government backdoors through the Clipper Chip, but these efforts collapsed within five years as encryption software spread globally through the Internet.

Human cloning appears to be a genuine counterexample, but it is important to consider the timescales involved. Even if fully developed, cloning would take many decades, if not centuries, to deliver meaningful competitive advantages. It has only been about one human generation since human cloning became technologically feasible. The fact that we have not developed it after only one generation tells us relatively little about humanity’s ability to resist technologies that provide immediate and large competitive advantages.

The broader picture suggests that when a technology offers quick, overwhelming economic or military advantages to those who adopt it, efforts to prevent its development will fail. Delaying or regulating its use may be possible, but forgoing the technology entirely seems to be beyond our capabilities. Transformative technologies will be developed anyway.

Full automation is inevitable

AI is a powerful example of a technology that cannot be easily constrained. Since any task can, in principle, be performed by a machine, AI promises to increase productivity in virtually every domain of economic activity. The rapid economic growth that will likely result from deploying advanced AI means that any nation that chooses not to adopt it will quickly fall far behind the rest of the world.

Yet there are many who believe, or at least hope, that we can seize the benefits of AI without making human labor obsolete. They imagine that we can just build AIs that augment or collaborate with human workers, ensuring that there is always a place for human labor.

These hopes are, unfortunately, mistaken. In the short run, AIs will augment human labor because their capabilities are limited. But in the long run, AIs that fully substitute for human labor will likely be far more competitive, making their creation inevitable.

Consider two possible ways to develop technology to clean rooms. The first method involves augmenting a human worker, perhaps by giving them a vacuum, or a robotic arm to help pick up trash. The second method is to build a fully autonomous humanoid robot that cleans an entire room like a human would, without needing human involvement at all.

The first approach is inherently constrained by human effort. For a room to be cleaned, a human needs to be in the room and use the tools available. The second approach lifts this bottleneck and becomes highly scalable: a thousand robots could be manufactured to clean a thousand rooms simultaneously, needing zero human workers.
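A minimal sketch of this scaling difference, with arbitrary illustrative numbers:

```python
# Toy model of cleaning output: augmentation vs. full automation.
# All parameters are arbitrary and purely illustrative.

def rooms_cleaned_augmented(humans: int, tools: int, rooms_per_worker: int = 5) -> int:
    """Augmentation: each cleaned room requires both a human and a tool,
    so output is capped by whichever is scarcer, in practice the workforce."""
    return min(humans, tools) * rooms_per_worker

def rooms_cleaned_autonomous(robots: int, rooms_per_robot: int = 5) -> int:
    """Full automation: output scales with however many robots are manufactured."""
    return robots * rooms_per_robot

# With 10 human cleaners, a thousand tools are no better than ten:
print(rooms_cleaned_augmented(humans=10, tools=1_000))  # 50
# With autonomous robots, output grows with manufacturing capacity:
print(rooms_cleaned_autonomous(robots=1_000))           # 5000
```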

This logic applies to virtually every job a human can perform, which suggests that we will eventually see the automation of virtually all human jobs. Companies that recognize this fact will be better positioned to play a role in the coming technological revolution; those that don’t will either struggle to succeed or will be forced to adapt.

Full automation is desirable

Even if you accept the inevitability of full automation, you might still think that we should delay this outcome in order to keep human labor relevant as long as possible. This sentiment is understandable but ultimately misguided. The upside of automating all jobs in the economy will likely far exceed the costs, making it desirable to accelerate, rather than delay, the inevitable.

This upside is not just about creating vast material abundance. It includes the development of medical cures that could dramatically extend the lifespans of billions of people alive today. Fully automating the industries that support innovation will spur an age of invention without precedent. We might even gain the ability to eliminate chronic pain and suffering, elevate our own cognition, sculpt our emotional states with precision, and enhance our capacity for joy. Today’s peak experiences may eventually feel like pale shadows of what will one day be possible.

Little can stop the inexorable march towards the full automation of the economy. We should be glad.

Want to help accelerate the inevitable? We’re hiring software engineers.
