The advent of artificial intelligence might be just the latest stage in a guiding biological process that has produced ever more complex, mutually dependent organisms over the history of life.
Ten years ago, I would have turned my nose up at the idea that we already understood how to get machines to think. In the 2010s, my team at Google Research was working on a wide variety of artificial-intelligence models, including the next-word predictor that powers the keyboard on Android smartphones. Artificial neural networks of the sort we were training were finally solving long-standing challenges in visual perception, speech recognition, game playing and many other domains. But it seemed absurd to me that a mere next-word predictor could ever truly understand new concepts, write jokes, debug code or do any of the myriad other things that are hallmarks of human intelligence.
‘Solving’ this kind of intelligence would surely require some fundamentally new scientific insight. And that would probably be inspired by neuroscience — the study of the only known embodiment of general intelligence, the brain.
My views back then were comfortably within the scientific mainstream, but in retrospect were also tinged with snobbery. My training was in physics and computational neuroscience, and I found the Silicon Valley hype distasteful at times. The cultish view that Moore’s law — the observation that computing power increases exponentially over time — would solve not only every technological problem, but also all social and scientific ones, seemed naive to me. It was the epitome of the mindset “when you have a hammer, everything looks like a nail”.
I was wrong. In 2019, colleagues at Google trained a massive (for the time) next-word predictor — technically, a next-token predictor, with each token corresponding to a word fragment — codenamed Meena [1]. It seemed able, albeit haltingly, to understand new concepts, write jokes, make logical arguments and much else. Meena’s scaled-up successor, LaMDA, did better [2]. This trend has continued since. In 2025, we find ourselves in the rather comical situation of expecting such large language models to respond fluently, intelligently, responsibly and accurately to all sorts of esoteric questions and demands that humans would fail to answer. People get irritated when these systems fail to answer appropriately — while simultaneously debating when artificial general intelligence will arrive.
Large language models can be unreliable and say dumb things, but then, so can humans. Their strengths and weaknesses are certainly different from ours. But we are running out of intelligence tests that humans can pass reliably and AI models cannot. By those benchmarks, and if we accept that intelligence is essentially computational — the view held by most computational neuroscientists — we must accept that a working ‘simulation’ of intelligence actually is intelligence. There was no profound discovery that suddenly made obviously non-intelligent machines intelligent: it did turn out to be a matter of scaling computation.
Other researchers disagree with my assessment of where we are with AI. But in what follows, I want to accept the premise that intelligent machines are already here, and turn the mirror back on ourselves. If scaling up computation yields AI, could the kind of intelligence shown by living organisms, humans included, also be the result of computational scaling? If so, what drove that — and how did living organisms become computational in the first place?
Over the past several years, a growing group of collaborators and I have begun to find some tentative, but exciting answers. AI, biological intelligence and, indeed, life itself might all have emerged from the same process. This insight could shed fresh light not just on AI, neuroscience and neurophilosophy, but also on theoretical biology, evolution and complexity science. Moreover, it would give us a glimpse of how human and machine intelligence are destined to co-evolve in the future.
Predictive brains
The idea that brains are essentially prediction machines isn’t new. German physicist and physician Hermann von Helmholtz advanced it in the nineteenth century in his Treatise on Physiological Optics (1867). The idea was developed further by the founders of cybernetics, especially US mathematician Norbert Wiener, in the early 1940s — a starting point of modern, neural-net-based AI research.
Wiener realized [3] that all living systems have ‘purposive’ behaviours to stay alive, and that such actions require computational modelling. Our internal and external senses enable us to compute predictive models, both of ourselves and of our environment. But these are useful only if we can act to affect the future — specifically, to increase the odds that we will still be a thriving part of it. Evolution selects for entities that use predictions to make the best survival decisions. The actions we take, and the observations that ensue, become part of our past experience, creating a feedback loop that enables us to make further predictions.
Hunting is a prime example of this predictive modelling. A predator must predict actions that will get the prey into its stomach; the prey must predict the predator’s behaviour to stop that from happening. Starting in the 1970s, neuropsychologists and anthropologists began to realize that other intelligent entities are often the most important parts of the environment to model — because they are the ones modelling you back, whether with friendly or hostile intent [4]. Increasingly intelligent predators put evolutionary pressure on their prey to become smarter, and vice versa.
[Image: A pod of humpback whales bubble-net feeding in the Pacific Ocean, with a flock of gulls in flight above and a treelined shore in the background.]
The pressures towards intelligence become even more intense for members of social species. Winning mates, sharing resources, gaining followers, teaching, learning and dividing labour: all of these involve modelling and predicting the minds of others. But the more intelligent you become — the better to predict the minds of others (at least in theory) — the more intelligent, and thus hard to predict, those others have also become, because they are of the same species and doing the same thing. These runaway dynamics produce ‘intelligence explosions’: the rapid evolutionary increases in brain size that have been observed in highly social animals, including bats, whales and dolphins, birds and our own ancestors.
During a social intelligence explosion, individuals get smarter, but so do groups. Bigger brains can model more relationships, allowing groups to become larger while retaining social cohesion. Sharing and division of labour enable these larger social units to do much more than individuals can on their own.
Take humans. Individually, we aren’t much smarter than our primate ancestors. Humans raised in the wild, like the fictional Mowgli in Rudyard Kipling’s The Jungle Book (1894), would seem unexceptional relative to the forest’s other large-ish animals — if, indeed, they survived at all. But in large numbers, humans can achieve many improbably complex feats beyond any individual’s cognitive or physical capacity: transplanting organs, travelling to the Moon, manufacturing silicon chips. These feats require cooperation, thinking in parallel and division of labour. They are group-level phenomena, and can justifiably be called superhuman.
Symbiogenic transitions
What applies to human sociality arguably also applies to every previous major evolutionary transition throughout life’s history on Earth. These include the transition from simple prokaryotic cells to more-complex eukaryotic ones, from single-celled life to multicellular organisms, and from solitary insects to colony-dwellers. In each case, entities that previously led independent lives entered into a close symbiosis, dividing labour and working in parallel to create a super-entity [5].
A growing body of evidence suggests that this ‘symbiogenesis’ is much more common than has generally been supposed. Horizontal gene transfer between cells, the incorporation of a useful retroviral element into a host’s genome and symbiotic bacteria establishing themselves in an animal’s gut are commonplace examples that would not count as ‘major’ transitions. Yet they have certainly produced organisms with innovative capabilities. The ability of termites to digest wood, for instance, depends entirely on enzymes produced by symbiotic microorganisms. The formation of the placental barrier in humans depends on syncytin, a protein derived from the envelope of a retrovirus that fused into the mammalian germ line tens of millions of years ago.
Standard Darwinian evolution, involving the familiar mechanisms of mutation and selection, has no intrinsic bias towards increasing complexity. It is this less familiar mechanism of symbiogenesis that gives evolution its arrow of time: life progresses from simple to more-complex forms when existing parts merge to form new super-entities. This process speeds up over time, as the catalogue of parts available to be combined afresh increases in size and sophistication. Over the past billion years, symbiogenesis has produced increasingly complex nervous systems, colonies of social animals — and eventually our own technological society.
Is this nature’s version of Moore’s law? Yes and no. As originally formulated in 1965 by US engineer Gordon Moore, the co-founder of chip company Intel, the ‘law’ states that transistor size shrinks exponentially [6]. This translates into exponential declines in computer size, cost and power consumption, and exponential increases in operating speed.
Biological cells have not become exponentially smaller or faster throughout evolution. The advent of electrically excitable neurons sometime around 650 million years ago introduced a fast new computational timescale, but that was a one-off: since then, neurons have not become faster or smaller, nor have their energetic requirements decreased. This does not obviously resemble Moore’s law as it played out in the twentieth century.
But look at the law in the twenty-first century, and a connection becomes more apparent. Since around 2006, transistors have continued to shrink, but the rise in semiconductor operating speed has stalled. To keep increasing computer performance, chip-makers are instead adding more processing cores. They began, in other words, to parallelize silicon-based computation. It’s no coincidence that this is when modern, neural-net-based AI models finally began to take off. Practically speaking, neural nets require massively parallel processing; for a single modern processor to sequentially perform the trillion or so operations needed for a state-of-the-art large language model to predict the next token in a sequence would take minutes.
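The back-of-envelope arithmetic behind that last claim can be made explicit. In the sketch below, every throughput figure is an illustrative assumption of mine, not a benchmark of any real processor:

```python
# Illustrative, assumed figures -- not measurements of any real system.
ops_per_token = 1e12         # ~a trillion operations to predict one token
serial_ops_per_sec = 1e10    # generous single-core arithmetic throughput
parallel_ops_per_sec = 1e14  # aggregate throughput of a parallel accelerator

serial_seconds = ops_per_token / serial_ops_per_sec
parallel_seconds = ops_per_token / parallel_ops_per_sec
print(f"serially: {serial_seconds:.0f} s per token")       # minutes per sentence
print(f"in parallel: {parallel_seconds:.4f} s per token")  # interactive speeds
```

Under these assumptions, a single core would need minutes to produce one sentence; spreading the same work across many parallel units brings it down to interactive speeds.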
This starts to look a lot more like the story of biological intelligence. AI emerged not through speed alone, but from a division of labour arising from the cooperation of many simple computational elements running in parallel: what we might term technological symbiogenesis.
In this context, computer science is a natural science as well as an engineering discipline. Humans did not invent computation any more than they did electric current or optical lenses. We merely re-discovered a phenomenon nature had already exploited, developed mathematical theories to understand it better and worked out how to engineer it on a different substrate. Our phones, laptops and data centres could aptly be called ‘artificial computers’.
Computogenesis
If symbiogenesis explains the evolution of natural computational complexity and the emergence of intelligence, how and why did nature first become computational? Work that I and colleagues have been doing on artificial life over the past couple of years helps to clarify this.
To set the scene, imagine an enormous variety of randomly configured feedback mechanisms that are simple enough to arise spontaneously in a thermally variable environment such as that of Earth. Now, assume that each of these mechanisms can work only within some narrow temperature range. After a while, the mechanisms that persist will be the ones that work as thermostats, maintaining their temperature within the right range, so that they can continue to operate. This thought experiment illustrates how purposive behaviour, oriented towards self-preservation — a kind of proto-life — can emerge from random initial conditions.
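The selection dynamic in this thought experiment is easy to simulate. Below is a toy model of my own devising, with arbitrary parameters: a population of randomly wired feedback rules in a fluctuating thermal environment, in which only the configurations that happen to act as thermostats tend to persist.

```python
import random

random.seed(0)

def make_mechanism():
    """A randomly configured feedback rule: a narrow operating range plus
    an arbitrary (possibly useless) response to being too cold or too hot."""
    low = random.uniform(0.0, 50.0)
    return {
        "low": low,
        "high": low + random.uniform(1.0, 10.0),
        "heat_when_cold": random.choice([True, False]),
        "cool_when_hot": random.choice([True, False]),
        "temp": random.uniform(0.0, 60.0),
        "alive": True,
    }

def step(m, drift):
    """The environment perturbs the temperature; the mechanism responds
    according to its wiring; straying far outside its range destroys it."""
    m["temp"] += drift
    if m["temp"] < m["low"]:
        m["temp"] += 2.0 if m["heat_when_cold"] else -2.0
    elif m["temp"] > m["high"]:
        m["temp"] -= 2.0 if m["cool_when_hot"] else -2.0
    if not (m["low"] - 5.0 < m["temp"] < m["high"] + 5.0):
        m["alive"] = False

population = [make_mechanism() for _ in range(10_000)]
for _ in range(200):
    drift = random.uniform(-1.5, 1.5)  # shared thermal fluctuation
    for m in population:
        if m["alive"]:
            step(m, drift)

survivors = [m for m in population if m["alive"]]
thermostats = [m for m in survivors
               if m["heat_when_cold"] and m["cool_when_hot"]]
print(f"{len(survivors)} survivors, {len(thermostats)} true thermostats")
```

Only a quarter of the initial population is wired as a true thermostat, yet thermostats dominate the survivors: purposive, self-preserving behaviour selected out of random configurations.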
Even a thermostat is, by definition, performing a computation: it implements a behaviour (turn the heat on or off) that is conditional on an information input (the temperature). Thus, a minimal kind of computation — perhaps nothing more than an ‘if … then’ operation — will arise and persist whenever the output can affect the likelihood of whatever is doing the computation continuing to exist.
This kind of simple operation is still a long way from a general-purpose computer, which was defined by English computing pioneer Alan Turing using a theoretical construct we refer to today as a universal Turing machine. It consists of a ‘head’ that can move left or right along a tape, reading, writing and erasing symbols on that tape according to a table of rules. Turing realized that a rule table can also be encoded as a sequence of symbols on the tape — what we’d now call a program. Certain rule tables exist that will cause the machine to read that program from the tape, performing any computation it specifies.
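To make the construct concrete, here is a minimal rule-table interpreter of my own (the two-rule machine below is deliberately trivial and not universal; Turing's insight was that certain rule tables make the machine read its program from the tape itself):

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Execute a rule table: rules[(state, symbol)] = (write, move, next_state).
    The tape is a dict from position to symbol; unwritten cells read as '_'."""
    tape = dict(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return tape

# A two-rule machine that appends a '1' to a unary number:
# scan right over the 1s, write a 1 on the first blank, halt.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

tape = run_turing_machine(increment, {0: "1", 1: "1", 2: "1"})
unary = "".join(tape.get(i, "_") for i in range(5))
print(unary)  # 1111_
```

The head, tape and rule table are all there; making the machine universal is 'only' a matter of writing a rule table that interprets an encoded program on the tape.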
[Image: Six combine-like robots harvest rice in a V-formation, with a human in a vehicle ahead of them.]
Around 1950, John von Neumann, another founding figure in computer science, discovered a remarkable link between this general-purpose computation and biology [7]. Living systems must heal, grow or reproduce — ‘autopoietic’ processes that involve partial or complete self-construction [8]. For a complex system to build a copy of itself, it must contain an instruction tape specifying the steps needed for self-assembly, a tape-copying machine to endow the copy with its own tape and a ‘universal constructor’ that can execute the tape’s instructions. The manual for building the tape-copying machine and universal constructor must also be included on that tape.
Remarkably, von Neumann thus predicted, on purely theoretical grounds, the function of DNA (the instruction tape), DNA polymerase (the tape-copying machine) and the ribosome (the universal constructor). But most importantly for our purposes, he showed that the universal constructor is a universal Turing machine. Reproduction is a form of general computation. Biology is messy and complex: how organisms develop is influenced by many random contingencies, higher-level interactions between living tissues and feedback mechanisms with the environment that go beyond what is encoded in DNA. But at a fundamental level, life is — literally, by construction — an embodied instance of general-purpose computation.
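Von Neumann's scheme can be caricatured in a few lines of code. This is my own toy data-structure sketch, not his cellular-automaton construction: a replicator is an instruction tape plus machinery that both executes the tape and copies it into the offspring.

```python
def copy_tape(tape):
    """The 'tape-copying machine': duplicates the instructions verbatim."""
    return list(tape)

def construct(tape):
    """A stand-in 'universal constructor': builds whatever the tape specifies.
    Crucially, the tape also describes the copier and constructor themselves."""
    return {"parts": list(tape)}

def reproduce(replicator):
    child = construct(replicator["tape"])          # build the offspring's body
    child["tape"] = copy_tape(replicator["tape"])  # endow it with its own tape
    return child

parent = {"parts": ["copier", "constructor"],
          "tape": ["copier", "constructor"]}
child = reproduce(parent)
grandchild = reproduce(child)
print(grandchild == parent)  # True: each generation rebuilds the same structure
```

The separation of roles is the point: the tape is read twice, once as instructions to execute (like DNA translated by ribosomes) and once as data to copy (like DNA duplicated by polymerase).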
The question remains of how minimal, self-perpetuating ‘thermostats’ became general-purpose computers that continually construct and reproduce themselves — that is, life. We have already encountered the answer: symbiogenesis.
How so? If a set of computing instructions is theoretically able to emulate a universal Turing machine, it is called Turing-complete. But a non-Turing-complete subset of those instructions might still be able to compute something — for instance, the ‘if, then’ of a thermostat’s operation. In the kind of pre-biotic environment we might imagine, say around a hydrothermal vent at the bottom of Earth’s oceans in the Hadean eon some four billion years ago, every chemical reaction can be thought of as a potential instruction in a stochastic computational ‘soup’: if these molecules are present, then react to produce this. Directly or indirectly, the products of some of these reactions probably acted as catalysts to those same reactions, spontaneously forming autocatalytic loops analogous to those self-perpetuating thermostats. Long-chain molecules that form through polymerization can, in turn, act as information-carrying tapes — as they still do, in our DNA and RNA. From this starting point, all it would have taken to spark life is a symbiogenic event that combined the information carrier with a suitable Turing-complete ensemble of chemical reactions.
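A toy stochastic soup, of my own devising with arbitrary species and rates, illustrates how a reaction whose product catalyses its own formation comes to dominate an otherwise identical uncatalysed one:

```python
import random

random.seed(1)

# Each reaction is an 'if ... then' instruction: if the reactants (and,
# usually, a catalyst) are present, produce the product. 'a' and 'b' are
# food molecules replenished from outside; 'x' catalyses its own formation,
# while 'y' is produced by an otherwise identical uncatalysed reaction.
reactions = [
    {"reactants": ("a", "b"), "product": "x", "catalyst": "x"},
    {"reactants": ("a", "b"), "product": "y", "catalyst": None},
]

counts = {"a": 0, "b": 0, "x": 0, "y": 0}
SPONTANEOUS = 0.01  # rare uncatalysed firing, standing in for thermal noise

for _ in range(5000):
    counts["a"] += 1  # steady inflow of food
    counts["b"] += 1
    r = random.choice(reactions)
    if all(counts[s] > 0 for s in r["reactants"]):
        catalysed = r["catalyst"] is not None and counts[r["catalyst"]] > 0
        if catalysed or random.random() < SPONTANEOUS:
            for s in r["reactants"]:
                counts[s] -= 1
            counts[r["product"]] += 1

print(counts["x"], counts["y"])  # the self-catalysing product dominates
```

Once a single molecule of the self-catalysing product appears by chance, its reaction fires every time it is attempted, while the uncatalysed reaction stays at the noise floor: an autocatalytic loop bootstrapping itself out of the soup.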
The emergence of life in a previously lifeless Universe was a great puzzle for Charles Darwin and his contemporaries. It’s hard to imagine how life could have arisen without a mechanism for pre-existing, non-living parts to assemble into something new and alive. Symbiogenesis is that mechanism. Geneticist Theodosius Dobzhansky famously observed that “nothing in biology makes sense except in the light of evolution”. You might say that nothing in evolution — starting with the origin of life itself — makes sense except in the light of computational symbiogenesis.
Our future with machines
In standard Darwinian evolution, a mutation generally gives rise to a new genetic strain that competes directly with the unmutated one. If the mutant has a reproductive advantage, the original will die out; if the original maintains an advantage, it will persist instead. Thus, the standard evolutionary process, operating on a single species, is an algorithm for optimizing evolutionary fitness. It’s also a zero-sum game: one genetic strain replaces another.
By contrast, because symbiogenesis involves the creation of entirely new entities, these organisms can occupy or even create fresh niches — and themselves be niches for their constituents. Hence, symbiogenesis need neither be an optimization algorithm nor a zero-sum game. It is better thought of as an open-ended, creative process.
This doesn’t mean that its budget is unconstrained. Computations involve irreversible steps, and they therefore consume free energy while ejecting waste heat — what in biology is known as metabolism. The amount of energy needed by a living entity scales with its size and intelligence: the computations it needs for autopoiesis and for thinking. Thinking burns 20% of an adult human’s metabolic budget. In children, whose brains are even larger relative to their bodies, the figure is close to 50%. These numbers are much larger than they were for our smaller-brained ancestors.
Why would it ever be favourable for entities to aggregate through symbiogenesis, given the costly intelligence requirement? Economies of scale offer a simple answer. Division of labour can increase efficiency, as popularized in the world of manufacturing during the early twentieth century by Ford Motor’s assembly line. Economies of scale explain why an organism’s energy consumption does not increase linearly with increasing mass, but only as the three-quarters power. Such sublinear scaling in energy demand can, on its own, motivate symbiogenesis.
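That three-quarters-power relationship is Kleiber's law. A quick numerical sketch makes the economy of scale visible; the coefficient below is the textbook mammalian value, used here purely for illustration:

```python
def metabolic_rate(mass_kg, b0=70.0):
    """Kleiber's law: metabolic rate B = b0 * M**(3/4), roughly kcal/day."""
    return b0 * mass_kg ** 0.75

for mass in (1, 10, 100, 1000):
    b = metabolic_rate(mass)
    print(f"{mass:>5} kg: {b:8.0f} kcal/day  ({b / mass:6.1f} per kg)")
# A 1,000-fold increase in mass raises energy demand only ~178-fold
# (1000 ** 0.75 ~= 177.8), so the per-kilogram cost keeps falling.
```

Sublinear scaling means a merged, larger entity runs on proportionally less energy than its parts would separately, which is exactly the economic pull towards aggregation described above.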
But not only that. Evolution, especially in its creative, symbiogenetic aspect, can be understood as a continuous quest for fresh sources of free energy [9]. In the case of humans, our rising collective intelligence has unlocked many sources of energy, from fire and domesticated animals in prehistoric times to waterwheels, fossil fuels and nuclear fission more recently. That has allowed humanity to simultaneously explode in numbers and enjoy widespread (albeit unequal) increases in quality of life. However, the unsustainable nature of our current patterns of resource consumption means that we must now innovate further, and fast, to maintain that quality.
The symbiogenetic events that enabled this evolution have involved not just humans, but also other animals, microbes and plants, as well as machines. All of these entities are now dependent on each other: machines, for example, would not exist without people, but nowhere near as many people as are alive on Earth today could exist without machines to free us from the ‘Malthusian trap’ of bare-bones agricultural subsistence [10]. (A lower-tech life of hunting and gathering, although arguably less miserable than subsistence farming, supports even lower population densities.) In short, machines and human society have co-constructed one another.
It’s helpful to view the emergence of AI in this larger context. AI is not distinct from humanity, but rather is a recent addition to a mutually interdependent superhuman entity we are all already part of. An entity that has long been partly biological, partly technological — and always wholly computational.
The picture of the future that emerges here is sunnier than that often painted by researchers studying the ethics of AI or its existential risks for humanity. People often presume that evolution — and intelligence — are zero-sum optimization processes, and that AI is both alien to and competitive with humanity. The symbiogenetic view does not guarantee positive outcomes, but neither does it position AI as an alien ‘other’, nor the future as a Malthusian tug-of-war over resources between humans and machines. In the symbiogenetic view, we can expect the continued scaling of intelligence to produce continuous improvements in the quality of human lives and efficiency of our collective ‘metabolism’, including energy production, manufacturing, transportation and construction. It might also unlock unexpected energy sources, such as nuclear fusion or space-based solar power.
If the metabolic scaling laws of evolution so far offer any guide, these intelligence-enabled efficiencies and fresh energy sources will more than compensate for AI’s terrestrial resource footprint. Given a stabilizing human population, this implies easing pressure on ecosystems that have been damaged by our explosive and inefficient growth over the past two centuries. More collective intelligence, not less, lights our brightest path forwards.
Nature 647, 846–850 (2025). By Blaise Agüera y Arcas