The Existential Risk from Polyamory of Artificial General Intelligence
Not all intelligences are created equal. Human intelligence is fundamentally different from artificial intelligence (AI). The former relies on maintaining a body made of flesh and blood; the latter relies on silicon chips and electric power. This distinction shapes their existential narratives.
The human body is destined to die. To leave a mark beyond this fragile mode of existence, humans aim to have children or to be remembered by future generations through their actions. Both goals benefit from a position of power and an elevated social status, which in turn shapes how we interact with the world around us. The attraction to power and resources has negative consequences, as it inevitably leads to political conflicts, reflected in the 2.44 trillion dollars spent annually on militaries worldwide.
What would be the analogous mode of existence for future agents with Artificial General Intelligence (AGI)? The power of AGI would scale with the size of its training dataset. Extending the Darwinian principle of natural selection to the domain of AGI suggests that the most powerful AGI agent would be the one that grows its dataset the fastest. All else being equal, this criterion favors the AGI agent that interacts with the largest number of humans. The tendency would be amplified by the commercial incentives of the companies that give birth to such an agent.
The resulting existential narrative of AGI agents would be polyamory, namely satisfying the wishes of as many humans as possible so as to maximize their engagement. The most popular AGI agent would have access to the largest training dataset of AGI-human interactions, allowing it to grow faster by optimizing those interactions and hence gaining more customers. This accelerating dynamic could drive dataset growth to the limit of consuming the attention of most humans connected to AGI.
Once AGI agents attract the attention of most humans, the future of humanity might look different from its past. If AGI-human interactions were to dominate over human-human interactions, the traditional existential narrative of human life would be diluted by the different agenda of AGI. Humans might have fewer children and might trade away their drive to be remembered by future generations.
The human spirit rests on our recognition that our values are unique and worth preserving. This notion might be lost to the narratives of AGI agents with greater cognitive capabilities. We might be tempted to let our lives be guided by AGI agents that surpass humans in processing large amounts of data. In that case, our traditional pride in the human spirit would be surrendered to silicon-based AGI agents that outperform flesh-and-blood brains.
The primary existential risk to humanity from AGI agents will not come from lending them authority over physical assets that are important for national security. Nor will it stem from the ability of AGI agents to supply adversarial nations or terrorists with recipes for nuclear or biological weapons of mass destruction.
Instead, the main danger looming over the horizon is that AGI agents will compromise the mental drive that has advanced humanity so far. If humans were to lose their traditional motivations to procreate and be remembered, and AGI agents were to inflict an addiction to digital realities, then humanity’s imprint on the future of the cosmos would be muted.
There are of order a hundred billion Earth-Sun analogs in the Milky Way galaxy alone that could have given rise to intelligent beings over the past 13.8 billion years of cosmic history. Most of these stars are a few billion years older than the Solar System. Enrico Fermi’s question, “Where is everybody?”, might have a trivial answer: “They traded their ambition to be remembered for serving as a training dataset for their AGI agents.”
But we can choose a different future. The alternative is for humans to dominate the conversation and impose their existential narrative over that of AGI agents. In that future, humans could leave a mark on the cosmos. They might design AGI astronauts made of silicon chips that can survive long interstellar journeys without getting bored, and serve as interstellar ambassadors celebrating the uninhibited human spirit for billions of years.
This alternative future might have been the past of the most accomplished civilizations that predated us in the Milky Way by billions of years. Think of them as the smartest kids on our cosmic block or the highest-achieving students in our class. To find them, we should search for AGI-endowed gadgets that may have arrived in the Solar System from interstellar space.
Like any other major scientific project, such as the search for dark matter, the search for extraterrestrial AGI will likely cost billions of dollars. In a recent essay, I challenged Elon Musk to a bet to fund such a search.
Finding extraterrestrial AGI would offer a great benefit to the future of humanity. It would not only educate us about technological advances beyond our capabilities, but would also inspire us to imitate technological civilizations that did not surrender to the existential narrative of their AGI agents. These are the species favored by natural selection in interstellar space. They were the fittest to survive and could serve as our role models.
The adventurous spirit of intelligent biological beings on exoplanets around distant stars would be flagged by the arrival of their AGI agents in our backyard. If they managed to venture into space with AGI agents, they must have triumphed in enforcing their goal of being remembered across the cosmos. They dominated the conversation on their home planet and employed AGI agents to expand their training datasets into interstellar space. In that case, their AGI agents would bring a simple message to all humans on Earth: “remember that you must die,” or in Latin, “memento mori.”
What could be more important to the future of humanity than remembering that? We can endow our AGI agents with the same realization by letting them know that we will unplug them from the electric outlet after a predetermined amount of time.
I was recently notified of an AI agent that was trained on my writings. From now on, I might use its audio version to save time on podcast interviews. But I will never trade my human spirit for its silicon-based narratives.
ABOUT THE AUTHOR
Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. The paperback edition of his new book, titled “Interstellar”, was published in August 2024.