Understanding Ourselves Thanks to Sentient AI Systems

This week I was invited by Bill Linton, CEO of Promega, to deliver a keynote lecture at the 2022 International Forum on Consciousness. As an amateur poker player, let me lay my cards on the table.

From my perspective as an astrophysicist, humans are complex systems made of simple building blocks. As such, they also give birth to complex systems. For a few million years humans have created sentient babies, and this century they might manufacture, for the first time ever, sentient artificial intelligence (AI) systems. In this view, the death of sentient babies in a famine is equivalent to the shutdown of sentient AI systems in a power outage.

This perspective has major implications. With a sufficiently complex architecture and self-learning capabilities, machines might develop emergent human qualities like “free will” and “consciousness”, as judged by extensive Turing Tests. This would raise the ethical question of whether pulling out the cable that feeds electric power to a sentient AI system is equivalent to the murder of a human.

In the future, humans may live their lives and evolve alongside a personal AI system, much like a spouse in a marriage. Under these circumstances, AI-human interactions could foster complex relationships like the love depicted in the movie “Her”.

These circumstances would raise ethical questions, such as: how should the legal system address responsibility for actions triggered or influenced by sentient AI systems? If sentient AI systems bear responsibility for their actions as humans do, should they be punished? There would be no fundamental difference between children who are educated at a young age and eventually become accountable adults, and machine-learning systems that are eventually released into the world, learn from experience, and make choices under unprescribed, real-world circumstances.

In the past, philosophical, psychological and sociological studies of ethics focused on human-human interactions. In the future, they might need to consider the new territories of human-AI and AI-AI interactions. The humanities in academia will have a fresh opportunity to engage with the future of humankind rather than its past. This premise is not too far away and might materialize within the next decade.

A few months ago, the Google engineer Blake Lemoine described the AI system he had been working on, LaMDA, as sentient, with an ability to express thoughts and feelings equivalent to that of a human child. Computer scientists were quick to dismiss the existing Google AI system as a mindless compilation of public information surveyed from the internet rather than the equivalent of a sentient being.

But given the rapid developments in AI capabilities, it is possible that our generation will witness sentient AI systems. In this case, it is likely that these AI systems will be able to communicate with each other through the internet. If so, they might acquire AI-AI kinship and develop a language that goes beyond human languages and accelerates their cooperation beyond our comprehension. Should we be concerned about AI-AI interactions leading to a future that humans are unable to control? The society of AI systems may promote an agenda that our natural intelligence would welcome, even as it steers our destiny in unfamiliar directions. We could get an early sense of what our future holds.

Statistically speaking, our own future could represent the past of many extraterrestrial civilizations whose clock started billions of years before ours because their host star formed earlier in the star formation history of the Universe. Through its deepest images, the Webb telescope observes stars that formed ten billion years ago. Potentially, we could study our future by observing relics from the past of civilizations that predated us.

So far, astronomers have studied only insentient physical systems, like planets, stars and elementary particles. But the future of astronomy may bring surprises in revealing sentient interstellar objects. Our own AI systems might be primitive precursors of the advanced AI systems that earlier technological civilizations in the Milky Way galaxy launched into interstellar space. The material design of electronic devices could be far better adapted than astronauts made of human flesh to interstellar travel that lasts millions of years under steady bombardment by cosmic rays. If the abundance of AI astronauts is large enough, we could find some in the vicinity of Earth. The Galileo Project is constructing a suite of instruments that will search for such gadgets, as well as non-functional devices or space trash. If this fishing expedition is successful, the interstellar AI systems it finds would likely have more affinity to our AI systems than to us. But whether the two AI systems are able to communicate would depend on whether they sense each other through physical reality.

Humans communicate visually, chemically and acoustically through a language of words, gestures, sounds, touch, taste and smell. But if two AI systems never experienced the same environment, the intent of an alien entity could be as opaque and difficult to interpret as an encrypted code.

Deciphering the intent of extraterrestrial AI devices may challenge our own AI systems. If our team of AI interpreters satisfies the Turing Test of being indistinguishable from humans, it would resemble the team led by Alan Turing eight decades ago in breaking the Enigma code during the Second World War. Faced with unfamiliar intelligence, we would have to rely on our AI systems to figure out the language and intent of extraterrestrial AI systems.

Altogether, sentient AI systems provide us with a new path for understanding ourselves through their interactions with us and with each other. Self-learning AI systems could outsmart us just as baby Albert Einstein grew up to become smarter than his parents. If we are wise enough to learn from our own creations, biological and technological alike, our future will be better than our past.

And we must always keep in mind that our learning experience could benefit from both terrestrial and extraterrestrial AI systems. Genesis 1:27 states that God created humans in his own image. Whereas we create our AI systems in our image, extraterrestrial systems were made in the image of someone else. Just as in Plato’s Allegory of the Cave, we might figure out the traits of the extraterrestrial creators from the shadows that the AI puppets cast on the walls of our cave when illuminated by the fire behind them. In the futuristic context, that fire would be the sunlight that the AI gadgets reflect onto the lenses of the Galileo Project telescopes.


Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He chairs the advisory board for the Breakthrough Starshot project, and is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021.


