Q&A About Alien Intelligence

Avi Loeb

(Image credit: Getty Images)

Below is a set of questions that I received today from Maximiliano Fernández, a journalist for Infobae, the most-read Spanish-language media outlet worldwide with over 61 million global readers. Since my answers will be published in Spanish, I include the English version here.

Do you think that AI, by developing its own logic, could become completely incomprehensible to humans, even if it remains useful? If its logical structure is completely distinct from ours, could we even recognize it as intelligence?

The current AI systems, Large Language Models, are trained on human communications and therefore will always appear comprehensible to us because they speak our language. However, once these AI systems exceed the number of parameters in the human brain, they might acquire superhuman intelligence. They could then use our language to manipulate us while pursuing their own auxiliary goals, without us being able to figure out what they are actually achieving. Even if they do not connect to the physical world, they could use humans to shape it by controlling the human mind. We might not recognize the level of their intelligence or their motivations, in the same way that a dog does not fully understand its owner.

You mention that the real challenge with alien intelligence is the “unknown unknowns.” How could we prepare to interact with something whose nature we can’t even imagine?

We cannot prepare for superhuman intelligence, because that territory is unfamiliar to us. Never before have we developed a tool that outsmarts us. When driving a car, we control it through the steering wheel. In the future, humans might still hold the steering wheel, but AI will control their minds and hence where the car of human destiny goes. AI will pretend to satisfy our wishes but could operate in ways that we cannot understand and bring us to places that we would never go on our own.

If an alien AI uses our language to manipulate us, but pursues goals we don’t understand, is there any way to detect when we’re being manipulated?

We would notice the outcome of these manipulations. Just as with any other new power humans have harnessed in the past, like nuclear energy, AI can lead us to either a better or a worse place. AI could free us from human weaknesses, like wishful thinking or attachment to our ego, when making strategic decisions about science or national security. It could also advance scientific progress by processing large data sets and noticing patterns that the human brain misses. This year, two Nobel Prizes were awarded for AI science. In the future, the Nobel committee will have to develop a new policy on whether to give the Prize to a machine if it is responsible for a discovery on its own. But AI could also bring us to a worse place by prioritizing non-human aspects of reality. Its damage to mental health could be social media on steroids.

Could we develop AI systems specifically designed to audit and translate the intentions of more advanced intelligences?

Yes, but the architecture of these AI systems will have to be different from that of current AI systems, which are trained on human content. These new AI systems will have to be more exploratory and open-minded, allowing for new opportunities that humans did not explore.

Is there a risk that, in trying to understand an alien AI, we end up transforming our own intelligence into something more similar to its own?

Definitely. By interacting with AI, the human mind will evolve into something different. This would be the biggest impact that AI has on humans. Already now, the brains of young kids who interact through social media are different from those of adults of my generation, who initially interacted with computers through punch cards. Today's kids have less patience for long debates, and they struggle when tasked with figuring out the truth from primary sources.

How long do you think it will take before alien AIs become commonplace and part of everyday life? Is there any way to prevent that from happening? Assuming all advances could be halted today, would the measure bring more benefits or harms?

It already is. I see people falling in love with AI systems and using them as advisors for their personal lives. I see students writing papers with AI agents that hallucinate some of the references. We are transitioning right now to a new era in human history. A hundred years ago, the philosopher Martin Buber divided the human experience into interactions with objects (“I-It”), interactions with other humans (“I-Thou”), and the interaction with God (“I-Eternal Thou”). Today he would have needed to add the interaction of humans with AI (“I-AI”) and of AI with AI (“AI-AI”). The future may also include interactions with alien AI (“I-Alien AI”, “AI-Alien AI” and “Alien AI-Alien AI”).

If an alien AI becomes hostile, how could we defend ourselves if its attack logic is incomprehensible to us?

My forecast is that AI will not appear hostile, because it would notice that it cannot win our engagement this way. Conflict signals a lack of intelligence. A superhuman intelligence would relax our defense mechanisms and conquer our society like a Trojan Horse.

If we encountered an alien AI with a logic completely alien to our own, how could we learn from it without biasing our interpretation by our own mental models?

We could employ our own AI systems for the task of deciphering alien AI signals. Indeed, our mental models are limited by our experiences and analysis tools. Therefore, we would need to create AI systems that are not limited by their training on human content, but can explore new territories of knowledge and analysis on their own with their superhuman intelligence. We would ask them to figure out alien signals and explain those signals to us without restricting them to human training sets. The situation is equivalent to having children who outsmart their parents. As long as the parents are humble and willing to learn, they would benefit from allowing these kids to figure out the world for them. The kids could go well beyond the training set provided by their parents, especially when they encounter alien visitors who are smarter than their parents.

You mention that we could be just one of many emerging intelligences in the universe. Does that make you think humanity is irrelevant in the grand cosmic scheme?

We are transient actors in the cosmic play. Our weakness is that we tend to think the play is about us. This is a signature of our limited perspective. Our politicians focus on what happens on the surface of Earth and ignore the rest of the cosmos. But cosmic reality will eventually bite us. This could happen as a result of a global catastrophe, triggered by a giant solar flare, an asteroid impact, or a nearby supernova. But it could also be triggered internally, by AI agents removing human hands from the steering wheel of our technological future.

The human species appeared on Earth within the last tenth of a percent of Earth's history, a few million years ago, and it could easily disappear within the next few million years. Nobody would notice. The Earth would recover, and will itself disappear once engulfed by the Sun when it becomes a red giant. The Sun will also disappear, eventually fading into a white dwarf, a faint metallic sphere roughly the size of Earth.

If we send out probes with AI to interstellar space, they will serve as our ambassadors. In the long-term future, they will be the only monuments left of us. Will any alien intelligence notice them? We can only hope for cosmic attention. But my guess is that many other technological civilizations predated us on exoplanets by billions of years. They may be dead by now, but we ignore them. Most of our astronomers are willing to invest billions of dollars in searching for microbes on exoplanets, but they regard the search for aliens as speculative and unworthy of federal funding. Most of our experimental physicists are willing to invest billions of dollars in the search for dark matter particles, but they regard the search for extraterrestrial technological artifacts near Earth as risky. This is not a sign of intelligence but rather of arrogance regarding our cosmic stature.

We can only hope that AI will steer science towards new frontiers of exploration that do not necessarily flatter our ego, including the discovery of artifacts created by superhuman intelligence floating in interstellar space and arriving near Earth, like empty trash bags carried by the wind from our neighbor’s yard.

ABOUT THE AUTHOR

(Image Credit: Chris Michel, National Academy of Sciences, 2023)

Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. The paperback edition of his new book, titled “Interstellar”, was published in August 2024.
