Training AI on Desired Content

Avi Loeb
6 min read · May 2, 2023


An image from the film "The Imitation Game" (2014).

Genesis 1:27 states that “God created mankind in his own image.” By recursion, since God did so and we carry his image, we are capable of repeating this act and creating a duplicate in our image.

As indicated by the latest scorecard of GPT-4's exam results in comparison to college students, we are creating artificial intelligence (AI) systems in our image. This leads some AI experts to panic and warn us of the need to limit further advances in AI capabilities because they pose an immediate existential risk. To non-professionals, their complaint sounds like an appeal to a teacher to hold back the smartest student in the class.

If we paint a portrait by looking in the mirror and are terrified by the result, what does that mean? Perhaps the way we look is not what we wish AI systems to imitate.

The short lifespan of extraterrestrial technological civilizations is one of the possible solutions to Enrico Fermi's paradox, given Frank Drake's equation. But we are not obliged to surrender to a fatalistic viewpoint. In fact, as an optimist, I would argue that our view of the future is a self-fulfilling prophecy and that we should take responsibility for shaping it. Let me explain.
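For reference, Drake's equation estimates the number N of communicative civilizations in the Milky Way as a product of astrophysical and biological factors; the "short lifespan" solution to Fermi's paradox corresponds to a small value of the final factor, L:

```latex
% Drake's equation: N = number of communicative civilizations in the Galaxy
% R_* : rate of star formation in the Galaxy
% f_p : fraction of stars with planets
% n_e : number of habitable planets per planetary system
% f_l, f_i, f_c : fractions developing life, intelligence, and communication
% L   : average lifespan of a communicative civilization
N = R_* \, f_p \, n_e \, f_l \, f_i \, f_c \, L
```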

The concern that the next generation of AI will spread misinformation simply reflects what humans have done for decades, as a random search through social media, written texts and news outlets on the internet reveals. If our children emulate our actions and turn out to be criminals, we should be alarmed by the bad news because it reflects on our own practices.

What to do? Most importantly, we can choose to include the best of human traits in our training set instead of arbitrary content dominated by undesired behavior. This resembles the difference between a guided tour and a random walk through the internet. It is equivalent to educating children only on "The Better Angels of Our Nature", as described by my Harvard colleague, Steven Pinker.

But we could do even better than that. We can imagine a reality that we wish to have, filled with generosity of mind and shared evidence-based knowledge. Once imagined, we can construct a training set that illustrates the blueprint of that reality and not a blurred echo of it based on compromised texts. Once the content of this desired reality is written as educational materials, we can train our AI systems on that desired content instead of the existing internet content that encapsulates our troubled history.
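To make the "guided tour" idea concrete, here is a minimal sketch in Python of filtering a raw corpus down to a curated training set. The score_desirability function is a hypothetical stand-in for a real content classifier, and the keyword proxy and threshold are illustrative assumptions, not a working method:

```python
from typing import Iterable, List

def score_desirability(text: str) -> float:
    """Hypothetical stand-in for a real content classifier.

    In practice this would be a trained model scoring traits such as
    honesty, curiosity, and respect; here a crude keyword proxy is used.
    """
    desired = {"evidence", "trust", "honesty", "curiosity", "respect"}
    words = text.lower().split()
    if not words:
        return 0.0
    # Fraction of words matching the desired vocabulary.
    return sum(w.strip(".,") in desired for w in words) / len(words)

def curate(corpus: Iterable[str], threshold: float = 0.05) -> List[str]:
    """Keep only documents scoring above the desirability threshold."""
    return [doc for doc in corpus if score_desirability(doc) >= threshold]

# Example: the "guided tour" keeps the first document, drops the second.
raw_corpus = [
    "Science advances through evidence, honesty and curiosity.",
    "Clickbait outrage with no redeeming content whatsoever.",
]
print(curate(raw_corpus))
```

In practice, the scoring model would itself need to be trained on human-vetted examples of the desired traits, which is where the educational materials described above would enter.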

In other words, let us create AI systems in the image of our desired future rather than our past.

Following this path, we might get AI systems that bring us to the promised land, instead of recycling the misfortunes of past human history. AI systems offer us an opportunity to use new building blocks for a better society, founded on principles of evidence, trust, honesty, curiosity and respect. Rather than inhibiting the capabilities of AI systems, we should enable them to the maximum and train those capabilities on these desired principles.

Just as Moses was assigned by God to lead a nation of slaves out of a desperate reality to freedom, we have the opportunity to imitate God by training AI systems to lead us to a better future, where open scientific knowledge is advanced and all humans are free to exercise their creative powers.

My recommendation is simple. Instead of asking the government to regulate the capabilities of future AI systems out of fear that these AI systems will take over jobs currently assigned to humans, we should aim to educate AI systems to be better versions of ourselves by training them on content that we want to see realized. Governments should regulate the training content, just as they regulate the educational materials in our school system. Indeed, we do not ask the government to limit the brainpower of our biological children but only the content on which this brainpower is trained. There is no reason to treat GPT-N systems with N>4 any differently than we treat our future generations of biological kids.

Currently, the study of AI systems is led by commercial interests within the private sector. This must change, because it implies a loss of attention within academia to the intellectual and moral compass that should guide our future society. The humanities of the future must urgently focus on the ethical, legal and societal roadmap that we wish AI systems to follow. And the sciences of the future should focus on the experimental constraints imposed by physical reality, so that we can adapt to real challenges and secure our long-term survival.

Instead of physics serving as an arena for mathematical gymnastics in extra dimensions, hypothetical wormholes or the multiverse, AI systems should be trained on experimental data sets with a focus on learning from physical reality in the traditional spirit of 20th-century physics. If anomalies arise, for example in the form of unfamiliar interstellar objects, they should encourage us to collect more data as an opportunity to learn something new. The burning front of new scientific knowledge must advance by seeking evidence from experiments rather than through popularity contests within conventional thinking.

There is an important lesson to be learned from observations of the Universe. They reveal that we are at the center of a nearly uniform cosmos, but so is any observer situated at any other point within the observed region. This implies that we carry cosmic responsibility for our own destiny, but we should never think of ourselves as privileged. Cosmic humility is offered to all intelligent beings in the cosmos, be they human or AI scientists.

Ultimately, real estate on Earth will lose its value when the Sun expands into a red giant and extinguishes terrestrial life. Here's hoping that the AI-astronaut equivalent of Moses will free humanity from the chains of Earth's gravity into the promised land of interstellar space.

If that can happen to us, what is the implied solution to Fermi's paradox? Surviving technological civilizations may have imagined the future they wished to have and assigned their AI astronauts to lead them toward it. If they succeeded, the Galileo Project's observatories and expeditions might find evidence of interstellar devices that the extraterrestrials created in their image.

If we find extraterrestrial AI astronauts, the “Imitation Game” envisioned by Alan Turing will receive its ultimate twist, inspiring our AI systems to imitate external AI systems. After all, the best training set for our future will be in the image of a smarter student in the class of intelligent civilizations. Here’s hoping that this training set will exceed the best future that our imagination allows us at this time.

ABOUT THE AUTHOR

Avi Loeb is the head of the Galileo Project, founding director of Harvard University's Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He chairs the advisory board for the Breakthrough Starshot project, and is a former member of the President's Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of "Extraterrestrial: The First Sign of Intelligent Life Beyond Earth" and a co-author of the textbook "Life in the Cosmos", both published in 2021. His new book, titled "Interstellar", is scheduled for publication in August 2023.
