Which Information Can be Trusted in the Age of AI?

Avi Loeb
5 min read · Jan 10, 2025


(Image credit: businessday.ng)

Two undergraduate students from the UK, Fin Boardman and Josh Mallinson, requested a Zoom session with me earlier this week with a simple question: “Which information can the youth trust in the age of AI?”

Young people are most vulnerable to misinformation because their training set is limited by their short time on Earth. They did not experience life before digital screens became addictive, and they get much of their information today from social media. In the immediate future, their knowledge base and mental health will be shaped by artificial intelligence (AI). This state of affairs is unprecedented, because in the past humans only operated tools that were less smart than they are. For the first time in history, superhuman AI will be capable of manipulating the knowledge base of people.

The traditional way of compensating for the lack of experience of young people is called education. And so the question arises: in the age of AI, how can we make sure that our education system relays reliable information to our youth?

I started my answer by attending to the arena where the answer is straightforward: the natural sciences. Here, scientific information is gathered by instruments. Humans are not reliable detectors. For that reason, FIFA, the world soccer organization, uses cameras rather than eyewitness testimonies from players to determine the facts. But even instrumental data can be subject to systematic errors that reflect biases in the measurement procedure or the analysis. This rationale motivated the Data Colada blog, which uncovered fraudulent data handling in influential papers, such as the paper co-authored by Francesca Gino and Dan Ariely, which was eventually retracted by the Proceedings of the National Academy of Sciences.

To avoid misinformation from unreliable data or analysis, it is important to have multiple research teams competing for the right answer. It is pleasing to witness independent teams help each other, as was the case when Ed Purcell’s team corrected a mistake by the competing team led by Jan Oort, so that the two teams could publish their papers on the detection of the 21-centimeter line of hydrogen from the Milky Way galaxy back-to-back in the September 1951 issue of the prestigious journal Nature. However, in today’s academic culture, it is more common to encounter a situation where two teams compete forcefully for the credit of being first to unravel the truth. Sometimes, this fierce competition leads to legal battles for credit, as in the CRISPR patent battle between Jennifer Doudna and Emmanuelle Charpentier on one side and Feng Zhang on the other. During the preliminary research phase, competition offers important benefits, as multiple teams are quick to correct each other’s mistakes and converge together on the correct interpretation. In the initial analysis of data on Type Ia supernovae, published in The Astrophysical Journal on July 10, 1997, Saul Perlmutter’s team argued that the Universe has a zero cosmological constant. This conclusion was corrected by Perlmutter’s team in parallel with reports from competing teams, concluding in later publications that, in fact, the cosmological constant or “dark energy” dominates the current mass budget of the Universe. In 2011, Perlmutter was awarded half of the Physics Nobel Prize along with Brian Schmidt and Adam Riess, who led the competing teams.

The recipe for avoiding misinformation in the natural sciences is competition between teams that analyze data collected by instruments. This recipe combines hardware that is divorced from the faults of human psychology along with a procedure of vetting through debates when arbitrating the information that should be believed. A culture that suppresses debates is prone to misconceptions about reality.

Obviously, this approach cannot be applied to the humanities or to politics, where there is often no measuring instrument to serve as the independent arbiter that corrects for the faults of wishful thinking.

Can we generalize the approach of the natural sciences to improve education in the age of AI? One way to do so is by establishing community hubs where statements are vetted by a large community of people or independent AI systems, based on the reliability of the sources of information. Constructing a system where the truth is vetted by merit is not a simple task. If the information being discussed has to do with politics, we are likely to get at least two peaks in the probability distribution of opinions: one peak centered on the political left and a separate peak centered on the political right. A fair display of the information will need to include both peaks. On most political matters, there is no independent arbiter to determine who is right, because history happens only once and counterfactuals cannot be tested as in repeated laboratory experiments within physics or biology.
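The vetting scheme described above can be sketched in a few lines of code. This is a minimal illustration, not a proposal: the source names, reliability scores, and votes below are all hypothetical, and real systems would need far more careful ways of estimating reliability. The key design choice it demonstrates is reporting the full weighted distribution of opinions rather than collapsing them to a single verdict, so that both peaks remain visible.

```python
from collections import defaultdict

def vet(votes, reliability):
    """Aggregate votes ('support'/'oppose') into reliability-weighted shares."""
    totals = defaultdict(float)
    for source, vote in votes.items():
        # Each source's vote counts in proportion to its reliability score.
        totals[vote] += reliability.get(source, 0.0)
    weight_sum = sum(totals.values()) or 1.0
    # Return the full distribution, not just the winning option.
    return {vote: weight / weight_sum for vote, weight in totals.items()}

# Hypothetical example: instruments and experts carry more weight than
# partisan commentators, but the display still shows both opinion peaks.
reliability = {"instrument": 0.9, "expert_A": 0.7, "pundit_L": 0.4, "pundit_R": 0.4}
votes = {"instrument": "support", "expert_A": "support",
         "pundit_L": "support", "pundit_R": "oppose"}
shares = vet(votes, reliability)
```

On a politically divisive claim the two shares would be comparable, and a fair hub would present both rather than declaring a winner.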

There is also the question of whether to assign higher weights to information provided by experts or textbooks. People who dedicated many years of research to a particular problem have a larger training set on which to base their assertions. However, this is only a probabilistic statement, as new knowledge is sometimes resisted on the basis of unsubstantiated dogma. In 1992, the Vatican admitted that Galileo was right in claiming that the Earth is not at the center of the Universe, but this admission came a little late, two decades after humans reached the Moon.

At the moment, universities pay very limited attention to the challenge of securing hubs of reliable information for the education of the youth. I hope more will be done soon to correct this shortcoming, as it could have major implications for the future of science, politics and mental health.

And so, at the end of our enlightening conversation, I told Fin and Josh: “Hopefully, your generation will solve the problems that my generation created.”

ABOUT THE AUTHOR

(Image Credit: Chris Michel, National Academy of Sciences, 2023)

Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. The paperback edition of his new book, titled “Interstellar”, was published in August 2024.
