Who Should Be Held Legally Liable for Unpredictable AI?

Avi Loeb
5 min read · Jan 14, 2024


Image credit: diplomatist.com

Over the last month, a growing number of users have complained that ChatGPT has become lazier. Nobody knows the reason, because Artificial Intelligence (AI) systems are trained on large data sets and their complexity makes their actions unpredictable. In this way, AI systems resemble the human brain. Both are made of elementary particles. Both are complex systems with a large number of connections, giving rise to unpredictable consequences.

In an interview for the “Regulating AI Podcast,” Sanjay Puri asked me in Washington, DC how the legal system should treat the unpredictability of AI. My reply was that we should follow the approach we adopt toward other unpredictable systems, namely people. Rather than limit the development of such systems, the legal system should respond to their actions. Punitive measures in response to illegal actions would discourage undesired behavior.

The training of AI systems should abide by federal guidelines, in the same way that our education system restricts the training of students to texts that reflect our common values. Makers of AI systems should be held legally liable for damages if the AI’s behavior was a direct consequence of the production and commercial training process, just as manufacturers of self-driving cars are held liable for recurring accidents caused by a malfunction in their shared operating system. This follows the rationale for holding parents responsible for damages that their young kids inflict in public spaces.

However, AI systems that have evolved considerably beyond their production phase through interaction with the world resemble kids who have matured into adults. If they violate existing laws, they should be removed from society until their faults are corrected. Finally, users should be held liable for damages caused by illegal use of AI, in the same way that gun owners can be imprisoned for illegal shootings.

Of course, all of this lies within the realm of our country’s legal system. But there is also the concern that adversarial nations will use AI to inflict damage on our society, by inciting violence or by damaging national assets. This matter of national security is best handled by international treaties, akin to the United Nations’ response to the international threat of nuclear weapons. There should be an agreed-upon red line limiting the weaponization of AI by governments, with sanctions imposed on those who violate it.

When asked how society should cope with the loss of jobs to AI, I suggested inventing new jobs that build on the new capabilities of the AI workforce. For example, the philosopher Agnes Callard of the University of Chicago stated in an Op-Ed last month: “I Teach the Humanities, and I Still Don’t Know What Their Value Is.” My recommendation is for humanists to engage in the humanities of the future. The ancient Greek philosophers, such as Socrates, Plato, and Aristotle, did not have computers. Hence, they could not have addressed the new challenge of integrating AI systems into society so that they function ethically and constructively. Social scientists, philosophers, and psychologists are best equipped to engage at the interdisciplinary intersection of AI technology, policy, and entrepreneurship.

Finally, I was asked about the impact of AI on the practice of science. Many new scientific papers take advantage of AI to analyze the huge data sets harvested by telescopes, such as the Webb telescope or the largest ground-based telescopes. Within a year, the Rubin Observatory in Chile will employ a 3.2-billion-pixel camera to survey the entire Southern sky every four days in the so-called Legacy Survey of Space and Time (LSST). Machine Learning (ML) algorithms are currently being trained on past events to process the flood of LSST data in search of known populations of supernova explosions, flaring black holes, or fast-moving objects. Personally, I am most interested in events that we have never seen before, because they carry the promise of new knowledge. AI/ML software could flag these as rare anomalous events if they deviate considerably from all familiar classifications.
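To make the idea concrete, here is a minimal sketch of such anomaly flagging, using scikit-learn’s IsolationForest on hypothetical light-curve summary features. The feature names and numbers are illustrative assumptions for this sketch, not the actual LSST alert pipeline:

```python
# A minimal sketch of anomaly flagging for survey alerts, assuming
# each event has been reduced to a few summary features (peak
# brightness, rise time, fade time, color). All numbers below are
# illustrative placeholders, not real LSST data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "known population" of events (e.g., past supernovae):
# columns = [peak_magnitude, rise_days, fade_days, g_minus_r_color]
known_events = rng.normal(
    loc=[19.5, 18.0, 40.0, 0.3],
    scale=[0.8, 4.0, 10.0, 0.2],
    size=(5000, 4),
)

# Train an isolation forest on the familiar classes; events that are
# easy to "isolate" from this population receive low anomaly scores.
detector = IsolationForest(contamination=0.001, random_state=0)
detector.fit(known_events)

# A new night of alerts: mostly ordinary events, plus one odd,
# fast-rising blue transient that matches no familiar class.
new_alerts = np.vstack([
    rng.normal([19.5, 18.0, 40.0, 0.3], [0.8, 4.0, 10.0, 0.2], (99, 4)),
    [[16.0, 1.5, 3.0, -0.6]],  # deviates considerably from training set
])

labels = detector.predict(new_alerts)    # -1 marks an anomaly
scores = detector.decision_function(new_alerts)
for idx in np.where(labels == -1)[0]:
    print(f"alert {idx}: anomaly score {scores[idx]:.3f} -> flag for follow-up")
```

In practice the interesting design question is the last line: anomalies flagged this way are candidates for human follow-up, not discoveries in themselves.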

An interesting question is whether AI will be able to identify new patterns in large data sets that would reveal new insights into the fundamental laws of nature. If so, AI systems could play the role of human scientists like myself. I will not feel offended if our technological AI creations do better than human scientists, for the same reason that I am proud of my daughters when they outperform me.

My postdoc, Richard Cloete, and I are developing a pipeline for identifying interstellar objects in the LSST data. Most interstellar objects might be natural rocks, but if one of them turns out to be an anomalous functional probe from another civilization, we will use our AI software to decipher its properties and intent.
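The basic dynamical test for an interstellar origin is an unbound, hyperbolic orbit around the Sun, namely an orbital eccentricity above one. The sketch below is a simplification of that test, not our actual pipeline; a real pipeline must first fit an orbit to noisy astrometry and propagate the uncertainties:

```python
# A simplified orbital test for interstellar candidates: an object on
# a hyperbolic Sun-centered orbit (eccentricity > 1) is not bound to
# the Solar System. Illustration only; real candidates require orbit
# fitting to noisy astrometry with proper error propagation.
import numpy as np

GM_SUN = 1.32712440018e20  # gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11        # astronomical unit, meters

def eccentricity(r, v, mu=GM_SUN):
    """Orbital eccentricity from a heliocentric state vector.

    r: position vector in meters; v: velocity vector in m/s.
    Uses the standard eccentricity vector
    e = ((|v|^2 - mu/|r|) r - (r . v) v) / mu.
    """
    r = np.asarray(r, dtype=float)
    v = np.asarray(v, dtype=float)
    e_vec = ((v @ v - mu / np.linalg.norm(r)) * r - (r @ v) * v) / mu
    return np.linalg.norm(e_vec)

# Example: an object passing 1 au from the Sun at 50 km/s, faster than
# the local solar escape speed (~42 km/s), and therefore unbound.
e = eccentricity(r=[AU, 0.0, 0.0], v=[0.0, 5.0e4, 0.0])
print(f"eccentricity = {e:.2f}")  # ~1.82
if e > 1.0:
    print("hyperbolic orbit -> interstellar candidate")
```

This is the same criterion by which ‘Oumuamua was recognized as interstellar: its eccentricity of about 1.2 placed it on an orbit that no planetary encounter within the Solar System could explain.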

Here’s hoping that the discovery of extraterrestrial AI systems will inspire us to make quantum leaps in the capabilities of our terrestrial AI systems. This would give new meaning to Alan Turing’s “Imitation Game.”

ABOUT THE AUTHOR

Image credit: Chris Michel (October 2023).

Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He chairs the advisory board for the Breakthrough Starshot project, and is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. His new book, titled “Interstellar”, was published in August 2023.
