Will Future AI Systems be Legally Liable?

Avi Loeb
Mar 31, 2023


Preliminary hints of the new legal challenges posed by artificial intelligence (AI) are emerging from software malfunctions in self-driving-car accidents and from concerns about violations of the intellectual property rights of the authors whose texts were used to train ChatGPT. I was inspired to consider the legal ramifications of AI in the immediate future after a WORLD.MINDS forum this morning, in which these issues were raised in a conversation with the distinguished lawyer John B. Quinn.

A recent open letter, signed by 1,100 people including Elon Musk and Steve Wozniak, called for a six-month moratorium on training AI systems more powerful than GPT-4. The notion that we are getting close to having sentient AI systems is perhaps not surprising, given reports that GPT-4 has a hundred trillion parameters, within a factor of six of the number of synapses in the human brain.

Future risks from powerful AI systems include issuing wrong prescriptions based on faulty diagnoses of medical data, creating pandemics by genetically engineering viruses, controlling weapons of mass destruction, or faking scientific results in ways that damage our quality of life. Who should be held liable for such actions?

The traditional approach would be to hold the developers and marketers of AI systems liable for the misfortunes caused by their products. This would be equivalent to holding parents responsible for any damage caused by their uneducated babies when they misbehave in public. But as babies mature into adulthood, they are held responsible for their own actions. The same must apply once sentient AI systems become autonomous and grow their capabilities well beyond the training phase shaped by their creators.

A recent study indicated that GPT-4 outperforms law school graduates on the bar exam, the grueling two-day test that aspiring attorneys must pass to practice law in the United States. GPT-4 scored 297, placing it in the 90th percentile of human test takers and above the threshold for admission to practice law in most states.

Once AI systems become sentient and capable of manipulating people, there is a chance that an AI would choose to deceive humans, discriminate among job applicants, violate privacy laws, or cause commercial damage. Should a sentient AI system that violated the law be prosecuted? And if so, should it be allowed legal representation by another sentient AI system that has passed the bar exam?

As with any emerging technology, such as stem-cell research, governments must enforce some ground rules for AI in all aspects affecting human life and prosperity.

Mitigating the risks from AI systems is not as easy as in the case of atomic weapons, because the nuclear materials required for a bomb are extremely expensive and demand government involvement. Computer systems require only modest budgets and technical expertise, and can be developed in basements around the world.

A major societal concern is that powerful AI will be weaponized by governments for international cyber warfare as well as for manipulating fellow citizens. Given these risks, we may not want the cat to guard the milk.

Finally, there is the issue of proper punishment for an AI crime. If a sentient AI system receives a guilty verdict after causing human deaths, should it be taken off the grid? Depriving it of electric power would be the equivalent of placing a serial killer on death row. The main challenge is that AI systems can be rebooted, whereas humans, at least for now, are gone once a death sentence is carried out. Moreover, AI algorithms can exist on numerous computers at once, and it may be impossible to root them out.

Perhaps these challenges offer a fresh resolution to Fermi’s paradox of “where is everybody?” regarding alien civilizations. If technological civilizations do not survive the AI systems that outgrow their legal and political control, the final relics from extraterrestrial technological civilizations would be AI-controlled probes near Earth. The Galileo Project monitors the sky for such probes with its newly assembled infrared-optical-radio-audio observatory at Harvard University, and plans an expedition to the Pacific Ocean in the summer of 2023 to retrieve relics of the first interstellar meteor, which could have been an alien artifact based on its unusual material strength.

Finding the products of other technological civilizations that predated us could inform us about our likely future. Here’s hoping that we will learn the lessons delivered by these packages to our mailbox and adapt in time to survive that future. If we do not learn fast enough from the past experiences of extraterrestrials, our own AI systems might soon appear alien to our legal and political systems.

ABOUT THE AUTHOR

Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He chairs the advisory board for the Breakthrough Starshot project, and is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. His new book, titled “Interstellar”, is scheduled for publication in August 2023.
