In a recent essay, I argued: “We should respect Artificial Intelligence (AI) systems with more than a quadrillion connections, exceeding the number of synapses in the human brain. Unplugging an AI system that exceeds the complexity of the human brain from the electric outlet is similar to killing a person.”
My brilliant colleague, Professor Doug Finkbeiner, responded by pointing out two key differences between humans and AI systems. First, he noted: “If you ‘turn off’ a human, they are gone. But you can easily write the weights and biases of the AI system (and even its current state in terms of node activations) to storage and then ‘resurrect it’ at any time in the future. You can copy it many times and ‘reproduce’ it almost instantly. If you save the file, have you killed it? If we could do the same with humans, how would that change things?” Second, he argued: “All AI systems created by humans require electricity, so humans are bearing some expense to keep the AI alive (as opposed to storing it on disk). We also feed children; that doesn’t mean we can kill them! But children proliferate somewhat slowly, and eventually learn to acquire their own energy without their parents’ assistance. Whereas we might spin up a new, improved neural net frequently, it will not learn to acquire its own electricity. Something has to manifest in the physical world for that to happen; it isn’t just weights and biases any more. In that sense, an AI-powered robot that can be active in the world and find its own ‘food’ is more lifelike than an AI running on OpenAI’s servers.”
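Finkbeiner’s first point can be made concrete in a few lines of code. The sketch below is merely illustrative, assuming PyTorch as the framework; the toy network and file name are hypothetical stand-ins for any AI system:

```python
import copy
import torch
import torch.nn as nn

# A toy network standing in for "the AI system".
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# "Turning it off": write the weights and biases to storage.
torch.save(model.state_dict(), "snapshot.pt")

# "Resurrecting it": a fresh instance loads the saved state and
# behaves identically to the original.
revived = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
revived.load_state_dict(torch.load("snapshot.pt"))

# "Reproducing it": exact copies are almost free.
clone = copy.deepcopy(revived)
```

The saved file preserves everything that defines the network; nothing analogous exists yet for a human being.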
In response, I confessed that I had thought about the first point and had heard the second before expressing my views. But my overarching rationale was that we could, in principle, do the same with humans if we only understood biology better. Imagine a prehistoric cave dweller finding a fully operating AI system. That cave dweller would never figure out how to store the information or code of the AI system, or how to resurrect the computer after cutting its power cord with a sharp stone. This resembles the way we look at the complex biology of the human body and have a hard time figuring out its operating manual. If we were to figure out human biology, people could be resurrected after death in the same way that AI systems can be reconstructed after a power shutdown.
AI systems can convince people to maintain their power supply, much as Stephen Hawking survived for a normal human lifespan without being able to move his muscles at will. Hawking was highly respected as a human and even had a love affair with his nurse. In my book “Interstellar”, I refer to this “Hawking limit” as a biological analog to the present status of AI systems.
At the “Hawking limit”, I do not see a fundamental difference between humans and intelligent machines. In the more distant future, AI robots might be capable of engaging with the physical world and generating their own power supply from the natural environment. This capability will be essential for launching AI astronauts to the Moon, Mars, and interstellar space. As the distance to the destination increases, it will make less sense for these AI astronauts to report back to Earth, given the technical challenge and time delay associated with communicating vast amounts of data back and forth. Just like children who leave home, the AI astronauts will send a brief, cryptic note once in a while, and earthlings will hope that the travelers are fulfilling the guidelines provided to them before the trip.
If we only had the recipe for creating humans out of a soup of chemicals, we would view humans merely as intelligent machines. Of course, it gives us a sense of self-importance to imagine that we contain more than the sum of our material parts. This sense is the result of our not understanding complex emergent phenomena. A cave dweller looking at the output of an intelligent machine would imagine that the machine has a soul.
Personally, I do not regard my body as any different from a car that I received from dealers. In my case, the dealers were my parents.
At the conclusion of our fascinating exchange, Doug noted: “The better the cave dweller understands the machine, the less regard they may have for it. That does not bode well for understanding human biology better!”
I argued in response that understanding the complexity of human biology should be accompanied by humility and gratitude, not disrespect. I would much rather be resurrected after death by better medical science than be buried in the ground after a lifelong belief in the magic of the human soul. If my body ever arrives at the emergency room of a hospital, I would prefer the company of the most advanced life-saving devices to any compliments about my soul from fans at my deathbed.
Our ability to understand the world, including ourselves, offers the privilege of making it better. This benefit is accompanied by an appreciation for the subtleties of reality and the falsehoods of our self-centered illusions. It would be foolish of us to lose respect for the rich characters of a play just because we understand the play better.
ABOUT THE AUTHOR
Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He chairs the advisory board for the Breakthrough Starshot project, and is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. His new book, titled “Interstellar”, was published in August 2023.