The poet Rainer Maria Rilke recommended: “Be patient toward all that is unsolved in your heart and try to love the questions themselves … Live the questions now. Perhaps you will then gradually, without noticing it, live along some distant day into the answer.” As a practicing scientist, I respectfully disagree with Rilke’s counsel of patience. Science offers the privilege of clearing Rilke’s mystic fog over physical reality and seeking answers now.
While preparing breakfast this morning, I picked up a tomato and a knife to cut it. This led me to ask the question: “Why does the knife cut the tomato?” I could have lived this question as Rilke recommends, but instead I thought about it and figured out the answer. The force exerted by my hand on the knife’s handle is transferred through its rigid construction to a small surface area at the knife’s edge, resulting in a force per unit area large enough to exceed the material strength of the tomato. As a result, the knife makes its cut and the tomato is split. Science goes beyond Rilke’s mysticism. It brings the “Pleasure of Finding Things Out,” as described by the physicist Richard Feynman. And the cherry on top is that once we understand reality, we can adapt to it and figure out how to use it for our benefit. We can create artificial diamonds by compressing carbon, or we can reach Mars from Earth by realizing that both planets revolve around the Sun.
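To put rough numbers on that intuition, here is an order-of-magnitude estimate; the hand force and edge dimensions below are illustrative assumptions, not measurements:

$$
P = \frac{F}{A} \approx \frac{20\ \mathrm{N}}{(0.1\ \mathrm{m}) \times (10\ \mu\mathrm{m})} = 2 \times 10^{7}\ \mathrm{Pa} = 20\ \mathrm{MPa}.
$$

The same 20 N spread over a fingertip of roughly a square centimeter yields only about 0.2 MPa, a hundred times less, which is why the sharpened edge, and not the bare hand, does the cutting.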
Quantitative insights about nature are communicated by scientists through peer-reviewed papers. This process raises new challenges because of the human factor involved. For original manuscripts to get published, they must be approved by reviewers who are appointed by the editors of scientific journals. As in any other hierarchical system, the authority to “approve” or “decline” a publication can be abused. In academia, reviewers can misuse their power as a result of competition with the authors, professional jealousy, or simple incompetence. A dysfunctional review process slows down the acceptance of new knowledge and its use by society.
It is easy to hold an unsubstantiated opinion, or to metaphorically throw ash in the air and claim that we do not see anything. This ash metaphor became literal recently for the Galileo Project expedition team under my leadership. In the Appendix of an extended research paper, we had to refute an unsubstantiated claim that the unique “BeLaU”-type spherules we retrieved from the Pacific Ocean site of a meteor are merely coal ash; in our analysis, they instead have an unfamiliar origin, potentially from outside the solar system.
A few hours after breakfast, I attended a discussion with the Editor in Chief of Nature Magazine, the brilliant Dr. Magdalena Skipper, during a WORLD.MINDS forum led by Rolf Dobelli. My primary question to Magdalena was how to minimize unfair criticism of innovative scientific ideas by biased reviewers who are motivated by academic ego or by conservatism within echo chambers of past knowledge. Magdalena agreed that reviewers are often more critical of papers written by others than of their own. Her wish for better behavior reminded me of one of the central principles of the Torah, according to Rabbi Akiva: “Love your neighbor as yourself” (Leviticus 19:18).
Magdalena explained that innovation implies taking the risk of being wrong, and that the culture of science must tolerate retractions based on new facts. In the current culture of science, stones are thrown at innovators in the town square too often, even when there is no evidence to support the criticism. The claim by critics who had no access to these materials that the “BeLaU”-type spherules are coal ash is a good example of unsubstantiated scrutiny of this nature. “I wish we were humbler and bolder,” Magdalena concluded.
One is left to wonder whether the new tool of artificial intelligence (AI) could help editors spot misconduct by scientists or reviewers. At present, it is straightforward for AI to identify plagiarism or the repetition of published ideas. This goal can be achieved by training machine learning (ML) algorithms on the entire scientific literature. AI/ML systems could advise authors about references that they have missed. Similarly, they could identify conflicts of interest between potential reviewers and authors, and serve as a preliminary test of the originality of claims by comparing them to previously published papers. Human reviewers must remain part of the process, but AI/ML can potentially flag human biases or compensate for the limited ability of the human brain to process the entire available literature and scientific data during the review process. AI/ML systems that are trained on human-written texts could flag the weaknesses of scientists and reviewers alike during peer review.
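As a minimal sketch of the reference-suggestion and originality-screening step described above, and not a description of any tool that journals actually use, the snippet below compares a new abstract against a small corpus of published abstracts using TF-IDF cosine similarity; the tiny corpus, the similarity threshold, and the scikit-learn approach are all my own illustrative assumptions:

```python
# Minimal sketch (illustrative only): flag previously published papers whose
# abstracts overlap strongly with a new manuscript, as a hint to authors about
# possibly missed references and to editors about originality.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of "published" papers (titles and abstracts are made up).
published = [
    {"title": "Chemical composition of cosmic spherules in deep-sea sediments",
     "abstract": "We analyze iron-rich cosmic spherules collected from deep-sea "
                 "sediments and report their elemental abundance patterns."},
    {"title": "Coal fly ash particles in marine sediments",
     "abstract": "Coal fly ash released by industrial combustion accumulates in "
                 "marine sediments and can superficially resemble cosmic spherules."},
]

def screen_manuscript(abstract, corpus, threshold=0.1):
    """Return prior papers similar enough to the new abstract to deserve a look
    by the authors (missed reference?) or the editor (originality check?).
    The 0.1 threshold is an arbitrary illustrative choice."""
    texts = [paper["abstract"] for paper in corpus] + [abstract]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
    scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    hits = [
        {"title": paper["title"], "similarity": float(score)}
        for paper, score in zip(corpus, scores)
        if score >= threshold
    ]
    return sorted(hits, key=lambda hit: hit["similarity"], reverse=True)

new_abstract = ("We report millimeter-size spherules with an unusual elemental "
                "composition recovered from deep-sea sediments along a meteor path.")
for hit in screen_manuscript(new_abstract, published):
    print(f"{hit['similarity']:.2f}  {hit['title']}")
```

A real system would of course work on the full literature and on full manuscript texts, and would hand its flags to human editors and reviewers rather than act on them.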
Despite the faults inherent in the current process for publishing scientific results, I remain optimistic that our future with AI/ML will be better. The improvement could be accelerated by an encounter with an extraterrestrial scientific civilization, which would serve as a better role model for the way science should be pursued.
Science holds the promise of clearing Rilke’s mystic fog over Nature magazine. Here’s hoping that the new knowledge will add prosperity and longevity to our otherwise tumultuous life, so that more poems could be composed by talented writers like Rilke.
ABOUT THE AUTHOR
Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. His new book, titled “Interstellar”, was published in August 2023.