Our IQ Will be Higher in the Future

Avi Loeb
5 min read · Aug 22, 2024


(Image credit: Mother Jones)

Following several brainstorming sessions with the brilliant evolutionary biologist, Jacob Wilde from Oxford University in the UK, we arrived at some interesting conclusions regarding the past and future evolution of IQ (Intelligence Quotient) on planet Earth.

The Central Limit Theorem of statistics states that the sum of a large number of independent random contributions converges to a Gaussian distribution, characterized by a mean value and a standard deviation. The tail of the distribution is exponentially suppressed, making it exceedingly unlikely for any individual to lie many standard deviations away from the population mean.

For simplicity, we assumed that the statistical distribution of cognitive abilities within the human population follows a Gaussian distribution and used the number of people who lived at any given time in human history to calculate how many standard deviations the most capable human brain was away from the mean brain. Our underlying assumption was that the number of parameters, namely synaptic connections between neurons in the human brain, follows a Gaussian distribution.

At any time in human history, the deviation of the highest value from the mean is obtained by equating the area under the tail of the Gaussian probability distribution beyond this highest value to one part in the number of humans alive at that time. In other words, for a population of N people, the most capable brain is expected to lie z standard deviations above the mean, where the Gaussian tail probability beyond z equals 1/N.

Based on data on the growth of the human population over time, we calculated that the most capable brain increased from 5.2 standard deviations above the mean around 50,000 B.C.E. to 6.47 standard deviations at the present time.
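This tail-area condition is easy to evaluate numerically. A minimal sketch, using illustrative population figures (a few million people in the deep past versus roughly 8 billion today; these are my assumed inputs, not necessarily the exact population data the calculation used, so the resulting deviations differ slightly from the quoted 5.2 and 6.47):

```python
from statistics import NormalDist

def max_sigma(population: int) -> float:
    """Solve P(Z > z) = 1/population for a standard normal Z:
    the expected deviation above the mean of the single most
    extreme individual out of `population` draws."""
    return NormalDist().inv_cdf(1.0 - 1.0 / population)

# Assumed, illustrative population sizes:
z_past = max_sigma(5_000_000)      # ~50,000 B.C.E.
z_now = max_sigma(8_000_000_000)   # present day
```

Because the Gaussian tail falls off so steeply, a thousandfold growth in population moves the most capable brain outward by only about one standard deviation.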

Next, we assumed that the average number of parameters, namely synapses of the neural network in the human brain, scales in proportion to the average volume of the evolving human skull. This assumption is valid for primates, where average neuron count scales linearly with average brain volume, but not for other mammalian clades such as rodents, where larger brains have fewer neurons per unit volume. Based on empirical data for the human brain, we adopted an average density of 63.7 million neurons per cubic centimeter, and took the upper range of current estimates for the number of synaptic connections per neuron, about 9,000. The mean number of parameters within the human population is then the product of these two numbers with the average skull volume as a function of time.

To estimate the standard deviation of the number of parameters in the human brain, we assumed that its ratio to the mean value matches the standard deviation of human IQ as a fraction of its mean, namely about 15%. We adopted IQ as an indicator because neuron count, while an excellent predictor of cognitive ability over long evolutionary timescales, is a poor predictor over short timescales, i.e. within a species.
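Under these assumptions, the mean and spread of the synapse count follow directly. A sketch, taking a modern average brain volume of roughly 1,350 cubic centimeters (my illustrative figure, not necessarily the exact value adopted in the calculation):

```python
NEURONS_PER_CM3 = 63.7e6      # empirical neuron density (from the text)
SYNAPSES_PER_NEURON = 9_000   # upper range of current estimates (from the text)
SKULL_VOLUME_CM3 = 1_350      # assumed modern average brain volume

# Mean number of parameters (synaptic connections) in the population
mean_params = NEURONS_PER_CM3 * SKULL_VOLUME_CM3 * SYNAPSES_PER_NEURON

# Standard deviation assumed to mirror the IQ distribution: ~15% of the mean
sigma_params = 0.15 * mean_params

# Largest brain today, taken to lie 6.47 sigma above the mean (from the text)
max_params = mean_params + 6.47 * sigma_params
```

With these inputs the mean comes out near 8 hundred trillion synapses, which also reproduces the ~0.2% ratio cited later for a 1.7-trillion-parameter LLM.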

We calculated the mean and maximum number of parameters as functions of time in the past, based on reported data for hominid skull size over the history of human evolution.

However, the good news is that the future could be better than the past. Artificial Intelligence (AI) systems such as Large Language Models (LLMs) offer neural networks whose number of parameters is increasing exponentially with time. At present, the state-of-the-art number of parameters in LLMs is of order 1.7 trillion, or equivalently ~0.2% of the average number of synaptic connections in the human brain. The e-folding time of this growth is currently estimated to be in the range of 1–2 years.

It remains unclear how AI parameters relate to biological synapses as predictors of cognitive ability in large neural networks. It is more straightforward to compare the human brain to neuromorphic computers, which more explicitly model the architecture of biological brains. At the present time, the largest neuromorphic computer contains 0.128 trillion artificial synapses, or equivalently about 0.015% of the number of connections in the human brain.

The time history (left) of the mean number of hominid neural parameters (red line) and of its maximum value within the human population (blue line), based on data (black dots) on hominid skull volumes taken from the fossil record. This is compared to the projected exponential growth (right) in the number of parameters of AI systems, assuming an e-folding time of 2 years for both LLMs (grey) and neuromorphic computers (black).

Our calculation showed that AI systems will potentially surpass the largest number of parameters within the human population within the next one to two decades.
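Because growth is exponential, the crossover time depends only logarithmically on the target, which makes the decade-scale estimate robust. A sketch, treating the largest-brain figure of ~1.5 quadrillion synapses (derived from the numbers quoted above) as an assumption:

```python
import math

LLM_PARAMS_NOW = 1.7e12    # state-of-the-art LLM parameter count (from the text)
MAX_BRAIN_PARAMS = 1.5e15  # assumed synapse count of the largest human brain

def years_to_reach(target: float, current: float, efold_years: float) -> float:
    """Time for exponential growth N(t) = current * exp(t / efold_years)
    to reach `target`."""
    return efold_years * math.log(target / current)

t_fast = years_to_reach(MAX_BRAIN_PARAMS, LLM_PARAMS_NOW, 1.0)  # ~7 years
t_slow = years_to_reach(MAX_BRAIN_PARAMS, LLM_PARAMS_NOW, 2.0)  # ~14 years
```

With e-folding times of 1–2 years, the crossover lands roughly 7 to 14 years out, within the one-to-two-decade window quoted above.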

The exponential growth of AI is expected to saturate because of the limited available electric power. Capping the growth at ten times today's global electric power consumption, namely about 30 terawatts, and assuming a current consumption of about 1 gigawatt per LLM, limits the growth to about 10 e-folding times, comparable to the time when AI systems will match the largest human brain. Interestingly, the growth of both the human brain and AI is limited by power supply at a similar number of parameters.
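The power-limit argument reduces to a single logarithm, assuming power consumption grows in proportion to parameter count. A sketch, treating the 1-gigawatt-per-LLM figure as the rough estimate it is:

```python
import math

POWER_CAP_W = 30e12  # ten times today's global electric power consumption (~30 TW)
LLM_POWER_W = 1e9    # assumed current power draw of a frontier LLM (~1 GW)

# Number of e-foldings of growth available before power becomes the
# bottleneck, if power scales linearly with parameter count.
efolds = math.log(POWER_CAP_W / LLM_POWER_W)  # about 10.3
```

Ten e-foldings at 1–2 years each is 10–20 years, matching the timescale on which AI parameter counts approach the largest human brain.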

In summary, over the past 10 million years, the number of parameters per neural network system in hominid brains increased by an order of magnitude. Within the coming decades, it could increase exponentially on a timescale of years in AI systems. The largest number of parameters of any brain within the entire human population will potentially be surpassed by the number of parameters in AI systems within ~1–2 decades.

This is all good news. The smartest kid on the terrestrial block might eventually be our own technological creation, AI systems, within a matter of decades. The question is whether we will find an extraterrestrial system with a yet higher IQ even sooner. I am working on that together with the Galileo Project team. Stay tuned.

ABOUT THE AUTHOR

(Image credit: Chris Michel, 2023)

Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. His new book, titled “Interstellar”, was published in August 2023.


Avi Loeb

Avi Loeb is the Baird Professor of Science and Institute director at Harvard University and the bestselling author of “Extraterrestrial” and "Interstellar".