Myth or Reality? “AI is (way) better than humans”.

As discussed in the previous articles, we agreed that Artificial Intelligence is able to do many “intelligent” things: understand verbal commands, recognize images, play games better than humans do, and perform complex tasks on its own (such as driving a car). Does that mean, however, that AI is better than humans? This article aims to answer that exact question!
As discussed in a previous article[1], industry has defined 3 levels of Artificial Intelligence, from the simplest to the most complex:
- Narrow Intelligence (ANI),
- General Intelligence (AGI),
- and Super Intelligence (ASI).
As a reminder, current developments are at the ANI level, given their limitations. AGI, which is comparable to human capacities, is commonly expected to be achievable within the next 40 years, quickly followed by ASI, which would surpass the human brain.
A White House report on Artificial Intelligence[2] predicts that within the next 20 years we will likely not see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans,” though it does go on to say that in the coming years, “machines will reach and exceed human performance on more and more tasks”.
Arend Hintze[3], an AI researcher at Michigan State University, distinguishes four progressive levels of AI:
- Reactive Machines
- Limited Memory
- Theory of Mind
- Self-Awareness
As of today, most AI systems are reactive machines, at best with limited memory. To summarize Hintze’s excellent article: existing AI reacts to input and data but does not take past actions or history into consideration. A chess-playing AI analyzes the chessboard and computes the best possible moves, anticipating its opponent’s possible responses, but it does not take into account the games that opponent has played in the past. Similarly, autonomous vehicles keep track of only a limited number of past observations.
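To make the distinction concrete, here is a minimal sketch in Python (the class names and the best_response helper are hypothetical, invented purely for illustration) contrasting a reactive agent with a limited-memory one:

```python
from collections import deque

class ReactiveAgent:
    """Hintze level 1: reacts to the current input only; keeps no history."""

    def act(self, observation):
        # The decision depends solely on the present state of the world,
        # e.g. evaluating the current board position in chess.
        return best_response(observation)

class LimitedMemoryAgent:
    """Hintze level 2: keeps a short, fixed-size window of past observations."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)  # older entries fall off the end

    def act(self, observation):
        self.history.append(observation)
        # Recent context is available (like a self-driving car tracking
        # nearby vehicles), but anything beyond the window is forgotten.
        return best_response(observation, context=list(self.history))

def best_response(observation, context=None):
    # Hypothetical placeholder for a domain-specific policy (minimax search,
    # a trained model, ...); not a real implementation.
    ...
```

The point is structural: neither agent learns from entire past games; the second merely widens its view of the immediate past.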
What is interesting in Hintze’s considerations is that he identifies what AI still lacks to reach human levels of thinking: an understanding that other agents have thoughts and intentions of their own (theory of mind), and the ability to form representations of itself (self-awareness), which may be considered a form of consciousness.
An interesting article (in French) from Parcoor[4], a small company based in Lyon, develops similar concepts and reaches similar conclusions. Instead of talking about “memory”, the article discusses the “volume” of information, and self-representation is described as adopting a higher perspective on information.
Both articles agree that current AI is confined to reactive or limited tasks. AI is often referred to as a “learning fool”, because it can only carry out repetitive tasks, without any form of critical judgment.
The clearest illustration of AI’s limitations is the size of artificial neural networks.
A typical artificial neural network is composed of a few hundred to a few thousand artificial neurons, whereas the average human brain contains roughly 86 billion neurons and on the order of 150 trillion synapses.
The third generation of Generative Pre-trained Transformer, better known as GPT-3[5], a neural network dedicated to natural language processing, has 175 billion parameters (trainable weights, loosely analogous to synapses). Training such a gigantic neural network mobilized a tremendous amount of the world’s computing capacity, and even though its scale begins to approach these biological numbers, it remains almost a thousand times smaller than the brain’s synapse count, and it can only process natural language.
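To put these orders of magnitude side by side, here is a quick back-of-the-envelope calculation (a minimal Python sketch; the brain figures are widely cited estimates rather than exact measurements):

```python
# Rough scale comparison between GPT-3 and the human brain.
GPT3_PARAMETERS = 175e9   # trainable weights, loosely analogous to synapses
BRAIN_NEURONS   = 86e9    # commonly cited estimate for an average human brain
BRAIN_SYNAPSES  = 150e12  # commonly cited estimate for an average human brain

print(f"GPT-3 parameters vs. brain neurons:  {GPT3_PARAMETERS / BRAIN_NEURONS:.1f}x")
print(f"Brain synapses vs. GPT-3 parameters: {BRAIN_SYNAPSES / GPT3_PARAMETERS:.0f}x")
# Prints roughly 2.0x and 857x: GPT-3 has about twice as many parameters as
# the brain has neurons, yet the brain's synapses still outnumber those
# parameters by nearly three orders of magnitude.
```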
AI is no exception to the generic adage that says, “computers do better at what humans do worst and vice versa (computers do worse at what humans do best)”, an observation closely related to Moravec’s paradox.
With this observation agreed upon, it is worth noting that some philosophers predict that Super Intelligence might not be a good thing for humanity.
For example, Nick Bostrom, the Swedish philosopher who co-founded the World Transhumanist Association, remarks in his book “Superintelligence”[6] that if superintelligence were created, it would quickly surpass human intelligence and could come to dominate humankind. Frightening…
This is why some conclude that researchers and industrial actors should restrict AI to carrying out basic routine tasks, and avoid equipping it with “consciousness”.
Myth or Reality?

As of today, and probably for the next couple of decades, AI will remain “dumb” and a “learning fool”, although it will become more and more integrated into various aspects of our lives.
The future of Artificial Intelligence is clearly a matter of technology, but it also calls for a great deal of reflection on the ethical use of such technologies. This is probably why AI sometimes falls under the “soft” sciences rather than the “hard” sciences. Beyond the ethical questions, which philosophers are probably in the best position to answer, there is also a strong need for legal rules and legislation. Lawmakers, however, move slowly compared to the speed at which technology progresses.
Governments have begun to perceive both the potential and the risks of Artificial Intelligence and have started to act… More importantly, citizens have started to become aware, and initiatives such as the open letter to the United Nations and the subsequent creation of “AI for good” institutes[7] encourage us to be cautiously optimistic about the future of AI.
Thanks
I would like to warmly thank Jean-Eric Michallet, who supported me in writing this series of articles, and Kate Margetts, who took the time to read and correct the articles.
I would also like to give credit to the following people, who inspired me directly or indirectly:
- Patrick Gros – INRIA Rhône-Alpes director
- Bertrand Braunschweig – INRIA AI white book coordinator
- Patrick Albert – AI Vet’ Bureau at AFIA
- Julien Mairal – INRIA Grenoble
- Eric Gaussier – LIG Director & President of MIAI
[1] https://miainnovation.fr/2020/10/15/several-levels-of-ai/
[3] https://www.govtech.com/computing/Understanding-the-Four-Types-of-Artificial-Intelligence.html
[4] https://blog.parcoor.com/2020-08-03-regle-intelligence/
[5] https://en.wikipedia.org/wiki/GPT-3
[6] https://www.amazon.fr/Superintelligence-Nick-Bostrom/dp/2100764861

Philippe Wieczorek
Director of R&D and Innovation, Minalogic