Nowadays, Artificial Intelligence has invaded every article, every keynote and every company pitch, yet it is striking how few of them provide an accurate perspective on what AI actually is.
To be fair, journalists are not doing any better. For example, a generalist national media outlet in France (France TV info) describes Nao as a robot capable of thinking on its own. Nao, although very innovative in robotics terms (it was the first robot able to stand up on its own), does not contain a single piece of intelligence. Crediting it with a "conscience" and the ability to "think" on its own is excessively far-fetched, in addition to being technically incorrect.
Surprisingly, the most accurate view seems to come from politicians. Former US President Obama appeared very well versed in Artificial Intelligence in his interview with Wired in 2016. Not only was he familiar with the basic concept of AI, but also with Narrow, Specialized and General Intelligence, concepts I will come back to later in this article.
“There is a distinction […] between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI […] My impression […] is that we are still a reasonably long way away from that. Specialized AI […] is about using algorithms and computers to figure out increasingly complex tasks. We have been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed.”
The following article modestly tries to provide some naive insights into the technology, far less complete and ambitious than what my good friend Olivier Ezratty has done on the subject, to name only one example.
So, ready to know more? Go ahead and start reading. I promise it will be simple, with no headaches. 🙂
AI for dummies 🙂
As in every explorative journey, we need to commence our AI trip from a starting point.
I suggest opting for the following definition of AI:
The set of methods and techniques used to automate tasks performed by humans that require:
- Learning from others’ experiences or from trial and error,
- Memory organization and classification,
- Critical thinking and reasoning,
With all these skills acquired, a machine could perform the following activities:
- Interaction with its environment
- Problem resolution
- Creative Practice
I will come back to the supposed newness of AI in a subsequent article, but it is interesting to note that what today's world considers disruptive is actually a concept that has been around for over 55 years 🙂. Herbert A. Simon, Nobel laureate in economics, wrote in 1965: "Machines will be capable, within 20 years, of doing any work a man can do." Similarly, in 1967, Marvin Minsky, an American cognitive scientist who dedicated most of his work to Artificial Intelligence, predicted that "within a generation, the problem of creating 'artificial intelligence' will substantially be solved." Three years later he added: "In from 3 to 8 years, we will have a machine with the general intelligence of an average human being."
And boom! The concept of "general intelligence" was born, out of the need to compare human and machine levels of intelligence. Today, we can consider that there are three levels of AI:
- Artificial Narrow Intelligence (ANI) or Weak AI
- Artificial General Intelligence (AGI)
- Artificial Super Intelligence (ASI)
ANI refers to a computer's ability to perform a single task extremely well, such as crawling a webpage, playing chess, analyzing medical images in search of specific patterns to help doctors identify tumors, or parsing raw data to write journalistic reports. Every sort of AI we know today is Narrow AI, even with the advent of autonomous vehicles.
Artificial General Intelligence (AGI) refers to a computer program that can perform any intellectual task a human could. If technology development continues at its current pace, some expert estimates give a 50% chance of seeing AGI within the next 30 to 50 years.
Artificial Super Intelligence (ASI) is when an AI surpasses human intellect and thinking capabilities. It is commonly believed that ASI will follow very quickly once AGI is achieved.
Frightening, isn't it? Do you feel the "Terminator symptom" creeping into the back of your mind? It ain't over, and here is the best part:
There is absolutely no proof that human intelligence is the limit for artificial intelligence. By definition, humans cannot imagine what an intelligence that surpasses their own minds may be capable of…
Fortunately, it will take some time before we reach this point. Current AI development is far from ASI or AGI. Nevertheless, you have probably already been exposed to a number of terms you might not completely understand: Machine Learning, Supervised Learning, Unsupervised Learning, Statistical Analysis, Data Mining, Deep Learning, Reinforcement learning, etc…
That is the exact purpose of the following section: to get you more familiar with these terms. The objective is not to turn you into an expert, but at least to help you grasp the complexity of this fascinating world…
Artificial Intelligence relies on a profusion of different methods, which I will roughly classify into three large categories for the sake of simplicity:
- Data Analysis
- Symbolic Artificial Intelligence
- Machine Learning
By trying to position some techniques used in Artificial Intelligence on a timeline, you will discover that these techniques are not new, although some of them are less popular nowadays. It is also worth noting that current AI developments are highly linked to other research fields like Data Mining and Big Data.
Symbolic Artificial Intelligence relies on models built upon a symbolic (human-readable) representation of problems:
- Logical or algebraic models (convexity),
- Rules, decision trees, networks and graphs,
- Case-based reasoning
This approach, predictable and provable but also time- and effort-consuming, is also referred to as GOFAI: Good Old-Fashioned AI. A popular illustration of GOFAI is expert systems, which use a set of rules. Rules connect facts in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols.
As its name implies, GOFAI is of less use nowadays, mostly due to its design complexity: every case needs to be "known" by the system before it can be solved.
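To make this concrete, here is a minimal, hypothetical expert system in Python: a handful of facts plus If-Then rules, processed by forward chaining until no new deduction is possible. All the rule and fact names are invented for illustration.

```python
# A tiny rule-based (GOFAI-style) inference engine.
# Each rule reads: IF all conditions hold THEN assert the conclusion.

def forward_chain(facts, rules):
    """Derive new facts from rules until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)   # the If-part holds: assert the Then-part
                changed = True
    return facts

rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
]

derived = forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules)
print("refer_to_doctor" in derived)  # True
```

Note how the system can only conclude what its rules already cover: any case the rule author did not anticipate simply produces no deduction, which is exactly the design-complexity limit mentioned above.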
A more recent approach is "automatic learning", where the AI system learns by itself. This is where current AI development is focused, and where it is most encouraging. Many different techniques fall under the umbrella of Machine Learning:
- Neural Networks
- Hidden Markov Models
- Artificial immune systems
- Evolutionary Algorithms
- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning
- Deep Learning
I will not go into detail on each of these, for the sake of keeping you awake. Feel free to search Google to deepen your knowledge of the different subjects; here, I will simply provide a few examples as illustrations.
Classic machine learning is sometimes contrasted with Deep Learning, a form of machine learning that is very well suited to image and pattern recognition. I will get back to deep learning in a few paragraphs.
Supervised Learning means the model is trained on properly labelled data: data that are already tagged with the correct answer. The prediction model is built by highly qualified humans (experts) and needs to be adapted as soon as new kinds of data are introduced. Weather and traffic forecasts are typical examples of supervised machine learning; in general, such techniques are used for recommendation and time-series prediction.
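Supervised learning can be sketched in a few lines: fit a line y = a·x + b from examples where each input x is tagged with its correct answer y, then predict an unseen input. The traffic numbers below are made up purely for illustration.

```python
# Supervised learning in miniature: ordinary least squares on labelled data.

def fit_line(xs, ys):
    """Learn slope a and intercept b from (input, correct answer) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training set: hours of observed traffic (x) vs. travel time in minutes (y).
xs = [1, 2, 3, 4]
ys = [12, 14, 16, 18]          # the labels: "correct answers" given by experts

a, b = fit_line(xs, ys)
predict = lambda x: a * x + b
print(round(predict(5), 1))    # 20.0 -- the model generalises to unseen x
```

The key supervised-learning trait is visible here: the quality of the model depends entirely on the quality of the labels, and the fit must be redone whenever new labelled data arrive.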
By contrast, Unsupervised Learning is a technique that requires neither labelled data nor a supervised model. The prediction model learns by itself from the data it is fed. For example, once it "knows" that a car has 4 wheels and an engine, it will be able to recognize any car by itself, whatever the manufacturer or color.
Unsupervised Learning techniques are much more powerful and adaptive than Supervised Learning ones, but they also tend to produce much less predictable results.
To build models that are more predictable but still have some freedom, reinforcement learning techniques rely on algorithms that maximize a notion of cumulative reward. Each action of the model is either rewarded or punished, reinforcing the model positively or negatively.
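The reward-and-punish loop can be sketched with tabular Q-learning on a tiny, invented environment: a corridor of four states where only reaching the goal pays off. The environment, reward values and hyper-parameters are all made up for illustration.

```python
# Reinforcement learning in miniature: tabular Q-learning on a 4-state corridor.
import random

N_STATES, ACTIONS = 4, ["left", "right"]
ALPHA, GAMMA = 0.5, 0.9          # learning rate and discount factor

def step(state, action):
    """Deterministic environment: reward only for reaching the goal."""
    if action == "right":
        if state == N_STATES - 1:
            return 0, 1.0        # goal reached: reward +1, restart at state 0
        return state + 1, 0.0
    return max(state - 1, 0), 0.0

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

state = 0
for _ in range(2000):
    action = random.choice(ACTIONS)              # explore at random
    nxt, reward = step(state, action)
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    # Reward or "punish" the action just taken (the Q-learning update):
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
print(policy)   # every state should end up preferring "right"
```

By pure trial and error, with rewards propagating backwards through the value table, the agent learns that moving right is always the best choice, without ever being told so explicitly.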
Reinforcement learning techniques are particularly efficient in the field of skill acquisition and robotics, as shown in this example from Softbank of a robot learning to play a cup-and-ball game by itself.
Deep Learning is another form of Machine Learning, in which a machine learns representations by itself, in contrast to techniques where the machine "simply" executes a set of predefined rules. Deep learning relies heavily on neural networks, which aim to emulate the human brain's reasoning. Neurons are organized in layers; every layer takes the output of its predecessor as its input and passes the result of its processing to its successor. The more neurons and layers, the more complex the learning. A typical order of magnitude today is a network with several layers (2 to 5) and hundreds or thousands of neurons.
Neural networks need to be trained before being usable. During the training phase, every layer passes its errors back to the layer before it, and the overall mathematical model is adjusted accordingly (this is known as backpropagation). The network is gradually fine-tuned until it achieves the desired results.
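The layered structure is easy to show in code. In the sketch below, the weights are set by hand so that a two-layer network computes XOR exactly; in a real network they would instead be learned during the training phase just described. Everything here is a toy for illustration.

```python
# A minimal neural network: two layers of neurons, each layer feeding
# its output forward to the next.

def step(x):
    """Threshold activation: the neuron fires (1) if its weighted input is positive."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_network(x1, x2):
    # Hidden layer: one neuron computes OR, the other NAND.
    h1 = neuron([x1, x2], [1, 1], -0.5)     # OR
    h2 = neuron([x1, x2], [-1, -1], 1.5)    # NAND
    # Output layer takes the hidden layer's outputs as its inputs (AND).
    return neuron([h1, h2], [1, 1], -1.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_network(a, b))   # 0, 1, 1, 0
```

A single neuron cannot compute XOR; it takes the hidden layer feeding the output layer to do it, which is a small-scale illustration of why adding layers lets a network represent more complex functions.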
Once trained, a neural network simply works, and it consumes few enough computational resources that it can potentially be embedded on low-power microprocessors.
Obviously, the more data (in volume and quality) that feed the learning model, the more accurate the model will be.
Deep learning techniques work best for natural language processing and image and pattern recognition, and they are very suitable for data-flow analysis (audio and video).
You might think that all of the above is as complicated as it is abstract, and you are probably facing some difficulty perceiving if and how these technologies can be used in everyday products… Let's go through some examples.
A few concrete applications…
IBM has long been very active on the AI battlefield. Among its most demonstrative achievements, its Watson engine was the first computer ever to win Jeopardy! against human players. It relies mostly on Natural Language Processing and web search, in addition to scoring and sorting techniques.
Google DeepMind illustrates reinforcement learning by playing (and winning) Atari Breakout. With less than 2 hours of training, DeepMind gained superhuman skill at the game.
Apple's Siri, Microsoft's Cortana and Amazon's Alexa use similar techniques to provide speech recognition in their products, inventing a new way to interact with computers.
What’s Next ?
Now that you are a bit more familiar with AI technologies and their current uses, a number of popular questions about AI and how it can be used remain. I will dedicate future articles to confirming or debunking these urban legends.
I would like to warmly thank Jean-Eric Michallet who supported me in writing this series of articles, and Kate Margetts who took time to read and correct the articles.
I would also like to give credit to the following people, who inspired me directly or indirectly:
- Patrick GROS – INRIA Rhône-Alpes director
- Bertrand Braunschweig – INRIA AI white book coordinator
- Patrick Albert – AI Vet’ Bureau at AFIA
- Julien Mairal – INRIA Grenoble
- Frédéric Heitzmann – Edge AI Program Manager at CTO Office of CEA LETI
- Eric Gaussier – LIG Director & President of MIAI
By Philippe Wieczorek
Director of R&D and Innovation, Minalogic