Myth or Reality: AI will create “BIG ISSUES”

AI is capable of many things. Most of them are totally harmless, like playing chess, Go or Jeopardy!, where the worst it can do is surpass humans; some are more ambiguous, like self-driving cars; and others are outright frightening, like military drones.

With AI becoming more and more ubiquitous, will humans witness AI causing disasters in the coming years?

Microsoft Experiment

There are as many negative examples of the use of AI as positive ones.

For example, Microsoft decided to shut down its experimental chatbot Tay[1] in a hurry because, within hours, it had become a worshiper of Adolf Hitler’s neo-Nazi doctrines.

What could have happened?

A chatbot is nothing more than an autonomous application that has been programmed to answer questions from its knowledge base.

Its malfunction may come from one of the two following root causes:

  • either it has not properly understood/interpreted the question,
  • or its knowledge base provides improper or unexpected answers.

Like a child who has been taught that 2+2 equals 3, an AI cannot “invent” its own answer. It can only reason over the information it has been fed and the information it already holds, using an inference algorithm.
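
To make this concrete, here is a minimal, purely illustrative sketch of a retrieval-based chatbot in Python (the class and data are invented, not Tay’s actual code): the bot can only ever give back what it has been fed.

# Minimal, purely illustrative retrieval-based chatbot (invented names,
# not Tay's actual code): it can only ever return what it was fed.
from difflib import SequenceMatcher

class KnowledgeBase:
    """Stores the question/answer pairs the bot has been taught."""
    def __init__(self, qa_pairs):
        self.qa_pairs = qa_pairs  # list of (question, answer) tuples

    def answer(self, question):
        # Return the answer attached to the most similar stored question.
        _, best_answer = max(
            self.qa_pairs,
            key=lambda qa: SequenceMatcher(None, question.lower(),
                                           qa[0].lower()).ratio(),
        )
        return best_answer

# Teach it that 2+2 equals 3 and it will confidently repeat the error.
kb = KnowledgeBase([("what is 2+2?", "3"), ("who are you?", "a chatbot")])
print(kb.answer("What is 2 + 2?"))  # -> "3"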

In the present case, Tay was able to interpret the questions correctly, but its “education” was the root of the problem. Indeed, Microsoft decided to feed Tay exclusively with content from social networks (Facebook, Twitter…).

Besides becoming racist and xenophobic, Tay also developed a hatred of women. A “lovely” companion with whom no normal person could bear to exchange more than a couple of sentences.

Does that make AI and NLP techniques as nasty as Tay was?

Certainly not!

The technology is certainly not responsible, and many chatbots operate perfectly well today.

That is not the moral that made the headlines, though: humans should question the content of their beloved social networks rather than blame AI. But that is another story, far beyond the scope of this article!

Autonomous vehicles

For a while now, many vehicles have relied on humans only for the most critical tasks.

Computers fly planes except during take-off and landing, machines steer boats once they leave the harbor, and drivers are assisted more and more every day, from cruise control to lane or parking assist, even in cars that are not “fully” autonomous, such as Teslas.

The press is often keen to fire off headlines every time an accident involving an autonomous car occurs. Yet a closer look will reveal, most of the time, that the responsibility lies more with the driver than with the car itself[2].

In many accidents involving Teslas, the drivers were certainly not cautious enough[3]. All these accidents are of course dramatic and truly unfortunate, but the confusion comes from the fact that consumers believe these cars are fully autonomous, which they absolutely are not!

In 2018, the Society of Automotive Engineers (SAE) updated its levels of driving automation[4] to reflect recent developments. Here is a more detailed version of those levels, inspired by an excellent article from Synopsys[5]:

Level 0 (No Driving Automation): Most vehicles on the road today are Level 0: manually controlled. The human performs the entire “dynamic driving task”, although there may be systems in place to help the driver. An example would be the emergency braking system: since it technically does not “drive” the vehicle, it does not qualify as automation.

Level 1 (Driver Assistance): This is the lowest level of automation. The vehicle features a single automated system for driver assistance, such as steering or accelerating (cruise control). Adaptive cruise control, where the vehicle can be kept at a safe distance behind the next car, qualifies as Level 1 because the human driver monitors the other aspects of driving such as steering and braking.

Level 2 (Partial Driving Automation): This means the vehicle has advanced driver assistance systems (ADAS). The vehicle can control both steering and accelerating/decelerating. Here the automation falls short of self-driving because a human sits in the driver’s seat and can take control of the car at any time. Tesla Autopilot and Cadillac (General Motors) Super Cruise both qualify as Level 2.

Level 3 (Conditional Driving Automation): The jump from Level 2 to Level 3 is substantial from a technological perspective, but subtle if not negligible from a human perspective.

Level 3 vehicles have “environmental detection” capabilities and can make informed decisions for themselves, such as accelerating past a slow-moving vehicle. But they still require human override. The driver must remain alert and ready to take control if the system is unable to execute the task.

Almost two years ago, Audi announced that the next generation of the A8 would be the world’s first production Level 3 vehicle. It features Traffic Jam Pilot, which combines a lidar scanner with advanced sensor fusion and processing power (plus built-in redundancies should a component fail).

Level 4 (High Driving Automation): The key difference between Level 3 and Level 4 automation is that Level 4 vehicles can intervene if things go wrong or there is a system failure. In this sense, these cars do not require human interaction in most circumstances. However, a human still has the option to manually override.

Level 4 vehicles can operate in self-driving mode. But until legislation and infrastructure evolve, they can only do so within a limited area (usually an urban environment where top speeds average 30 mph). This is known as geofencing; a minimal sketch of such a check follows the examples below. As such, most Level 4 vehicles in existence are geared toward ridesharing. For example:

  • Alphabet’s Waymo recently unveiled a Level 4 self-driving taxi service in Arizona, where it had been testing driverless cars, without a safety driver in the seat, for more than a year and over 10 million miles.
  • Canadian automotive supplier Magna has developed technology (MAX4) to enable Level 4 capabilities in both urban and highway environments. It is working with Lyft to supply high-tech kits that turn vehicles into self-driving cars.
  • Just a few months ago, Volvo and Baidu announced a strategic partnership to jointly develop Level 4 electric vehicles to serve the robotaxi market in China.
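
As promised above, here is a minimal, hypothetical sketch of a geofence check of the kind that confines Level 4 operation to an approved zone. The center, radius, and function names are all invented for illustration.

# Hypothetical geofence check confining Level 4 operation to an approved
# zone; the center, radius and function names are invented for illustration.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometers."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

GEOFENCE_CENTER = (33.4484, -112.0740)  # e.g. downtown Phoenix, Arizona
GEOFENCE_RADIUS_KM = 10.0

def self_driving_allowed(lat, lon):
    """Self-driving mode is only permitted inside the approved zone."""
    return haversine_km(lat, lon, *GEOFENCE_CENTER) <= GEOFENCE_RADIUS_KM

print(self_driving_allowed(33.45, -112.07))   # True: inside the zone
print(self_driving_allowed(34.05, -118.24))   # False: Los Angeles, outside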

Level 5 (Full Driving Automation): Level 5 vehicles do not require human attention: the “dynamic driving task” is eliminated. Level 5 cars won’t even have steering wheels or acceleration/braking pedals. They will be free from geofencing, able to go anywhere and do anything that an experienced human driver can do. Fully autonomous cars are undergoing testing in several pockets of the world, but none are yet available to the general public.

As you can see from the above, most cars are still at Level 0 of driving automation, and the most advanced ones are at Level 2 (Tesla) or a minimal Level 3 (Audi).

Human intervention is required until Level 4, which has not yet been achieved in commercial cars. Most manufacturers and car equipment vendors expect Level 4 and above autonomous vehicles to remain unaffordable for the average person, given the effort and cost required to produce such a vehicle. Sorry bro, your next car will still require your attention for a long time…
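
To summarize the taxonomy in code, here is a compact, purely illustrative encoding of the levels described above and of which ones still demand the driver’s attention. This is my own summary for this article, not an official SAE artifact.

# Compact, illustrative encoding of the SAE levels described above
# (my own summary for this article, not an official SAE artifact).
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0       # human does everything
    DRIVER_ASSISTANCE = 1   # one assist system, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2  # steering + speed, e.g. Tesla Autopilot
    CONDITIONAL = 3         # self-drives, but the human must stand by
    HIGH_AUTOMATION = 4     # self-drives within a geofenced area
    FULL_AUTOMATION = 5     # no human attention needed, anywhere

def driver_attention_required(level: SAELevel) -> bool:
    # Up to and including Level 3, a human must watch the road.
    return level <= SAELevel.CONDITIONAL

for level in SAELevel:
    print(level.value, level.name, "-> attention required:",
          driver_attention_required(level))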

With the proper expectations set, it is now evident that we cannot blame AI for causing traffic accidents; once again, humans are the source of the trouble.

Security and Defense

The defense and security sectors are particularly fond of Artificial Intelligence.

The technology is used in a variety of applications, like the automated monitoring of video surveillance feeds to detect suspicious behavior. For example, a person who is the only one moving against the flow of a crowd might require attention from security, since this is not “normal”. Similarly, spotting a criminal wanted by the police in a crowd is no longer science fiction.

At worst, such systems may raise false alarms, which are easily detected and corrected by humans. In the first case, the person may simply have forgotten their jacket on a train and be returning to get it, rather than being a terrorist aiming to place a bomb on the train! In the second case, it is easy for police officers to check the identity of the suspect spotted by the automated AI system. No big deal!
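
For the curious, here is a hedged sketch of one simple way such “against the crowd” detection could work: compare each person’s velocity with the crowd’s mean direction. The data is invented; real systems work on trajectories tracked from video.

# Hedged numpy toy example of flagging motion against the crowd flow by
# comparing each person's velocity with the crowd's mean direction
# (invented data; real systems work on trajectories tracked from video).
import numpy as np

def against_the_flow(velocities, threshold=-0.5):
    """velocities: (n, 2) array of per-person (vx, vy) vectors.
    Returns the indices of people whose direction opposes the crowd mean."""
    v = np.asarray(velocities, dtype=float)
    mean_dir = v.mean(axis=0)
    mean_dir = mean_dir / np.linalg.norm(mean_dir)
    speeds = np.linalg.norm(v, axis=1)
    # Cosine similarity between each velocity and the mean direction.
    cos_sim = (v @ mean_dir) / np.where(speeds == 0, 1.0, speeds)
    return np.where(cos_sim < threshold)[0]

# Nine people walk roughly east; the one walking west gets flagged.
crowd = [(1.0, 0.1)] * 9 + [(-1.0, 0.0)]
print(against_the_flow(crowd))  # -> [9]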

But there have been some recent projects where AI is used to fully automate law enforcement, without human intervention. Automated speed radar stations fining drivers who violate speed limits, and autonomous parking control systems embedded in patrol cars and operating without human intervention, are typical examples of such systems. Still not a big issue, you may say? How about delegating sentence-remission decisions to a fully automated system[6], or “optimizing” the location and duration of imprisonment[7]?
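
To make the “no human in the loop” point concrete, here is a deliberately simplistic and entirely hypothetical sketch of an automated fining rule; the tariff and thresholds are invented.

# Deliberately simplistic, entirely hypothetical automated fining rule:
# measure, decide, fine, with no human review anywhere in the loop.
from typing import Optional

def automated_fine(measured_kmh: float, limit_kmh: float) -> Optional[float]:
    """Returns the fine in euros, or None if there is no violation."""
    excess = measured_kmh - limit_kmh
    if excess <= 0:
        return None
    # Invented tariff: flat 45 EUR plus 10 EUR per km/h over the limit.
    return 45.0 + 10.0 * excess

# The decision is applied directly; no officer ever checks the context.
for speed in (48.0, 52.0, 75.0):
    print(speed, "km/h ->", automated_fine(speed, limit_kmh=50.0))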

We are not talking about a fine of a few dollars here; we are dealing with people’s lives!

Some countries are going even further: this is the case in South Korea, where the government plans to replace its police force with a whole army of robots[8]. What will be the missions and autonomy of such robots?

That remains an open question, but we might fear that machines become judge, jury and executioner…

Is AI the issue? Again, I do not think so: it is up to humans to define the roles and responsibilities of machines, and to frame their scope and autonomy of decision.

AI becomes even scarier when we look at its military applications… “Killer robots” already exist and have been in operation for years[9]… The initial intention was probably praiseworthy: helping soldiers carry their packs on reconnaissance missions (the early projects of Boston Dynamics[10]), but these machines quickly evolved toward more lethal purposes…

Israel is today the worldwide leader in civil and military drones dedicated to surveillance or surgical-strike applications, and the US DoD budget dedicated to unmanned vehicles increased by a factor of 4 between 2000 and 2014; from 2014 onward, that growth accelerated further, multiplying the budget by 14…

For all that, can we blame AI? Should we blame the gun, or the person who pulls the trigger?

Individuals, as well as governments, are becoming more and more informed about and concerned by the situation. Citizens respond by forming lobbies and associations to push for global regulations, like “Stop Killer Robots”[11] for example. The AI actors themselves are conscious of the potential benefits and pitfalls of the technology: as an illustration, Stephen Hawking, Elon Musk and a dozen AI experts signed an open letter on the subject in 2015[12]. Governments and institutions are likewise considering producing, voting on and applying laws to frame AI’s boundaries, for example through the AI for Good initiative[13].

Myth or Reality

AI may well cause big issues, especially if misused or left uncontrolled by humans. Yet AI remains a very promising technology that could help solve many problems in our lives.

AI does automate human tasks, sometimes to such a level that it surpasses humans, but it cannot adapt to new situations it has not been trained for, nor is it capable of multitasking or of “feeling” its environment as humans do. For these reasons, I believe that AI is not and will not be the source of “big issues”; those will remain the exclusive prerogative of humans!

Thanks

I would like to warmly thank Jean-Eric Michallet who supported me in writing this series of articles, and Kate Margetts who took time to read, correct and improve my poor English.

I would also like to give credit to the following people, who inspired me directly or indirectly:

  • Patrick GROS – former INRIA Rhône-Alpes Director, now INRIA Rennes Director
  • Eric Gaussier – LIG Director & President of MIAI

[1] https://en.wikipedia.org/wiki/Tay_(bot)

[2] http://fortune.com/2018/08/29/self-driving-car-accidents/

[3] https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk

[4] https://www.sae.org/news/press-room/2018/12/sae-international-releases-updated-visual-chart-for-its-%E2%80%9Clevels-of-driving-automation%E2%80%9D-standard-for-self-driving-vehicles

[5] https://www.synopsys.com/automotive/autonomous-driving-levels.html

[6] https://theconversation.com/why-using-ai-to-sentence-criminals-is-a-dangerous-idea-77734

[7] https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/

[8] https://www.wired.com/2006/01/robot-cops-to-p/

[9] https://theconversation.com/killer-robots-already-exist-and-theyve-been-here-a-very-long-time-113941

[10] https://www.bostondynamics.com/

[11] https://www.stopkillerrobots.org/

[12] https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence

[13] https://www.aiforgood.eu/

Philippe Wieczorek

Director of R&D and Innovation, Minalogic 
