In traditional programming, it was customary to use mathematics and formal methods to prove that critical applications behaved as expected.
With Artificial Intelligence, the inference engine may well be bug-free and perform exactly as designed, yet still produce unexpected results.
Indeed, the data used to “educate” an AI may well introduce a cognitive bias. Just as a child can be misled by being taught flawed reasoning, an AI may well label images of cats as dogs if it has been trained to recognize felines as “dogs.”
The question then arises: how can we validate that an AI will behave as expected in all cases? And the underlying question: how can we trust an AI?
These are the questions we will try to answer during this webinar.
- Philippe Wieczorek, R&D and Innovation Director, Minalogic
- Arnault Ioualalen, CEO and R&D director at Numalis
- Marie-Christine Rousset, Professor at Université Grenoble Alpes, Scientific Delegate at LIG, and co-holder of the MIAI chair “Explainable and Responsible AI”