Explainable Deep Learning AI: Methods and Challenges - Softcover

ISBN 13: 9780323960984

Synopsis

Explainable Deep Learning AI: Methods and Challenges presents the latest works of leading researchers in the XAI area, offering the reader an overview of the field along with several novel technical methods and applications that address explainability challenges for deep learning AI systems. The book opens with an overview of XAI and then covers a number of specific technical works and approaches for deep learning, ranging from general XAI methods to specific XAI applications and, finally, user-oriented evaluation approaches. It also explores the main categories of methods for explainable deep learning AI, which has become a necessary condition in various applications of artificial intelligence.

Groups of methods such as back-propagation-based and perturbation-based methods are explained, and their application to various kinds of data classification is presented.
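As a rough illustration of the perturbation-based family of methods mentioned above (not taken from the book), the following is a minimal occlusion-sensitivity sketch. The `occlusion_map` helper, the `toy_predict` stand-in classifier, and all parameter names are hypothetical; the idea is simply to slide an occluding patch over an input image and record how much the target-class score drops at each location.

```python
import numpy as np

def occlusion_map(predict, image, target_class, patch=8, stride=8, baseline=0.0):
    """Perturbation-based explanation: occlude one region at a time and
    record how much the target-class score drops at each location."""
    h, w = image.shape[:2]
    base_score = predict(image)[target_class]
    heat = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            perturbed = image.copy()
            perturbed[y:y + patch, x:x + patch] = baseline  # occlude one region
            drop = base_score - predict(perturbed)[target_class]
            heat[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)  # average score drop per pixel

# Toy usage with a stand-in "classifier" that scores a bright square;
# in practice `predict` would wrap a trained deep network.
def toy_predict(img):
    score = img[8:16, 8:16].mean()
    return np.array([1.0 - score, score])

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0
saliency = occlusion_map(toy_predict, img, target_class=1)
print(saliency.max(), saliency.min())
```

The resulting map of averaged score drops acts as a saliency map over the input: regions whose occlusion hurts the prediction most are the ones the model relied on.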

  • Provides an overview of main approaches to Explainable Artificial Intelligence (XAI) in the Deep Learning realm, including the most popular techniques and their use, concluding with challenges and exciting future directions of XAI
  • Explores the latest developments in general XAI methods for Deep Learning
  • Explains how XAI for Deep Learning is applied to various domains such as images, medicine, and natural language processing
  • Provides an overview of how XAI systems are tested and evaluated, especially with real users, a critical need in XAI

"synopsis" may belong to another edition of this title.

About the Authors

Jenny Benois-Pineau is a professor of computer science at the University of Bordeaux and head of the “Video Analysis and Indexing” research group of the “Image and Sound” team of LaBRI UMR 5800 (Université de Bordeaux / CNRS / IPB-ENSEIRB). She was deputy scientific director of theme B of the French national research unit CNRS GDR ISIS (2008-2015) and is currently in charge of international relations at the College of Sciences and Technologies of the University of Bordeaux. She obtained her doctorate in Signals and Systems in Moscow and her Habilitation to Direct Research in Computer Science and Image Processing at the University of Nantes, France. Her subjects of interest include image and video analysis and indexing, and artificial intelligence methods applied to image recognition.

Since 2009, he has been an Associate Professor in the Computer Science Department of the IUT ("Technical School") of the University of Bordeaux (Talence), France. He is also deputy director of the BKB ("Bench to Knowledge and Beyond") team of LaBRI.

Dragutin Petkovic is Professor in the Computer Science department at San Francisco State University, USA.

He is a senior researcher at CNRS and leader of the Multimedia Information Indexing and Retrieval (MRIM) group at the Laboratory of Informatics of Grenoble (LIG).

From the Back Cover

The recent focus of Artificial Intelligence (AI) researchers and practitioners on supervised learning approaches, particularly Deep Learning, has resulted in a considerable increase in the performance of AI systems, but it has also raised the question of the trustworthiness and explainability of their predictions for human decision makers and adopters. Explainable AI (XAI) addresses this challenge by developing methods to "understand" and "explain" to humans how these systems produce their decisions. This book presents the latest works of leading researchers in the XAI area and offers the reader, besides an overview of the field, several novel technical methods and applications that address explainability challenges for Deep Learning AI systems.

The book starts with an overview of the XAI area, then in 13 chapters covers a number of specific technical works and approaches to XAI for Deep Learning, ranging from general XAI methods to specific XAI applications and, finally, user-oriented evaluation approaches.

Following a methodological approach, it explores the main categories of methods for explainable Deep Learning AI, which has become a necessary condition in various applications of Artificial Intelligence. Groups of methods such as back-propagation-based and perturbation-based methods are explained, and their application to various kinds of data classification is presented. It also addresses important questions on evaluation by users.

"About this title" may belong to another edition of this title.