Technical Talks

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Ronny Luss | Research Staff Member | IBM Research AI

As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing for these algorithms to explain their decisions. Whether you are the loan officer who needs to understand why an algorithm accepted a particular application, or the applicant who wants to know what they could have done differently to avoid rejection, the need for explanations is clear.

This talk gives an overview of AI Explainability 360, an open-source software toolkit featuring eight diverse, state-of-the-art explainability methods and two evaluation metrics. We further provide a taxonomy to help those requiring explanations determine which explanation method will best serve their purposes.

Data scientists and other users will learn about a new toolkit that offers hands-on experience with some of the latest explainability methods through various demos and tutorials in Jupyter notebooks. Taken together, our toolkit and taxonomy can help identify gaps where more explainability methods are needed and provide a platform to incorporate them as they are developed.
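To make the idea of a model-agnostic explanation concrete, here is a minimal sketch of permutation feature importance: shuffle one feature at a time and measure how much the model's accuracy drops. This is an illustrative example only, not the AI Explainability 360 API; the toy loan model and feature names are invented for this sketch.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic importance: shuffle one feature at a time and
    measure how much the model's score drops on average."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and the labels
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy "loan model" (invented for illustration): approve when income exceeds debt.
def toy_model(X):
    return (X[:, 0] - X[:, 1] > 0).astype(float)

def accuracy(y, yhat):
    return float(np.mean(y == yhat))

rng = np.random.default_rng(42)
X = np.column_stack([
    rng.normal(5, 2, 500),  # income
    rng.normal(4, 2, 500),  # debt
    rng.normal(0, 1, 500),  # irrelevant noise feature
])
y = toy_model(X)
print(permutation_importance(toy_model, X, y, accuracy))
```

Because the labels here come from the toy model itself, shuffling income or debt degrades accuracy while shuffling the noise feature leaves it unchanged, so the importance scores single out the features the model actually uses.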

Ronny Luss
Research Staff Member | IBM Research AI

Ronny Luss is a Research Staff Member at IBM Research AI, where he has worked on projects across a multitude of industries and applications, including product recommendations, advertising, insurance, and explainability. Ronny has published articles in various machine learning and optimization journals and conferences, and holds a Ph.D. in Operations Research from Princeton University.