In recent years, model interpretability has become a hot area of research in machine learning, mainly due to the proliferation of ML in products and its resulting social implications. It is clear that we now need tools and platforms for model interpretability more than ever, so that ML practitioners can detect bias, understand model performance, and improve models with greater confidence.

Fiddler Labs is a startup building a new Explainable AI Engine to solve these problems at scale. We've assembled a team that has tackled some of these problems at companies like Facebook, Google, and Twitter.


Experience talks like this and many more at San Francisco 2019