Technical Talks

Explaining AI: Putting Theory into Practice

Luke Merrick | Data Scientist | Fiddler Labs

In recent years, model interpretability has become a hot area of research in machine learning, driven largely by the proliferation of ML in products and its social implications. At Fiddler Labs, we're building a general-purpose Explainable AI Engine to help ML practitioners better trust and understand their models at scale.

In this talk, we will cover lessons learned from working with various model-explanation algorithms across business domains. Through the lens of two case studies, we will discuss the theory, application, and practical guidelines for using explainability techniques effectively to generate value in your data science lifecycle.
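The abstract does not name the specific algorithms the talk covers. As a hedged illustration of what applying a model-explanation technique looks like in practice, the sketch below computes permutation feature importance with scikit-learn on a public dataset; the dataset, model, and all names are illustrative assumptions, not Fiddler's engine or API.

    # Illustrative only: a generic model-explanation technique (permutation
    # feature importance), not the specific algorithms covered in this talk.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Fit a simple model on a public dataset.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Explain the model globally: how much does shuffling each feature
    # degrade held-out accuracy?
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
    for name, mean in ranked[:5]:
        print(f"{name}: {mean:.3f}")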

Luke Merrick
Data Scientist | Fiddler Labs

Luke Merrick is a Data Scientist tackling AI explainability at Fiddler Labs, combining deep theoretical understanding with an appreciation for the challenges of deploying ML in the real world. A graduate of the University of Virginia, he has a background in applying machine learning to the pricing of financial assets. He has previously worked in quantitative finance and insurance tech, and he particularly enjoys the low-signal time-series prediction problems common in those disciplines.
