Model monitoring has become a critical capability for data scientists who need to keep their models performing reliably through the ups and downs and sudden shocks of real-world use. This session will cover key questions that data scientists should consider as they embark on a monitoring program for their company: Why is monitoring important? What should data science and MLOps teams be watching out for? We’ll survey the key metrics for managing a model’s ongoing success, as well as how to set your alert thresholds in a way that doesn’t send you off on snipe hunts. What does ML explainability have to do with monitoring? We’ll show how explainability is key to root cause analysis and rapid debugging, so that your model stays in production longer.
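To make the alert-threshold idea concrete, here is a minimal sketch of one common drift metric, the Population Stability Index (PSI), with a rule-of-thumb alert threshold. This is an illustrative example only; the bin fractions, function name, and the 0.25 cutoff are assumptions for the sketch, not anything specific to the session or to TruEra's products.

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population Stability Index over pre-binned feature fractions.

    Compares the fraction of data in each bin at training time
    (expected) against production (actual). Larger values mean
    the feature's distribution has shifted more.
    """
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        # Clamp to avoid log(0) when a bin is empty.
        e = max(e, eps)
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Illustrative data: uniform bins at training time, skewed in production.
baseline = [0.25, 0.25, 0.25, 0.25]   # training-time bin fractions
current  = [0.10, 0.20, 0.30, 0.40]   # production bin fractions

score = psi(baseline, current)
# A common rule of thumb: PSI < 0.1 is stable, 0.1–0.25 warrants a look,
# and > 0.25 suggests drift worth alerting on. Tuning this cutoff to your
# data is what keeps the alert from becoming a snipe hunt.
alert = score > 0.25
```

The design choice worth noting is that the threshold is applied to a single scalar per feature, which makes it cheap to compute on every scoring batch and easy to tune per feature rather than globally.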
Anupam Datta is Co-Founder, President, and Chief Scientist of TruEra. He is also Professor of Electrical and Computer Engineering and (by courtesy) Computer Science at Carnegie Mellon University. His research focuses on enabling real-world complex systems to be accountable for their behavior, especially as it pertains to privacy, fairness, and security. His work has helped create foundations and tools for accountable data-driven systems. Specific results include an accountability tool chain for privacy compliance deployed in industry, automated discovery of gender bias in the targeting of job-related online ads, principled tools for explaining decisions of artificial intelligence systems, and monitoring of audit logs to ensure privacy compliance.
Datta serves as lead PI of a large NSF project on Accountable Decision Systems, on the Steering Committees of the Conference on Fairness, Accountability, and Transparency in socio-technical systems and the IEEE Computer Security Foundations Symposium, and as an Editor-in-Chief of Foundations and Trends in Privacy and Security. He obtained Ph.D. and M.S. degrees from Stanford University and a B.Tech. from IIT Kharagpur, all in Computer Science.