Technical Talks


AI Governance to Ensure Ethics, Transparency, and Compliance in AI

Krishna Gade | CEO & Founder | Fiddler Labs

With the rise of AI, computers can take on more tasks traditionally done by humans, increasing efficiency and productivity. From manufacturing to finance, companies across industries recognize the importance of AI and are adopting it at an increasing pace. But the economic opportunities AI presents don’t come without risk. As frequent news stories indicate, companies employing AI face ethical and compliance challenges. When not addressed, these issues can lead to a loss of trust, negative publicity, and regulatory action. Industries differ widely in the scope and approach they take to address these risks, in large part due to the varying regulations governing each.

Though some sectors, in particular finance, have implemented policies and systems designed to safeguard against potential adverse effects of models, there is not yet a canonical approach to AI governance that protects against key bias and fairness risks and satisfies more general data privacy regulations such as GDPR and CCPA. With Explainable AI at the core of AI governance, companies can leverage this technology not only to understand and explain AI outcomes, but also to build responsible and fair AI. When infused into the end-to-end ML workflow, explainability fills a critical gap in operationalizing AI.

This session will look at how an AI governance system promotes ethical, transparent, and compliant use of AI across industries and application domains.

Attendees will walk away with a better understanding of:

  • The need for governing AI;
  • What exists today and where the gaps are;
  • The future of AI governance, with explainability at its heart;
  • How to implement an AI governance mindset in your organization.

Krishna Gade
CEO & Founder | Fiddler Labs

Krishna Gade is the co-founder and CEO of Fiddler Labs, an enterprise startup building an Explainable AI Engine to address problems of bias, fairness, and transparency in AI. At Facebook, he led the team that built the ‘Why am I seeing this?’ explainability feature.

He’s an entrepreneur with a technical background, experience building scalable platforms, and expertise in converting data into intelligence. Having held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft, he has seen the effects that bias has on AI and machine learning decision-making processes, and with Fiddler, his goal is to enable enterprises across the globe to solve this problem.
