117. Beena Ammanath - Defining trustworthy AI

Published: March 30, 2022, 3:39 p.m.

Trustworthy AI is one of today's most popular buzzwords. But although everyone seems to agree that we want AI to be trustworthy, definitions of trustworthiness are often fuzzy or inadequate. Maybe that shouldn't be surprising: it's hard to come up with a single set of standards that adds up to "trustworthiness", and that applies just as well to a Netflix movie recommendation as to a self-driving car.


So maybe trustworthy AI needs to be thought of in a more nuanced way: one that reflects the intricacies of individual AI use cases. If that's true, then new questions come up: who gets to define trustworthiness, and who bears responsibility when a lack of trustworthiness leads to harms like AI accidents, or undesired biases?


Through that lens, trustworthiness becomes a problem not just for algorithms, but for organizations. And that's exactly the case that Beena Ammanath makes in her upcoming book, Trustworthy AI, which explores AI trustworthiness from a practical perspective, looking at the concrete steps companies can take to make their in-house AI work safer, better, and more reliable. Beena joined me on this episode of the TDS podcast to talk about defining trustworthiness, explainability and robustness in AI, as well as the future of AI regulation and self-regulation.


Intro music:

- Artist: Ron Gelinas
- Track Title: Daybreak Chill Blend (original mix)
- Link to Track: https://youtu.be/d8Y2sKIgFWc

Chapters:

• 1:55 Background and trustworthy AI
• 7:30 Incentives to work on capabilities
• 13:40 Regulation at the level of application domain
• 16:45 Bridging the gap
• 23:30 Level of cognition offloaded to the AI
• 25:45 What is trustworthy AI?
• 34:00 Examples of robustness failures
• 36:45 Team diversity
• 40:15 Smaller companies
• 43:00 Application of best practices
• 46:30 Wrap-up