Trustworthy AI is one of today's most popular buzzwords. But although everyone seems to agree that we want AI to be trustworthy, definitions of trustworthiness are often fuzzy or inadequate. Maybe that shouldn't be surprising: it's hard to come up with a single set of standards that adds up to "trustworthiness", and that applies just as well to a Netflix movie recommendation as to a self-driving car.
So maybe trustworthy AI needs to be thought of in a more nuanced way, one that reflects the intricacies of individual AI use cases. If that's true, then new questions come up: who gets to define trustworthiness, and who bears responsibility when a lack of trustworthiness leads to harms like AI accidents or undesired biases?
Through that lens, trustworthiness becomes a problem not just for algorithms, but for organizations. And that's exactly the case that Beena Ammanath makes in her upcoming book, Trustworthy AI, which explores AI trustworthiness from a practical perspective, looking at what concrete steps companies can take to make their in-house AI work safer, better and more reliable. Beena joined me to talk about defining trustworthiness, explainability and robustness in AI, as well as the future of AI regulation and self-regulation, on this episode of the TDS podcast.
Intro music:

- Artist: Ron Gelinas
- Track Title: Daybreak Chill Blend (original mix)
- Link to Track: https://youtu.be/d8Y2sKIgFWc

Chapters: