101. Ayanna Howard - AI and the trust problem

Published: Nov. 3, 2021, 2:51 p.m.

Over the last two years, the capabilities of AI systems have exploded. AlphaFold2, MuZero, CLIP, DALL·E, GPT-3 and many other models have extended the reach of AI to new problem classes. There's a lot to be excited about.

But as we've seen in other episodes of the podcast, there's a lot more to getting value from an AI system than jacking up its capabilities. Increasingly, one of those missing factors is trust. You can build all the powerful AIs you want, but if no one trusts their output, or if people trust it when they shouldn't, you can end up doing more harm than good.

That's why we invited Ayanna Howard on the podcast. Ayanna is a roboticist, entrepreneur and Dean of the College of Engineering at Ohio State University, where she focuses her research on human-machine interactions and the factors that go into building human trust in AI systems. She joined me to talk about her research, its applications in medicine and education, and the future of human-machine trust.

---

Intro music:

- Artist: Ron Gelinas
- Track Title: Daybreak Chill Blend (original mix)
- Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters:

- 0:00 Intro
- 1:30 Ayanna's background
- 6:10 The interpretability of neural networks
- 12:40 Domain of machine-human interaction
- 17:00 The issue of preference
- 20:50 Gelman/newspaper amnesia
- 26:35 Assessing a person's persuadability
- 31:40 Doctors and new technology
- 36:00 Responsibility and accountability
- 43:15 The social pressure aspect
- 47:15 Is Ayanna optimistic?
- 53:00 Wrap-up