113. Yaron Singer - Catching edge cases in AI

Published: Feb. 9, 2022, 3:46 p.m.

It's no secret that AI systems are being used in more and more high-stakes applications. As AI eats the world, it's becoming critical to ensure that AI systems behave robustly: they shouldn't get thrown off by unusual inputs and start spitting out harmful predictions or recommending dangerous courses of action. If we're going to have AI drive us to work, or decide who gets bank loans and who doesn't, we'd better be confident that our AI systems aren't going to fail because of a freak blizzard, or because some intern missed a minus sign.

We're now past the point where companies can afford to treat AI development like a glorified Kaggle competition, in which the only thing that matters is how well models perform on a test set. AI-powered screw-ups aren't always life-or-death issues, but they can harm real users and cause brand damage to companies that don't anticipate them.

Fortunately, AI risk is starting to get more attention these days, and new companies, like Robust Intelligence, are stepping up to develop strategies that anticipate AI failures and mitigate their effects. Joining me for this episode of the podcast was Yaron Singer, a former Googler, professor of computer science and applied math at Harvard, and now CEO and co-founder of Robust Intelligence. Yaron has the rare combination of theoretical and engineering expertise needed to understand AI risk, and the product intuition to turn that understanding into solutions that help developers and companies deal with it.

--- 

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

--- 

Chapters:

• 0:00 Intro
• 2:30 Journey into AI risk
• 5:20 Guarantees of AI systems
• 11:00 Testing as a solution
• 15:20 Generality and software versus custom work
• 18:55 Consistency across model types
• 24:40 Different model failures
• 30:25 Levels of responsibility
• 35:00 Wrap-up