106. Yang Gao - Sample-efficient AI

Published: Dec. 8, 2021, 3:26 p.m.

Historically, AI systems have been slow learners. For example, a computer vision model often needs to see tens of thousands of hand-written digits before it can tell a 1 apart from a 3. Even game-playing AIs like DeepMind's AlphaGo, or its more recent descendant MuZero, need far more experience than humans do to master a given game.

So when someone develops an algorithm that can reach human-level performance at anything as fast as a human can, it's a big deal. And that's exactly why I asked Yang Gao to join me on this episode of the podcast. Yang is an AI researcher with affiliations at Berkeley and Tsinghua University who recently co-authored a paper introducing EfficientZero: a reinforcement learning system that learned to play Atari games at a human level after just two hours of in-game experience. It's a tremendous breakthrough in sample efficiency, and a major milestone in the development of more general and flexible AI systems.

--- 

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters:

- 0:00 Intro
- 1:50 Yang's background
- 6:00 MuZero's activity
- 13:25 MuZero to EfficientZero
- 19:00 Sample efficiency comparison
- 23:40 Leveraging algorithmic tweaks
- 27:10 Importance of evolution to human brains and AI systems
- 35:10 Human-level sample efficiency
- 38:28 Existential risk from AI in China
- 47:30 Evolution and language
- 49:40 Wrap-up