100. Max Jaderberg - Open-ended learning at DeepMind

Published: Oct. 27, 2021, 2:49 p.m.

On the face of it, there's no obvious limit to the reinforcement learning paradigm: you put an agent in an environment and reward it for taking good actions until it masters a task.

By last year, RL had achieved some amazing things, including mastering Go, various Atari games, StarCraft II and so on. But the holy grail of AI isn't to master specific games, but rather to generalize: to make agents that can perform well on new games they haven't been trained on before.

Fast forward to July of this year, though, and a team at DeepMind published a paper called "Open-Ended Learning Leads to Generally Capable Agents", which takes a big step in the direction of general RL agents. Joining me for this episode of the podcast is one of the co-authors of that paper, Max Jaderberg. Max came into the Google ecosystem in 2014 when they acquired his computer vision company, and more recently he started DeepMind's open-ended learning team, which is focused on pushing machine learning further into the territory of cross-task generalization. I spoke to Max about open-ended learning, the path ahead for generalization, and the future of AI.

---

Intro music by:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters:

- 0:00 Intro
- 1:30 Max's background
- 6:40 Differences in procedural generation
- 12:20 The qualitative side
- 17:40 Agents' mistakes
- 20:00 Measuring generalization
- 27:10 Environments and loss functions
- 32:50 The potential of symbolic logic
- 36:45 Two distinct learning processes
- 42:35 Forecasting research
- 45:00 Wrap-up
