60. Rob Miles - Why should I care about AI safety?

Published: Dec. 2, 2020, 1:46 p.m.

Progress in AI capabilities has consistently surprised just about everyone, including the very developers and engineers who build today's most advanced AI systems. AI can now match or exceed human performance in everything from speech recognition to driving, and one question that's increasingly on people's minds is: when will AI systems be better than humans at AI research itself?

The short answer, of course, is that no one knows for sure, but some have made educated guesses, including Nick Bostrom and Stuart Russell. One common hypothesis is that once an AI system is better than humans at improving its own performance, we can expect at least some such systems to do so. In the process, these self-improving systems would become even more powerful than they were previously, and therefore even more capable of further self-improvement. With each additional self-improvement step, the gains in a system's performance would compound. Where this all ultimately leads, no one really knows, but it's safe to say that if there's a good chance we're going to be creating systems capable of this kind of stunt, we ought to think hard about how we should be building them.
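To make the compounding intuition concrete, here is a minimal toy sketch in Python (my own illustration; the growth rule and the 10% per-step efficiency figure are assumptions for the sake of the sketch, not anything claimed in the episode):

# Toy model of recursive self-improvement. Illustrative only: the
# proportional growth rule and the parameters are assumptions.

def self_improvement_trajectory(capability: float, steps: int, efficiency: float = 0.1) -> list[float]:
    """At each step, the system improves itself in proportion to its
    current capability, so the gains compound geometrically."""
    trajectory = [capability]
    for _ in range(steps):
        capability += efficiency * capability  # improvement scales with current ability
        trajectory.append(capability)
    return trajectory

# Starting at a nominal "human level" of 1.0, ten self-improvement
# steps at 10% each yield (1 + 0.1) ** 10, roughly a 2.6x gain.
print(self_improvement_trajectory(1.0, steps=10))

Even this crude geometric model shows the dynamic at the heart of the argument: modest per-step gains, fed back into the improver itself, snowball quickly.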

This concern, among many others, has led to the development of the rich field of AI safety, and my guest for this episode, Robert Miles, has been involved in popularizing AI safety research for more than half a decade through two very successful YouTube channels, Robert Miles and Computerphile. He joined me on the podcast to discuss how he's thinking about AI safety, what AI means for the course of human evolution, and what our biggest challenges will be in taming advanced AI.