64. David Krueger - Managing the incentives of AI

Published: Dec. 30, 2020, 3:02 p.m.

What does a neural network system want to do?

That might seem like a straightforward question. You might imagine the answer is simply “whatever the loss function says it should do.” But when you dig into it, you quickly find that the answer is much more complicated than that framing implies.

In order to accomplish their primary goal of optimizing a loss function, algorithms often develop secondary objectives (known as instrumental goals) that are tactically useful for that main goal. For example, a computer vision algorithm designed to tell faces apart might find it beneficial to develop the ability to detect noses with high fidelity. Or in a more extreme case, a very advanced AI might find it useful to monopolize the Earth’s resources in order to accomplish its primary goal, and it’s been suggested that this might actually be the default behavior of powerful AI systems in the future.

So, what does an AI want to do? Optimize its loss function, perhaps. But a sufficiently complex system is likely to also manifest instrumental goals. And if we don’t develop a deep understanding of AI incentives, along with reliable strategies for managing those incentives, we may be in for an unpleasant surprise when unexpected and highly strategic behavior emerges from systems with simple and desirable primary goals. That’s why it’s a good thing that my guest today, David Krueger, has been working on exactly that problem. David studies deep learning and AI alignment at MILA, and he joined me to discuss his thoughts on AI safety and his work on managing the incentives of AI systems.
