66. Owain Evans - Predicting the future of AI

Published: Jan. 13, 2021, 2:53 p.m.

Most researchers agree we'll eventually reach a point where our AI systems begin to exceed human performance at virtually every economically valuable task, including the ability to generalize from what they've learned to take on new tasks that they haven't seen before. These artificial general intelligences (AGIs) would in all likelihood have transformative effects on our economies, our societies and even our species.

No one knows what these effects will be, or when AGI systems will be developed that can bring them about. But that doesn't mean these things aren't worth predicting or estimating. The more we know about the amount of time we have to develop robust solutions to important AI ethics, safety and policy problems, the more clearly we can think about which problems should be receiving our time and attention today.

That's the thesis that motivates a lot of work on AI forecasting: the attempt to predict key milestones in AI development on the path to AGI and superhuman artificial intelligence. It's still early days for this space, but it has received attention from an increasing number of AI safety and AI capabilities researchers. One of those researchers is Owain Evans, whose work at Oxford University's Future of Humanity Institute is focused on techniques for learning about human beliefs, preferences and values from observing human behavior or interacting with humans. Owain joined me for this episode of the podcast to talk about AI forecasting, the problem of inferring human values, and the ecosystem of research organizations that support this type of research.
