130. Edouard Harris - New Research: Advanced AI may tend to seek power *by default*

Published: Oct. 12, 2022, 1:31 p.m.

Progress in AI has been accelerating dramatically in recent years, and even in recent months. It seems like every other day, a world-leading lab achieves a new, previously-believed-to-be-impossible feat of AI. And increasingly, these breakthroughs have been driven by the same simple idea: AI scaling.


For those who haven't been following the AI scaling saga, scaling means training AI systems with larger models, using increasingly absurd quantities of data and processing power. So far, empirical studies by the world's top AI labs suggest that scaling is an open-ended process that can lead to ever more capable and intelligent systems, with no clear limit.


And that's led many people to speculate that scaling might usher in a new era of broadly human-level or even superhuman AI: the holy grail AI researchers have been after for decades.


And while that might sound cool, an AI that can solve general reasoning problems as well as or better than a human might actually be an intrinsically dangerous thing to build.


At least, that's the conclusion many AI safety researchers have come to following the publication of a new line of research that explores how modern AI systems tend to solve problems, and whether we should expect more advanced versions of them to exhibit dangerous behaviours like seeking power.


This line of research in AI safety is called "power-seeking", and although it's currently not well understood outside the frontier of AI safety and alignment research, it's starting to draw a lot of attention. The first major theoretical study of power-seeking, for example, was led by Alex Turner, who's appeared on the podcast before, and was published at NeurIPS (the world's top AI conference).


And today, we'll be hearing from Edouard Harris, an AI alignment researcher and one of my co-founders at the AI safety company Gladstone AI. Ed's just completed a significant piece of AI safety research that extends Alex Turner's original power-seeking work, and that shows what seems to be the first experimental evidence suggesting that we should expect highly advanced AI systems to seek power by default.


What does power-seeking really mean, though? And what does all this imply for the safety of future, general-purpose reasoning systems? That's what this episode is all about.


***


Intro music:


- Artist: Ron Gelinas


- Track Title: Daybreak Chill Blend (original mix)


- Link to Track: https://youtu.be/d8Y2sKIgFWc


***


Chapters:


- 0:00 Intro


- 4:00 Alex Turner's research


- 7:45 What technology wants


- 11:30 Universal goals


- 17:30 Connecting observations


- 24:00 Micro power-seeking behaviour


- 28:15 Ed's research


- 38:00 The human as the environment


- 42:30 What leads to power-seeking


- 48:00 Competition as a default outcome


- 52:45 General concern


- 57:30 Wrap-up