20 - 'Reform' AI Alignment with Scott Aaronson

Published: April 12, 2023, 9:43 p.m.

How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, his work on watermarking the output of language models, and how he moved from a background in quantum complexity theory to working on AI.

Note: this episode was recorded before this story (vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says) emerged, about a man who died by suicide after conversations with a language-model-based chatbot that included discussion of the possibility of him killing himself.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

Topics we discuss, and timestamps:

- 0:00:36 - 'Reform' AI alignment
  - 0:01:52 - Epistemology of AI risk
  - 0:20:08 - Immediate problems and existential risk
  - 0:24:35 - Aligning deceitful AI
  - 0:30:59 - Stories of AI doom
  - 0:34:27 - Language models
  - 0:43:08 - Democratic governance of AI
  - 0:59:35 - What would change Scott's mind
- 1:14:45 - Watermarking language model outputs
  - 1:41:41 - Watermark key secrecy and backdoor insertion
- 1:58:05 - Scott's transition to AI research
  - 2:03:48 - Theoretical computer science and AI alignment
  - 2:14:03 - AI alignment and formalizing philosophy
  - 2:22:04 - How Scott finds AI research
- 2:24:53 - Following Scott's research

The transcript: axrp.net/episode/2023/04/11/episode-20-reform-ai-alignment-scott-aaronson.html

Links to Scott's things:

- Personal website: scottaaronson.com
- Book, Quantum Computing Since Democritus: amazon.com/Quantum-Computing-since-Democritus-Aaronson/dp/0521199565/
- Blog, Shtetl-Optimized: scottaaronson.blog

Writings we discuss:

- Reform AI Alignment: scottaaronson.blog/?p=6821
- Planting Undetectable Backdoors in Machine Learning Models: arxiv.org/abs/2204.06974