88. Oren Etzioni - The case against (worrying about) existential risk from AI

Published: June 16, 2021, 1:26 p.m.


Few would disagree that AI is set to become one of the most important economic and social forces in human history.


But along with its transformative potential has come concern about a strange new risk that AI might pose to human beings. As AI systems become vastly more capable of achieving their goals, some worry that even a slight misalignment between those goals and our own could be disastrous. These concerns are shared by many of the most knowledgeable and experienced AI specialists at leading labs, including OpenAI, DeepMind, CHAI Berkeley, Oxford, and elsewhere.


But they're not universal: I recently had Melanie Mitchell, the computer science professor and author who famously debated Stuart Russell on the topic of AI risk, on the podcast to discuss her objections to the AI catastrophe argument. On this episode, we continue our exploration of the case for AI catastrophic risk skepticism with an interview with Oren Etzioni, CEO of the Allen Institute for AI, a world-leading AI research lab that has developed many well-known projects, including the popular AllenNLP library and Semantic Scholar.


Oren has a unique perspective on AI risk, and the conversation was lots of fun!
