#431 Roman Yampolskiy: Dangers of Superintelligent AI

Published: June 2, 2024, 9:18 p.m.

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
- Yahoo Finance: https://yahoofinance.com
- MasterClass: https://masterclass.com/lexpod to get 15% off
- NetSuite: http://netsuite.com/lex to get a free product tour
- LMNT: https://drinkLMNT.com/lex to get a free sample pack
- Eight Sleep: https://eightsleep.com/lex to get $350 off

Transcript: https://lexfridman.com/roman-yampolskiy-transcript

EPISODE LINKS:
Roman's X: https://twitter.com/romanyam
Roman's Website: http://cecs.louisville.edu/ry
Roman's AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above; it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players, you can click a timestamp to jump to that point in the episode.
(00:00) - Introduction
(09:12) - Existential risk of AGI
(15:25) - Ikigai risk
(23:37) - Suffering risk
(27:12) - Timeline to AGI
(31:44) - AGI Turing test
(37:06) - Yann LeCun and open source AI
(49:58) - AI control
(52:26) - Social engineering
(54:59) - Fearmongering
(1:04:49) - AI deception
(1:11:23) - Verification
(1:18:22) - Self-improving AI
(1:30:34) - Pausing AI development
(1:36:51) - AI safety
(1:46:35) - Current AI
(1:51:58) - Simulation
(1:59:16) - Aliens
(2:00:50) - Human mind
(2:07:10) - Neuralink
(2:16:15) - Hope for the future
(2:20:11) - Meaning of life