Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.

Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Nick's website: https://nickbostrom.com/
Future of Humanity Institute:
- https://twitter.com/fhioxford
- https://www.fhi.ox.ac.uk/
Books:
- Superintelligence: https://amzn.to/2JckX83
Wikipedia:
- https://en.wikipedia.org/wiki/Simulation_hypothesis
- https://en.wikipedia.org/wiki/Principle_of_indifference
- https://en.wikipedia.org/wiki/Doomsday_argument
- https://en.wikipedia.org/wiki/Global_catastrophic_risk

This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
02:48 - Simulation hypothesis and simulation argument
12:17 - Technologically mature civilizations
15:30 - Case 1: if something kills all possible civilizations
19:08 - Case 2: if we lose interest in creating simulations
22:03 - Consciousness
26:27 - Immersive worlds
28:50 - Experience machine
41:10 - Intelligence and consciousness
48:58 - Weighing probabilities of the simulation argument
1:01:43 - Elaborating on Joe Rogan conversation
1:05:53 - Doomsday argument and anthropic reasoning
1:23:02 - Elon Musk
1:25:26 - What's outside the simulation?
1:29:52 - Superintelligence
1:47:27 - AGI utopia
1:52:41 - Meaning of life