We talk about how everybody on the superalignment team at OpenAI (the team focused on safety, risk, adversarial testing, societal impacts, and existential concerns) is resigning, including high-profile people like Ilya Sutskever. And nobody can talk about it because of draconian rules (even by Silicon Valley standards) about the non-disclosure and non-disparagement agreements people must sign (or risk losing their vested equity) upon exiting the company. For us, the turmoil at OpenAI is indicative of the conflict between true believers (superalignment) and cynical operators (Sam Altman).

Outro: Aunty Donna – Real Estate Agents https://www.youtube.com/watch?v=VGm267O04a8

••• “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence
••• ChatGPT can talk, but OpenAI employees sure can’t https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release

Subscribe to hear more analysis and commentary in our premium episodes every week! https://www.patreon.com/thismachinekills

Hosted by Jathan Sadowski (www.twitter.com/jathansadowski) and Edward Ongweso Jr. (www.twitter.com/bigblackjacobin). Production / Music by Jereme Brown (www.twitter.com/braunestahl)