Is A.I. the Problem? Or Are We?

Published: June 4, 2021, 9 a.m.

If you talk to many of the people working on the cutting edge of artificial intelligence research, you'll hear that we are on the cusp of a technology that will be far more transformative than simply computers and the internet, one that could bring about a new industrial revolution and usher in a utopia — or perhaps pose the greatest threat in our species's history.

Others, of course, will tell you those folks are nuts.

One of my projects this year is to get a better handle on this debate. A.I., after all, isn't some force only future human beings will face. It's here now, deciding what advertisements are served to us online, how bail is set after we commit crimes and whether our jobs will exist in a couple of years. It is both shaped by and reshaping politics, economics and society. It's worth understanding.

Brian Christian's recent book "The Alignment Problem" is the best book on the key technical and moral questions of A.I. that I've read. At its center is the term from which the book gets its name. "Alignment problem" originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals. And that's a central worry with A.I., too: that we will create something to help us that will instead harm us, in part because we didn't understand how it really worked or what we had actually asked it to do.

So this conversation is about the various alignment problems associated with A.I. We discuss what machine learning is and how it works, how governments and corporations are using it right now, what it has taught us about human learning, the ethics of how humans should treat sentient robots, the all-important question of how A.I. developers plan to make profits, what kinds of regulatory structures are possible when we're dealing with algorithms we don't really understand, the way A.I. reflects and then supercharges the inequities that exist in our society, the saddest Super Mario Bros. game I've ever heard of, why the problem of automation isn't so much job loss as dignity loss and much more.

Mentioned:

"Human-level control through deep reinforcement learning"

"Some Moral and Technical Consequences of Automation" by Norbert Wiener

Recommendations:

What to Expect When You're Expecting Robots by Julie Shah and Laura Major

Finite and Infinite Games by James P. Carse

How to Do Nothing by Jenny Odell

If you enjoyed this episode, check out my conversation with Alison Gopnik on what we can all learn from studying the minds of children.

You can find transcripts (posted midday) and more episodes of "The Ezra Klein Show" at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein.

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

"The Ezra Klein Show" is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin.