Ethical AI

Published: Dec. 31, 2022, 9 a.m.

In this episode of High Theory, Alex Hanna talks with Nathan Kim about Ethical AI. Their conversation is part of our High Theory in STEM series, which tackles topics in science, technology, engineering, and medicine from a highly theoretical perspective. In this episode, Alex helps us think about the complicated recipes we call “artificial intelligence” and what we mean when we ask our technologies to be ethical.

In the episode Alex references an article by Emily Tucker, “Artifice and Intelligence” (Tech Policy Press, 17 March 2022), which suggests we should stop using terms like “artificial intelligence,” and an opinion piece on a similar theme in the Washington Post by Timnit Gebru and Margaret Mitchell, “We warned Google that people might believe AI was sentient. Now it’s happening” (17 June 2022). She also mentions a claim by Blake Lemoine that Google’s LaMDA (Language Model for Dialogue Applications) is sentient. We’ll leave that one to your googling, if not your judgment.

Dr. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR). A sociologist by training, her work centers on the data used in new computational technologies, and the ways in which these data exacerbate racial, gender, and class inequality. You can read her recent article, “AI Ethics Are in Danger. Funding Independent Research Could Help,” co-authored with Dylan Baker in the Stanford Social Innovation Review, and learn more about her work on her website.

This week’s image was produced by DALL-E 2 responding to the prompt: “generate the image of an artificial intelligence entity, deciding to protect shareholder interests over public good, in the style of Van Gogh.”

Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/law