Did Google make conscious AI?

Published: June 16, 2022, 8:10 a.m.

Earlier this week, Blake Lemoine, an engineer in Google's Responsible AI department, went public with his belief that Google's LaMDA chatbot is sentient.

LaMDA, or Language Model for Dialogue Applications, is an artificial intelligence program that mimics speech by predicting which words are most likely to follow the prompts it is given.

While some experts believe conscious AI may be possible in the future, many in the field think Lemoine is mistaken, and that the conversation he has stirred up about sentience distracts from the immediate and pressing ethical questions surrounding Google's control over this technology and the ease with which people can be fooled by it.

Today on Front Burner, cognitive scientist Gary Marcus, author of Rebooting AI, discusses LaMDA, the trouble with testing for consciousness in AI, and what we should really be thinking about when it comes to AI's ever-expanding role in our day-to-day lives.