Earlier this week, Blake Lemoine, an engineer who works for Google’s Responsible AI department, went public with his belief that Google’s LaMDA chatbot is sentient.
LaMDA, or Language Model for Dialogue Applications, is an artificial intelligence program that mimics speech by predicting which words are most likely to follow the prompts it is given.
While some experts believe conscious AI may be possible in the future, many in the field think Lemoine is mistaken — and that the debate he has stirred up about sentience distracts from immediate, pressing ethical questions: Google's control over this technology, and the ease with which people can be fooled by it.
Today on Front Burner, Gary Marcus, cognitive scientist and author of Rebooting AI, discusses LaMDA, the trouble with testing for consciousness in AI, and what we should really be thinking about as AI takes an ever-expanding role in our day-to-day lives.