Exploring the potential for large language models to actively mislead or deceive.
Episode notes
On today’s show, we are joined by Thilo Hagendorff, Research Group Leader for the Ethics of Generative AI at the University of Stuttgart. He joins us to discuss his paper, "Deception Abilities Emerged in Large Language Models."
Thilo explains how machine psychology can be used to study the behavior of large language models, shares examples of cognitive tasks that LLMs have become better at solving, and offers his thoughts on whether there is a ceiling to the tasks machine learning can solve.