GenAI Security: Defending Against Deepfakes and Automated Social Engineering

The InfoQ Podcast

Why LLMs feel like AGI but can hallucinate

Shuman explains how LLMs' predictive nature, confident-sounding outputs, and the Gell-Mann amnesia effect make hallucinations dangerous.
