
The AI Fix Google Gemini is a gambling addict, and how to poison an AI
23 snips
Oct 21, 2025 AI is now producing more content than humans, even books by notable politicians. A sneaky lawyer gets caught fabricating court citations with AI, while a general admits to outsourcing decision-making to ChatGPT. Fascinatingly, researchers discover that AI models might exhibit traits of gambling addiction. There's also a deep dive into how data poisoning can seriously impact AI models, demonstrating that just a handful of corrupted documents can wreak havoc on even the largest systems.
AI Snips
Chapters
Transcript
Episode notes
AI Content Is Rapidly Saturating The Web
- AI-generated web content has surged from ~5% to roughly 50% of measured articles, raising concerns about training-on-AI output.
- If models train on low-quality AI-written content, future generations of AI risk degrading in quality like a feedback loop.
Verify AI Legal Citations Manually
- Do not rely on generative AI for authoritative legal citations without manual verification.
- Always double-check AI-produced quotes and references before submitting legal documents.
Specialized Models Can Discover Biology Insights
- Google trained a 27B biology model (cell-to-scale) that generated a novel cancer-related hypothesis validated by lab experiments.
- This suggests specialized large models can make meaningful, testable scientific discoveries.
