

Long Horizon Agents, State of MCPs, Meta's AI Glasses & Geoffrey Hinton is a LOVE RAT - EP99.17
Sep 19, 2025
The hosts dig into whether Anthropic intentionally degrades its models and why long-horizon execution remains the hard problem for agents. They also cover internal custom enterprise agents and how Meta's new AI wearables could change daily tasks. The show takes a humorous turn with a retelling of Geoffrey Hinton's breakup story involving ChatGPT, earning him the playful title of 'love rat'. A lively discussion packed with tech insights and laughs!
AI Snips
Execution, Not Reasoning, Is The Bottleneck
- Long-horizon AI failures are often execution errors, not reasoning gaps.
- Small gains in single-step accuracy compound into large improvements in achievable task length.
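The compounding claim can be made concrete with a back-of-the-envelope model: if each step succeeds independently with probability p, the number of steps an agent can chain before overall success drops below some target is log(target)/log(p). The function name and the 50% target below are illustrative choices, not from the episode.

```python
import math

def achievable_horizon(step_accuracy, target_success=0.5):
    """Steps completable before overall task success falls below the
    target, assuming independent per-step errors:
    step_accuracy ** n >= target_success  =>  n <= log(target)/log(p)."""
    return math.log(target_success) / math.log(step_accuracy)

# A ~1 percentage-point gain in per-step accuracy yields ~10x longer tasks.
print(round(achievable_horizon(0.99)))   # ~69 steps
print(round(achievable_horizon(0.999)))  # ~693 steps
```

The independence assumption is optimistic (real agent errors correlate), but it shows why tiny single-step improvements translate into dramatically longer feasible tasks.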
Supervision Prevents Error Cascades
- Errors compound when an autonomous agent continues without corrective supervision.
- Human nudging or supervisory checks let models recover and produce correct multi-step results.
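The supervision idea above can be sketched as a gated execution loop: a check runs after every step, and a failed check triggers a corrective retry instead of letting the error propagate into later steps. Everything here (function names, the flaky stand-in executor) is a hypothetical illustration, not the hosts' implementation.

```python
def run_with_supervision(steps, execute, check, max_retries=2):
    """Execute steps in order; a supervisory check gates each result
    and triggers a corrective retry before errors cascade downstream."""
    results = []
    for step in steps:
        result = execute(step)
        retries = 0
        while not check(step, result) and retries < max_retries:
            result = execute(step)  # the "nudge": re-run under supervision
            retries += 1
        if not check(step, result):
            raise RuntimeError(f"unrecoverable step: {step}")
        results.append(result)
    return results

# Demo with a flaky stand-in executor that fails on its first attempt.
_attempts = {}
def flaky_execute(step):
    _attempts[step] = _attempts.get(step, 0) + 1
    return "ok" if _attempts[step] > 1 else "error"

def check(step, result):
    return result == "ok"

print(run_with_supervision(["plan", "write", "test"], flaky_execute, check))
# → ['ok', 'ok', 'ok']
```

Without the check-and-retry gate, the first bad result would feed into every subsequent step, which is exactly the cascade the snip describes.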
Add Supervisory Agents And Voting Checks
- Use supervisory agents to monitor and intervene on runner agents' execution paths.
- Implement voting or role-based checks (devil's advocate, skeptic) to catch wrong directions early.
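A minimal sketch of the voting idea, with simple rule-based functions standing in for LLM critic roles (the keyword rules and role names are illustrative assumptions, not from the episode):

```python
from collections import Counter

def majority_vote(proposed_step, critics):
    """Each critic role votes 'approve' or 'reject'; the majority wins,
    catching a wrong direction before the runner agent commits to it."""
    votes = [critic(proposed_step) for critic in critics]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical rule-based stand-ins for LLM critic roles.
def skeptic(step):
    return "reject" if "untested" in step else "approve"

def devils_advocate(step):
    return "reject" if "assume" in step else "approve"

def optimist(step):
    return "approve"

critics = [skeptic, devils_advocate, optimist]
print(majority_vote("assume the untested fix works", critics))  # → reject
print(majority_vote("rerun the verified test suite", critics))  # → approve
```

In a real system each critic would be a separate model call prompted into its role; the point of the structure is that a single over-optimistic runner can be outvoted before an error cascades.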