
Super Data Science: ML & AI Podcast with Jon Krohn
948: In Case You Missed It in November 2025
Dec 12, 2025

Tyler Cox, a distinguished engineer, dives into innovative state-space models and Mamba optimizations for long-context LLMs, revealing how they outperform traditional architectures. Vijoy Pandey discusses the necessity of zero-trust access and robust permissions in AI agents. Marc Dupuis warns about the risks of metric sprawl in AI-driven analytics and advocates for a collaborative approach to metric design. Meanwhile, Maya Ackerman champions co-creative AI tools that empower users and nurture creative diversity.
AI Snips
State-Space Models Enable Long Contexts
- State-space models map inputs to a latent state and then to outputs using two sets of equations (a state-update equation and an output equation) for sequence tasks.
- Mamba-based hybrids yield linear scaling in context length and much lower memory for long-context language modeling; a minimal recurrence sketch follows below.
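
As a rough illustration of those two equation sets, and of why the recurrence scales linearly in context length, here is a minimal NumPy sketch of a discrete linear SSM layer. The dimensions, parameter values, and symbol names (A, B, C, D, h) follow the generic state-space convention and are assumptions for illustration, not the episode's or Mamba's actual implementation.

```python
# Minimal sketch of a discrete linear state-space model (SSM) layer.
# All dimensions and random parameters are illustrative.
import numpy as np

def ssm_scan(x, A, B, C, D):
    """Run the two SSM equation sets over a sequence.

    x: (seq_len, d_in) input sequence
    A: (d_state, d_state), B: (d_state, d_in),
    C: (d_out, d_state),  D: (d_out, d_in)
    """
    seq_len, _ = x.shape
    h = np.zeros(A.shape[0])          # latent state: fixed size, independent of context length
    ys = []
    for t in range(seq_len):          # one pass over the sequence: O(seq_len) time
        h = A @ h + B @ x[t]          # equation set 1: input -> latent state (state update)
        ys.append(C @ h + D @ x[t])   # equation set 2: latent state -> output (readout)
    return np.stack(ys)

# Memory for the recurrent state stays constant as seq_len grows,
# which is the linear-context-scaling property the snip describes.
rng = np.random.default_rng(0)
d_in, d_state, d_out, seq_len = 4, 16, 4, 1024
y = ssm_scan(rng.normal(size=(seq_len, d_in)),
             A=0.9 * np.eye(d_state),
             B=0.1 * rng.normal(size=(d_state, d_in)),
             C=0.1 * rng.normal(size=(d_out, d_state)),
             D=np.zeros((d_out, d_in)))
print(y.shape)  # (1024, 4)
```
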
Use Just-In-Time Task-Based Access
- Implement just-in-time, task-based permissions with tokenized access and ephemeral runtimes.
- Combine identity providers, semantic parsing of agent discourse, and sandboxed execution so that access can be revoked as soon as the task completes (a minimal token-broker sketch follows below).
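
A minimal sketch of the just-in-time, task-scoped token pattern, assuming a hypothetical in-memory broker. The class and method names (JitAccessBroker, TaskToken, grant, check, revoke) are illustrative and not part of any real identity-provider API; a real deployment would delegate issuance and revocation to an identity provider and a secrets manager.

```python
# Hypothetical just-in-time, task-scoped token issuer for an AI agent.
import secrets
import time
from dataclasses import dataclass

@dataclass
class TaskToken:
    token: str
    agent_id: str
    scopes: frozenset      # permissions granted for this task only
    expires_at: float      # hard expiry even if revocation is missed

class JitAccessBroker:
    def __init__(self):
        self._active: dict[str, TaskToken] = {}

    def grant(self, agent_id: str, scopes: set[str], ttl_s: float = 300) -> TaskToken:
        """Issue a short-lived token scoped to one task."""
        tok = TaskToken(secrets.token_urlsafe(32), agent_id,
                        frozenset(scopes), time.time() + ttl_s)
        self._active[tok.token] = tok
        return tok

    def check(self, token: str, scope: str) -> bool:
        """Allow an action only if the token is live and carries the scope."""
        tok = self._active.get(token)
        return bool(tok) and time.time() < tok.expires_at and scope in tok.scopes

    def revoke(self, token: str) -> None:
        """Revoke immediately once the task completes."""
        self._active.pop(token, None)

# Usage: grant narrowly, act, then revoke as soon as the task is done.
broker = JitAccessBroker()
tok = broker.grant("billing-agent", {"invoices:read"}, ttl_s=60)
assert broker.check(tok.token, "invoices:read")
assert not broker.check(tok.token, "invoices:write")
broker.revoke(tok.token)
assert not broker.check(tok.token, "invoices:read")
```
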
Directory + Evaluation Build Agent Trust
- Trust for agents starts with discoverable identity and a reputation directory of vetted agents.
- Continuous evaluation feeds reputation scores that help enforce trusted agent selection, as sketched in the directory example below.
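
A minimal sketch of an agent directory whose reputation scores are updated by continuous evaluation and then gate agent selection. Every name and the scoring rule here are illustrative assumptions, not a description of an actual product or protocol.

```python
# Hypothetical agent directory keyed by identity, with reputation scores
# updated from continuous evaluations; names and scoring are illustrative.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    capabilities: set[str]
    reputation: float = 0.5   # start neutral until evaluations accumulate

class AgentDirectory:
    def __init__(self, trust_threshold: float = 0.7):
        self._agents: dict[str, AgentRecord] = {}
        self.trust_threshold = trust_threshold

    def register(self, agent_id: str, capabilities: set[str]) -> None:
        self._agents[agent_id] = AgentRecord(agent_id, capabilities)

    def record_evaluation(self, agent_id: str, score: float, weight: float = 0.1) -> None:
        """Fold a new evaluation score (0..1) into the running reputation."""
        rec = self._agents[agent_id]
        rec.reputation = (1 - weight) * rec.reputation + weight * score

    def select(self, capability: str) -> AgentRecord | None:
        """Pick the highest-reputation agent that is capable and above the trust threshold."""
        candidates = [a for a in self._agents.values()
                      if capability in a.capabilities
                      and a.reputation >= self.trust_threshold]
        return max(candidates, key=lambda a: a.reputation, default=None)

# Usage: evaluations raise or lower reputation, which gates selection.
directory = AgentDirectory()
directory.register("summarizer-a", {"summarize"})
for s in (0.9, 0.95, 0.9, 1.0, 0.9, 0.95, 0.9, 1.0):
    directory.record_evaluation("summarizer-a", s)
chosen = directory.select("summarize")
print(chosen.agent_id if chosen else "no trusted agent available")
```
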
