MLOps.community

Cracking the Black Box: Real-Time Neuron Monitoring & Causality Traces

Jan 27, 2026
Mike Oaten, founder and CEO of TIKOS, builds AI assurance and explainability tools for high-stakes systems. He discusses real-time neuron monitoring, capturing internal activations and causality traces, and translating fuzzy regulations into concrete tests. The conversation also covers the regulatory risks of closed models, creating golden profiles for gating, and mapping internal traces to audit-ready explainability.
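The episode mentions "golden profiles for gating" only in passing; as a rough sketch of the general idea (not TIKOS's actual method), a gate can record baseline activation statistics on a trusted reference dataset and then flag requests whose activations drift outside that profile. The class name, shapes, and threshold below are all hypothetical.

```python
import numpy as np

class GoldenProfileGate:
    """Gate inferences whose internal activations drift from a 'golden'
    baseline profile. Illustrative only; the threshold is hypothetical."""

    def __init__(self, z_threshold: float = 4.0):
        self.z_threshold = z_threshold
        self.mean = None
        self.std = None

    def fit(self, baseline_activations: np.ndarray) -> None:
        # baseline_activations: (n_samples, n_neurons), captured on a
        # trusted reference dataset during validation.
        self.mean = baseline_activations.mean(axis=0)
        self.std = baseline_activations.std(axis=0) + 1e-8

    def check(self, activations: np.ndarray) -> bool:
        # Pass only if no neuron is an extreme outlier relative to
        # the golden profile.
        z = np.abs((activations - self.mean) / self.std)
        return bool(z.max() <= self.z_threshold)

# Usage: fit on validation-time activations, then gate each request.
gate = GoldenProfileGate()
gate.fit(np.random.randn(1000, 512))   # stand-in baseline activations
if not gate.check(np.random.randn(512)):
    print("Activation profile outside golden range; route to review.")
```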
INSIGHT

Standards Provide A Practical Roadmap

  • The EU AI Act pairs with detailed harmonized standards that act as a practical, presumption-of-conformity roadmap.
  • Mike Oaten argues that following these standards gives teams a clear checklist for meeting regulator expectations.
INSIGHT

Observability Must Target Model Internals

  • Regulatory observability focuses on risk-specific monitoring, not just uptime and latency metrics.
  • TIKOS captures internal causal chains during inference to detect bias, robustness failures, and other high-risk behaviors (a generic activation-capture sketch follows below).
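The episode doesn't spell out TIKOS's tracing pipeline, but as a minimal sketch of the underlying technique, PyTorch forward hooks can record per-layer activations on every inference pass without modifying the model. The toy network and layer selection are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Toy model standing in for a production network (assumption for illustration).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

captured = {}

def make_hook(name):
    # Forward hooks fire on every inference pass, so activations can be
    # recorded in real time alongside normal serving.
    def hook(module, inputs, output):
        captured[name] = output.detach().clone()
    return hook

handles = [m.register_forward_hook(make_hook(n))
           for n, m in model.named_modules()
           if isinstance(m, (nn.Linear, nn.ReLU))]

with torch.no_grad():
    model(torch.randn(1, 16))

for name, act in captured.items():
    print(name, tuple(act.shape))   # e.g. '0' (1, 32) for the first Linear

for h in handles:
    h.remove()   # detach hooks when monitoring is done
```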
ADVICE

Prefer Open Weights For High-Stakes Use

  • Avoid relying solely on closed commercial models for high-risk deployments when you cannot inspect internals.
  • Choose open-weights models when regulators demand explainability and controllability; the sketch below shows the kind of internal access this enables.
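As a brief, assumed illustration of the point (not from the episode): with an open-weights checkpoint such as gpt2 loaded via Hugging Face transformers, every layer's hidden states can be read out for monitoring, bias probes, or audit trails, whereas a closed API typically returns only the final output.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Any open-weights checkpoint works; gpt2 is used purely as a small example.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

inputs = tok("High-stakes systems need inspectable internals.",
             return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# With open weights, every layer's hidden states are directly available;
# a closed API would not expose these internals.
for i, h in enumerate(out.hidden_states):
    print(f"layer {i}: {tuple(h.shape)}")
```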