
Causal Bandits Podcast The Causal Gap: Truly Responsible AI Needs to Understand the Consequences | Zhijing Jin S2E7
Oct 30, 2025

In this discussion, Zhijing Jin, a research scientist at the Max Planck Institute and incoming Assistant Professor at the University of Toronto, dives into the critical intersection of causality and AI ethics. She explores why LLMs often falter in their decision-making and the importance of causal reasoning in moral frameworks. Highlighting her work on multi-agent simulations, she reveals troubling patterns of self-destructive behavior in AI models. Zhijing also emphasizes the need for interdisciplinary research and greater awareness of causal understanding in AI to foster responsible development.
Episode notes
Causality Improves Decision Responsibility
- Causality can make AI decisions more responsible by basing them on causal features rather than spurious correlations.
- Zhijing Jin argues this helps avoid biased decisions in real-world systems such as HR screening.
Regulate Misinformation, Adapt Education
- Regulate high-risk AI uses, but let education adapt through feedback loops rather than imposing outright bans.
- Zhijing recommends regulation for misinformation while letting educational practice evolve alongside model use.
Hybrid Path: Symbolic Experts With RL
- Narrow, symbolic prompts combined with expert tools can elicit reliable causal reasoning from LLMs.
- Combining symbolic experts with reinforcement learning and generated data may bridge emergent and designed approaches.