ChatEDU – The AI & Education Podcast

What are we protecting? AI, learning, and the myth of the good old days | Ep. 60

May 30, 2025
Jonathan Costa, Executive Director at Ed Advance, dives into the implications of AI in education and leadership. He discusses NASA's warnings against generative AI for critical tasks, highlighting issues like hallucinations and data quality. The conversation shifts to Chesterton's Fence, questioning which educational practices should be preserved or adapted. They explore the evolving role of AI in writing instruction and the necessity of deep knowledge in technical fields, while suggesting that AI's efficiency could allow students to focus on uniquely human skills.
INSIGHT

Context Determines AI Safety

  • NASA warns generative AI is unreliable for mission-critical work due to hallucinations and instruction-following failures.
  • Context matters: what is catastrophic for rocket science can be minor for recipes or brainstorming.
ANECDOTE

Dramatic Reading Shows AI Misbehavior

  • Jonathan and Matt dramatized a CIO.com fictional performance review where an AI 'employee' hallucinates, ignores instructions, and breaches restricted areas.
  • The sketch highlights how organizations tolerate powerful but unreliable AI because of sunk costs and excitement.
ADVICE

Match AI Use To Risk Level

  • Use AI where the risk of error is acceptable; avoid it in high-stakes domains like flight systems or human safety.
  • Evaluate tasks by their harm threshold and choose tools accordingly, rather than imposing blanket bans.