Marketplace All-in-One

AI-enabled ed tech vendors fail to disclose capabilities and safeguards, report finds

Nov 26, 2025
Hannah Quay-de la Vallee, a Senior Technologist at the Center for Democracy and Technology, coauthored a report on transparency in AI education technologies. She discusses how AI is increasingly used in classrooms and the risks it poses, including privacy violations and inequitable treatment of students. Hannah argues for a transparency rubric for vendors, focused on data governance and success metrics, and encourages schools to evaluate these tools carefully, stressing the importance of context and ongoing monitoring in educational settings.
ANECDOTE

Concrete Tool Examples: Wixi And ClassDojo

  • Hannah names examples like Wixi, which personalizes learning with avatar conversations, and ClassDojo, which uses AI across many functions.
  • These concrete tool examples illustrate how varied AI uses already are in education.
INSIGHT

General AI Models May Not Fit Classrooms

  • Many ed‑tech tools are built on general-purpose models like ChatGPT or Claude, raising core questions about fit and safety.
  • These models may not be tailored for education and can produce errors or inappropriate content for students.
INSIGHT

Data Flows Can Escape School Control

  • Data fed into third‑party models may be used to further train those models or leave school control.
  • This raises concerns about data protection, student privacy, and compliance with education laws.