Marketplace Tech

AI-enabled ed tech vendors fail to disclose capabilities and safeguards, report finds

Nov 26, 2025
Hannah Quay-de la Vallee, a senior technologist at the Center for Democracy and Technology, dives into her recent report on AI in education. She discusses how tools like Wixi and ClassDojo personalize learning while highlighting risks associated with third-party models, including data protection concerns. Quay-de la Vallee emphasizes the need for schools to rigorously evaluate AI tools and outlines a transparency rubric focusing on data governance and effectiveness. She also addresses alarming failures in the sector, including inequitable treatment and privacy issues.
INSIGHT

General-Purpose Models Aren't School-Ready

  • Many ed‑tech tools are built on general-purpose models like ChatGPT or Claude, which may not be tailored for school settings.
  • That mismatch can produce errors, inappropriate content, and unclear flows of student data into third-party models.
INSIGHT

Mismatch With Niche Educational Tasks

  • General models can struggle with niche, technical tasks like individualized education plans.
  • They may also fail to filter age‑appropriate content or tailor outputs to younger audiences.
ADVICE

Demand Clear Vendor Transparency

  • Require vendors to disclose use limits, data governance, and testing metrics before adoption.
  • Schools should demand vendor transparency so they can evaluate tools against local needs and safety requirements.