#281 - Risk Assessments for AI Learning Tools, a conversation, Part 1
Nov 7, 2024
Dave Turnbull, the Deputy Head of Educator AI Training at Educate Ventures Research, shares insights on the importance of risk assessments for AI learning tools in education. He discusses who these assessments are for and their accessibility for educators. The conversation highlights ethical risks, data privacy, and the need for clear communication about AI tools. They explore the challenges educators face when integrating AI while ensuring student safety, emphasizing the shared responsibilities of developers and educators in fostering effective use of AI technologies.
Risk assessments for AI learning tools provide educators with crucial insights about potential limitations and misuse scenarios in the classroom.
The responsibility for using AI tools safely rests with teachers, who often protect students by simply avoiding unverified technologies, despite lacking any regulatory power of their own.
Deep dives
The Need for AI Risk Assessments
Risk assessments for AI learning tools are essential to evaluate their implications for education. These assessments help educators understand the functionalities of AI tools while identifying potential limitations and misuse scenarios. The process aims to support teachers, particularly those who might lack time or expertise, by providing clear insights on using these technologies effectively. By offering guidance on risk mitigation, the assessments create a framework for informed decision-making around AI implementations in classrooms.
Target Audience for Assessments
The primary audience for AI risk assessments consists of educators, particularly teachers directly involved in classroom settings. These tools are designed to be accessible, using straightforward language and concise information to ensure comprehension among those without a technical background. Additionally, the assessments address the needs of tech developers by highlighting areas of concern that may be overlooked in the creation process. This dual focus ensures both educators and developers are informed about the potential risks and responsibilities associated with AI tools.
Balancing Data and Ethical Risks
Assessments categorize risks into data-related and ethical concerns, reflecting the complexities of integrating AI in the educational landscape. Data risks focus on privacy and security, particularly regarding the handling of sensitive student information. Ethical risks examine biases inherent in AI models, urging developers to consider the diverse backgrounds of users. This thorough separation of risks aims to prompt both educators and developers to engage in constructive dialogues about improving safety and efficacy in AI applications.
Empowering Educators Amidst Rapid Innovation
Teachers bear the responsibility of using AI tools safely within their classrooms, yet they often lack the power to enforce regulations or demand accountability from developers. Many educators instinctively mitigate risks by choosing not to expose students to unverified technologies, reflecting an understanding of safety in practice. With ongoing rapid development in AI technologies, a clear framework for guidance is increasingly necessary to protect learners. This scenario underscores the importance of fostering a proactive safety mindset among educators to manage their concerns with innovative tools.
In today’s episode, we have the first part of a two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools. Professor Rose Luckin is away in Australia, speaking internationally, so Rowland Wells takes the reins to chat with Educate Ventures Research team members about their experience managing risk as teachers and developers. What does a risk assessment look like and whose responsibility is it to take onboard its insights? Rose joins our discussion group towards the end of the episode, and in the second instalment of the conversation, Rowland sits down with Dr Rajeshwari Iyer of sAInaptic to hear her perspective on risk and testing features of a tool as a developer and CEO herself.
Dave Turnbull, Deputy Head of Educator AI Training, EVR
Ibrahim Bashir, Technical Projects Manager, EVR
Rose Luckin, CEO & Founder, EVR
Talking points and questions include:
Who are these for? What's the profile of the person we want to engage with these risk assessments? They're concise, easy to read, and free of technical jargon, but they're still analyses aimed at people with a research and evidence mindset. Many people ignore them: we know that even learning tool developers who publish research on their tools ON THEIR WEBSITES find that the public doesn't actually read it. So how do we get this in front of people? Do we lead the conversation with budget concerns? Safeguarding concerns? Value for money?
What’s the end goal of this? Are you trying to raise the sophistication of the conversation around evidence and risk? Many developers you critique might just think you’re trying to make a name for yourself by pulling apart their tools. Surely the market will sort itself out?
What’s the process involved in making judgements about a risk assessment? If we’re trying to demonstrate to the buyers of these tools, the digital leads in schools and colleges, what to look for, what’s the first step? Can this be done quickly? Many who might benefit from AI tools may not have the time to exhaustively hunt out all the little details of a learning tool and interpret them for themselves.
Schools aren’t testbeds for intellectual property or tech interventions. Why is it practitioners’ responsibility to make these kinds of evaluations, even with the aid of assessments like these? Why is the tech and AI sector not capable of regulating its own practices?
You’ve all worked with schools and other learning and training institutions using AI tools. Although this episode is about using the tools wisely, effectively and safely, please tell us how you’ve seen teaching and learning enhanced by the safe and impactful use of AI.