The use of AI detection tools in higher education raises concerns about collateral damage: even the best tools have a 5% error rate, which can lead to falsely accusing students of cheating and diminish trust in higher education. This raises ethical questions about what levels of collateral damage, false positives, and false negatives are acceptable. There is also a fear that these tools could disproportionately impact students based on whether they can afford advanced AI versions, potentially widening educational inequalities.
C. Edward Watson talks about thinking with and about AI on episode 517 of the Teaching in Higher Ed podcast.
Quotes from the episode
Where will things be in two and a half years? And how do you prepare students for that world that’s rapidly evolving?
-Eddie Watson
You must use AI as a starting point in the real world.
-Eddie Watson
Even the best tool on the market says that it gets it wrong one out of 20 times. You know, there’s a false positive. It’ll accuse a student of cheating who did not cheat with AI. And that’s the best in show tool.
-Eddie Watson
There are so many ethical concerns within this space just around AI detection.
-Eddie Watson