Episode 26: Universities Anxiously Buy in to the Hype (feat. Chris Gilliard), February 5, 2024
Feb 15, 2024
Chris Gilliard, Just Tech Fellow, discusses the lack of student protections in AI-driven educational technologies at universities. Topics include the wave of universities adopting AI, the limitations of ChatGPT, surveillance concerns, the consequences of AI in higher education, privacy concerns around enterprise chatbots, the impact of AI on journalism, misconceptions about public statements, and the retirement of a subway robot.
The deployment of AI tools in education raises concerns about their effectiveness in promoting learning and addressing inequalities, as well as the potential privacy implications and displacement of human instructors.
Wellness chatbots offered by employers as a worker benefit raise concerns about their effectiveness in providing mental health support, as well as the potential for surveillance and data exploitation.
Advancements in image processing technology enable AI systems to identify locations in photos, posing privacy risks and potential misuse.
Deep dives
The Rise of AI in Education
AI tools, like chatbots, are being increasingly deployed in educational settings. However, there are concerns regarding the inadequacy of these tools in helping students learn, as well as the potential harms of biases and surveillance. Critics argue that AI technology perpetuates inequalities rather than leveling the playing field, as claimed by some proponents. Moreover, the consent and privacy of students involved in such initiatives are questionable. The push for AI in education raises important questions about the future of learning and the potential displacement of human instructors.
Employers Offer Wellness Chatbots
Employers are now offering wellness chatbots as a worker benefit. These chatbots use artificial intelligence to engage in therapist-like conversations, make diagnoses, and provide mental health support, with the aim of addressing the high demand for, and limited supply of, mental health professionals. However, there are concerns about the effectiveness and privacy of relying on chatbots for mental health support, as well as the potential for surveillance and data exploitation.
AI Can Identify Locations in Photos
Advancements in image processing technology now allow AI systems to accurately identify locations in photos. This raises privacy concerns, as AI can potentially track and analyze individuals' movements based on their online photos. There are worries about government surveillance, corporate tracking, and the potential for misuse, such as stalking. It is important to be cautious about sharing location information and consider the privacy implications of new AI capabilities.
Virtual Students and AI
Ferris State University in Michigan has created virtual students using AI technology. These virtual students enroll in classes and participate in lessons alongside human classmates. The aim is to gather insights about the college experience, but critics argue that virtual students cannot replicate the true learning and social interactions of human students. Concerns arise regarding informed consent, as well as the role of AI in replacing instructors and casualizing education.
Robot Patrol in New York Subway
The New York Police Department's patrol robot, the Knightscope K5, has struggled to navigate the subway and has spent much of its deployment plugged into a charger. It has also required police officers to accompany it, raising questions about the effectiveness and necessity of integrating robots into law enforcement. Critics argue that those resources could be better allocated to other public safety concerns.
Fresh AI Hell
These examples highlight concerning trends in the AI landscape: from replicating famous individuals with AI chatbots, to deploying AI for mental health support, to image processing and location tracking capabilities that bring new privacy risks. It is imperative to critically examine the impact and implications of AI technologies as they continue to be integrated into so many areas of our lives.
Just Tech Fellow Dr. Chris Gilliard aka "Hypervisible" joins Emily and Alex to talk about the wave of universities adopting AI-driven educational technologies, and the lack of protections they offer students in terms of data privacy or even emotional safety.