The Conversation Weekly

Silicon Valley’s bet on a future of AI-enabled warfare

Jan 16, 2025
Elke Schwarz, a Reader in Political Theory at Queen Mary University of London, dives into the moral implications of AI in warfare. She discusses how war zones like Gaza and Ukraine have become testing grounds for autonomous weapons. With billions of dollars from Silicon Valley fueling this trend, Schwarz sheds light on the ethical dilemmas of using AI for target identification and the rapid rise of defense tech startups. She also emphasizes the risks of deploying untested systems and questions the narratives that prioritize technology over ethical considerations.
INSIGHT

AI in Military Operations

  • Militaries use AI for logistics, translation, and supply chains.
  • Thorny issues arise when AI takes over human decision-making, like targeting.
ANECDOTE

Lavender AI System in Gaza

  • Israel reportedly used an AI system called Lavender to identify Hamas targets in Gaza.
  • The system flagged over 37,000 Palestinians, raising concerns about accuracy.
INSIGHT

Accelerated Targeting with AI

  • AI systems in warfare, like Project Maven, aim to shorten the "sensor to shooter" timeline.
  • This acceleration raises concerns about human oversight and potential for errors.