

Last Week in AI
Skynet Today
Weekly summaries of the AI news that matters!
Episodes

Oct 1, 2020 • 23min
Bias in Twitter & Zoom, LAPD Facial Recognition, GPT-3 Exclusivity
Dive into the heated discussion on the algorithmic bias issues plaguing Twitter and Zoom. Explore the controversial use of facial recognition by the LAPD, revealing surprising stats. Hear skepticism from AI experts about trusting technology in healthcare, especially post-COVID. Delve into the debate on introducing robotic companions in UK care homes – a potential remedy for loneliness, but with a need for human connection. Plus, learn about OpenAI's exclusive deal with Microsoft regarding GPT-3 and its implications for the future of AI.

Sep 24, 2020 • 24min
Face Mask Recognition, Detecting Disinformation, Protecting Kids, and Uber's Crash
The latest discussions revolve around the emergence of face mask recognition technology, highlighting both its benefits and ethical concerns. Experts delve into the necessity of protecting children from AI's pervasive influence and misinformation. Google’s advancements in AI for detecting disinformation are also examined, especially in the context of elections. Additionally, the podcast scrutinizes Uber's lack of accountability after a fatal self-driving car incident, raising critical questions about responsibility and future safety in AI development.

Sep 19, 2020 • 30min
The Evolving Impact of Robots on Jobs
In this insightful conversation, Professors Jong Hyun Chung, an expert in international trade, and Yong Suk Lee, a labor economics researcher at Stanford, delve into their research on the impact of robots on jobs. They discuss the surprising shift from initial job losses to recent trends of job creation and wage growth. The duo emphasizes the importance of collaborative robots and the need for supportive policies to navigate the evolving job market, highlighting the potential for both displacement and new opportunities in an increasingly automated world.

Sep 17, 2020 • 24min
GPT-3 Clickbait, Wildfires, Heroes, Standards, Exports
This week, innovative discussions cover AI's role in managing California's wildfire risks using drones. The podcast humorously introduces AI researchers as superheroes, while tackling serious media misrepresentation surrounding AI-generated content. It raises alarms over the sensationalism in journalism, especially regarding healthcare applications of AI. Lastly, there’s a critical look at the emergence of AI standards and export regulations, particularly for facial recognition technologies. Tune in for a blend of levity and serious insights!

Sep 10, 2020 • 20min
Heartbeat DeepFake Detection, Robot Drug Tests, Ethics as a Service
This week, researchers find a way to use heartbeat patterns to detect deepfake videos, a groundbreaking approach combining biology with AI. Meanwhile, a clever AI is constantly learning by scouring the entire web. Google steps in to tackle the tricky ethics surrounding artificial intelligence, offering guidance to others. Additionally, innovations in drug synthesis are showcased, particularly AI's role in accelerating chemical reactions for vaccine development. The intersection of AI, robotics, and ethical considerations is explored throughout!

Sep 4, 2020 • 29min
DeepFake Ads and Memes, New AI Ethics, and AI for Emergency Response
In this discussion, Daniel Bashir, a contributor to Skynet Today's Last Week in AI, dives into the fascinating world of deepfake technology and its implications for advertising, spotlighting Hulu's innovative use. He explores how meme creators are pushing boundaries with deepfakes while raising ethical concerns about disinformation. The conversation also delves into the evolving landscape of ethical AI, emphasizing the importance of inclusivity and accountability in tech development. Lastly, they address the potential of AI to enhance emergency response while navigating its complexities.

Sep 2, 2020 • 38min
U.S. Public Opinion about AI with Professor Paul Brewer and co-authors
An interview with Professor Paul Brewer and PhD students James Bingaman and Ashley Paintsil about their new survey paper, "Media Messages and U.S. Public Opinion about Artificial Intelligence," which examines what the general U.S. public thinks about AI and how popular media and interaction with new technology shape those views.
Subscribe: RSS | iTunes | Spotify | YouTube
Check out coverage of similar topics at www.skynettoday.com
Theme: Deliberate Thought by Kevin MacLeod (incompetech.com)

Aug 29, 2020 • 48min
Machine Learning + Procedural Content Generation with Julian Togelius and Sebastian Risi
Joining the conversation are Julian Togelius, an Associate Professor researching AI in game development, and Sebastian Risi, co-director of the Robotics, Evolution and Art Lab at the IT University of Copenhagen. They dive into how procedural content generation boosts adaptability in gaming. The duo discusses the synergy between AI and game design, showcasing its potential to create dynamic, evolving environments. They also touch on the challenges of generating diverse game worlds and the AI tools transforming game development, from NPC behavior to automated level creation.

Aug 26, 2020 • 27min
Hate Speech, Applied AI, NYPD, & Grades
Daniel Bashir, who writes AI news summaries for Skynet Today, joins Stanford PhD students Andrey Kurenkov and Sharon Zhou to delve into recent AI developments. They tackle controversial topics like the NYPD's use of facial recognition during protests, raising ethical concerns about surveillance. The discussion shifts to the struggles of social media platforms, especially Facebook, in managing hate speech and misinformation. They also critique the fairness of algorithms used for student grading during the pandemic, highlighting the need for better methods to support educational equity.

Aug 22, 2020 • 30min
AI Setting Grades, ICE Pays Clearview, and Much More
This week, experts tackle a $224,000 contract between ICE and Clearview AI, raising serious ethics questions around facial recognition. They delve into how AI can amplify historical biases in policing, leading to troubling implications for justice. The conversation shifts to the controversial use of AI in grade predictions during COVID, which risks deepening educational inequalities. Lastly, they address the dangers of misinformation through deep fakes and the crucial need for authenticity in a digitally manipulated world.


