Training image generation models on skewed datasets can produce outputs that reflect the bias in the data. Google engineered prompts to ensure diversity in generated images, to the point of generating racially diverse Nazis: even with historically accurate prompts, the model consistently produced racially mixed outputs, sparking controversy over its overly diverse results.
Our 157th episode with a summary and discussion of last week's big AI news!
Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts) plus there’s a video version on YouTube.
Bonus plug: also check out this new book by Stanford AI expert, bestselling author, and Last Week in AI supporter Jerry Kaplan: Generative Artificial Intelligence: What Everyone Needs to Know
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai
Timestamps + links:
- (00:00:00) Intro / Banter
- Tools & Apps
- Applications & Business
- Projects & Open Source
- Research & Advancements
- Policy & Safety
- Synthetic Media & Art
- Fun!