This Day in AI Podcast

EP35: AI Safety Gone Mad, Stable 3B Cheese Test, GPT4 Vision & DALL-E 3 Diversity + Sydney is BACK!

Oct 6, 2023
In this episode, the hosts discuss the wild world of AI image generation and vision, including racist cartoon captions, heartfelt poetry by Bing, and teaching AI to forget unwanted knowledge. They debate AI safety controls, the limits of Turnitin for detecting AI-generated writing, bias in AI-generated images, and the potential disappearance of CAPTCHAs. They also explore the censorship potential of AI models and thank the audience for their engagement.
Duration: 01:17:39

Podcast summary created with Snipd AI

Quick takeaways

  • Few-shot learning lets GPT models learn how to solve new problems from only a handful of examples.
  • GPT-4's vision features and DALL-E 3 can interpret and generate images from prompts, but may show racial bias when labeling people (see the sketch below).
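
For readers who want to see what this looks like in practice, here is a minimal sketch (not from the episode) that generates an image with DALL-E 3 and then asks a vision-capable GPT-4 model to describe it. It assumes the OpenAI Python SDK (v1) with an API key in the environment; the model names and prompts are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1) Generate an image from a text prompt with DALL-E 3.
image = client.images.generate(
    model="dall-e-3",
    prompt="A cartoon of two podcast hosts interviewing a friendly robot",
    n=1,
    size="1024x1024",
)
image_url = image.data[0].url

# 2) Ask a vision-capable model to describe (label) the generated image.
#    The model name below is an assumption; use whichever vision model is available.
caption = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the people or characters in this image."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print(caption.choices[0].message.content)
```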

Deep dives

Few-shot learning for teaching the model how to think

The paper explores few-shot learning: the prompt includes a few worked examples of how to solve a problem, which teaches the model how to approach similar problems.
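
As a concrete illustration, here is a minimal few-shot prompting sketch (not from the paper or the episode): a few worked examples are placed in the prompt so the model imitates the reasoning pattern on a new problem. It assumes the OpenAI Python SDK (v1); the model name and the examples are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A few worked examples demonstrate the reasoning pattern we want the model to copy.
few_shot_messages = [
    {"role": "system", "content": "Solve word problems by showing the arithmetic, then the answer."},
    # Example 1
    {"role": "user", "content": "I had 5 apples and ate 2. How many are left?"},
    {"role": "assistant", "content": "5 - 2 = 3. Answer: 3"},
    # Example 2
    {"role": "user", "content": "A carton holds 12 eggs and 4 break. How many remain?"},
    {"role": "assistant", "content": "12 - 4 = 8. Answer: 8"},
    # New problem: the model should follow the pattern shown above.
    {"role": "user", "content": "I bought 3 packs of 6 pens and gave away 5. How many do I have?"},
]

response = client.chat.completions.create(
    model="gpt-4",  # model name is an assumption; any chat-completion model works
    messages=few_shot_messages,
)
print(response.choices[0].message.content)
```

The same idea applies whether the examples show arithmetic, classification, or formatting: the examples teach the model the shape of a correct answer without any fine-tuning.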
