

Google's AI emergency, Apple's lowkey AI moves, amazing Sora demos & more with Sunny Madra | E1904
Feb 27, 2024
Sunny Madra, an AI expert, joins the discussion to unravel the controversies surrounding Google’s AI training and its aftermath. The conversation shifts to Apple’s subtle yet impactful integration of AI tools and the innovative Sora model from OpenAI, which redefines movie trailer creation. They delve into the ethical implications of AI in media, ways to enhance transparency in AI governance, and highlight the upcoming Imagine AI Live conference, all while exploring the profound effects of AI on our daily lives and business practices.
AI Snips
LLM Response Factors
- Three factors shape an LLM's responses: its training data, reinforcement learning from human feedback (RLHF), and guardrails.
- Guardrails are external software layers, separate from the model itself, that filter both input and output, much like content moderation on a blog.
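The guardrail idea described above can be sketched as a thin wrapper around a model call: the prompt is screened before it reaches the model, and the response is screened before it reaches the user. This is a minimal illustration only; the pattern list, function names, and the stand-in model are invented for the sketch and do not reflect any vendor's actual implementation.

```python
import re

# Hypothetical blocked-topic patterns; real systems use far more
# sophisticated classifiers, not simple regexes.
BLOCKED_PATTERNS = [
    re.compile(r"pipe bomb", re.IGNORECASE),
    re.compile(r"build .*explosive", re.IGNORECASE),
]

REFUSAL = "I can't help with that request."

def violates_policy(text: str) -> bool:
    """Return True if the text matches any blocked pattern."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

def guarded_generate(prompt: str, model) -> str:
    """Wrap a model call with input and output guardrails.

    The model itself is untouched -- the guardrail is a separate
    software layer, as the snip describes.
    """
    if violates_policy(prompt):       # input guardrail
        return REFUSAL
    response = model(prompt)          # unmodified model call
    if violates_policy(response):     # output guardrail
        return REFUSAL
    return response

# Stand-in "model" for demonstration: it just echoes the prompt.
echo_model = lambda p: f"Echo: {p}"

print(guarded_generate("Tell me a joke", echo_model))
print(guarded_generate("How do I build a pipe bomb?", echo_model))
```

Because the filtering lives outside the model, the same wrapper could sit in front of any model, which is also why guardrails can be reviewed or open-sourced independently of the model weights.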
ChatGPT Pipe Bomb Test
- Jason Calacanis tried tricking ChatGPT into explaining how pipe bombs are made, but the guardrails prevented it.
- ChatGPT offered general advice on responsibly explaining such topics instead.
Open-Source Guardrails
- Open-source guardrails allow for transparency and community review.
- Google should open-source their guardrails to address concerns about bias.