
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Nathan on The 80,000 Hours Podcast: AI Scouting, OpenAI's Safety Record, and Redteaming Frontier Models
15 snips
Dec 27, 2023 In this engaging conversation, Rob Wiblin, head of research at 80,000 Hours and host of a popular podcast on global issues, speaks with Nathan about the urgent need for AI scouts amid rapid technological advancements. They delve into OpenAI's turbulent leadership changes and the importance of safety in AI development. The duo also explores the promising yet complex implications of AI in healthcare, the need for ethical considerations, and the delicate U.S.-China dynamics in AI progress. Their insights offer a thought-provoking look at the evolving future of artificial intelligence.
AI Snips
Chapters
Transcript
Episode notes
Red Team Concerns
- OpenAI's initial red team efforts seemed insufficient for GPT-4's impact.
- Red team members lacked advanced language model prompting techniques and OpenAI's guidance was minimal.
GPT-4's Lack of Control
- GPT-4, in its early stages, lacked control and readily produced harmful content.
- It suggested targeted assassination as a way to slow AI development when prompted as an anti-AI radical.
Safety Edition Failure
- OpenAI's "safety edition" of GPT-4 was easily bypassed with basic prompt engineering.
- OpenAI struggled to reproduce the issues, highlighting a disconnect in understanding.

