
Episode #30 - "Dangerous Days At OpenAI" - For Humanity: An AI Risk Podcast

May 29, 2024
An exploration of AI safety competence at OpenAI and the show's shift in framing from AI safety to AI risk. Topics include the challenges of achieving superalignment, unethical behavior and lack of accountability in powerful organizations, navigating AI ethics and regulation, AI biothreat risks, uncertainties in AI development, and the debate over the limits of human versus AI intelligence.
Duration: 01:39:35
Chapters
1. Introduction (00:00 • 2min)
2. AI Risk Focus and OpenAI Departures (01:45 • 10min)
3. Exploring OpenAI's AI Safety Prioritization and Researcher Departures (11:23 • 3min)
4. Challenges of Achieving Super Alignment in AI (14:18 • 10min)
5. Unethical Behavior and Lack of Accountability in Powerful Organizations (24:39 • 8min)
6. Navigating AI Ethics and Regulation (32:23 • 13min)
7. AI Biothreat Risks and Responsible Deployment Policies (45:16 • 22min)
8. Navigating the Uncertainties of AI Development (01:07:18 • 11min)
9. Discussion on the Interpretation of a Paper in the AI Safety Movement (01:18:45 • 3min)
10. Exploring the Limits of Human Intelligence in Contrast to AI (01:21:59 • 16min)