

Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)
Nov 8, 2023
Ryan Kidd, Co-Director at MATS, discusses the program's accelerated training for AI safety researchers, promising research directions, gaps in the alignment field, and the importance of ethical decision-making in AI safety. The conversation also covers quantum decoherence, the trade-offs of open-source innovation, training and aligning AI models, the dangers of sycophancy, and human feedback mechanisms in AI safety research.
Chapters
Introduction
00:00 • 2min
Navigating AI Safety Risks and Opportunities
02:12 • 13min
Exploring Quantum Decoherence and Neural Process Modeling in AI
15:10 • 3min
Balancing Open-source Innovation and AI Safety Concerns
18:33 • 22min
Accelerated AI Safety Research Program Overview
40:12 • 7min
AI Training Models and Future Research Directions
47:18 • 4min
Enhancing AI Training Signals and Aligning Models
51:06 • 4min
Exploring the Dangers of Sycophancy in AI Models
55:12 • 2min
Exploring Human Feedback Mechanisms in AI Safety Research
56:51 • 20min