
Artificial General Intelligence (AGI) Show with Soroush Pour

Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)

Nov 8, 2023
Ryan Kidd, Co-Director of MATS, discusses the program's accelerated training for AI safety researchers, current research directions, gaps in alignment work, and the importance of ethical decision-making in AI safety. They also explore quantum decoherence, the trade-offs of open-source innovation, AI model training and alignment, the dangers of sycophancy, and human feedback mechanisms.
01:16:58


Quick takeaways

  • The MATS program trains AI safety researchers to align AI systems with human values for a safer future.
  • Research directions at MATS include scalable oversight strategies and the efficient integration of AI systems into human processes.

Deep dives

MATS Program Overview

The MATS (ML Alignment & Theory Scholars) program trains researchers in AI safety, with an emphasis on understanding and steering advanced AI systems. Through seminars, mentorship, and research opportunities, MATS focuses on aligning AI systems with human values to mitigate existential risks and help ensure a positive future for humanity.
