
Highlights: #159 – Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less
80k After Hours
Introduction
Discussion of the reasons for optimism about AI alignment, including the usefulness of large language models for understanding natural language and morality, favorable results from alignment techniques like InstructGPT, and the belief that alignment is tractable and that focused research efforts can make significant progress. Also covers how evaluation is easier than generation for many tasks, and the goal of automating alignment research.