This chapter delves into the scientific challenge of aligning AI and explains why current alignment techniques won't work for superintelligence. It also discusses how the broader conversation around AI risks is influencing AI development, including the creation of a United Nations advisory body and the announcement of the world's first AI safety institute.
OpenAI forms a team focused on preparing for the biggest, most catastrophic risks around AI. NLW explores the announcement, as well as the new UN AI advisory council.
ABOUT THE AI BREAKDOWN
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/