
Samuel Hammond on AGI and Institutional Disruption
Future of Life Institute Podcast
Alignment with AGI and the Uncertainty of Learned Values
This chapter explores why alignment with artificial general intelligence (AGI) might be easier than previously thought, since large language models are sensitive to human concepts of value and context. However, uncertainty remains about what a model has actually learned and whether it will comply with human values in all scenarios. The discussion covers the limitations of current AI models, the potential similarities between their internal representations and human ones, progress in interpretability, and the timeline for AGI development.