Samuel Hammond on AGI and Institutional Disruption

Future of Life Institute Podcast

Alignment with AGI and the Uncertainty of Learned Values

This chapter explores why alignment with artificial general intelligence (AGI) might be easier than previously thought, since large language models are sensitive to human concepts of value and context. However, uncertainty remains about what a model has actually learned and whether it would comply with human values in all scenarios. The discussion covers the limitations of AI models, potential similarities with human representations, progress in interpretability, and the timeline for AGI development.
