#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

80,000 Hours Podcast

Reevaluating Long-Termism in Ethical Considerations

This chapter critically examines long-termism, questioning whether prioritizing future generations over present needs is the most effective approach. It argues for a balanced view that addresses current ethical challenges, particularly those posed by AI, alongside their longer-term implications.
