80,000 Hours Podcast

Rob, Luisa, and the 80,000 Hours team
15 snips
Sep 8, 2023 • 3h 7min

#163 – Toby Ord on the perils of maximising the good that you do

Toby Ord, a moral philosopher from the University of Oxford and a pioneer of effective altruism, discusses the complexities of maximizing good in altruistic efforts. He warns against the dangers of an all-or-nothing approach, using the FTX fallout as a cautionary tale. Toby emphasizes the importance of integrity and humility in leadership and argues for a more balanced goal: 'doing most of the good you can.' He also explores the intricate relationship between utilitarian ethics and individual character, highlighting the nuanced nature of moral decision-making.
223 snips
Sep 4, 2023 • 4h 41min

The 80,000 Hours Career Guide (2023)

Benjamin Todd, the founder of 80,000 Hours and author of the 80,000 Hours Career Guide, dives into finding meaningful work. He discusses how unconventional career choices can drive personal fulfillment and societal impact. Todd emphasizes that passion doesn’t always equate to happiness and outlines six key ingredients for job satisfaction. He also explores the importance of strategic giving and how individuals can align their careers with global challenges, including the pressing threats posed by AI and climate change.
7 snips
Sep 1, 2023 • 60min

#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Mustafa Suleyman, co-founder of DeepMind and founder of Inflection AI, draws on his new book to share his insights on the urgent challenges posed by AI. He warns that AI and biotechnologies could empower criminals and destabilize societal norms. The discussion emphasizes the delicate balance democratic nations must maintain to avoid chaos or authoritarianism. Suleyman advocates for cautious regulation and ethical oversight in AI development to prevent misuse, highlighting the geopolitical ramifications of unchecked technological advancements.
246 snips
Aug 23, 2023 • 3h 31min

#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite

In this discussion, economist Michael Webb, known for his work at DeepMind and Stanford, tackles the complex implications of AI on the labor market. He explores whether automation will lead to mass unemployment or economic growth. Webb shares historical insights, revealing how past technologies have impacted job dynamics and inequality. Key topics include the scope of job exposure to AI, how technology can initially widen inequality, and the gradual pace of automation adoption. He also contemplates the entrepreneurial possibilities sparked by AI advancements.
174 snips
Aug 14, 2023 • 2h 37min

#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment

Hannah Ritchie, head of research at Our World in Data and author of "Not the End of the World," shares her insights on environmental optimism. She argues that increasing agricultural productivity in sub-Saharan Africa is crucial for alleviating global poverty and food insecurity. Ritchie discusses the importance of technological advancements and international cooperation in addressing climate challenges. She also highlights the success of global initiatives like the Montreal Protocol and emphasizes the interconnectedness of environmental issues, inspiring listeners to engage proactively.
75 snips
Aug 7, 2023 • 2h 51min

#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

Jan Leike, Head of Alignment at OpenAI and leader of the Superalignment project, discusses the ambitious goal of safely developing superintelligent AI within four years. He addresses the challenges of aligning AI with human values and the importance of Reinforcement Learning from Human Feedback (RLHF). Leike expresses guarded optimism about finding solutions to steer AI safely, emphasizing collaboration and innovative approaches in tackling these complex issues. The conversation also highlights recruitment efforts to build a team for this critical initiative.
Aug 5, 2023 • 6min

We now offer shorter 'interview highlights' episodes

Discover the new highlight episodes tailored for busy listeners, condensing essential insights into 20-30 minute segments. Dive into the practical applications of cognitive behavioral therapy, exploring techniques like guided discovery and relaxation strategies. Learn how treating therapeutic exercises as experiments can foster personal growth and improve mental health. These bite-sized talks offer a quick yet enriching glimpse into tackling the world's most pressing problems.
93 snips
Jul 31, 2023 • 3h 14min

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Holden Karnofsky, co-founder of GiveWell and Open Philanthropy, focuses on AI safety and risk management. He discusses the potential pitfalls of AI systems that may not exceed human intelligence but could outnumber us dramatically. Karnofsky emphasizes the urgent need for safety standards and the complexities of aligning AI with human values. He also presents a four-part intervention playbook for mitigating AI risks, balancing innovation with ethical concerns. The conversation sheds light on the critical importance of responsible AI governance in shaping a safer future.
237 snips
Jul 24, 2023 • 1h 19min

#157 – Ezra Klein on existential risk from AI and what DC could do about it

Ezra Klein, a journalist for The New York Times and host of "The Ezra Klein Show," dives into the challenges of AI regulation. He discusses the looming existential risks of AI, drawing parallels to nuclear weapons and advocating for direct government funding in AI safety. The conversation reveals differing views within AI ethics and emphasizes the urgent need for accountability frameworks. Klein also shares thoughts on parenting challenges, interweaving personal insights with the complexities of AI governance.
25 snips
Jul 10, 2023 • 2h 7min

#156 – Markus Anderljung on how to regulate cutting-edge AI models

Markus Anderljung, Head of Policy at the Centre for the Governance of AI, dives into the complex world of AI governance. He discusses the urgent need for regulations on advanced AI, including self-replicating models and the risk of dangerous capabilities. Topics range from the challenges of deploying AI safely to the potential for regulatory capture by the industry. Anderljung emphasizes the importance of proactive measures and international cooperation to ensure accountability and safety in AI development, making this conversation pivotal for anyone interested in the future of technology.
