
80,000 Hours Podcast

Latest episodes

Jan 24, 2024 • 2h 47min

#177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps

AI entrepreneur Nathan Labenz discusses the capabilities and limitations of current AI, concerns about AI deception, breakthroughs in protein folding, and how self-driving cars compare to human drivers on safety. He also covers the potential of GPT for vision, the online conversation around AI safety, Twitter's negative impact on public discourse, contrasting views on AI, how anti-regulation sentiment has backfired on the tech industry, the importance of constructive policy discussions on AI, concerns about face recognition technology and autonomous AI drones, and how to stay up to date with AI research.
Jan 12, 2024 • 2h 59min

#90 Classic episode – Ajeya Cotra on worldview diversification and how big the future could be

Ajeya Cotra discusses worldview diversification and anthropic reasoning, longtermism in philanthropy, empirical feedback in funding decisions, how to allocate resources across worldviews using fairness agreements and prior assumptions, and evaluating mindsets for doing good. The conversation also covers the challenges of human settlement in space, the simulation argument and its implications, the urgency of understanding transformative AI, estimating the computational power needed to replicate the human brain, comparing human technology with nature, planning under uncertainty about transformative AI, comparing giving opportunities, opportunities and growth at Open Phil, strategies for getting involved and standing out, and favourite TV shows and movies.
Jan 8, 2024 • 3h 51min

#112 Classic episode – Carl Shulman on the common-sense case for existential risk work and its practical implications

In this rebroadcast episode, Carl Shulman, research associate at Oxford University, makes the common-sense case for working on existential risks: catastrophic disasters are disturbingly likely, and the risk of them can be reduced at an acceptable cost. The conversation covers AI safety, biological weapons programs, the importance of clean energy research, the challenges of estimating risks and identifying choke points in conflict escalation, the significance of the present moment, and common misinterpretations of existential risk work. And don't miss the discussion of unwinding with hot springs and the TV show Rick and Morty!
Jan 4, 2024 • 3h 22min

#111 Classic episode – Mushtaq Khan on using institutional economics to predict effective government reforms

Mushtaq Khan, expert in using institutional economics to predict effective government reforms, discusses the challenges of networked corruption, the limitations of privatization and liberalization, the role of industrial policy in economic development, and strategies for building capabilities in developing countries.
Dec 31, 2023 • 1h 54min

2023 Mega-highlights Extravaganza

Highlights from each episode of the show in 2023, covering topics like punctuated equilibrium, fast AI takeoff, political action vs lifestyle changes, environmental impact, the rational irrationality of voters, AI extinction risks, elephants' resistance to cancer, the lifespan of civilisations, and neural interface hacking. More highlights reels can be found on 80K After Hours.
Dec 27, 2023 • 2h 52min

#100 Classic episode – Having a successful career with depression, anxiety, and imposter syndrome

This episode provides a searingly honest account of Howie's experiences with mental illness, including losing a job he loved, and his journey to recovery. It challenges conventional wisdom on mental health and offers practical advice for improvement. The conversation also explores balancing mental health with career success, overcoming avoidance, seeking treatment for depression and anxiety, and the supportive effective altruism community, and concludes with personal stories and recommended podcast episodes.
Dec 22, 2023 • 3h 47min

#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

Nathan Labenz, entrepreneur and AI scout, discusses OpenAI's mission and the recent drama surrounding its leadership. He shares his experience as part of the GPT-4 red team, raising concerns about AI safety and control measures. The podcast explores OpenAI's actions in ensuring safety, the importance of specialized models, and the impact of GPT-4 on the field of AI. The conversation also delves into communication breakdowns, knowledge sharing practices, and the need for caution in open-sourcing AI models.
Dec 14, 2023 • 2h 14min

#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child

Lucia Coulter, expert on preventing lead poisoning, discusses how the Lead Exposure Elimination Project (LEEP) has successfully reduced childhood lead exposure in poor countries, and the devastating effects of lead poisoning on health and cognitive development. The conversation covers the cost-effectiveness of LEEP's work, the sources of lead exposure, efforts to remove leaded paint, challenges in lead testing methods, intervention strategies, funding, and industry support for regulation.
Dec 7, 2023 • 2h 1min

#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

Nita Farahany, expert on the impact of neurotechnology, discusses mind reading accuracy, hacking neural interfaces for depression, companies using neural data in the workplace, unlocking phones by singing in our heads, and the implications of wearable mind-reading technologies.
Nov 22, 2023 • 2h 38min

#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe

Jeff Sebo, expert on digital minds and avoiding moral catastrophe, discusses the potential sentience of AI systems by 2030, the concept of digital minds and its impact on moral concepts, legal and political status for AI, the ethical implications of creating copies of digital minds, and the challenges of fragmentation within individual organisms. They also explore the possibility of granting legal personhood and political rights to AI systems and the major priorities for AI welfare research.
