80k After Hours

The 80,000 Hours team

Feb 26, 2024 • 23min

Highlights: #179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Explore the evolutionary origins of emotions like anxiety and sadness. Understand how evolution has left humans vulnerable to depression. Delve into the relationship between low mood, unreachable goals, and mental health. Discover the misconceptions and realities of anxiety and depression rates. Learn about the evolution of emotions and morality in human behavior.

Feb 19, 2024 • 1h 21min

Actually After Hours #1: Bean Counting with Chana Messinger

Join the podcast team and guest Chana Messinger as they discuss effective altruism, family perspectives on intellectual pursuits, navigating perceptions of importance, truth-seeking in adversarial systems, pub trivia banter, insights on weddings and personal growth, the nuances of apologies, reflections on drinking behavior, redefining work and career satisfaction, striking a balance in human interactions, and the origins of the term 'steelman'.

Feb 15, 2024 • 24min

Highlights: #178 – Emily Oster on what the evidence actually says about pregnancy and parenting

Guest Emily Oster challenges parenting stereotypes and discusses the evidence behind doulas in birth, the importance of vaginal birth, the long-term aspect of parenting, and the impact of childcare arrangements on children's outcomes.

Feb 7, 2024 • 27min

Highlights: #177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps

Explore recent AI breakthroughs and the growing tension between AI safety and accelerationist camps. Delve into the challenges of self-driving cars and AI technology, advances in medicine with GPT-4 and Med-PaLM 2, and the importance of balancing present understanding with future projections in AI development.

Jan 25, 2024 • 21min

Highlights: #146 – Robert Long on why large language models like GPT (probably) aren’t conscious

Robert Long, an expert on large language models and consciousness, discusses the ethical implications of AI systems experiencing pleasure and suffering, the possibility of digital minds colonizing the solar system, the therapeutic effects of psychedelics, and whether large language models like GPT are conscious.

Jan 15, 2024 • 34min

Highlights: #176 – Nathan Labenz on the final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models

Nathan Labenz discusses the final push for AGI, OpenAI's leadership drama, and red-teaming frontier models. Highlights include OpenAI's proposals for AI development and regulation, questions about China's dominance in AI, and the need for control measures in AGI development.

Jan 9, 2024 • 20min

Highlights: #175 – Lucia Coulter on preventing lead poisoning for $1.66 per child

This is a selection of highlights from episode #175 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Lucia Coulter on preventing lead poisoning for $1.66 per child.

And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong.

Jan 3, 2024 • 25min

Highlights: #174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

Nita Farahany, an expert on neurotechnology, discusses the potential of brain-to-brain communication and overcoming interface limitations. She explores using neurotechnology to alleviate depression and the implications for mental privacy. The episode also delves into unlocking apps with brainwaves and neurotechnology in the workplace, addressing issues like employee monitoring and the ethical concerns it raises.

Dec 14, 2023 • 31min

Highlights: #173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe

Jeff Sebo, an expert on digital minds and avoiding moral catastrophes, discusses extending moral consideration to AI systems, determining the moral weight of sentient AI, assessing the likelihood that AI systems satisfy the conditions for moral status, the repugnant conclusion as it relates to insects and humans, and the unintentional exploitation of AI systems.

Dec 12, 2023 • 51min

Career review: AI safety technical research

Benjamin Hilton, who wrote 80,000 Hours' career review of AI safety technical research, discusses the various roles and career paths within AI safety technical research, tips for becoming an ML engineer and learning safety techniques, self-study recommendations for machine learning and AI safety, when a PhD makes sense for AI safety research, and the organizations and research labs focusing on AI safety.
