Hear This Idea

Latest episodes

Nov 22, 2023 • 1h 28min

#73 – Michelle Lavery on the Science of Animal Welfare

Michelle Lavery discusses the science of animal welfare: how scientists study animal emotions, the role of anthropomorphism, how to measure animal preferences, perceptions of animal welfare science, and how to get involved in the field.
Nov 4, 2023 • 1h 48min

#72 – Richard Bruns on Indoor Air Quality

Richard Bruns, Senior Scholar at Johns Hopkins Center for Health Security and former Senior Economist at the US FDA, discusses the importance of indoor air quality (IAQ) and interventions to improve it such as air filtration and germicidal UV light. They also explore the impact of particulate matter on human health, the value of statistical life, national policy changes needed for widespread adoption of IAQ interventions, and the role of FDA regulation. Additionally, they touch on rethinking cost-benefit analysis, complex systems, and cultural socialization.
Oct 19, 2023 • 2h 53min

#71 – Saloni Dattani on Malaria Vaccines and Missing Data in Global Health

Saloni Dattani, a researcher at Our World in Data and founder of Works in Progress, discusses the history of malaria eradication efforts, delays in developing a malaria vaccine, the rollout of the RTS,S vaccine, the issue of missing global health data, and the uncounted deaths from snakebites in India. They also talk about new funding models for life-saving research like vaccines for TB and HIV.
Sep 20, 2023 • 1h 40min

#70 – Liv Boeree on Healthy vs Unhealthy Competition

Liv Boeree is a former poker champion turned science communicator and podcaster, with a background in astrophysics. In 2014, she founded the nonprofit Raising for Effective Giving, which has raised more than $14 million for effective charities. Before retiring from professional poker in 2019, Liv was the Female Player of the Year for three years running. Currently she hosts the Win-Win podcast (you’ll enjoy it if you enjoy this podcast). You can see more links and a full transcript at hearthisidea.com/episodes/boeree.

In this episode we talk about:
- Is the ‘poker mindset’ valuable? Is it learnable?
- How and why to bet on your beliefs — and whether there are outcomes you shouldn’t make bets on
- Would cities be better without public advertisements?
- What is Moloch, and why is it a useful abstraction?
- How do we escape multipolar traps?
- Why might advanced AI (not) act like profit-seeking companies?
- What’s so important about complexity? What is complexity, for that matter?

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Aug 31, 2023 • 1h 47min

#69 – Jon Y (Asianometry) on Problems and Progress in Semiconductor Manufacturing

Jon Y, creator of the Asianometry YouTube channel and newsletter, discusses compute trends, semiconductor geopolitics, and the potential of room temperature superconductivity. They explore the distinctions between consumer and specialized hardware for AI, the role of semiconductor chips in the industry, and the challenges and concerns in semiconductor manufacturing. They also delve into complex supply chains, the concept of superconductivity, the research process, and the connection between nuclear weapons and the semiconductor supply chain.
Aug 4, 2023 • 1h 39min

#68 – Steven Teles on What the Conservative Legal Movement Teaches About Policy Advocacy

Steven Teles is a Professor of Political Science at Johns Hopkins University and a Senior Fellow at the Niskanen Center. His work focuses on American politics, and he has written several books on topics such as elite politics, the judiciary, and mass incarceration. You can see more links and a full transcript at hearthisidea.com/teles.

In this episode we talk about:
- The rise of the conservative legal movement
- How ideas can come to be entrenched in American politics
- Challenges in building a new academic field like "law and economics"
- The limitations of doing quantitative evaluations of advocacy groups

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Jul 18, 2023 • 2h

#67 – Guive Assadi on Whether Humanity Will Choose Its Future

Guive Assadi is a Research Scholar at the Center for the Governance of AI. Guive’s research focuses on the conceptual clarification of, and prioritisation among, potential risks posed by emerging technologies. He holds a master’s in history from Cambridge University, and a bachelor’s from UC Berkeley.

In this episode, we discuss Guive's paper, Will Humanity Choose Its Future?
- What is an 'evolutionary future', and would it count as an existential catastrophe?
- How did the agricultural revolution deliver a world which few people would have chosen?
- What does it mean to say that we are living in the dreamtime? Will it last?
- What competitive pressures in the future could drive the world to undesired outcomes?
  - Digital minds
  - Space settlement
- What measures could prevent an evolutionary future, and allow humanity to more deliberately choose its future?
  - World government
  - Strong global coordination
  - Defensive advantage
- Should this all make us more or less hopeful about humanity's future?
- Ideas for further research

Guive's recommended reading:
- Rationalist Explanations for War by James D. Fearon
- Meditations on Moloch by Scott Alexander
- The Age of Em by Robin Hanson
- What is a Singleton? by Nick Bostrom

Other key links:
- Will Humanity Choose Its Future? by Guive Assadi
- Colder Wars by Gwern
- The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter by Joseph Henrich (and a review by Scott Alexander)
Jun 25, 2023 • 2h 32min

#66 – Michael Cohen on Input Tampering in Advanced RL Agents

Michael Cohen is a DPhil student at the University of Oxford with Mike Osborne. He will be starting a postdoc with Professor Stuart Russell at UC Berkeley, with the Center for Human-Compatible AI. His research considers the expected behaviour of generally intelligent artificial agents, with a view to designing agents that we can expect to behave safely. You can see more links and a full transcript at www.hearthisidea.com/episodes/cohen.

We discuss:
- What is reinforcement learning, and how is it different from supervised and unsupervised learning?
- Michael's recently co-authored paper titled 'Advanced artificial agents intervene in the provision of reward'
- Why might it be hard to convey what we really want to RL learners — even when we know exactly what we want?
- Why might advanced RL systems tamper with their sources of input, and why could this be very bad?
- What assumptions need to hold for this "input tampering" outcome?
- Is reward really the optimisation target? Do models "get reward"?
- What's wrong with the analogy between RL systems and evolution?

Key links:
- Michael's personal website
- 'Advanced artificial agents intervene in the provision of reward' by Michael K. Cohen, Marcus Hutter, and Michael A. Osborne
- 'Pessimism About Unknown Unknowns Inspires Conservatism' by Michael Cohen and Marcus Hutter
- 'Intelligence and Unambitiousness Using Algorithmic Information Theory' by Michael Cohen, Badri Vallambi, and Marcus Hutter
- 'Quantilizers: A Safer Alternative to Maximizers for Limited Optimization' by Jessica Taylor
- 'RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning' by Marc Rigter, Bruno Lacerda, and Nick Hawes
- Season 40 of Survivor
Jun 10, 2023 • 1h 44min

#65 – Katja Grace on Slowing Down AI and Whether the X-Risk Case Holds Up

Katja Grace is a researcher and writer. She runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of artificial intelligence (AI). Katja blogs primarily at worldspiritsockpuppet, and indirectly at Meteuphoric, Worldly Positions, LessWrong and the EA Forum.

We discuss:
- What is AI Impacts working on?
- Counterarguments to the basic AI x-risk case
- Reasons to doubt that superhuman AI systems will be strongly goal-directed
- Reasons to doubt that if goal-directed superhuman AI systems are built, their goals will be bad by human lights
- Aren't deep learning systems fairly good at understanding our 'true' intentions?
- Reasons to doubt that (misaligned) superhuman AI would overpower humanity
- The case for slowing down AI
- Is AI really an arms race?
- Are there examples from history of valuable technologies being limited or slowed down?
- What does Katja think about the recent open letter on pausing giant AI experiments?
- Why read George Saunders?

Key links:
- World Spirit Sock Puppet (Katja's main blog)
- Counterarguments to the basic AI x-risk case
- Let's think about slowing down AI
- We don't trade with ants
- Thank You, Esther Forbes (George Saunders)

You can see more links and a full transcript at hearthisidea.com/episodes/grace.
Jun 7, 2023 • 3h 13min

#64 – Michael Aird on Strategies for Reducing AI Existential Risk

Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52.

In this episode, we talk about:
- The basic case for working on existential risk from AI
- How to begin figuring out what to do to reduce the risks
- Threat models for the risks of advanced AI
- 'Theories of victory' for how the world mitigates the risks
- 'Intermediate goals' in AI governance
- What useful (and less useful) research looks like for reducing AI x-risk
- Practical advice for usefully contributing to efforts to reduce existential risk from AI
- Resources for getting started and finding job openings

Key links:
- Apply to be a Compute Governance Researcher or Research Assistant at Rethink Priorities (applications open until June 12, 2023)
- Rethink Priorities' survey on intermediate goals in AI governance
- The Rethink Priorities newsletter
- The Rethink Priorities tab on the Effective Altruism Forum
- Some AI Governance Research Ideas compiled by Markus Anderljung & Alexis Carlier
- Strategic Perspectives on Long-term AI Governance by Matthijs Maas
- Michael's posts on the Effective Altruism Forum (under the username "MichaelA")
- The 80,000 Hours job board
