
Hear This Idea
Hear This Idea is a podcast showcasing new thinking in philosophy, the social sciences, and effective altruism. Each episode has an accompanying write-up at www.hearthisidea.com/episodes.
Latest episodes

Jun 7, 2023 • 3h 13min
#64 – Michael Aird on Strategies for Reducing AI Existential Risk
Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52.
In this episode, we talk about:
The basic case for working on existential risk from AI
How to begin figuring out what to do to reduce the risks
Threat models for the risks of advanced AI
'Theories of victory' for how the world mitigates the risks
'Intermediate goals' in AI governance
What useful (and less useful) research looks like for reducing AI x-risk
Practical advice for usefully contributing to efforts to reduce existential risk from AI
Resources for getting started and finding job openings
Key links:
Apply to be a Compute Governance Researcher or Research Assistant at Rethink Priorities (applications open until June 12, 2023)
Rethink Priorities' survey on intermediate goals in AI governance
The Rethink Priorities newsletter
The Rethink Priorities tab on the Effective Altruism Forum
Some AI Governance Research Ideas compiled by Markus Anderljung & Alexis Carlier
Strategic Perspectives on Long-term AI Governance by Matthijs Maas
Michael's posts on the Effective Altruism Forum (under the username "MichaelA")
The 80,000 Hours job board

May 13, 2023 • 2h 58min
#63 – Ben Garfinkel on AI Governance
Ben Garfinkel is a Research Fellow at the University of Oxford and Acting Director of the Centre for the Governance of AI.
In this episode we talk about:
An overview of the AI governance space, and concrete research questions that Ben would like to see more work on
How existing arguments for risks from transformative AI have held up, and Ben's personal motivations for working on global risks from AI
GovAI’s own work and opportunities for listeners to get involved
Further reading and a transcript is available on our website: hearthisidea.com/episodes/garfinkel
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

Apr 20, 2023 • 53min
#62 – Anders Sandberg on Exploratory Engineering, Value Diversity, and Grand Futures
Anders Sandberg is a researcher, futurist, transhumanist and author. He holds a PhD in computational neuroscience from Stockholm University, and is currently a Senior Research Fellow at the Future of Humanity Institute at the University of Oxford. His research covers human enhancement, exploratory engineering, and 'grand futures' for humanity.
This episode is a recording of a live interview at EAGx Cambridge (2023). You can find upcoming effective altruism conferences here: www.effectivealtruism.org/ea-global
We talk about:
What is exploratory engineering and what is it good for?
Progress on whole brain emulation
Are we near the end of humanity's tech tree?
Is diversity intrinsically valuable in grand futures?
How Anders does research
Virtue ethics for civilisations
Anders' takes on AI risk and whether LLMs are close to general intelligence
And much more!
Further reading and a transcript is available on our website: hearthisidea.com/episodes/sandberg-live

Apr 3, 2023 • 60min
#61 – Rory Stewart on GiveDirectly and Massively Scaling Cash Transfers
Rory Stewart is the President of GiveDirectly and a visiting fellow at Yale’s Jackson Institute for Global Affairs. Before that, Rory was (amongst other things) a Member of Parliament in the UK, a Professor in Human Rights at Harvard, and a diplomat. He is also the author of several books and co-hosts the podcast The Rest Is Politics.
In this episode, we talk about:
The moral case for radically scaling cash-transfers
What we can do to raise governments’ ambitions to end global poverty
What Rory has learned about aid since serving as Secretary of State for International Development
Further reading is available on our website: hearthisidea.com/episodes/stewart

Mar 15, 2023 • 1h 31min
#60 – Jaime Sevilla on Trends in Machine Learning
Jaime Sevilla is the Director of Epoch, a team of researchers investigating and forecasting the development of advanced AI. This is his second time on the podcast.
Over the next few episodes, we will be exploring the potential for catastrophe caused by advanced artificial intelligence. Why? First, you might think that AI is likely to become transformatively powerful within our lifetimes. Second, you might think that such transformative AI could result in catastrophe unless we're very careful about how it gets implemented. This episode is about understanding the first of those two claims.
Fin spoke with Jaime about:
We've seen a remarkable amount of progress in AI capabilities in the last few months, even weeks. How should we think about that progress continuing into the future?
How has the amount of compute used to train AI models been changing over time? What about algorithmic efficiency?
Will data soon become a bottleneck in training state-of-the-art text models?
Further reading is available on our website: hearthisidea.com/episodes/sevilla

Mar 2, 2023 • 32min
#59 – Chris Miller on the History of Semiconductors, TSMC, and the CHIPS Act
Chris Miller is an Associate Professor of International History at Tufts University and author of the book “Chip War: The Fight for the World's Most Critical Technology” (the Financial Times Business Book of the Year). He is also a Visiting Fellow at the American Enterprise Institute, and Eurasia Director at the Foreign Policy Research Institute.
Over the next few episodes we will be exploring the potential for catastrophe caused by advanced artificial intelligence. But before we look ahead, we wanted to give a primer on where we are today: on the history and trends behind the development of AI so far. In this episode, we discuss:
How semiconductors have historically been related to US military strategy
How the Taiwanese company TSMC became such an important player in this space — while other countries’ attempts have failed
What the CHIPS Act signals about attitudes to compute governance in the decade ahead
Further reading is available on our website: hearthisidea.com/episodes/miller

Feb 24, 2023 • 2h 40min
Bonus: Preventing an AI-Related Catastrophe
AI might bring huge benefits — if we avoid the risks.
This episode is a rebroadcast of an article written for 80,000 Hours, Preventing an AI-Related Catastrophe. It was written by Benjamin Hilton and narrated by Perrin Walker for Type III Audio.
The full URL is: 80000hours.org/problem-profiles/artificial-intelligence
Why are we sharing this article on our podcast feed? Over the next few months, we are planning to do a bunch of episodes on artificial intelligence. But first, we wanted to share an introduction to the problems: something which explains why AI might pose existential-level threats to humanity, and why you might prioritise this problem when you’re thinking about what to work on or just what to learn more about. And we don’t think we’re going to be able to do a better job than this article.
You can view all our episodes at hearthisidea.com, and you can give feedback at feedback.hearthisidea.com/listener.

Feb 16, 2023 • 3h 42min
#58 – Carl Robichaud on Reducing the Risks of Nuclear War
A full writeup of this episode, including references and a transcript, is available on our website: https://hearthisidea.com/episodes/robichaud.
Carl Robichaud co-leads Longview Philanthropy’s programme on nuclear weapons.
We discuss:
Lessons from the Ukraine crisis
China's future as a nuclear power
Nuclear near-misses
The Reykjavik Summit, Acheson–Lilienthal Report and Baruch Plan
Lessons from nuclear risk for other emerging technological risks
What's happened to philanthropy aimed at reducing risks from nuclear weapons, and what philanthropy can support today

Jan 30, 2023 • 4h 1min
Bonus: Damon Binder on Economic History and the Future of Physics
Damon Binder is a research analyst at Open Philanthropy. His research focuses on potential risks from pandemics and from biotechnology. He previously worked as a research scholar at the University of Oxford’s Future of Humanity Institute, where he studied existential risks. Prior to that he completed his PhD in theoretical physics at Princeton University.
We discuss:
How did early states manage large populations?
What explains the hockey-stick shape of world economic growth?
Did urbanisation enable more productive farming, or vice versa?
What does transformative AI mean for growth?
Would 'degrowth' benefit the world?
What do theoretical physicists actually do, and what are they still trying to understand?
Why not just run bigger physics experiments to solve the latest problems?
What could the history of physics tell us about its future?
In what sense are the universe's constants fine-tuned?
Will the universe ever just... end?
Why might we expect digital minds to be a big deal?
Links
Damon's list of book recommendations
A Collection of Unmitigated Pedantry (history blog)
Cold Takes by Holden Karnofsky (blog on futurism and AI)
Highlight from Cold Takes: The Most Important Century series of posts
Crusader Kings
Europa Universalis
The Age of Em by Robin Hanson
The Five Ages of the Universe by Fred Adams
You can find more episodes and links at our website, hearthisidea.com.
(This episode is a bonus episode because it's less focused on topics in effective altruism than normal)

Dec 20, 2022 • 1h 49min
#57 – Greg Nemet on Technological Change and How Solar Became Cheap
A full writeup of this episode, including references and a transcript, is available on our website: https://hearthisidea.com/episodes/nemet
Greg Nemet is a Professor at the University of Wisconsin–Madison's La Follette School of Public Affairs and an Andrew Carnegie Fellow. He is also the author of How Solar Energy Became Cheap.
We discuss:
The distinct phases that helped solar PV move down its learning curve
What lessons we can learn on how to accelerate and affect other technologies
Theories about National Innovation Systems and lock-in