
Michael Aird
Michael Aird is a senior research manager at Rethink Priorities, specializing in Artificial Intelligence Governance and Strategy, with a background in nuclear risk research and longtermist macrostrategy research.
Top 3 podcasts with Michael Aird
Ranked by the Snipd community

36 snips
Aug 31, 2022
#52 – Michael Aird on how to do Impact-Driven Research
Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian.
We discuss:
Whether you should stay in academia if you want to do impactful research
How to start looking for roles at impact-driven research organisations
What simple changes can improve how you write about your research
The uses of 'reductionism' and quantitative thinking
The concept of 'reasoning transparency'
Michael’s experience investigating nuclear security
Key links:
Michael's posts on the EA Forum
Interested in EA/longtermist research careers? Here are my top recommended resources
Don’t think, just apply! (usually)
List of EA funding opportunities
Rethink Priorities
Reasoning Transparency
A central directory for open research questions
You can find more links, and read the full transcript, in this episode's write-up: hearthisidea.com/episodes/aird.
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter.
Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

8 snips
Jun 7, 2023 • 3h 13min
#64 – Michael Aird on Strategies for Reducing AI Existential Risk
Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52.
In this episode, we talk about:
The basic case for working on existential risk from AI
How to begin figuring out what to do to reduce the risks
Threat models for the risks of advanced AI
'Theories of victory' for how the world mitigates the risks
'Intermediate goals' in AI governance
What useful (and less useful) research looks like for reducing AI x-risk
Practical advice for usefully contributing to efforts to reduce existential risk from AI
Resources for getting started and finding job openings
Key links:
Apply to be a Compute Governance Researcher or Research Assistant at Rethink Priorities (applications open until June 12, 2023)
Rethink Priorities' survey on intermediate goals in AI governance
The Rethink Priorities newsletter
The Rethink Priorities tab on the Effective Altruism Forum
Some AI Governance Research Ideas compiled by Markus Anderljung & Alexis Carlier
Strategic Perspectives on Long-term AI Governance by Matthijs Maas
Michael's posts on the Effective Altruism Forum (under the username "MichaelA")
The 80,000 Hours job board

Apr 1, 2023 • 38min
Workshop: Building a Theory of Change for Your Research | Michael Aird | EAG DC 22
Michael Aird discusses the importance of developing a theory of change for research to enhance impact and career progression. He covers key concepts like paths to impact, stakeholders, and effective research project planning and execution. The workshop provides strategies to connect research to tangible actions and decision-makers, ensuring research is insightful and actionable.