Future Matters Reader

Latest episodes

Mar 20, 2023 • 19min

Holden Karnofsky — Success without dignity: a nearcasting story of avoiding catastrophe by luck

This episode explores AI development and existential risk: the challenges of achieving human-level AI safely, the impact of training AI on human concepts, risks in AI alignment, and strategies to mitigate them. It also discusses how AI risk compares to risks in other human ventures, emphasizing the need for alignment research, standards, security, and communication.
Mar 20, 2023 • 15min

Larks — A Windfall Clause for CEO could worsen AI race dynamics

In this post, Larks argues that the proposal to make AI firms promise to donate a large fraction of profits if they become extremely profitable would primarily benefit the management of those firms, giving managers an incentive to move fast, aggravating race dynamics, and in turn increasing existential risk. https://forum.effectivealtruism.org/posts/ewroS7tsqhTsstJ44/a-windfall-clause-for-ceo-could-worsen-ai-race-dynamics
Mar 20, 2023 • 7min

Otto Barten — Paper summary: The effectiveness of AI existential risk communication to the American and Dutch public

This is Otto Barten's summary of 'The effectiveness of AI existential risk communication to the American and Dutch public' by Alexia Georgiadis. In this paper, Georgiadis measures changes in participants' awareness of AGI risks after they consume various media interventions. Summary: https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary-the-effectiveness-of-ai-existential-risk Original paper: https://existentialriskobservatory.org/papers_and_reports/The_Effectiveness_of_AI_Existential_Risk_Communication_to_the_American_and_Dutch_Public.pdf Note: Some tables in the summary have been omitted in this audio version.
Mar 20, 2023 • 58min

Shulman & Thornley — How much should governments pay to prevent catastrophes? Longtermism's limited role

Carl Shulman and Elliott Thornley argue that longtermists should aim to get governments to adopt global catastrophic risk policies based on standard cost-benefit analysis, rather than on arguments that stress the overwhelming importance of the far future. https://philpapers.org/archive/SHUHMS.pdf Note: Tables, notes, and references in the original article have been omitted.
Mar 14, 2023 • 13min

Elika Somani — Advice on communicating in and around the biosecurity policy community

"The field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy." https://forum.effectivealtruism.org/posts/HCuoMQj4Y5iAZpWGH/advice-on-communicating-in-and-around-the-biosecurity-policy Note: Some footnotes in the original article have been omitted.
Mar 14, 2023 • 7min

Riley Harris — Summary of 'Are we living at the hinge of history?' by William MacAskill

The Global Priorities Institute has published a new paper summary: 'Are we living at the hinge of history?' by William MacAskill. https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/ Note: Footnotes and references in the original article have been omitted.
Mar 14, 2023 • 6min

Riley Harris — Summary of 'Longtermist institutional reform' by Tyler M. John and William MacAskill

The Global Priorities Institute has published a new paper summary: 'Longtermist institutional reform' by Tyler John & William MacAskill. https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/ Note: Footnotes and references in the original article have been omitted.
Mar 13, 2023 • 45min

Hayden Wilkinson — Global priorities research: Why, how, and what have we learned?

The Global Priorities Institute has released Hayden Wilkinson's presentation on global priorities research. (The talk was given in mid-September last year but remained unlisted until now.) https://globalprioritiesinstitute.org/hayden-wilkinson-global-priorities-research-why-how-and-what-have-we-learned/
Mar 13, 2023 • 8min

Piper — What should be kept off-limits in a virology lab?

New rules around gain-of-function research make progress in striking a balance between scientific reward and catastrophic risk. https://www.vox.com/future-perfect/2023/2/1/23580528/gain-of-function-virology-covid-monkeypox-catastrophic-risk-pandemic-lab-accident
Mar 13, 2023 • 11min

Ezra Klein — This changes everything

"One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough." https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html
