
EAG Talks

Latest episodes

Nov 23, 2023 • 54min

Screening all DNA synthesis and reliably detecting stealth pandemics | Kevin Esvelt | EAG Boston 23

Pandemic security aims to safeguard the future of civilization from exponentially spreading biological threats. In this talk, Kevin will outline two distinct scenarios, "Wildfire" and "Stealth", by which pandemic-causing pathogens could cause societal collapse. He will then explain the "Delay, Detect, Defend" plan to prevent such pandemics, including the key technological programs his team oversees to mitigate pandemic risk: a DNA synthesis screening system that prevents malicious actors from synthesizing and releasing pandemic-causing pathogens; a pathogen-agnostic wastewater biosurveillance system for early detection of novel pathogens; AI/bio capability evaluations and technical risk mitigation strategies; and pandemic-proof PPE.

Kevin M. Esvelt is an associate professor at the MIT Media Lab, where he leads the Sculpting Evolution Group in advancing biotechnology safely. In 2013, he invented CRISPR-based gene drive, kept it to himself until confident the technology favored defense, then revealed his findings and called for open discussion and safeguards before building the first CRISPR-based gene drive system and demonstrating reversibility in the laboratory. Having focused on mitigating catastrophic biorisks for over a decade, his MIT lab seeks to accelerate beneficial advances while safeguarding biotechnology against mistrust and misuse. Projects include building catalytic platforms for directed evolution, pioneering new ways of developing ecotechnologies with the guidance of local communities, developing early-warning systems to reliably detect any catastrophic biological threat, applying cryptographic methods to enable secure and universal DNA synthesis screening, and advising policymakers on how best to mitigate global catastrophic biorisks.
Nov 23, 2023 • 54min

Quantifying animal suffering and the impact of welfare interventions | Cynthia Schuck | EAG Boston 23

Content warning: This presentation contains images some may find distressing.

In this talk, Cynthia Schuck, Research Director of the Welfare Footprint Project, describes their approach to quantifying animal suffering and how it can be used to measure the impact of interventions and inform policies. She also addresses major gaps in welfare research and in our understanding of suffering. The talk concludes by highlighting how collaboration with organizations and academics is key to enabling the broad mapping of animal suffering across settings, and points to promising areas of high-impact work.

Cynthia earned her PhD in Zoology from Oxford University, followed by postdoctoral research on the evolution of advanced cognition. She also holds an MSc in Ecology and a BSc in Life Sciences. Between 2005 and 2017, Cynthia was the co-founder and Scientific Director of Origem Scientifica, a research company in the field of global health. Since 2018, she has been the Research Director of the Welfare Footprint Project.
Nov 23, 2023 • 56min

Panel on nuclear risk | James Acton, Francesca Giovannini, and Heather Williams | EAG Boston 23

This will be a panel discussion on nuclear policy, deterrence, inadvertent escalation, entanglement, emerging technologies, and related topics.

James Acton holds the Jessica T. Mathews Chair and is co-director of the Nuclear Policy Program at the Carnegie Endowment for International Peace. A physicist by training, Acton is currently writing a book on the nuclear escalation risks of advanced nonnuclear weapons and how to mitigate them.

Francesca Giovannini is the Executive Director of the Project on Managing the Atom at the Harvard Kennedy School's Belfer Center for Science & International Affairs. In addition, she is an Adjunct Associate Professor at the Fletcher School of Law and Diplomacy at Tufts University, where she designs and teaches graduate courses on global nuclear policies and emerging technologies.

Heather Williams is the director of the Project on Nuclear Issues and a senior fellow in the International Security Program at the Center for Strategic and International Studies (CSIS). She is also an associate fellow with the Project on Managing the Atom in the Belfer Center for Science and International Affairs at the Harvard Kennedy School.
Nov 23, 2023 • 45min

Opening session | Frances Lorenz, Arden Koehler, Lizka Vaintrob, Kuhan Jeyapragasan | EAG Boston 23

Join the organizers for a brief welcome and a group photo of attendees, followed by three short talks from key community members. We will hear remarks from Lizka Vaintrob, Kuhan Jeyapragasan, and Arden Koehler.

Lizka runs the EA Newsletter, the EA Forum Digest, and the non-engineering side of the EA Forum at the Centre for Effective Altruism. Kuhan currently runs the Cambridge Boston Alignment Initiative, which runs AI technical safety and governance programming primarily at MIT and Harvard, and previously co-founded and ran the Stanford Existential Risks Initiative. Arden manages the 80,000 Hours website, with a particular focus on the advice they give their readers about how to use their careers to tackle the world's most pressing problems. She has a PhD in philosophy from New York University (2020), where she specialised in ethics and attitudes toward time.
Nov 23, 2023 • 56min

Lessons from reinforcement learning from human feedback | Stephen Casper | EAG Boston 23

Reinforcement Learning from Human Feedback (RLHF) has emerged as the central alignment technique used to finetune state-of-the-art systems such as GPT-4, Claude-2, Bard, and Llama-2. However, RLHF has a number of known problems, and these models have exhibited some troubling alignment failures. How did we get here? What lessons should we learn? And what does it mean for the next generation of AI systems?

Stephen is a third-year Computer Science PhD student at MIT in the Algorithmic Alignment Group, advised by Dylan Hadfield-Menell. He has previously worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI. His main focus is on interpreting, diagnosing, debugging, and auditing deep learning systems.
Nov 23, 2023 • 54min

Causes and uncertainty: Rethinking value in expectation | Bob Fischer, Laura Duffy | EAG Boston 23

We want to help others as much as we can. Knowing how is hard: there are many empirical, normative, and decision-theoretic uncertainties that make it difficult to identify the best paths toward that goal. Should we be focused on sparing children from vitamin deficiencies? Reducing suffering on factory farms? Mitigating the threats associated with AI? Should we split our attention between all three? Something else entirely? Two common answers to these questions are (1) that we ought to set priorities based on what would maximize expected value and (2) that expected value maximization supports prioritizing existential risk mitigation over all else.

This presentation introduces a sequence from Rethink Priorities' Worldview Investigations Team that examines these two claims. We argue that there are reasons to doubt them both, stemming from significant uncertainty about the correct normative theory of ethical decision-making and uncertainty about many of the parameters and assumptions that enter into expected value calculations. We also introduce a tool for comparing the cost-effectiveness of different causes and summarize its implications for decision-making under uncertainty. There is a follow-on workshop (Modeling your own cause prioritization) straight after this talk for those who would like hands-on experience using the model.
Nov 23, 2023 • 52min

Evidence Action: Inside the accelerator | Kevin Kelsey | EAG Boston 23

Kevin Kelsey, Associate Director of New Program Development and Cost-Effectiveness, will give us an inside look at Evidence Action's Accelerator, their engine for new program development. The Accelerator develops new programs using a six-stage, decision-focused process designed to scale only the most cost-effective, evidence-based interventions. He'll provide insight into how Evidence Action moves promising new programs through their development pipeline, bringing to scale only those with the potential to cost-effectively reach hundreds of millions of people.
Nov 23, 2023 • 45min

Closing session: Averting future pandemics | Matthew Meselson | EAG Boston 23

The final session of the conference will include some closing words, followed by a talk on averting future pandemics from Dr. Matthew Meselson. Dr. Meselson will discuss past pandemics, each pathogen's mode of transmission, measures to prevent such transmission, and the practical politics of implementing those measures.

Dr. Meselson has conducted research at Harvard University, mainly in molecular genetics. Since 1963, he has had an interest in biological and chemical weapons arms control and defense and has served as a consultant on this subject to numerous government agencies. He has organized and implemented three on-site investigations: of the military use of herbicides in Vietnam; of the 1979 anthrax outbreak in the Soviet city of Sverdlovsk; and of the allegations of toxin warfare, the so-called 'yellow rain', in Laos and Cambodia during 1979-1983.
Nov 11, 2023 • 53min

What we’re learning about spreading EA ideas | Grace Adams | EAGxAustralia 2022

In this talk, we will cover: why we need to spread the ideas of effective altruism, the principles Giving What We Can uses to spread EA ideas, what has worked well in the past, preliminary results and insights from recently completed marketing tests, and how individuals can best share these ideas.
Nov 11, 2023 • 27min

Using stories to make the long-term feel real | Michael Aird and Elise Bohan | EAGxAustralia 2022

Join Elise Bohan (Future of Humanity Institute) and Michael Aird (Rethink Priorities) as they discuss how to use stories to get people to care about the long-term future. Elise will reflect on her book Future Superhuman and the importance of stories when talking about transhumanism.
