

EAG Talks
Aaron Bergman
Talks from EA Global
Episodes

Jun 21, 2024 • 33min
Peace Boat’s Hibakusha Project—testimony from survivors of Hiroshima and Nagasaki | EAG London 2024
This session features testimony from Ms. Tanaka Toshiko, Mr. Ogawa Tadayoshi, and other representatives of Peace Boat’s Hibakusha Project. Ms. Tanaka and Mr. Ogawa are survivors of the bombings of Hiroshima and Nagasaki; the session recounts the devastating impacts of nuclear warfare through first-hand accounts, illustrating the importance of ensuring such horrors are never repeated.
Ms. Tanaka Toshiko was exposed to the atomic bomb while on her way to school, 2.3 km from the hypocenter. Without thinking, she covered her face with her right arm, and so suffered burns to her head, her right arm, and the back left side of her neck. From that night she ran a high fever and lost consciousness, but she somehow survived. She has travelled to the United States ten times in the past seven years, including at the invitation of the “Hibakusha Stories” project in New York, and has given testimony to many people in the US. To celebrate the United Nations International Day of Peace in 2020, five U.S. gardens raked “patterns for peace” designed by Tanaka Toshiko into their karesansui (Japanese dry landscape gardens).
On the day the bomb was dropped, Mr. Ogawa Tadayoshi had been evacuated outside the city; he was exposed to radiation when his family returned to Nagasaki one week later to check on their home. He has no direct memory of the bombing itself, but he joined a Peace Boat voyage in 2012 to pass on the testimonies of atomic bomb survivors to future generations. Mr. Ogawa is an amateur photographer and collects photographs taken each year on August 9 at 11:02 a.m., the time the bomb was dropped on Nagasaki. Last year he gathered 200 photographs from Nagasaki and around the world, and he aims to collect 1,000 by the 100th anniversary of the atomic bombing.
Watch on YouTube: https://www.youtube.com/watch?v=YXP9OlCR1Ho

Mar 6, 2024 • 12min
Opening Remarks | Ben West | EA Global Bay Area 2024
Join the organizers for a brief welcome and a group photo of attendees, followed by a short talk from Ben West.
Ben is the Interim Managing Director of the Centre for Effective Altruism (CEA), responsible for overseeing CEA’s work during the transition to new permanent leadership. He will speak on the current state of the EA movement and possible directions for its future.
Watch on YouTube: https://www.youtube.com/watch?v=XqwE9RyxxQs&t=4s

Mar 6, 2024 • 48min
Pandemic-Proof PPE & AI Safety | Ryan Ritterson | EA Global Bay Area 2024
In this session, Ryan Ritterson will use examples of his and others’ work at Gryphon to illustrate how to effectively influence public policy. Drawing on Gryphon’s data-driven approach, he’ll share key lessons and takeaways for others interested in shaping policy.
Along the way, he’ll also discuss two recent Gryphon efforts: one focused on developing and securing pandemic-proof PPE, and another on Gryphon’s AI safety contributions, which played a key role in informing US policy, including the recent executive order.
Watch on YouTube: https://www.youtube.com/watch?v=0ohVtc5Vnps

Mar 6, 2024 • 52min
Scheming AIs | Joe Carlsmith | EA Global Bay Area 2024
This talk examines whether advanced AIs that perform well in training will be doing so in order to gain power later — a behavior Joe Carlsmith calls "scheming" (also often called "deceptive alignment"). This talk gives an overview of his recent report on the topic, available on arXiv here: https://arxiv.org/abs/2311.08379.
Joe Carlsmith is a senior research analyst at Open Philanthropy, where he focuses on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and he has a doctorate in philosophy from the University of Oxford.
Watch on YouTube: https://www.youtube.com/watch?v=AxUTiGS6BHM

Mar 6, 2024 • 41min
Preventing Engineered Pandemics | Tessa Alexanian | EA Global Bay Area 2024
Should you be able to order smallpox DNA in the mail? Biosecurity professionals have argued for almost 20 years that synthesis companies should screen orders so that pathogen and toxin sequences are only sent to people with a legitimate scientific use for them. Now it seems that fears of AI-engineered pandemics may spur governments to make screening mandatory.
Tessa will discuss why securing nucleic acid synthesis is a biosecurity priority, methods for identifying concerning synthesis orders, and why it’s so challenging to implement robust screening systems.
Watch on YouTube: https://www.youtube.com/watch?v=q-dVdRe3oco

Mar 6, 2024 • 56min
Media Professionals on Impactful GCR Communications | Kelsey Piper, Shakeel Hashim, and Clara Collier | EA Global Bay Area 2024
Media coverage of catastrophic risk is going mainstream. Our panel of media professionals discusses where that coverage is going right and where it is going wrong, and what actions they are taking to focus the conversation on well-reasoned risk models and effective interventions.
On the panel will be Kelsey Piper of Vox Future Perfect, Shakeel Hashim of the AI Safety Communications Centre, and Clara Collier of Asterisk Magazine.
Watch on YouTube: https://www.youtube.com/watch?v=33XDXk6wBgg

Mar 6, 2024 • 53min
Cause Prioritization & Global Catastrophic Risk | Hayley Clatterbuck | EA Global Bay Area 2024
Hayley Clatterbuck will summarize key findings from Rethink Priorities’ “Causes and uncertainty: Rethinking value in expectation” (CURVE) project, which evaluated the cost-effectiveness of existential risk mitigation projects under both standard expected utility maximization and risk aversion. The project found that persistence of effect and future growth trajectory are the strongest contributors to expected utility, and that while different risk models often deliver different recommendations about which existential risk projects to pursue, there are some actions on which the models agree. Clatterbuck will use these findings to draw lessons for cause prioritization within the global catastrophic risk space.

Mar 6, 2024 • 46min
The Precipice Revisited | Toby Ord | EA Global Bay Area 2024
The five years since Toby Ord wrote The Precipice have seen dramatic changes to the landscape of existential risk. Ord will explore the biggest changes to the biggest risks, showing how new developments have upended key assumptions and pushed some of these risks into new phases. He will also show how the world has woken up to the very idea of existential risk, which has become an explicit priority and talking point on the global stage.
Toby Ord is a philosopher at Oxford University. His work focuses on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?
Toby's earlier work explored the ethics of global health and global poverty. This led him to found Giving What We Can, whose 8,000 members have so far donated over 300 million dollars to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement, encouraging thousands of people to use reason and evidence to help others as much as possible.
His current research is on avoiding the threat of human extinction and thus safeguarding a positive future for humanity, which he considers to be among the most pressing and neglected issues we face. He addresses this in his book, The Precipice.
Watch on YouTube: https://www.youtube.com/watch?v=vQ3ml6wcsn4

Mar 6, 2024 • 50min
Sleeper Agents | Evan Hubinger | EA Global Bay Area 2024
If an AI system learned a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? That’s the question Evan and his coauthors at Anthropic sought to answer in their work on “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training”, which Evan will be discussing.
Evan Hubinger leads the new Alignment Stress-Testing team at Anthropic, which is tasked with red-teaming Anthropic’s internal alignment techniques and evaluations. Prior to joining Anthropic, Evan was a Research Fellow at the Machine Intelligence Research Institute, where he worked on a variety of theoretical alignment research, including “Risks from Learned Optimization in Advanced Machine Learning Systems”. The Sleeper Agents paper is the Alignment Stress-Testing team’s first.
Watch on YouTube: https://www.youtube.com/watch?v=BgfT0AcosHw

Mar 6, 2024 • 39min
The Sword of Damocles | Dan Zimmer | EA Global Bay Area 2024
The final session of the conference will include some closing words, followed by a talk and fireside chat with Dan Zimmer.
Dan Zimmer completed his Ph.D. in the Department of Government at Cornell University. His research focuses on the implications that anthropogenic existential risk (x-risk) poses for some of the foundational categories of Western political thought, paying particular attention to the history of both engagement with and avoidance of the subject. His doctoral dissertation examined how the political debates inspired by the thermonuclear fallout crisis of the 1950s came to be reformulated in light of the growing public preoccupation, beginning in the 1980s, with ecological x-risks such as global warming and nuclear winter. His research at Stanford seeks to bring this historical analysis up to the present by tracking how the contemporary study of x-risk came to be formalized in the early 2000s in response to growing concerns about the prospect of machine superintelligence.
Watch on YouTube: https://www.youtube.com/watch?v=E6Fe1iPCgfU