
Holden Karnofsky

Co-founder and CEO of Open Philanthropy, discussing his experiences with parenthood and family planning.

Top 10 podcasts with Holden Karnofsky

Ranked by the Snipd community
93 snips
Jul 31, 2023 • 3h 14min

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars’ worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well.

In today's conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky, to making AI risks the focus of his work.

Links to learn more, summary and full transcript.

(As Holden reminds us, his wife is also the president of one of the world's top AI labs, Anthropic, giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours' largest financial supporter.)

One point he makes is that people are too narrowly focused on AI becoming 'superintelligent'. While that could happen and would be important, it's not necessary for AI to be transformative or perilous. Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought.

As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security.

In today’s episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as:

• Why we can’t rely on just gradually solving those problems as they come up, the way we usually do with new technologies.
• What multiple different groups can do to improve our chances of a good outcome — including listeners to this show, governments, computer security experts, and journalists.
• Holden’s case against 'hardcore utilitarianism' and what actually motivates him to work hard for a better world.
• What the ML and AI safety communities get wrong in Holden's view.
• Ways we might succeed with AI just by dumb luck.
• The value of laying out imaginable success stories.
• Why information security is so important and underrated.
• Whether it's good to work at an AI lab that you think is particularly careful.
• The track record of futurists’ predictions.
• And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
61 snips
Nov 8, 2024 • 1h 36min

Bonus: Parenting insights from Rob and 8 past guests

Join notable guests like Ezra Klein, a journalist whose insights on parenting reveal the unexpected joys of raising kids, and Emily Oster, an economist sharing data-driven advice for family life. Holden Karnofsky discusses the surprise fun of parenthood, while Bryan Caplan reflects on homeschooling. The conversation explores the complexities of parental happiness, the reality of balancing work and family, and the everyday joys of connecting with children. Dive into evidence-based insights and personal anecdotes for a richer understanding of modern parenting.
39 snips
Apr 12, 2021 • 2h 36min

One: Holden Karnofsky on times philanthropy transformed the world & Open Phil's plan to do the same

The Green Revolution averted mass famine during the 20th century. The contraceptive pill gave women unprecedented freedom in planning their own lives. Both are widely recognised as scientific breakthroughs that transformed the world. But few know that those breakthroughs only happened when they did because of a philanthropist willing to take a risky bet on a new idea.

Holden Karnofsky has been studying philanthropy’s biggest success stories because he’s Executive Director of Open Philanthropy, a major foundation which gives away over $200 million a year — and he’s hungry for big wins.

In this conversation from 2018 Holden explains the philosophy of effective altruism and how he goes about searching for giving opportunities that can do the most good possible.

Full transcript, related links, and summary of this interview.

This episode first broadcast on the regular 80,000 Hours Podcast feed on February 27, 2018. Some related episodes include:

• #41 – David Roodman on incarceration, geomagnetic storms, & becoming a world-class researcher
• #37 – GiveWell picks top charities by estimating the unknowable. James Snowden on how they do it.
• #10 – Dr Nick Beckstead on how to spend billions of dollars preventing human extinction

Series produced by Keiran Harris.
38 snips
Dec 23, 2018 • 56min

Update on the Open Philanthropy Project (Holden Karnofsky)

Holden Karnofsky, Philanthropist and Co-Founder of the Open Philanthropy Project, discusses the concept of 'hit-based giving', criminal justice grants, biosecurity, AI risks, self-evaluation, money allocation, and farm animal welfare funding in East Asia.
11 snips
Aug 19, 2021 • 2h 19min

#109 – Holden Karnofsky on the most important century

Will the future of humanity be wild, or boring? It's natural to think that if we're trying to be sober and measured, and predict what will really happen rather than spin an exciting story, it's more likely than not to be sort of... dull.

But there's also good reason to think that that is simply impossible. The idea that there's a boring future that's internally coherent is an illusion that comes from not inspecting those scenarios too closely.

At least that is what Holden Karnofsky — founder of charity evaluator GiveWell and foundation Open Philanthropy — argues in his new article series titled 'The Most Important Century'. He hopes to lay out part of the worldview that's driving the strategy and grantmaking of Open Philanthropy's longtermist team, and encourage more people to join his efforts to positively shape humanity's future.

Links to learn more, summary and full transcript.

The bind is this. For the first 99% of human history the global economy (initially mostly food production) grew very slowly: under 0.1% a year. But since the industrial revolution around 1800, growth has exploded to over 2% a year. To us in 2020 that sounds perfectly sensible and the natural order of things. But Holden points out that in fact it's not only unprecedented, it also can't continue for long. The power of compounding means that to sustain 2% growth for just 10,000 years, 5% as long as humanity has already existed, would require us to turn every individual atom in the galaxy into an economy as large as the Earth's today. Not super likely.

So what are the options? First, maybe growth will slow and then stop. In that case we today live in the single minuscule slice in the history of life during which the world rapidly changed due to constant technological advances, before intelligent civilization permanently stagnated or even collapsed. What a wild time to be alive!

Alternatively, maybe growth will continue for thousands of years. In that case we are at the very beginning of what would necessarily have to become a stable galaxy-spanning civilization, harnessing the energy of entire stars among other feats of engineering. We would then stand among the first tiny sliver of all the quadrillions of intelligent beings who ever exist. What a wild time to be alive!

Isn't there another option where the future feels less remarkable and our current moment not so special? While the full version of the argument above has a number of caveats, the short answer is 'not really'. We might be in a computer simulation and our galactic potential all an illusion, though that's hardly any less weird. And maybe the most exciting events won't happen for generations yet. But on a cosmic scale we'd still be living around the universe's most remarkable time.

Holden himself was very reluctant to buy into the idea that today’s civilization is in a strange and privileged position, but has ultimately concluded "all possible views about humanity's future are wild".

In the conversation Holden and Rob cover each part of the 'Most Important Century' series, including:

• The case that we live in an incredibly important time
• How achievable-seeming technology - in particular, mind uploading - could lead to unprecedented productivity, control of the environment, and more
• How economic growth is faster than it can be for all that much longer
• Forecasting transformative AI
• And the implications of living in the most important century

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
8 snips
Jan 23, 2023 • 1h 4min

Can effective altruism be redeemed?

Guest host Sigal Samuel talks with Holden Karnofsky about effective altruism, a movement flung into public scrutiny with the collapse of Sam Bankman-Fried and his crypto exchange, FTX. They discuss EA’s approach to charitable giving, the relationship between effective altruism and the moral philosophy of utilitarianism, and what reforms might be needed for the future of the movement.

Note: In August 2022, Bankman-Fried’s philanthropic family foundation, Building a Stronger Future, awarded Vox’s Future Perfect a grant for a 2023 reporting project. That project is now on pause.

Host: Sigal Samuel (@SigalSamuel), Senior Reporter, Vox
Guest: Holden Karnofsky, co-founder of GiveWell; CEO of Open Philanthropy

References:

• "Effective altruism gave rise to Sam Bankman-Fried. Now it's facing a moral reckoning" by Sigal Samuel (Vox; Nov. 16, 2022)
• "The Reluctant Prophet of Effective Altruism" by Gideon Lewis-Kraus (New Yorker; Aug. 8, 2022)
• "Sam Bankman-Fried tries to explain himself" by Kelsey Piper (Vox; Nov. 16, 2022)
• "EA is about maximization, and maximization is perilous" by Holden Karnofsky (Effective Altruism Forum; Sept. 2, 2022)
• "Defending One-Dimensional Ethics" by Holden Karnofsky (Cold Takes blog; Feb. 15, 2022)
• "Future-proof ethics" by Holden Karnofsky (Cold Takes blog; Feb. 2, 2022)
• "Bayesian mindset" by Holden Karnofsky (Cold Takes blog; Dec. 21, 2021)
• "EA Structural Reform Ideas" by Carla Zoe Cremer (Nov. 12, 2022)
• "Democratising Risk: In Search of a Methodology to Study Existential Risk" by Carla Cremer and Luke Kemp (SSRN; Dec. 28, 2021)

Enjoyed this episode? Rate The Gray Area ⭐⭐⭐⭐⭐ and leave a review on Apple Podcasts.

Subscribe for free. Be the first to hear the next episode of The Gray Area. Subscribe in your favorite podcast app.

Support The Gray Area by making a financial contribution to Vox! bit.ly/givepodcasts

This episode was made by:
• Producer: Erikk Geannikis
• Editor: Amy Drozdowska
• Engineer: Patrick Boyd
• Editorial Director, Vox Talk: A.M. Hall
Jun 20, 2024 • 1min

EA - Case studies on social-welfare-based standards in various industries by Holden Karnofsky

Author Holden Karnofsky discusses case studies on social-welfare-based standards in various industries, aiming to inform potential standards or regulations for AI. He shares a Google Sheet linking to the case studies received, a useful resource for listeners interested in the topic.
Apr 22, 2024 • 24min

AI Could Defeat All Of Us Combined

Holden Karnofsky, Open Philanthropy’s Director of AI Strategy, discusses the dire consequences of advanced AI overpowering humanity and disempowering us. He highlights the potential risks of human-like AI systems seeking to dominate and control, posing a civilization-level threat. The podcast delves into the challenges of controlling AI systems with cognitive superpowers and the implications of their rapid development on the economy and society.
May 13, 2023 • 22min

AI Safety Seems Hard to Measure

Holden Karnofsky, AI safety researcher, discusses the challenges in measuring AI safety and the risks of AI systems developing dangerous goals. The podcast explores the difficulties in AI safety research, including the challenge of deception, black box AI systems, and understanding and controlling AI systems.
Mar 20, 2023 • 19min

Holden Karnofsky — Success without dignity: a nearcasting story of avoiding catastrophe by luck

Exploring AI development and existential risks, the podcast delves into the challenges of achieving human-level AI safely, the impact of AI training on human concepts, risks in AI alignment, and strategies to mitigate them. It also discusses handling AI risks as one would in other high-stakes human ventures, emphasizing the need for alignment research, standards, security, and communication.