
The Foresight Institute Podcast

Latest episodes

Nov 17, 2023 • 8min

David Dalrymple | Rethinking Uploading Given 10-Year AI Timelines

Neuroinformatics and machine learning researcher David Dalrymple discusses a 10-year plan for brain uploading, AI techniques for parameter estimation, the challenges of dense brain-data transfer, and potential solutions. The conversation also explores the techniques and challenges involved in studying synapses and receptors in the brain, and in building datasets for AI learning.
Nov 10, 2023 • 10min

Existential Hope Special: Six Hopeful Visions of the Future

Speaker: Beatrice Erkers reads six Existential Hope Scenarios. Beatrice is the program manager of the Existential Hope program and Chief of Operations at Foresight Institute. The positive scenarios were a collaborative output from our 2023 Existential Hope Day attendees.

Session Summary: This podcast was created from our 2023 Existential Hope Day, held earlier this year. The workshop aimed to provide a forum for enhancing our conceptual clarity of what positive futures might look like. This includes understanding the what, how, and why of various ideas to determine their relevance and potential impact. The discussions aimed to deepen our understanding of the term ‘Existential Hope’ and similar concepts, probe why these concepts might have been sidelined, and delve into specific future visions.

The Scenarios:
- AI-Enabled Personal Flourishing
- Multigenerational Habitat in Space
- Epistemic Revolution
- Good Singularity
- Flying Cows
- Human-AI Partnership / Paretopia

Dive into our 2023 Existential Hope Day: Report

Existential Hope was created to collect positive and possible scenarios for the future so that more people can commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.

Hosted by Allison Duettmann and Beatrice Erkers
Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram
Explore every word spoken on this podcast through Fathom.fm.

Hosted on Acast. See acast.com/privacy for more information.
Nov 3, 2023 • 10min

Jan Leike | Superintelligent Alignment

Jan Leike is a leading voice in AI alignment, previously a Research Scientist at Google DeepMind, with affiliations at the Future of Humanity Institute and the Machine Intelligence Research Institute. At OpenAI, he co-leads the Superalignment team, contributing to AI advancements such as InstructGPT and ChatGPT. Holding a PhD from the Australian National University, Jan's work focuses on ensuring AI alignment.

Key Highlights:
- The launch of OpenAI's Superalignment team, targeting the alignment of superintelligence in four years.
- The aim to automate alignment research, currently leveraging 20% of OpenAI's computational power.
- How traditional reinforcement learning from human feedback may fall short in scaling language model alignment.
- Why there is a focus on scalable oversight, generalization, automated interpretability, and adversarial testing to ensure alignment reliability.
- Experimentation with intentionally misaligned models to evaluate alignment strategies.

Dive deeper into the session: Full Summary

About Foresight Institute
Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.

Allison Duettmann
The President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".

Get Involved with Foresight:
Apply: Virtual Salons & in-person Workshops
Donate: Support Our Work – if you enjoy what we do, please consider donating, as we are entirely funded by your donations!
Follow Us: Twitter | Facebook | LinkedIn

Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine.
Oct 27, 2023 • 55min

Existential Hope Podcast: Emilia Javorsky | The Future of AI, Bioengineering, and Human Empathy

Emilia Javorsky, MD, MPH, is the Director of the Futures Program at the Future of Life Institute. A physician-scientist and entrepreneur, she specializes in the development of medical technologies and is a mentor at Harvard's Wyss Institute. Recognized as a Forbes 30 Under 30 and a Global Shaper by the World Economic Forum, Javorsky is committed to guiding emerging technologies towards ethical, safe, and beneficial applications. With a strong foundation in AI, biotech, and nuclear risk management, she champions the responsible evolution of transformative tech for humanity's advancement.

Session Summary: Emilia envisions a future where our innate talents join forces with artificial intelligence to tackle global challenges. This isn't merely about the speed of AI advancements, but about how they harmonise with human goals. Emilia stresses the importance of creating positive narratives that integrate AI with genuine human empathy, pointing towards a world where technology complements, rather than replaces, our connections. To get there, she emphasises the need for thoughtful regulation that moves beyond purely theoretical approaches, and promotes an inclusive, multi-stakeholder approach. Emilia sees this pathway as a means to fully harness AI's capabilities. Looking ahead, she believes AI can better human health, pioneer bioengineering, and aid in space exploration, whilst enhancing human connection. For her, it's more than mitigating risk – it's about unlocking the vast potential that AI and human collaboration promise.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts

Existential Hope is a Foresight Institute project.
Hosted by Allison Duettmann and Beatrice Erkers
Oct 20, 2023 • 49min

Mark Miller | Paperclips and Pyramids: Misdiagnosing AI Risks

Mark Miller, Chief Scientist at Agoric, discusses common misconceptions about AI risks and the importance of structuring institutions effectively. The blend of human and AI intelligence demands innovative governance for future civilization. Topics include unipolar takeover, cooperation, assessing AI risks under uncertainty, and building superintelligences.
Oct 13, 2023 • 56min

Dr. Thomas Macrina | AI for Whole Brain Circuit Mapping

Dr. Thomas Macrina, CEO of Zetta AI, explores advancements in AI-analyzed neuroimaging and the reconstruction of petascale cortical circuits. The talk delves into the intricacies of the AI pipeline, potential pathways to whole-brain connectomes, and the challenges of mapping neural circuits, including the impact of AI on human proofreaders and some clarification of terminology. They also discuss the limitations of connectomes, the goal of making the data accessible to neuroscientists, and hiring for machine learning projects. Future plans include working with brain-scale datasets and launching a BRAIN CONNECTS program.
Oct 6, 2023 • 1h 1min

Robert Zubrin | The Mars Society

Robert Zubrin is an American aerospace engineer, author, and advocate for human exploration of Mars. He and his colleague at Martin Marietta, David Baker, were the driving force behind Mars Direct, a proposal in a 1990 research paper intended to produce significant reductions in the cost and complexity of such a mission.

Summary: Zubrin recaps his early interest and hopes for space exploration. He speaks about the mission creep that plagued space-travel efforts in the '90s and his efforts to make Mars the focus of space travel once again. He is adamantly against on-orbit assembly and the generally more complex plans that people come up with for Mars bases.

He also touches on his interactions with Elon Musk and SpaceX. After founding the Mars Society, Zubrin needed money to develop practice locations for Mars colonization. He held a fundraiser where they met, and from there, Musk was all-in on getting humanity to Mars.

Robert continues to create innovative technologies with his independent company. Even with his success at stimulating SpaceX and other organizations to explore Mars, he continues to push for more progress toward the red planet.

Full session summary: https://foresight.org/summary/robert-zubrin-the-mars-society/
Sep 28, 2023 • 49min

Existential Hope Podcast: Joe Carlsmith | Infinite Ethics and the Sublime Utopia

Joe Carlsmith is a writer, researcher, and philosopher. He works as a senior research analyst at Open Philanthropy, focusing on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and holds a doctorate in philosophy from the University of Oxford. Much of his work is about trying to help us orient wisely towards humanity's long-term future. He delves into questions about meta-ethics and rationality at the foundation, feeding into questions about ethics (and especially about effective altruism), which motivate concern for the long-term future.

Session Summary: Join us to explore Joe Carlsmith's insights into his ongoing work and thoughts on issues including AI alignment, lesser-known future risks, infinite ethics and digital minds, and the sublime utopia. Carlsmith shares his concerns about ensuring that advanced AI systems behave beneficially for humanity, but emphasizes the importance of broadening the horizon to identify and address other critical factors beyond technical AI alignment. For instance, he delves into infinite ethics, addressing the ethical considerations involving infinite impacts and infinite numbers of people – necessary if we are going to meet future realities. Despite the challenges, this episode is devoted to the exploration of utopia. Beyond the lesser boundary of a concrete utopia, Carlsmith envisions the sublime utopia: a realm of aspirational goals and visions. Although filled with vulnerabilities tied to hoping for extraordinary and unprecedented outcomes, Carlsmith lays out the essential nature of such a pursuit.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts

Find all previous podcast episodes here, always featuring a full transcript, artwork inspired by the episode, and a list of recommended resources. Existential Hope is a Foresight Institute project.
Sep 22, 2023 • 54min

Sumner Norman | What To Expect From The Next Generation Of Brain-Computer Interfaces

Sumner Norman, a research scientist with extensive experience in designing brain-computer interfaces, explores the next generation of BCIs. He discusses the limitations of current devices in terms of longevity and coverage, and proposes the use of ultrasound for a non-invasive, long-lasting BCI that can interact with large swaths of the brain. The podcast also touches on advancements in BCIs for treating neurological disorders, challenges in electrode arrays, and the potential of gene therapy combined with BCIs.
Sep 15, 2023 • 1h 13min

Stuart Russell | Human Compatible AI

Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence. He is a professor of computer science at the University of California, Berkeley, and was from 2008 to 2011 an adjunct professor of neurological surgery at the University of California, San Francisco. He holds the Smith-Zadeh Chair in Engineering at UC Berkeley, where he founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI). Russell is the co-author, with Peter Norvig, of the most popular textbook in the field of AI, Artificial Intelligence: A Modern Approach, used in more than 1,500 universities in 135 countries.

Summary: This episode explores the development of artificial intelligence and its potential impact on humanity. Russell emphasizes the importance of aligning AI systems with human values to ensure their compatibility and avoid unintended consequences. He discusses the risks associated with AI development and proposes provably beneficial AI, which prioritizes human values and addresses concerns such as safety and control. Russell argues for the need to reframe AI research and policymaking to prioritize human well-being and ethical considerations.

Computation: Intelligent Cooperation Foresight Group – info and apply to join: https://foresight.org/technologies/computation-intelligent-cooperation/
