Future of Life Institute Podcast

Future of Life Institute
Feb 28, 2019 • 57min

Part 1: From DNA to Banning Biological Weapons With Matthew Meselson and Max Tegmark

In this special two-part podcast, Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then became involved in disarmament, working with the US government to halt the use of Agent Orange in Vietnam and developing the Biological Weapons Convention. From the cellular level to international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also to the mitigation of existential threats. In Part One, Dr. Meselson describes how he designed the experiment that helped prove Watson and Crick’s hypothesis, and he explains why this type of research is uniquely valuable to the scientific community. He also recounts his introduction to biological weapons, his reasons for opposing them, and the efforts he undertook to get them banned. Dr. Meselson was a key force behind the U.S. ratification of the Geneva Protocol, a 1925 treaty banning biological warfare, as well as the conception and implementation of the Biological Weapons Convention, the international treaty that bans biological and toxin weapons.
Feb 21, 2019 • 38min

AIAP: Human Cognition and the Nature of Intelligence with Joshua Greene

See the full article here: https://futureoflife.org/2019/02/21/human-cognition-and-the-nature-of-intelligence-with-joshua-greene/

"How do we combine concepts to form thoughts? How can the same thought be represented in terms of words versus things that you can see or hear in your mind's eyes and ears? How does your brain distinguish what it's thinking about from what it actually believes? If I tell you a made up story, yesterday I played basketball with LeBron James, maybe you'd believe me, and then I say, oh I was just kidding, didn't really happen. You still have the idea in your head, but in one case you're representing it as something true, in another case you're representing it as something false, or maybe you're representing it as something that might be true and you're not sure. For most animals, the ideas that get into its head come in through perception, and the default is just that they are beliefs. But humans have the ability to entertain all kinds of ideas without believing them. You can believe that they're false or you could just be agnostic, and that's essential not just for idle speculation, but it's essential for planning. You have to be able to imagine possibilities that aren't yet actual. So these are all things we're trying to understand. And then I think the project of understanding how humans do it is really quite parallel to the project of trying to build artificial general intelligence." -Joshua Greene

Josh Greene is a Professor of Psychology at Harvard who focuses on moral judgment and decision making. His recent work focuses on cognition, and his broader interests include philosophy, psychology, and neuroscience. He is the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Joshua Greene's research focuses on further understanding key aspects of both individual and collective intelligence. Deepening our knowledge of these subjects allows us to understand the key features which constitute human general intelligence, and how human cognition aggregates and plays out through group choice and social decision making. By better understanding the one general intelligence we know of, namely humans, we can gain insights into the kinds of features that are essential to general intelligence and thereby better understand what it means to create beneficial AGI.

This particular episode was recorded at the Beneficial AGI 2019 conference in Puerto Rico. We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

Topics discussed in this episode include:
-The multi-modal and combinatorial nature of human intelligence
-The symbol grounding problem
-Grounded cognition
-Modern brain imaging
-Josh's psychology research using John Rawls’ veil of ignorance
-Utilitarianism reframed as 'deep pragmatism'
Feb 7, 2019 • 50min

The Byzantine Generals' Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi

Three generals are voting on whether to attack or retreat from their siege of a castle. One of the generals is corrupt and two of them are not. What happens when the corrupt general sends different answers to the other two generals? (A toy illustration of this scenario appears at the end of these notes.) A Byzantine fault is "a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine Generals' Problem", developed to describe this condition, where actors must agree on a concerted strategy to avoid catastrophic system failure, but some of the actors are unreliable."

The Byzantine Generals' Problem and the associated challenges of maintaining reliable distributed computing networks are illuminating both for AI alignment and for the modern networks we interact with, like Youtube, Facebook, or Google. By exploring this space, we are shown the limits of reliable distributed computing, the safety concerns and threats in this space, and the tradeoffs we will have to make for varying degrees of efficiency or safety.

The Byzantine Generals' Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi is the ninth podcast in the AI Alignment Podcast series, hosted by Lucas Perry. El Mahdi pioneered Byzantine-resilient machine learning, devising a series of provably safe algorithms that he recently presented at NeurIPS and ICML. Interested in theoretical biology, his work also includes the analysis of error propagation in both neural and biomolecular networks. This particular episode was recorded at the Beneficial AGI 2019 conference in Puerto Rico. We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here. If you're interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.

Topics discussed in this episode include:
-The Byzantine Generals' Problem
-What this has to do with artificial intelligence and machine learning
-Everyday situations where this is important
-How systems and models are to update in the context of asynchrony
-Why it's hard to do Byzantine-resilient distributed ML
-Why this is important for long-term AI alignment

An overview of Adversarial Machine Learning and where Byzantine-resilient Machine Learning stands on the map is available in this (9min) video. A specific focus on Byzantine Fault Tolerant Machine Learning is available here (~7min). In particular, El Mahdi argues in the first interview (and in the podcast) that technical AI safety is not only relevant for long-term concerns, but is crucial for current pressing issues such as social media poisoning of public debates and misinformation propagation, both of which fall under poisoning resilience. Another example he likes to use is social media addiction, which could be seen as a case of (non) safely interruptible learning. This value misalignment is already an issue with the primitive forms of AI that optimize our world today as they maximize our watch time all over the internet. The latter question (Safe Interruptibility) is another technical AI safety problem El Mahdi works on, in the context of Reinforcement Learning. This line of research was initially dismissed as "science fiction"; in this interview (5min), El Mahdi explains why it is a realistic question that arises naturally in reinforcement learning. El Mahdi's work on Byzantine-resilient Machine Learning and other relevant topics is available on his Google Scholar profile.
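To make the three-generals scenario above concrete, here is a minimal illustrative Python sketch. It is not taken from the episode or from El Mahdi's papers; the general names, the specific votes, and the simple majority-vote decision rule are assumptions chosen only to show how a single equivocating participant can leave the two loyal generals acting on different decisions.

```python
from collections import Counter

def decide(own_vote, received_votes):
    """A loyal general decides by simple majority over every vote it sees."""
    tally = Counter([own_vote] + received_votes)
    return tally.most_common(1)[0][0]

# Two loyal generals, A and B, happen to start with different preferences.
vote_A, vote_B = "attack", "retreat"

# The corrupt (Byzantine) general C equivocates, telling each loyal general
# whatever it already wants to hear.
c_tells_A, c_tells_B = "attack", "retreat"

decision_A = decide(vote_A, [vote_B, c_tells_A])  # A sees attack: 2, retreat: 1
decision_B = decide(vote_B, [vote_A, c_tells_B])  # B sees retreat: 2, attack: 1

print("General A decides:", decision_A)  # -> attack
print("General B decides:", decision_B)  # -> retreat
# The two loyal generals act differently, so the coordinated assault fails.
# Classic results show that consensus tolerating f Byzantine parties needs
# n > 3f participants; here n = 3 and f = 1, exactly the impossible case.
```

In distributed machine learning, the analogous failure is a worker sending arbitrary or poisoned gradient updates to the rest of the system, which is the kind of behavior the provably safe aggregation algorithms mentioned above are designed to withstand.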
Jan 31, 2019 • 1h 3min

AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition. Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI safety researcher and professor at the University of Louisville. He also recently published the book Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He has also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 Hours. Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.
Jan 25, 2019 • 32min

Artificial Intelligence: American Attitudes and Trends with Baobao Zhang

Our phones, our cars, our televisions, our homes: they’re all getting smarter. Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown. Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings--most Americans, for example, don’t trust Facebook--were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed.

This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a PhD candidate in Yale University's political science department and research affiliate with the Center for the Governance of AI at the University of Oxford. Her work focuses on American politics, international relations, and experimental methods. In this episode, Zhang spoke about her take on some of the report’s most interesting findings, the new questions it raised, and future research directions for her team.

Topics discussed in this episode include:
-Demographic differences in perceptions of AI
-Discrepancies between expert and public opinions
-Public trust (or lack thereof) in AI developers
-The effect of information on public perceptions of scientific issues
Jan 17, 2019 • 52min

AIAP: Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell (Beneficial AGI 2019)

What motivates cooperative inverse reinforcement learning? What can we gain from recontextualizing our safety efforts from the CIRL point of view? What possible role can pre-AGI systems play in amplifying normative processes?

Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell is the eighth podcast in the AI Alignment Podcast series, hosted by Lucas Perry, and was recorded at the Beneficial AGI 2019 conference in Puerto Rico. For those of you that are new, this series covers and explores the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, Lucas will speak with technical and non-technical researchers across areas such as machine learning, governance, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.

In this podcast, Lucas spoke with Dylan Hadfield-Menell. Dylan is a 5th-year PhD student at UC Berkeley advised by Anca Dragan, Pieter Abbeel, and Stuart Russell, where he focuses on technical AI alignment research.

Topics discussed in this episode include:
-How CIRL helps to clarify AI alignment and adjacent concepts
-The philosophy of science behind safety theorizing
-CIRL in the context of varying alignment methodologies and its role
-Whether short-term AI can be used to amplify normative processes
Dec 21, 2018 • 2h 6min

Existential Hope in 2019 and Beyond

Humanity is at a turning point. For the first time in history, we have the technology to completely obliterate ourselves. But we’ve also created boundless possibilities for all life that could enable just about any brilliant future we can imagine. Humanity could erase itself with a nuclear war or a poorly designed AI, or we could colonize space and expand life throughout the universe: As a species, our future has never been more open-ended. The potential for disaster is often more visible than the potential for triumph, so as we prepare for 2019, we want to talk about existential hope, and why we should actually be more excited than ever about the future.

In this podcast, Ariel talks to six experts--Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark, and Anders Sandberg--about their views on the present, the future, and the path between them. Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and creator of the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future. We hope you’ll come away feeling inspired and motivated--not just to prevent catastrophe, but to facilitate greatness.

Topics discussed in this episode include:
-How technology aids us in realizing personal and societal goals
-FLI’s successes in 2018 and our goals for 2019
-Worldbuilding and how to conceptualize the future
-The possibility of other life in the universe and its implications for the future of humanity
-How we can improve as a species and strategies for doing so
-The importance of a shared positive vision for the future, what that vision might look like, and how a shared vision can still represent a wide enough set of values and goals to cover the billions of people alive today and in the future
-Existential hope and what it looks like now and far into the future
Dec 18, 2018 • 1h 8min

AIAP: Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah

What role does inverse reinforcement learning (IRL) have to play in AI alignment? What issues complicate IRL and how does this affect the usefulness of this preference learning methodology? What sort of paradigm of AI alignment ought we to take up given such concerns?

Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah is the seventh podcast in the AI Alignment Podcast series, hosted by Lucas Perry. For those of you that are new, this series covers and explores the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, governance, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.

In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th-year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.

Topics discussed in this episode include:
- The role of systematic bias in IRL
- The metaphilosophical issues of IRL
- IRL's place in preference learning
- Rohin's take on the state of AI alignment
- What Rohin has changed his mind about
Nov 30, 2018 • 33min

Governing Biotechnology: From Avian Flu to Genetically-Modified Babies With Catherine Rhodes

A Chinese researcher recently made international news with claims that he had edited the first human babies using CRISPR. In doing so, he violated international ethics standards, and he appears to have acted without his funders or his university knowing. But this is only the latest example of biological research triggering ethical concerns. A few years ago, gain-of-function research that made avian flu more virulent also sparked controversy when scientists tried to publish their work. And there’s been extensive debate globally about the ethics of human cloning.

As biotechnology and other emerging technologies become more powerful, the dual-use nature of research -- that is, research that can have both beneficial and risky outcomes -- is increasingly important to address. How can scientists and policymakers work together to ensure that regulations and governance of technological development enable researchers to do good with their work, while decreasing the threats?

On this month’s podcast, Ariel spoke with Catherine Rhodes about these issues and more. Catherine is a senior research associate and deputy director of the Centre for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance. She has particular expertise in the international governance of biotechnology, including biosecurity and broader risk management issues.

Topics discussed in this episode include:
~ Gain-of-function research, the H5N1 virus (avian flu), and the risks of publishing dangerous information
~ The roles of scientists, policymakers, and the public in ensuring that technology is developed safely and ethically
~ The controversial Chinese researcher who claims to have used CRISPR to edit the genome of twins
~ How scientists can anticipate whether the results of their research could be misused by someone else
~ To what extent does risk stem from technology, and to what extent does it stem from how we govern it?
Oct 31, 2018 • 1h 21min

Avoiding the Worst of Climate Change with Alexander Verbeek and John Moorhead

“There are basically two choices. We're going to massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other and towards the planet and towards everything that lives on this planet. Or we sit back and relax and we just let the whole thing crash. The choice is so easy to make, even if you don't care at all about nature or the lives of other people. Even if you just look at your own interests and look purely through an economical angle, it is just a good return on investment to take good care of this planet.” - Alexander Verbeek

On this month’s podcast, Ariel spoke with Alexander Verbeek and John Moorhead about what we can do to avoid the worst of climate change. Alexander is a Dutch diplomat and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. He created the Planetary Security Initiative, where representatives from 75 countries meet annually to address the relationship between climate change and security. John is President of Drawdown Switzerland, an act tank to support Project Drawdown and other science-based climate solutions that reverse global warming. He is a blogger at Thomson Reuters, The Economist, and sciencebasedsolutions.com, and he advises and informs on climate solutions that are positive for the economy, society, and the environment.
