
Max Tegmark

Professor of physics at MIT, known for his work on the multiverse and the mathematical universe hypothesis.

Top 10 podcasts with Max Tegmark

Ranked by the Snipd community
1,549 snips
Apr 13, 2023 • 2h 54min

#371 – Max Tegmark: The Case for Halting AI Development

Max Tegmark, a physicist and AI researcher at MIT, discusses the urgent need to pause AI development to mitigate existential risks. He explores the ethical implications of advanced AI, questioning the wisdom of creating machines that might surpass human intelligence. The conversation touches on the importance of regulating AI for safety and the challenges of balancing innovation with societal welfare. Tegmark also reflects on personal loss, the influence of family on intellectual curiosity, and the need for compassion in AI development.
171 snips
Aug 26, 2018 • 1h 23min

Max Tegmark: Life 3.0

Max Tegmark, a renowned MIT physics professor and co-founder of the Future of Life Institute, dives deep into thought-provoking topics about artificial intelligence and consciousness. He discusses the search for intelligent life beyond Earth, pondering the Fermi paradox. Tegmark unpacks the nuances of artificial general intelligence (AGI), ethics, and the emotional capabilities of machines. He also explores the intersection of quantum computing and AGI, emphasizing the importance of fostering human-like connections in AI development while navigating its complex ethical landscape.
77 snips
Nov 22, 2022 • 1h 8min

Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris

In this insightful discussion, guests include Jay Shapiro, a filmmaker behind an engaging audio documentary series, Eliezer Yudkowsky, a computer scientist renowned for his AI safety work, physicist Max Tegmark, and computer science professor Stuart Russell. They delve into the complexities of AI, revealing the dangers of misaligned objectives and the critical issues of value alignment and control. The conversation touches on the transformative potential of AI juxtaposed with ethical dilemmas, consciousness, and geopolitical concerns surrounding AI weaponization.
46 snips
Jan 18, 2021 • 3h 8min

#155 – Max Tegmark: AI and Physics

Max Tegmark, a physicist at MIT and co-founder of the Future of Life Institute, dives deep into the intersection of AI and physics. He discusses the urgent need for aligning AI technologies with human values, warning against the risks of overtrusting automated systems. The conversation touches on navigating misinformation, the ethics of autonomous warfare, and the historical importance of individual agency during crises. Tegmark also reflects on humanity's cosmic destiny and the philosophical implications of consciousness, urging a collaborative approach to scientific advancements.
38 snips
Jul 1, 2022 • 2h 58min

#133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them. That "put up or shut up" New Year's resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

Links to learn more, summary and full transcript.

Max's primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 he founded a non-profit, the Future of Life Institute, which works to reduce threats to humanity's future, including nuclear war, synthetic biology, and AI.

Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his 'put up or shut up' resolution, he and his team went on to produce a video on so-called 'Slaughterbots' which attracted millions of views, and to develop a website called 'Improve The News' to help readers separate facts from spin.

But given the stunning recent advances in capabilities — from OpenAI's DALL-E to DeepMind's Gato — AI itself remains top of his mind. You can now give an AI system like GPT-3 the text: "I'm going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that's in?" And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.

So back at MIT, he now leads a research group dedicated to what he calls "intelligible intelligence." At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them. He says that training a black box that does something smart should just be stage one in a bigger process. Stage two is: "How do we get the knowledge out and put it in a safer system?"

Today's conversation starts off giving a broad overview of the key questions about artificial intelligence: What's the potential? What are the threats? How might this story play out? What should we be doing to prepare? Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem. They then spend roughly the last third talking about Max's current big passion: improving the news we consume — where Rob has a few reservations.
They also cover:
• Whether we could understand what superintelligent systems were doing
• The value of encouraging people to think about the positive future they want
• How to give machines goals
• Whether 'Big Tech' is following the lead of 'Big Tobacco'
• Whether we're sleepwalking into disaster
• Whether people actually just want their biases confirmed
• Why Max is worried about government-backed fact-checking
• And much more

Chapters:
Rob's intro (00:00:00)
The interview begins (00:01:19)
How Max prioritises (00:12:33)
Intro to AI risk (00:15:47)
Superintelligence (00:35:56)
Imagining a wide range of possible futures (00:47:45)
Recent advances in capabilities and alignment (00:57:37)
How to give machines goals (01:13:13)
Regulatory capture (01:21:03)
How humanity fails to fulfil its potential (01:39:45)
Are we being hacked? (01:51:01)
Improving the news (02:05:31)
Do people actually just want their biases confirmed? (02:16:15)
Government-backed fact-checking (02:37:00)
Would a superintelligence seem like magic? (02:49:50)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
25 snips
Dec 9, 2024 • 57min

Max Tegmark: Will AI Surpass Human Intelligence? [Ep. 469]

Max Tegmark, a renowned physicist and machine learning expert, dives deep into the realm of artificial intelligence and its potential to exceed human intelligence. He raises thought-provoking questions about the ethical responsibilities that come with AI advancements. The discussion covers the societal impacts of AI, the necessity of regulations, and the exciting possibilities of AI in education and science. Tegmark even draws fascinating parallels between AI and cosmology, pondering whether we live in a multiverse and how that insight could shape our understanding of intelligence.
21 snips
Apr 28, 2023 • 1h 6min

ChatGPT Training, Superintelligence, & AI Funding in Vector Databases | E12

In this episode we cover OpenAI's new update that lets you hide chat history or accept that your inputs will be used for training, explore doomsday scenarios and Max Tegmark's TIME article "The 'Don't Look Up' Thinking That Could Doom Us With AI", discuss the explosion in vector database funding and what it means, and learn how large language models get anxious!

CHAPTERS:
====
00:00 - What have we done!?
00:17 - OpenAI Chat History, Privacy & Training GPT-5
09:45 - Where is GPT-4 with Images? & OpenAI Enterprise Deals
12:52 - Max Tegmark's TIME article & Superintelligence, Doomsdaying & Fear
29:23 - Anxiety in AI Models, AI "Emotions" and Motivation
37:33 - AI Distribution: How Much is AI Changing our Lives? AI Hype Cycle
46:01 - Vector Database Funding, Increasing Prompt Sizes, Scaling Transformer to 1M Tokens & Beyond Paper
55:25 - VCs Struggling to Know What & Where to Invest in AI
1:00:39 - Segment Anything in Medical Images & Medical Funding
1:03:20 - Weed Zapping AI Stopping Pesticide Use

SOURCES:
====
https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt
https://twitter.com/gdb/status/1651306937991168002?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://time.com/6273743/thinking-that-could-doom-us-with-ai/
https://twitter.com/fchollet/status/1650738654061658112?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://twitter.com/aisolopreneur/status/1646928323363102734?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://twitter.com/sterlingcrispin/status/1650320043107061761?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://www.pinecone.io/learn/series-b/
https://arxiv.org/pdf/2304.11062.pdf
https://arxiv.org/pdf/2304.11111.pdf
https://arxiv.org/pdf/2304.12306v1.pdf
https://twitter.com/Rainmaker1973/status/1649743415549067267

If you like this podcast please consider leaving a review or sharing with a friend.
19 snips
Jan 4, 2024 • 2h 26min

Max Tegmark & Eric Weinstein: AI, Aliens, Theories of Everything, & New Year’s Resolutions! (2020) (#383)

Max Tegmark and Eric Weinstein discuss topics including remote teaching, filter bubbles, diverse thinking, scientific freedom, the impact of social media on creativity, the motivation behind doing science, pursuing a PhD without an advisor, the state of academia, funding challenges, the relationship between experimentalists and theorists in physics, and the simulation argument. A sponsored message promoting 'The Jordan Harbinger Show' is also included.
15 snips
Nov 22, 2022 • 2h 12min

Making Sense of Artificial Intelligence

Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating. And make sure to stick around for the end of each episode, where we provide our list of recommendations from the worlds of film, television, literature, music, and art.

In this episode, we explore the landscape of Artificial Intelligence. We'll listen in on Sam's conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI — including the control problem and the value-alignment problem — as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We'll then be introduced to philosopher Nick Bostrom's "Genies, Sovereigns, Oracles, and Tools," as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance. We'll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We'll then touch the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we're building using "Deep Learning" are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.
13 snips
Mar 6, 2024 • 27min

Does Consciousness Require a Radical Explanation?

This podcast explores the perplexing concept of consciousness and delves into novel explanations for its existence. Featuring interviews with prominent experts, it discusses the enigma of consciousness, its essential properties, the interplay with quantum mechanics, and controversial theories surrounding its nature.