
Max Tegmark

Renowned physicist and machine learning expert, known for his work on the mathematical universe hypothesis and AI.

Top 10 podcasts with Max Tegmark

Ranked by the Snipd community
1,532 snips
Apr 13, 2023 • 2h 54min

#371 – Max Tegmark: The Case for Halting AI Development

Max Tegmark, AI researcher at MIT, discusses the case for pausing giant AI experiments, the dangers of superintelligence, the potential impact of advanced AI systems on programming, and the importance of building AGI systems aligned with human values. They also explore consciousness in AI, escalating tensions in warfare, and the significance of subjective experiences in developing AI.
124 snips
Aug 26, 2018 • 1h 23min

Max Tegmark: Life 3.0

A conversation with Max Tegmark as part of the MIT course on Artificial General Intelligence. A video version is available on YouTube. He is a Physics Professor at MIT, a co-founder of the Future of Life Institute, and the author of “Life 3.0: Being Human in the Age of Artificial Intelligence.” If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch the video versions of these conversations.
77 snips
Nov 22, 2022 • 1h 8min

Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris

Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating.

In this episode, we explore the landscape of Artificial Intelligence. We’ll listen in on Sam’s conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI – including the control problem and the value-alignment problem – as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We’ll then be introduced to philosopher Nick Bostrom’s “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance.

We’ll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We’ll then touch on the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we’re building using “Deep Learning” are really marching us towards our super-intelligent overlords.

Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.
62 snips
Dec 22, 2023 • 10min

How to keep AI under control | Max Tegmark

Scientist Max Tegmark discusses the risks of superintelligent AI and the need for regulations. He explores the dangers of AGI and the importance of provably safe systems. Tegmark also advocates for the use of formal verification and proof checking to keep AI under control.
38 snips
Jul 1, 2022 • 2h 58min

#133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them. That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly capable AI systems seriously.

Links to learn more, summary and full transcript.

Max’s primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity’s future, including nuclear war, synthetic biology, and AI.

Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his ‘put up or shut up’ resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and to develop a website called ‘Improve The News’ to help readers separate facts from spin.

But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of his mind. You can now give an AI system like GPT-3 the text: "I'm going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that's in?" And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.

So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them. He says that training a black box that does something smart should be just stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”

Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What’s the potential? What are the threats? How might this story play out? What should we be doing to prepare? Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem. They then spend roughly the last third talking about Max’s current big passion: improving the news we consume — where Rob has a few reservations.
They also cover:
• Whether we could understand what superintelligent systems were doing
• The value of encouraging people to think about the positive future they want
• How to give machines goals
• Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
• Whether we’re sleepwalking into disaster
• Whether people actually just want their biases confirmed
• Why Max is worried about government-backed fact-checking
• And much more

Chapters:
Rob’s intro (00:00:00)
The interview begins (00:01:19)
How Max prioritises (00:12:33)
Intro to AI risk (00:15:47)
Superintelligence (00:35:56)
Imagining a wide range of possible futures (00:47:45)
Recent advances in capabilities and alignment (00:57:37)
How to give machines goals (01:13:13)
Regulatory capture (01:21:03)
How humanity fails to fulfil its potential (01:39:45)
Are we being hacked? (01:51:01)
Improving the news (02:05:31)
Do people actually just want their biases confirmed? (02:16:15)
Government-backed fact-checking (02:37:00)
Would a superintelligence seem like magic? (02:49:50)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
38 snips
Jan 18, 2021 • 3h 8min

#155 – Max Tegmark: AI and Physics

Max Tegmark is a physicist and AI researcher at MIT.

Please support this podcast by checking out our sponsors:
– The Jordan Harbinger Show: https://www.jordanharbinger.com/lex/
– Four Sigmatic: https://foursigmatic.com/lex and use code LexPod to get up to 60% off
– BetterHelp: https://betterhelp.com/lex to get 10% off
– ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free

EPISODE LINKS:
News Project Explainer Video: https://www.youtube.com/watch?v=PRLF17Pb6vo
News Project Website: https://www.improvethenews.org/
Max’s Twitter: https://twitter.com/tegmark
Max’s Website: https://space.mit.edu/home/tegmark/
Future of Life Institute: https://futureoflife.org/
Lex Fridman Podcast #1: https://www.youtube.com/watch?v=Gi8LUnhP5yU

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above, it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/LexFridmanPage
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(08:15) – AI and physics
(21:32) – Can AI discover new laws of physics?
(30:22) – AI safety
(47:59) – Extinction of human species
(58:57) – How to fix fake news and misinformation
(1:20:30) – Autonomous weapons
(1:35:54) – The man who prevented nuclear war
(1:46:02) – Elon Musk and AI
(1:59:39) – AI alignment
(2:05:42) – Consciousness
(2:14:45) – Richard Feynman
(2:18:56) – Machine learning and computational physics
(2:29:53) – AI and creativity
(2:41:08) – Aliens
(2:56:51) – Mortality
21 snips
Apr 28, 2023 • 1h 6min

ChatGPT Training, Superintelligence, & AI Funding in Vector Databases | E12

In this episode we cover OpenAI's new update to hide chat history or accept that they will train on your inputs, explore doomsday scenarios and Max Tegmark's TIME article "The 'Don't Look Up' Thinking That Could Doom Us With AI", discuss the explosion in vector database funding and what it means, and learn how large language models get anxious!

CHAPTERS:
00:00 - What have we done!?
00:17 - OpenAI Chat History, Privacy & Training GPT-5
09:45 - Where is GPT-4 with Images? & OpenAI Enterprise Deals
12:52 - Max Tegmark's TIME article & Superintelligence, Doomsdaying & Fear
29:23 - Anxiety in AI Models, AI "Emotions" and Motivation
37:33 - AI Distribution: How Much is AI Changing our Lives? AI Hype Cycle
46:01 - Vector Database Funding, Increasing Prompt Sizes, Scaling Transformer to 1M Tokens & Beyond Paper
55:25 - VCs Struggling to Know What & Where to Invest in AI
1:00:39 - Segment Anything in Medical Images & Medical Funding
1:03:20 - Weed Zapping AI Stopping Pesticide Use

SOURCES:
https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt
https://twitter.com/gdb/status/1651306937991168002?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://time.com/6273743/thinking-that-could-doom-us-with-ai/
https://twitter.com/fchollet/status/1650738654061658112?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://twitter.com/aisolopreneur/status/1646928323363102734?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://twitter.com/sterlingcrispin/status/1650320043107061761?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://www.pinecone.io/learn/series-b/
https://arxiv.org/pdf/2304.11062.pdf
https://arxiv.org/pdf/2304.11111.pdf
https://arxiv.org/pdf/2304.12306v1.pdf
https://twitter.com/Rainmaker1973/status/1649743415549067267

If you like this podcast please consider leaving a review or sharing with a friend.
15 snips
Dec 9, 2024 • 53min

Max Tegmark: Will AI Surpass Human Intelligence?

Max Tegmark, a renowned physicist and machine learning expert, dives deep into the realm of artificial intelligence and its potential to exceed human intelligence. He raises thought-provoking questions about the ethical responsibilities that come with AI advancements. The discussion covers the societal impacts of AI, the necessity of regulations, and the exciting possibilities of AI in education and science. Tegmark even draws fascinating parallels between AI and cosmology, pondering whether we live in a multiverse and how that insight could shape our understanding of intelligence.
15 snips
Nov 22, 2022 • 2h 12min

Making Sense of Artificial Intelligence

Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating. And make sure to stick around for the end of each episode, where we provide our list of recommendations from the worlds of film, television, literature, music, and art.

In this episode, we explore the landscape of Artificial Intelligence. We’ll listen in on Sam’s conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI — including the control problem and the value-alignment problem — as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We’ll then be introduced to philosopher Nick Bostrom’s “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance.

We’ll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We’ll then touch on the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we’re building using “Deep Learning” are really marching us towards our super-intelligent overlords.

Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.
14 snips
Jan 4, 2024 • 2h 26min

Max Tegmark & Eric Weinstein: AI, Aliens, Theories of Everything, & New Year’s Resolutions! (2020) (#383)

Max Tegmark and Eric Weinstein discuss topics including remote teaching, filter bubbles, diverse thinking, scientific freedom, the impact of social media on creativity, the motivation behind doing science, pursuing a PhD without an advisor, the state of academia, funding challenges, the relationship between experimentalists and theorists in physics, and the simulation argument. A sponsored message promoting 'The Jordan Harbinger Show' is also included.