
For Humanity: An AI Safety Podcast

Latest episodes

Jun 24, 2024 • 5min

Episode #34 TRAILER - “The Threat of AI Autonomous Replication” For Humanity: An AI Risk Podcast

In Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director of the Centre pour la sécurité de l'IA (the French Center for AI Safety). Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed due to a public and a journalist class that are uneducated on AI risk, and the potential for a disastrous Yann LeCun-ification of the upcoming February 2025 Paris AI Safety Summit.

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord, Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:

The exponential growth of AI (00:00:00) Discussion on the potential exponential growth of AI and its implications for the future.
The mass of AI systems as an existential threat (00:01:05) Exploring the potential threat posed by the sheer mass of AI systems and its impact on existential risk.
The concept of warning shots (00:01:32) Elaboration on the concept of warning shots in the context of AI safety and the need for public understanding.
The importance of advocacy and public understanding (00:02:30) The significance of advocacy, public awareness, and the role of the safety community in creating and recognizing warning shots.
OpenAI’s superalignment team resignations (00:04:00) Analysis of the resignations from OpenAI’s superalignment team and their potential significance as a warning shot.
Jun 19, 2024 • 1h 23min

Episode #33 - “Dad vs. AGI” For Humanity: An AI Risk Podcast

In Episode #33, host John Sherman talks with Dustin Burham, a dad, an anesthetist, an AI risk realist, and a podcast host himself, about being a father while also understanding the realities of AI risk and the precarious moment we are living in.

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

Check out Dustin Burham’s fatherhood podcast:
https://www.youtube.com/@thepresentfathers

BUY STEPHEN HANSON’S BEAUTIFUL BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord, Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:

**The threat of AI to humanity (00:00:22)**
**Pope Francis’s address at the G7 summit on AI risk (00:02:31)**
**Starting a dialogue on tough subjects (00:05:44)**
**The challenges and joys of fatherhood (00:10:47)**
**Concerns and excitement about AI technology (00:15:09)**
**The Present Fathers Podcast (00:16:58)**
**Personal experiences of fatherhood (00:18:56)**
**The impact of AI risk on future generations (00:21:11)**
**Elon Musk’s Concerns (00:21:57)**
**Impact of Denial (00:23:40)**
**Potential AI Risks (00:24:27)**
**Psychopathy and Decision-Making (00:26:28)**
**Personal and Societal Impact (00:28:46)**
**AI Risk Awareness (00:30:12)**
**Ethical Considerations (00:31:46)**
**AI Technology and Human Impact (00:34:28)**
**Exponential Growth and Risk (00:36:06)**
**Emotion and Empathy in AI (00:37:58)**
**Antinatalism and Ethical Debate (00:41:04)**
**Antinatalist ideas (00:42:20)**
**Psychopathic tendencies among CEOs and decision-making (00:43:27)**
**The power of social media in influencing change (00:46:12)**
**The unprecedented threat of human extinction from AI (00:49:03)**
**Teaching large language models to love humanity (00:50:11)**
**Proposed measures for AI regulation (00:59:27)**
**China’s approach to AI safety regulations (01:01:12)**
**The threat of open-sourcing AI (01:02:50)**
**Protecting children from AI temptations (01:04:26)**
**Challenges of policing AI-generated content (01:07:06)**
**Hope for the future and engaging in AI safety (01:10:33)**
**Performance by YG Marley and Lauryn Hill (01:14:26)**
**Final thoughts and call to action (01:22:28)**
Jun 17, 2024 • 4min

Episode #33 TRAILER - “Dad vs. AGI” For Humanity: An AI Risk Podcast

In the Episode #33 trailer, host John Sherman talks with Dustin Burham, a dad, an anesthetist, an AI risk realist, and a podcast host himself, about being a father while also understanding the realities of AI risk and the precarious moment we are living in.

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:

Parental Concerns (00:00:00) A parent expresses worries about AI risks and emphasizes the need for cautious progress.
Risk Acceptance Threshold (00:00:50) The speaker discusses the acceptability of doom and risk in AI and robotics, drawing parallels with medical risk assessment.
Zero Risk Standard (00:01:34) The speaker emphasizes the medical industry’s zero-risk approach and contrasts it with the AI industry’s acceptance of potential doom.
Human Denial and Nuclear Brinksmanship (00:02:25) The power of denial and its impact on decision-making, including the tendency to ignore catastrophic possibilities.
Doom Prediction (00:03:17) The speakers express high levels of concern about potential doom, including a 98% doom prediction over the next 50 years.

RESOURCES:

Check out Dustin Burham’s fatherhood podcast:
https://www.youtube.com/@thepresentfathers

BUY STEPHEN HANSON’S BEAUTIFUL BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord, Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Jun 12, 2024 • 1h 37min

Episode #32 - “Humans+AIs=Harmony?” For Humanity: An AI Risk Podcast

Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef? (FULL INTERVIEW STARTS AT 00:23:21)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

In Episode #32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk-related projects. He believes it’s possible for humans and AGIs to co-exist in mutual symbiosis.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

NYT: OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html?unlocked_article_code=1.xE0._mTr.aNO4f_hEp2J4&smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb

Dwarkesh Patel Interviews Another Whistleblower
Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History

Roman Yampolskiy on Lex Fridman
Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

Gladstone AI on Joe Rogan
Joe Rogan Experience #2156 - Jeremie & Edouard Harris

Peter Jensen’s Videos:
HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25)
WHY do we want AI? For our Humanity (1:00)
WHAT is the BIG Problem? Wanted: SafeAI Forever (3:00)
FIRST do no harm. (Safe AI Blog)
DECK. On For Humanity Podcast “Just the FACTS, please. WHY? WHAT? HOW?” (flip book)
https://discover.safeaiforever.com/

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord, Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:

**The release of products that are safe (00:00:00)**
**Breakthroughs in AI research (00:00:41)**
**OpenAI whistleblower concerns (00:01:17)**
**Roman Yampolskiy’s appearance on the Lex Fridman podcast (00:02:27)**
**The capabilities and risks of AI systems (00:03:35)**
**Interview with Gladstone AI founders on the Joe Rogan podcast (00:08:29)**
**OpenAI whistleblower’s interview on the Hard Fork podcast (00:14:08)**
**Peter Jensen’s work on AI risk and media communication (00:20:01)**
**The interview with Peter Jensen (00:22:49)**
**Mutualistic Symbiosis and AI Containment (00:31:30)**
**The Probability of Catastrophic Outcome from AI (00:33:48)**
**The AI Safety Institute and Regulatory Efforts (00:42:18)**
**Regulatory Compliance and the Need for Safety (00:47:12)**
**The hard compute cap and hardware adjustment (00:47:47)**
**Physical containment and regulatory oversight (00:48:29)**
**Funding and science for AI safety (00:49:59)**
**Viewing the issue as a big-business regulatory issue vs. a national security issue (00:50:18)**
**OpenAI’s power allocation and ethical concerns (00:51:44)**
**Concerns about AI’s impact on employment and societal well-being (00:53:12)**
**Parental instinct and the urgency of AI safety (00:56:32)**
Jun 10, 2024 • 3min

Episode #32 TRAILER - “Humans+AIs=Harmony?” For Humanity: An AI Risk Podcast

Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef?

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

In Episode #32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk-related projects. He believes it’s possible for humans and AGIs to co-exist in mutual symbiosis.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Peter Jensen’s Video:
HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25)
https://www.youtube.com/watch?v=8yrIfCQBgdE
Jun 5, 2024 • 1h 16min

Episode #31 - “Trucker vs. AGI” For Humanity: An AI Risk Podcast

In Episode #31, John Sherman interviews Leighton, a 29-year-old American truck driver, about his concerns over human extinction and artificial intelligence. They discuss the urgency of raising awareness about AI risks, the potential for job displacement in industries like trucking, and the geopolitical implications of AI advancements. Leighton shares his plans to start a podcast and possibly use filmmaking to engage the public in AI safety discussions. Despite skepticism from others, they stress the importance of community and dialogue in understanding and mitigating AI threats, with Leighton highlighting the risk of a "singleton event" and ethical concerns in AI development.

Full Interview Starts at (00:10:18)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:

Leighton's Introduction (00:00:00)
Introduction to the Podcast (00:02:19)
Power of the First Followers (00:03:24)
Leighton's Concerns about AI (00:08:49)
Leighton's Background and AI Awareness (00:11:11)
Challenges in Spreading Awareness (00:14:18)
Distrust of Government and Family Involvement (00:23:20)
Government Imperfections (00:25:39)
AI Impact on National Security (00:26:45)
AGI Decision-Making (00:28:14)
Government Oversight of AGI (00:29:32)
Geopolitical Tension and AI (00:31:51)
Job Loss and AGI (00:37:20)
AI, Mining, and Space Race (00:38:02)
Public Engagement and AI (00:44:34)
Philosophical Perspective on AI (00:49:45)
The existential threat of AI (00:51:05)
Geopolitical tensions and AI risks (00:52:05)
AI's potential for global dominance (00:53:48)
Ethical concerns and AI welfare (01:01:21)
Preparing for AI risks (01:03:02)
The challenge of raising awareness (01:06:42)
A hopeful outlook (01:08:28)

RESOURCES:

Leighton’s Podcast on YouTube:
https://www.youtube.com/@UrNotEvenBasedBro

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord, Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Jun 3, 2024 • 5min

Episode #31 TRAILER - “Trucker vs. AGI” For Humanity: An AI Risk Podcast

In the Episode #31 TRAILER, John Sherman interviews a 29-year-old American truck driver about his concerns over human extinction and artificial intelligence.

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:

The challenge of keeping up (00:00:00) Discussion about the difficulty of staying informed amidst busy lives and the benefit of using podcasts to keep up.
The impact of social media bubbles (00:01:22) Exploration of how social media algorithms create bubbles and the challenge of getting others to pay attention to important information.
Geopolitical implications of technological advancements (00:02:00) Discussion about the potential implications of technological advancements, particularly artificial intelligence, for global competition.
Potential consequences of nationalizing AGI (00:04:21) Speculation on the potential consequences of nationalizing artificial general intelligence, including the potential use of a pandemic to gain a competitive advantage.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord, Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
May 29, 2024 • 1h 40min

Episode #30 - “Dangerous Days At Open AI” For Humanity: An AI Risk Podcast

Exploration of AI safety competence at OpenAI and the show's shift to AI risk. Challenges in achieving superalignment, unethical behavior in powerful organizations, and navigating AI ethics and regulation. Risks of AI biothreats, uncertainties in AI development, and debates on the limits of human vs. AI intelligence.
May 27, 2024 • 3min

Episode #30 TRAILER - “Dangerous Days At Open AI” For Humanity: An AI Risk Podcast

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

In Episode #30, John Sherman interviews Professor Olle Häggström on a wide range of AI risk topics. At the top of the list is the super-instability and super-exodus at OpenAI’s superalignment team following the resignations of Jan Leike and Ilya Sutskever.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord, Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
May 22, 2024 • 1h 7min

Episode #29 - “Drop Everything To Stop AGI” For Humanity: An AI Safety Podcast

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

The world is waking up to the existential danger of unaligned AGI. But we are racing against time. Some heroes are stepping up, people like this week’s guest, Chris Gerrby. Chris was successful in organizing people against AI in Sweden. In early May he left Sweden, moved to England, and is now spending 14 hours a day, 7 days a week, working to stop AGI. Learn how he plans to grow Pause AI as its new Chief Growth Officer, and hear his thoughts on how to make the case for pausing AI.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:

Dropping Everything to Stop AGI (00:00:00) Chris Gerrby’s dedication to working 14 hours a day to pause AI and the challenges he faces.
OpenAI’s Recent Events (00:01:11)
Pause AI and Chris Gerrby’s Involvement (00:05:28)
Chris Gerrby’s Journey and Involvement in AI Safety (00:06:44)
Coping with the Dark Outlook of AI Risk (00:19:02)
Beliefs About AGI Timeline (00:24:06)
The pandemic risk (00:25:30)
Losing control of AGI (00:26:40)
Stealth control and treacherous turn (00:28:38)
Relocation and intense work schedule (00:30:20)
Growth strategy for Pause AI (00:33:39)
Marketing and public relations (00:35:35)
Tailoring communications and gaining members (00:39:41)
Challenges in communicating urgency (00:44:36)
Path to growth for Pause AI (00:48:51)
Joining the Pause AI community (00:49:57)
Community involvement and support (00:50:33)
Pause AI’s role in the AI landscape (00:51:22)
Maintaining work-life balance (00:53:47)
Adapting personal goals for the cause (00:55:50)
Probability of achieving a pause in AI development (00:57:50)
Finding hope in personal connections (01:00:24)

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord, Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
