
For Humanity: An AI Safety Podcast
For Humanity, An AI Safety Podcast is the AI safety podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 to 10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
Latest episodes

Nov 19, 2024 • 1h 18min
AI Risk Update | One Year of For Humanity | Episode #52
In Episode #52, host John Sherman looks back on the first year of For Humanity. Select shows are featured, as well as a very special celebration of life at the end.

Oct 23, 2024 • 1h 6min
AI Risk Funding | Big Tech vs. Small Safety | Episode #51
In Episode #51, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding and the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.
Learn More About Founders Pledge:
https://www.founderspledge.com/
No celebration of life this week! YouTube finally got me with a copyright flag, so I had to edit the song out.
THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting:
https://storyfarm.zoom.us/j/816517210...
Passcode: 829191
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
****************
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Oct 21, 2024 • 6min
AI Risk Funding | Big Tech vs. Small Safety | Episode #51 TRAILER
In Episode #51 Trailer, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding and the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.
Learn More About Founders Pledge:
https://www.founderspledge.com/
THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting:
https://storyfarm.zoom.us/j/816517210...
Passcode: 829191
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
****************
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
***********************
Find us on:
YouTube: https://www.youtube.com/@forhumanitypodcast
Website: http://www.storyfarm.com/

Oct 21, 2024 • 1h 19min
Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50
In Episode #50, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his forecasting platform reveals about the AI future we are all heading toward.
THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting:
https://storyfarm.zoom.us/j/816517210...
Passcode: 829191
LEARN MORE:
www.metaculus.com
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
****************
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
**********************
Find us on:
YouTube: https://www.youtube.com/@forhumanitypodcast
Website: http://www.storyfarm.com/

Oct 14, 2024 • 5min
Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50 TRAILER
In Episode #50 TRAILER, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his forecasting platform reveals about the AI future we are all heading toward.
LEARN MORE–AND JOIN STOP AI
www.stopai.info
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.

Oct 14, 2024 • 1h 17min
Episode #49: “Go To Jail To Stop AI” | For Humanity: An AI Risk Podcast
In Episode #49, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization that is coming at the problem with different tactics and goals than Pause AI.
LEARN MORE–AND JOIN STOP AI
www.stopai.info
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Oct 8, 2024 • 5min
Go To Jail To Stop AI | Stopping AI | Episode #49 TRAILER
In Episode #49 TRAILER, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization that is coming at the problem with different tactics and goals than Pause AI.
LEARN MORE–AND JOIN STOP AI
www.stopai.info
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
*********************
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
*********************
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
*************
Find us on:
YouTube: https://www.youtube.com/@forhumanitypodcast
Website: http://www.storyfarm.com/

Oct 8, 2024 • 1h 9min
What Is The Origin Of AI Safety? | AI Safety Movement | Episode #48
In Episode #48, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show that 60-80% of the public oppose building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement’s origins.
Let’s build community! Live For Humanity Community Meetings via Zoom, Thursdays at 8:30pm EST. Explanation during the full show! USE THIS LINK: https://storyfarm.zoom.us/j/88987072403 PASSCODE: 789742
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
https://pauseai.info
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
*************************
Find us on:
YouTube: https://www.youtube.com/@forhumanitypodcast

Sep 30, 2024 • 8min
AI Safety's Limiting Origins: For Humanity, An AI Risk Podcast, Episode #48 Trailer
In Episode #48 Trailer, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show that 60-80% of the public oppose building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement’s origins.
Let’s build community! Live For Humanity Community Meetings via Zoom, Thursdays at 8:30pm EST. Explanation during the full show! USE THIS LINK: https://storyfarm.zoom.us/j/88987072403
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
https://pauseai.info
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Sep 25, 2024 • 1h 20min
Episode #47: “Can AI Be Controlled?” | For Humanity: An AI Risk Podcast
In Episode #47, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a nonprofit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it ever is? And how would the system that is supposed to save the world actually work if an AI lab caught a model scheming? Check out these links to Buck’s writing on these topics below:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to
Senate Hearing:
https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives
Harry Mack’s YouTube Channel
https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
https://pauseai.info
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes