For Humanity: An AI Safety Podcast

Episode #32 - “Humans+AIs=Harmony?” For Humanity: An AI Risk Podcast

Jun 12, 2024
01:37:00

Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef?

(FULL INTERVIEW STARTS AT 00:23:21)

Please Donate Here To Help Promote For Humanity

https://www.paypal.com/paypalme/forhumanitypodcast

In episode 32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk-related projects. He believes humans and AGIs could co-exist in mutual symbiosis.

This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable yet probable outcome: the end of all life on Earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL BOOK!!!

https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

NYT: OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance

https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html?unlocked_article_code=1.xE0._mTr.aNO4f_hEp2J4&smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb

Dwarkesh Patel Interviews Another Whistleblower

Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History

Roman Yampolskiy on Lex Fridman

Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

Gladstone AI on Joe Rogan

Joe Rogan Experience #2156 - Jeremie & Edouard Harris

Peter Jensen’s Videos:

HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25)

WHY do we want AI? For our Humanity (1:00)

WHAT is the BIG Problem? Wanted: SafeAI Forever (3:00)

FIRST do no harm. (Safe AI Blog)

DECK. On For Humanity Podcast “Just the FACTS, please. WHY? WHAT? HOW?” (flip book)

https://discover.safeaiforever.com/

JOIN THE FIGHT, help Pause AI!!!!

Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST

https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety

Statement on AI Risk | CAIS

https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes 

https://twitter.com/AISafetyMemes

TIMESTAMPS:

**The release of products that are safe (00:00:00)**

**Breakthroughs in AI research (00:00:41)**

**OpenAI whistleblower concerns (00:01:17)**

**Roman Yampolskiy's appearance on Lex Fridman podcast (00:02:27)**

**The capabilities and risks of AI systems (00:03:35)**

**Interview with Gladstone AI founders on Joe Rogan podcast (00:08:29)**

**OpenAI whistleblower's interview on Hard Fork podcast (00:14:08)**

**Peter Jensen's work on AI risk and media communication (00:20:01)**

**The interview with Peter Jensen (00:22:49)**

**Mutualistic Symbiosis and AI Containment (00:31:30)**

**The Probability of Catastrophic Outcome from AI (00:33:48)**

**The AI Safety Institute and Regulatory Efforts (00:42:18)**

**Regulatory Compliance and the Need for Safety (00:47:12)**

**The hard compute cap and hardware adjustment (00:47:47)**

**Physical containment and regulatory oversight (00:48:29)**

**Funding and science for AI safety (00:49:59)**

**Viewing the issue as a big business regulatory issue vs. a national security issue (00:50:18)**

**OpenAI's power allocation and ethical concerns (00:51:44)**

**Concerns about AI's impact on employment and societal well-being (00:53:12)**

**Parental instinct and the urgency of AI safety (00:56:32)**

