In this engaging discussion, So8res, an advocate for courageous communication about AI dangers, emphasizes the importance of openly addressing serious threats posed by artificial intelligence. He shares insights on how expressing concerns assertively can shift public perception and spur meaningful dialogue among policymakers. So8res also calls for a compelling literature project to raise awareness, urging community support and open discussions about the urgent AI issues we face. It's a clarion call for clarity and confidence in a crucial conversation.
ADVICE
Speak AI Dangers With Courage
Speak your AI danger concerns loudly and confidently without shame.
People take threats seriously when you present them as obvious and sensible.
ADVICE
Use Expert Authority Confidently
Cite credible voices like Nobel laureates and top researchers to strengthen your case.
Challenge dismissive people by asking what knowledge they have beyond experts.
ANECDOTE
Elected Official's Stark AI Fear
At a dinner with an elected official, the official expressed concern that AI superintelligence could wipe out humanity in as little as three years.
Others at the dinner could muster only mild concerns, highlighting differing levels of courage in voicing the threat.
This book delves into the potential risks of advanced artificial intelligence, arguing that the development of superintelligence could lead to catastrophic consequences for humanity. The authors make a compelling case for careful consideration and regulation of AI development, exploring various scenarios and potential outcomes while emphasizing the urgency of addressing rapidly advancing AI capabilities. Written in an accessible style that makes complex ideas understandable to a broad audience, the book serves as a call to action, urging policymakers and researchers to prioritize AI safety and prevent potential existential threats.
I think more people should say what they actually believe about AI dangers, loudly and often. Even if you work in AI policy.
I’ve been beating this drum for a few years now. I have a whole spiel about how your conversation-partner will react very differently if you share your concerns while feeling ashamed about them versus if you share your concerns as if they’re obvious and sensible, because humans are very good at picking up on your social cues. If you act as if it's shameful to believe AI will kill us all, people are more prone to treat you that way. If you act as if it's an obvious serious threat, they’re more likely to take it seriously too.
I have another whole spiel about how it's possible to speak on these issues with a voice of authority. Nobel laureates and lab heads and the most cited [...]
The original text contained 2 footnotes which were omitted from this narration.