So8res, an author known for advocating open discussion of AI dangers, argues that more people should voice their concerns about AI threats confidently. He emphasizes that how we convey our fears shapes how others perceive them: sharing worries with authority and certainty, rather than hesitation, makes listeners and policymakers more likely to take them seriously. His insights encourage a shift toward more assertive dialogue about AI risks.
ADVICE
Speak AI Dangers With Confidence
Speak about AI dangers with confidence and without shame to make people take you seriously.
Present your concerns as obvious and sensible to influence how others respond to the topic.
INSIGHT
Experts Agree on AI Risks
Nobel laureates and top AI researchers acknowledge serious AI risk.
Dismissing AI dangers ignores expert consensus and weakens credibility.
ANECDOTE
Elected Official Warns of Imminent AI Threat
At a dinner with an elected official, people spoke hesitantly about minor AI dangers.
The official boldly shared worries about superintelligence wiping out humanity within three years.
This book examines the risks of advanced artificial intelligence, arguing that the development of superintelligence could lead to catastrophic consequences for humanity. The authors make a case for careful consideration and regulation of AI development, exploring various scenarios and outcomes while emphasizing the urgency of addressing rapidly advancing AI capabilities. Written in an accessible style that makes complex ideas understandable to a broad audience, it serves as a call to action, urging policymakers and researchers to prioritize AI safety and prevent potential existential threats.
I think more people should say what they actually believe about AI dangers, loudly and often. Even if you work in AI policy.
I’ve been beating this drum for a few years now. I have a whole spiel about how your conversation-partner will react very differently if you share your concerns while feeling ashamed about them versus if you share your concerns as if they’re obvious and sensible, because humans are very good at picking up on your social cues. If you act as if it's shameful to believe AI will kill us all, people are more prone to treat you that way. If you act as if it's an obvious serious threat, they’re more likely to take it seriously too.
I have another whole spiel about how it's possible to speak on these issues with a voice of authority. Nobel laureates and lab heads and the most cited [...]