The chapter discusses default bias and its implications, acknowledging that default options can lead to biased decision-making. The speaker explores the difference between stated and revealed preferences, suggesting that behavior often aligns with certain policy outcomes. The chapter concludes with a discussion of using strategic foresight to mitigate risks from AI and the need for a more nuanced understanding of the situation's complexity and uncertainty.
Read the full transcript here.
How can we find and expand the limitations of our imaginations, especially with respect to possible futures for humanity? What sorts of existential threats have we not yet even imagined? Why is there a failure of imagination among the general populace about AI safety? How can we make better decisions under uncertainty and avoid decision paralysis? What kinds of tribes have been forming lately within AI fields? What are the differences between alignment and control in AI safety? What do people most commonly misunderstand about AI safety? Why can't we just turn a rogue AI off? What threats from AI are unique in human history? What can the average person do to help mitigate AI risks? What are the best ways to communicate AI risks to the general populace?
Darren McKee (MSc, MPA) is the author of the just-released Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. He is a speaker and sits on the Board of Advisors for AIGS Canada, the leading safety and governance network in the country. McKee also hosts the international award-winning podcast, The Reality Check, a top 0.5% podcast on Listen Notes with over 4.5 million downloads. Learn more about him on his website, darrenmckee.info, or follow him on X / Twitter at @dbcmckee.