The broader social evidence does not support the idea that AI will bring about the end of the world. From a market perspective, the world is doing well: markets are hitting new highs and show no signs of impending doom. Academia does not seem alarmed either; top scientists are not publishing models that predict AI catastrophe. In economics, a leading economist's model suggests AI will have only a modest impact on global GDP, conflicting with fears of AI-induced doom. Nick Bostrom, the renowned philosopher who initially raised AI safety concerns, has since shifted to a more optimistic stance. The closure of Oxford University's Future of Humanity Institute, which was known for its warnings about AI risk, further signals a change in perspective on AI outcomes.
A reading and discussion inspired by https://www.bloomberg.com/opinion/articles/2024-05-21/ai-safety-is-dead-and-chuck-schumer-faces-risks?srnd=undefined&sref=qUxVp6JU
**
Join Superintelligent at https://besuper.ai/ -- Practical, useful, hands-on AI education through tutorials and step-by-step how-tos. Use code podcast for 50% off your first month!
**
ABOUT THE AI BREAKDOWN
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://aidailybrief.beehiiv.com/
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@AIDailyBrief
Join the community: bit.ly/aibreakdown