When humans attained general intelligence, we didn't achieve our subsequent exponential growth in information-processing capacity by growing bigger brains. It's naive to assume that the fastest path from AGI to superintelligence involves simply training ever-larger LLMs on ever more data. Once AGI is tasked with discovering better architectures, AI progress will accelerate far beyond today's pace, and I.J. Good's intelligence explosion will have begun; some people will task AI with self-improvement for various purposes, including destroying humanity. But sure, we'll be fine, even if the asteroid hits us.
A reading of Max Tegmark's essay "The 'Don't Look Up' Thinking That Could Doom Us With AI"
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/