OpenAI says it's conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains and carry out as much productive activity as one of today's largest corporations. Given the possibility of existential risk, they argue, we can't just be reactive: any effort above a certain capability threshold (or resource threshold, such as compute) will need to be subject to an international authority, though they admit how to do this is an open research question. In a "What's not in scope" section, they say it's important to allow companies and open-source projects to develop models below a significant capability threshold without this kind of regulation.
On today's episode, NLW looks at global regulatory proposals from OpenAI and Google, as well as a number of topics on the brief, including:
Intel's Aurora, a 1 trillion parameter model
Meta's new multilingual model can recognize 4,000 languages
Bill Gates talks about AI
CoDi, a multimodal model
1X's robot EVE
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Learn more: http://breakdown.network/