OpenAI forms a team to prepare for catastrophic risks around AI, discussing its Risk-Informed Development Policy and AI alignment. The episode also covers AI usage and satisfaction survey results and global efforts toward AI governance.
The new team aims to evaluate and protect against a range of catastrophic risks, including individualized persuasion and cybersecurity threats.
Deep dives
Twelve Labs' Video Language Model: Pegasus-1
Pegasus-1 is a video language model developed by Twelve Labs that integrates visual, audio, and speech information to generate holistic text summaries of videos. The model, with approximately 80 billion parameters, outperforms previous video language models and enables new ways of interacting with video intelligently. It is not yet publicly available, but interested users can join a waiting list for access.
Shutterstock's AI Editing of Stock Photos
Shutterstock lets users transform photos from its library using AI editing tools. Users can resize and expand images, remove backgrounds, and describe modifications in natural language. The aim is to address copyright concerns and to compensate artists whose licensed images are edited with AI. Although the tools have constraints, this AI extension of an existing stock photo library offers one answer to questions of artist compensation.
OpenAI's Catastrophic Risk Preparedness Team
OpenAI recently announced the formation of its catastrophic risk preparedness team, focused on the safety and risks associated with AI. The team aims to track, evaluate, forecast, and protect against various catastrophic risks, ranging from individualized persuasion to cybersecurity; chemical, biological, radiological, and nuclear (CBRN) threats; and autonomous replication and adaptation. OpenAI also introduced a Risk-Informed Development Policy to integrate risk management throughout the development and deployment process. The company is inviting submissions for its Preparedness Challenge, offering API credits for proposals addressing catastrophic misuse prevention.
OpenAI forms a team to prepare for the most catastrophic risks around AI. NLW explores the announcement and also looks at the new UN AI advisory council.
ABOUT THE AI BREAKDOWN
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/