Dr. Miles Brundage, Head of Policy Research at OpenAI, discusses AI models such as GPT-3, DALL-E, Codex, and CLIP, as well as AI policy, safety, alignment, and the future impact of AI on various professions.
AI policy is crucial for understanding AI's societal impact and for mitigating harmful use cases.
AI models like Codex and CLIP require human oversight to prevent misuse.
Considerations for deploying AI models include the evidence supporting a use case, gradually scaling user bases, and comparing performance against human baselines.
Deep dives
Overview of OpenAI and AI Model Rollouts
At OpenAI, Dr. Miles Brundage leads policy research on the responsible deployment of advanced models like GPT-3, DALL-E, Codex, and CLIP. He discusses considerations for putting AI models into production, including insight into the rollout process for cutting-edge models that predict image classes and aid in writing software.
Importance of AI Policy Research and Mitigating Harm
Dr. Brundage highlights the significance of AI policy research in understanding societal impacts and preventing harmful AI use cases. The team at OpenAI employs methods like red teaming and collaborates internally and externally to mitigate biases, disinformation, and misuse through better training data, human feedback, and product policies.
Challenges in Code Generation and Risk Mitigation
Code generation models like Codex present challenges such as producing secure and accurate code, while language-image models such as CLIP raise their own classification risks. The emphasis is on avoiding misuse and building in human oversight to prevent reckless or naive application of AI-generated code. The evolving landscape calls for vigilance and ongoing research into ethical code generation practices.
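As an illustration of that kind of oversight, here is a minimal, hypothetical Python sketch that asks a Codex-style completion endpoint for code and requires explicit human approval before anything runs. It assumes the 2021-era OpenAI Completion API; the "davinci-codex" engine name, the prompt, and the approval flow are illustrative examples, not OpenAI's prescribed workflow.

    import openai  # assumes OPENAI_API_KEY is set in the environment

    # Illustrative prompt; "davinci-codex" was the engine name used
    # during the Codex beta and stands in for any code model here.
    PROMPT = '"""Return the nth Fibonacci number."""\ndef fib(n):'

    response = openai.Completion.create(
        engine="davinci-codex",
        prompt=PROMPT,
        max_tokens=128,
        temperature=0,
    )
    candidate = PROMPT + response["choices"][0]["text"]

    # Human-in-the-loop gate: never run generated code automatically.
    print("=== Generated code for review ===")
    print(candidate)
    if input("Approve for execution? [y/N] ").strip().lower() == "y":
        # Even approved code should run sandboxed in practice.
        exec(compile(candidate, "<codex>", "exec"))
    else:
        print("Rejected; code was not executed.")

The point is the gate rather than the API call: generated code is treated as an untrusted draft until a person has reviewed it.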
Issues with Image Recognition Models
Image recognition models can exhibit serious biases; in one widely reported incident, Google Photos misclassified darker-skinned individuals as gorillas. Google's fix was to stop predicting the 'gorilla' label entirely, which illustrates the limits of post hoc fixes. More broadly, these models may perform better on some demographic groups than others and may favor Western concepts, leading to inaccuracies when recognizing diverse cultural elements.
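Zero-shot models like CLIP make the label-set issue concrete: the deployer supplies the candidate labels at inference time, so removing a label (as Google did) changes the output space without changing the model itself. Below is a minimal sketch using the open-source openai/CLIP package; the image path and label strings are placeholders.

    import torch
    import clip  # pip install git+https://github.com/openai/CLIP.git
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # The candidate label set is a deployment decision: dropping a label
    # removes it from the predictions but does not fix the representation.
    labels = ["a photo of a dog", "a photo of a cat", "a diagram"]
    image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)  # placeholder path
    text = clip.tokenize(labels).to(device)

    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).cpu().numpy()

    for label, p in zip(labels, probs[0]):
        print(f"{label}: {p:.3f}")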
Concerns and Considerations for AI Deployment
In determining whether an AI model is ready for deployment, the evidence supporting narrow versus broad applications plays a crucial role. Gradually scaling up the user base through API-based deployment offers flexibility in approving use cases. Evaluating alternatives and existing technologies, and comparing AI systems to human baselines, can help identify genuine advances and ensure appropriate guardrails against potential harms.
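One way to make the human-baseline comparison operational is a simple evaluation gate. The sketch below is hypothetical: it assumes you have model predictions and human annotator predictions on the same labeled evaluation set, and the margin threshold is an arbitrary example value.

    def accuracy(preds, labels):
        """Fraction of predictions matching the reference labels."""
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)

    def ready_to_widen_rollout(model_preds, human_preds, labels, margin=0.02):
        """Hypothetical gate: expand the user base only if the model is
        within `margin` of the human baseline on the same evaluation set."""
        return accuracy(model_preds, labels) >= accuracy(human_preds, labels) - margin

    # Tiny worked example with made-up predictions.
    labels      = ["cat", "dog", "dog", "cat", "dog"]
    model_preds = ["cat", "dog", "cat", "cat", "dog"]   # 0.8 accuracy
    human_preds = ["cat", "dog", "dog", "cat", "cat"]   # 0.8 accuracy
    print(ready_to_widen_rollout(model_preds, human_preds, labels))  # True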
Dr. Miles Brundage, Head of Policy Research at OpenAI, joins Jon Krohn this week to discuss AI model production, policy, safety, and alignment. Tune in to hear him speak on GPT-3, DALL-E, Codex, and CLIP as well.
In this episode you will learn:
• Miles’ role as Head of Policy Research at OpenAI [4:35]
• OpenAI's DALL-E model [7:20]
• OpenAI's natural language model GPT-3 [30:43]
• OpenAI's automated software-writing model Codex [36:57]
• OpenAI’s CLIP model [44:01]
• What sets AI policy, AI safety, and AI alignment apart from each other [1:07:03]
• How AI will likely augment more professions than it displaces [1:12:06]