
AI Control: Improving Safety Despite Intentional Subversion
AI Safety Fundamentals: Alignment
Planning Projects and Write-ups for AI Safety
Projects include deep dives into the theory of change for AI control and the challenges of using control evaluations to ensure safe AI use. The code-backdooring setting focuses on auditing when access to GPT-4 is removed, emphasizing the expected number of backdoors rather than the chance of a single insertion, and highlights insufficiencies in the baseline auditing approach. Further discussion covers complex failure scenarios and applying AI control to policy, advocacy, or standards, with an emphasis on not deploying AI in ways that could lead to it taking over. Contact Fabian at rdwrs.com for more information.