
Markus Anderljung

Head of Policy at the Centre for the Governance of AI

Top 3 podcasts with Markus Anderljung

Ranked by the Snipd community
30 snips
Apr 30, 2024 • 22min

Ep. 10: Navigating Global AI Regulation - The Brussels Effect and Beyond

Markus Anderljung, Head of Policy at the Centre for the Governance of AI, discusses the EU AI Act's impact on global AI regulation. They explore the 'Brussels Effect,' the Act's potential influence on U.S. policies, and its enforcement mechanisms. The podcast delves into standardized approaches for high-risk AI systems and the establishment of an AI office within the European Commission.
25 snips
Jul 10, 2023 • 2h 7min

#156 – Markus Anderljung on how to regulate cutting-edge AI models

"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it. And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." — Markus AnderljungIn today’s episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems.Links to learn more, summary and full transcript.They cover:The need for AI governance, including self-replicating models and ChaosGPTWhether or not AI companies will willingly accept regulationThe key regulatory strategies including licencing, risk assessment, auditing, and post-deployment monitoringWhether we can be confident that people won't train models covertly and ignore the licencing systemThe progress we’ve made so far in AI governanceThe key weaknesses of these approachesThe need for external scrutiny of powerful modelsThe emergent capabilities problemWhy it really matters where regulation happensAdvice for people wanting to pursue a career in this fieldAnd much more.Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.Producer: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore
Sep 22, 2023 • 22min

Highlights: #156 – Markus Anderljung on how to regulate cutting-edge AI models

Markus Anderljung, an expert on regulating cutting-edge AI models, discusses the challenges of regulating frontier AI and the need to address concerns early. They also explore how AI models are evaluated for controllability and dangerous capabilities, as well as the potential impact of regulatory approaches on AI adoption.