
Anca Dragan

Professor at UC Berkeley, specializing in human-robot interaction and reward engineering. Her research focuses on algorithms that enable robots to coordinate effectively with humans.

Top 3 podcasts with Anca Dragan

Ranked by the Snipd community
Aug 28, 2024 • 38min

AI Safety...Ok Doomer: with Anca Dragan

Anca Dragan, a lead for AI safety and alignment at Google DeepMind, dives into the pressing challenges of AI safety. She discusses the urgent need to align artificial intelligence with human values to prevent existential threats. The conversation covers the ethical dilemmas posed by AI recommendation systems and the interplay of competing objectives. Dragan also highlights innovative uses of AI, such as supporting citizens' assemblies to promote democratic dialogue. The episode serves as a vital reminder of the importance of human oversight in AI development.
Mar 19, 2020 • 1h 39min

#81 – Anca Dragan: Human-Robot Interaction and Reward Engineering

Anca Dragan is a professor at UC Berkeley specializing in human-robot interaction. She discusses the complexities of designing robots that coordinate effectively with humans, stressing the importance of aligning robotic behavior with human preferences. Anca dives into the challenges of autonomous driving, the intricacies of reward functions in AI, and the emotional connections people form with robots. The conversation touches on how robots can infer human intentions and on the philosophical implications of AI in our lives.
Aug 21, 2024 • 19min

“AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work” by Rohin Shah, Seb Farquhar, Anca Dragan

Join Rohin Shah, a key member of Google's AGI safety team, alongside Seb Farquhar, an existential risk expert, and Anca Dragan, a safety researcher. They dive into the evolving strategies for ensuring AI alignment and safety. Topics include innovative techniques for interpreting neural models, the challenges of scalable oversight, and the ethical implications of AI development. The trio also discusses future plans to address alignment risks, emphasizing the importance of collaboration and the role of mentorship in advancing AGI safety.