
Anca Dragan

Berkeley professor specializing in human-robot interaction and reward engineering.

Top 3 podcasts with Anca Dragan

Ranked by the Snipd community
Mar 19, 2020 • 1h 39min

#81 – Anca Dragan: Human-Robot Interaction and Reward Engineering

Anca Dragan is a professor at Berkeley, working on human-robot interaction — algorithms that look beyond the robot’s function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.

Support this podcast by supporting the sponsors and using the special code:
– Download Cash App on the App Store or Google Play & use code “LexPodcast”

EPISODE LINKS:
Anca’s Twitter: https://twitter.com/ancadianadragan
Anca’s Website: https://people.eecs.berkeley.edu/~anca/

This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click a timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:26 – Interest in robotics
05:32 – Computer science
07:32 – Favorite robot
13:25 – How difficult is human-robot interaction?
32:01 – HRI application domains
34:24 – Optimizing the beliefs of humans
45:59 – Difficulty of driving when humans are involved
1:05:02 – Semi-autonomous driving
1:10:39 – How do we specify good rewards?
1:17:30 – Leaked information from human behavior
1:21:59 – Three laws of robotics
1:26:31 – Book recommendation
1:29:02 – If a doctor gave you 5 years to live…
1:32:48 – Small act of kindness
1:34:31 – Meaning of life
Aug 28, 2024 • 38min

AI Safety...Ok Doomer: with Anca Dragan

Anca Dragan, a lead for AI safety and alignment at Google DeepMind, dives into the pressing challenges of AI safety. She discusses the urgent need to align artificial intelligence with human values to prevent existential threats. The conversation covers the ethical dilemmas posed by AI recommendation systems and the interplay of competing objectives. Dragan also highlights innovative uses of AI, like citizens' assemblies, to promote democratic dialogue. The episode serves as a vital reminder of the importance of human oversight in AI development.
Aug 21, 2024 • 19min

“AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work” by Rohin Shah, Seb Farquhar, Anca Dragan

Join Rohin Shah, a key member of Google DeepMind's AGI safety team, alongside Seb Farquhar, an existential risk expert, and Anca Dragan, a safety researcher. They dive into the evolving strategies for ensuring AI alignment and safety. Topics include innovative techniques for interpreting neural models, the challenges of scalable oversight, and the ethical implications of AI development. The trio also discusses future plans to address alignment risks, emphasizing the importance of collaboration and the role of mentorship in advancing AGI safety.