
Paul Christiano

Researcher at OpenAI, working on aligning artificial intelligence with human values. Previously completed a PhD in theoretical computer science at UC Berkeley.

Top 5 podcasts with Paul Christiano

Ranked by the Snipd community
102 snips
Apr 24, 2023 • 1h 50min

168 - How to Solve AI Alignment with Paul Christiano

In this engaging discussion, Paul Christiano, head of the Alignment Research Center, tackles the pressing AI alignment problem. He provides insights on the scale and complexity of aligning AI systems with human values. Paul delves into the likelihood of AI risks, the potential timeline for these developments, and the ethical dilemmas that arise. He emphasizes the importance of proactive strategies and collaborative efforts to ensure the safe integration of AI into society. Humorously, he suggests that politeness could play a role in our future interactions with intelligent machines!
71 snips
Oct 31, 2023 • 3h 7min

Paul Christiano - Preventing an AI Takeover

Paul Christiano, a leading AI safety researcher and head of the Alignment Research Center, shares his insights on preventing AI disasters. He discusses the dual-use nature of alignment techniques and his modest timelines for AI advancements. Paul also explores the vision for a post-AGI world and the ethical implications of keeping advanced AI 'enslaved.' He emphasizes the need for responsible scaling policies and dives into his current research aimed at solving alignment challenges, highlighting the risks of misalignment and the complexities of AI behavior.
71 snips
Dec 2, 2021 • 2h 50min

12 - AI Existential Risk with Paul Christiano

Why would advanced AI systems pose an existential risk, and what would it look like to develop safer systems? In this episode, I interview Paul Christiano about his views of how AI could be so dangerous, what bad AI scenarios could look like, and what he thinks about various techniques to reduce this risk.

Topics we discuss, and timestamps:
- 00:00:38 - How AI may pose an existential threat
  - 00:13:36 - AI timelines
  - 00:24:49 - Why we might build risky AI
  - 00:33:58 - Takeoff speeds
  - 00:51:33 - Why AI could have bad motivations
  - 00:56:33 - Lessons from our current world
  - 01:08:23 - "Superintelligence"
- 01:15:21 - Technical causes of AI x-risk
  - 01:19:32 - Intent alignment
  - 01:33:52 - Outer and inner alignment
  - 01:43:45 - Thoughts on agent foundations
- 01:49:35 - Possible technical solutions to AI x-risk
  - 01:49:35 - Imitation learning, inverse reinforcement learning, and ease of evaluation
  - 02:00:34 - Paul's favorite outer alignment solutions
    - 02:01:20 - Solutions researched by others
    - 02:06:13 - Decoupling planning from knowledge
  - 02:17:18 - Factored cognition
  - 02:25:34 - Possible solutions to inner alignment
- 02:31:56 - About Paul
  - 02:31:56 - Paul's research style
  - 02:36:36 - Disagreements and uncertainties
  - 02:46:08 - Some favorite organizations
  - 02:48:21 - Following Paul's work

The transcript: axrp.net/episode/2021/12/02/episode-12-ai-xrisk-paul-christiano.html

Paul's blog posts on AI alignment: ai-alignment.com

Material that we mention:
- Cold Takes - The Most Important Century: cold-takes.com/most-important-century
- Open Philanthropy reports on:
  - Modeling the human trajectory: openphilanthropy.org/blog/modeling-human-trajectory
  - The computational power of the human brain: openphilanthropy.org/blog/new-report-brain-computation
  - AI timelines (draft): alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines
  - Whether AI could drive explosive economic growth: openphilanthropy.org/blog/report-advanced-ai-drive-explosive-economic-growth
- Takeoff speeds: sideways-view.com/2018/02/24/takeoff-speeds
- Superintelligence: Paths, Dangers, Strategies: en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
- Wei Dai on metaphilosophical competence:
  - Two neglected problems in human-AI safety: alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety
  - The argument from philosophical difficulty: alignmentforum.org/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty
  - Some thoughts on metaphilosophy: alignmentforum.org/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy
- AI safety via debate: arxiv.org/abs/1805.00899
- Iterated distillation and amplification: ai-alignment.com/iterated-distillation-and-amplification-157debfd1616
- Scalable agent alignment via reward modeling: a research direction: arxiv.org/abs/1811.07871
- Learning the prior: alignmentforum.org/posts/SL9mKhgdmDKXmxwE4/learning-the-prior
- Imitative generalisation (AKA 'learning the prior'): alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1
- When is unaligned AI morally valuable?: ai-alignment.com/sympathizing-with-ai-e11a4bf5ef6e
24 snips
Sep 2, 2023 • 3h 52min

Three: Paul Christiano on finding real solutions to the AI alignment problem

Paul Christiano, an AI safety expert, discusses how AI is likely to transform the world gradually, methods for ensuring AI systems comply with human intentions, the case for granting legal rights to AI systems, and AI's impact on scientific research. He also considers when, and on what timeline, human labor might become obsolete.
20 snips
Oct 2, 2018 • 3h 52min

#44 - Paul Christiano on how we'll hand the future off to AI, & solving the alignment problem

In this discussion, Paul Christiano, an OpenAI researcher with a theoretical computer science background, shares his insights on how AI will gradually transform our world. He delves into AI alignment issues, emphasizing strategies OpenAI is developing to ensure AI systems reflect human values. Christiano also predicts that AI may surpass humans in scientific research and discusses the potential economic impacts of AI on labor and savings. With provocative ideas on moral value and rights for AI, this conversation is a deep dive into the future of technology and ethics.