
Richard Ngo

Speaker at EA Global Boston, focusing on the third wave of EA/AI safety and the need for sociopolitical thinking.

Top 5 podcasts with Richard Ngo

Ranked by the Snipd community
140 snips
Dec 13, 2022 • 2h 44min

#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

Richard Ngo, a researcher at OpenAI who previously worked at DeepMind, discusses large language models like ChatGPT, including whether these models truly 'understand' language or merely simulate understanding. He argues that aligning AI with human values is essential to mitigating risks as the technology advances, and compares AI governance to the governance of nuclear weapons, stressing the need for regulation that ensures safety and transparency in AI applications.
40 snips
Mar 31, 2022 • 1h 34min

13 - First Principles of AGI Safety with Richard Ngo

How should we think about artificial general intelligence (AGI), and the risks it might pose? What constraints exist on technical solutions to the problem of aligning superhuman AI systems with human intentions? In this episode, I talk to Richard Ngo about his report analyzing AGI safety from first principles, and recent conversations he had with Eliezer Yudkowsky about the difficulty of AI alignment.

Topics we discuss, and timestamps:
- 00:00:40 - The nature of intelligence and AGI
  - 00:01:18 - The nature of intelligence
  - 00:06:09 - AGI: what and how
  - 00:13:30 - Single vs collective AI minds
- 00:18:57 - AGI in practice
  - 00:18:57 - Impact
  - 00:20:49 - Timing
  - 00:25:38 - Creation
  - 00:28:45 - Risks and benefits
- 00:35:54 - Making AGI safe
  - 00:35:54 - Robustness of the agency abstraction
  - 00:43:15 - Pivotal acts
- 00:50:05 - AGI safety concepts
  - 00:50:05 - Alignment
  - 00:56:14 - Transparency
  - 00:59:25 - Cooperation
- 01:01:40 - Optima and selection processes
- 01:13:33 - The AI alignment research community
  - 01:13:33 - Updates from the Yudkowsky conversation
  - 01:17:18 - Corrections to the community
  - 01:23:57 - Why others don't join
- 01:26:38 - Richard Ngo as a researcher
- 01:28:26 - The world approaching AGI
- 01:30:41 - Following Richard's work

The transcript: axrp.net/episode/2022/03/31/episode-13-first-principles-agi-safety-richard-ngo.html

Richard on the Alignment Forum: alignmentforum.org/users/ricraz
Richard on Twitter: twitter.com/RichardMCNgo
The AGI Safety Fundamentals course: eacambridge.org/agi-safety-fundamentals

Materials that we mention:
- AGI Safety from First Principles: alignmentforum.org/s/mzgtmmTKKn5MuCzFJ
- Conversations with Eliezer Yudkowsky: alignmentforum.org/s/n945eovrA3oDueqtq
- The Bitter Lesson: incompleteideas.net/IncIdeas/BitterLesson.html
- Metaphors We Live By: en.wikipedia.org/wiki/Metaphors_We_Live_By
- The Enigma of Reason: hup.harvard.edu/catalog.php?isbn=9780674237827
- Draft report on AI timelines, by Ajeya Cotra: alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines
- More is Different for AI: bounded-regret.ghost.io/more-is-different-for-ai/
- The Windfall Clause: fhi.ox.ac.uk/windfallclause
- Cooperative Inverse Reinforcement Learning: arxiv.org/abs/1606.03137
- Imitative Generalisation: alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1
- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
- Draft report on existential risk from power-seeking AI, by Joseph Carlsmith: alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai
- The Most Important Century: cold-takes.com/most-important-century
5 snips
May 13, 2023 • 34min

The Alignment Problem From a Deep Learning Perspective

Richard Ngo, Lawrence Chan, and Sören Mindermann discuss the risk of artificial general intelligence pursuing undesirable goals. Topics include reward hacking, situational awareness in learned policies, internally represented goals in deep learning models, the inner alignment problem, deceptive alignment in AI systems, and the danger of AGIs gaining power. They argue for preventative measures to keep AGI under human control.
Sep 19, 2024 • 14min

“How I started believing religion might actually matter for rationality and moral philosophy” by zhukeepa

Ben Pace and guests, including Imam Ammar Amonette, share their insights on the intersection of religion, rationality, and moral philosophy. They explore the concept of 'trapped priors' and how cognitive biases shape our understanding of reality, and discuss the role of inner work, such as therapy and meditation, in personal development. A story about childhood trauma shows how such experiences shape identity and values, while also linking religious teachings to psychological truths.
Mar 27, 2025 • 47min

“Third-wave AI safety needs sociopolitical thinking” by Richard_Ngo

Richard Ngo, speaking at EA Global Boston, discusses pressing themes in AI safety and effective altruism, with a focus on sociopolitical thinking. He outlines three waves of EA and AI safety, argues that the field needs higher-quality sociopolitical engagement, and draws cautionary lessons from environmentalism's unintended consequences. Ngo also analyzes cultural dynamics, contrasting views on how talent is distributed, and the regulatory factors shaping economic growth and energy policy, emphasizing a collaborative approach to AI governance in an evolving landscape.