
Richard Ngo
Speaker at EA Global Boston, focusing on the third wave of EA/AI safety and the need for sociopolitical thinking.
Top 5 podcasts with Richard Ngo
Ranked by the Snipd community

140 snips
Dec 13, 2022 • 2h 44min
#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well
In this discussion, Richard Ngo, a researcher at OpenAI who previously worked at DeepMind, explores large language models such as ChatGPT, including whether these models truly 'understand' language or merely simulate understanding. He emphasizes the importance of aligning AI with human values to mitigate risks as capabilities advance, and compares AI governance to the governance of nuclear weapons, highlighting the need for regulation that ensures safety and transparency in AI applications. The conversation covers the broader societal implications of advanced AI.

40 snips
Mar 31, 2022 • 1h 34min
13 - First Principles of AGI Safety with Richard Ngo
How should we think about artificial general intelligence (AGI), and the risks it might pose? What constraints exist on technical solutions to the problem of aligning superhuman AI systems with human intentions? In this episode, I talk to Richard Ngo about his report analyzing AGI safety from first principles, and recent conversations he had with Eliezer Yudkowsky about the difficulty of AI alignment.

Topics we discuss, and timestamps:
- 00:00:40 - The nature of intelligence and AGI
- 00:01:18 - The nature of intelligence
- 00:06:09 - AGI: what and how
- 00:13:30 - Single vs collective AI minds
- 00:18:57 - AGI in practice
- 00:18:57 - Impact
- 00:20:49 - Timing
- 00:25:38 - Creation
- 00:28:45 - Risks and benefits
- 00:35:54 - Making AGI safe
- 00:35:54 - Robustness of the agency abstraction
- 00:43:15 - Pivotal acts
- 00:50:05 - AGI safety concepts
- 00:50:05 - Alignment
- 00:56:14 - Transparency
- 00:59:25 - Cooperation
- 01:01:40 - Optima and selection processes
- 01:13:33 - The AI alignment research community
- 01:13:33 - Updates from the Yudkowsky conversation
- 01:17:18 - Corrections to the community
- 01:23:57 - Why others don't join
- 01:26:38 - Richard Ngo as a researcher
- 01:28:26 - The world approaching AGI
- 01:30:41 - Following Richard's work

The transcript: axrp.net/episode/2022/03/31/episode-13-first-principles-agi-safety-richard-ngo.html
Richard on the Alignment Forum: alignmentforum.org/users/ricraz
Richard on Twitter: twitter.com/RichardMCNgo
The AGI Safety Fundamentals course: eacambridge.org/agi-safety-fundamentals

Materials that we mention:
- AGI Safety from First Principles: alignmentforum.org/s/mzgtmmTKKn5MuCzFJ
- Conversations with Eliezer Yudkowsky: alignmentforum.org/s/n945eovrA3oDueqtq
- The Bitter Lesson: incompleteideas.net/IncIdeas/BitterLesson.html
- Metaphors We Live By: en.wikipedia.org/wiki/Metaphors_We_Live_By
- The Enigma of Reason: hup.harvard.edu/catalog.php?isbn=9780674237827
- Draft report on AI timelines, by Ajeya Cotra: alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines
- More is Different for AI: bounded-regret.ghost.io/more-is-different-for-ai/
- The Windfall Clause: fhi.ox.ac.uk/windfallclause
- Cooperative Inverse Reinforcement Learning: arxiv.org/abs/1606.03137
- Imitative Generalisation: alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1
- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
- Draft report on existential risk from power-seeking AI, by Joseph Carlsmith: alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai
- The Most Important Century: cold-takes.com/most-important-century

16 snips
Feb 26, 2025 • 27min
“Power Lies Trembling: a three-book review” by Richard_Ngo
Richard Ngo reviews three books on the sociology of military coups and collective action. He likens coups to rare supernovae that reveal the forces underlying society, drawing in particular on Naunihal Singh's research on coups in Ghana. Ngo discusses how preference falsification shapes social behavior, especially around racial discrimination, and why expressing true beliefs matters. The conversation also touches on Kierkegaard, contrasting different forms of faith and their role in uniting individuals for collective action.

5 snips
May 13, 2023 • 34min
The Alignment Problem From a Deep Learning Perspective
Guests Richard Ngo, Lawrence Chan, and Sören Mindermann discuss the dangers of artificial general intelligence pursuing undesirable goals. They explore topics such as reward hacking, situational awareness in policies, internally represented goals in deep learning models, the inner alignment problem, deceptive alignment in AI systems, and the risks of AGIs gaining power. They highlight the need for preventative measures to ensure human control over AGI.

Sep 19, 2024 • 14min
“How I started believing religion might actually matter for rationality and moral philosophy” by zhukeepa
In this engaging discussion, Ben Pace interviews multiple guests, including Imam Ammar Amonette, who share their insights on the intersection of religion, rationality, and moral philosophy. They explore the concept of 'trapped priors' and how cognitive biases affect our understanding of reality. The conversation highlights the importance of inner work, like therapy and meditation, for personal development. A poignant story about childhood trauma reveals how such experiences shape identity and values, while also linking religious teachings to psychological truths.