Andrew Critch

Author of the LessWrong post on cognitive biases contributing to AI x-risk.

Top 3 podcasts with Andrew Critch

Ranked by the Snipd community
16 snips
Jun 14, 2024 • 9min

“Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety)” by Andrew_Critch

Andrew Critch, an AI researcher, discusses the importance of social models in technical AI safety and alignment. He dispels the myth that technical progress alone guarantees safety, arguing that safety claims only make sense relative to a model of society, and emphasizes aligning technical work with human values for the benefit of humanity.
Mar 10, 2024 • 9min

[HUMAN VOICE] "CFAR Takeaways: Andrew Critch" by Raemon

The episode delves into Andrew Critch's insights on challenges in numeracy, inner simulation, and human desires. It explores teaching strategies like visualization and Internal Family Systems, while emphasizing the importance of cognitive and emotional skills for high performance.
Oct 15, 2024 • 25min

“My theory of change for working in AI healthtech” by Andrew_Critch

In this discussion, Andrew Critch, an AI alignment researcher now working in healthtech, shares his theory of change for addressing AI risk, particularly the impending arrival of AGI. He warns that industrial dehumanization, where economic activity gradually stops depending on or serving humans, could threaten humanity. Critch advocates for building human-centric industries, especially in healthcare, to keep human welfare economically relevant amid rapid AI advancement, and stresses the moral commitment needed in the sector to navigate these challenges.
