
Wei Dai
Computer scientist and writer known for contributions to cryptography and rationalist discourse; author of the LessWrong posts narrated in the episodes below, including one drawing an analogy between metaethics and the culture of cryptography.
Top 3 podcasts with Wei Dai
Ranked by the Snipd community

4 snips
Nov 14, 2025 • 4min
“Please, Don’t Roll Your Own Metaethics” by Wei Dai
Wei Dai, a computer scientist known for his work in cryptography and rationalist discourse, explores the parallels between cryptography and metaethics. He shares a story from his internship that highlights the risks of trusting homemade cryptographic designs, and explains why philosophical ideas are harder to critique than cryptographic schemes, whose failures are clear-cut. He asks whether we would all be better off with lower confidence in novel philosophical positions, and invites listeners to reflect and share their own thoughts on the question.

Dec 3, 2025 • 4min
“Racing For AI Safety™ was always a bad idea, right?” by Wei Dai
Wei Dai, a cryptographer and prominent voice in the AI risk community, revisits historical debates around MIRI's controversial plan to build a Friendly AI. He argues that MIRI's uncertainty about alignment did not justify that approach, critiques their novel metaethics as risky, and discusses the dangers of concentrating unchecked power. He emphasizes that MIRI's strategy never earned public trust, warning that it could inspire a dangerous competitive race for AI safety that diverts crucial resources from more effective solutions.

Nov 12, 2025 • 4min
“Please, Don’t Roll Your Own Metaethics” by Wei Dai
In this discussion, Wei Dai, a cryptography expert and writer on applied philosophy, draws parallels between cryptography and metaethics. He recounts an experience from his internship that illustrates the pitfalls of trusting amateur-designed systems, and argues against rolling your own metaethics, emphasizing the danger of overconfidence in novel philosophical positions. He stresses the need for humility in discussions of AI risk and invites feedback to improve understanding in this complex area.


