
Steven Byrnes
AI safety researcher at the Astera Institute, focusing on technical AI alignment. Holds a physics PhD from UC Berkeley and completed a physics postdoc at Harvard.
Best podcasts with Steven Byrnes
Ranked by the Snipd community

36 snips
Aug 1, 2025 • 3h 15min
The Man Who Might SOLVE AI Alignment — Dr. Steven Byrnes, AGI Safety Researcher @ Astera Institute
Dr. Steven Byrnes, an AI safety researcher at the Astera Institute and a former physics postdoc at Harvard, shares his insights on AI alignment. He explains why he puts the probability of AI doom around 90%, arguing that the real threat comes from future brain-like AGI rather than current LLMs. Byrnes explores the brain's two subsystems and how they shape decision-making, emphasizing the need to bring neuroscience into AI safety research. He critiques existing alignment approaches and warns of the risks posed by misaligned AI and the complexities of human-AI interaction.

Jan 14, 2025 • 6min
“Applying traditional economic thinking to AGI: a trilemma” by Steven Byrnes
Steven Byrnes, author of a thought-provoking LessWrong post, dives into the intersection of traditional economics and artificial general intelligence. He discusses two foundational principles: that human labor has held its value despite population growth, and that demand shapes product pricing. Byrnes presents a trilemma exploring how AGI might challenge these longstanding economic views. With insights on the evolving landscape of labor and manufacturing, he sparks a fascinating debate about AGI's impact on the economy.