
LessWrong (Curated & Popular)

Latest episodes

Jan 14, 2025 • 6min

“Applying traditional economic thinking to AGI: a trilemma” by Steven Byrnes

Steven Byrnes, author of a thought-provoking LessWrong post, dives into the intersection of traditional economics and Artificial General Intelligence. He discusses two foundational principles: the resilience of human labor value amidst population growth and the implications of demand on product pricing. Byrnes presents a captivating trilemma, exploring how AGI might challenge these longstanding economic views. With insights on the evolving landscape of labor and manufacturing, he sparks a fascinating debate on AGI's impact on the economy.
Jan 14, 2025 • 57min

“Passages I Highlighted in The Letters of J.R.R. Tolkien” by Ivan Vendrov

Ivan Vendrov, an author and Tolkien enthusiast, shares fascinating insights from J.R.R. Tolkien's letters. He discusses Tolkien's skepticism about machinery, emphasizing its potential for creating endless labor. Vendrov delves into the moral implications of power and the dangers of technological advancement. He explores the relationship between language and mythology, critiquing modern English as a limitation. The podcast also reflects on themes of love, war, and the complexities of human relationships, linked to Tolkien's rich narratives and philosophical insights.
Jan 13, 2025 • 15min

“Parkinson’s Law and the Ideology of Statistics” by Benquo

Dive into the critique of a World Bank intervention in Lesotho, where sparse data led to misguided conclusions and failed programs. Discover the importance of historical context and ethnographic research in improving decision-making. The discussion also highlights the economic challenges local communities face, such as limited access to resources. Lastly, the need for a shift away from purely statistical evidence in development policies is emphasized, advocating for tailored solutions that truly reflect local needs.
Jan 11, 2025 • 25min

“Capital Ownership Will Not Prevent Human Disempowerment” by beren

The discussion centers on the role of capital in an AI-driven future and its impact on power dynamics. It questions whether mere ownership will safeguard humanity's control as technology evolves. Historical comparisons spotlight potential pitfalls for traditional capital amidst rapid change. The podcast also highlights how increasing information asymmetries may erode human influence over businesses, emphasizing a necessary balance between autonomous AIs and regulatory measures to ensure safety and economic stability.
Jan 10, 2025 • 16min

“Activation space interpretability may be doomed” by bilalchughtai, Lucius Bushnaq

The podcast dives into the challenges of activation space interpretability in neural networks. It argues that current methods like sparse autoencoders and PCA may misrepresent neural models by isolating individual activation features. Instead of revealing the model's inner workings, these techniques often highlight superficial aspects of activations. The conversation explores the fundamental issues with such interpretations and discusses potential paths forward for accurate understanding.
Jan 9, 2025 • 9min

“What o3 Becomes by 2028” by Vladimir_Nesov

Vladimir Nesov, an author known for his insights on AI, dives deep into the future of training systems for OpenAI's o3. He discusses the significance of upcoming investments in scaling AI capabilities, with models projected to train at unprecedented FLOP levels. The conversation highlights the balance between data quality and quantity, emphasizing the need for roughly 50 trillion training tokens. Nesov also evaluates the current state of GPT-4 and its competitors, pondering what advancements might emerge by 2028 in this rapidly evolving landscape.
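For a rough sense of the scale these numbers imply (not a figure from the episode itself): a common rule of thumb estimates dense-transformer training compute as about 6 × parameters × tokens. A minimal sketch, where the parameter count is a purely illustrative assumption:

```python
# Rule-of-thumb training-compute estimate: FLOPs ≈ 6 * N * D,
# where N is the parameter count and D is the number of training tokens.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

tokens = 50e12   # 50 trillion training tokens, the figure discussed in the episode
params = 2e12    # hypothetical 2-trillion-parameter model (assumption, not from the episode)

flops = training_flops(params, tokens)
print(f"{flops:.1e}")  # 6 * 2e12 * 50e12 = 6.0e+26 FLOPs
```

The point of the sketch is only that token counts and parameter counts multiply: at this assumed scale, training lands in the mid-10^26 FLOP range, orders of magnitude beyond current frontier runs.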
Jan 9, 2025 • 25min

“What Indicators Should We Watch to Disambiguate AGI Timelines?” by snewman

The podcast dives into the murky waters of AGI timelines, capturing the contrasting theories on whether we're on the brink of a technological revolution or simply beginning a journey. It dissects the slow versus fast scenarios of AGI development, highlighting the hurdles current AI faces. The discussion also explores crucial indicators to watch for signs of a rapidly approaching AGI. With ongoing advancements, the narrative raises both excitement and skepticism about what lies ahead in the world of artificial intelligence.
Jan 8, 2025 • 1h 19min

“How will we update about scheming?” by ryan_greenblatt

I mostly work on risks from scheming (that is, misaligned, power-seeking AIs that plot against their creators, such as by faking alignment). Recently, I (and co-authors) released "Alignment Faking in Large Language Models", which provides empirical evidence for some components of the scheming threat model. One question that's really important is how likely scheming is. But it's also really important to know how much we expect this uncertainty to be resolved by various key points in the future. I think it's about 25% likely that the first AIs capable of obsoleting top human experts[1] are scheming. It's really important for me to know whether I expect to make basically no updates to my P(scheming)[2] between here and the advent of potentially dangerously scheming models, or whether I expect to be basically totally confident one way or another by that point (in the same way that, though I might [...]

Outline:
(03:20) My main qualitative takeaways
(04:56) It's reasonably likely (55%), conditional on scheming being a big problem, that we will get smoking guns.
(05:38) It's reasonably likely (45%), conditional on scheming being a big problem, that we won't get smoking guns prior to very powerful AI.
(15:59) My P(scheming) is strongly affected by future directions in model architecture and how the models are trained
(16:33) The model
(22:38) Properties of the AI system and training process
(23:02) Opaque goal-directed reasoning ability
(29:24) Architectural opaque recurrence and depth
(34:14) Where do capabilities come from?
(39:42) Overall distribution from just properties of the AI system and training
(41:20) Direct observations
(41:43) Baseline negative updates
(44:35) Model organisms
(48:21) Catching various types of problematic behavior
(51:22) Other observations and countermeasures
(52:02) Training processes with varying (apparent) situational awareness
(54:05) Training AIs to seem highly corrigible and (mostly) myopic
(55:46) Reward hacking
(57:28) P(scheming) under various scenarios (putting aside mitigations)
(01:05:19) An optimistic and a pessimistic scenario for properties
(01:10:26) Conclusion
(01:11:58) Appendix: Caveats and definitions
(01:14:49) Appendix: Capabilities from intelligent learning algorithms

The original text contained 15 footnotes, which were omitted from this narration.

First published: January 6th, 2025
Source: https://www.lesswrong.com/posts/aEguDPoCzt3287CCD/how-will-we-update-about-scheming
Narrated by TYPE III AUDIO.
Jan 8, 2025 • 20min

“OpenAI #10: Reflections” by Zvi

Sam Altman, CEO of OpenAI, discusses the complex landscape of AI leadership. He shares personal reflections on governance challenges following significant board disputes. Altman highlights the potential for advanced AI to transform productivity by 2025, while expressing concerns about the ethical implications of this technology. He also addresses accusations of fake news surrounding OpenAI and emphasizes the importance of trust and diverse perspectives in navigating the future of artificial intelligence.
Jan 7, 2025 • 2min

“Maximizing Communication, not Traffic” by jefftk

Explore the idea of prioritizing clear communication in writing over chasing traffic. The discussion critiques clickbait tactics popular among content creators and stresses the value of genuine idea sharing. Discover the unique approach of including full posts in RSS feeds, emphasizing understanding rather than mere clicks. This perspective encourages writers to focus on the essence of their messages rather than the metrics that drive ad revenue.