
Nathan Labenz

Co-host of The Cognitive Revolution podcast, interviewing AI builders and researchers.

Top 10 podcasts with Nathan Labenz

Ranked by the Snipd community
112 snips
Dec 20, 2023 • 1h 13min

How to Use ChatGPT as a Copilot for Learning - Ep. 4 with Nathan Labenz

Nathan Labenz, founder of Waymark and host of The Cognitive Revolution podcast, discusses how ChatGPT can serve as a copilot for learning, for example by writing complex code in unfamiliar languages. They explore the potential of AI agents, custom instructions, using ChatGPT for knowledge exploration and patent applications, generating visual outputs, and ChatGPT's value as a research tool.
75 snips
Sep 6, 2024 • 1h 23min

AI's Impact on Geopolitics

Nathan Labenz, host of the Cognitive Revolution Podcast, and Samo Burja, a strategist in geopolitics, dive into the intricate relationship between AI and global power dynamics. They explore how advancements in AI are reshaping U.S.-China relations and the critical challenges of AI safety. The duo examines the importance of foundational research versus commercialization, highlights the complex chip supply chain, and advocates for collaboration in AI development to ensure safety and innovation in a rapidly changing landscape.
61 snips
Dec 22, 2023 • 3h 47min

#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

Nathan Labenz, entrepreneur and AI scout, discusses OpenAI's mission and the recent drama surrounding its leadership. He shares his experience as part of the GPT-4 red team, raising concerns about AI safety and control measures. The podcast explores OpenAI's actions in ensuring safety, the importance of specialized models, and the impact of GPT-4 on the field of AI. The conversation also delves into communication breakdowns, knowledge sharing practices, and the need for caution in open-sourcing AI models.
59 snips
Mar 11, 2023 • 2h 11min

Effective Accelerationism and the AI Safety Debate with Bayeslord, Beff Jezos, and Nathan Labenz

Anonymous founders of the Effective Accelerationist (e/acc) movement @Bayeslord and Beff Jezos (@BasedBeff) join Erik Torenberg, Dan Romero, and Nathan Labenz to debate views on AI safety.

TIMESTAMPS:
(00:00) Episode preview
(03:00) Intro to effective accelerationism
(08:00) Differences between effective accelerationism and effective altruism
(23:00) Effective accelerationism is bottoms-up
(42:00) Transhumanism
(46:00) "Equanimity amidst the singularity"
(48:30) Why AI safety is the wrong frame
(56:00) Pushing back against effective accelerationism
(01:06:00) The case for AI safety
(01:24:00) Upgrading civilizational infrastructure
(01:33:00) Effective accelerationism is anti-fragile
(01:39:00) Will we botch AI like we botched nuclear?
(01:46:00) Hidden costs of emphasizing downsides
(02:00:00) Are we in the same position as Neanderthals, before humans?
(02:09:00) "Doomerism has an unpriced opportunity cost of upside"
47 snips
Feb 5, 2025 • 1h 5min

Understanding US-China Relations Right Now with Nathan Labenz

Nathan Labenz, an expert on U.S.-China relations and host of "The Cognitive Revolution," dissects the complexities of the current geopolitical climate. He highlights Xi Jinping's ambitious plans that threaten U.S. dominance and discusses the effectiveness of recent U.S. chip export controls. The conversation dives into military technology advancements and the potential ramifications of AI competition, particularly in relation to Taiwan. Labenz advocates for strategic deterrence and innovation to navigate these turbulent waters and promote global stability.
42 snips
Jan 27, 2025 • 1h 19min

Nathan Labenz on the future of AI scaling

Nathan Labenz, host of the Cognitive Revolution podcast and an AI scout, joins to discuss the recent slowdown in AI scaling. He notes that while technology adoption has lagged, significant advancements still occur in model capabilities. Labenz anticipates continued rapid progress, maintaining that we're still on the steep part of the scaling curve. The conversation also highlights AI's potential to discover new scientific concepts, emphasizing the need for a deeper understanding of scaling laws and the complexities within AI organizations.
30 snips
Jan 24, 2024 • 2h 47min

#177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps

AI entrepreneur Nathan Labenz discusses the capabilities and limitations of AI, concerns about AI deception, breakthroughs in protein folding, and how the safety of self-driving cars compares to human drivers. The conversation also covers GPT's potential for vision, the online conversation around AI safety, Twitter's negative impact on public discourse, contrasting views on AI, the backfire of anti-regulation sentiment in the tech industry, the importance of constructive policy discussions on AI, concerns about face recognition technology and autonomous AI drones, and how to stay up to date with AI research.
29 snips
Jul 27, 2023 • 2h 14min

E48: Mechanizing Mechanistic Interpretability with Arthur Conmy

Arthur Conmy sits down with Nathan Labenz for an accessible deep dive into the state of interpretability research today. They discuss how pioneering researchers have painstakingly worked to isolate the sub-circuits within transformers that are responsible for different aspects of AI capabilities. Arthur also introduces us to ACDC, a new approach that he and his co-authors have taken to automating some of the most time-consuming parts of this work.

TIMESTAMPS:
(00:00) Episode preview
(04:40) What attracted Arthur to mechanistic interpretability?
(07:49) LLM information processing: general understanding vs. the stochastic parrot paradigm
(14:00) ACDC paper: https://arxiv.org/abs/2304.14997
(24:30) Putting together data sets
(32:39) How to intervene in LLMs' network activity
(36:00) Defining metrics to evaluate the production of correct completions
(44:20) The future of mechanistic interpretability research
(50:00) Extracting upstream activations in the ACDC project and evaluating impact on downstream components
(56:00) Anthropic research findings
(01:08:00) 3-step process of the ACDC approach
(01:22:00) Setting a threshold and validation
(01:27:00) Goal of the approach
(01:32:00) Compute requirements
(01:35:30) Scaling laws for mechanistic interpretability
(01:40:00) Accessibility of this research for casual enthusiasts
(01:46:00) Emergence discourse
(01:56:00) Path to AI safety

Correction: at (01:33:00), Arthur meant to say "quadratic in nodes".

LINKS:
https://arthurconmy.github.io/
https://arxiv.org/abs/2304.14997
X: @labenz (Nathan), @arthurconmy (Arthur), @cogrev_podcast
28 snips
Dec 24, 2024 • 1h 48min

E70: Martin Casado of a16z on AI Innovation and AGI

In this engaging discussion, Martin Casado, a general partner at Andreessen Horowitz, and Nathan Labenz, an AI scout, dive into the complexities of AI systems. They explore the debate on whether AI will achieve AGI, shedding light on model scaling and safety concerns. The conversation also touches on the future of AI assistants by 2027, skepticism about current AI capabilities, and the importance of responsibly regulating AI development. Their insights emphasize the balance between innovation and ethical responsibilities in the rapidly evolving AI landscape.
18 snips
May 9, 2023 • 30min

Nathan Labenz on AI's Great Implementation (Founder, AI R&D at Waymark)

In this episode, Humans in AI host Haroon Choudery interviews Nathan Labenz, founder of Waymark. Nathan discusses the idea of the "Great Implementation," an imminent period in which jobs are decomposed into tasks that AI can perform. He also shares broader thoughts on how to apply AI to business use cases: his recommendations include getting hands-on with the technology, starting with domains you can easily validate, and focusing on narrow tasks rather than complex job roles.