
Sayash Kapoor

Co-author of AI Snake Oil and computer science PhD candidate at Princeton University.

Top 10 podcasts with Sayash Kapoor

Ranked by the Snipd community
28 snips
Jul 28, 2024 • 50min

Sayash Kapoor - How seriously should we take AI X-risk? (ICML 1/13)

Sayash Kapoor, a Princeton Ph.D. candidate focused on AI's societal impacts, discusses critical issues surrounding AI's existential risks. He emphasizes the unreliability of risk probabilities in policymaking and critiques naive approaches like Pascal's Wager. The conversation also debunks myths about AI growth and investment, suggesting a more cautious and efficient approach to model development. Additionally, Kapoor highlights the need for user-centric evaluations of AI and the distinct challenges new benchmarks pose in assessing AI capabilities.
15 snips
Feb 8, 2025 • 30min

Ep22: Demystifying AI and separating hype from genuine progress

Sayash Kapoor, co-author of "AI Snake Oil" and a PhD candidate at Princeton, dives into the landscape of artificial intelligence. He discusses the stark differences between generative AI, which creates useful outputs, and predictive AI, often limited by data quality. Kapoor sheds light on the rapid pace of AI advancements, the role of geopolitics, especially China's competitive edge despite sanctions, and societal impacts like job displacement. He also advocates for a thoughtful approach to merit-based opportunities through a "partial lottery system" to address inequality.
8 snips
Dec 3, 2024 • 28min

AI Snake Oil with Sayash Kapoor

Sayash Kapoor, co-author of 'AI Snake Oil' and researcher at Princeton University, shares crucial insights on the realities of AI. He discusses the hype surrounding AI, highlighting the difference between predictive and generative AI. Kapoor explains how inflated expectations can lead to misconceptions, especially in healthcare applications. He emphasizes the need for regulatory measures to balance innovation with safety and urges managers to cultivate a healthy skepticism while embracing new technologies. Dive into his eye-opening exploration of AI's true capabilities.
7 snips
Sep 18, 2024 • 1h 1min

AI Agents That Matter with Sayash Kapoor and Benedikt Stroebl - Weaviate Podcast #104!

Sayash Kapoor and Benedikt Stroebl, co-first authors from Princeton Language and Intelligence, discuss their influential paper on AI agents. They explore the crucial balance between performance and cost in AI systems, emphasizing that impressive responses mean little if they are too expensive to produce. The duo introduces the DSPy framework for jointly optimizing accuracy and cost and debates the challenges of adapting AI benchmarks to dynamic environments. They also highlight the importance of human feedback in improving AI reliability and performance.
5 snips
Mar 11, 2024 • 40min

Assessing the Risks of Open AI Models with Sayash Kapoor - #675

Sayash Kapoor discusses the risks and benefits of open AI models, including biosecurity threats and non-consensual imagery. He explores a risk assessment framework inspired by cybersecurity, emphasizes the need for common ground in assessing the threats posed by AI, and addresses the balance between openness for research and cybersecurity vulnerabilities.
4 snips
Apr 14, 2024 • 57min

The Societal Impacts of Foundation Models, and Access to Data for Researchers

PhD candidate Sayash Kapoor and society lead Rishi Bommasani discuss the societal impacts of open foundation models. They delve into the spectrum of openness in AI models, mitigating risks, transparency in model development, the NTIA's comment process, and the challenges independent researchers face in accessing social media data. They also touch on transatlantic relations, focusing on Trade and Technology Council meetings and future uncertainties.
Oct 2, 2024 • 1h 11min

Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor

In this discussion, Arvind Narayanan and Sayash Kapoor, computer scientists at Princeton and authors of a revealing book on AI, debunk the overblown hype surrounding artificial intelligence. They explain how much of what is marketed as AI is actually clever narrative or low-paid human labor. Dive into their insights on the environmental costs of large models, the misconceptions about AI's capabilities, and the broader implications for society. Their critical take equips listeners to navigate the murky waters of tech claims with skepticism.
Sep 29, 2024 • 36min

AI Snake Oil: Separating Hype from Reality

Arvind Narayanan, a Princeton computer science professor, and Sayash Kapoor, a PhD candidate, dive into the misconceptions surrounding AI in their discussion on their book. They explore the origins of 'snake oil' in AI claims, stressing the importance of human oversight in content moderation challenges. The duo also tackles the misinformation crisis, emphasizing that a loss of trust in media is at its core. Their insights encourage optimism and highlight corporate responsibilities to address the societal impacts of AI technology.
Sep 24, 2024 • 39min

AI Snake Oil—A New Book by 2 Princeton University Computer Scientists

Arvind Narayanan and Sayash Kapoor, esteemed computer scientists from Princeton, dive deep into their book 'AI Snake Oil.' They challenge the hype around predictive AI by discussing its failures, particularly in healthcare. The conversation highlights a case where an algorithm discriminated against Black patients. They also explore the balance between predictive and generative AI, stressing the need for skepticism in its applications. Plus, they touch on the complexities of AI in content moderation and the significance of human oversight.
Sep 23, 2024 • 54min

385: AI Snake Oil

Sayash Kapoor, a Princeton researcher, and Arvind Narayanan, his co-author on a book about AI capabilities, delve into the murky waters of AI risk predictions. They discuss the concept of 'AI Snake Oil' and how misleading claims cloud our understanding of AI's true potential, emphasizing that predictions about AI's impact are often chaotic and subjective. They argue for a reassessment of how we perceive AI's threats, advocate for policy measures that prioritize practical issues over apocalyptic fears, and encourage grounded conversations about AI's actual capabilities.