
Nicholas Carlini

Security researcher at Google DeepMind working at the intersection of machine learning and computer security. His research focuses on adversarial attacks against image classifiers and the challenge of making neural networks robust.

Top 10 podcasts with Nicholas Carlini

Ranked by the Snipd community
120 snips
Jan 25, 2025 • 1h 21min

Nicholas Carlini (Google DeepMind)

Nicholas Carlini, a research scientist at Google DeepMind specializing in AI security, offers insights into the vulnerabilities of machine learning systems. He discusses the unexpected chess-playing prowess of large language models and the broader implications of emergent behaviors. Carlini emphasizes the need for robust security design to counter attacks on models and weighs the ethical considerations around AI-generated code. He also highlights how language models can significantly boost programming productivity, while urging users to stay mindful of their limitations.
80 snips
Feb 27, 2025 • 2h 35min

The Adversarial Mind: Defeating AI Defenses with Nicholas Carlini of Google DeepMind

Nicholas Carlini, a security researcher at Google DeepMind known for his groundbreaking work in adversarial machine learning, shares intriguing insights into AI security challenges. He discusses the asymmetric relationship between attackers and defenders, highlighting the strategic advantages attackers possess. Carlini also explores the complexities of data manipulation in AI models, the role of human intuition, and the implications of open-source AI on security. The conversation dives into balancing AI safety with accessibility in an evolving landscape.
77 snips
Aug 29, 2024 • 1h 10min

Why you should write your own LLM benchmarks — with Nicholas Carlini, Google DeepMind

Nicholas Carlini, a research scientist at DeepMind specializing in AI security, discusses the power of personalized LLM benchmarks. He encourages listeners to evaluate AI tools against their own use cases, emphasizing that AI shines at automating mundane tasks. Carlini shares insights from his viral blog post, detailing creative applications of AI in coding and problem-solving. He also weighs the strengths and shortcomings of LLMs, the importance of critical evaluation, and the ongoing need for robust, domain-specific benchmarks to truly gauge AI performance.
18 snips
Sep 23, 2024 • 1h 4min

Stealing Part of a Production Language Model with Nicholas Carlini - #702

Nicholas Carlini, a research scientist at Google DeepMind and winner of the 2024 ICML Best Paper Award, dives into the world of adversarial machine learning. He discusses his groundbreaking work on stealing parts of production language models like ChatGPT. Listeners will learn about the ethical implications of model security, the significance of the embedding layer, and how these advancements raise new security challenges. Carlini also sheds light on differential privacy in AI, questioning its integration with pre-trained models and the future of ethical AI development.
9 snips
Feb 27, 2023 • 43min

Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini - #618

In this discussion, Nicholas Carlini, a research scientist at Google Brain known for his work at the crossroads of machine learning and computer security, dives deep into pressing issues of privacy and security in AI. He explores the vulnerabilities of large models like Stable Diffusion, particularly the risks of data extraction and adversarial attacks. The conversation also touches on model memorization versus generalization, revealing surprising insights into how these models handle training data. Additionally, Carlini discusses data poisoning and its implications for safeguarding model integrity.
5 snips
Jan 28, 2025 • 1h 21min

Cryptanalyzing LLMs with Nicholas Carlini

Nicholas Carlini, an AI security researcher specializing in machine learning vulnerabilities, joins the discussion. He delves into the mathematical underpinnings of LLM vulnerabilities, highlighting risks like model poisoning and prompt injection. Carlini explores the parallels between cryptographic attacks and AI model vulnerabilities, emphasizing the importance of robust security frameworks. He also outlines key defense strategies against data extraction and shares insights on the fragility of current AI defenses, urging a critical evaluation of security practices in an evolving digital landscape.
5 snips
Aug 9, 2024 • 1h 33min

Pragmatic LLM usage with Nicholas Carlini

Nicholas Carlini, an expert in pragmatic uses of LLMs, shares his insights on harnessing these powerful tools for real-world problem-solving. He discusses balancing trust and critical engagement when using LLMs in programming, emphasizing their role in improving efficiency. Humorous anecdotes about AI interactions illustrate the generational shift in how technology is adopted. The conversation also critiques AI advertising, cautioning against hype and advocating for realistic expectations of LLM capabilities.
Mar 27, 2024 • 1h 24min

Adversarial Machine Learning

Nicholas Carlini discusses adversarial machine learning, revealing how carefully crafted token sequences can trick language models into ignoring their restrictions. The hosts explore the peculiarities of C programming and the surprising effectiveness of adversarial attacks on machine learning models, emphasizing the need for security-conscious approaches to ML development.
Mar 21, 2025 • 2h 23min

Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)

Nicholas Carlini, a security researcher at Google DeepMind, shares his expertise in adversarial machine learning and cybersecurity. He reveals intriguing insights about adversarial attacks on image classifiers and the complexities of defending against them. Carlini discusses the critical role of human intuition in developing defenses, the implications of open-source AI, and the evolving risks associated with model safety. He also explores how advanced techniques expose vulnerabilities in language models and the balance between transparency and security in AI.
Aug 9, 2024 • 1h 21min

AI_031 - How I use AI

In this engaging discussion, Nicholas Carlini, renowned for his insights on large language models, explores the practical applications of AI today. He challenges common pessimistic views on AI's impact on jobs and emphasizes how it can enhance creativity and productivity. The conversation dives into innovative tools like Llama 3.1 and Flux, revealing how they transform customer support and animation. Carlini also highlights the balance between automation and human creativity, showcasing tangible benefits that make AI more than just hype.