
Nicholas Carlini

Security researcher at Google DeepMind, known for his extensive work on adversarial machine learning and cybersecurity. His pioneering contributions include attacks that have broken published defenses for image classifiers and studies probing the robustness of neural networks.

Top 5 podcasts with Nicholas Carlini

Ranked by the Snipd community
120 snips
Jan 25, 2025 • 1h 21min

Nicholas Carlini (Google DeepMind)

Nicholas Carlini, a research scientist at Google DeepMind specializing in AI security, shares insights into the vulnerabilities of machine learning systems. He discusses the unexpected chess-playing prowess of large language models and the broader implications of emergent behaviors. Carlini stresses the need for security designs that hold up against attacks on models, and weighs the ethical considerations around AI-generated code. He also highlights how language models can significantly boost programming productivity, while urging users to stay mindful of their limitations.
76 snips
Aug 29, 2024 • 1h 10min

Why you should write your own LLM benchmarks — with Nicholas Carlini, Google DeepMind

Nicholas Carlini, a research scientist at DeepMind, advocates for writing personalized AI benchmarks. He emphasizes how AI can take over routine, tedious tasks, freeing up time for more creative and valuable work. Carlini elaborates on his viral blog post detailing 12 specific ways he uses AI, from writing code to solving simple problems. He also discusses the value of customized model evaluations and potential vulnerabilities in AI security, pushing for a clearer understanding of where the technology is practically useful.
56 snips
Feb 27, 2025 • 2h 35min

The Adversarial Mind: Defeating AI Defenses with Nicholas Carlini of Google DeepMind

Nicholas Carlini, a security researcher at Google DeepMind known for his groundbreaking work in adversarial machine learning, shares intriguing insights into AI security challenges. He discusses the asymmetric relationship between attackers and defenders, highlighting the strategic advantages attackers possess. Carlini also explores the complexities of data manipulation in AI models, the role of human intuition, and the implications of open-source AI on security. The conversation dives into balancing AI safety with accessibility in an evolving landscape.
18 snips
Sep 23, 2024 • 1h 4min

Stealing Part of a Production Language Model with Nicholas Carlini - #702

Nicholas Carlini, a research scientist at Google DeepMind specializing in adversarial machine learning and model security, dives into model stealing techniques in this discussion. He reveals how parts of production language models like ChatGPT can be extracted, raising important ethical and security concerns. The episode highlights the current landscape of AI security and the steps tech giants are taking to protect against vulnerabilities. Carlini also shares insights from his best paper on privacy challenges in public pretraining and the complexities surrounding differential privacy.
9 snips
Feb 27, 2023 • 43min

Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini - #618

Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from LLMs” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamics of dealing with privacy issues in black-box vs. accessible models, what privacy attacks on vision models like diffusion models look like, and the scale of “memorization” within these models. We also explore Nicholas’ work on data poisoning, which looks to understand what happens if a bad actor can take control of a small fraction of the data that an ML model is trained on.

The complete show notes for this episode can be found at twimlai.com/go/618.