
Nicholas Carlini

AI security researcher focused on the vulnerabilities of machine learning models. His work includes cryptanalytic model-extraction techniques and attacks on production language models.

Top 5 podcasts with Nicholas Carlini

Ranked by the Snipd community
86 snips
Jan 25, 2025 • 1h 21min

Nicholas Carlini (Google DeepMind)

Nicholas Carlini, a research scientist at Google DeepMind specializing in AI security, shares insights into the vulnerabilities of machine learning systems. He discusses the unexpected chess-playing prowess of large language models and the broader implications of emergent behaviors. Carlini emphasizes the need for robust security design to defend against attacks on models, along with the ethical considerations surrounding AI-generated code. He also highlights how language models can significantly boost programming productivity, while urging users to stay mindful of their limitations.
76 snips
Aug 29, 2024 • 1h 10min

Why you should write your own LLM benchmarks — with Nicholas Carlini, Google DeepMind

Nicholas Carlini, a research scientist at DeepMind, advocates for personalized benchmarks in AI. He emphasizes how AI can handle routine, tedious tasks, freeing up creativity for more valuable work. Carlini elaborates on his viral blog post detailing 12 specific ways he uses AI, from writing code to solving simple problems. He also discusses the value of customized model evaluations and potential vulnerabilities in AI security, pushing for a better understanding of how the technology fits into practical applications.
18 snips
Sep 23, 2024 • 1h 4min

Stealing Part of a Production Language Model with Nicholas Carlini - #702

Nicholas Carlini, a research scientist at Google DeepMind specializing in adversarial machine learning and model security, dives into model stealing techniques in this discussion. He reveals how parts of production language models like ChatGPT can be extracted, raising important ethical and security concerns. The episode highlights the current landscape of AI security and the steps tech giants are taking to protect against vulnerabilities. Carlini also shares insights from his best paper on privacy challenges in public pretraining and the complexities surrounding differential privacy.
9 snips
Feb 27, 2023 • 43min

Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini - #618

Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from LLMs” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamics of dealing with privacy issues in black-box vs. accessible models, what privacy attacks on vision models like diffusion models look like, and the scale of “memorization” within these models. We also explore Nicholas’ work on data poisoning, which looks to understand what happens if a bad actor can take control of a small fraction of the data that an ML model is trained on. The complete show notes for this episode can be found at twimlai.com/go/618.
5 snips
Aug 9, 2024 • 1h 33min

Pragmatic LLM usage with Nicholas Carlini

Nicholas Carlini, an expert in pragmatic uses of LLMs, shares his insights on harnessing these tools for real-world problem-solving. He discusses balancing trust with critical engagement when using LLMs in programming, emphasizing their role in improving efficiency. Humorous anecdotes about AI interactions highlight the generational shift in how the technology is adopted. The conversation also critiques AI advertising, cautioning against hype and advocating for realistic expectations around LLM capabilities and innovation.