Why you should write your own LLM benchmarks — with Nicholas Carlini, Google DeepMind

Latent Space: The AI Engineer Podcast

NOTE

Vulnerability Before Valor: Ethical Exploration of Model Security

Understanding the security of machine learning models starts with examining their vulnerabilities, in particular their exposure to model-stealing attacks. Such attacks become practical when the APIs of major production models expose more information than intended. Conducting these security tests ethically requires a collaborative approach grounded in legal consent: by obtaining permission from OpenAI before testing its models, the researchers exposed real vulnerabilities without causing harm. Affected parties were notified before the findings were published, reflecting a responsibility to maintain transparency and to mitigate risks across the industry.
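To make the stealing-attack angle concrete, below is a minimal, self-contained sketch of the linear-algebra observation this class of attack exploits: every output logit vector is a linear projection of a low-dimensional hidden state, so stacking the logits from enough queries reveals the model's hidden dimension. This is an illustration under simplifying assumptions, not the actual attack pipeline discussed in the episode; `query_logits`, the dimensions, and the rank threshold are all invented for the example.

```python
import numpy as np

# Toy "production" model: the final layer projects a HIDDEN-dimensional
# hidden state up to VOCAB logits. The attacker never sees W or the
# hidden states, only the returned logit vectors.
VOCAB, HIDDEN = 1000, 64
W = np.random.default_rng(0).normal(size=(VOCAB, HIDDEN))  # secret weights

def query_logits(prompt_seed: int) -> np.ndarray:
    """Stand-in for an API call returning the full logit vector for one
    prompt. (Real APIs expose logits only partially, e.g. through
    logprobs and logit-bias parameters, which is what makes the
    practical attack harder than this sketch.)"""
    hidden_state = np.random.default_rng(1000 + prompt_seed).normal(size=HIDDEN)
    return W @ hidden_state

# Collect logit vectors for many distinct prompts.
Q = np.stack([query_logits(seed) for seed in range(256)])  # shape (256, VOCAB)

# Every row of Q lies in a HIDDEN-dimensional subspace (the column space
# of W), so Q has rank at most HIDDEN. The singular values collapse past
# that index, revealing the model's hidden dimension from queries alone.
singular_values = np.linalg.svd(Q, compute_uv=False)
estimated_dim = int((singular_values > 1e-6 * singular_values[0]).sum())
print(f"estimated hidden dimension: {estimated_dim}")  # prints 64
```

The same rank argument is why merely returning complete logits (or enough partial views of them) leaks architectural details: no weights need to be visible for the query matrix to betray the hidden width.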

