#132 - FraudGPT, Apple GPT, unlimited jailbreaks, RT-2, Frontier Model Forum, PhotoGuard

Last Week in AI

How to Break Language Models and Make Them Do Bad Things

In this episode, the co-hosts discuss breaking language models and ransom attacks. They respond to a listener email about WormGPT and jailbreaking, and mention Belva.ai, a service that makes ransom calls with no technical knowledge required. It's a big week with lots to talk about!
