Red teaming is adversarial usage of the models. It's having a team of people that attempt to use them in different ways. The kind of most people are familiar with now are sort of these jail break uh jailbreak prompts. Dan: We don't do it time left and I appreciate all your time you've been very generous with itUm let's talk a little bit about safety and red teaming. You are involved with building a red teaming capability at scale to understand correctly how do you think about red teaming?
This is a special preview episode of The Cognitive Revolution: How AI Changes Everything. Hosted by Erik Torenberg and Nathan Labenz, TCR hosts in-depth interviews with the creators, builders and thinkers pushing the bleeding edge of AI. On this episode, they talk with Riley Goodside, the first Staff Prompt Engineer at Scale AI and expert in prompting LLMs and integrating them into AI applications.
Check out The Cognitive Revolution The perfect AI interview complement to The AI Breakdown https://link.chtbl.com/TheCognitiveRevolution Find TCR on YouTube: https://www.youtube.com/@CognitiveRevolutionPodcast