The pre-trained models have really opened up this new possibility: the model is just so capable. I never really envisioned that red teaming would be quite this important. As these models scale up, there are more subtle rules that you can tune them to follow, and I believe it's working on the whole. I think we are getting safer models because of RLHF and techniques like RLAIF. I don't know if RLHF is the final answer; I imagine it's not, but a process like it, taken as a whole, is necessary.
This is a special preview episode of The Cognitive Revolution: How AI Changes Everything. Hosted by Erik Torenberg and Nathan Labenz, TCR features in-depth interviews with the creators, builders, and thinkers pushing the bleeding edge of AI. On this episode, they talk with Riley Goodside, the first Staff Prompt Engineer at Scale AI and an expert in prompting LLMs and integrating them into AI applications.
Check out The Cognitive Revolution, the perfect AI interview complement to The AI Breakdown: https://link.chtbl.com/TheCognitiveRevolution
Find TCR on YouTube: https://www.youtube.com/@CognitiveRevolutionPodcast