The Importance of Red Teaming for AI Safety
The pre-trained models have really opened up this new possibility: the model is just too capable. I mean, I never really envisioned that red teaming would be quite this important. As these models scale up, there are more subtle rules that you can tune them to follow, and I believe it's working on the whole. I think we are getting safer models because of RLHF and techniques like RLAIF. I don't know if RLHF is the final answer, and I imagine it's not, but the process as a whole is necessary.