It's not clear to me that just by removing the cases in which the developer decides that the model is refusing to do certain things, you're actually debiasing. I don't think it's quite as straightforward to uncensor or debias a model as it might initially seem. There are models here from 7 billion to 65 billion parameters that get really good performance, better than Vicuna and ChatGPT. And yeah, we've seen how LLaMA has been pretty fundamentally important.