For AI risk hawks like me, this is really worrying. As a cutting-edge AI lab, your investments in safety are like taxes that come out of your margins. You are literally faced with this choice every day: do I spend more money screening people before they can use my app? So seeing that kind of race to the bottom on safety, and potentially also on alignment, is troubling when you start to think about it.