Every three to four months, we're, like, throwing out the current network architecture and using a different one that gives us better results. And so it's just not worth it right now. Eventually we'll work on making the models smaller, more compute-efficient, and less costly to run. But right now, like, our speech recognition model does inference on a GPU. It still uses GPUs. Yeah, our models are GPU-accelerated. I mean, we could run it on CPU, but it's just not as parallelizable as running it on GPUs.

Oh, interesting. How do you evaluate this stuff?