It's still not as good as the open APIs out there for what they needed. I will say that, like other speech recognition models, its performance depends on the data it was trained on, so biases remain. Underrepresented languages still won't perform as well in this case as languages with much more training data. Even so, it shows pretty impressive progress in transcription and will enable a lot of follow-up work.