AWS Inferentia Chips for Transcription
We started out with an on-prem provider specific to GPU hardware because, at the time, we were getting ridiculously good latency. For us, if we're trying to build experiences that pop up to help an expert solve a problem, we have to be faster than the human being hearing the sentence. So latency is super important to us for building trust. We do use AWS for this, but we also have our own internally hosted Kubernetes.