Speaker 3
as your data set, on a per-customer basis? So if you're building out a model or a suite of models for one customer, are they going to have a data set where their models are trained only on data from their own hospital system or their own processes? Generally,
Speaker 1
we work with extremely large health systems, so they have a lot of data. Yes, the answer to your question is that we train the models on each customer's own data, for two reasons. First, it provides an extremely disciplined process to ensure that the quality is high, rather than just taking a pre-baked model and applying it to a new situation, where it may not generalize very well. When you do training and testing, there's a procedure by which you evaluate your test-set performance, so you can actually ensure that you're performing well. So that's the first part. The second part is that we've found there are a lot of nuances from customer to customer. For instance, one health system versus another will have entirely different insurance cards because they're in a different state, right? And so the insurance cards will look different. Other examples are that doctors might take notes in a slightly different way, or, in medical coding, the insurance plans that are present appear with different frequencies in different places, right? And so you need to learn the local rules for that particular health system. So everything we do is trained essentially per health system.
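The per-customer setup described here could be sketched roughly as follows. This is a hypothetical illustration, not the company's actual code: the customer names, fields, labels, and the toy majority-vote model are all invented, and a real system would use a proper learned model. The point is the structure: each health system gets its own model, trained and evaluated only on its own held-out data.

```python
# Hypothetical sketch: one model per health system, each trained and
# evaluated only on that customer's own data. All names are illustrative.
import random
from collections import Counter

def train_test_split(records, test_frac=0.25, seed=0):
    """Shuffle one customer's labeled records and split into train/test sets."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

class MajorityBaseline:
    """Toy stand-in for a real model: always predicts the most common training label."""
    def fit(self, examples):
        self.label = Counter(label for _, label in examples).most_common(1)[0][0]
        return self
    def predict(self, features):
        return self.label

def accuracy(model, test_set):
    """Fraction of held-out examples the model labels correctly."""
    return sum(model.predict(x) == y for x, y in test_set) / len(test_set)

# Each health system's model sees only that system's own data, so local
# quirks (e.g. which insurance plans dominate locally) are learned per site.
per_customer_data = {
    "health_system_a": [({"state": "CA"}, "plan_x")] * 8 + [({"state": "CA"}, "plan_y")] * 2,
    "health_system_b": [({"state": "TX"}, "plan_y")] * 9 + [({"state": "TX"}, "plan_x")] * 1,
}

models = {}
for customer, records in per_customer_data.items():
    train, test = train_test_split(records)
    models[customer] = MajorityBaseline().fit(train)
    print(customer, "test accuracy:", round(accuracy(models[customer], test), 2))
```

Because the test split is held out per customer, the evaluation step is exactly the discipline described above: each deployment's quality is measured on that customer's own data before it ships.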
Speaker 3
So you're leveraging machine learning to automatically customize the product, essentially at scale. Going back to unified automation, then: when you mention the humans labeling your data set, are they doing it knowingly, or are they just doing it in the normal course of their workflows anyway, and they happen to be labeling your data set along the way? Both
Speaker 1
happen. So we have a few types of data. One set is retrospective data. Essentially, we can pull data out of the EHR, so that's data that has already been created for us, that we don't have to label, but some human has labeled it, right? And so we can pull that out and train on that. Some data is, for example, for our denial-prediction algorithm: claims are sent out to insurers, and insurers deny or pay them. That provides a data set automatically. But then the third type is actually our labelers. So we provide a full-stack solution; it's like RCM as a service. And behind the scenes within our company, we have AI and humans. The humans are labelers. We've designed the system so that when they're doing the work, they're actually super efficient. We have an internal piece of software that allows them to label the data very efficiently. When they label it, they're actually completing the work; we're actually completing workflows through the labeling process. But when they're doing that, they're also training our algorithms at the same time. And we have this process by which we go from what we call full manual mode, which is still much more efficient than doing it in the EHR but is more manual, to copilot mode, to autopilot mode. In copilot mode, the algorithms are auto-completing everything, and the person just has to say correct or not. And then in full autopilot mode, we skip humans entirely for a large percentage of the tasks. And so for every task, we move up this curve, from manual labeling for anything that involves human intelligence to fully automated.
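The manual-to-copilot-to-autopilot progression described here is essentially confidence-based routing. The sketch below is a hypothetical illustration, assuming a per-task confidence score from the model; the thresholds, function names, and example strings are invented for the example, not actual production values.

```python
# Hypothetical sketch of manual -> copilot -> autopilot routing by model
# confidence. Thresholds and names are illustrative assumptions only.

AUTOPILOT_THRESHOLD = 0.98  # assumed: above this, skip the human entirely
COPILOT_THRESHOLD = 0.70    # assumed: above this, pre-fill and ask a human to confirm

def route_task(prediction, confidence):
    """Return (mode, suggestion) for one unit of work."""
    if confidence >= AUTOPILOT_THRESHOLD:
        # Autopilot: the model's answer is used directly, no human in the loop.
        return ("autopilot", prediction)
    if confidence >= COPILOT_THRESHOLD:
        # Copilot: the model auto-completes; the person just says correct or not.
        # That verdict doubles as a fresh training label.
        return ("copilot", prediction)
    # Full manual: a human does the work in the internal labeling tool (still
    # faster than the EHR), and the completed work becomes training data.
    return ("manual", None)

print(route_task("denied: missing prior authorization", 0.99))
print(route_task("denied: missing prior authorization", 0.80))
print(route_task("denied: missing prior authorization", 0.40))
```

As the model improves on a given task, more of its predictions clear the copilot and then the autopilot threshold, which is the "moving up the curve" described above: the same labeling workflow gradually shifts work from humans to the algorithm.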