Speaker 2
Because in the end, with all of these, you don't know whether or not you'll have error in a specific case; you just kind of know the aggregate, right? So is there any way that you can bound this, basically, and I don't expect that there is, but is there any way to bound the error in each specific case?
Speaker 1
This question of, you know, let's take heart attack risk: if you look inside the trained predictor for heart attack risk, there might be, like, a thousand different loci which are activated, which are used by the predictor. But maybe 50% of the variance in risk is controlled by a relatively small number, like maybe 50 or a hundred variants. If you had a long-term project to try to determine, in each of those regions, what really is causal, what's actually going on, and you did it with a bunch of work with mice, and, you know, more statistical analysis of larger and larger populations, you would gradually get to a point where you are kind of confident that, OK, of these 50 top-impact loci, for these 35 we're pretty sure we know what's going on. Like, it's exactly this change that's causal. We don't know exactly what that change is doing downstream, but we're pretty confident that's the thing that we want to change. So I can imagine a situation where, 20 years from now, we feel pretty good. We feel like, OK, the aliens gave us this perfectly efficient CRISPR tool, and now, after all this other work, we know exactly what we want to do. And I can take someone who otherwise would have been top 1% for heart attack risk and move them back down to being average or below-average risk. So I think that's a feasible state that we could be in in a few decades.
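[Editor's note: a toy sketch of the variance concentration described above. This is not real GWAS data; the locus count, allele frequencies, and the exponential effect-size distribution are all illustrative assumptions, chosen only to show how a heavy-tailed effect distribution lets a small subset of loci carry a large share of a polygenic score's variance.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy polygenic predictor: ~1,000 independent loci with long-tailed
# per-locus effect sizes (all numbers here are illustrative assumptions).
n_loci = 1000
freqs = rng.uniform(0.05, 0.5, size=n_loci)   # assumed allele frequencies
effects = rng.exponential(1.0, size=n_loci)   # assumed effect sizes

# With independent loci and genotypes coded 0/1/2, the variance a locus
# contributes to the score is beta^2 * 2p(1-p) (Hardy-Weinberg).
var_per_locus = effects**2 * 2 * freqs * (1 - freqs)

# Fraction of total score variance carried by the 50 largest contributors.
top50_share = np.sort(var_per_locus)[::-1][:50].sum() / var_per_locus.sum()
print(f"top 50 of {n_loci} loci explain {top50_share:.0%} of score variance")
```

Under these assumptions the top 50 loci carry far more than their 5% "fair share" of the variance, which is the shape of the claim in the passage; the exact fraction depends entirely on the assumed effect-size distribution.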