LLMs Aren't a Good Source of Truth
I build lots of stuff with our API, and some of it I just build for myself. A large language model has no notion of what is true or what is not true. It turns out that statistical correlations between words are good enough to often say things that people interpret as true. And the way the modeling works, there's some randomness in it. That means there are still some use cases, though. For instance, I think you're always going to want to talk to a doctor. You might use an LLM as a way of learning about yourself, or as a way of browsing through medical journals. But I don't ever think LLMs are…