The Dangers of Hallucination
The LLM, GPT for example, thinks it knows a ton of stuff about the relationship between Freud and Jung. But our software architecture suppresses all that and forces it to answer only based on what's in the textbook. Otherwise, when you ask it about products, it'll often say things like, "Oh yeah, if you get in trouble, just press the blue ejector button." That was one of the most subtle problems we had to overcome.
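The episode doesn't spell out how that suppression works, but the idea described, grounding the model in the textbook rather than its general knowledge, is commonly done by passing the relevant passages into the prompt and instructing the model to answer only from them. Below is a minimal sketch, assuming an OpenAI-style chat API; the function name, model choice, and prompt wording are illustrative assumptions, not the speakers' actual implementation.

```python
# Hypothetical sketch of grounding an LLM's answers in supplied text.
# Assumes the OpenAI Python client; the prompt wording, model name, and
# calling code are illustrative, not the architecture from the episode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Answer ONLY using the textbook excerpts provided by the user. "
    "If the excerpts do not contain the answer, say you don't know. "
    "Do not use outside knowledge, and never invent product details."
)

def answer_from_textbook(question: str, excerpts: list[str]) -> str:
    """Ask the model a question, constrained to the supplied excerpts."""
    context = "\n\n".join(excerpts)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"Textbook excerpts:\n{context}\n\nQuestion: {question}",
            },
        ],
        temperature=0,  # lower temperature reduces off-script answers
    )
    return response.choices[0].message.content
```

With a constraint like this in place, a question about a nonexistent "blue ejector button" should come back as "I don't know" rather than a confidently invented instruction, which is the failure mode the speakers describe.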