The Risk of Truth-Checking in Large Language Models
What needs to be done is self-checking, like you said. That does not exist today. I make the argument that the inventors of this technology who are raising all these alarms knew about these risks, and they could have built some of these checkers into the technology itself. But now it has to be done, and there are going to be lots of small, innovative companies that will take advantage of that gap and add that capability.