The Main Problems With Large Language Models
GPT-3 was designed to predict what someone on the internet might say in a given setting. It turns out you can sort of trick the model into performing useful work for you by setting up a text that, when the model autocompletes it, gives you what you want. And this is actually a kind of descendant of some earlier work on what we call aligning language models. So should I go through the whole alignment story, the alignment team and so on? Why not? Yeah.
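A minimal sketch of the "prompting as autocompletion" idea described above: you frame a text so that the model's most likely continuation is the answer you want. The `complete` function here is a hypothetical stand-in for whatever completion call you actually use, not a real API.

```python
# Sketch of "prompting as autocompletion": set up text whose most likely
# continuation is the output we want, then let the model finish it.
# `complete` is a placeholder for a real language-model completion call.

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a text-completion model or API client."""
    raise NotImplementedError("swap in your model / API client here")

def translate_via_completion(sentence: str) -> str:
    # A few worked examples make "translate this" the natural continuation,
    # even though the model was only trained to predict the next token.
    prompt = (
        "English: Good morning\nFrench: Bonjour\n"
        "English: Thank you\nFrench: Merci\n"
        f"English: {sentence}\nFrench:"
    )
    return complete(prompt).strip()
```

The point of the few-shot examples is that the model is never told to translate; the surrounding text simply makes a French translation the most probable next thing someone on the internet would have written.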