The Importance of Visualization in Large Language Models
I do think that, over time, I have come to expect a bit more that things will hang around in a near-human place, and weird shit will happen as a result. In my failure review, where I look back and ask, like, was that a predictable sort of mistake? Where, like, GPT-3 is better than GPT-2, but then we just keep going that way in sort of a straight line. So I do feel like GPT-4 is already kind of hanging out for longer in a weird, near-human space than I was really visualizing, in part because that's so incredibly hard to visualize or call correctly in advance of when it happens. Which is, in retrospect, a bias.