How GPT-4 Can Generate Text Based on Vision or Text Input
In InstructBLIP, essentially, that would be like taking these ideas to the LLM space. GPT-4 proved that it's possible to build a multimodal LLM that takes vision input or text input and is able to generate text based on that. The idea of using instruction tuning to generalize to new sorts of tasks is something that allows for scalability. It makes us really excited about the potential of what these models can be used for.
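To make the idea concrete, here is a minimal, hypothetical sketch of how a multimodal instruction-tuned sample might be structured: a natural-language instruction paired with either an image or a text input, serialized into a single prompt. The `MultimodalSample` class and the `<image:...>` placeholder are illustrative assumptions, not the actual format used by GPT-4 or InstructBLIP.

```python
# Illustrative sketch (assumed format, not any real model's API):
# an instruction-tuned multimodal model sees an instruction plus
# either an image reference or a text input, and emits text.
from dataclasses import dataclass
from typing import Optional


@dataclass
class MultimodalSample:
    instruction: str                    # the task description, e.g. "Describe the scene."
    image_path: Optional[str] = None    # vision input (placeholder token below)
    text_input: Optional[str] = None    # or a plain text input

    def to_prompt(self) -> str:
        """Serialize the sample into a single prompt string."""
        parts = [f"Instruction: {self.instruction}"]
        if self.image_path is not None:
            # A placeholder marking where image features would be injected.
            parts.append(f"<image:{self.image_path}>")
        if self.text_input is not None:
            parts.append(f"Input: {self.text_input}")
        parts.append("Response:")
        return "\n".join(parts)


vision_sample = MultimodalSample(
    instruction="Describe the scene.", image_path="photo.jpg"
)
text_sample = MultimodalSample(
    instruction="Summarize this.", text_input="Instruction tuning helps LLMs generalize."
)
print(vision_sample.to_prompt())
print(text_sample.to_prompt())
```

The point of the sketch is the scalability mentioned above: because the task is described in the instruction rather than baked into the model, new tasks only require new instruction text, not new architectures.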