SDXL 1.0: A One GPU Model
SDXL 1.0 should work effectively on consumer GPUs with 8 GB of VRAM, or on readily available cloud instances. They also talk about out-of-the-box support for LoRA, the low-rank adapters technique, where you can fine-tune the model in a very parameter-efficient way (rough sketches of both follow below). The other thing I was going to note on the model hub is just the proliferation of Llama 2s. We've got Llama-2-7B-32K, which looks like a 32K-context-length variant from Together Computer, the chat Llamas, and then links to a whole bunch of other Llamas.
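The episode doesn't show any code, but as a rough illustration of the "one GPU" claim, here is a minimal sketch of loading SDXL 1.0 with Hugging Face's diffusers library; the half-precision weights and CPU offload are the usual levers for fitting inference into roughly 8 GB of VRAM, and are my assumptions rather than settings from the episode.

```python
# Minimal sketch: SDXL 1.0 inference on a ~8 GB consumer GPU.
# Memory-saving settings below are assumptions, not from the episode.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # official SDXL 1.0 base weights
    torch_dtype=torch.float16,                   # half precision roughly halves VRAM use
    variant="fp16",
    use_safetensors=True,
)
# Offload idle submodules to CPU between forward passes so peak VRAM
# stays within reach of an 8 GB card (requires the accelerate package).
pipe.enable_model_cpu_offload()

image = pipe("a watercolor painting of a mountain lake").images[0]
image.save("mountain_lake.png")
```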
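And for the LoRA support they mention, a hedged sketch of what attaching low-rank adapters to the SDXL UNet looks like with the diffusers and peft libraries; the rank, scaling, and target-module choices here are illustrative assumptions, not values from the release.

```python
# Sketch: parameter-efficient LoRA setup on the SDXL UNet.
# Rank and target modules are illustrative assumptions.
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
unet.requires_grad_(False)  # freeze all base weights

lora_config = LoraConfig(
    r=4,           # rank of the low-rank update matrices
    lora_alpha=4,  # scaling factor applied to the update
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
unet.add_adapter(lora_config)  # only the small adapter matrices will train

trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
total = sum(p.numel() for p in unet.parameters())
print(f"training {trainable / total:.2%} of parameters")
```

Because only the adapter matrices receive gradients, the trainable footprint is typically a small fraction of a percent of the full model, which is what makes the fine-tuning so parameter-efficient.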