Episode 02: Sarah Jane Hong, Latent Space, on neural rendering & research process

Generally Intelligent

Using Multi-Level Transformers Over LSTMs?

There's just so much information within a given picture, and a picture is worth a thousand words. I mean, if you were to write it out, there's probably a lot more that goes into typing out the information than typing out the words. And things that are complicated often break down at scale. For instance, at a comparable scale, LSTMs could achieve the same performance as transformers. It's just that transformers scale way better, and I think they're a lot simpler. So transformers are more than just attention, I'd say.
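
The parallelism contrast behind this point can be made concrete in code. The following is a minimal PyTorch sketch, not from the episode; the model sizes and sequence length are arbitrary assumptions. An LSTM consumes the sequence through a step-by-step recurrence, while a transformer encoder layer applies self-attention to every position at once, which is what lets transformers parallelize and scale so well.

import torch
import torch.nn as nn

seq_len, batch, d_model = 128, 4, 64  # arbitrary illustrative sizes
x = torch.randn(seq_len, batch, d_model)  # (time, batch, features)

# LSTM: each step's hidden state depends on the previous one,
# so computation over the sequence is inherently sequential.
lstm = nn.LSTM(input_size=d_model, hidden_size=d_model)
lstm_out, _ = lstm(x)

# Transformer encoder layer: self-attention relates all positions
# to each other in one shot, so the sequence is processed in parallel.
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4)
tf_out = encoder_layer(x)

print(lstm_out.shape, tf_out.shape)  # both: (128, 4, 64)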
