Is the Hardware Lottery Overindexing?
In some ways it does seem like we have more expansiveness in terms of the hardware architectures available, but then the goals are becoming pretty condensed. But one other thing that I notice about it is that it seems to be indexing pretty hard on the types of models that are growing popular today. Ideas like capsule nets, for instance, maybe didn't pan out because they didn't fit too well with existing architectures. And a lot of hardware vendors these days are making claims like "we can train GPT-2 really fast." That is, they target architectures like the transformer that have already become very popular on the machine learning side and exploit things like parallelism on GPUs. And all this means