Is the Future of GPT-3 a Transformer?
Andrei: Do you think the future of these models has kind of built-in architectural inductive biases for common subroutines, like math, like other things? Or do you think that actually, you know, with enough scale, we can kind of brute-force our way through, and that's where it's going? And at some point, sort of induction-type logical inferences, like reasoning over complex sets.

Andrei: I'd hope that they could help us learn algorithms better. That would be cool, if you had more sort of compositional reasoning from smaller subroutines that can be reused.