
Ep 17: Andrej Karpathy (Tesla AI) and Lex Fridman
Clubhouse FM
Is the Future of GPT-3 a Transformer?
Andrej: Do you think the future of these models has kind of built-in architectural inductive biases for common subroutines, like math, like other things? Or do you think that actually, you know, with enough scale, we can kind of brute-force our way through, and that's where it's going? And at some point, sort of induction-type logical inferences, like reasoning over complex sets?

Andrej: I'd hope that they could help us learn algorithms better. That would be cool, if you had more sort of compositional reasoning from smaller subroutines that can be reused.