
Ed Grefenstette: Language, Semantics, Cohere

The Gradient: Perspectives on AI


The Neural Networks

I worked on differentiable neural computers, and the "Learning to Transduce with Unbounded Memory" work was a sort of orthogonal contribution to that. And so it almost seemed intuitive that if you trained them enough, and you got the training just right, out would pop something that you could actually run as a discrete data structure, like a Turing machine or another kind of data structure. So there was a lot of excitement, as I said, around 2014 to 2016, when we were able to scale these methods to more sophisticated tasks that didn't necessarily invite a rule-based interpretation.
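The idea being described, that a continuous, trainable memory can behave like a discrete structure when the controller's signals become sharp, can be illustrated with the continuous stack from the "Learning to Transduce with Unbounded Memory" paper. Below is a minimal NumPy sketch of that stack's semantics (the class and variable names are my own, not from the episode); when the push strength `d` and pop strength `u` are exactly 0 or 1, it reduces to an ordinary discrete stack:

```python
import numpy as np

class NeuralStack:
    """Continuous stack: every stored vector has a fractional strength,
    and push/pop are soft operations parameterized by scalars in [0, 1]."""

    def __init__(self):
        self.V = []  # stored vectors, bottom to top
        self.s = []  # strength of each stored vector

    def step(self, v, d, u):
        """Pop with strength u, push vector v with strength d,
        then return a read of the top ~1 unit of strength."""
        # Pop: remove up to u units of strength, starting from the top.
        new_s = []
        for i in range(len(self.s)):
            above = sum(self.s[j] for j in range(i + 1, len(self.s)))
            new_s.append(max(0.0, self.s[i] - max(0.0, u - above)))
        self.s = new_s
        # Push: append v with strength d.
        self.V.append(v)
        self.s.append(d)
        # Read: weighted sum of the topmost unit of strength.
        r = np.zeros_like(v)
        for i in range(len(self.V)):
            above = sum(self.s[j] for j in range(i + 1, len(self.s)))
            r = r + min(self.s[i], max(0.0, 1.0 - above)) * self.V[i]
        return r

# With crisp (0/1) strengths this behaves exactly like a discrete stack:
stack = NeuralStack()
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
r1 = stack.step(a, 1.0, 0.0)           # push a -> read a
r2 = stack.step(b, 1.0, 0.0)           # push b -> read b
r3 = stack.step(np.zeros(2), 0.0, 1.0) # pop b  -> read a
```

Because every operation here is built from `max`, `min`, and sums, the whole thing is (sub)differentiable, which is what lets a network learn push/pop behavior by gradient descent; the excitement was that after training, the learned strengths often saturate toward 0 and 1, recovering the discrete structure.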

Transcript
