
Stephen Wolfram: Unraveling the Mysteries of Large Language Models, Intelligence and Autonomous Crypto Agents

The Delphi Podcast


How to Tweak a Neural Net to Correctly Reproduce Words

GPT is a mathematical function that happens to be a couple hundred billion terms long. Out of it comes a number, and that number, or collection of numbers, tells you the probabilities for different possible words to come next. We might have a trillion words that we have from things that are out there on the web. Let's try and tweak this neural net so that it will correctly reproduce sentences.
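The quote describes a model whose output is a set of numbers giving the probabilities of possible next words. A minimal sketch of that idea, with invented toy scores and words (not GPT's actual internals), converts raw scores into probabilities with a softmax and samples a next word from them:

```python
import math
import random

def softmax(logits):
    # Convert raw scores into probabilities that sum to 1.
    m = max(logits.values())  # subtract max for numerical stability
    exps = {w: math.exp(s - m) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Toy scores for the word following "The cat sat on the ..." —
# hypothetical numbers for illustration, not from any real model.
logits = {"mat": 3.2, "sofa": 2.1, "moon": 0.3}
probs = softmax(logits)

# Sample the next word according to those probabilities.
words, weights = zip(*probs.items())
next_word = random.choices(words, weights=weights)[0]
```

Training "tweaks" the function so that, across the training text, the probability assigned to each actual next word goes up.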

Transcript
Play full episode
