Sam Altman tries to Regulate AI, Why AI Will Displace Your Job & The Future of AI | E15

This Day in AI Podcast

The Negative Effects of Chain of Thought Reasoning

Chain-of-thought reasoning, for those who are unaware, is where the AI gives itself instructions or reasons step by step through how to complete a problem. But according to this paper, because large language models are trained on human-written text, that isn't necessarily how we actually reason as humans. So in order for chain-of-thought reasoning to get better in these models, which I'm sure it will, you'd have to train them on people's actual thought processes.
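To make the technique concrete, here is a minimal sketch (in Python, with an illustrative question that is not from the episode) contrasting a direct prompt with a chain-of-thought prompt that asks the model to write out its intermediate reasoning before answering:

```python
# Minimal sketch of chain-of-thought prompting.
# The question text and function names are illustrative assumptions,
# not something quoted from the episode or the paper it discusses.

QUESTION = (
    "A cafe sells 14 coffees an hour and is open for 6 hours. "
    "If each coffee uses 18 g of beans, how many grams of beans are used per day?"
)

def direct_prompt(question: str) -> str:
    """Ask for the answer only; the model is not prompted to show reasoning."""
    return f"{question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to 'give itself instructions' and reason step by step
    before committing to a final answer."""
    return (
        f"{question}\n"
        "Let's think step by step, writing out each intermediate calculation, "
        "then state the final answer on its own line."
    )

if __name__ == "__main__":
    print(direct_prompt(QUESTION))
    print("---")
    print(chain_of_thought_prompt(QUESTION))
```

The only difference between the two prompts is the added instruction to reason out loud, which is what the episode means by the model "giving itself instructions."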
