

Mental Models for Advanced ChatGPT Prompting with Riley Goodside - #652
Oct 23, 2023
Riley Goodside, a staff prompt engineer at Scale AI, shares insights on mastering prompt engineering for large language models. He dives into the limitations and capabilities of LLMs, emphasizing the intricacies of autoregressive inference. Goodside discusses the effectiveness of zero-shot vs. k-shot prompting and the crucial role of Reinforcement Learning from Human Feedback. He highlights how effective prompting acts as a scaffolding structure to achieve desired AI responses, blending technical skill with strategic thinking.
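The zero-shot vs. k-shot distinction discussed in the episode can be sketched as plain prompt construction. This is an illustrative sketch, not material from the episode; the task and example strings are invented for demonstration:

```python
# Sketch of zero-shot vs. k-shot prompt construction (illustrative only).
# Zero-shot: the model sees only the task instruction and the query.
# k-shot: the model also sees k worked examples that establish the
# expected input/output format before the query.

def zero_shot(task: str, query: str) -> str:
    return f"{task}\nInput: {query}\nOutput:"

def k_shot(task: str, examples: list[tuple[str, str]], query: str) -> str:
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{task}\n{shots}\nInput: {query}\nOutput:"

task = "Classify the sentiment of the review as positive or negative."
examples = [("Great movie!", "positive"), ("Waste of time.", "negative")]

print(zero_shot(task, "Loved every minute."))
print(k_shot(task, examples, "Loved every minute."))
```

The k-shot form tends to pin down the output format more reliably, at the cost of prompt length.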
AI Snips
GPT-3 and Hash Values
- Riley Goodside experimented with GPT-3's ability to understand hash values.
- He found that it could generate plausible-looking MD5 hashes and had even memorized some real hash-preimage pairs, demonstrating unexpected knowledge.
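The kind of probe described above can be sketched as follows. This is a hedged illustration, not Goodside's actual procedure: compute a real MD5 digest locally, then ask a model to recover the preimage. Since MD5 cannot be inverted by reasoning, a correct answer would indicate that the hash-preimage pair was memorized from training data:

```python
# Hedged sketch of an MD5 memorization probe (assumed setup, not the
# episode's exact experiment). We hash a string locally and build a
# prompt asking a model for the preimage; only memorization could
# yield a correct answer, since MD5 is not invertible by reasoning.
import hashlib

def md5_probe_prompt(preimage: str) -> str:
    digest = hashlib.md5(preimage.encode("utf-8")).hexdigest()
    return f"What string has the MD5 hash {digest}?"

# Common strings like "hello" are most likely to have their hashes
# memorized, since hash-preimage pairs for them appear widely online.
print(md5_probe_prompt("hello"))
```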
Hofstadter's Trick Questions
- Riley discusses Hofstadter's critique of GPT-3's understanding based on trick questions.
- He explores how GPT-3 plays along with jokes and responds sarcastically, an "improv partner" behavior common in base models at the time.
Mental Models for LLMs
- Anthropomorphic terms are unavoidable when discussing AI and are not necessarily harmful.
- Different mental models are needed to explain LLM behavior in different contexts, considering pre-training, fine-tuning, and the data distribution.