"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Ignore Previous Instructions and Listen To This Interview with Sander Schulhoff, CEO of Learnprompting.org


NOTE

Using Negative Instructions and Self-Generated Examples in Language Models

Negative instructions can backfire in language models: mentioning a concept puts it into the context, and the model may struggle to negate it. That said, models like GPT respond decently well when the negative instruction is explicit and detailed, stating precisely what not to do rather than giving a bare prohibition. Separately, having the model generate its own examples can be effective: it recalls canonical examples it already knows and then moves on to solving the problem. Self-generated examples and chain-of-thought rationales are useful, but human-written examples may still be better for accuracy.
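A minimal sketch of the two prompt-construction tactics described above. The function names, wording, and parameters are illustrative assumptions, not anything prescribed in the episode; the idea is simply that a negative instruction should be detailed and explicit, and that the model can be asked to write its own worked examples (with reasoning) before tackling the real task.

```python
def negative_instruction_prompt(task: str, forbidden: str, detail: str) -> str:
    """Explicit, detailed negative instruction: spell out exactly what NOT
    to do instead of a bare 'don't X', which risks priming the model."""
    return (
        f"{task}\n"
        f"Do NOT {forbidden}.\n"
        f"Specifically: {detail}"
    )


def self_generated_examples_prompt(task: str, n_examples: int = 2) -> str:
    """Ask the model to first recall its own canonical examples, with
    step-by-step (chain-of-thought) rationales, then solve the real task."""
    return (
        f"First, write {n_examples} worked examples of similar problems, "
        "showing your reasoning step by step.\n"
        "Then solve the following problem using the same approach:\n"
        f"{task}"
    )


# Example usage (hypothetical task):
prompt = negative_instruction_prompt(
    task="Summarize the article below.",
    forbidden="mention the author's name",
    detail="refer to them only as 'the author', never by name or initials.",
)
print(prompt)
```

Either string would then be sent to the model as the user message; the contrast worth noting is that the negative instruction carries its own elaboration, while the self-generated-examples prompt defers example writing to the model itself.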
