There are different prompts for different models, especially now that we're in this chat era where not all models use the same interface anymore. The tricks you apply to text completion models don't transfer as well to the new chat APIs such as ChatGPT and GPT-4. There's some minimal amount of adaptation you have to do for the chat way of prompting things. In particular, I feel like ChatGPT has reliability issues that come from the presumption that what it's doing is being a chat model.
This is a special preview episode of The Cognitive Revolution: How AI Changes Everything. Hosted by Erik Torenberg and Nathan Labenz, TCR features in-depth interviews with the creators, builders, and thinkers pushing the bleeding edge of AI. On this episode, they talk with Riley Goodside, the first Staff Prompt Engineer at Scale AI and an expert in prompting LLMs and integrating them into AI applications.
Check out The Cognitive Revolution, the perfect AI interview complement to The AI Breakdown: https://link.chtbl.com/TheCognitiveRevolution
Find TCR on YouTube: https://www.youtube.com/@CognitiveRevolutionPodcast