
How to Prompt GPT-5
The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis
Summary of “How to Prompt GPT-5”
The episode argues that GPT-5 is highly steerable and benefits greatly from well-crafted prompts. It compiles 11 practical prompting techniques drawn from OpenAI’s guidance, early testers, and prompt engineers to help users get better outputs from GPT-5.
The hosts acknowledge GPT-5’s divisive reception but emphasize that its strength lies in adherence to instructions when prompts are explicit and well-structured. They introduce the overall idea that prompt quality now matters more than ever for GPT-5’s performance.
The episode is divided into two main parts: foundations (core prompting techniques) and agentic toggles (controls for how the model processes and delivers results). The goal is to provide immediately usable tips you can apply right away. (source: roughly 151.9s–170.2s in the episode, with broader context drawn from the early sections)
11 Practical Prompting Techniques (high-level)
Think harder / work deeper prompts: instruct GPT-5 directly to think deeply or take longer to reason, sometimes via a dedicated "UltraThink"-style block before tackling the task. This is presented as one of the most reliable ways to improve the depth of outputs.
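A minimal sketch of this technique in Python. The preamble wording and the helper name are illustrative assumptions, not phrasing from the episode or from OpenAI:

```python
# Hypothetical helper that prepends a "work deeper" instruction to any task
# prompt. The exact wording is an assumption; adjust to taste.

THINK_DEEPER_PREAMBLE = (
    "Think hard about this before answering. Take as long as you need: "
    "reason step by step, consider alternatives, and only then respond.\n\n"
)

def with_deep_thinking(task: str) -> str:
    """Prepend an explicit deep-reasoning instruction to a task prompt."""
    return THINK_DEEPER_PREAMBLE + task

prompt = with_deep_thinking("Summarize the trade-offs of caching at the CDN edge.")
```

The point is simply that the instruction to reason harder is explicit and sits before the task, rather than being implied.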
Explicit planning phases: require the model to decompose tasks, identify ambiguities, create a structured plan, and validate understanding before proceeding. This helps ensure no steps are skipped.
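The planning-phase idea can be sketched as a reusable template. The numbered steps below paraphrase the episode's description (decompose, surface ambiguities, plan, validate); the template text itself is an assumption:

```python
# Illustrative template that forces an explicit planning phase before execution.
PLANNING_TEMPLATE = """Before doing the task below:
1. Decompose it into sub-tasks.
2. List any ambiguities and state the assumptions you are making.
3. Write a numbered plan and confirm it covers every requirement.
Only then carry out the plan, step by step.

Task: {task}"""

def planning_prompt(task: str) -> str:
    """Wrap a task in an explicit plan-then-execute structure."""
    return PLANNING_TEMPLATE.format(task=task)
```
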
Be extremely explicit about style and structure: specify tone, formatting, and output expectations; the model is shown to respond well to consistent structure and clearly defined parameters.
JSON and structured prompts: using JSON-like or highly structured prompts can improve fidelity and adherence, though the underlying benefit is the discipline it forces you to apply to the prompt.
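A sketch of a JSON-structured prompt, assuming hypothetical field names (`role`, `goal`, `constraints`, `output_format`); the discipline of filling in each field is the real benefit the episode describes:

```python
import json

def structured_prompt(role: str, goal: str, constraints: list[str],
                      output_format: str) -> str:
    """Serialize a task specification as JSON so every parameter is explicit."""
    spec = {
        "role": role,
        "goal": goal,
        "constraints": constraints,
        "output_format": output_format,
    }
    return "Follow this task specification exactly:\n" + json.dumps(spec, indent=2)

prompt = structured_prompt(
    role="technical editor",
    goal="Tighten the prose of the attached draft",
    constraints=["keep all citations", "do not change the thesis"],
    output_format="markdown",
)
```

Because the specification is valid JSON, it can be generated, versioned, and validated programmatically rather than hand-written each time.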
Reasoning and validation steps (planning with checks): include pre-execution reasoning, planning phases, validation checkpoints, and a post-action review to catch mistakes and uncertainties.
Iteration and self-rating / self-evaluation: prompt GPT-5 to generate and then critique or rate its own work against a rubric, enabling iterative improvement with clearer feedback signals.
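A sketch of the self-evaluation pattern: ask the model to score its own draft against a rubric, then revise. The rubric criteria below are illustrative examples, not taken from the episode:

```python
# Example rubric; swap in criteria that matter for your task.
RUBRIC = ["accuracy", "completeness", "clarity", "concision"]

def critique_prompt(draft: str) -> str:
    """Build a prompt asking the model to rate a draft, then rewrite it."""
    criteria = "\n".join(
        f"- {c}: score 1-5, with a one-sentence justification" for c in RUBRIC
    )
    return (
        "Rate the draft below against each criterion, then rewrite it to "
        "fix the lowest-scoring areas.\n\n"
        f"Criteria:\n{criteria}\n\nDraft:\n{draft}"
    )
```

Feeding the rewritten draft back through the same prompt gives the iterative loop the episode describes, with the rubric acting as the feedback signal.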
Meta-prompting: use prompts that ask GPT-5 to propose edits or improvements to the prompt itself, effectively bringing the model into the task of optimizing its own guidance.
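Meta-prompting can be as simple as wrapping the original prompt in a request for revisions. The wrapper wording here is an assumption, not the episode's exact phrasing:

```python
def meta_prompt(original_prompt: str) -> str:
    """Ask the model to act as a prompt engineer on an existing prompt."""
    return (
        "You are a prompt engineer. Review the prompt below and propose a "
        "revised version that is more explicit about goals, structure, and "
        "constraints. Explain each change in one line.\n\n"
        f"Prompt to improve:\n{original_prompt}"
    )
```
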
Prompt optimization tools: OpenAI’s built-in prompt optimizer for GPT-5 can suggest concrete changes to improve prompts; useful for developers who want an automated assist.
Avoid conflicting instructions: keep prompts free of contradictions; GPT-5 can burn through reasoning tokens trying to reconcile conflicts, which harms focus on the main task.
Agentic controls: tuning for agentic eagerness, parallel processing, and verbosity to control how aggressively the model reasons and how much it outputs. Useful for shaping performance in multi-task scenarios.
Agentic toggles and API-level tips
Reasoning effort parameter (API): a setting to adjust how much effort the model should put into reasoning (low, medium, high).
Parallel processing: GPT-5 can handle multiple tasks simultaneously when instructed to do so, with guidance on when to apply parallelism.
Verbosity control: an API parameter that affects the length of the final answer (not just thinking steps), allowing you to calibrate how verbose the output should be.
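The two API-level toggles above can be sketched as a request payload. The field names (`reasoning.effort`, `text.verbosity`) follow OpenAI's Responses API as the episode describes it, but treat them as assumptions and check the current API reference before relying on them:

```python
def build_request(prompt: str, effort: str = "medium",
                  verbosity: str = "medium") -> dict:
    """Assemble a GPT-5 request payload with reasoning and verbosity toggles."""
    assert effort in {"low", "medium", "high"}
    assert verbosity in {"low", "medium", "high"}
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": effort},   # how much effort goes into reasoning
        "text": {"verbosity": verbosity},  # length of the final answer
    }

# High effort with a terse answer: reason hard, report briefly.
request = build_request("Audit this config for security issues.",
                        effort="high", verbosity="low")
```

Note that verbosity shapes the final answer's length independently of how hard the model reasons, which is why the two are separate knobs.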
Bottom line from the episode
If there’s one takeaway, it’s to tighten the structure and explicitness of your prompts. Even as models get better at interpreting loosely worded requests, GPT-5 rewards careful, well-organized prompts that spell out goals, steps, constraints, and evaluation criteria.
The hosts express optimism that these techniques will yield visible improvements in GPT-5 outputs and invite listeners to experiment and report back which tips work best for them.