How to Prompt AI Agents: What Actually Gets Good Results
Affiliate disclosure: This article contains affiliate links. If you click a link and make a purchase, we may earn a commission at no extra cost to you. Our editorial recommendations are never influenced by commissions — read our full disclosure policy.
Why Most AI Prompts Fail
The number one reason people get bad output from AI agents is not the tool — it is the prompt. After testing over 20 AI tools for six months at NorwegianSpark, Thomas and Øyvind have seen the same prompting mistakes hundreds of times. The good news is that fixing them is straightforward.
Most prompts fail because they are either too vague or too controlling. "Write me a blog post about AI" gives the agent nothing to work with. A 500-word prompt specifying every sentence constrains the agent so much that you might as well write it yourself.
The sweet spot is a prompt that provides clear context, a specific task, and enough freedom for the agent to do its job.
The Core Principles
These apply to every AI agent, regardless of the tool or task:
1. Be specific about what you want, not how to get there. Tell the agent the destination, not every turn.
2. Provide context the agent cannot guess. Your audience, your tone, your constraints, your existing content.
3. Include an example when possible. One good example communicates more than a paragraph of instructions.
4. Set quality criteria upfront. "Make it good" is useless. "Make it concise, cite sources, and use active voice" is actionable.
5. Iterate, do not restart. If the first output is 70% right, refine it instead of rewriting your prompt from scratch.
Role + Context + Task + Format
This is the prompting framework we use for 90% of our work. It is not original — many people teach variations of it — but it works consistently across every tool we have tested.
Role: Tell the agent who it is. "You are an experienced tech journalist who writes for a knowledgeable audience." This anchors the tone and depth.
Context: Give the agent what it needs to know. "We are writing for NorwegianSpark, a site that reviews AI tools honestly. Our readers are tech-savvy professionals." This prevents generic output.
Task: State what you need clearly. "Write a 1,200-word comparison of three AI writing tools, focusing on output quality and speed." Specific, measurable, achievable.
Format: Specify the structure. "Use H2 headings for each tool, include a pros/cons list, and end with a clear recommendation." This eliminates the most common structural problems.
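Because the framework is just four labeled blocks joined together, it is easy to assemble programmatically. A minimal Python sketch (the function and argument names are our own illustration, not part of any particular SDK):

```python
def build_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Assemble a Role + Context + Task + Format prompt as four labeled blocks."""
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {fmt}",
    ])

prompt = build_prompt(
    role="You are an experienced tech journalist who writes for a knowledgeable audience.",
    context=("We are writing for NorwegianSpark, a site that reviews AI tools honestly. "
             "Our readers are tech-savvy professionals."),
    task=("Write a 1,200-word comparison of three AI writing tools, "
          "focusing on output quality and speed."),
    fmt=("Use H2 headings for each tool, include a pros/cons list, "
         "and end with a clear recommendation."),
)
```

Keeping the four parts as separate arguments makes it obvious when one is missing, which is the most common failure mode.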
We use this framework in MindManager to plan prompts before running them. Mapping out role, context, task, and format visually ensures we do not miss critical elements.
Chain of Thought
For complex tasks, tell the agent to think step by step. This is not just a trick — it genuinely improves output quality for multi-step problems.
Instead of: "Analyze these five products and recommend the best one."
Try: "First, list the key features of each product. Then compare them on price, quality, and user reviews. Then identify which product best fits a small business owner. Finally, write your recommendation with reasoning."
Breaking the task into explicit steps forces the agent to be methodical. The output is longer but significantly more reliable. We use this approach extensively with PopAI for research tasks where accuracy matters.
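The same decomposition can be templated so every multi-step prompt gets explicit, ordered instructions. A rough sketch under the same illustrative-naming caveat as before:

```python
def stepwise_prompt(steps: list[str]) -> str:
    """Turn an ordered list of sub-tasks into one explicit chain-of-thought prompt."""
    numbered = [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return ("Work through the following steps in order, "
            "showing your reasoning at each step:\n" + "\n".join(numbered))

prompt = stepwise_prompt([
    "List the key features of each product.",
    "Compare them on price, quality, and user reviews.",
    "Identify which product best fits a small business owner.",
    "Write your recommendation with reasoning.",
])
```

One list per task keeps the steps easy to reorder or trim when a run shows that a step is redundant.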
Iterating on Output
The first output is rarely the final output. The skill is in knowing how to refine it efficiently.
Good feedback is specific: "The second paragraph repeats the point from the introduction — combine them." "The tone is too casual for our audience — make it more professional." "Add a specific example to support the claim in section three."
Bad feedback is vague: "Make it better." "This does not feel right." "Try again."
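One habit that helps: collect your specific notes first, then send them as a single numbered revision request instead of several vague messages. A sketch, assuming nothing beyond plain string handling:

```python
def revision_prompt(notes: list[str]) -> str:
    """Bundle specific feedback items into one targeted revision request."""
    items = "\n".join(f"- {note}" for note in notes)
    return ("Revise the draft. Make only these changes, "
            "keeping everything else intact:\n" + items)

prompt = revision_prompt([
    "The second paragraph repeats the point from the introduction; combine them.",
    "The tone is too casual for our audience; make it more professional.",
    "Add a specific example to support the claim in section three.",
])
```

The "keeping everything else intact" line matters: without it, agents tend to rewrite passages you were happy with.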
Crush handles iterative feedback exceptionally well — you can highlight specific sections and give targeted instructions. This is one reason it is our top writing tool.
Common Mistakes
Overloading a single prompt. Do not ask an agent to research, outline, write, edit, and format in one prompt. Break it into steps. Each step builds on the last, and you can course-correct between them.
Ignoring the agent's strengths. Every tool has biases and capabilities. Learn what your tool is good at and prompt accordingly. Do not ask a writing agent to do complex data analysis.
Not providing examples. If you want a specific style, show the agent an example. "Write in this style:" followed by a paragraph is more effective than describing the style in abstract terms.
Expecting perfection on the first try. AI agents are drafting tools, not finished-product machines. Plan for one to three rounds of refinement.
Using jargon the agent might not understand in context. Industry-specific terms can be interpreted differently across domains. Be explicit about what you mean.
Our Template Library
At NorwegianSpark, we maintain a library of prompt templates for common tasks. Here are three we use weekly:
Article draft template: "Role: experienced tech reviewer. Context: [publication name], [audience description], [topic background]. Task: write a [word count]-word article about [topic], covering [specific angles]. Format: introduction, [N] H2 sections, conclusion with recommendation. Tone: direct, knowledgeable, first person plural."
Product comparison template: "Role: unbiased product analyst. Context: we have tested [products] for [duration] on [use cases]. Task: compare the products on [criteria]. Format: summary table, detailed comparison by criterion, final recommendation. Include specific test results."
Email sequence template: "Role: conversion copywriter. Context: [product/service], [target audience], [goal of sequence]. Task: write a [N]-email sequence, each [word count] words. Format: subject line, preview text, body, CTA for each email. Tone: [description]."
These templates save us time and produce consistent results. Customize them for your own use cases and tools.
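Templates like these can live in code as strings with named placeholders, so filling one is a single call. A minimal sketch of the article template using Python's str.format (the field names and example values are illustrative):

```python
ARTICLE_TEMPLATE = (
    "Role: experienced tech reviewer. "
    "Context: {publication}, {audience}, {background}. "
    "Task: write a {word_count}-word article about {topic}, covering {angles}. "
    "Format: introduction, {n_sections} H2 sections, conclusion with recommendation. "
    "Tone: direct, knowledgeable, first person plural."
)

prompt = ARTICLE_TEMPLATE.format(
    publication="NorwegianSpark",
    audience="tech-savvy professionals",
    background="AI writing tools tested over six months",
    word_count=1200,
    topic="AI writing assistants",
    angles="output quality and speed",
    n_sections=3,
)
```

Named fields beat positional ones here: a template with seven blanks is much harder to fill correctly when the blanks are anonymous.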
For more on the tools we use these techniques with, see our best AI agents overview and the Agent Finder. Browse all tutorials in the agent tutorials category.
Reviewed by Thomas — NorwegianSpark · Last updated: 8 April 2026