Why Most People Get Mediocre Results from AI
You have probably tried ChatGPT or another AI tool, typed in a question, and received a response that was... fine. Generic. Usable but not impressive. Maybe you concluded that AI is overhyped, or that it cannot handle the nuance your work requires. But the issue is almost never the AI — it is the prompt.
Prompting is to AI what briefing is to a new employee. If you tell a new hire "write something about our product," you will get a vague, generic piece that misses the mark. But if you say "write a 500-word product description for our project management software, targeting small marketing agencies, emphasizing the Gantt chart feature, in a professional but friendly tone, similar to Basecamp's website copy," you will get something dramatically better. The AI works the same way.
Prompt engineering — the skill of writing effective instructions for AI — has become one of the most valuable practical skills in the modern office. It is not about memorizing magic words or knowing secret tricks. It is about communicating clearly, specifically, and completely. And like any skill, it improves with practice and a good framework.
The CRAFT Framework
Studying thousands of prompts and their outputs reveals a clear pattern: the best prompts consistently include five elements. We call this the CRAFT framework — Context, Role, Action, Format, Tone.
C — Context
Give the AI the background information it needs. Who are you? What is the situation? What has happened so far? What constraints exist? The more relevant context you provide, the more tailored the output becomes. Without context, the AI defaults to the most generic interpretation of your request.
Bad: "Write an email about the project delay." Good: "Our software development project for client Acme Corp is running 2 weeks behind the original March 15 deadline due to unexpected API integration issues. The client has been understanding so far but is getting anxious about their product launch timeline."
R — Role
Tell the AI who it should be. Assigning a role activates different patterns in the model's training data and significantly changes the style, vocabulary, and approach of the response. "You are a senior customer success manager" produces very different output than "you are a technical support engineer" — even for the same question.
Effective roles include: "You are an experienced email copywriter for B2B SaaS companies," "You are a financial analyst preparing a board presentation," or "You are a patient customer support representative for a premium brand." The more specific the role, the better the output matches your needs.
A — Action
State exactly what you want the AI to do. Use clear action verbs: write, summarize, analyze, compare, list, rewrite, translate, explain, categorize, prioritize. Avoid ambiguous instructions like "help me with" or "do something about." Be specific about what the deliverable should be.
Instead of: "Help me with this customer complaint." Try: "Draft a professional email response that acknowledges the customer's frustration, explains what went wrong, offers a concrete solution (full refund or replacement), and includes a discount code for their next purchase."
F — Format
Specify the structure of the output. Do you want bullet points, numbered steps, a table, a formal letter, an email, a one-paragraph summary, or a detailed report? Tell the AI the length you expect: "in 3 paragraphs," "under 200 words," "as a 10-item list." Without format instructions, the AI guesses — and often guesses wrong.
T — Tone
Define the voice. Professional, casual, empathetic, authoritative, friendly, urgent, reassuring. Tone dramatically affects how the message lands with its audience. A customer apology in a casual tone reads very differently than one in a formal, corporate tone. Match the tone to your brand and your audience.
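For readers who assemble prompts programmatically, the five CRAFT elements map naturally onto a small helper function. This is a minimal sketch; the function and field names are our own, not part of any library or API.

```python
def build_craft_prompt(context: str, role: str, action: str,
                       fmt: str, tone: str) -> str:
    """Combine the five CRAFT elements into a single prompt string."""
    return "\n\n".join([
        f"You are {role}.",     # Role
        f"Context: {context}",  # Context
        f"Task: {action}",      # Action
        f"Format: {fmt}",       # Format
        f"Tone: {tone}",        # Tone
    ])

# Illustrative usage, echoing the project-delay example above:
prompt = build_craft_prompt(
    context=("Our software project for client Acme Corp is running 2 weeks "
             "behind the March 15 deadline due to API integration issues."),
    role="an experienced client-facing project manager",
    action="draft an email updating the client on the delay and the recovery plan",
    fmt="a short email: subject line, greeting, two paragraphs, sign-off",
    tone="professional, transparent, and reassuring",
)
print(prompt)
```

Keeping the elements as separate arguments makes it obvious when one is missing — an empty field is a prompt-quality bug you can catch before the AI ever sees it.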
Advanced Prompting Techniques
Once you have mastered the CRAFT basics, several advanced techniques can dramatically improve your results for complex tasks.
Few-Shot Prompting
Show the AI examples of what you want before asking it to produce output. If you want email subject lines in a specific style, provide 3-5 examples of subject lines you like, then ask it to generate more in the same style. This is called "few-shot" prompting because you give the AI a few shots (examples) to learn from. It is one of the most powerful techniques available and consistently outperforms zero-shot (no examples) prompting.
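Mechanically, a few-shot prompt is just the examples followed by the new request. A minimal sketch, with illustrative example subject lines and function names of our own:

```python
def few_shot_prompt(examples: list[str], request: str) -> str:
    """Prepend example outputs to a request so the AI can match their style."""
    shots = "\n".join(f"- {e}" for e in examples)
    return (
        "Here are subject lines in the style I want:\n"
        f"{shots}\n\n"
        f"{request}"
    )

# Illustrative examples — replace with subject lines from your own campaigns.
examples = [
    "Stop losing leads: your pipeline just got smarter",
    "3 reports your team will actually read",
    "The 10-minute Monday planning ritual",
]
print(few_shot_prompt(
    examples,
    "Write 5 more subject lines in the same style for our referral program.",
))
```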
Chain-of-Thought
For complex reasoning tasks, ask the AI to "think step by step" or "explain your reasoning before giving the final answer." This forces the model to work through the problem logically rather than jumping to a potentially incorrect conclusion. For business analysis, financial calculations, or strategic recommendations, chain-of-thought prompting significantly improves accuracy.
Constraint Setting
Tell the AI what NOT to do. "Do not use jargon." "Do not include generic advice — only specific, actionable recommendations." "Do not mention competitors by name." "Do not exceed 500 words." Constraints prevent the AI from defaulting to its most common patterns, which tend to be generic and verbose.
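Constraints are easy to keep reusable by appending them as an explicit list at the end of any prompt. A small sketch, with illustrative prompt text and a helper name of our own:

```python
def with_constraints(prompt: str, constraints: list[str]) -> str:
    """Append an explicit 'do not' list to an existing prompt."""
    bullet_list = "\n".join(f"- {c}" for c in constraints)
    return f"{prompt}\n\nConstraints:\n{bullet_list}"

base_prompt = (
    "Write a 200-word LinkedIn post announcing our new referral program, "
    "targeting existing customers."
)
constraints = [
    "Do not use jargon.",
    "Do not include generic advice; give specific, actionable points.",
    "Do not exceed 200 words.",
]
print(with_constraints(base_prompt, constraints))
```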
Iterative Refinement
Treat your first prompt as a first draft, not a final submission. Review the output, identify what is missing or wrong, and refine your prompt. "This is good, but make the opening paragraph more direct and add a specific metric about our customer satisfaction rate of 94%." Each iteration gets you closer to exactly what you need. The best prompters iterate 2-3 times on important outputs.
Common Prompting Mistakes
Understanding what goes wrong helps you avoid the most frequent pitfalls that produce poor AI output.
- Being too vague: "Write something about marketing" vs. "Write a 200-word LinkedIn post announcing our new referral program, targeting existing customers, emphasizing the 20% discount incentive."
- Not providing context: The AI does not know your industry, your audience, your brand voice, or your goals unless you tell it. Every assumption it makes is a potential error.
- Asking for too much at once: "Write a complete marketing strategy" will produce a superficial overview. Break complex requests into focused sub-tasks and build up the complete picture step by step.
- Ignoring the output format: If you need a table, say so. If you need bullet points, specify. If you need a specific email structure (subject, greeting, body, CTA, sign-off), define it explicitly.
- Not iterating: Accepting the first output when it is only 70% right. One round of refinement often gets you to 95%.
Building a Prompt Library for Your Business
The biggest productivity gain from prompting is not writing better individual prompts — it is building a library of proven, reusable prompts that your entire team can use. Once you craft a prompt that consistently produces great customer support responses, that prompt becomes a business asset. Save it, document it, share it.
This is exactly what ANTS prompt templates are designed for. Instead of every team member writing their own prompts from scratch, you build standardized templates for your most common tasks — email responses, report summaries, social media posts, meeting notes, customer follow-ups — and every team member gets consistently high-quality output.
A well-maintained prompt library is one of the highest-ROI investments a business can make in AI adoption. It standardizes quality, reduces training time for new employees, and turns individual expertise into organizational capability.
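Even without dedicated tooling, a prompt library can start as templated strings with named placeholders, so team members fill in only the details that change. A minimal sketch using Python's standard library; the template name, placeholders, and sample values are illustrative:

```python
from string import Template

# A shared, documented library of proven prompts, keyed by task name.
PROMPT_LIBRARY = {
    "support_reply": Template(
        "You are a patient customer support representative for a premium brand.\n"
        "Context: $context\n"
        "Task: Draft a reply that acknowledges the customer's frustration, "
        "explains what went wrong, and offers $remedy.\n"
        "Tone: empathetic and professional. Length: under 200 words."
    ),
}

def render(name: str, **fields: str) -> str:
    """Fill a library template; raises KeyError if a placeholder is missing."""
    return PROMPT_LIBRARY[name].substitute(fields)

# Illustrative usage:
print(render("support_reply",
             context="The customer's order arrived damaged.",
             remedy="a full refund or replacement"))
```

Because `substitute` raises an error on any missing placeholder, the template itself enforces that every required detail is supplied — the consistency guarantee a shared library is meant to provide.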
The difference between a mediocre AI user and a power user is not intelligence or technical skill — it is the willingness to be specific, provide context, and iterate on results.
— ANTS Prompting Guide