Prompt Engineering

Why Better Prompts Beat Better Models: The Bloch AI Three Cs Framework

Your team spent months choosing the perfect model. Then someone typed "summarise this" and wondered why it didn't work.

Published:

15.11.25

Your team is probably wasting 80% of your AI investment.

Not because you chose the wrong model. Not because you need better tools. Because nobody taught them how to ask the right questions.

I've watched this pattern repeat across dozens of organisations. Executives spend months evaluating vendors, debating security protocols, and building deployment strategies. Then their people type "summarise this document" into ChatGPT, get a mediocre result, and conclude AI doesn't work.

The model isn't the problem. The prompt is.

Here's what most people miss: for large language models, the real leverage lives in your inputs, not in the technology itself. The difference between a vague prompt on the best model and a well-structured prompt on a decent model isn't even close. The well-structured prompt wins every time.

This isn't a technology problem. It's a skill problem. And it's costing you.

How These Systems Actually Work

Strip away the hype and LLMs do one thing: they continue text based on patterns they've learned.

They don't "understand" your business. They don't reason about your strategy. They don't know what matters to your organisation unless you tell them.

What they're extremely good at is pattern-matching. Give them incomplete information or vague instructions, and they'll fill the gaps with something that looks plausible. Something based on patterns from across the entire internet, which probably has nothing to do with your specific context.

This is why you get confident nonsense. The model isn't broken. You just didn't give it enough to work with.

The good news? Once you understand how these systems operate, fixing it is straightforward.

The Three Cs: Context, Clarity, Constraints

Because LLMs work by pattern-matching and gap-filling, they depend entirely on three things: the information you give them, the task you define, and the boundaries you set.

Miss any one of these, and you get generic rubbish. Get all three right, and even a modest model becomes surprisingly powerful.

Context: The Information the Model Uses

Context is everything the model can "see" when it generates an answer. Your policies. Your data. Your examples. The specific document you've given it. The background that explains why this task matters.

You can't write every detail into your instructions. You can't specify every assumption. Context fills those gaps the way an expert who really knows your business would.

Without context, the model simply guesses the most likely output. And its guesses are based on the entire internet, not your organisation.

Put another way: if you wouldn't expect a new graduate to produce good work without background information, why would you expect an AI to do better?

Clarity: The Requirements of the Task

Clarity defines what you actually want. The job. The purpose. The audience. The format. The success criteria.

Where context supplies the background, clarity defines the task.

Without clarity, you get generic patterns. Overconfident claims. The kind of output that looks professional but says nothing useful.

Most people write prompts the way they'd ask a colleague in the corridor. That works with humans because they can ask follow-up questions and infer what you meant. LLMs can't. They just guess.

Constraints: The Shape and Limits of the Output

Constraints tell the model how to deliver the output. Bullet points or prose. Tables or paragraphs. Word limits. Tone. Structural rules like "don't guess", "cite your sources", or "no em-dashes".

Constraints narrow the space of acceptable answers. They stop the model rambling. They make outputs consistent, reviewable, and actually usable.

Without constraints, you spend more time editing AI output than you would have spent writing it yourself.
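If your team calls models through code rather than a chat window, the same discipline is easy to bake in. Here is a minimal sketch in plain Python (no particular vendor's API; the function and file names are purely illustrative) that assembles the three Cs into a single prompt:

def build_prompt(context: str, clarity: str, constraints: str, document: str) -> str:
    """Assemble a Three Cs prompt: background first, then the task, then the rules."""
    return (
        f"CONTEXT:\n{context}\n\n"
        f"TASK:\n{clarity}\n\n"
        f"CONSTRAINTS:\n{constraints}\n\n"
        f"DOCUMENT:\n{document}"
    )

prompt = build_prompt(
    context="You are assisting a senior manager preparing for a leadership meeting.",
    clarity="Summarise the document as a short, decision-oriented overview for the executive team.",
    constraints="Under 200 words. Bullet points. Do not add information that is not in the document.",
    document=open("report.txt", encoding="utf-8").read(),  # illustrative file name
)

The helper function isn't the point. The point is that context, clarity, and constraints become named, reviewable fields instead of an afterthought.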

What This Looks Like in Practice

Here's how most people use AI:

"Write a summary of this document."

What happens next is predictable. The model doesn't know who the summary is for, why it matters, or what format you need. So it produces something generic. Too long or too short. Wrong tone. Wrong focus. You end up editing it heavily or just starting again.

This is what I see every day in organisations. They don't have a model problem. They have a prompting problem.

Now look at the same task done properly:

Context

You are assisting a senior manager preparing for a leadership meeting. The document you will read is a business report about a proposed initiative. The audience for your summary is the executive team, who need a clear, fast, decision-focused overview.

Clarity

Your task is to produce a short decision-oriented summary that includes:

  1. The core problem the initiative aims to solve

  2. The proposed solution or approach

  3. The expected benefits and success measures

  4. The key risks or uncertainties highlighted in the document

Constraints

Please structure the output as follows:

  • Problem (2-3 sentences)

  • Proposed Solution (2-3 sentences)

  • Expected Benefits (bullet points)

  • Key Risks (bullet points)

Keep the entire response under 200 words. Use clear, concise language suitable for senior leaders. Do not add information that is not in the document. If something is unclear or missing, say so directly.

The difference isn't subtle. The first version forces the model to guess what you want. The second version eliminates the guesswork.

Context stops it behaving like a generic internet summariser. Clarity turns "summarise this" into a specific, decision-oriented task. Constraints ensure the output is short, structured, and ready to use.
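For completeness, here is roughly how the same structured prompt looks when sent through an API. This sketch uses the OpenAI Python SDK's chat completions endpoint, but every major vendor's API follows the same shape; the model and file names are illustrative, not a recommendation:

from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

# Context goes in the system message: who the model is helping, and for whom.
system_msg = (
    "You are assisting a senior manager preparing for a leadership meeting. "
    "The audience for your summary is the executive team, who need a clear, "
    "fast, decision-focused overview."
)

# Clarity and constraints go in the user message, followed by the document.
user_msg = (
    "Summarise the report below, covering: the core problem, the proposed "
    "solution or approach, the expected benefits and success measures, and "
    "the key risks or uncertainties. Structure it as Problem (2-3 sentences), "
    "Proposed Solution (2-3 sentences), Expected Benefits (bullet points), "
    "Key Risks (bullet points). Keep the entire response under 200 words. "
    "Do not add information that is not in the document. If something is "
    "unclear or missing, say so directly.\n\nREPORT:\n"
    + open("report.txt", encoding="utf-8").read()  # illustrative file name
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model will do
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
)
print(response.choices[0].message.content)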

Most people never do this. They switch models, iterate around the edges of the prompt, and complain about hallucinations, when the real problem is that they're asking a powerful system to guess what they want.

Why This Matters More Than Model Choice

Model choice still matters. Speed, cost, security, and integration are all very important considerations.

But here's the truth: the performance gap between models is narrowing as they race each other to the top. The performance gap between well-prompted and poorly-prompted requests, however, is enormous and isn't shrinking at all.

Your team's ability to prompt well is worth more than your choice of vendor. Organisations that teach prompting as a core skill will outperform those focused on tool selection. The competitive advantage in AI increasingly comes from input discipline, not technology access.

Think about what this means for a moment.

You can spend six months evaluating vendors and deploying the best model available. But if your people can't write decent prompts, you'll get worse results than a competitor using a free tool with better input discipline.

Sure, we've all been disappointed by AI outputs at times. But AI platforms are like cars. An expensive sports car will get you to exactly the same destination as a small budget family car. You may just need to push the accelerator a little harder.

They say "Culture eats strategy for breakfast". Its the same with AI. Your people need to embrace it and spend the time learning to work with AI. Simply because….

….the latest and greatest technologies will never beat a knowledgeable and committed workforce.

Tags:

#AI

#prompting
