AI often gives you something that looks right. It’s confident, fluent, even clever. But would you stake a decision on it? Probably not.

And the issue usually isn’t the model.
It’s not even the prompt.

What actually makes the difference is how you set up the context.

After dozens of experiments, I keep coming back to three simple moves:

1. Constrain

Function: Limit the space of valid outputs
Question: What is not allowed or out of scope?

If you don’t set boundaries, the model fills in the blanks, often incorrectly.

Constraints keep it from guessing, reduce noise, and help it stay focused on what actually matters.
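For example, constraints can live right in the system prompt. A minimal sketch in Python, assuming the OpenAI SDK; the model name and the contract-review scenario are illustrative placeholders, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Constrain: spell out what is out of scope so the model doesn't fill the gaps itself.
system_prompt = (
    "You are reviewing a vendor contract summary.\n"
    "Constraints:\n"
    "- Only discuss payment terms and termination clauses.\n"
    "- Do not speculate about clauses that are not quoted in the input.\n"
    "- If the input lacks the information needed, say so instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Summarize the risks in these clauses: ..."},
    ],
)
print(response.choices[0].message.content)
```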

2. Prescribe

Function: Define the shape, tone, or behavior of the response
Question: How should the answer be structured or behave?

Even with constraints, the model doesn’t know what “good” looks like.

Give it a format, an example, or a description of the audience. That alignment turns a plausible answer into a useful one.
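Prescribing can be as simple as stating the format and audience before asking the question. Another sketch under the same assumptions (OpenAI SDK, placeholder model name, made-up release-notes example):

```python
from openai import OpenAI

client = OpenAI()

# Prescribe: define the shape and tone of the answer up front.
system_prompt = (
    "You write release notes for non-technical customers.\n"
    "Format:\n"
    "- One-sentence summary first.\n"
    "- Then 3-5 bullet points, each starting with a verb.\n"
    "- No internal ticket numbers or code references."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Changes this sprint: faster sync, fixed login timeout, new export button."},
    ],
)
print(response.choices[0].message.content)
```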

3. Ground

Function: Anchor reasoning in facts and source material
Question: What must the model treat as true?

If you don’t give it real information, it will make things up.

Grounding the model in facts, documents, or artifacts shifts it from guessing to working with material you can actually verify.
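In practice, grounding usually means putting the source material directly into the prompt and telling the model to rely on it alone. A minimal sketch; the policy text, question, and model name are all placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Ground: supply the source material and instruct the model to treat it as the truth.
source_document = """Refund policy (v2): Customers may request a refund
within 30 days of purchase. Annual plans are refunded pro rata."""

prompt = (
    "Answer using only the document below. "
    "If the document does not contain the answer, say 'not covered'.\n\n"
    f"Document:\n{source_document}\n\n"
    "Question: Can a customer on an annual plan get a partial refund after 45 days?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```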

That’s it.
No complex frameworks, just three strategies that make a real difference when used correctly.

Pick one, try it, and see what changes. And let me know how it went.
