The Smallest Change That Will 10× Your AI Coding Results
It’s not better prompts. It’s what your AI sees before you type anything.
Most people are trying to get better results from AI by improving their prompts.
Longer prompts.
More detailed prompts.
More “you are a senior engineer…” prompts.
And sometimes that works.
But after a while, you notice something strange:
You can write a perfect prompt…
…and still get mediocre code.
The Real Problem
The problem isn’t your latest prompt. It’s what the AI sees before you even type it.
Two Developers, Same Prompt
Let’s say two developers ask:
“Refactor this module, improve readability, and ensure no regressions.”
Developer A
- messy repo
- no README
- unclear naming
- no tests
- random scripts
Developer B
- clear README
- structured modules
- tests covering behavior
- explicit constraints
They use the same prompt.
They get completely different results.
Why This Happens
AI doesn’t “understand” your system.
It infers your system.
From:
- file names
- structure
- tests
- comments
- patterns
If your codebase is vague, the AI fills in the gaps.
And it will fill them… confidently.
The Small Shift
Stop thinking:
“How do I write a better prompt?”
Start thinking:
“What environment does my AI operate in?”
Because that environment is the real prompt.
What Actually Changes Everything
There are three things that make a disproportionate difference:
- A README that explains intent
- Tests that define reality
- Constraints that remove ambiguity
Let’s go through them.
1. README Is Not Documentation Anymore
Most READMEs try to explain what the code does.
That’s not the most valuable thing anymore.
Your AI can already read the code.
What it cannot infer reliably:
- why decisions were made
- what tradeoffs exist
- what must not change
So instead of writing:
“This project processes financial data”
Write:
- “Latency must stay below 50ms”
- “Numerical stability is more important than speed”
- “All outputs must be reproducible with fixed seeds”
- “This module is intentionally duplicated for clarity”
That last one alone prevents hours of “AI cleanup.”
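A constraint like “reproducible with fixed seeds” translates directly into code the AI can follow. A minimal sketch of what that looks like (the function name and seed value here are hypothetical, purely for illustration):

```python
import random

def sample_returns(n, seed=42):
    # A fixed default seed makes every run reproducible,
    # satisfying a README constraint like "outputs must be
    # reproducible with fixed seeds"
    rng = random.Random(seed)
    return [round(rng.gauss(0, 1), 4) for _ in range(n)]

# Two calls with the same seed produce identical output
assert sample_returns(5) == sample_returns(5)
```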
Small Trick That Changes Everything
Add a section like:
```markdown
## Non-Negotiable Constraints

- Do not change function signatures in `core/`
- All calculations must remain deterministic
- Avoid introducing new dependencies
- Backwards compatibility is required for API v1
```

You just removed 80% of “creative mistakes.”
2. Tests Are Not Safety Nets — They Are Communication
Most people think tests are there to catch bugs.
That’s true.
But with AI, they do something more important:
They tell the model what correct means.
Without tests:
- the AI guesses correctness
With tests:
- correctness is observable
The Important Part Most People Miss
Coverage is not the goal.
Clarity is.
A single good test is often better than 20 vague ones.
Bad test:

```python
assert result is not None
```

Good test:

```python
assert calculate_price(100, tax=0.19) == 119.0
```

Even better:

```python
# Price must remain stable across rounding boundaries
assert calculate_price(0.1, tax=0.19) == 0.119
```

Now the AI knows:
- precision matters
- rounding matters
- behavior matters
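For those tests to pass, the implementation has to respect the same rules. One way to satisfy them (a sketch under assumed requirements, not the article’s actual code) is exact decimal arithmetic:

```python
from decimal import Decimal

def calculate_price(amount, tax):
    # Sketch implementation: Decimal avoids binary-float drift
    # (in plain floats, 100 * 1.19 == 118.99999999999999)
    total = Decimal(str(amount)) * (Decimal("1") + Decimal(str(tax)))
    return float(total)

assert calculate_price(100, tax=0.19) == 119.0
assert calculate_price(0.1, tax=0.19) == 0.119
```

The point isn’t this particular implementation; it’s that the tests made “correct” observable enough for any implementation, human or AI, to be checked against it.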
Subtle but Powerful
If you want better refactors:
-> Write tests that encode intent, not implementation
Otherwise the AI will “optimize” away things you actually needed.
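A concrete illustration of intent-encoding (the helper and its rule are hypothetical): the assertions state the business rule, not the internal mechanics, so any refactor that preserves behavior still passes.

```python
def apply_discount(price, discount):
    # Hypothetical business rule: a discounted price can never go negative
    return max(price - discount, 0.0)

# Intent tests: they encode the rule, not the implementation
assert apply_discount(10.0, 2.0) == 8.0
assert apply_discount(1.0, 5.0) == 0.0  # the clamp, not the subtraction, is the intent
```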
3. Constraints Beat Creativity
Most people try to get better results by giving more freedom.
“Be elegant.”
“Improve performance.”
“Refactor cleanly.”
That sounds nice.
It also invites chaos.
What Works Better
Constrain the problem.
Instead of:
“Improve this code”
Say:
- “Only refactor within this file”
- “Do not change public interfaces”
- “Preserve current behavior exactly (see tests)”
- “Focus only on readability improvements”
You’ll get:
- smaller changes
- fewer regressions
- faster iteration
The Surprising Part
When you reduce the solution space…
The AI actually becomes more useful, not less.
Minimal Setup That Outperforms Fancy Prompting
If you do only this:
1. README with intent + constraints
2. 5–10 high-quality tests
3. Clear module boundaries
You will outperform someone using:
- perfect prompts
- better models
- more tokens
The Real Workflow (What Actually Works)
Here’s a loop that consistently produces good results:
- Define constraints (`README` or inline)
- Write or refine tests
- Ask AI to implement or refactor
- Run tests
- Adjust constraints (not prompts)
Notice what’s missing?
No prompt tweaking spiral.
A Small Example
Before:
“Optimize this function”
After:
“Refactor this function for readability only.
Do not change behavior (see tests).
Avoid introducing new dependencies.
Keep function signature unchanged.”
Same model.
Different world.
What This Changes
Once you work like this, something interesting happens:
You stop thinking of AI as:
“a tool that writes code”
And start seeing it as:
“a system that operates within boundaries you define”
And that’s the real leverage.
Because boundaries scale.
Prompts don’t.
Bonus: Useful Additions to AGENTS.md
1. Add “Known Weaknesses”
```markdown
## Known Weaknesses

- Floating point precision may degrade for very large inputs
- No currency conversion support yet
```

-> The AI will avoid “fixing” things you deliberately left out
2. Add “Out of Scope”
```markdown
## Out of Scope

- Multi-currency support
- External API integrations
```

-> Prevents scope creep during generation
3. Add “Examples of Bad Changes”
```markdown
## Avoid These Changes

- Replacing simple logic with heavy abstractions
- Introducing caching without measurement
```

-> This dramatically reduces overengineering
4. Force Explanation Mode (very powerful)
Add to AGENTS.md:
```
Before making changes, briefly explain your plan.
```

-> You turn the AI into a thinking partner, not a blind executor
The Quiet Advantage
Most people will keep optimizing prompts.
You’ll optimize the environment.
And your results will look like you’re using a better model.
Even when you’re not.
Most people try to control AI with better prompts.
The real leverage comes from controlling the environment it operates in.
Once you see that, you stop fighting the model—and start shaping it.