AI changed how we document code (or why we document code)
code is no longer written for humans only
If you use AI in your development workflow, documentation isn’t just a nice-to-have anymore. It’s essential. Poor documentation slows engineers down and can actually mislead AI.
For a long time, code had two primary consumers: the runtime and developers. Today, there is a third: LLMs that read, generate, refactor, and test code.
LLMs do not “understand” software in the human sense, but they are extremely good at extracting patterns, intent, and constraints from text. Importantly, they extract intent more reliably from natural language than from code alone. This is not because LLMs are bad at reading code; it’s because most code is optimized for execution, not explanation. Code excels at describing how something is done, but it is much worse at explaining why.
Business rules, assumptions, and edge cases are often implicit:
- In naming
- In conventions
- In the heads of the original authors
When those rules are not made explicit, an AI assistant is forced to infer them. And inference is exactly where hallucinations begin.
Documentation (comments, tests, READMEs, written rules) reduces the amount of inference an AI assistant must perform. It turns probabilistic guessing into constraint-based reasoning.
The less intent you express in language, the more intent the AI has to invent.
how AI actually reads your code (a mental model)
Before talking about examples or tests, it helps to have a rough mental model of how an LLM approaches a codebase.
Think of it as a three-layer input pipeline:
```
Natural Language (README, comments, test names)
                 ↓
Explicit Rules (assertions, invariants, examples)
                 ↓
Executable Code (functions, types, control flow)
```

Humans usually read this stack bottom-up; AI reads it top-down.
When clear natural language and explicit rules are available, the model uses them as constraints and treats the code as an implementation detail. When they are not, the model is forced to infer intent solely from structure. That inversion matters:
- Code describes how something happens.
- Language describes what must always be true.
- The less information exists in the upper layers, the more the AI has to guess by pattern matching.
what AI sees when documentation is missing
Here’s a typical implementation:
```typescript
export function calculateTotal(
  items: { price: number; quantity: number }[],
  vat: number,
  discount?: number
): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  const withVat = subtotal + subtotal * vat;
  if (discount) {
    return withVat - discount;
  }
  return withVat;
}
```

To a developer, this looks “obvious”; an AI assistant sees ambiguity everywhere:
- Is vat represented as a decimal (0.21) or as a percentage (21)?
- Is the discount a fixed value or a percentage?
- Can totals go negative?
- Which edge cases must be handled?
When you ask AI to “Write unit tests,” it will invent answers to those questions.
Not because it’s careless, but because you did not give it constraints.
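For instance, faced with the undocumented version above, an assistant has to answer the negative-total question for itself. Here is a purely illustrative sketch of the kind of test it might produce (not output from any specific tool):

```typescript
// Hypothetical AI-generated test: with no stated rule about negative totals,
// the assistant infers from the implementation that they are allowed,
// and locks that unintended behavior in.
it("applies the discount to the total", () => {
  const result = calculateTotal([{ price: 10, quantity: 1 }], 0.5, 50);
  expect(result).toBe(-35); // 10 * 1.5 = 15, minus 50
});
```

The test passes, but it documents an accident rather than a business rule.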
code comments for AI: explain rules, not syntax
AI doesn’t need help understanding what the code does; it needs help understanding why it does it this way.
Here’s the same function, rewritten with intent-first documentation:
```typescript
/**
 * Calculates the final total of an invoice.
 *
 * Business rules:
 * - VAT is expressed as a decimal (e.g. 0.19 = 19%)
 * - Discount is a fixed monetary value, NOT a percentage
 * - Discount is applied AFTER VAT
 * - Final total must never be negative
 *
 * @param items List of invoice items
 * @param vat VAT rate as a decimal
 * @param discount Optional fixed discount applied after VAT
 */
export function calculateTotal(
  items: { price: number; quantity: number }[],
  vat: number,
  discount: number = 0
): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  const withVat = subtotal * (1 + vat);
  const total = withVat - discount;
  return Math.max(total, 0);
}
```

This changes everything:
- AI now has rules
- Assumptions are eliminated
- Generated tests become aligned with business intent
- AI will suggest code changes that follow the new documentation (the VAT calculation and the clamped return value)
unit tests are executable documentation
In traditional projects, unit tests are primarily treated as a safety mechanism. They catch regressions, validate edge cases, and give developers confidence to refactor.
In an AI-assisted workflow, unit tests take on a second role: they become the most authoritative form of documentation in the codebase. This is because unit tests don’t just describe behavior, they enforce it.
Comments, types, and function signatures all provide signals to an AI assistant, but they vary in strength.
- Comments express intent, but can be outdated or ignored
- Types define shape, but rarely define business meaning (see the sketch after this list)
- Code shows what happens, but not what must remain invariant
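To make the point about types concrete, here is a minimal sketch (the `VatRate` alias and the second call are illustrative, not taken from the original example):

```typescript
// A type can say "this parameter is a number", but not what the number means.
type VatRate = number; // decimal (0.19) or percentage (19)? The type cannot tell you.

// Signature of the calculateTotal function shown earlier.
declare function calculateTotal(
  items: { price: number; quantity: number }[],
  vat: VatRate,
  discount?: number
): number;

// Both calls type-check; only the first matches the intended business rule.
calculateTotal([{ price: 100, quantity: 1 }], 0.19); // 19% VAT as a decimal
calculateTotal([{ price: 100, quantity: 1 }], 19);   // compiles, produces a wildly wrong total
```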
Unit tests, on the other hand, encode intent as something that must continue to pass.
From an AI’s perspective, the goal is almost always the same: “Make a change without breaking existing behavior.” Tests define what “breaking” means.
A comment might say:
```typescript
// Discount is applied after VAT
```

But unless that rule is enforced somewhere, AI cannot know whether this rule:
- is critical
- is historical
- is already violated elsewhere
A unit test removes that ambiguity:

```typescript
it("applies discount after VAT", () => {
  const result = calculateTotal(
    [{ price: 100, quantity: 1 }],
    0.19,
    10
  );
  expect(result).toBe(109);
});
```
This test does not explain the rule; it locks it in. When an AI agent is asked to refactor or optimize the implementation, preserving this outcome becomes non-negotiable.
Test names matter more in AI-assisted projects than in traditional ones, because an AI agent reads them as documentation:
Compare

```typescript
it("returns correct value", () => { ... });
```

with

```typescript
it("never returns a negative total even if discount exceeds subtotal", () => { ... });
```

The second version:
- documents intent
- encodes an edge case
- provides language-level guidance
- acts as an executable constraint
For AI systems, this combination is extremely powerful. The test name provides semantic context, the assertion provides enforcement.
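Putting the two together, here is a sketch of what that could look like for the negative-total rule (values chosen only for illustration):

```typescript
// The name documents the edge case; the assertion enforces it.
it("never returns a negative total even if discount exceeds subtotal", () => {
  // 10 * 1.19 = 11.9, so a discount of 50 would push the total below zero
  const result = calculateTotal([{ price: 10, quantity: 1 }], 0.19, 50);
  expect(result).toBe(0);
});
```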
README files are now contracts
Historically, README files served as a lightweight introduction to a project, explaining what the project does, how to run it, and sometimes how to contribute.
In an AI-assisted workflow, that role changes. When AI reads a README, it treats it as a high-level specification. That makes the README less of a welcome message and more of a contract.
What the README signals to AI:
- Where the business logic lives
- Which files are authoritative
- What rules must not be violated
- What is considered out of scope
- What patterns to follow
This is why vague READMEs lead to surprisingly confident but incorrect changes.
The key shift is not about writing more documentation; it’s about writing documentation that removes ambiguity.
Example of an AI-aware README section:
```markdown
## AI Agent Guidelines

- This is a Node.js backend written in TypeScript
- Core business logic lives in `src/billing`
- Unit tests are the source of truth for business rules
- Do not change business logic without updating tests
- VAT is always represented as a decimal (0.19 = 19%)
- Discounts are fixed monetary values, never percentages
```
dedicated instructions for AI agents
When using AI agents, treat them like new team members: give them onboarding. You can do this by creating a file specifically for their instructions.
`AI_INSTRUCTIONS.md`:

```markdown
You are an AI coding assistant working on this project.

Rules:

- Prefer clarity over cleverness
- Never infer business rules
- Rely on unit tests and code comments as the source of truth
- When behavior is unclear, ask instead of guessing
- Always add or update unit tests when changing logic
```

Why a separate file?
A README defines the project, code comments explain local intent, and unit tests enforce behavior. AI agent instructions serve a different purpose: they define how AI should operate within the system.
Without explicit guidance, AI agents optimize for general best practices pulled from training data. That often clashes with project-specific priorities.
This is not the place to restate business logic or API documentation; it’s a place to define behavioral constraints for the agent.
conclusion
AI forces us to be explicit, and that’s a good thing. AI doesn’t remove the need for documentation; it exposes how fragile undocumented assumptions really are.
The benefits of this shift are not limited to AI tools. A codebase that is explicit:
- Is easier to onboard into
- Is safer to refactor
- Is more resilient to team changes
- Degrades more gracefully over time
This doesn’t mean documentation must be exhaustive, or that every rule should be written down. Explicitness is about clarity where ambiguity would be costly.
AI simply accelerates the feedback loop, and it makes unclear assumptions fail faster.
