AI can explain what code does — but not why it does it. This post explores how documentation is evolving in the age of AI, and why writing down human intent is becoming one of the most practical forms of AI alignment.
Every developer has asked it, even if only half-jokingly: "If AI can understand the code, do we still need to write documentation?"
We used to think documentation was a chore, something we wrote for other people. But AI has flipped that logic. The paradox is this:
The better AI gets at reading code, the more it depends on clear documentation.
And yet, AI also makes writing and maintaining docs easier than ever.
So where should we draw the line? What should humans document — and what should AI handle?
When We Stopped Writing Docs
Let's be honest: documentation was the first thing to rot in most codebases. It's easy to justify:
"The code explains itself."
"We'll add docs later."
"Nobody reads them anyway."
The result? Every team has a README that's 80% wrong and a wiki that hasn't been updated since the last major version.
Then AI coding assistants arrived: Cursor, Copilot, Claude Code. Suddenly, we had help that could read code, generate explanations, and even propose updates. For a brief moment, it felt like we could finally ignore documentation.
But that moment didn't last.
The Experiment That Failed
When we first asked an AI assistant to "explain" a module in our system, the result looked plausible, until we realized it had completely misunderstood the purpose of a few functions.
It described what the code did, but not why.
It documented behavior, not intent.
That's when it clicked: AI isn't a wizard that infers motivation from syntax. It's a pattern matcher. And without written context — the human story behind the code — it guesses.
And guessing is not understanding.
The Dilemma: Human vs. Machine Ownership
So, here's the question: should documentation remain a human responsibility, or can it be delegated to AI?
There's no simple answer — it's a trade-off between trust and efficiency.
| Ownership | Pros | Cons |
|---|---|---|
| Humans write everything | Clear intent, human tone | Time-consuming, often outdated |
| AI writes everything | Fast, consistent | Shallow understanding, risk of hallucination |
| Hybrid approach | Balanced, scalable | Requires discipline and review |
The middle ground works best: let AI handle the what, and humans clarify the why.
How AI Actually Helps Documentation
AI isn't replacing documentation: it's making it maintainable.
Updating 200 comments used to take hours. Now it's a single prompt.
AI can:
Rewrite stale comments based on code diffs.
Summarize large PRs into readable changelogs.
Generate or update architecture diagrams automatically.
Flag inconsistencies between docstrings and behavior.
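That last item is straightforward to automate yourself. Here's a minimal sketch of a drift checker, under a deliberately crude assumption: a parameter counts as "documented" if its name appears anywhere in the docstring. The function name `docstring_param_drift` and the sample code are invented for illustration.

```python
import ast
import textwrap

def docstring_param_drift(source: str) -> dict[str, set[str]]:
    """Report parameters missing from each function's docstring.

    Returns a mapping of function name -> set of undocumented
    parameter names. Heuristic: a parameter is "documented" if its
    name appears anywhere in the docstring text.
    """
    tree = ast.parse(textwrap.dedent(source))
    drift = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node) or ""
            params = {a.arg for a in node.args.args if a.arg != "self"}
            missing = {p for p in params if p not in doc}
            if missing:
                drift[node.name] = missing
    return drift

example = '''
def retry(fn, attempts, delay):
    """Call fn up to `attempts` times."""
    ...
'''

print(docstring_param_drift(example))  # {'retry': {'delay'}}
```

A check like this won't catch a docstring that mentions a parameter but describes it wrongly; that's exactly the part you hand to an AI assistant (or a reviewer) once the cheap mismatches are flagged.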
The key is control. You don't ask AI to write intent from scratch: you feed it your current truth and let it keep things consistent.
A Concrete Example
Here's what this looks like in practice.
Before:
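A hypothetical stand-in (the function and its docstring are invented for this example): a docstring written for an early, single-user implementation.

```python
def sync_user(user_id):
    """Fetch the user from the API and update the local cache.

    Raises KeyError if the user does not exist upstream.
    """
    ...
```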
After (refactor + AI doc update):
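Continuing the hypothetical example: the function was refactored to work in batches, and the assistant's proposed docstring describes the new behavior, including the fallback path a human then verified.

```python
def sync_users(user_ids):
    """Fetch the given users in one batched API call and update the local cache.

    Falls back to per-user requests if the batch endpoint is unavailable,
    so one bad record cannot block the whole sync. Raises KeyError only
    after the fallback also fails for a user.
    """
    ...
```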
The AI didn't just fix grammar — it aligned the docstring with the new behavior. The developer still reviewed and approved it, but 90% of the effort disappeared.
That's the model: humans define truth, AI enforces consistency.
When AI Documentation Goes Wrong
AI is powerful — but context-blind. It doesn't know your deadlines, trade-offs, or politics. It can sometimes produce documentation that looks fine syntactically but doesn't fully reflect what the team meant.
The risk isn't catastrophic errors; it's subtle drift — where AI mirrors what's in the code but misses the reasoning behind it. That's why review and intent still matter.
The lesson? You can't outsource intent.

AI Alignment and Documentation
The discussion around AI alignment often feels distant — about controlling superintelligent systems or preventing catastrophic failure. But in practice, every developer already works on a small piece of the alignment problem.
When we write documentation, we're aligning an AI assistant's reasoning with our human intent. Each docstring, comment, or README acts as a micro-alignment step — a way to teach the model why something exists, not just how it behaves.
In that sense, documentation is one of the most concrete tools for alignment we have. It encodes human goals into something AI can parse, learn from, and act upon safely. Alignment isn't a research problem here — it's a daily engineering habit.
Our Rule of Thumb
At Mergify, we now treat documentation as shared context between humans and AI.
Humans describe intent, design choices, and the trade-offs involved.
AI keeps facts, code references, and examples synchronized.
Every doc is a collaboration between reasoning and automation.
When both are in sync, we achieve a simple but powerful form of alignment: AI that truly represents what we mean, not just what we wrote.
The Takeaway
AI didn't make documentation obsolete — it made it critical. Because now, it's not just for humans anymore.
Documentation has become a form of interface between our brains, our teammates, and the AI systems that read, reason, and refactor our code. In a way, writing documentation is the most practical form of AI alignment: teaching machines how to interpret, preserve, and act on human intent.
If you skip it, both humans and machines will fill in the blanks.
And as every engineer knows: when something tries to "fill in the blanks" for you, it rarely gets it right.