The coding-agent conversation is dominated by model capability. Which model writes better code. Which CLI is faster. Real questions, wrong bottleneck.
The bottleneck is context, and it splits into three layers your developers take for granted.
Structural context is the map. Does the agent know what your repo looks like — not just the top-level directory, but the submodules, shared libraries, type definitions three levels deep? A well-maintained agents.md is worth more than a model upgrade. So is a repo-level config telling the agent which submodules to clone for which kinds of tasks.
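A minimal sketch of what such a file might contain. Every path, package name, and command below is hypothetical, stand-ins for whatever your repo actually has:

```markdown
# AGENTS.md (sketch — adapt paths and commands to your repo)

## Layout
- `apps/web` — customer-facing app
- `packages/shared-types` — type definitions consumed by every service
- `vendor/billing-core` — git submodule; needed for payment tasks only

## Setup by task
- Billing work: `git submodule update --init vendor/billing-core` first
- Any code change: run `npm run typecheck` and `npm test` before proposing a diff
```

The point is not the exact format. It is that the map exists in the repo, versioned alongside the code it describes, instead of living in one senior engineer's head.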
Navigational context is the resolver. When a developer encounters an unfamiliar type, they jump to definition. Agents, by default, use grep. Grep finds strings. LSP finds meaning. The difference between an agent that greps for "paymentId" and one that resolves the actual type definition is the difference between an intern who reads the code and an intern who writes fan fiction about the code.
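The failure mode is concrete. In the sketch below (illustrative names, not from any real codebase), two unrelated modules both declare a field called `paymentId` with different types. Grep returns both matches with equal confidence; an LSP go-to-definition resolves the one under the cursor to its actual declaration, and the type checker catches the confusion grep invites:

```typescript
// In billing code, paymentId is the provider's opaque string.
interface Invoice {
  paymentId: string;
}

// In analytics code, paymentId is an internal numeric row id.
// Grep for "paymentId" cannot tell these two fields apart.
interface PaymentEvent {
  paymentId: number;
}

// An agent that "matched" the wrong declaration would compare string
// to number. The compiler — the same engine behind the LSP — rejects
// the direct comparison; the explicit conversion makes intent visible.
function sameBasePayment(invoice: Invoice, event: PaymentEvent): boolean {
  return invoice.paymentId === String(event.paymentId);
}
```

This is why "get LSP into the container" is an environment fix, not a model fix: the same agent, given semantic resolution instead of string search, stops writing fan fiction.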
Operational context is the running system. Your developers run the app while they code. They see TypeScript errors in real time. They watch tests fail and fix them before committing. An agent in a bare container with no app server, no watcher, and no test runner is coding blind — writing code it has never executed. Then it submits the PR and you become the reviewer of work that should have failed at the agent's machine.
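One way to close that loop is to give the agent's container the same long-running processes a developer keeps open. A hypothetical devcontainer-style fragment, assuming an npm-based TypeScript project (the commands are stand-ins for your own scripts):

```json
{
  "postCreateCommand": "npm ci && git submodule update --init",
  "postStartCommand": "npm run dev & npx tsc --noEmit --watch"
}
```

With the app serving and the type checker watching, the agent sees the same red squiggles and failing tests your developers see, before the PR ever reaches you.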
None of these are model problems. All of them are environment problems — software engineering applied to the agent's own development setup. Unsexy work. Load-bearing work.
Pick one to fix this week. Add an agents.md. Or get LSP into the container. Or run the app. Each one moves the needle more than next quarter's model release.
Related Essays
Context Engineering Is the Hard Problem
Models keep getting better, but agents without deep codebase and organizational context are just expensive autocomplete. Context engineering is the bottleneck nobody has productized.
The Submodule Problem Is the Whole Problem in Miniature
Submodules are a specific pain point, but they illustrate a universal truth. Enterprise codebases are not simple, and agents that cannot handle them cannot handle enterprise software.
The Agent Made a New Type Instead of Finding the Real One
A scene from every engineering org operationalizing agents. The task was trivial. The PR was wrong in a way no human on the team would ever get wrong. It is not a model problem.
Key takeaways
- Structural context is a map of the codebase — submodules, shared libs, type locations. An agents.md is worth more than a model upgrade.
- Navigational context is the difference between grep and LSP. Grep finds strings. LSP finds meaning.
- Operational context is a running app — TypeScript errors, test failures, watcher output. An agent without it is coding blind.
FAQ
What is the difference between navigational and structural context?
Structural context tells the agent what the codebase looks like — directories, submodules, shared types. Navigational context lets it move through that structure with semantic precision, the way an LSP-equipped developer jumps to definition rather than greps for a string.
Why does operational context matter so much?
Without a running app, watcher, and test runner, the agent submits code it has never executed. You become the test runner. The feedback loop that catches obvious errors at the developer's machine is the same one that should catch them at the agent's machine.