The mental model that has been working for me is simple. For every unit of AI firepower aimed at building, deploy equal or greater firepower at securing, debugging, and testing. Two-thirds defense, one-third offense if you can swing it. Most teams are doing the inverse.
Here is what defense actually looks like.
Automated vulnerability scanning. Point AI at the codebase continuously, not as a quarterly audit. Most legacy vulnerabilities live in human-written code anyway. By the end of 2026, autonomous vulnerability detection is going to outperform the human security reviews of 2024 — not because AI is brilliant, but because it never sleeps and never skips a file.
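The essay doesn't prescribe an implementation, but the continuous-scan idea can be sketched in a few lines: walk the whole repo on every run and hand each file to a review call. Here `ai_review` is a hypothetical stand-in for your model provider's API; the toy heuristic inside it just makes the sketch runnable.

```python
from pathlib import Path

def ai_review(path: str, source: str) -> list[str]:
    # Hypothetical stand-in for an LLM security-review call.
    # Replace the toy heuristic below with your provider's client.
    findings = []
    if "eval(" in source:
        findings.append(f"{path}: possible code injection via eval()")
    return findings

def scan_repo(root: str, extensions=(".py",)) -> list[str]:
    """Review every file on every run -- continuous, not quarterly.
    The agent never sleeps and never skips a file."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in extensions:
            findings.extend(ai_review(str(path), path.read_text(errors="ignore")))
    return findings
```

Drop this in a nightly CI job and the "quarterly audit" becomes a daily report.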
Continuous debt reduction. Stand up a background agent whose job is to file one PR a day targeting whatever ill is creeping in. Type errors. Dead code. Drifted tests. The compounding problem with debt is that nobody is paid to chip at it. Give an agent the assignment and watch the slope flip.
Exception-to-fix pipelines. Wire your Sentry stream into an agent. When something breaks in production, the agent picks it up, diagnoses, writes the patch, and opens the PR. You review and merge. The feedback loop tightens from days to hours — this is what agents in production actually look like once the novelty wears off.
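The wiring itself is thin: a webhook handler that turns an exception event into a PR draft for human review. The payload shape below is an assumption (a simplified Sentry-style issue alert), and `diagnose_and_patch` is a placeholder for your coding agent.

```python
def diagnose_and_patch(title: str, culprit: str) -> str:
    # Placeholder for the coding agent: diagnose the exception,
    # write the patch, and return a PR body describing the fix.
    return f"Proposed patch for '{title}' in {culprit} (agent output goes here)"

def handle_exception_event(payload: dict) -> dict:
    """Turn a production exception into a reviewable PR draft.
    Assumes a simplified Sentry-style payload: data.issue.{id,title,culprit}."""
    issue = payload["data"]["issue"]
    return {
        "branch": f"fix/{issue['id']}",
        "title": f"Fix: {issue['title']}",
        "body": diagnose_and_patch(issue["title"], issue["culprit"]),
        # You review and merge; the agent only opens the PR.
    }
```

The human stays in the loop at the merge, not at the diagnosis, which is where the days-to-hours compression comes from.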
Test generation with permission to execute. Do not just let agents write tests. Let them run the tests, watch them fail, and iterate until they pass. The leverage is in the loop, not the artifact.
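The run-and-iterate loop can be sketched directly: the agent drafts a test file, executes it, and gets the failure output back as context for the next draft. `write_test` is a placeholder for the agent; everything else is the loop the paragraph describes.

```python
import subprocess
import sys
import tempfile

def run_until_green(write_test, max_attempts: int = 3):
    """Let the agent execute its own tests and iterate on failures.
    `write_test` stands in for the agent: it takes the previous failure
    output (None on the first attempt) and returns test-file source."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        source = write_test(feedback)
        with tempfile.NamedTemporaryFile("w", suffix="_test.py", delete=False) as f:
            f.write(source)
        result = subprocess.run([sys.executable, f.name],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return attempt  # green: the agent caught its own mistake
        feedback = result.stderr or result.stdout  # failures feed the next draft
    return None  # still red after max_attempts; escalate to a human
```

The artifact is the same either way; the leverage is that failures flow back into the next attempt without a human in between.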
I've argued elsewhere that the offense-only problem is what makes the security critics sound right even when their conclusion is wrong. Defense is how you make their concerns evaporate. The infrastructure is sitting there. Most of it is a weekend project. The teams that build it now will look obvious in eighteen months and irreplaceable in thirty-six.
— Ry
Related Essays
The Offense-Only Problem with AI Coding
Most developers deploy AI almost exclusively to ship features faster. The asymmetry is the problem — not the AI itself.
Let the Agents Fight Each Other
If a coding agent introduces a bug and a testing agent catches it and a debugging agent fixes it, that is a win. The developer's time was preserved.
Where AI Defense Is Headed
Two-thirds of your AI firepower belongs on debt, security, and testing. The teams building defensive infrastructure now will outpace teams that either reject AI or deploy it recklessly.
Key takeaways
- Match every unit of offensive AI firepower with equal defensive firepower.
- Background agents can scan for vulnerabilities, file daily debt PRs, and convert exceptions into fixes.
- Give test agents permission to execute and iterate — not just write — so they catch their own mistakes.
FAQ
What does agentic defense look like in practice?
Continuous vulnerability scans, daily PRs targeting tech debt, exception-to-fix pipelines that turn Sentry alerts into pull requests, and test agents with permission to run and iterate until tests pass.
Why is exception-to-fix the highest-leverage pattern?
Production breakage is the cheapest signal you have for what to fix next. Wiring exceptions directly into an agent that diagnoses, patches, and opens a PR turns days-long feedback loops into hours.