The conversation around AI-generated code has become predictable. Someone posts about shipping faster with AI. The replies pile up: what about security, what about debt, what about quality. Same thread, every week.
The concerns are legitimate. The framing is wrong. These are not arguments against AI. They are arguments against using AI on offense only.
Right now, most developers deploying AI are pointing it almost exclusively at building. Write this feature. Generate this component. Scaffold this API. Ship, ship, ship. The output volume goes up. The defensive capacity around it stays flat. The asymmetry is what creates the problem — not the AI itself.
And yes, AI-generated code can introduce vulnerabilities. It can accumulate debt. It can pick the wrong abstraction. So can humans. We just do it slower, which is the only reason we have not panicked about it for the last forty years. We normalized human imperfection. AI imperfection still gets treated as disqualifying.
The mistake is not that we are letting AI write code. The mistake is that we are not letting AI defend against the problems that come with writing code faster. I've argued elsewhere that agentic defense is the missing half of the equation — automated review, exception-to-fix pipelines, continuous debt reduction. None of that is exotic. It is the obvious counterweight nobody is shipping yet.
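To make "exception-to-fix pipelines" concrete, here is a minimal sketch of one intake stage: capture unhandled exceptions as structured tasks that an automated fix agent could later pick up. `FixTask`, `FixQueue`, and `capture_for_fixing` are illustrative names invented for this sketch, not any real tool's API.

```python
import sys
import traceback
from dataclasses import dataclass, field


@dataclass
class FixTask:
    """A structured work item a downstream fix agent could consume."""
    exc_type: str
    message: str
    frames: list = field(default_factory=list)


class FixQueue:
    """In-memory stand-in for whatever queue a real fix agent would poll."""
    def __init__(self):
        self.tasks = []

    def enqueue(self, task: FixTask):
        self.tasks.append(task)


def capture_for_fixing(queue: FixQueue):
    """Return an excepthook that turns unhandled exceptions into fix tasks."""
    def hook(exc_type, exc, tb):
        # Record enough context (type, message, stack frames) for triage.
        frames = [f"{f.filename}:{f.lineno} in {f.name}"
                  for f in traceback.extract_tb(tb)]
        queue.enqueue(FixTask(exc_type.__name__, str(exc), frames))
        # Still surface the error to humans; the pipeline adds, not replaces.
        sys.__excepthook__(exc_type, exc, tb)
    return hook
```

The point of the sketch is the shape, not the plumbing: every failure becomes a unit of defensive work instead of a log line, which is what lets generation and repair scale together.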
The teams that figure this out first will not be the teams that abandon AI. They will be the teams that stop deploying it on one side of the field. The critics will eventually notice the score has changed.
— Ry
Related Essays
Agentic Defense: The Missing Half of the Equation
For every unit of AI firepower aimed at building, deploy equal or greater firepower at securing, debugging, and testing. Here is what that looks like.
Put AI on Defense, Not Just Offense
Most developers use AI only to write code. The real opportunity is using AI to secure, debug, and test — deploying equal firepower on defense.
Where AI Defense Is Headed
Two-thirds of your AI firepower belongs on debt, security, and testing. The teams building defensive infrastructure now will outpace teams that either reject AI or deploy it recklessly.
Key takeaways
- Most AI usage is offense-only — write features, generate components, ship faster.
- The risk is not AI writing code. It is AI writing code with no defensive counterweight.
- Critics keep arguing against AI when they should be arguing for symmetric deployment.
FAQ
What is the offense-only problem?
Teams point AI almost entirely at building — features, components, APIs — and run zero AI on the defensive side. The output volume goes up while the review and security capacity stays flat.
Are the security critics wrong?
Their concerns are real. Their conclusion is wrong. The fix is not less AI. The fix is AI on both sides of the ledger so review keeps up with generation.