2 min read · By Ry Walker

The Offense-Only Problem with AI Coding


The conversation around AI-generated code has become predictable. Someone posts about shipping faster with AI. The replies pile up: what about security, what about debt, what about quality. Same thread, every week.

The concerns are legitimate. The framing is wrong. These are not arguments against AI. They are arguments against using AI on offense only.

Right now, most developers deploying AI are pointing it almost exclusively at building. Write this feature. Generate this component. Scaffold this API. Ship, ship, ship. The output volume goes up. The defensive capacity around it stays flat. The asymmetry is what creates the problem — not the AI itself.

And yes, AI-generated code can introduce vulnerabilities. It can accumulate debt. It can pick the wrong abstraction. So can humans. We just do it slower, which is the only reason we have not panicked about it for the last forty years. We normalized human imperfection. AI imperfection still gets treated as disqualifying.

The mistake is not that we are letting AI write code. The mistake is that we are not letting AI defend against the problems that come with writing code faster. I've argued elsewhere that agentic defense is the missing half of the equation — automated review, exception-to-fix pipelines, continuous debt reduction. None of that is exotic. It is the obvious counterweight nobody is shipping yet.
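To make "exception-to-fix pipeline" concrete, here is a minimal sketch of the first half of one: a decorator that catches unhandled exceptions and routes them, with their tracebacks, into a queue that an automated fix agent could drain. The queue and the agent are assumptions for illustration; in practice the queue would be a ticketing system or an agent's work inbox, not an in-memory list.

```python
import traceback
from functools import wraps

# Stand-in for a real work queue an AI fix agent would consume.
fix_queue: list[dict] = []

def exception_to_fix(func):
    """Re-raise unhandled exceptions, but first record enough
    context (function name, error, traceback) for an automated
    agent to attempt a fix later."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            fix_queue.append({
                "function": func.__name__,
                "error": repr(exc),
                "trace": traceback.format_exc(),
            })
            raise  # the caller still sees the failure
    return wrapper

@exception_to_fix
def parse_price(raw: str) -> float:
    return float(raw)
```

The point of the sketch is the shape, not the plumbing: generation stays fast, and every failure automatically becomes structured input for a defensive loop instead of a manual triage task.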

The teams that figure this out first will not be the teams that abandon AI. They will be the teams that stop deploying it on one side of the field. The critics will eventually notice the score has changed.

— Ry

Key takeaways

  • Most AI usage is offense-only — write features, generate components, ship faster.
  • The risk is not AI writing code. It is AI writing code with no defensive counterweight.
  • Critics keep arguing against AI when they should be arguing for symmetric deployment.

FAQ

What is the offense-only problem?

Teams point AI almost entirely at building — features, components, APIs — and run zero AI on the defensive side. The output volume goes up while the review and security capacity stays flat.

Are the security critics wrong?

Their concerns are real. Their conclusion is wrong. The fix is not less AI. The fix is AI on both sides of the ledger so review keeps up with generation.