The conversation around AI-generated code has become predictable. Someone posts about shipping faster with AI, and the replies flood in: "But what about security?" "What about technical debt?" "What about code quality?"
These are legitimate concerns. But here's the thing—they're not arguments against AI. They're arguments for using AI differently.
The Offense-Only Problem
Right now, most developers using AI are deploying it almost exclusively on offense. Write this feature. Build this component. Generate this API. Ship, ship, ship.
And yes, that creates risk. AI-generated code can introduce vulnerabilities. It can accumulate debt. It can make questionable architectural choices. But so can humans—we just do it slower.
The mistake isn't using AI to write code. The mistake is not using AI to defend against the problems that come with writing code faster.
Agentic Defense: The Missing Half of the Equation
Here's the mental model shift that's been working for me: for every unit of AI firepower you put on building, put equal or greater firepower on securing, debugging, and testing.
What does that look like in practice?
Automated vulnerability scanning. AI can scan for and root out security issues, especially in human-generated code (which, let's be honest, is where most of the legacy vulnerabilities live anyway). By the end of 2026, it'll be obvious that autonomous vulnerability detection and mitigation outperforms the human security reviews of 2024.
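For concreteness, here's a minimal sketch of a scanning pass, assuming the Anthropic Python SDK and an ANTHROPIC_API_KEY in the environment. The model id and prompt wording are illustrative, not a specific product's API.

```python
# Minimal sketch: pipe a git diff through Claude for a security pass.
# Assumes the Anthropic Python SDK; the model id and prompt are
# illustrative choices, not prescriptions.
import subprocess
import anthropic

def security_scan(base: str = "main") -> str:
    diff = subprocess.run(
        ["git", "diff", base, "--", "."],
        capture_output=True, text=True, check=True,
    ).stdout
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    msg = client.messages.create(
        model="claude-opus-4-5",  # assumed model id for Opus 4.5
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": "Review this diff for injection, authz, and "
                       "secret-handling issues. Report file, line, "
                       f"severity, and a suggested fix:\n\n{diff}",
        }],
    )
    return msg.content[0].text

if __name__ == "__main__":
    print(security_scan())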
Continuous debt reduction. Put background agents in place to push back against whatever problems keep recurring in your codebase. You can have AI open a PR every day specifically targeting technical debt, fighting the creep before it becomes a crisis.
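A background debt agent can be as simple as a cron job. Here's a sketch using git and the GitHub CLI; propose_debt_fix is a hypothetical hook for whatever coding agent you use, and the branch naming and PR copy are just examples.

```python
# Sketch of a daily debt-reduction job (run it from cron or CI).
import datetime
import subprocess

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

def propose_debt_fix() -> str:
    """Hypothetical agent hook: edit files in the working tree and
    return a one-line summary of what was cleaned up."""
    raise NotImplementedError("wire up your coding agent here")

def daily_debt_pr() -> None:
    branch = f"debt/{datetime.date.today().isoformat()}"
    sh("git", "checkout", "-b", branch)
    summary = propose_debt_fix()          # agent edits the tree
    sh("git", "commit", "-am", f"chore: {summary}")
    sh("git", "push", "-u", "origin", branch)
    sh("gh", "pr", "create",              # GitHub CLI
       "--title", f"Daily debt reduction: {summary}",
       "--body", "Automated PR from the background debt agent. "
                 "Review before merging.")

if __name__ == "__main__":
    daily_debt_pr()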
Exception-to-fix pipelines. Automate your Sentry exceptions into PRs. When something breaks in production, an agent picks it up, diagnoses it, writes the fix, and opens a pull request. You review and merge. The feedback loop tightens from days to hours.
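The receiving end can be a small webhook service. This sketch assumes Flask and a Sentry issue-alert webhook; the payload fields and open_fix_pr are assumptions to check against your actual integration.

```python
# Sketch of an exception-to-fix webhook, assuming Flask.
from flask import Flask, request

app = Flask(__name__)

def open_fix_pr(title: str, stacktrace: str) -> None:
    """Hypothetical: hand the failure to a debugging agent that
    reproduces it, writes a fix, and opens a PR for human review."""
    raise NotImplementedError

@app.post("/sentry-webhook")
def on_exception():
    payload = request.get_json(force=True)
    event = payload.get("data", {}).get("event", {})  # assumed shape
    title = event.get("title", "unknown exception")
    stack = str(event.get("exception", ""))
    open_fix_pr(title, stack)
    return {"status": "queued"}, 202

if __name__ == "__main__":
    app.run(port=8080)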
Test generation with permission to execute. Give AI permission to not just write tests, but run them. Let it iterate until they pass. Let it catch its own mistakes before you ever see them.
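Here's what that permission looks like as a loop, in sketch form. ask_model is a stand-in for whatever LLM call you prefer (see the scanning sketch above), and the three-attempt cap is an arbitrary choice.

```python
# Sketch of a write-run-iterate test loop.
import pathlib
import subprocess

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call returning raw pytest source code."""
    raise NotImplementedError

def write_passing_tests(module: str, attempts: int = 3) -> bool:
    source = pathlib.Path(module).read_text()
    feedback = ""
    for _ in range(attempts):
        tests = ask_model(
            f"Write pytest tests for this module:\n{source}\n{feedback}"
        )
        pathlib.Path("test_generated.py").write_text(tests)
        run = subprocess.run(
            ["pytest", "test_generated.py", "-q"],
            capture_output=True, text=True,
        )
        if run.returncode == 0:
            return True                 # tests pass; ready for review
        feedback = f"These tests failed, fix them:\n{run.stdout}"
    return False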
"But AI Code Is Slop"
I've heard this criticism a lot. And I get it—there's plenty of bad AI-generated code out there. But I'd push back on the framing.
AI-generated code doesn't have to equal slop. Humans over-engineer. Humans make mistakes. Humans choose the wrong abstractions. Nobody's perfect. The difference is we've normalized human imperfection while treating AI imperfection as disqualifying.
Personally, I haven't seen LLMs choose worse variable names than humans do. I've seen plenty of human code with temp, data, handler2, and processStuff. The bar isn't as high as we pretend.
And here's the harder stance I'll take: I describe whole features to Opus 4.5 and it writes code that works reliably. For software under construction, that's enough to check the box. Ship it, get users, then let AI refactor when necessary. Premature optimization is still premature optimization, whether a human or an AI is doing it.
Let the Agents Fight Each Other
There's a philosophical objection some people raise: won't this just create an endless loop of AI creating problems and AI fixing them?
Maybe. But here's my response: let the agents fight each other rather than the developers.
If an AI coding agent introduces a subtle bug, and an AI testing agent catches it, and an AI debugging agent fixes it—that's a win. The code got better. The developer's time was preserved for higher-level decisions.
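In sketch form, that whole exchange is just agents taking turns until the diff survives. All three agent functions here are placeholders for real agents; the point is that the adversarial loop runs before a human ever looks at the code.

```python
# Toy version of "let the agents fight each other."
def build(spec: str) -> str:
    raise NotImplementedError("your coding agent")

def attack(diff: str) -> list[str]:
    raise NotImplementedError("your testing agent; returns failing cases")

def repair(diff: str, failures: list[str]) -> str:
    raise NotImplementedError("your debugging agent")

def agent_vs_agent(spec: str, rounds: int = 5) -> str:
    diff = build(spec)
    for _ in range(rounds):
        failures = attack(diff)
        if not failures:
            return diff        # survived: send to human review
        diff = repair(diff, failures)
    raise RuntimeError("agents could not converge; escalate to a human")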
The Security Question, Revisited
"Is the only answer to security 'humans moving slowly?'"
That's the real question underneath all the skepticism. And I think the honest answer is no. Speed isn't the enemy of security. Lack of automated defense is the enemy of security.
The companies that figure this out first—deploying agentic AI on both sides of the equation—will ship faster AND be more secure than companies clinging to human-only review processes. Not because AI is perfect, but because AI can operate continuously, at scale, without getting tired or distracted or taking PTO.
Where This Is Headed
We're early. I'll be the first to admit that. The tooling is nascent. The patterns are still emerging. But the direction is clear.
The teams that treat AI as a full-spectrum capability—offense AND defense—will outpace the teams that either reject AI entirely or deploy it recklessly without safeguards.
Put two-thirds of your AI firepower on debt reduction, security, and testing. Give agents permission to write, run, and iterate on tests. Automate the exception-to-fix pipeline. Build the defensive infrastructure now.
Because the alternative—humans moving slowly as the only answer to quality—isn't going to scale. And deep down, everyone knows it.
If you're interested in trying this approach, Tembo is free for light use. Would love to see what defensive agents you build.
— Ry