2 min read · By Ry Walker

AI Code Is Not Slop

I hear the slop critique a lot. Yes, plenty of bad AI-generated code is being shipped. The framing still does not hold up.

AI-generated code does not have to equal slop. Humans over-engineer. Humans make mistakes. Humans pick the wrong abstractions on a regular Tuesday. Nobody is perfect. The difference is that we normalized human imperfection over forty years of professional software and treat AI imperfection as a disqualifier from the first commit.

I have not personally seen LLMs choosing worse variable names than humans. I have seen plenty of human code with temp, data, handler2, and processStuff. I have read dissertations of indirection in human-written services that solved nothing. The bar is not as high as the critics pretend, and the median human contribution to a codebase is not the heroic counterexample anyone wants to claim it is.

Here is the harder stance. I describe whole features to Opus 4.5 and the output works reliably. For software under construction, that is enough to check the box. Ship it. Get users. Refactor later when there is a reason to refactor — performance, maintenance, a shape change in the product. Premature optimization is still premature whether a human or a model is doing it.

The slop framing is mostly a status concern dressed up as a quality concern. The right move is not to argue about authorship. The right move is to deploy defensive agents so the review and test capacity scales with the generation capacity. I've argued elsewhere that the offense-only problem is the real vulnerability — and quality is downstream of fixing it.
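The "defensive agent" idea reduces, at its simplest, to an authorship-blind gate: every change runs the same checks before it merges, no matter who or what wrote it. Here is a minimal sketch in Python — the specific commands (`ruff`, `pytest`) are assumptions; substitute whatever linter and test runner your project actually uses.

```python
"""A minimal sketch of an authorship-blind defensive gate.

Every change -- human-written or model-written -- must pass the same
checks. The commands below (ruff, pytest) are placeholder assumptions;
swap in your project's real linter and test runner."""
import subprocess
import sys

# Each check is a command; a nonzero exit code fails the gate.
CHECKS = [
    ["ruff", "check", "."],  # lint: naming, dead code, obvious smells
    ["pytest", "-q"],        # tests: does the code actually work?
]

def run_gate(checks=CHECKS) -> bool:
    """Run each check in order; return True only if all pass."""
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"gate failed: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True
```

The point of the sketch is the shape, not the tools: the gate never inspects who authored the diff, only whether the checks pass — which is exactly what lets review capacity scale with generation capacity.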

If your defensive infrastructure can catch the bad code, the question of who wrote it stops being interesting.

— Ry

Key takeaways

  • The "AI code is slop" framing applies a standard humans never had to meet.
  • For software under construction, code that works reliably is enough. Refactor when there is a reason to.
  • Premature optimization is premature whether the author is human or model.

FAQ

Is AI-generated code actually worse than human code?

Not in the ways critics usually claim. Variable names, abstraction choices, and structural decisions in AI output are at least as good as the median human codebase. The difference is that we audit AI output and shrug at human output.

When should you refactor AI-generated code?

When there is a reason — a real performance problem, a real maintenance cost, a real shape change. Not preemptively because the author was a model. Premature optimization is premature regardless of authorship.