I had a conversation recently with an engineering leader at one of the fastest-growing infrastructure companies in the world. His team has gone deep on AI adoption — custom skills on top of Claude Code for incident investigation, log analysis, SRE workflows. By any reasonable measure they are ahead of the curve.
What stuck with me was not how much they had built. It was the shelf life of what they had built. Three to four weeks of work produced tooling that matched or exceeded what dedicated AI SRE vendors were offering. That is impressive. It is also a warning sign. If a small team can build something competitive in a month, what happens to that tooling in six months when the person who built it has moved on?
The instinct to build is correct. You cannot stop engineers from doing this, and you should not try. People are building AI tools internally because their careers depend on it. The mistake is the build-forever assumption.
Here is what actually happens. Someone spends a few weeks building a Claude Code skill or a custom automation. It works. Models change, APIs change, the team's needs evolve, and suddenly the tooling feels like it was built in a different era — because in AI terms, it was. The top five percent of engineering orgs can keep up with this pace. Everyone else hits a prioritization wall. Maintaining bespoke AI tooling is not your core competency, even if building it felt natural.
Worse, automating knowledge work always comes back to writing software. Software has properties prompt engineering does not — it can be tested, versioned, maintained by someone other than the original author. It can also rot if it is not actively maintained. The enterprise AI conversation needs to shift from "how do we adopt AI tools" to "how do we build and maintain AI software systems." Those are different questions, and I've written elsewhere about why homegrown platforms decay once you stop asking the second one.
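The difference is concrete. A minimal sketch, with all names and interfaces hypothetical rather than drawn from any real incident-tooling codebase: a prompt living in someone's terminal history cannot be tested, but the same prompt wrapped in versioned software can be pinned, faked, and regression-tested by whoever inherits it.

```python
from dataclasses import dataclass
from typing import Callable

# The prompt becomes a versioned artifact instead of terminal-history folklore.
TRIAGE_PROMPT_V2 = "Summarize the probable root cause from these log lines:\n{logs}"

@dataclass
class TriageResult:
    summary: str
    prompt_version: str

def triage(logs: str, model: Callable[[str], str]) -> TriageResult:
    """Wrap the model call so the prompt, its version, and the output shape are all testable."""
    summary = model(TRIAGE_PROMPT_V2.format(logs=logs))
    return TriageResult(summary=summary, prompt_version="v2")

# A regression test needs no live model: inject a fake and assert the contract.
def fake_model(prompt: str) -> str:
    assert "log lines" in prompt  # the prompt's wording is itself under test
    return "OOM kill in worker pool"

result = triage("worker-3 exited 137", fake_model)
assert result.summary == "OOM kill in worker pool"
assert result.prompt_version == "v2"
```

Nothing here is sophisticated, and that is the point: once the prompt is behind a typed function, a maintainer who never met the original author can change models, bump the prompt version, and know from the tests what broke.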
Key takeaways
- A small team can now build internal AI tooling that matches a dedicated vendor in three to four weeks. That is the headline. The shelf life is the warning.
- Models change, APIs change, team needs evolve. Bespoke tooling rots quickly when the AI stack underneath it is moving this fast.
- Automating knowledge work always comes back to writing software, which can be tested, versioned, and maintained — but only if someone is paid to maintain it.
FAQ
How fast does internal AI tooling become obsolete?
In AI terms, a year is a long time. Tooling built in a few weeks can match a vendor on day one and feel dated six months later because the underlying models, APIs, and best practices have all moved.
Should companies stop building internal AI tools?
No. The instinct to build is correct and you cannot stop engineers from doing it. The mistake is the build-forever assumption. The right framing is that internal tooling is disposable scaffolding while the platform underneath should not be.