A statistic from MIT floats around claiming that something like 95% of enterprise AI projects fail. People cite it as evidence the technology is not ready. Wrong conclusion. The technology is ready enough. What is not ready is the organization.
I see this every day at Tembo. Companies want to deploy agents against their business processes, but they cannot describe those processes with enough fidelity for an agent to act on them. The CEO does not know that a team is spending every afternoon calling customers to remind them about a 2:30 order cutoff. The CTO does not know there are 46 handoffs and 18 bottlenecks in the customer onboarding flow. Nobody has observability into how the business actually works.
Toyota figured this out for manufacturing decades ago. Step one of improving a production line is the Gemba walk — go observe the process. But in knowledge work there is no factory floor. The process lives in email threads, Slack messages, spreadsheets, and the heads of people who have been doing the job for fifteen years. It is invisible.
You cannot give agents context because people do not know how their own businesses work. Before you can deploy agents, you have to make the invisible visible. Screen recordings. Process narrations. Artifact uploads. Not SOPs that nobody reads — actual observable process. Once you have it, throwing agents at improvement becomes almost mechanical. The hard part was always the seeing.
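What does an "observable process" look like once it is captured? Here is a minimal sketch, assuming a deliberately simple schema (the types, field names, and example data are illustrative assumptions, not any particular product's format): one recording plus its narration, reduced to steps an agent can traverse.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    """One observed step in a recorded business process."""
    actor: str                     # who performed the step, e.g. "CSR"
    action: str                    # what they did, in plain language
    tools: list[str]               # where it happened: email, Slack, a spreadsheet
    artifacts: list[str]           # files or links produced or consumed
    handoff_to: str | None = None  # next actor, if the work changes hands

@dataclass
class ObservedProcess:
    """A process made visible: recording, narration, and extracted steps."""
    name: str
    recording_uri: str             # screen recording of the work being done
    narration: str                 # the operator explaining what and why
    steps: list[ProcessStep] = field(default_factory=list)

    def handoffs(self) -> int:
        """Count handoffs, the usual hiding place for bottlenecks."""
        return sum(1 for s in self.steps if s.handoff_to is not None)

# Hypothetical example: the afternoon cutoff-reminder work the CEO never knew about.
cutoff_calls = ObservedProcess(
    name="2:30 order-cutoff reminder calls",
    recording_uri="recordings/cutoff-calls.mp4",  # illustrative path
    narration="Every afternoon we call customers who have not ordered yet.",
    steps=[
        ProcessStep(
            actor="CSR",
            action="Pull the list of customers with no order today",
            tools=["ERP", "spreadsheet"],
            artifacts=["no-order-list.xlsx"],
            handoff_to="Phone team",
        ),
        ProcessStep(
            actor="Phone team",
            action="Call each customer before the 2:30 cutoff",
            tools=["phone"],
            artifacts=["call-log.csv"],
        ),
    ],
)
print(cutoff_calls.handoffs())  # -> 1
```

Once the process exists as data, counts like "46 handoffs" stop being folklore and become a query.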
This also reframes governance. Right now every enterprise is trying to settle security, compliance, and approval before anyone has built anything worth governing. Flip the order: let people experiment freely in a sandbox, and surface work for IT and finance review only once it has proven useful (a minimal version of that gate is sketched below). And the right move on Monday is the mirror, not the model: pick one process, record it, and now you have something an agent can act on.
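To make the sandbox-first stance concrete, here is that promotion gate as a sketch; the stage names and transition conditions are assumptions chosen for illustration, not a prescribed policy.

```python
from enum import Enum

class Stage(Enum):
    SANDBOX = "sandbox"    # anyone can experiment; nothing to approve yet
    REVIEW = "review"      # proven useful; now worth IT and finance's time
    GOVERNED = "governed"  # approved, monitored, running for real

def next_stage(stage: Stage, proven_useful: bool, approved: bool) -> Stage:
    """Governance follows value: nothing reaches reviewers until it works."""
    if stage is Stage.SANDBOX and proven_useful:
        return Stage.REVIEW
    if stage is Stage.REVIEW and approved:
        return Stage.GOVERNED
    return stage
```

The inversion is the point: reviewers only ever see work that has already survived contact with the real process.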
Related Essays
Start With the Mirror, Not the Model
Pick one process your business actually runs. Record people doing it. Now you have something an agent can act on — and a foundation for measuring whether it works.
The Operationalization Gap: Where AI Demos Go to Die
The gap between an AI demo and an AI deployment is called software engineering. Most organizations are not equipped to close it, and that is where all the value lives.
Start With the Pain, Not the Platform
The temptation with agent projects is to start with the technology. That is how you end up with a demo that impresses nobody who actually has to use it.
Key takeaways
- The technology is ready enough. The organization is not.
- You cannot give agents context because most companies do not know how their own businesses actually work.
- Process observability — screen recordings, narrations, artifacts — is the prerequisite to deployment, not an afterthought.
FAQ
Why do 95% of enterprise AI projects fail?
Not because models are too weak. Because organizations cannot describe their own processes with enough fidelity for an agent to act on them. The work lives in inboxes, Slack threads, and the heads of fifteen-year employees.
What is process observability?
Making the invisible work of knowledge workers visible — through screen recordings, narrations, and artifact uploads. Not SOPs nobody reads. Actual observable process you can hand to an agent.