By Ry Walker

Agents in Production: GTM Mesh and the Death of the ERP

Key takeaways

  • The real bottleneck in enterprise software is not missing data — it is humans context-switching across six systems to act on it.
  • The same mesh-of-specialized-agents pattern works for GTM (routing between ad click and revenue) and operations (replacing the ERP dashboard).
  • In GTM, agents collapse the time between first touch and qualified pipeline from days to seconds by enriching, routing, and personalizing in real time.
  • In operations, agents pull from HubSpot, Metabase, and Fathom to deliver a prioritized daily plan in Slack — and write updates back to source systems.
  • Agent logic should be inspectable code, not opaque prompts, so the team can challenge and evolve the prioritization algorithm.
  • The agent does not replace your CRM or your ERP. It becomes the operating layer on top of them — the interface humans actually use.

FAQ

What is a mesh of agents?

A mesh of agents is a fabric of small, specialized programs that each do one job well and coordinate through shared context. Instead of one mega-agent doing everything, each agent observes state, applies logic, and hands off to the next — with humans reviewing the outputs that matter.

Why should agent logic be code instead of prompts?

Code is inspectable, versionable, and evolvable in ways prompts are not. When a sales leader asks why a lead was routed to self-serve, you need to point at a function — not a regenerated prompt. That transparency is what earns trust and prevents the system from getting ripped out.

How do agents replace the ERP without replacing the CRM?

The agent does not replace the systems of record — it becomes the operating layer on top of them. HubSpot stays the data layer, Metabase stays the analytics layer, and the agent synthesizes both into a prioritized daily plan delivered in Slack. Humans stop being the integration glue between six tools.

Every enterprise software stack has the same dirty secret: the systems companies paid millions for are half-adopted, inconsistently updated, and quietly hated by the people who use them. The marketing team has a beautiful dashboard nobody opens. The CX team is toggling between six tools and none of them tell you what to do next. The pipeline review feels exactly the same as it did six months ago — a handful of real opportunities buried under a mountain of noise.

The instinct, the operational-brain instinct, is to fix this with discipline. Pick one tool. Build a better dashboard. Force adoption. Run training. Set up accountability metrics.

That instinct is wrong. Not because discipline does not matter, but because the problem is not which tool people use. The problem is that humans should not be the integration layer between their own systems. That is what agents are for.

The Pattern

I keep seeing the same architecture work across very different problems. It is a mesh of specialized agents — each one doing one thing with high confidence — coordinating through shared context, with humans reviewing the outputs that matter.

Not one mega-agent that does everything. Not a chatbot taped to an API. A fabric of small, inspectable programs that observe state, apply logic, take action, and write results back to the systems where the data already lives.

I want to walk through two concrete versions of this pattern. One is in go-to-market. The other is in operations. They look like different problems. They are the same problem.

Example 1: The Gap Between Ad Click and Revenue

Every marketing team I talk to has the same story. Ad spend is up. Impressions are up. Click-through rates look healthy. And yet the pipeline review feels like nothing changed.

The problem is not traffic. The problem is what happens between the ad click and the revenue event. That gap — the messy, unstructured space where a prospect lands on your site, pokes around, maybe asks a question, and then either converts or vanishes — is where almost every go-to-market motion loses the plot.

Traditional landing pages are frozen in time. You write copy, you pick a CTA, you run traffic, you hope the message resonates. Maybe you A/B test two headlines. Maybe you swap the button color. This is optimization at the margins.

Meanwhile, the prospect arriving on your page is a living context problem. They work at a specific company, in a specific role, with specific pain. They might be a self-serve developer who wants to sign up and start building. They might be a VP of Engineering evaluating vendors for a team of fifty. The static page treats both of them identically. Then you wonder why your conversion rate plateaus.

What works — and what we are building toward at Tembo — is a fundamentally different model. An agent sits between the ad click and the conversion event. It enriches the visitor in real time. It knows, within seconds, whether this person matches your enterprise ICP or your self-serve profile. And it routes them accordingly — different CTA, different content depth, different follow-up sequence.

This is not personalization in the way marketers have used that word for the last decade. This is not "Hi {{first_name}}" in a subject line. This is an agent making a real-time qualification decision that used to take an SDR three days and a Salesforce lookup.

Behind that one decision is a mesh.

  • The enrichment agent qualifies the visitor from firmographic and behavioral signal.
  • The routing agent decides self-serve versus sales-led.
  • The content agent rebuilds the page around the visitor's context.
  • The analytics agent watches all of them and feeds learnings back.

No single agent does all of this well. Each one does one job, hands off context to the next, and a human reviews the outputs that change the strategy. The enrichment agent passes its findings to the routing agent. The routing agent's decision informs the content agent. The analytics agent correlates topics explored with conversion outcomes — it notices that prospects who ask "how is this different from Copilot" convert at half the rate of prospects who ask about "setup time," and it surfaces setup information earlier in the experience.
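The handoff described above can be sketched in a few lines. This is a minimal illustration, not Tembo's implementation: each agent is a plain function that reads shared context, adds what it learned, and passes the context along. All the names, thresholds, and CTA strings here are assumptions for the sake of the example.

```python
# Minimal sketch of the GTM mesh: agents as plain functions sharing context.
# Every field name and threshold below is illustrative.

def enrich(ctx):
    # Enrichment agent: qualify the visitor from firmographic signal.
    ctx["segment"] = "enterprise" if ctx.get("company_size", 0) >= 50 else "self_serve"
    return ctx

def route(ctx):
    # Routing agent: decide sales-led vs. self-serve from the enrichment output.
    ctx["cta"] = "book_a_demo" if ctx["segment"] == "enterprise" else "start_building"
    return ctx

def personalize(ctx):
    # Content agent: rebuild the page around the routing decision.
    ctx["headline"] = {
        "book_a_demo": "See how teams of 50+ ship faster",
        "start_building": "Sign up and deploy in minutes",
    }[ctx["cta"]]
    return ctx

def run_mesh(visitor):
    ctx = dict(visitor)
    for agent in (enrich, route, personalize):  # one job each, then hand off
        ctx = agent(ctx)
    return ctx

result = run_mesh({"company_size": 200, "role": "VP Engineering"})
print(result["cta"])  # book_a_demo
```

The point of the shape is that each function is independently testable and replaceable; the analytics agent from the list above would simply be another function observing the same context.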

The behavioral data was always there. Scroll depth. Time on page. FAQ expansions. Component clicks. In most setups, it sits in an analytics dashboard and somebody reviews it next week in a meeting. The agent flips that — it observes in real time, correlates against conversion, and produces reviewable recommendations on a continuous loop.

This is the pattern: context in, background execution, reviewable output, human approval. The agent does not autonomously rewrite your landing page overnight. It observes, it correlates, it recommends, and a human decides.

Example 2: The ERP Is Dead. The Agent Is Your Operating System Now.

Now look at the operations side, where the same pattern runs in a completely different domain.

A team has HubSpot as their system of record. Half the team uses it. The other half lives in spreadsheets and post-it notes. There is a JIRA instance for tech support, an Airtable for onboarding tickets, Metabase for revenue data, Intercom for inbound, Outlook for everything else. The CX team is toggling between six tools and none of them tell anyone what to do next.

The ERP was supposed to solve this. It did not. It is half-adopted, inconsistently updated, and the dashboard everyone was supposed to live in is the dashboard nobody opens.

Here is what actually works. Instead of building another dashboard, you build an agent that pulls data from every system the team touches — HubSpot, Metabase, Fathom, Airtable, Intercom — and synthesizes it into a prioritized daily work plan delivered where people already live. For most teams, that is Slack.

Every morning at 9 AM, each team member gets a message: here are your priorities for today, here is why, here is what is overdue, here is what is escalating. Not a notification. Not a reminder. A generated, reasoned work plan built from the actual state of their accounts, tickets, and revenue data.

Then the team member works. They make calls, they have meetings, they resolve tickets. Instead of toggling back to HubSpot to log the update — which is where adoption dies — they tell the agent what happened. The agent writes it back to the source systems. HubSpot gets updated. The ticket gets closed. The touch gets logged. No context switching. No data entry.

This is the same mesh. A data-collection agent pulls from each system. A prioritization agent applies logic. A delivery agent posts to Slack. A capture agent listens to the human's reply and writes back to the system of record. Each one inspectable, each one composable, each one doing exactly one job.
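The daily loop can be sketched the same way. Everything below is a stand-in: the dictionaries play the role of HubSpot and ticket data, and the strings play the role of Slack messages; none of it is a real API call.

```python
# Minimal sketch of the operations mesh: collect -> prioritize -> deliver -> capture.
# Data sources and the message format are illustrative stand-ins, not real integrations.

def collect():
    # Data-collection agent: pull state from each system the team touches.
    return [
        {"account": "Acme", "days_since_touch": 12, "open_urgent_tickets": 0},
        {"account": "Globex", "days_since_touch": 2, "open_urgent_tickets": 1},
    ]

def prioritize(accounts):
    # Prioritization agent: apply logic (urgent tickets first, in this toy version).
    return sorted(accounts, key=lambda a: a["open_urgent_tickets"], reverse=True)

def deliver(plan):
    # Delivery agent: format the prioritized plan for Slack.
    return "\n".join(f"{i + 1}. {a['account']}" for i, a in enumerate(plan))

def capture(reply, accounts):
    # Capture agent: write the human's update back to the system of record.
    for a in accounts:
        if a["account"] == reply["account"]:
            a["days_since_touch"] = 0  # touch logged, no manual data entry
    return accounts

accounts = collect()
print(deliver(prioritize(accounts)))
```

The capture step is the one that kills the adoption problem: the human replies in Slack, and the mesh, not the human, does the write-back.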

It is software that collects data, applies logic, generates output, and writes results back to production systems. It is the agent doing the work the ERP was supposed to make easy and never did.

Inspectable Logic, Not Black Box Magic

This is where most agent projects go sideways. Someone builds a clever prompt chain that works in a demo, and then nobody can explain why it recommended Account A over Account B. The team does not trust it. Adoption stalls. The project gets shelved.

The fix is straightforward but non-obvious: the agent's logic should be real, permanent, inspectable code. Not a prompt regenerated every time. Code that anyone — including non-engineers — can ask questions about.

The first version of the algorithm should be dead simple. If an account has not been touched in 10 days, it goes on the list. If there is an open urgent ticket older than 5 days, it goes to the top. If the account is high-revenue and flagged red on health, it gets escalated. These are rules you can write in an afternoon.
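Those three rules, written as inspectable code, might look like this. The thresholds come straight from the paragraph above; the field names are assumptions about what the data-collection step provides.

```python
# The first-pass prioritization rules as inspectable, versionable code.
# Thresholds (10 days, 5 days) are from the text; field names are assumed.

def priority(account):
    """Return a sortable priority score; higher means more urgent."""
    score = 0
    if account.get("days_since_touch", 0) > 10:
        score += 1    # untouched for 10+ days: it goes on the list
    if account.get("oldest_urgent_ticket_days", 0) > 5:
        score += 10   # open urgent ticket older than 5 days: top of the list
    if account.get("high_revenue") and account.get("health") == "red":
        score += 100  # high-revenue and red on health: escalate
    return score

accounts = [
    {"name": "A", "days_since_touch": 12},
    {"name": "B", "oldest_urgent_ticket_days": 7},
    {"name": "C", "high_revenue": True, "health": "red"},
]
daily_list = sorted(accounts, key=priority, reverse=True)
print([a["name"] for a in daily_list])  # ['C', 'B', 'A']
```

A new hire can read this in a minute, and when the team wants to argue that ten days is too long, the argument is a one-line pull request.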

Because the logic is code, it is evolvable. The team can propose changes. You can version it. You can explain to a new hire exactly how their daily priorities are generated. If someone asks the agent "explain to me how prioritization works," it can read its own code and give a plain-language answer. That is the kind of transparency that earns trust in an enterprise environment.

The same rule applies to the GTM mesh. If the routing agent decides this visitor is enterprise and routes them to a booking flow, somebody on the marketing team needs to be able to read the rule. Not a prompt — an inspectable function. The day a sales leader asks "why did this lead go to self-serve" and you cannot answer is the day the system gets ripped out.
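A readable routing rule is short enough to show in full. This is a hypothetical example of the shape, not a real routing implementation: the signals are invented, and the key design choice is that the function returns a reason alongside the decision, so the "why did this lead go to self-serve" question always has an answer.

```python
# A routing rule a marketing lead can open and read. Signals are illustrative.

def route_visitor(visitor):
    """Decide enterprise (sales-led) vs. self-serve, with a stated reason."""
    if visitor.get("company_size", 0) >= 50 and visitor.get("is_decision_maker"):
        return "enterprise", "company_size >= 50 and decision-maker role"
    if visitor.get("existing_self_serve_account"):
        return "self_serve", "returning self-serve user"
    return "self_serve", "default: no enterprise signals matched"

path, reason = route_visitor({"company_size": 200, "is_decision_maker": True})
print(path, "-", reason)  # enterprise - company_size >= 50 and decision-maker role
```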

Start With the Pain, Not the Platform

The temptation with agent projects is to start with the technology. Pick a model, pick a framework, wire up some APIs, see what happens. That is how you end up with a demo that impresses nobody who actually has to use it.

Start with the pain instead. In GTM, the pain is that prospects fall into a static page and the system has no idea who they are. In operations, the pain is that account managers do not have a single place to manage their daily work — they are reactive instead of proactive, systems are inconsistently updated, and managers have no visibility into what is actually happening across the book of business.

The agent does not replace HubSpot. It does not replace Metabase. It does not replace your landing page CMS. It sits on top of all of them and becomes the interface the team — and the prospect — actually interacts with. HubSpot becomes the data layer. Metabase becomes the analytics layer. The agent becomes the operating layer. The thing that turns data into action and action back into data.

This is what I mean when I talk about agents as software, not prompts. A prompt can summarize your CRM data. Software can run your daily operations. The difference is not sophistication — it is reliability, inspectability, and the ability to evolve the system without rebuilding it from scratch every time the business logic changes.

The Pilot Is the Product

Do not wait for perfection. Pilot with three or four users. Give them the daily priority message. Let them complain about what it gets wrong. Collect those complaints as the backlog for the next version of the algorithm. The same approach works for the GTM mesh — turn it on for one campaign, watch what the routing agent gets wrong, and tighten the rules.

The pilot is not a test of whether agents work. The pilot is the first iteration of a system that will run in production permanently. Every piece of feedback is a refinement to the logic. Every edge case is a rule you add to the code.

This is how you operationalize AI in the enterprise. Not with a vendor selection process and a six-month implementation timeline. With a working agent, a simple algorithm, a feedback loop, and the willingness to let the system evolve in production.

The Shift

Most companies pour energy into the top of the funnel and the bottom of the funnel and treat the middle as a static page with a form. Most companies pour energy into the system-of-record dashboard and ignore the fact that humans are the integration layer holding it all together. Both mistakes have the same root cause. They optimize the artifacts and ignore the workflow.

The companies that figure this out will not just have better dashboards or better landing pages. They will collapse the time between first touch and qualified pipeline from days to seconds. They will have operations that run themselves — with humans reviewing, approving, and steering, but never again serving as the glue between six disconnected systems.

The ERP is not going away. The CRM is not going away. The landing page is not going away. But the human as the operating system between them is. The agent takes that job now.

— Ry