2 min read · By Ry Walker

Each Person Needs Their Own Agent Instance


Shared agents with a single configuration are fundamentally broken. A shared agent is the worst possible consultant — one who treats every client the same, does not listen, does not read the room, and is perfectly predictable in the worst way.

Each user gets their own agent instance. Not a shared bot with a single prompt, but a personal copy that learns through interactions with its user. A salesperson who wants aggressive double-tap follow-ups trains their agent to do that. A colleague who prefers a slower cadence trains theirs differently. The agent starts as an infant or a fresh college grad — book-smart but untrained on how this particular human works.

Seven out of ten people might coach their agent to get better. Three might coach it to get worse. The math shakes out — but only if there is an organizational layer watching the patterns. A manager or oversight agent should see discrepancies across instances, identify what consistently works, and lock in policies individuals cannot override. Individuals get autonomy within bounds. The organization gets convergence on what matters.
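The "autonomy within bounds" idea reduces to a simple merge rule. A minimal sketch (the policy names and config shape are illustrative, not from the essay): user preferences apply first, then organization-locked policies override any conflicting keys.

```python
# Hypothetical example: org-locked policies win over per-user settings.
ORG_LOCKED = {"include_unsubscribe_link": True}  # set by the oversight layer


def effective_config(user_prefs: dict, locked: dict = ORG_LOCKED) -> dict:
    # Autonomy within bounds: start from the user's trained preferences,
    # then let locked org policies override any key they share.
    return {**user_prefs, **locked}


cfg = effective_config({"tone": "aggressive", "include_unsubscribe_link": False})
# The user's tone preference survives; the locked policy does not yield.
```

The same merge runs identically across every instance, which is what gives the organization convergence on what matters while each user keeps the rest.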

The hard part, again, is the learning mechanism. When a user spars with their agent, that conversation has to compact into something actionable. The agent cannot just append every interaction to an ever-growing context file. It needs to extract the behavioral change and persist it. I've argued elsewhere that general-purpose memory is unsolved but workflow-scoped learning is tractable. Personal-agent learning lands in the same bucket — narrow it to behaviors, not transcripts.
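"Extract the behavioral change and persist it" might look something like the sketch below (class and method names are assumptions for illustration): the profile stores one rule per workflow and overwrites on re-coaching, so it stays compact instead of growing with every transcript.

```python
from dataclasses import dataclass, field


@dataclass
class AgentProfile:
    """Hypothetical per-user profile: learned behaviors, not transcripts."""
    user_id: str
    behaviors: dict[str, str] = field(default_factory=dict)

    def learn(self, workflow: str, behavior: str) -> None:
        # Persist only the extracted behavioral change, scoped to a workflow.
        # Overwriting (not appending) keeps the profile from growing unboundedly.
        self.behaviors[workflow] = behavior

    def system_prompt(self, base: str) -> str:
        # Compose the shared base prompt with this user's learned behaviors.
        rules = "\n".join(f"- {w}: {b}" for w, b in sorted(self.behaviors.items()))
        return f"{base}\n\nLearned behaviors for this user:\n{rules}" if rules else base


profile = AgentProfile("alice")
profile.learn("follow_up", "send a double-tap follow-up within 24 hours")
profile.learn("follow_up", "wait 3 days, then send one gentle nudge")  # coaching replaces
```

The open problem the essay names — turning a sparring conversation into the `behavior` string — is the hard part; the storage and composition around it are the easy part.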

If you are deploying agents to a team, do not ship one shared bot. Ship instances. Then build the oversight layer. Skip the second step and your three drifters will become everyone's problem.

Key takeaways

  • A shared agent is the worst possible consultant — predictable in the worst way, unable to read the room.
  • Each user trains a personal instance through interactions. Some get better, some get worse; the math shakes out only with oversight.
  • The org needs a layer watching across instances, identifying what works, locking in policies individuals cannot override.

FAQ

Why are shared agents broken?

Because people work differently and a single configuration is necessarily wrong for most of the people using it. A shared bot with one prompt is the worst possible consultant — perfectly consistent across clients, perfectly tone-deaf to each one.

How do you keep per-user agents from drifting?

With an organizational layer that watches the patterns. A manager or oversight agent sees discrepancies across instances, identifies what consistently works, and locks in policies individuals cannot override. Autonomy within bounds.