Key takeaways
- Full task-to-merged-PR pipeline with autonomous feedback loops — auto-resumes agents on CI failures, review feedback, and merge conflicts without human intervention
- 319 stars in 7 days (created Mar 19, 2026). Solo developer project with production-grade Kubernetes architecture and Helm deployment
- Pod-per-repo with git worktree isolation is a pragmatic middle ground between Gas Town's per-agent complexity and simpler tmux-based approaches
- The feedback loop is the real differentiator — most orchestrators stop at opening a PR, Optio drives through CI, review, and merge
FAQ
What is Optio?
A self-hosted workflow orchestrator that takes coding tasks (from GitHub Issues, Linear, or manual input), runs AI agents in isolated Kubernetes pods, opens PRs, monitors CI, handles review feedback, and auto-merges when everything passes.
How does Optio handle CI failures?
It polls PRs every 30 seconds. When CI fails, the agent is automatically resumed with the failure context. Same for review change requests and merge conflicts. The loop continues until the PR merges or hits a configured retry limit.
How does Optio compare to Gas Town?
Gas Town coordinates 20-30 agents working in parallel on the same codebase with complex role hierarchies. Optio is simpler — one task, one agent, one PR — but adds the full PR lifecycle and feedback loop that Gas Town lacks.
What agents does Optio support?
Claude Code and OpenAI Codex, configurable per repository with custom prompts, models, and container images.
Overview
Optio is a self-hosted workflow orchestrator that turns coding tasks into merged pull requests without human babysitting. You submit a task — manually, from a GitHub Issue, or from a Linear ticket — and Optio handles the rest: provisions an isolated Kubernetes pod, runs an AI agent (Claude Code or Codex), opens a PR, monitors CI, triggers code review, auto-fixes failures, and squash-merges when everything passes.
Key stats: 319 stars, TypeScript, MIT license. Created March 19, 2026 — 7 days old. Solo developer (Jon Wiggins, Senior ML Engineer at Chartbeat). 162 commits, extremely active development.
The name is fitting — in Roman military terminology, an optio was a centurion's second-in-command who handled logistics and execution while the centurion focused on strategy.
Architecture
Optio runs as a Turborepo monorepo deployed via Helm to Kubernetes:
Core stack:
- API Server — Fastify 5, Drizzle ORM, BullMQ workers
- Web Dashboard — Next.js 15, Tailwind CSS 4, Zustand. Real-time log streaming via WebSocket
- Database — PostgreSQL 16 for tasks/logs/events/secrets, Redis 7 for job queue and pub/sub
- Container Runtime — Kubernetes with pod-per-repo architecture
- Auth — Multi-provider OAuth (GitHub, Google, GitLab)
Packages:
- agent-adapters — Claude Code and Codex prompt/auth adapters
- container-runtime — Kubernetes pod lifecycle, exec, log streaming
- ticket-providers — GitHub Issues and Linear integrations
- shared — Types, task state machine, prompt templates, error classifier
The pod-per-repo model is key: one long-lived Kubernetes pod per repository, with git worktree isolation for concurrent tasks. This avoids the overhead of spinning up fresh containers per task while still maintaining isolation between work items.
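As a rough sketch, per-task worktree planning inside a repo pod could look like the following. Everything here is illustrative: `planWorktree`, the `optio/task-` branch prefix, and the `.worktrees/` directory layout are assumptions, not Optio's actual code.

```typescript
// Hypothetical sketch of pod-per-repo worktree isolation. Each task gets
// its own worktree and branch inside the repo pod's long-lived clone, so
// concurrent tasks never touch each other's working tree.
export interface WorktreePlan {
  path: string;       // where this task's worktree lives inside the pod
  branch: string;     // per-task branch the agent commits to
  commands: string[]; // git invocations the pod would exec to set it up
}

export function planWorktree(
  repoRoot: string,
  taskId: string,
  baseBranch = "main",
): WorktreePlan {
  const branch = `optio/task-${taskId}`;
  const path = `${repoRoot}/.worktrees/${taskId}`;
  return {
    path,
    branch,
    commands: [
      // Refresh the base branch once, then fork a worktree off it.
      `git -C ${repoRoot} fetch origin ${baseBranch}`,
      `git -C ${repoRoot} worktree add -b ${branch} ${path} origin/${baseBranch}`,
    ],
  };
}
```

Two tasks on the same repo yield disjoint paths and branches, which is what lets one long-lived pod serve concurrent tasks without fresh-container overhead.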
The Feedback Loop
This is what separates Optio from most agent orchestrators. The typical flow stops at "agent opens a PR." Optio keeps going:
- Task queued — from UI, GitHub Issue, or Linear ticket
- Provisioning — find or create a Kubernetes pod, set up a git worktree
- Execution — AI agent runs with configured prompt and model
- PR opened — agent's work becomes a pull request
- PR watcher — polls every 30 seconds for CI status, review state, merge readiness
- Feedback loop:
  - CI fails → resume agent with failure context
  - Review requests changes → resume agent with reviewer feedback
  - Merge conflict → resume agent to rebase
  - CI passes + approved → squash-merge and close linked issue
The PR watcher is a BullMQ worker that continuously monitors open PRs. When it detects a state change, it creates a new agent execution with the relevant context injected. The agent picks up where it left off.
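The watcher's decision step can be sketched as a pure function over observed PR state. The type and function names below are assumptions for illustration, not Optio's real API, and the precedence order (conflicts before CI before review) is a guess.

```typescript
// Illustrative sketch of the PR watcher's decision logic: map the PR's
// observed state to the next action. In Optio this runs inside a BullMQ
// worker on a 30-second poll; here it is reduced to the pure core.
type CiStatus = "pending" | "passing" | "failing";
type ReviewState = "none" | "changes_requested" | "approved";

interface PrState {
  ci: CiStatus;
  review: ReviewState;
  hasMergeConflict: boolean;
}

type Action =
  | { kind: "wait" }                              // poll again in 30s
  | { kind: "resume_agent"; context: string }     // new execution with context
  | { kind: "squash_merge" };                     // done: merge and close issue

function nextAction(pr: PrState): Action {
  if (pr.hasMergeConflict) {
    return { kind: "resume_agent", context: "rebase onto base branch" };
  }
  if (pr.ci === "failing") {
    return { kind: "resume_agent", context: "fix failing CI" };
  }
  if (pr.review === "changes_requested") {
    return { kind: "resume_agent", context: "address reviewer feedback" };
  }
  if (pr.ci === "passing" && pr.review === "approved") {
    return { kind: "squash_merge" };
  }
  return { kind: "wait" };
}
```

The key property is that every non-terminal state change produces a resume with injected context, so the loop only exits at merge or at the retry limit.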
A separate code review agent can also be configured as a subtask with its own prompt and model — so one agent writes the code and another reviews it before human reviewers see it.
Competitive Landscape
The coding agent orchestration space is rapidly filling up:
| Tool | Approach | Status |
|---|---|---|
| Optio | Task-to-merged-PR with K8s pods and feedback loops | 319 stars, MIT, 7 days old |
| Gas Town | 20-30 parallel agents with role hierarchy (Mayor, Crew, etc.) | 4k+ stars, by Steve Yegge |
| Conductor | Git worktree isolation, dashboard for parallel agents | ~1k stars, by Melty Labs |
| Capy | IDE for parallel agent development with planning agents | Commercial, VC-backed |
| Code Conductor | GitHub-native: tasks as Issues, agents claim and PR | Open source |
| Axon | K8s pods, TaskSpawner watches Issues, cost tracking | Open source |
| Cursor Background Agents | Built into Cursor IDE, cloud sandboxes | Commercial, closed |
Optio's niche: It's not trying to coordinate 30 agents in parallel like Gas Town. It's solving a different problem — taking a single task through the entire lifecycle from intake to merge. The feedback loop (CI fix, review fix, merge) is the key differentiator. Most tools stop at "opened a PR."
Gas Town is broader in scope but more complex — seven distinct agent roles, its own task tracking system (Beads), and designed for frontier users running 20+ agents. Optio is more accessible: one task, one agent, one PR, driven to completion.
Capy and Cursor Background Agents are commercial alternatives solving similar problems but without the self-hosted, Kubernetes-native approach.
Strengths
- Full lifecycle. Most orchestrators stop at PR creation. Optio's CI-failure-resume and review-feedback-resume loops are genuinely valuable — these are where human time gets burned.
- Clean architecture. Turborepo monorepo, Fastify + Drizzle + BullMQ is a modern, well-chosen stack. Helm chart for production K8s deployment shows serious intent.
- Per-repo configuration. Model, prompt template, container image, concurrency limits, and setup commands are all tunable per repository. Practical for orgs with diverse codebases.
- Multi-source intake. GitHub Issues, Linear tickets, and manual tasks. Covers the three most common task entry points for engineering teams.
- MIT license. No FSL games, no commercial restrictions. Truly open source.
- Dashboard with cost analytics. Tracks spend per task — critical for teams managing AI API budgets.
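As a sketch, the per-repo configuration described above might look like the following. Every field name here is illustrative; Optio's actual schema may differ.

```typescript
// Hypothetical per-repository configuration. Field names, the example
// values, and the agent/model identifiers are all assumptions made for
// illustration, not Optio's real schema.
interface RepoConfig {
  repo: string;               // owner/name on the git host
  agent: "claude-code" | "codex";
  model: string;              // model identifier passed to the agent
  promptTemplate: string;     // custom prompt for this codebase
  containerImage: string;     // image the long-lived repo pod runs
  maxConcurrentTasks: number; // worktrees allowed in parallel
  setupCommands: string[];    // run once when the pod is provisioned
}

const billingService: RepoConfig = {
  repo: "acme/billing-service",
  agent: "claude-code",
  model: "claude-sonnet-4-5",
  promptTemplate: "Follow CONTRIBUTING.md; never edit generated migrations.",
  containerImage: "ghcr.io/acme/agent-runner:node-20",
  maxConcurrentTasks: 3,
  setupCommands: ["npm ci", "npm run build"],
};
```

The point of the shape: a monorepo team and a legacy-service team can run the same Optio instance with different models, prompts, and images per repository.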
Weaknesses
- Solo developer. 162 commits, all from Jon Wiggins. Bus factor of 1. If he loses interest, the project stalls.
- 7 days old. Extremely early. The architecture looks solid but it hasn't been battle-tested at scale.
- Kubernetes requirement. Docker Desktop with K8s enabled is the minimum. This limits adoption to teams already comfortable with container orchestration. Many small teams won't bother.
- No parallelism story. One task = one agent = one PR. If you want 10 agents working on 10 tasks simultaneously, the architecture supports it (multiple worktrees per pod), but there's no intelligent task decomposition or coordination between agents.
- Agent support limited to Claude Code and Codex. No Gemini, no open-source models, no Cursor integration.
- No community yet. 11 open issues, 16 forks, but zero external contributors.
Relevance to Tembo
Optio is a strong signal for what Tembo is building. The feedback loop pattern — agent writes code, CI fails, agent fixes, reviewer comments, agent addresses — is the exact workflow that agent orchestration platforms need to nail. Most tools treat "open a PR" as the finish line; Optio correctly identifies it as the midpoint.
The Kubernetes-native approach (pod-per-repo, worktree isolation, Helm deployment) is also noteworthy as the emerging infrastructure pattern for production agent orchestration. Teams that are serious about running agents at scale are converging on K8s as the runtime.
The per-repo configuration model (custom prompts, models, container images) maps directly to how enterprise teams will need to configure agent behavior across different codebases with different requirements.
Worth watching as it matures. The solo-developer risk is real, but the architecture and problem framing are right.
Research by Ry Walker Research