
Codex App

Codex App is OpenAI's macOS desktop app for multi-agent coding workflows — featuring parallel worktrees, Skills, Automations, and unified sync across CLI and IDE.

Key takeaways

  • Desktop command center for running multiple coding agents in parallel with built-in Git worktrees — each agent works on an isolated copy
  • Skills and Automations enable background execution without prompting — issue triage, CI/CD monitoring, and scheduled tasks run while you sleep
  • Unified experience across App, CLI, IDE extension, and web — session history and configuration sync automatically via ChatGPT account

FAQ

What is OpenAI Codex App?

Codex App is a macOS desktop app for managing multiple AI coding agents in parallel, with built-in worktree support, Automations, and Git functionality.

How is Codex App different from Codex CLI?

Codex App provides a visual interface for multi-agent orchestration, Automations for scheduled background tasks, and built-in Git tools. The CLI is terminal-only and does not include Automations.

How much does Codex App cost?

Included with ChatGPT Plus ($20/mo), Pro ($200/mo), Business, Enterprise, and Edu plans. A limited trial is available for Free and Go users.

Does Codex App run in the cloud?

Codex App runs locally on your Mac with sandboxed agents. Cloud threads are available for background execution, and Automations will support cloud-based triggers in future updates.

Executive Summary

Codex App is OpenAI's macOS desktop application for orchestrating multiple AI coding agents in parallel. Released February 2026, it represents OpenAI's response to the Claude Code and Claude Cowork apps that have gained traction with developers. The app provides a visual command center for multi-agent workflows with built-in Git worktrees, Skills for extensibility, and Automations for scheduled background tasks.

Attribute | Value
Company | OpenAI
Founded | 2015
Funding | $11.3B+
Employees | ~3,000
Headquarters | San Francisco, CA

Product Overview

Codex App is designed for the shift from single-agent coding to multi-agent orchestration. Since Codex launched as a CLI tool in April 2025 and expanded to the cloud in May 2025, developers have moved toward running multiple agents across projects, delegating work in parallel, and trusting agents with tasks spanning hours, days, or weeks.

The app picks up session history and configuration from the Codex CLI and IDE extension, enabling seamless continuation across surfaces. Over one million developers have used Codex since the launch of GPT-5.2-Codex in mid-December 2025.

Key Capabilities

Capability | Description
Multi-Agent Orchestration | Run parallel agents across projects with visual monitoring; switch between tasks without losing context
Git Worktrees | Built-in worktree support isolates agent work on separate copies of code; explore different paths without touching local Git state
Skills | Extend Codex beyond code generation with task-specific packages for Figma, Linear, Cloudflare, image generation, and more
Automations | Schedule background tasks that run without prompting — results queue for review when you return
Built-in Git Tools | Review diffs, comment inline, stage/revert chunks, commit, push, and create PRs without leaving the app

Product Surfaces / Editions

Surface | Description | Availability
Codex App | macOS desktop app — command center for multi-agent workflows | GA (macOS)
Codex Web | Cloud-based at chatgpt.com/codex with parallel sandboxes | GA
Codex CLI | Open-source terminal agent (Apache 2.0) | GA
IDE Extensions | VS Code, Cursor, Windsurf integration | GA

Technical Architecture

Codex App uses native, open-source, and configurable system-level sandboxing — the same security model as the Codex CLI. By default, agents are limited to editing files in the folder or branch where they're working and using cached web search. Elevated permissions like network access require explicit approval.

Key Technical Details

Aspect | Detail
Deployment | Local (macOS) with cloud thread support
Model(s) | GPT-5.3-Codex, GPT-5.2-Codex, GPT-5.1-Codex-Mini
Integrations | VS Code, Cursor, Windsurf, GitHub, MCP servers
Open Source | CLI only (Apache 2.0); app is proprietary

Worktree Architecture

Each agent works in a Git worktree — a second checkout of your repository that shares the same .git metadata but has its own copy of every file. This allows:

  • Multiple agents working on the same repo without conflicts
  • Exploration of different approaches without touching main branch
  • Checking out changes locally or letting agents continue independently
  • Automatic cleanup of worktrees after 4 days or when you have more than 10

Worktrees are created in $CODEX_HOME/worktrees and start in a detached HEAD state to avoid polluting branches.
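
As a concrete illustration, the sketch below shows how a comparable one-worktree-per-agent layout could be reproduced with plain Git from Python. This is not the app's actual implementation: the helper name, the task-id scheme, and the assumption that CODEX_HOME defaults to ~/.codex are all illustrative.

```python
import os
import subprocess
import uuid

def create_agent_worktree(repo_path: str, codex_home: str) -> str:
    """Create a detached worktree so an agent can edit files without
    touching the repo's checked-out branches."""
    task_id = uuid.uuid4().hex[:8]  # illustrative task identifier
    worktree_path = os.path.join(codex_home, "worktrees", task_id)
    os.makedirs(os.path.dirname(worktree_path), exist_ok=True)
    # --detach checks out the current HEAD commit without creating or
    # moving any branch, so the main checkout's state is left alone.
    subprocess.run(
        ["git", "-C", repo_path, "worktree", "add", "--detach", worktree_path],
        check=True,
    )
    return worktree_path

if __name__ == "__main__":
    # Assumes the default CODEX_HOME of ~/.codex; adjust if yours differs.
    path = create_agent_worktree(".", os.path.expanduser("~/.codex"))
    print(f"agent checkout created at {path}")
```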

Sandboxing

Default permissions:
├── File editing: Project folder/branch only
├── Web search: Cached results (OpenAI index)
├── Network access: Requires approval
└── Elevated commands: Requires approval

Project-level or team-level rules can allowlist specific commands for automatic elevated permissions.
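
As a conceptual sketch only, and not Codex's real rule syntax, an allowlist check of this kind might look like the following; the command patterns and function name are hypothetical.

```python
from fnmatch import fnmatch

# Hypothetical project-level allowlist; patterns are illustrative,
# not Codex's actual configuration format.
ALLOWLISTED_COMMANDS = [
    "git status",
    "npm test",
    "pytest *",
]

def needs_approval(command: str) -> bool:
    """Return True if the command is not allowlisted and should be held
    for explicit approval before running with elevated permissions."""
    return not any(fnmatch(command, pattern) for pattern in ALLOWLISTED_COMMANDS)

print(needs_approval("pytest tests/unit"))         # False: runs automatically
print(needs_approval("curl https://example.com"))  # True: requires approval
```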


Strengths

  • Parallel multi-agent workflows — Built for the new way developers work with multiple agents at once; visual orchestration that terminal tools can't match
  • Skills ecosystem — Over 15 curated skills (Figma, Linear, Cloudflare, Vercel, image generation) plus community skills via agentskills.io
  • Automations for background work — Schedule recurring tasks like issue triage, CI failure analysis, and daily release briefs without prompting
  • Unified cross-surface sync — Session history, configuration, and skills sync across App, CLI, IDE extension, and web
  • Enterprise credibility — Cisco, Temporal, Superhuman, and Kodiak already using Codex; OpenAI has unmatched resources
  • Native sandboxing — Security by design with configurable permissions, not an afterthought

Cautions

  • macOS-only — No Windows or Linux support at launch; Windows announced as coming soon but no timeline
  • OpenAI lock-in — Only works with OpenAI models; no model choice, unlike Claude Code (terminal-based) or Cursor (multi-model)
  • Automations run locally — The app must be running and project available on disk for Automations; cloud-based triggers coming later
  • Memory consumption — Users report the app is a "memory hog" on Reddit; Electron-based architecture draws criticism on Hacker News
  • Limited enterprise integrations — Skills for Linear and Figma exist, but no native Jira, Slack, or signed commit support
  • No BYOK (Bring Your Own Key) — Can't use enterprise model deployments or custom API keys for the app (only for CLI/IDE extension)

Pricing & Licensing

Tier | Price | Codex Limits
ChatGPT Free | $0 | Trial access (limited time)
ChatGPT Go | $0 | Trial access (limited time)
ChatGPT Plus | $20/mo | 45-225 local messages, 10-60 cloud tasks per 5h
ChatGPT Pro | $200/mo | 300-1,500 local messages, 50-400 cloud tasks, priority processing
ChatGPT Business | Custom | Same as Plus + admin controls, SSO
ChatGPT Enterprise | Custom | No fixed limits, scales with credits

Licensing model: Subscription (bundled with ChatGPT plans)

Hidden costs: Heavy users may need the Pro tier to stay within rate limits; the 2x promotional limits are temporary

Credits system: When you exceed limits, credits allow continued usage at ~5 credits/local message (GPT-5.3-Codex) or ~1 credit/message (GPT-5.1-Codex-Mini).
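
As a rough back-of-envelope illustration using the approximate rates above (not an official calculator), overage cost scales linearly with the number of extra messages and the model chosen.

```python
# Approximate per-message credit rates quoted above; estimates only.
CREDITS_PER_LOCAL_MESSAGE = {
    "gpt-5.3-codex": 5,
    "gpt-5.1-codex-mini": 1,
}

def estimate_overage_credits(extra_messages: int, model: str) -> int:
    """Estimate credits consumed by local messages sent beyond the plan limit."""
    return extra_messages * CREDITS_PER_LOCAL_MESSAGE[model]

# 100 messages past the limit on each model:
print(estimate_overage_credits(100, "gpt-5.3-codex"))       # ~500 credits
print(estimate_overage_credits(100, "gpt-5.1-codex-mini"))  # ~100 credits
```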


Competitive Positioning

Direct Competitors

Competitor | Differentiation
Claude Code | Terminal-based, all platforms, $20/mo; Codex App has visual orchestration + Automations
Cursor | IDE with multi-model support; Codex App is orchestration layer, not an editor
Warp Oz | Cloud execution + Slack/Linear integration; Codex App runs locally, no Slack
Emdash | 20+ agent CLIs, model-agnostic; Codex App is OpenAI-only but more polished
Tembo | Agent-agnostic with Jira, signed commits, BYOK — enterprise orchestration focus

When to Choose Codex App Over Alternatives

  • Choose Codex App when: You want visual multi-agent orchestration with Skills/Automations in the OpenAI ecosystem
  • Choose Claude Code when: You prefer terminal-based workflows or Anthropic models, or need Windows/Linux
  • Choose Cursor when: You want AI deeply integrated into your IDE, not a separate orchestration layer
  • Choose Warp Oz when: You need Slack integration and cloud agents that work while you're offline
  • Choose Tembo when: You need enterprise integrations (Jira, signed commits, BYOK) and agent-agnostic orchestration

Ideal Customer Profile

Best fit:

  • Developers already paying for ChatGPT Plus/Pro who want parallel agent workflows
  • Mac users who prefer visual orchestration over terminal-only tools
  • Teams that want Skills for Figma, Linear, or cloud deployment automation
  • Developers who need scheduled Automations for routine tasks (CI monitoring, daily briefs)

Poor fit:

  • Windows/Linux users (macOS-only at launch)
  • Enterprises requiring Jira integration or signed commits
  • Teams wanting model flexibility (Anthropic, Google, open-source models)
  • Users who prefer lightweight terminal tools over Electron apps

Viability Assessment

Factor | Assessment
Financial Health | Strong — OpenAI has $11B+ funding, massive ChatGPT revenue
Market Position | Leader — first-party offering from dominant AI provider
Innovation Pace | Rapid — Skills, Automations, and app launched in quick succession
Community/Ecosystem | Active — 1M+ Codex users, open skills repo at github.com/openai/skills
Long-term Outlook | Positive — Windows coming, cloud-based triggers planned

OpenAI can iterate rapidly and has distribution via ChatGPT's massive user base. The main risk is if Claude Code's terminal-based approach proves more popular with developers, or if model quality gaps emerge.


Bottom Line

Codex App is OpenAI's bid to own the multi-agent coding orchestration layer. It's not an IDE (like Cursor) or a terminal tool (like Claude Code) — it's a command center for managing multiple parallel agents with visual feedback, Git integration, and background Automations.

The Skills ecosystem and Automations feature are genuine differentiators. No other Mac coding agent app offers scheduled background tasks that run without prompting and queue results for review. This moves toward the "agents that work while you sleep" vision.

The macOS-only limitation and Electron architecture are real drawbacks. Memory consumption complaints and the lack of Windows/Linux support limit adoption. The OpenAI lock-in may also frustrate teams who want model flexibility.

Recommended for: Developers in the ChatGPT ecosystem who want visual multi-agent orchestration with Skills and Automations on Mac.

Not recommended for: Windows/Linux users, enterprises needing Jira/signed commits, teams wanting model-agnostic tools.

Outlook: Windows support and cloud-based Automation triggers will address current gaps. Expect OpenAI to aggressively expand the Skills ecosystem and push Codex as the default coding tool for ChatGPT users.


Research by Ry Walker Research • methodology