
Browserbase Internal Agents

Browserbase uses internal coding agents to automatically clean up product bloat as new features ship — keeping LLM prompts lean and codebases manageable.

Key takeaways

  • Internal agents automatically clean up code as new features ship and old ones ramp down
  • Motivated by a novel insight: product bloat degrades LLM performance because prompts accumulate outdated instructions
  • Represents the shift from agents-as-builders to agents-as-maintainers

FAQ

What are Browserbase's internal agents?

Custom coding agents that automatically clean up product bloat — removing outdated code, configurations, and prompt instructions as the team ships new features and sunsets old ones.

Why does product bloat matter for AI companies?

More product bloat means prompts still carry instructions for use cases that were added months ago and have since been ramped down, degrading LLM performance. It also increases codebase complexity, making it harder to ship reliably.

Who is behind Browserbase's internal agents?

Kyle Jeong, who works on engineering and growth at Browserbase, described the approach in social posts. Browserbase provides headless browser infrastructure for AI agents.

Overview

Browserbase, the headless browser infrastructure company, is using internal coding agents for a distinctive purpose: automated product maintenance. As described by Kyle Jeong, their approach targets a problem unique to AI-native companies: product bloat degrades LLM performance because prompts accumulate instructions for outdated use cases, and complex codebases become harder to ship reliably.

Their solution: internal agents that automatically clean up code and configuration as the team ships new features and ramps old ones down.

The Problem

Kyle Jeong's framing identifies a feedback loop that many AI-native companies face:

"More product bloat = poor LLM performance, especially for long-running agents: your prompt is still trying to serve a use case you added 2 months ago. More product bloat = more complex codebases, making it higher effort to ship reliably."

This is an underappreciated dynamic. As companies use LLMs to build products faster, the resulting product complexity degrades LLM performance — creating a vicious cycle.

The Solution

In Jeong's words: "Our solution: internal agents that automatically clean up as we ship new features & ramp old ones down."

Rather than using coding agents primarily to build new features (the dominant pattern at Stripe, Ramp, Coinbase, etc.), Browserbase deploys them for maintenance: removing dead code, cleaning up outdated prompt instructions, and simplifying codebases as features are deprecated.
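To make the prompt-cleanup half of this concrete, here is a minimal sketch of what such a maintenance pass could look like. This is purely illustrative: the comment-tag format, the feature names, and the `prune_prompt` helper are assumptions for the example, not Browserbase's actual implementation.

```python
# Illustrative only: prompt instructions are tagged per feature, and a
# maintenance pass drops the blocks for features that have been ramped down.
import re

PROMPT = """\
You are a browser automation assistant.
<!-- feature: pdf_export -->
When asked to export, render the page to PDF first.
<!-- /feature -->
<!-- feature: session_replay -->
Record every action so the session can be replayed.
<!-- /feature -->
Always confirm before closing a session.
"""

# Hypothetical feature registry: pdf_export has been sunset.
ACTIVE_FEATURES = {"session_replay"}

def prune_prompt(prompt: str, active: set[str]) -> str:
    """Remove tagged instruction blocks whose feature is no longer active."""
    pattern = re.compile(
        r"<!-- feature: (\w+) -->\n.*?<!-- /feature -->\n",
        re.DOTALL,
    )
    # Keep the block verbatim if its feature is still active, else drop it.
    def keep_or_drop(m: re.Match) -> str:
        return m.group(0) if m.group(1) in active else ""
    return pattern.sub(keep_or_drop, prompt)

pruned = prune_prompt(PROMPT, ACTIVE_FEATURES)
print(pruned)
```

An agent doing this at scale would presumably also handle the harder parts (removing the dead code paths behind the feature, not just the prompt text), but the core move is the same: tie each instruction to a feature's lifecycle so cleanup can be automated when the feature is retired.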

Why This Matters

This represents a meaningful expansion of the in-house coding agent pattern:

  1. Agents-as-maintainers, not just agents-as-builders — Most documented systems focus on creating new code. Browserbase shows agents can also remove code, which is often harder and more valuable.

  2. LLM-aware software engineering — Recognizing that codebase complexity directly impacts AI agent performance creates a new incentive for aggressive cleanup that didn't exist in pre-AI engineering.

  3. Compounding benefit — Each cleanup cycle makes future agent work more reliable, creating a virtuous cycle instead of the vicious one.

Context

Browserbase provides headless browser infrastructure for AI agents — they're deeply embedded in the AI tooling ecosystem and are likely early to experience the "bloat degrades LLM performance" problem at scale. Kyle Jeong also described building a "CEO CLI" connecting Claude to Snowflake for pulling standup summaries, tracking OKRs, and verifying RFCs — suggesting a broader internal agent culture beyond just code cleanup.


Research by Ry Walker Research