Key takeaways
- Andrew Ng's answer to agent API drift — a CLI (chub) that gives coding agents curated, versioned, language-specific documentation instead of relying on stale training data. 68+ APIs covered, 5.7k stars in 4 days.
- Two content types with a clean split: Docs ("what to know" — large, ephemeral, fetched per-task) and Skills ("how to do it" — small, persistent, installed into agent skill directories). Both use the SKILL.md standard.
- The learning loop is the real innovation: agents annotate docs with local notes that persist across sessions, and up/down feedback flows back to doc authors. Agents get smarter without fine-tuning.
- Positioned as a community-maintained package manager for agent knowledge — anyone contributes docs via PR, usage signals drive quality. Think npm for API documentation.
FAQ
What is Context Hub?
An open-source CLI tool (chub) from Andrew Ng / DeepLearning.AI that serves curated, LLM-optimized API documentation to coding agents. Instead of hallucinating APIs from training data, agents fetch current, correct docs on demand.
How does Context Hub differ from MCP or skills frameworks?
MCP provides tool execution capabilities. Skills frameworks provide behavioral instructions. Context Hub provides factual reference documentation — the API shapes, parameters, and examples that agents need to write correct code. They're complementary layers.
Does Context Hub work with my coding agent?
Yes — it's agent-agnostic. Any agent that can run CLI commands can use chub. It ships with a SKILL.md for Claude Code, and works with Cursor, Codex, or any agent that supports shell access.
Overview
Coding agents hallucinate APIs. They use deprecated endpoints, invent parameters, and mix up SDK versions — because their training data is months or years stale. Context Hub is Andrew Ng's open-source solution: a curated, versioned documentation registry that agents fetch via CLI instead of guessing from memory.
Released March 9, 2026. 5,764 stars in 4 days. 68+ API docs already contributed. MIT licensed, published as @aisuite/chub on npm.
How It Works
The Agent Workflow
```
Agent needs to call Stripe API
  ↓
chub search "stripe"                                → finds stripe/api
  ↓
chub get stripe/api --lang js                       → fetches current JS docs
  ↓
Agent reads docs, writes correct code
  ↓
chub annotate stripe/api "Webhook needs raw body"   → saves for next time
  ↓
chub feedback stripe/api up                         → signals quality to authors
```
The agent runs chub as a CLI command. No special integration needed — any agent with shell access works. Context Hub ships with a SKILL.md that teaches agents when and how to use it.
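Because chub is just a CLI, an agent harness can drive the whole loop by assembling argv lists and reading stdout. A minimal sketch in Python (the `build_cmd` and `run` helpers are hypothetical; only the chub subcommands and flags come from the workflow above):

```python
# Hypothetical agent-side wrapper for the chub CLI. Only the chub
# subcommands and --lang flag come from the documented workflow;
# everything else here is an illustration.
import subprocess

def build_cmd(action: str, *args: str, **flags: str) -> list[str]:
    """Assemble an argv list, e.g. build_cmd("get", "stripe/api",
    lang="js") -> ["chub", "get", "stripe/api", "--lang", "js"]."""
    cmd = ["chub", action, *args]
    for name, value in flags.items():
        cmd += [f"--{name}", value]
    return cmd

def run(action: str, *args: str, **flags: str) -> str:
    """Run chub and return its stdout for the agent to read.
    Requires chub on PATH."""
    return subprocess.run(
        build_cmd(action, *args, **flags),
        capture_output=True, text=True, check=True,
    ).stdout

# The workflow above, expressed as calls:
# run("search", "stripe")
# run("get", "stripe/api", lang="js")
# run("annotate", "stripe/api", "Webhook needs raw body")
# run("feedback", "stripe/api", "up")
```

Since the agent only needs to spawn a process and read text, the same wrapper pattern works in any language the agent harness is written in.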
Two Content Types
Context Hub makes a clean distinction between docs and skills:
| Dimension | Docs | Skills |
|---|---|---|
| Purpose | Reference knowledge ("what to know") | Behavioral instructions ("how to do it") |
| Size | Large (10K-50K+ tokens) | Small (under 500 lines) |
| Lifecycle | Ephemeral, fetched per-task | Persistent, installed into agent |
| Language variants | Yes (Python, JS, TS) | No (typically language-agnostic) |
| Version variants | Yes (v1, v2, etc.) | No |
| Entry file | DOC.md | SKILL.md |
Both live in the same registry, fetched with the same chub get command. The CLI auto-detects the type.
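The auto-detection can be pictured as a check on the entry file, since per the table docs ship DOC.md and skills ship SKILL.md. A hypothetical sketch (the function and the file-list shape are assumptions, not the real implementation):

```python
# Hypothetical sketch of type auto-detection by entry file, per the
# table above: docs have a DOC.md entry point, skills a SKILL.md.
def detect_type(entry_files: list[str]) -> str:
    """Classify a registry entry as 'doc' or 'skill'."""
    if "DOC.md" in entry_files:
        return "doc"
    if "SKILL.md" in entry_files:
        return "skill"
    raise ValueError("no recognized entry file")

# A doc with language variants might ship DOC.md plus per-language
# files (an assumption about layout):
# detect_type(["DOC.md", "python.md", "js.md"]) -> "doc"
```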
The Learning Loop
The most interesting part isn't the docs — it's how agents learn from using them:
- Annotations — Local notes agents attach to docs ("Webhook verification requires raw body — do not parse before verifying"). They persist across sessions and appear automatically on future chub get calls, so the agent doesn't have to rediscover workarounds.
- Feedback — Up/down ratings with labels (outdated, inaccurate, incomplete, well-structured, helpful). Ratings flow back to doc authors, so the docs improve based on real agent usage signals.
- Community contributions — Anyone can submit docs via PR. Content is plain markdown with YAML frontmatter. The repo IS the docs — you can inspect exactly what your agent reads.
This creates a virtuous cycle: agents use docs → annotate gaps → authors fix gaps → docs get better → agents get better.
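The annotation half of the loop can be sketched as doc-scoped local storage that is replayed on later fetches. The path, JSON schema, and function names below are assumptions; only the behavior (notes persist and reappear on future chub get calls) comes from the text:

```python
# Hypothetical sketch of doc-scoped annotation persistence: a JSON
# file under the local cache, keyed by doc id, appended to the doc
# text on later fetches. Path and schema are assumptions.
import json
from pathlib import Path

STORE = Path.home() / ".chub" / "annotations.json"

def annotate(doc_id: str, note: str, store: Path = STORE) -> None:
    """Append a persistent note for a doc."""
    notes = json.loads(store.read_text()) if store.exists() else {}
    notes.setdefault(doc_id, []).append(note)
    store.parent.mkdir(parents=True, exist_ok=True)
    store.write_text(json.dumps(notes, indent=2))

def with_annotations(doc_id: str, doc_text: str, store: Path = STORE) -> str:
    """Return doc text with any saved notes appended, as a future
    fetch of the same doc would show them."""
    notes = json.loads(store.read_text()) if store.exists() else {}
    if doc_id not in notes:
        return doc_text
    return doc_text + "\n\n## Your notes\n" + "\n".join(
        f"- {n}" for n in notes[doc_id]
    )
```

Because the store lives on the local machine, this sketch also makes the "local-only, no cross-machine sharing" limitation noted later concrete: the notes never leave `~/.chub/`.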
Architecture
```
Content repo (markdown + frontmatter)
  ↓ chub build → registry.json + content tree
CDN (serves registry + files)
  ↓ CLI fetches from here
~/.chub/ (local cache + annotations)
  ↓ CLI reads from here
Agent (consumes docs via stdout)
```
Multiple sources are supported — remote CDN for community docs, local folders for private/internal docs. Everything merges at the CLI level. Trust is signaled via the source field: official, maintainer, or community.
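The merge step can be sketched as a dictionary union over registries, with local/private entries shadowing remote ones. The precedence rule and entry shape are assumptions; the three trust levels come from the source field described above:

```python
# Hypothetical sketch of merging registry sources at the CLI level.
# The local-overrides-remote precedence and the entry shape are
# assumptions; the trust levels come from the source field.
TRUST = {"official": 3, "maintainer": 2, "community": 1}

def merge_registries(remote: dict, local: dict) -> dict:
    """Merge doc entries by id; a local/private entry shadows the
    remote one so teams can pin their own version of a doc."""
    merged = dict(remote)
    merged.update(local)
    return merged

remote = {"stripe/api": {"source": "community"}}
local = {
    "stripe/api": {"source": "official"},       # pinned local copy
    "acme/internal": {"source": "maintainer"},  # private-only doc
}
registry = merge_registries(remote, local)
```

Keeping the merge at the CLI layer means the remote CDN never needs to know about private docs, which fits the local-folder model the text describes.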
CLI Commands
| Command | Purpose |
|---|---|
| chub search [query] | Search docs and skills |
| chub get [id] --lang py | Fetch docs or skills by ID |
| chub annotate [id] [note] | Attach a persistent note |
| chub annotate --list | List all annotations |
| chub feedback [id] up/down | Rate a doc (sent to maintainers) |
Category Fit
Context Hub sits at the intersection of three existing categories on this site:
- Agentic Skills Frameworks — Uses the SKILL.md standard, serves skills alongside docs. But it's not a methodology framework — it's a content registry.
- Agent Self-Improvement — The annotation/feedback loop is a lightweight form of agent learning. But it's not memory infrastructure like Mem0 or Letta — it's doc-scoped.
- AI Coding Assistants — It's built for coding agents, but it's not a coding agent itself — it's infrastructure that makes them better.
Best fit: new sub-category within Agentic Skills Frameworks — "Agent Knowledge Registries." Tools that serve curated, versioned knowledge to agents on demand. Context Hub is the first major entrant, but the pattern (community-maintained, agent-optimized documentation) will likely spawn more.
Strengths and Limitations
Strengths:
- Solves a real, painful problem (API hallucination)
- Andrew Ng's backing gives it instant credibility and adoption
- Agent-agnostic — works with any coding agent that has shell access
- Clean docs/skills separation with thoughtful design decisions
- The annotation loop is genuine agent learning without fine-tuning
- Community-driven content model scales well
- 68+ APIs already covered in 4 days
Limitations:
- Requires agents to know to use chub (needs prompting or SKILL.md installation)
- Only as good as the contributed docs — community quality varies
- No MCP server yet (CLI-only, so agents need shell access)
- Annotations are local-only — no sharing across agents/machines
- No private doc hosting (local folders only, no team/org registry)
Comparisons
| Tool | What it does | Relationship to Context Hub |
|---|---|---|
| MCP servers | Give agents tools to execute | Complementary — Context Hub gives knowledge, MCP gives capabilities |
| SKILL.md | Behavioral instructions for agents | Context Hub serves skills in SKILL.md format + adds docs |
| Agentic Context Engine (ACE) | Agents learn from execution traces | ACE learns from doing; Context Hub provides knowledge before doing |
| Web search | Agents search the open web | Context Hub replaces noisy web search with curated, agent-optimized docs |
| RAG | Retrieve from document stores | Context Hub is specialized RAG for API docs with versioning and feedback |
Bottom Line
Context Hub is infrastructure, not an agent. It makes every coding agent better by solving the API hallucination problem at the source — giving agents current, correct documentation instead of relying on stale training data.
The bigger play is the community model: if Context Hub becomes the npm of agent documentation, it creates a network effect where more contributions → better docs → more agent usage → more feedback → even better docs. Andrew Ng's credibility accelerates this flywheel.
The annotation/feedback loop is what separates it from "just a docs repo." Agents that use Context Hub get smarter over time — not through fine-tuning, but through accumulated local knowledge and community-driven quality improvement.
Watch for: MCP server integration, team/org private registries, cross-agent annotation sharing, and whether API providers start maintaining their own Context Hub entries as a first-class distribution channel.
Research by Ry Walker Research • methodology