
Deep Research (dzhng)

dzhng/deep-research — the simplest deep research agent implementation. Under 500 lines of code with depth/breadth controls for iterative research. 18.6k stars, MIT license, TypeScript. Sponsored by Aomni.

Key takeaways

  • The simplest deep research agent — under 500 lines of code with explicit depth and breadth parameters that control research iteration
  • Iterative refinement loop: search, process results, extract learnings and new directions, recurse until depth reaches zero, then generate final report
  • 18.6k stars, MIT license, TypeScript. Designed to be easy to understand and build on top of — a reference implementation, not a framework
  • Sponsored by Aomni. The "less is more" approach to deep research — proves the pattern works in minimal code

FAQ

What is dzhng/deep-research?

A minimal deep research agent (~500 LoC) that iteratively searches, processes results, extracts learnings and new research directions, and recurses until the configured depth is exhausted, then generates a markdown report.

How do depth and breadth parameters work?

Breadth controls how many search queries run per iteration. Depth controls how many recursive rounds of research occur. Higher values produce more comprehensive but slower reports.
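Because each query at one level can spawn a narrower recursive round at the next, the two knobs compound. A rough sketch of the total query count, assuming breadth halves at each recursive level (an illustrative simplification — check the repo for its exact rule):

```typescript
// Hypothetical estimate of total SERP queries for a run, assuming
// breadth is halved at each recursive level (illustrative only).
function estimateQueries(breadth: number, depth: number): number {
  if (depth === 0) return 0; // depth exhausted: no more searching
  // this level runs `breadth` queries; each can spawn a narrower round
  return breadth + breadth * estimateQueries(Math.ceil(breadth / 2), depth - 1);
}

// e.g. breadth 4, depth 2 → 4 first-round queries + 4 × 2 follow-ups = 12
```

This is why modest increases to either parameter noticeably lengthen a run: cost grows with the product of the two, not their sum.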

Overview

dzhng/deep-research is the minimalist's deep research agent — an AI-powered research assistant that performs iterative research on any topic in under 500 lines of TypeScript. Created by David Zhang in February 2025, it quickly hit 18.6k stars as a clean reference implementation.

The design philosophy is explicit: keep the repo small enough to understand completely, then build on top. Two parameters control everything — breadth (queries per iteration) and depth (recursive rounds).

Key stats: 18,569 stars, MIT license, TypeScript. Sponsored by Aomni.


Architecture

The iterative loop:

  1. Deep Research — Takes user query + breadth + depth parameters
  2. SERP Queries — Generates and executes search queries
  3. Process Results — Extracts learnings and new research directions
  4. Decision — If depth is greater than 0, pick next direction (informed by prior goals, new questions, and learnings) and recurse
  5. Output — When depth reaches 0, generate markdown report

The simplicity is the point — no frameworks, no complex orchestration, just a recursive search-and-learn loop.
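The five steps above can be sketched as a single recursive function. All names and shapes here are illustrative, not the repo's actual exports; the stubs stand in for the LLM and SERP calls the real implementation makes:

```typescript
// Minimal sketch of the recursive search-and-learn loop.
interface ResearchState {
  learnings: string[];
  visitedUrls: string[];
}

// Stub: the real repo asks an LLM for `n` search queries informed by
// prior learnings.
async function generateSerpQueries(query: string, learnings: string[], n: number): Promise<string[]> {
  return Array.from({ length: n }, (_, i) => `${query} (angle ${i + 1})`);
}

// Stub: the real repo searches the web and has an LLM extract learnings
// and follow-up research directions from the results.
async function searchAndExtract(serpQuery: string): Promise<{ learnings: string[]; followUps: string[]; urls: string[] }> {
  return {
    learnings: [`learning from "${serpQuery}"`],
    followUps: [`deeper question about "${serpQuery}"`],
    urls: [`https://example.com/${encodeURIComponent(serpQuery)}`],
  };
}

async function deepResearch(
  query: string,
  breadth: number,
  depth: number,
  state: ResearchState = { learnings: [], visitedUrls: [] },
): Promise<ResearchState> {
  if (depth === 0) return state; // depth exhausted: caller renders the report

  for (const q of await generateSerpQueries(query, state.learnings, breadth)) {
    const { learnings, followUps, urls } = await searchAndExtract(q);
    state.learnings.push(...learnings);
    state.visitedUrls.push(...urls);
    // Recurse on a promising new direction with narrower breadth
    if (followUps.length > 0) {
      await deepResearch(followUps[0], Math.ceil(breadth / 2), depth - 1, state);
    }
  }
  return state;
}
```

Accumulating learnings in shared state and feeding them back into query generation is what makes later rounds progressively more targeted than the first.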


Competitive Position

Strengths: Extremely simple, easy to fork and customize. MIT license. Great starting point for building custom research agents.

Weaknesses: No built-in UI, no document ingestion (web only), prompt-based (no fine-tuned model). Limited source diversity compared to GPT Researcher's 20+ source approach.


Research by Ry Walker Research