Key takeaways
- Hierarchical multi-agent system where a top-level planning agent coordinates specialized lower-level agents for both deep research and general-purpose tasks
- Built on the Autogenesis protocol — a self-evolution framework where agents can dynamically instantiate, refine, and version their own tools, prompts, and capabilities during execution
- Two protocol layers: RSPL (Resource Substrate) manages resources with explicit state and versioning; SEPL (Self Evolution) specifies the propose/assess/commit improvement loop with rollback
- 3.2k stars, MIT license. Most architecturally ambitious deep research agent — agents that improve themselves while researching
FAQ
What is DeepResearchAgent?
A hierarchical multi-agent system from Skywork AI that uses the Autogenesis self-evolution protocol. A planning agent coordinates specialized sub-agents that can dynamically create and refine their own tools and prompts during execution.
What is Autogenesis?
A self-evolution protocol with two layers: RSPL manages resources (prompts, tools, memory) with versioning and lifecycle; SEPL specifies how agents propose, assess, and commit improvements with auditable lineage and rollback.
Overview
DeepResearchAgent is Skywork AI's hierarchical multi-agent system designed for deep research and general-purpose task solving. Its distinguishing feature is the Autogenesis protocol — a self-evolution framework where agents can dynamically create, refine, and version their own resources during execution.
The architecture uses a top-level planning agent to coordinate specialized lower-level agents (domain agents, tool-calling agents), with an iterative Act-Observe-Optimize-Remember loop that enables agents to improve across runs.
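The hierarchical pattern above can be sketched in a few lines. Everything here (`PlanningAgent`, `SubAgent`, the hard-coded plan) is illustrative, not DeepResearchAgent's actual API: a top-level planner decomposes a task and dispatches each step to a specialized sub-agent.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SubAgent:
    name: str
    handle: Callable[[str], str]  # takes a subtask, returns a result

@dataclass
class PlanningAgent:
    sub_agents: dict[str, SubAgent] = field(default_factory=dict)

    def register(self, agent: SubAgent) -> None:
        self.sub_agents[agent.name] = agent

    def plan(self, task: str) -> list[tuple[str, str]]:
        # A real system would call an LLM planner here; we hard-code a toy plan.
        return [("search", f"find sources for: {task}"),
                ("summarize", f"summarize findings for: {task}")]

    def run(self, task: str) -> list[str]:
        # Dispatch each planned step to the named sub-agent.
        return [self.sub_agents[name].handle(subtask)
                for name, subtask in self.plan(task)]

planner = PlanningAgent()
planner.register(SubAgent("search", lambda t: f"[search results] {t}"))
planner.register(SubAgent("summarize", lambda t: f"[summary] {t}"))
results = planner.run("quantum error correction")
```

The point of the pattern is that the planner owns decomposition while sub-agents stay narrow; the real system layers tool-calling and self-evolution on top of this skeleton.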
Key stats: 3,241 stars, MIT license, Python.
Architecture: Autogenesis Protocol
Two protocol layers:
RSPL (Resource Substrate Protocol Layer):
- Models prompts, agents, tools, environments, and memory as protocol-registered resources
- Explicit state, lifecycle, and versioned interfaces
- Enables composable agent systems
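A hedged sketch of what RSPL-style resource management could look like, assuming nothing beyond the bullets above: every resource (prompt, tool, memory) is registered with an explicit lifecycle state and an append-only version history. All names are hypothetical, not the protocol's real interface.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str                 # "prompt" | "tool" | "memory" | ...
    versions: list[str] = field(default_factory=list)
    state: str = "active"     # explicit lifecycle state

    @property
    def current(self) -> str:
        return self.versions[-1]

class ResourceRegistry:
    def __init__(self) -> None:
        self._resources: dict[str, Resource] = {}

    def register(self, name: str, kind: str, content: str) -> Resource:
        res = Resource(name=name, kind=kind, versions=[content])
        self._resources[name] = res
        return res

    def update(self, name: str, content: str) -> int:
        # New versions append; old versions stay addressable for rollback.
        res = self._resources[name]
        res.versions.append(content)
        return len(res.versions) - 1

    def get(self, name: str, version: int = -1) -> str:
        return self._resources[name].versions[version]

registry = ResourceRegistry()
registry.register("researcher.prompt", "prompt", "You are a careful researcher.")
v1 = registry.update("researcher.prompt", "You are a careful, citing researcher.")
```

Keeping every version addressable is what makes the SEPL layer's rollback and lineage guarantees cheap to implement on top.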
SEPL (Self Evolution Protocol Layer):
- Closed-loop operator interface: propose, assess, commit improvements
- Auditable lineage and rollback capability
- Optimizers: reflection-based methods, GRPO, and Reinforce++
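The propose/assess/commit loop can be sketched as follows. This is a toy illustration under stated assumptions, not SEPL's actual operator interface: a candidate change is scored, only improvements commit, every commit is recorded as lineage, and any commit can be rolled back.

```python
from typing import Callable

class EvolvingPrompt:
    def __init__(self, initial: str) -> None:
        self.history = [initial]          # auditable lineage

    @property
    def current(self) -> str:
        return self.history[-1]

    def propose_assess_commit(self,
                              propose: Callable[[str], str],
                              assess: Callable[[str], float]) -> bool:
        candidate = propose(self.current)            # propose
        if assess(candidate) > assess(self.current): # assess
            self.history.append(candidate)           # commit
            return True
        return False                                 # reject: no state change

    def rollback(self) -> None:
        # Revert the most recent commit; the initial version is kept.
        if len(self.history) > 1:
            self.history.pop()

prompt = EvolvingPrompt("Answer briefly.")
# Toy proposer/assessor; a real system would use an LLM plus an eval suite.
committed = prompt.propose_assess_commit(
    propose=lambda p: p + " Cite sources.",
    assess=lambda p: float(len(p)),   # longer = "better" in this toy scorer
)
```

The closed loop matters because a bad assessment only costs one rejected candidate, and a bad commit is one `rollback()` away from being undone.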
The iterative loop: Act (produce actions via LLM + tools) → Observe (capture outcomes and traces) → Optimize (update prompts/solutions using an optimizer) → Remember (persist insights to memory).
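A minimal sketch of that Act, Observe, Optimize, Remember cycle; the `act` and `optimize` callables are stand-ins for LLM and optimizer calls, and all names are illustrative rather than the repo's API.

```python
def run_episode(task, act, optimize, memory, steps=3):
    """Run one episode of the Act -> Observe -> Optimize -> Remember loop."""
    prompt = "Solve the task."
    for _ in range(steps):
        action = act(prompt, task)          # Act: produce an action
        outcome = f"outcome-of:{action}"    # Observe: capture outcome/trace
        prompt = optimize(prompt, outcome)  # Optimize: update the prompt
        memory.append(outcome)              # Remember: persist the insight
    return prompt, memory

final_prompt, memory = run_episode(
    "survey recent RAG papers",
    act=lambda p, t: f"act({t})",
    optimize=lambda p, o: p + " | learned:" + o,
    memory=[],
)
```

The "Remember" step is what distinguishes this from a plain agent loop: persisted memory carries across runs, which is how agents improve over time rather than per-episode only.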
Competitive Position
Strengths: Most architecturally ambitious deep research agent. Self-evolution is a genuine differentiator. Composable agent/tool/environment system. MIT license.
Weaknesses: Complexity — many abstractions (RSPL, SEPL, optimizers, tracers, versioning). Smaller community than simpler alternatives. Self-evolution benefits are theoretical until proven at scale.
Research by Ry Walker Research