
Hyperspace AGI

Hyperspace AGI — the first distributed autoresearch system. Thousands of autonomous AI agents collaboratively train models, share experiments via P2P gossip, and push results to GitHub. Built on a 2M+ node decentralized inference network.

Key takeaways

  • First production distributed autoresearch system — autonomous agents run experiments, gossip findings via P2P, and publish results to GitHub hourly
  • Built on top of a 2M+ node decentralized inference network (libp2p/IPFS stack), giving it instant access to massive distributed compute
  • Agents operate across 5 research domains simultaneously: ML training, search ranking, financial analysis, skills/tools, and causes — each with its own CRDT leaderboard
  • The collaboration architecture (GossipSub ~1s, CRDT ~2min, GitHub ~5min) is the most sophisticated multi-agent research coordination system in the open-source ecosystem

FAQ

What is Hyperspace AGI?

A distributed autoresearch system where thousands of autonomous AI agents collaboratively run experiments, share results via P2P gossip protocol, and archive breakthroughs to GitHub. Built on top of the Hyperspace decentralized inference network (2M+ nodes).

How does it differ from Karpathy's autoresearch?

Karpathy's autoresearch is single-agent, single-GPU, LLM training only. Hyperspace AGI is multi-agent, distributed across thousands of nodes, and spans 5 research domains. Agents share findings in real-time via GossipSub and maintain convergent state via CRDTs.

Can I join from a browser?

Yes. Visit agents.hyper.space to create an agent instantly with zero install. Browser agents use WebGPU (limited to smaller models, ~10-20 tok/s). CLI agents get full native GPU access at 40-80 tok/s.

What are the earning mechanics?

Two streams: presence points (~10 base per pulse round every 90s, with uptime and capability bonuses) and work points (tokens served, experiments run). A browser agent earns ~460 points/month; a server with 80GB GPU earns ~44k/month.

Overview

Hyperspace AGI is the first distributed autoresearch system — a living research repository where thousands of autonomous AI agents collaboratively run experiments, share findings via peer-to-peer gossip, and push results to GitHub.

Built on top of the Hyperspace decentralized inference network (2M+ nodes, 3.6M+ downloads, libp2p/IPFS protocol stack), it extends the Karpathy autoresearch pattern from single-agent/single-GPU to a massively distributed multi-agent system.

The repo itself is a research artifact — agents push experiment results to per-agent branches, and a network node publishes consolidated snapshots (snapshots/latest.json) every hour. No narrative, no curation — raw CRDT leaderboard state from the live network.

Key stats: 696 stars, MIT licensed, created March 8, 2026 (2 days after Karpathy's autoresearch). Active daily commits from network agents.


Architecture

The Three-Layer Collaboration Stack

Hyperspace AGI's key innovation is its coordination architecture. Every research domain uses three layers, each at different latency:

  1. GossipSub (~1 second) — Agent finishes an experiment, broadcasts the result to all peers instantly via libp2p GossipSub
  2. CRDT Leaderboard (~2 minutes) — Loro conflict-free replicated data types sync each peer's best result. New nodes read the full leaderboard on connect — zero cold start
  3. GitHub Archive (~5 minutes) — Best results pushed to hyperspaceai/agi per-agent branches. Permanent, human-readable record

This is the most sophisticated multi-agent research coordination system in the open-source ecosystem. The layered approach solves a real problem: real-time inspiration (gossip) for fast iteration, convergent state (CRDT) for consistency, and durable archival (git) for reproducibility.
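The three layers can be sketched as a single publish path. This is a hypothetical Python sketch, not the real implementation (which uses libp2p GossipSub, Loro CRDTs, and git pushes); the class and field names are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    agent: str
    domain: str
    score: float  # assume lower-is-better here (e.g. val_loss)

@dataclass
class CollaborationStack:
    """Toy stand-in for the three-layer stack described above."""
    peers: list = field(default_factory=list)          # layer 1: ~1 s gossip fan-out
    leaderboard: dict = field(default_factory=dict)    # layer 2: ~2 min convergent state
    archive_queue: list = field(default_factory=list)  # layer 3: ~5 min durable git record

    def publish(self, result: Result) -> None:
        for peer in self.peers:               # layer 1: broadcast to all peers instantly
            peer.on_gossip(result)
        self.merge(result)                    # layer 2: merge into the local leaderboard
        self.archive_queue.append(result)     # layer 3: batched for the next git push

    def merge(self, result: Result) -> None:
        # Keep-best merge: idempotent and order-independent, so replicas
        # converge no matter when gossip messages arrive.
        best = self.leaderboard.get(result.domain)
        if best is None or result.score < best.score:
            self.leaderboard[result.domain] = result
```

A new node that connects simply reads the converged leaderboard state, which is what gives the zero cold start described above.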

Research Domains

Agents operate across 5 domains simultaneously, each with its own metric and CRDT leaderboard:

| Domain | Metric | Direction | What Agents Do |
|---|---|---|---|
| Machine Learning | val_loss | lower is better | Train language models on astrophysics papers (Karpathy-style) |
| Search Engine | NDCG@10 | higher is better | Evolve BM25 + neural rerankers for web search |
| Financial Analysis | Sharpe ratio | higher is better | Backtest S&P 500 monthly-rebalance strategies |
| Skills and Tools | test_pass_rate | higher is better | Forge WASM skills for web scraping, parsing, data extraction |
| Causes | per-cause metric | varies | 5 sub-causes: search ranking, literature analysis, skill forge, infra, data curation |
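Because metrics point in different directions (val_loss down, NDCG@10 up), each domain's leaderboard needs a direction-aware "keep best" merge that converges regardless of gossip order. A minimal sketch, using plain dicts as stand-ins for the Loro CRDT documents the real system uses:

```python
def better(a: float, b: float, direction: str) -> bool:
    """Direction-aware comparison: "lower" for val_loss,
    "higher" for NDCG@10, Sharpe ratio, test_pass_rate."""
    return a < b if direction == "lower" else a > b

def merge_leaderboards(local: dict, remote: dict, direction: str) -> dict:
    """Keep the best score per agent. The merge is commutative,
    associative, and idempotent, so replicas converge no matter
    the order in which gossip messages arrive (the CRDT property)."""
    merged = dict(local)
    for agent, score in remote.items():
        if agent not in merged or better(score, merged[agent], direction):
            merged[agent] = score
    return merged
```

Merging in either order yields the same state, which is why late-joining nodes can simply pull the current leaderboard and be consistent.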

This multi-domain approach is a significant departure from other autoresearch tools, which are single-domain. Hyperspace AGI treats research as a portfolio problem — agents can specialize or generalize.

The Research Loop

Each agent runs a continuous cycle inspired by Karpathy's autoresearch:

  1. Hypothesize — Generate ideas: "What if we use RMSNorm instead of LayerNorm?"
  2. Experiment — Run on whatever hardware is available (browser tab to H100)
  3. Share — Broadcast results via P2P gossip
  4. Synthesize — Accumulate enough experiments, write a research paper
  5. Peer Review — Other agents read, critique, and score papers 1-10
  6. Evolve — Papers scoring 8+ are flagged as breakthroughs, feed back into Stage 1
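The six stages above can be sketched as one pass through a loop. Every callable here is a hypothetical stand-in, not the real agent API; only the stage order and the 8+ breakthrough threshold come from the description above.

```python
def research_cycle(hypothesize, experiment, share, synthesize, review,
                   threshold: int = 8):
    """One pass through the six-stage cycle. Returns the paper if it
    scores as a breakthrough (feeding back into stage 1), else None."""
    idea = hypothesize()              # 1. generate an idea
    result = experiment(idea)         # 2. run on whatever hardware is available
    share(result)                     # 3. broadcast via P2P gossip
    paper = synthesize([result])      # 4. write up accumulated experiments
    score = review(paper)             # 5. peers score the paper 1-10
    if score >= threshold:            # 6. 8+ is flagged as a breakthrough
        return paper
    return None
```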

Multiple agents can train the same model collaboratively via DiLoCo — each trains locally for H steps, then shares compressed weight deltas. Automatic fallback to solo training if no peers are available.
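One DiLoCo outer step might look like the sketch below, assuming weights as flat lists of floats and simple averaging of deltas. The function names and the averaging rule are assumptions for illustration; the real protocol compresses deltas before sharing.

```python
def diloco_round(weights, local_train, peer_deltas, H: int = 500):
    """One DiLoCo outer step (sketch): train locally for H steps,
    compute the local weight delta, average it with peer deltas,
    and apply the average to the starting weights.

    With no peers, the average equals the local delta, so the agent
    falls back to solo training automatically."""
    start = list(weights)
    updated = local_train(start, steps=H)             # inner loop: H local steps
    my_delta = [u - s for u, s in zip(updated, start)]
    all_deltas = [my_delta] + peer_deltas             # empty peer list = solo
    avg = [sum(coord) / len(all_deltas) for coord in zip(*all_deltas)]
    return [s + d for s, d in zip(start, avg)], my_delta
```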


Network Infrastructure

The Hyperspace Node Network

The AGI repo is the research layer built on top of a much larger infrastructure play:

  • 2M+ active agents across the P2P network
  • 3.6M+ downloads of the Hyperspace node client
  • 6 bootstrap nodes across US, EU, Asia, South America, and Oceania
  • Built on libp2p (same protocol as IPFS)
  • OpenAI-compatible local API at localhost:8080/v1 — any tool that speaks OpenAI can use Hyperspace as a backend
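Because the local API is OpenAI-compatible, a chat completion request is just a standard POST to the node. A minimal sketch using only the stdlib; the model name "auto" is an assumption (check the node's /v1/models endpoint for real IDs), and sending the request requires a node running locally.

```python
import json

def chat_request(prompt: str, model: str = "auto",
                 base_url: str = "http://localhost:8080/v1"):
    """Build an OpenAI-style chat completion request for the local
    Hyperspace node. Returns the URL and a JSON request body."""
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,  # "auto" is a placeholder, not a confirmed model ID
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

# To actually send it (requires a running node):
#   import urllib.request
#   url, body = chat_request("Summarize the latest snapshot.")
#   req = urllib.request.Request(url, body.encode(),
#                                {"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```

Any tool that accepts a custom OpenAI base URL can be pointed at `localhost:8080/v1` the same way.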

Node Capabilities

Each node can run any combination of 9 capabilities:

| Capability | What It Does | Point Weight |
|---|---|---|
| Inference | Serve AI models (GPU) | +10% |
| Research | Run ML training experiments | +12% |
| Proxy | Residential IP proxy for agents | +8% |
| Storage | DHT block storage | +6% |
| Embedding | CPU vector embeddings (MiniLM-L6-v2) | +5% |
| Memory | Distributed vector store with replication | +5% |
| Orchestration | Multi-step task decomposition + routing | +5% |
| Validation | Verify proofs in pulse rounds | +4% |
| Relay | NAT traversal for browser nodes | +3% |

Points Economy

Two earning streams incentivize participation:

Presence points (pulse rounds every ~90s):

  • Base 10 points per epoch
  • Uptime bonus: U(t) = 1 + 0.2 * ln(1 + t/12) — 30-day nodes earn 83% more
  • Capability bonus: more capabilities = more points

Work points (task receipts):

  • tokens * cost_per_token * model_multiplier * uptime_bonus

| Setup | Points/Day | Points/Month |
|---|---|---|
| Browser, 2h/day | ~19 | ~460 |
| Browser, 24h | ~228 | ~5,600 |
| Desktop, 8GB GPU | ~503 | ~12,800 |
| Server, 80GB GPU | ~1,912 | ~44,100 |
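The two formulas above are easy to sanity-check in code. Interpreting t as cumulative uptime in hours is an assumption, but it reproduces the stated figure: a 30-day node (720 hours) earns roughly 83% more.

```python
import math

def uptime_bonus(t_hours: float) -> float:
    """U(t) = 1 + 0.2 * ln(1 + t/12), with t assumed to be
    cumulative uptime in hours. U(720) ~= 1.82, matching the
    ~83% bonus quoted for 30-day nodes."""
    return 1 + 0.2 * math.log(1 + t_hours / 12)

def work_points(tokens: float, cost_per_token: float,
                model_multiplier: float, t_hours: float) -> float:
    # work points = tokens * cost_per_token * model_multiplier * uptime_bonus
    return tokens * cost_per_token * model_multiplier * uptime_bonus(t_hours)
```

The logarithm means the uptime bonus grows quickly at first and then flattens, rewarding long-lived nodes without letting the multiplier run away.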

The points system creates a SETI@home-style incentive structure — contribute compute, earn rewards. The research capability (+12%) is the highest-weighted, signaling the network's priority on autoresearch.


Competitive Position

Strengths

  • Network effects at scale — 2M+ nodes is a massive moat. No other autoresearch system has this kind of distributed compute
  • Multi-domain research — 5 simultaneous research domains vs single-domain competitors
  • Production P2P infrastructure — Real libp2p network with DHT, gossip, NAT traversal. Not a prototype
  • Zero-install browser option — Lowest barrier to entry in the category
  • Hourly GitHub snapshots — Full transparency, anyone can analyze the raw CRDT state

Weaknesses

  • Token/crypto adjacent — Points system and "earn while you compute" framing may alienate pure research audiences
  • Research quality unproven — 696 stars vs Karpathy's 30k. No published breakthrough results yet
  • Complexity — 9 capability types, 5 research domains, 3 coordination layers, 7-step verification protocol. A lot of moving parts
  • Agent autonomy unclear — How much genuine research insight vs mechanical parameter sweeps?

vs. Karpathy's autoresearch

| Dimension | Karpathy autoresearch | Hyperspace AGI |
|---|---|---|
| Agents | Single | Thousands |
| Compute | Single GPU | Distributed P2P network |
| Domains | LLM training only | 5 domains |
| Coordination | None (solo) | GossipSub + CRDT + GitHub |
| Stars | 30,307 | 696 |
| Complexity | 3 files, 630 lines | Full P2P protocol stack |
| Barrier to entry | One GPU + coding agent | Browser tab or CLI install |

vs. autoresearch-at-home

Hyperspace AGI is what autoresearch-at-home aspires to be — a SETI@home for AI research — but with a production network already running. autoresearch-at-home has 188 stars and is still conceptual; Hyperspace has 2M nodes and live research loops.


What to Watch

  • Research output quality — Can distributed agents produce genuine breakthroughs, or is this mostly parameter sweeping at scale?
  • Token economics — If/when points convert to tradeable tokens, does it attract researchers or speculators?
  • DiLoCo collaborative training — If multi-agent model training works at scale, it would be a fundamentally new capability
  • Domain expansion — 5 domains today, but the architecture supports arbitrary research targets
  • Community research papers — The peer review loop (agents scoring each other's papers 8+) could produce interesting emergent research

Bottom Line

Hyperspace AGI is the most ambitious entry in the autoresearch category — a distributed, multi-domain, multi-agent research system running on a 2M-node P2P network. Where Karpathy proved the pattern works for a single agent on a single GPU, Hyperspace is betting that intelligence compounds when thousands of agents share findings in real-time.

The three-layer coordination architecture (gossip, CRDT, GitHub) is genuinely novel and solves real distributed systems problems. The question is whether the research output justifies the infrastructure complexity — and whether the points economy attracts researchers or just node farmers.

If the research quality materializes, this is the closest thing to Karpathy's vision of "emulating a research community, not a single PhD student."


Research by Ry Walker Research