Key takeaways
- Observational Memory achieves 94.9% on LongMemEval with gpt-5-mini — the highest published score to date
- Built by the Gatsby team (Y Combinator W25), designed for TypeScript-first AI development
- All-in-one framework: agents, workflows, RAG, memory, MCP, evals in one package
FAQ
What is Mastra?
Mastra is a TypeScript framework for building AI agents and applications, featuring agents, workflows, RAG, and a novel memory system.
Who created Mastra?
The team behind Gatsby, including Tyler Barnes (ex-Netlify, ex-Gatsby staff engineer), backed by Y Combinator W25.
What is Observational Memory?
A memory system that uses background agents to compress conversations into dense observations, achieving SOTA benchmark scores without RAG or graphs.
How does Mastra compare to LangChain?
Mastra is TypeScript-native and batteries-included; LangChain is Python-first and more modular, at the cost of higher integration complexity.
Is Mastra open source?
Yes. Mastra is fully open source under the Apache 2.0 license.
Executive Summary
Mastra is an open-source TypeScript framework for building AI-powered applications and agents. Created by the team behind Gatsby and backed by Y Combinator (W25), it provides a batteries-included approach: agents, workflows, RAG, memory, MCP servers, and evals in one cohesive package. The standout feature is Observational Memory, which achieves state-of-the-art benchmark scores without RAG or knowledge graphs.
| Attribute | Value |
|---|---|
| Company | Mastra (Gatsby team) |
| Founded | 2024 |
| Funding | Y Combinator W25 |
| Employees | ~10 |
| Headquarters | San Francisco, CA |
Product Overview
Mastra is an open-source TypeScript framework for building AI-powered applications and agents.[1] Created by the team behind Gatsby, Mastra is designed around the principle that "Python trains, TypeScript ships."[2]
The framework supports 40+ model providers through a unified interface — OpenAI, Anthropic, Google, DeepSeek, and more.
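The unified-interface idea can be illustrated with a small self-contained sketch. Note this is a hypothetical shape, not Mastra's real provider layer (which builds on the Vercel AI SDK): caller code stays identical no matter which provider sits behind the interface.

```typescript
// Illustrative sketch of a provider-agnostic model interface.
// The ChatModel type and the stub providers are hypothetical, for illustration only.
interface ChatModel {
  provider: string;
  generate(prompt: string): Promise<string>;
}

// Two stubbed providers behind the same interface.
const openaiStub: ChatModel = {
  provider: "openai",
  generate: async (prompt) => `[openai] echo: ${prompt}`,
};

const anthropicStub: ChatModel = {
  provider: "anthropic",
  generate: async (prompt) => `[anthropic] echo: ${prompt}`,
};

// Caller code does not change when the provider does.
async function ask(model: ChatModel, prompt: string): Promise<string> {
  return model.generate(prompt);
}

async function main() {
  console.log(await ask(openaiStub, "hi"));
  console.log(await ask(anthropicStub, "hi"));
}
main();
```

Swapping OpenAI for Anthropic (or any of the 40+ providers) then becomes a one-line change at construction time, not a rewrite of call sites.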
Key Capabilities
| Capability | Description |
|---|---|
| Agents | Autonomous agents that use LLMs and tools for open-ended tasks |
| Workflows | Graph-based orchestration with `.then()`, `.branch()`, `.parallel()` |
| Observational Memory | SOTA memory system (94.9% on LongMemEval) |
| RAG | Built-in retrieval from APIs, databases, and files |
| MCP | Model Context Protocol server authoring |
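The chaining style named in the Workflows row can be sketched with a minimal self-contained builder. The `Workflow` class below is a stand-in written for this document, not Mastra's actual implementation; it only shows how `.then()` sequences steps and `.parallel()` fans one output into several steps.

```typescript
// Minimal sketch of graph-style step chaining (hypothetical, not Mastra's API).
type Step<I, O> = (input: I) => Promise<O>;

class Workflow<I, O> {
  constructor(private run_: Step<I, O>) {}

  // Run `next` after the current chain completes.
  then<N>(next: Step<O, N>): Workflow<I, N> {
    return new Workflow(async (input: I) => next(await this.run_(input)));
  }

  // Fan the current output out to several steps and collect all results.
  parallel<N>(steps: Step<O, N>[]): Workflow<I, N[]> {
    return new Workflow(async (input: I) => {
      const out = await this.run_(input);
      return Promise.all(steps.map((s) => s(out)));
    });
  }

  run(input: I): Promise<O> {
    return this.run_(input);
  }
}

// Example: normalize a string, then compute two derived values in parallel.
const wf = new Workflow(async (s: string) => s.trim().toLowerCase())
  .then(async (s) => s.split(/\s+/))
  .parallel([async (w) => w.length, async (w) => w.join("-").length]);

wf.run("  Hello Mastra World  ").then((r) => console.log(r)); // → [3, 18]
```

The appeal of this style is that the graph is expressed in plain TypeScript, so the type of each step's input is checked against the previous step's output at compile time.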
Product Surfaces / Editions
| Surface | Description | Availability |
|---|---|---|
| Framework | TypeScript npm package | GA |
| CLI | `npm create mastra@latest` | GA |
| Enterprise | Hosted services and support | Contact sales |
Technical Architecture
Language: TypeScript (Node.js)
Storage adapters: PostgreSQL, LibSQL, MongoDB
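The adapter pattern behind those storage backends can be sketched as follows. The `StorageAdapter` interface and `MemoryAdapter` class here are invented for illustration and do not match Mastra's actual adapter contract; the point is only that application code targets one interface while the backend (PostgreSQL, LibSQL, MongoDB) is swappable.

```typescript
// Hypothetical sketch of the storage-adapter pattern (not Mastra's real contract).
interface StorageAdapter {
  set(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// In-memory stand-in; a real deployment would back this
// with PostgreSQL, LibSQL, or MongoDB.
class MemoryAdapter implements StorageAdapter {
  private data = new Map<string, string>();
  async set(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
  async get(key: string): Promise<string | undefined> {
    return this.data.get(key);
  }
}

const store: StorageAdapter = new MemoryAdapter();
store
  .set("thread:1", "observation payload")
  .then(() => store.get("thread:1"))
  .then((v) => console.log(v)); // prints "observation payload"
```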
Key Technical Details
| Aspect | Detail |
|---|---|
| Deployment | Self-hosted (Node.js) |
| Model(s) | 40+ providers (OpenAI, Anthropic, Google, etc.) |
| Integrations | Vercel AI SDK, CopilotKit, React, Next.js |
| Open Source | Yes (Apache 2.0) |
Installation:

```shell
npm create mastra@latest
```
Observational Memory (SOTA)
Mastra's standout feature is Observational Memory (OM), achieving state-of-the-art on LongMemEval:[3][4][5]
| System | Model | LongMemEval |
|---|---|---|
| Mastra OM | gpt-5-mini | 94.87% |
| Mastra OM | gemini-3-pro-preview | 93.27% |
| Supermemory | gpt-4o | 81.60% |
| Full context | gpt-4o | 60.20% |
OM uses background agents to compress conversations into dense observations, achieving 5-40× compression without RAG or graphs.[6]
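To make the compression idea concrete, here is a toy extractive stand-in: keep one dense observation line per message instead of the full transcript, then compare character counts. This naive compressor is written for this document only; OM's actual approach uses background agents, not clause truncation.

```typescript
// Toy illustration of observation-style compression (naive stand-in, not OM).
interface Message {
  role: "user" | "assistant";
  text: string;
}

// Keep only the first clause of each message as a dense "observation".
function observe(history: Message[]): string[] {
  return history.map((m) => `${m.role}: ${m.text.split(".")[0]}`);
}

const history: Message[] = [
  {
    role: "user",
    text: "My name is Ada. I work on compilers. I live in Berlin and I enjoy long walks.",
  },
  {
    role: "assistant",
    text: "Nice to meet you, Ada. Compilers are a deep field. Berlin has a great tech scene.",
  },
];

const observations = observe(history);
const before = history.reduce((n, m) => n + m.text.length, 0);
const after = observations.join("\n").length;
console.log(observations);
console.log(`compression: ${(before / after).toFixed(1)}x`);
```

Even this crude heuristic shrinks the context severalfold; OM's agent-driven observations reach 5-40× while preserving far more of the salient facts.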
Strengths
- SOTA memory — Observational Memory beats all previously published LongMemEval results
- TypeScript-native — First-class TS experience, not a Python port
- Batteries included — One framework for agents, workflows, RAG, memory, evals
- Prompt cache friendly — Stable context windows enable cost savings
- Gatsby pedigree — Team has shipped widely-adopted open source before
- YC backing — Credibility and resources for long-term development
- Human-in-the-loop — Built-in suspend/resume for approval workflows
Cautions
- TypeScript only — Python developers have to switch stacks
- Young project — Limited production track record vs. established frameworks
- OM sync limitation — observation extraction currently runs synchronously and blocks the conversation (async mode is reportedly shipping soon)
- Gatsby ghost — Gatsby's decline after Next.js dominance raises questions about staying power
- Memory vendor lock-in — OM requires Mastra's storage adapters; not portable
Pricing & Licensing
| Tier | Price | Includes |
|---|---|---|
| Open Source | Free | Full framework (Apache 2.0) |
| Enterprise | Custom | Hosted services, support |
Licensing model: Open source (Apache 2.0) + enterprise services
Hidden costs: Infrastructure costs for self-hosting; enterprise support requires contacting sales[2]
Competitive Positioning
Direct Competitors
| Competitor | Differentiation |
|---|---|
| LangChain | LangChain is Python-first; Mastra is TypeScript-native |
| CrewAI | CrewAI focuses on agent teams; Mastra is a full framework |
| Vercel AI SDK | The AI SDK is lower-level; Mastra builds on it internally |
| Supermemory | Mastra OM outperforms Supermemory on benchmarks |
When to Choose Mastra Over Alternatives
- Choose Mastra when: You want a TypeScript-native, batteries-included framework with SOTA memory
- Choose LangChain when: You're Python-first or need maximum modularity
- Choose CrewAI when: You specifically need multi-agent team orchestration
- Choose Vercel AI SDK when: You want lower-level primitives rather than a full framework
Ideal Customer Profile
Best fit:
- TypeScript developers building AI applications
- Teams wanting all-in-one framework without stitching libraries
- Projects needing production-ready memory that scales with conversation
- React/Next.js developers wanting familiar patterns
- Open source advocates wanting commercial backing
Poor fit:
- Python-first teams
- Organizations needing proven production track record
- Teams wanting to avoid any vendor lock-in
- Projects requiring immediate async memory operations
Viability Assessment
| Factor | Assessment |
|---|---|
| Financial Health | Moderate — YC backing, early stage |
| Market Position | Challenger — new but differentiated |
| Innovation Pace | Rapid — SOTA memory, active development |
| Community/Ecosystem | Growing — Gatsby team reputation |
| Long-term Outlook | Positive — strong technical differentiation |
Mastra has YC backing and clear technical differentiation in Observational Memory. The Gatsby team has proven open-source execution. The risk is whether they can maintain momentum and avoid Gatsby's fate of being eclipsed by competitors.
Bottom Line
Mastra is the most complete TypeScript AI framework available. The Observational Memory system is genuinely novel — achieving SOTA without RAG or graphs is impressive, and the prompt-caching benefits have real cost implications at scale.
The Gatsby team knows how to build developer tools that get adopted. Whether Mastra becomes the "Next.js of AI frameworks" depends on TypeScript developer adoption.
Recommended for: TypeScript developers wanting a batteries-included AI framework with SOTA memory and React/Next.js integration.
Not recommended for: Python-first teams, organizations needing proven production track record, or projects avoiding any framework lock-in.
Outlook: If Observational Memory proves out in production, Mastra could become the default TypeScript AI framework. Watch for async OM mode and enterprise adoption signals.
Research by Ry Walker Research • methodology