
LangMem

LangMem is an open-source Python SDK by LangChain that gives LangGraph agents long-term memory — semantic, episodic, and procedural — enabling agents that learn and adapt across conversations.

Key takeaways

  • Three memory types — semantic (facts/knowledge), episodic (past experiences), and procedural (learned behaviors/prompt rules) — modeled after human memory
  • Two integration modes: hot-path tools agents call during conversations, and background memory managers that extract memories asynchronously
  • Built as a functional core with pluggable storage — works with any backend but integrates natively with LangGraph BaseStore and Platform deployments
  • Memory consolidation prevents unbounded growth by merging related memories and resolving contradictions, avoiding the "memory hoarding" problem

FAQ

What is LangMem?

LangMem is an open-source Python SDK from LangChain that adds long-term memory to LangGraph agents. It extracts facts, experiences, and behavioral patterns from conversations and stores them for future retrieval.

How does LangMem differ from simple RAG?

RAG ingests static documents offline. LangMem extracts memories from live agent interactions, consolidates and deduplicates them over time, and can also update agent behavior (procedural memory) — not just retrieve facts.

Does LangMem require LangGraph?

No. The core API is framework-agnostic and works with any storage system. However, it integrates most seamlessly with LangGraph BaseStore and the LangGraph Platform, which provides managed storage out of the box.

Is there a managed service?

Yes. LangChain offers a managed memory service that builds additional long-term memory capabilities on top of the open-source SDK, available through the LangGraph Platform.

What Is LangMem?

LangMem is an open-source Python SDK from LangChain that adds long-term memory capabilities to AI agents. Released in early 2025, it provides tooling to extract important information from conversations, optimize agent behavior through prompt refinement, and maintain persistent memory across sessions.

The library builds on LangChain's earlier experiments with hosted memory services and LangGraph's persistent storage layer. It ships as a lightweight pip install langmem package with ~1.3K GitHub stars.

How It Works: Three Memory Types

LangMem organizes long-term memory into three types modeled after human cognition:

Semantic Memory stores facts and knowledge — user preferences, domain knowledge, relationship data. It supports both collections (unbounded searchable stores) and profiles (structured single-document state like a user card). The system handles memory consolidation automatically, merging related facts and resolving contradictions.
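The consolidation idea can be pictured with a minimal sketch in plain Python. This is not LangMem's actual implementation (which is LLM-driven); it just shows the shape of the behavior: a new fact about the same subject and attribute supersedes the old one instead of accumulating alongside it.

```python
from dataclasses import dataclass


@dataclass
class Fact:
    subject: str    # e.g. "user"
    attribute: str  # e.g. "favorite_language"
    value: str


def consolidate(memories: list[Fact], new: Fact) -> list[Fact]:
    """Merge a new fact into the store: a fact about the same
    (subject, attribute) pair replaces the old one, resolving the
    contradiction rather than hoarding both versions."""
    kept = [m for m in memories
            if (m.subject, m.attribute) != (new.subject, new.attribute)]
    return kept + [new]


store = [Fact("user", "favorite_language", "Java")]
store = consolidate(store, Fact("user", "favorite_language", "Python"))
store = consolidate(store, Fact("user", "timezone", "UTC+2"))
# store now holds exactly one fact per (subject, attribute) pair
```

In LangMem, an extraction LLM decides what counts as "the same fact"; the key point is that the store converges rather than growing without bound.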

Episodic Memory captures past experiences as few-shot examples or conversation summaries. This lets agents learn from successful (or failed) interactions and apply those patterns to new situations.

Procedural Memory modifies the agent's own behavior by updating prompt rules and response patterns. Rather than just storing data, the agent can refine how it operates based on feedback — essentially self-improving its instructions.
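A toy sketch of the procedural-memory loop, again in plain Python with illustrative names (LangMem drives this with an LLM-based prompt optimizer; this only shows the mechanism of feedback becoming a standing instruction):

```python
def apply_feedback(prompt_rules: list[str], feedback: str) -> list[str]:
    """Fold explicit feedback into the agent's standing instructions
    so future turns behave differently. Idempotent: the same feedback
    is not added twice."""
    rule = f"Rule learned from feedback: {feedback}"
    if rule in prompt_rules:
        return prompt_rules
    return prompt_rules + [rule]


rules = ["Answer concisely."]
rules = apply_feedback(rules, "Always include code examples for API questions.")

# The refined rules become part of the next system prompt.
system_prompt = "\n".join(rules)
```

The distinguishing point is the write target: semantic and episodic memory update a data store, while procedural memory updates the prompt the agent runs with.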

Two Paths: Hot Path vs Background

LangMem supports two integration patterns:

  • Hot path: The agent calls memory tools (create_manage_memory_tool, create_search_memory_tool) during active conversations to store and retrieve memories in real-time
  • Background: A separate memory manager processes conversations asynchronously after they complete, extracting and consolidating memories without slowing the main interaction

Both paths use the same underlying create_memory_manager API and can be combined.
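The two patterns can be sketched with a shared extraction step, used once synchronously and once behind a queue. This is a conceptual illustration in plain Python, not LangMem's API: `extract_memories` stands in for the LLM extraction that `create_memory_manager` performs.

```python
import queue
import threading


def extract_memories(transcript: str) -> list[str]:
    # Stand-in for LLM extraction: pull lines tagged as facts.
    return [line for line in transcript.splitlines() if line.startswith("FACT:")]


memories: list[str] = []


# Hot path: extract inside the turn, before the agent responds.
def handle_turn(transcript: str) -> None:
    memories.extend(extract_memories(transcript))


# Background: enqueue the finished conversation; a worker extracts later,
# so the main interaction is never blocked on memory processing.
jobs: queue.Queue[str] = queue.Queue()


def worker() -> None:
    while True:
        transcript = jobs.get()
        memories.extend(extract_memories(transcript))
        jobs.task_done()


threading.Thread(target=worker, daemon=True).start()
jobs.put("hello\nFACT: user prefers dark mode")
jobs.join()  # background extraction completes off the hot path
```

The trade-off mirrors the real SDK: the hot path gives the agent immediate access to what it just stored, while the background path keeps latency out of the conversation.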

Strengths

  • Framework-agnostic core — functional API works with any storage backend, not just LangGraph
  • Memory consolidation — automatically deduplicates, merges, and resolves contradictory memories rather than infinitely accumulating
  • Structured extraction — supports Pydantic schemas for typed memory profiles, not just raw text blobs
  • Native LangGraph integration — works with BaseStore, InMemoryStore, and AsyncPostgresStore out of the box
  • Namespace scoping — memories can be isolated per user, shared across teams, or global to the agent
  • Open source — MIT licensed, fully inspectable, no vendor lock-in on the core SDK
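The namespace-scoping point can be made concrete with a small sketch. LangGraph's BaseStore keys memories by namespace tuples; the dictionary-backed store and helper names below are illustrative, not the real API.

```python
# Memories keyed by namespace tuples, mirroring LangGraph's BaseStore
# convention: scope per user, per team, or globally via the key prefix.
store: dict[tuple[str, ...], dict[str, str]] = {}


def put(namespace: tuple[str, ...], key: str, value: str) -> None:
    store.setdefault(namespace, {})[key] = value


def search(prefix: tuple[str, ...]) -> dict[str, str]:
    """Return all memories whose namespace starts with the given prefix."""
    out: dict[str, str] = {}
    for ns, items in store.items():
        if ns[:len(prefix)] == prefix:
            out.update(items)
    return out


put(("memories", "user-alice"), "lang", "prefers Python")
put(("memories", "user-bob"), "lang", "prefers Go")
put(("shared", "team-ml"), "convention", "use type hints")

alice = search(("memories", "user-alice"))  # isolated to one user
team = search(("shared",))                  # visible across the team
```

Choosing the namespace prefix is how per-user isolation, team sharing, and agent-global memory are expressed without separate storage systems.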

Cautions

  • LangGraph gravity — while technically framework-agnostic, the best experience requires LangGraph and its ecosystem
  • Early stage — API surface is still evolving; expect breaking changes
  • LLM-dependent extraction — memory quality depends entirely on the extraction LLM's ability to identify what matters
  • No built-in evaluation — no native tools to measure memory precision/recall or detect memory drift over time
  • Managed service adds complexity — the free managed tier is still in signup/waitlist phase, blurring open-source vs. commercial boundaries

Competitive Positioning

| Dimension | LangMem | Mem0 | Zep |
| --- | --- | --- | --- |
| Approach | SDK + memory tools | Managed memory API | Memory server + SDK |
| Memory types | Semantic, episodic, procedural | Semantic (user/agent memories) | Semantic + temporal knowledge graphs |
| Framework tie-in | LangGraph-native | Framework-agnostic | Framework-agnostic |
| Storage | BYO (Postgres, in-memory, etc.) | Managed cloud or self-hosted | Managed cloud or self-hosted |
| Consolidation | Built-in merge/dedup | Built-in dedup | Graph-based entity resolution |
| Procedural memory | Yes (prompt self-modification) | No | No |
| License | MIT | Apache 2.0 | MIT (CE) / Proprietary (Cloud) |
| Best for | LangGraph agent builders | Multi-framework memory layer | Apps needing temporal reasoning |

Bottom Line

LangMem is the natural choice if you're already building with LangGraph — it's tightly integrated, well-designed, and the three-memory-type framework (semantic, episodic, procedural) is more thoughtful than most competitors. The procedural memory angle — agents that modify their own prompts based on experience — is genuinely differentiated.

The tradeoff is ecosystem lock-in. While the core API is technically portable, you'll get the most value within the LangChain/LangGraph stack. If you're building outside that ecosystem, Mem0 or Zep may offer a smoother integration path. For LangGraph shops, though, LangMem is essentially the default answer for agent memory.