v1.0 — Production Ready

Memory
that sticks.

Give your AI agents persistent memory in 5 lines of Python. Zero config. Self-hosted. Open source.

Star on GitHub
quickstart.py
from engram import Engram

memory = Engram()

# Store a memory
memory.store("User prefers dark mode",
    tags=["preference"],
    importance=8
)

# Search memories
results = memory.search("dark mode")

# Recall top memories
context = memory.recall(limit=10)

AI agents have Alzheimer's.

Every conversation starts from scratch. Every context is forgotten. Your AI agents process thousands of interactions — and remember none of them. They ask the same questions, make the same mistakes, lose the same insights. Memory isn't a feature. It's the foundation.

87%
of agent failures stem from missing context
5x
faster agent onboarding with persistent memory
<1s
memory recall latency with Engram

Everything you need. Nothing you don't.

Built for developers who value simplicity, privacy, and performance.

Zero Config

No databases to set up. No Docker containers to manage. No environment variables to configure. Install and go.

❮/❯

5-Line API

Store, search, and recall memories with an API so simple it fits in a tweet. No boilerplate. No ceremony.

🔌

MCP Native

First-class Model Context Protocol support. Connect to Claude, Cursor, or any MCP-compatible client out of the box.

🤖

Multi-Agent

Isolated namespaces per agent. Shared memory pools when you need them. Built for swarm architectures.

🔒

Privacy-First

Your data stays on your machine. No cloud. No telemetry. No third-party services. MIT licensed.

🧠

Smart Memory

Importance scoring, automatic deduplication, full-text search, and semantic recall. Memories that matter surface first.
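
The ranking and deduplication ideas behind Smart Memory can be sketched in a few lines of plain Python. This is an illustration of the behavior, not Engram's actual internals:

```python
# Illustrative sketch: naive content dedup + importance-ranked recall.
memories = [
    {"content": "User prefers dark mode", "importance": 8},
    {"content": "User prefers dark mode", "importance": 8},   # duplicate
    {"content": "Project deadline: March 15, 2026", "importance": 10},
    {"content": "User asked about pricing once", "importance": 3},
]

# Deduplicate on content, keeping the first occurrence
seen, unique = set(), []
for m in memories:
    if m["content"] not in seen:
        seen.add(m["content"])
        unique.append(m)

# "Memories that matter surface first": sort by importance, descending
ranked = sorted(unique, key=lambda m: m["importance"], reverse=True)
print([m["content"] for m in ranked])
```

The real engine layers full-text and semantic matching on top of this, but the contract is the same: duplicates collapse, and high-importance memories come back first.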

See how Engram stacks up.

We built Engram because existing solutions were too complex for what should be simple.

Feature          Engram         Mem0               Letta         Zep
Setup            Zero-config    Complex            Docker + PG   Cloud-only
Self-hosted      Yes (trivial)  Yes (complex)      Yes (Docker)  Deprecated
Dependencies     None           Neo4j + vector DB  PostgreSQL    Cloud-only
API complexity   5 lines        Medium             Complex       Medium
MCP support      First-class    Community          No            Yes
License          MIT            Apache 2.0         Apache 2.0    Cloud-only

Simple, transparent pricing.

Start free. Scale when you're ready. No credit card required.

Free
0
Perfect for getting started and personal projects.
  • 5,000 Memories
  • 2 Agent Namespaces
  • Full-Text Search (FTS5)
  • REST API + MCP Server
  • Community Support
Get Started
Enterprise
Custom
For organizations with advanced requirements.
  • Unlimited Memories
  • Unlimited Agents
  • Custom Embeddings
  • SSO / SAML
  • Dedicated Support
  • SLA Guarantee
Contact Sales
Open Source Core — Free. Forever.

Pay securely via Stripe. Cancel anytime.

Up and running in minutes.

Copy, paste, run. That's it.

quickstart.py
from engram import Engram

# Initialize — that's it. No config needed.
memory = Engram()

# Store facts, preferences, decisions
memory.store(
    content="User prefers dark mode and compact layout",
    tags=["preference", "ui"],
    importance=8
)

# Search with full-text search
results = memory.search("dark mode")
for r in results:
    print(r.content, r.importance)

# Recall top memories for context injection
context = memory.recall(limit=20, min_importance=7)
multi_agent.py
from engram import Engram

# Each agent gets its own namespace
researcher = Engram(namespace="researcher")
analyst = Engram(namespace="analyst")
writer = Engram(namespace="writer")

# Memories are isolated by default
researcher.store(
    "Found 3 papers on transformer architectures",
    tags=["finding", "ml"],
    importance=9
)

# But can be shared when needed
shared = Engram(namespace="shared")
shared.store(
    "Project deadline: March 15, 2026",
    tags=["project", "deadline"],
    importance=10
)

# All agents can access shared memory
deadlines = shared.search("deadline")
~/.claude.json
{
  "mcpServers": {
    "engram": {
      "command": "python3",
      "args": [
        "-m", "engram.mcp_server"
      ]
    }
  }
}

// That's it. Claude Code now has persistent memory.
// Available tools:
//   - memory_store
//   - memory_search
//   - memory_recall
//   - memory_delete
//   - memory_stats

Common questions.

What does Engram actually store?

Text-based memories — facts, preferences, decisions, error fixes, patterns. Each memory has content, tags, an importance score (1–10), and a type. Everything is stored in a local SQLite database on your machine. No images, no binaries — just the context your agents need.
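
The shape of a memory record can be pictured as a small dataclass. The field names mirror the description above; the class itself is illustrative, not Engram's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative memory record: content, tags, importance (1-10), and a type.
@dataclass
class Memory:
    content: str
    tags: list[str] = field(default_factory=list)
    importance: int = 5          # 1 (trivial) .. 10 (critical)
    type: str = "fact"           # e.g. "preference", "decision", "error_fix"

m = Memory("User prefers dark mode", tags=["preference"],
           importance=8, type="preference")
print(m)
```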

Does Engram need powerful hardware?

No. Engram runs on SQLite with full-text search (FTS5) — it works on a Raspberry Pi. Memory recall is sub-millisecond for typical workloads. Optional semantic search with embeddings needs more resources, but the core runs anywhere Python runs.
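
The FTS5 engine mentioned here ships with Python's built-in sqlite3 module, so the storage approach is easy to try directly. A standalone demo of the technique, not Engram's actual table layout:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # Engram uses a single file on disk
# A virtual FTS5 table gives full-text search with zero extra dependencies
con.execute("CREATE VIRTUAL TABLE memories USING fts5(content, tags)")
con.executemany(
    "INSERT INTO memories VALUES (?, ?)",
    [
        ("User prefers dark mode and compact layout", "preference ui"),
        ("Project deadline: March 15, 2026", "project deadline"),
    ],
)
# MATCH performs tokenized full-text search across the indexed columns
rows = con.execute(
    "SELECT content FROM memories WHERE memories MATCH ?", ("dark",)
).fetchall()
print(rows)
```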

Where does my data go?

Your data never leaves your machine. Engram is local-first by design — no cloud, no telemetry, no third-party services. The database is a single file on your disk. You control it completely. The entire codebase is MIT-licensed and open for audit.

Does Engram replace my agent framework?

Engram is the memory layer, not the agent runtime. Your agents run wherever you want — Claude Code, Cursor, custom scripts, LangChain, CrewAI. Engram just gives them persistent memory via a Python SDK, REST API, or MCP server. Think of it as a database your agents can remember with.
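
In practice, "a database your agents can remember with" usually means injecting recalled memories into the system prompt before each run. A minimal sketch of that pattern, with a hypothetical stub standing in for the SDK's `recall()` call shown in the quickstart:

```python
# Stub standing in for Engram's recall(); the real SDK call is
# Engram().recall(limit=..., min_importance=...) as shown above.
def recall(limit=10, min_importance=7):
    memories = [
        {"content": "User prefers dark mode", "importance": 8},
        {"content": "Project deadline: March 15, 2026", "importance": 10},
        {"content": "User asked about pricing once", "importance": 3},
    ]
    top = [m for m in memories if m["importance"] >= min_importance]
    return sorted(top, key=lambda m: m["importance"], reverse=True)[:limit]

def build_system_prompt():
    # Inject the highest-importance memories as bullet-point context
    context = "\n".join(f"- {m['content']}" for m in recall())
    return f"You are a helpful assistant. Known context:\n{context}"

prompt = build_system_prompt()
print(prompt)
```

Whatever runtime drives the agent, the pattern is the same: recall, format, prepend.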