The open-source memory layer for AI agents.
Stop building stateless agents. Give your AI persistent memory with just 5 lines of code.
MemMachine is an open-source long-term memory layer for AI agents and LLM-powered applications. It enables your AI to learn, store, and recall information from past sessions, transforming stateless chatbots into personalized, context-aware assistants.
- Episodic Memory: Graph-based conversational context that persists across sessions
- Profile Memory: Long-term user facts and preferences stored in SQL
- Working Memory: Short-term context for the current session
- Agent Memory Persistence: Memory that survives restarts, sessions, and even model changes
Get up and running in under 5 minutes:
Prerequisites: This code requires a running MemMachine Server.
Start a server locally or create a free account on the MemMachine Platform.
pip install memmachine-client

from memmachine import MemMachineClient
# Initialize the client
client = MemMachineClient(base_url="http://localhost:8080")
# Get or create a project
project = client.get_or_create_project(org_id="my_org", project_id="my_project")
# Create a memory instance for a user session
memory = project.memory(
    group_id="default",
    agent_id="travel_agent",
    user_id="alice",
    session_id="session_001"
)
# Add a memory
memory.add("I prefer aisle seats on flights", metadata={"category": "travel"})
# => [AddMemoryResult(uid='...')]
# Search memories
results = memory.search("What are my flight preferences?")
print(results.content.episodic_memory.long_term_memory.episodes[0].content)
# => "I prefer aisle seats on flights"

For full installation options (Docker, self-hosted, cloud), visit the Quick Start Guide.
MemMachine works seamlessly with your favorite AI frameworks:
| Framework | Description |
|---|---|
| LangChain | Memory provider for LangChain agents |
| LangGraph | Stateful memory for LangGraph workflows |
| CrewAI | Persistent memory for CrewAI multi-agent systems |
| LlamaIndex | Memory integration for LlamaIndex applications |
| AWS Strands | Memory for AWS Strands Agent SDK |
| n8n | No-code workflow automation integration |
| Dify | Memory backend for Dify AI applications |
| FastGPT | Integration with FastGPT platform |
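As a concrete illustration of the pattern these integrations implement, here is a minimal hand-rolled sketch that feeds MemMachine recall into a LangChain chat call, built on the client API from the quickstart above. The model name, the IDs, and the idea of prepending recalled episodes as a system message are assumptions for this example; the dedicated integration packages listed in the table may expose a more direct memory provider.

```python
# Hand-rolled sketch: retrieve MemMachine memories and inject them into a
# LangChain chat call. The dedicated LangChain integration may do this for you.
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
from memmachine import MemMachineClient

client = MemMachineClient(base_url="http://localhost:8080")  # assumes a local server
project = client.get_or_create_project(org_id="my_org", project_id="my_project")
memory = project.memory(
    group_id="default", agent_id="travel_agent",
    user_id="alice", session_id="session_001",
)

question = "Find me a flight to Tokyo next week."

# Recall relevant long-term episodes and surface them to the model as context
results = memory.search(question)
episodes = results.content.episodic_memory.long_term_memory.episodes
context = "\n".join(e.content for e in episodes)

llm = ChatOpenAI(model="gpt-4o-mini")
reply = llm.invoke([
    SystemMessage(content=f"Known user preferences:\n{context}"),
    HumanMessage(content=question),
])
print(reply.content)

# Store the new exchange so future sessions can recall it
memory.add(f"User asked: {question}")
```

The same retrieve-then-inject, respond-then-store loop applies to LangGraph, CrewAI, and the other frameworks in the table.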
MemMachine includes a native Model Context Protocol (MCP) server for seamless integration with Claude Desktop, Cursor, and other MCP-compatible clients:
# Stdio mode (for Claude Desktop)
memmachine-mcp-stdio
# HTTP mode (for web clients)
memmachine-mcp-http

See the MCP documentation for setup instructions.
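If you want to exercise the MCP server outside of Claude Desktop or Cursor, the sketch below connects to the stdio server from Python using the official MCP SDK (`pip install mcp`). It only lists the tools the server advertises; no tool names are assumed here, since they depend on the MemMachine server version.

```python
# Minimal sketch: connect to the MemMachine MCP stdio server with the MCP SDK
# and enumerate the memory tools it exposes.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the stdio server as a subprocess, just as Claude Desktop would
    server = StdioServerParameters(command="memmachine-mcp-stdio")

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server advertises
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


asyncio.run(main())
```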
- Developers building AI agents, assistants, or autonomous workflows
- Researchers experimenting with agent architectures and cognitive models
- Teams who need persistent, cross-session memory for their LLM applications
- Multiple Memory Types: Working (short-term), Episodic (long-term conversational), and Profile (user facts) memory
- Developer-Friendly APIs: Python SDK, RESTful API, TypeScript SDK, and MCP server interfaces
- Flexible Storage: Graph database (Neo4j) for episodic memory, SQL for profiles
- LLM Agnostic: Works with OpenAI, Anthropic, Bedrock, Ollama, and any LLM provider
- Self-Hosted or Cloud: Run locally, in Docker, or use our managed service
For more information, refer to the API Reference Guide.
- Agents interact via the API Layer: Users interact with an agent, which connects to MemMachine through a RESTful API, Python SDK, or MCP Server.
- MemMachine manages memory: Processes interactions and stores them as Episodic Memory (conversational context) and Profile Memory (long-term user facts).
- Data is persisted: Episodic memory is stored in a graph database; profile memory is stored in SQL.
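The sketch below traces one conversational turn through that flow, using the client API from the quickstart. The `run_llm` callable, the agent and session IDs, and the message formatting are placeholders for this example, not part of MemMachine.

```python
# Sketch of one conversational turn through the architecture described above.
# `run_llm` stands in for whatever model call your agent framework makes.
from memmachine import MemMachineClient

client = MemMachineClient(base_url="http://localhost:8080")   # API layer
project = client.get_or_create_project(org_id="my_org", project_id="my_project")
memory = project.memory(
    group_id="default", agent_id="support_bot",
    user_id="alice", session_id="session_002",
)


def handle_turn(user_message: str, run_llm) -> str:
    # 1. Recall: query memory for context relevant to the new message
    recalled = memory.search(user_message)
    episodes = recalled.content.episodic_memory.long_term_memory.episodes
    context = "\n".join(e.content for e in episodes)

    # 2. Respond: the agent's LLM sees prior context alongside the new message
    reply = run_llm(context=context, message=user_message)

    # 3. Persist: store the exchange so it survives this session
    memory.add(f"User: {user_message}")
    memory.add(f"Assistant: {reply}")
    return reply
```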
MemMachine's versatile memory architecture can be applied across any domain. Explore our examples to see memory-powered agents in action:
| Agent | Description |
|---|---|
| CRM Agent | Recalls client history and deal stages to help sales teams close faster |
| Healthcare Navigator | Remembers medical history and tracks treatment progress |
| Personal Finance Advisor | Stores portfolio preferences and risk tolerance for personalized insights |
| Writing Assistant | Learns your style guide and terminology for consistent content |
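For a flavor of what these examples look like in code, here is an illustrative CRM-style sketch built on the quickstart client API. The org, project, IDs, and metadata keys (`category`, `deal_stage`) are invented for this example; use whatever schema fits your application.

```python
# Illustrative sketch of a CRM-style agent: log client facts as they come up,
# then recall them before the next interaction.
from memmachine import MemMachineClient

client = MemMachineClient(base_url="http://localhost:8080")
project = client.get_or_create_project(org_id="acme_sales", project_id="crm_agent")
memory = project.memory(
    group_id="accounts", agent_id="crm_agent",
    user_id="rep_jordan", session_id="globex_account",
)

# Log facts from calls and emails
memory.add("Globex wants an on-prem deployment", metadata={"category": "requirement"})
memory.add("Globex deal moved to contract review", metadata={"deal_stage": "contract"})

# Before the next call, recall everything relevant to the account
results = memory.search("Where does the Globex deal stand?")
for episode in results.content.episodic_memory.long_term_memory.episodes:
    print(episode.content)
```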
Are you using MemMachine in your project? We'd love to feature you!
- Share your project in GitHub Discussions → Showcase
- Drop a message in our Discord #showcase channel
MemMachine is a growing community of builders and developers. Help us grow by clicking the ⭐ Star button above!
- Main Website: Learn about MemMachine
- Docs & API Reference: Full documentation
- Quick Start Guide: Get started in minutes
- Discord: Join our community for support, updates, and discussions: https://discord.gg/usydANvKqD
- Issues & Feature Requests: Use GitHub Issues
We welcome contributions! Please see our CONTRIBUTING.md for guidelines.
MemMachine is released under the Apache 2.0 License.

