
Mem0 - The Memory Layer for Personalized AI


Learn more · Join Discord · Demo · OpenMemory


📄 Building Production-Ready AI Agents with Scalable Long-Term Memory →

⚡ +26% Accuracy vs. OpenAI Memory • 🚀 91% Faster • 💰 90% Fewer Tokens

🔥 Research Highlights

  • +26% Accuracy over OpenAI Memory on the LOCOMO benchmark
  • 91% Faster Responses than full-context approaches, ensuring low latency at scale
  • 90% Lower Token Usage than full-context approaches, cutting costs without compromise
  • Read the full paper

Introduction

Mem0 ("mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences, adapts to individual needs, and continuously learns over timeβ€”ideal for customer support chatbots, AI assistants, and autonomous systems.

Key Features & Use Cases

Core Capabilities:

  • Multi-Level Memory: Seamlessly retains User, Session, and Agent state with adaptive personalization (see the code sketch after the Applications list below)
  • Developer-Friendly: Intuitive API, cross-platform SDKs, and a fully managed service option

Applications:

  • AI Assistants: Consistent, context-rich conversations
  • Customer Support: Recall past tickets and user history for tailored help
  • Healthcare: Track patient preferences and history for personalized care
  • Productivity & Gaming: Adaptive workflows and environments based on user behavior

🚀 Quickstart Guide

Choose between our hosted platform and the self-hosted package:

Hosted Platform

Get up and running in minutes with automatic updates, analytics, and enterprise security.

  1. Sign up on Mem0 Platform
  2. Embed the memory layer via the SDK or API keys (see the sketch below)
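
A minimal sketch of step 2 from Python, assuming an API key created in the Mem0 Platform dashboard (the key placeholder, user_id, and messages are illustrative):

from mem0 import MemoryClient

# Hosted client; authenticate with your platform API key
client = MemoryClient(api_key="your-mem0-api-key")

messages = [
    {"role": "user", "content": "I'm allergic to peanuts."},
    {"role": "assistant", "content": "Got it, I'll avoid suggesting peanut-based dishes."},
]
client.add(messages, user_id="alex")

# Later, retrieve what the platform remembers about this user
results = client.search("What are Alex's dietary restrictions?", user_id="alex")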

Self-Hosted (Open Source)

Install the SDK via pip:

pip install mem0ai

Install the SDK via npm:

npm install mem0ai

Basic Usage

Mem0 requires an LLM to function, with gpt-4o-mini from OpenAI as the default. However, it supports a variety of LLMs; for details, refer to our Supported LLMs documentation.
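
For example, a minimal sketch of pointing the open-source package at a different provider via Memory.from_config (the provider and model values here are illustrative placeholders):

from mem0 import Memory

# Illustrative config: swap the default gpt-4o-mini for a locally hosted model
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0.1,
        },
    }
}

memory = Memory.from_config(config)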

The first step is to instantiate the memory; the example below then wires it into a simple chat loop:

from openai import OpenAI
from mem0 import Memory
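# Note: both OpenAI() and Memory() below read the OPENAI_API_KEY environment
# variable (Mem0's default LLM and embedder are OpenAI-based), so set it first.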

openai_client = OpenAI()
memory = Memory()

def chat_with_memories(message: str, user_id: str = "default_user") -> str:
    # Retrieve relevant memories
    relevant_memories = memory.search(query=message, user_id=user_id, limit=3)
    memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

    # Generate Assistant response
    system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
    messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
    response = openai_client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    assistant_response = response.choices[0].message.content

    # Create new memories from the conversation
    messages.append({"role": "assistant", "content": assistant_response})
    memory.add(messages, user_id=user_id)

    return assistant_response

def main():
    print("Chat with AI (type 'exit' to quit)")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == 'exit':
            print("Goodbye!")
            break
        print(f"AI: {chat_with_memories(user_input)}")

if __name__ == "__main__":
    main()
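
Beyond add and search, the Memory class exposes the usual lifecycle operations. The sketch below reuses the memory instance from the example above with illustrative values; method names and return shapes can differ slightly between versions, so treat it as a guide rather than a reference:

# List everything stored for a user
all_memories = memory.get_all(user_id="default_user")

# Inspect, update, or delete a single memory by its ID
memory_id = all_memories["results"][0]["id"]
print(memory.get(memory_id))
memory.update(memory_id, data="Prefers dark roast coffee")
print(memory.history(memory_id))  # change history for this memory
memory.delete(memory_id)

# Remove all memories for a user
memory.delete_all(user_id="default_user")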

For detailed integration steps, see the Quickstart and API Reference.

🔗 Integrations & Demos

  • ChatGPT with Memory: Personalized chat powered by Mem0 (Live Demo)
  • Browser Extension: Store memories across ChatGPT, Perplexity, and Claude (Chrome Extension)
  • Langgraph Support: Build a customer bot with Langgraph + Mem0 (Guide)
  • CrewAI Integration: Tailor CrewAI outputs with Mem0 (Example)

📚 Documentation & Support

Citation

We now have a paper you can cite:

@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}

βš–οΈ License

Apache 2.0. See the LICENSE file for details.