Description
I’m exploring how the openai-agents-python framework handles memory management for agents. In particular, I’d like to understand whether the SDK provides built‑in support for:
- Short‑term memory
  - An abstraction or helper to buffer, filter, or automatically summarize recent conversation history before sending prompts to the LLM (similar to LangGraph’s session window).
  - Best practices or recommended patterns for integrating a short‑term memory module (e.g., sliding window, summary chaining; a sketch of this pattern follows the list below).
- Long‑term memory
  - A mechanism to persist and retrieve agent memory across multiple sessions or restarts (e.g., a vector DB or SQLiteSession).
  - Any existing or planned extensions/plugins that enable long‑term memory the way LangGraph supports both short‑ and long‑term storage (a small persistence sketch also follows below).
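To make the short‑term case concrete, here is a minimal, framework‑agnostic sketch of the kind of helper I have in mind. It only assumes the conversation is kept as a list of role/content message dicts; the `trim_history` name, the `max_turns` parameter, and the `summarize` hook are my own illustration, not part of the SDK.

```python
from typing import Callable

Message = dict[str, str]  # e.g. {"role": "user", "content": "Hi"}

def trim_history(
    history: list[Message],
    max_turns: int = 10,
    summarize: Callable[[list[Message]], str] | None = None,
) -> list[Message]:
    """Sliding-window short-term memory: keep only the last `max_turns` messages.

    If a `summarize` callback is supplied (e.g. a cheap LLM call), the older
    messages are collapsed into a single system message instead of being
    dropped, which is the "summary chaining" variant.
    """
    if len(history) <= max_turns:
        return history
    older, recent = history[:-max_turns], history[-max_turns:]
    if summarize is None:
        return recent
    summary = summarize(older)
    return [{"role": "system", "content": f"Summary of earlier turns: {summary}"}, *recent]
```

The trimmed list would then be fed to the next model call in place of the full transcript.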
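And for the long‑term case, this is roughly the shape of persistence I am asking about: a small store that survives restarts, keyed by a session id. The `SessionStore` class and its `add`/`get_history` methods are purely illustrative, not an existing SDK API; a vector‑DB‑backed variant would swap the SQLite table for embedding search.

```python
import json
import sqlite3

class SessionStore:
    """Toy long-term memory: persist conversation items per session in SQLite."""

    def __init__(self, path: str = "agent_memory.db") -> None:
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS messages ("
            "  session_id TEXT, position INTEGER, payload TEXT,"
            "  PRIMARY KEY (session_id, position))"
        )

    def add(self, session_id: str, message: dict) -> None:
        # Append the message at the next position for this session.
        (count,) = self.conn.execute(
            "SELECT COUNT(*) FROM messages WHERE session_id = ?", (session_id,)
        ).fetchone()
        self.conn.execute(
            "INSERT INTO messages VALUES (?, ?, ?)",
            (session_id, count, json.dumps(message)),
        )
        self.conn.commit()

    def get_history(self, session_id: str) -> list[dict]:
        # Reload the full transcript (e.g. after a process restart).
        rows = self.conn.execute(
            "SELECT payload FROM messages WHERE session_id = ? ORDER BY position",
            (session_id,),
        ).fetchall()
        return [json.loads(payload) for (payload,) in rows]
```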
Context
- I reviewed the RunContext docs and saw that it allows passing Python objects between tools within a single run, but it does not ship memory persistence out of the box (a short sketch of how I understand that pattern follows this list).
- I also saw related issues discussing memory, e.g. "Handling service session memory in Agents SDK" (#248), "Short term memory" (#374), "Add Session Memory" (#745), and "Mem0ai integration as example for create memory agents" (#832), but I’m not clear on the official roadmap or recommended implementation.
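For reference, this is how I currently understand the run‑context pattern: a local object shared between tools for the duration of one run, which any memory layer would have to persist itself. The `MemoryContext` dataclass and the `remember` tool below are my own example, and I am assuming the `Agent` / `Runner` / `function_tool` / `RunContextWrapper` names from the current docs, so apologies if any signature is slightly off.

```python
import asyncio
from dataclasses import dataclass, field

from agents import Agent, Runner, RunContextWrapper, function_tool

@dataclass
class MemoryContext:
    # Plain in-process state: shared between tools for one run, lost afterwards.
    notes: list[str] = field(default_factory=list)

@function_tool
def remember(ctx: RunContextWrapper[MemoryContext], note: str) -> str:
    """Append a note to the shared run context."""
    ctx.context.notes.append(note)
    return f"Stored note #{len(ctx.context.notes)}"

async def main() -> None:
    agent = Agent[MemoryContext](
        name="Assistant",
        instructions="Use the remember tool to store facts the user tells you.",
        tools=[remember],
    )
    memory = MemoryContext()
    await Runner.run(agent, "Remember that my favorite color is blue.", context=memory)
    print(memory.notes)  # survives only as long as this Python object does

if __name__ == "__main__":
    asyncio.run(main())
```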
Questions
- Is there a roadmap or official plan to add native “memory management” (both short‑term and long‑term) to the SDK?
- Are there any example integrations or sample extensions (e.g., with a vector database or LangGraph‑style memory layers) that I can reference?
- If not, what guidance would you offer for implementing a memory solution and contributing it back to the project?
Thank you for any pointers!