# Store (long-term memory) — API reference

Verified against langgraph==1.1.10 (modules: langgraph.store.base, langgraph.store.memory).
Checkpointers give you short-term memory tied to a single `thread_id`. A `Store` gives you long-term memory that lives outside any thread — shared across conversations, users, and graph runs. Same backend pattern as checkpointers: one abstract base, multiple implementations.
## Minimal runnable example

```python
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()

store.put(("users", "alice"), "prefs", {"theme": "dark", "lang": "en"})
item = store.get(("users", "alice"), "prefs")
print(item.value)      # {'theme': 'dark', 'lang': 'en'}
print(item.namespace)  # ('users', 'alice')
print(item.key)        # 'prefs'

for hit in store.search(("users",), filter={"theme": "dark"}):
    print(hit.key, hit.value)
```

Wire a store into a graph so nodes and tools can read/write to it:
```python
from langgraph.graph import StateGraph, START
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
graph = (
    StateGraph(State)
    .add_node("recall", recall_fn)
    .add_edge(START, "recall")
    .compile(store=store)
)
```
## Available backends

| Backend | Import | Persists? | Vector search | TTL |
|---|---|---|---|---|
| `InMemoryStore` | `langgraph.store.memory` | No | Yes (numpy if installed) | Optional |
| `PostgresStore` | `langgraph.store.postgres`¹ | Yes | Yes (pgvector) | Yes |
| `AsyncPostgresStore` | `langgraph.store.postgres.aio`¹ | Yes | Yes (pgvector) | Yes |
| `AsyncBatchedBaseStore` | `langgraph.store.base.batch` | Adapter | Same as wrapped | Same as wrapped |
¹ Ships in the separate `langgraph-checkpoint-postgres` package — the same package as `PostgresSaver`.
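Install it alongside the core library with `pip install langgraph-checkpoint-postgres`.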
## Data model

- `namespace: tuple[str, ...]` — hierarchical path (e.g., `("users", "123", "memories")`). The prefix is used for listing and scoped searches.
- `key: str` — unique within a namespace.
- `value: dict[str, Any]` — JSON-serializable payload. Keys are filterable.
- `Item` — returned by `get`. Fields: `value`, `key`, `namespace`, `created_at`, `updated_at`.
- `SearchItem(Item)` — returned by `search`. Adds `score: float | None`.
Any of these operations can raise `InvalidNamespaceError` (e.g., empty tuple, empty string label, or `"."` in a label).
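A minimal sketch of the namespace rules in action (`InvalidNamespaceError` lives in `langgraph.store.base`):

```python
from langgraph.store.base import InvalidNamespaceError
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
try:
    store.put(("", "x"), "k", {"v": 1})  # empty segment is rejected
except InvalidNamespaceError as exc:
    print(exc)
```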
## BaseStore surface

All methods have sync and `a`-prefixed async variants.

```python
store.get(namespace, key, *, refresh_ttl=None) -> Item | None
store.put(namespace, key, value, index=None, *, ttl=NOT_PROVIDED) -> None
store.delete(namespace, key) -> None
store.search(
    namespace_prefix, /, *,
    query=None, filter=None, limit=10, offset=0, refresh_ttl=None,
) -> list[SearchItem]
store.list_namespaces(
    *, prefix=None, suffix=None, max_depth=None, limit=100, offset=0,
) -> list[tuple[str, ...]]
store.batch(ops: Iterable[Op]) -> list[Result]
```

Under the hood, every single-item method funnels through `batch`/`abatch`. Submit mixed `GetOp`, `PutOp`, `SearchOp`, `ListNamespacesOp` for a single round-trip.
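A minimal sketch of one mixed round-trip (assuming the op constructors take the positional fields shown here; results come back one per op, in order):

```python
from langgraph.store.base import GetOp, PutOp, SearchOp

results = store.batch([
    PutOp(("users", "alice"), "prefs", {"theme": "dark"}),    # -> None
    GetOp(("users", "alice"), "prefs"),                       # -> Item | None
    SearchOp(("users",), filter={"theme": "dark"}, limit=5),  # -> list[SearchItem]
])
```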
## put() — details

```python
store.put(
    namespace: tuple[str, ...],
    key: str,
    value: dict[str, Any],
    index: Literal[False] | list[str] | None = None,
    *,
    ttl: float | None | NotProvided = NOT_PROVIDED,
) -> None
```

- `index=None` — use the fields you configured on the store (or none if it is not indexed).
- `index=False` — skip embedding for this item even if the store is indexed.
- `index=["metadata.title", "chapters[*].content"]` — path selectors. Supports dot-separated nesting (`"a.b.c"`), `[*]` for every array element (each embedded separately), and `[0]` for a specific index. See the sketch after this list.
- `ttl` — minutes until expiry. Raises `NotImplementedError` if you pass a value and the backend has `supports_ttl = False`.
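A hedged sketch of the selector forms (the document shape is made up for illustration):

```python
store.put(
    ("books",),
    "b1",
    {
        "metadata": {"title": "Dune"},
        "chapters": [{"content": "Chapter one..."}, {"content": "Chapter two..."}],
    },
    index=["metadata.title", "chapters[*].content"],  # title once, each chapter separately
)
store.put(("books",), "scratch", {"note": "draft"}, index=False)  # never embedded
```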
## search() — filter + semantic

Filtering on value keys is exact-match across every backend:

```python
store.search(("docs",), filter={"type": "article", "status": "published"}, limit=20)
```

Semantic search requires `IndexConfig`:

```python
from langchain.embeddings import init_embeddings
from langgraph.store.memory import InMemoryStore

store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": init_embeddings("openai:text-embedding-3-small"),
        # Optional: which fields within `value` to embed. Default: ["$"] (whole value).
        "fields": ["text", "summary"],
    }
)

store.put(("docs",), "d1", {"text": "Rust is a systems language", "type": "lang"})
results = store.search(
    ("docs",),
    query="memory-safe low-level languages",
    filter={"type": "lang"},
    limit=5,
)
for r in results:
    print(r.score, r.value["text"])
```

If the store was not created with `index=`, the `query=` argument is silently ignored and search returns plain filtered results.
## IndexConfig

```python
class IndexConfig(TypedDict, total=False):
    dims: int
    embed: Embeddings | EmbeddingsFunc | AEmbeddingsFunc | str
    fields: list[str]      # default ["$"] — embed the entire value
    ann_index_config: ...  # backend-specific (e.g., pgvector tuning)
    distance_type: Literal["l2", "inner_product", "cosine"]
```

`embed` can be:

- a LangChain `Embeddings` instance,
- a sync callable `(list[str]) -> list[list[float]]`,
- an async callable with the same shape,
- a provider string like `"openai:text-embedding-3-small"` (LangChain resolves it).
## TTLConfig

```python
class TTLConfig(TypedDict, total=False):
    refresh_on_read: bool               # default True
    default_ttl: float | None           # minutes for new items
    sweep_interval_minutes: int | None
```

Only set `ttl=...` on `put()` if the backend supports TTL. `InMemoryStore` accepts the kwarg but does not run a background sweeper — items are evicted lazily.
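A hedged sketch of that lazy eviction, assuming `ttl` accepts fractional minutes as its `float` type suggests:

```python
import time
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
store.put(("cache",), "k", {"v": 1}, ttl=1 / 60)  # ~1 second
time.sleep(2)
print(store.get(("cache",), "k"))  # None: the expired item is dropped on access
```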
## list_namespaces

Explore the tree:

```python
store.list_namespaces(prefix=("users",), max_depth=2)
# [('users', 'alice'), ('users', 'bob'), ...]
```

`prefix` / `suffix` accept `NamespacePath` tuples; use `"*"` as a wildcard segment. `max_depth` caps the tuple length returned.
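A minimal sketch of the wildcard and suffix forms:

```python
# Every user's "memories" subtree, whatever the middle segment is.
store.list_namespaces(prefix=("users", "*", "memories"))
# Match by trailing segments instead of leading ones.
store.list_namespaces(suffix=("memories",))
```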
## InMemoryStore

```python
from langgraph.store.memory import InMemoryStore

store = InMemoryStore(*, index: IndexConfig | None = None)
```

- Pure-Python, process-local. Data is lost on exit.
- Vector search uses numpy if installed and falls back to a pure-Python dot product otherwise. For any non-trivial corpus, `pip install numpy`.
- Exposes sync and async methods (async calls are batched through `AsyncBatchedBaseStore` under the hood).
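The async variants in a minimal sketch:

```python
import asyncio
from langgraph.store.memory import InMemoryStore

async def main() -> None:
    store = InMemoryStore()
    await store.aput(("users", "alice"), "prefs", {"theme": "dark"})
    print(await store.aget(("users", "alice"), "prefs"))

asyncio.run(main())
```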
## PostgresStore / AsyncPostgresStore

```python
from langgraph.store.postgres import PostgresStore

with PostgresStore.from_conn_string(DB_URI) as store:
    store.setup()  # creates tables + pgvector extension if index is set
    graph = builder.compile(store=store)
    graph.invoke(..., cfg)
```

- `from_conn_string` is a context manager (same pattern as `PostgresSaver`). `setup()` is required on first use.
- Pass `index=IndexConfig(...)` to enable pgvector semantic search. Requires the `vector` extension in your database.

The async counterpart lives at `langgraph.store.postgres.aio.AsyncPostgresStore` with an async context manager and `await store.setup()`.
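A minimal sketch of the async counterpart, combining the pattern above with the `a`-prefixed methods:

```python
from langgraph.store.postgres.aio import AsyncPostgresStore

async def run() -> None:
    async with AsyncPostgresStore.from_conn_string(DB_URI) as store:
        await store.setup()  # first use only
        await store.aput(("users", "alice"), "prefs", {"theme": "dark"})
        print(await store.aget(("users", "alice"), "prefs"))
```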
## Using a Store from a node

The `Runtime.store` attribute exposes whatever you passed to `compile(store=...)`:

```python
from langgraph.runtime import Runtime

def recall(state: State, runtime: Runtime) -> dict:
    if runtime.store is None:
        return {"memories": []}
    hits = runtime.store.search(
        ("memories", state["user_id"]),
        query=state["query"],
        limit=3,
    )
    return {"memories": [h.value for h in hits]}
```
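Wiring the node above into a runnable graph, as a hedged sketch in which the `State` schema is assumed, not part of the reference:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START
from langgraph.store.memory import InMemoryStore

class State(TypedDict, total=False):
    user_id: str
    query: str
    memories: list[dict]

builder = StateGraph(State)
builder.add_node("recall", recall)
builder.add_edge(START, "recall")
graph = builder.compile(store=InMemoryStore())
print(graph.invoke({"user_id": "alice", "query": "coffee"}))
```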
## Using a Store from a tool (InjectedStore)

Tools get the store injected automatically when wrapped by `ToolNode` (from `langgraph.prebuilt`):

```python
from typing import Annotated

from langchain_core.tools import tool
from langgraph.prebuilt import InjectedStore, ToolNode
from langgraph.store.base import BaseStore

@tool
def save_fact(
    user_id: str,
    fact: str,
    store: Annotated[BaseStore, InjectedStore()],
) -> str:
    store.put(("facts", user_id), fact, {"text": fact})
    return f"Saved for {user_id}"

tool_node = ToolNode([save_fact])
```

The `store` argument is stripped from the schema the model sees, so the LLM cannot pass it. `InjectedState` works the same way for whole-state injection; `ToolRuntime` bundles state + context + config + store + stream_writer + tool_call_id into one object.
## Patterns

### 1. Per-user preferences

```python
ns = ("users", user_id, "prefs")
store.put(ns, "theme", {"mode": "dark"})
store.put(ns, "lang", {"code": "en"})

for pref in store.list_namespaces(prefix=("users", user_id)):
    for item in store.search(pref):
        print(pref, item.key, item.value)
```
### 2. Semantic memory with filtered recall

```python
store = InMemoryStore(index={"dims": 1536, "embed": "openai:text-embedding-3-small"})
store.put(("mem", "alice"), "m1", {"text": "Likes espresso", "kind": "food"})
store.put(("mem", "alice"), "m2", {"text": "Works at Acme", "kind": "work"})

hits = store.search(
    ("mem", "alice"),
    query="favorite drink",
    filter={"kind": "food"},
    limit=3,
)
```
### 3. Batched write on session end

```python
from langgraph.store.base import PutOp

store.batch([
    PutOp(("mem", user_id), f"m{i}", v, index=None, ttl=None)
    for i, v in enumerate(new_memories)
])
```
### 4. Tools that read and write memory

```python
from typing import Annotated

from langchain_core.tools import tool
from langgraph.prebuilt import InjectedStore
from langgraph.store.base import BaseStore

@tool
def remember(
    user_id: str,
    text: str,
    store: Annotated[BaseStore, InjectedStore()],
) -> str:
    store.put(("mem", user_id), f"note-{text[:16]}", {"text": text})
    return "ok"

@tool
def recall(
    user_id: str,
    topic: str,
    store: Annotated[BaseStore, InjectedStore()],
) -> list[str]:
    return [i.value["text"] for i in store.search(("mem", user_id), query=topic, limit=5)]
```
### 5. TTL-bounded cache

```python
store.put(("cache", "bing"), query, {"json": result}, ttl=30)  # minutes
hit = store.get(("cache", "bing"), query, refresh_ttl=True)
```
## Gotchas

- **Namespace rules.** Each segment must be a non-empty string and must not contain `"."`. `("", "x")` raises `InvalidNamespaceError`.
- **`query=` is ignored without an index.** You will get filter-only results without any warning — always confirm the store was built with `index=IndexConfig(...)` when you rely on semantic search.
- **`fields=["$"]` embeds the entire value.** The whole dict is stringified and embedded; pick explicit fields for better recall and smaller embedding costs.
- **`InMemoryStore` is not Platform-safe.** LangGraph Platform provides a managed store — don't pass one when deploying there.
- **TTL is in minutes, not seconds.** `ttl=30` means half an hour, not 30 seconds.
- **`store.search` returns `list[SearchItem]`, not an iterator.** Results are always bounded by `limit` (default 10). Paginate with `offset`, as sketched below.
- **`delete` uses `PutOp(..., value=None)` internally.** If you subclass `BaseStore`, `PutOp.value is None` is the delete signal.
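A minimal pagination sketch (the `process` handler is hypothetical):

```python
offset = 0
while True:
    page = store.search(("docs",), filter={"type": "article"}, limit=50, offset=offset)
    if not page:
        break
    process(page)  # hypothetical handler for each batch of SearchItems
    offset += len(page)
```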
## Breaking changes

| Version | Change |
|---|---|
| 1.1 | Semantic-search result `SearchItem.score` is consistently `float \| None`. |
| 1.0 | Store moved out of experimental; `InjectedStore` is the stable way to pull the store into tools. |
| 0.6 | `runtime.store` replaces `config["configurable"]["store"]` for node injection. |