Memory

Memory nodes store conversation history for AI Agents, enabling multi-turn context across user interactions. Every AI Agent requires exactly one memory node.

Memory is required

An AI Agent cannot run without a memory node. If no memory node is connected, the Canvas shows a validation error on the agent node and blocks the workflow from executing.

Memory Nodes at a Glance

NodeLoom ships with four memory backends. Each one stores the same conversation data but differs in persistence, performance, and configuration:

Node | Storage | Persistence | Best For
Simple Memory | In-memory (execution scope) | Cleared when the execution completes | One-off tasks, testing, and stateless workflows where history is not needed across runs.
PostgreSQL Memory | PostgreSQL database | Persistent across executions. Optional expiry (TTL in hours). | Production agents that need durable, long-lived conversation history. Default for new agents.
Redis Memory | Redis | Automatic expiration via TTL. Data is lost if Redis restarts without persistence. | High-throughput agents where sub-millisecond reads matter and short-lived history is acceptable.
Window Buffer Memory | Sliding window (backed by any storage) | Keeps only the last N messages, discarding older ones. | Long-running conversations where you want to limit token usage by bounding context size.

Simple Memory

Simple Memory stores conversation history in memory for the duration of the execution. It is the lightest option -- no external dependencies, no configuration -- but all data is lost when the execution completes.

Use Simple Memory when your agent handles single-turn interactions or when you are prototyping a workflow and do not yet need persistent history.

No configuration needed

Simple Memory has zero configuration fields. Just connect it to the agent's memory handle and it works.
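
For a concrete picture of what execution scope means, the TypeScript sketch below models the idea: history lives in a plain in-memory map and is simply discarded when the execution ends. The class and method names are illustrative, not NodeLoom internals.

// Conceptual sketch only -- not NodeLoom's internal code. It illustrates what
// "execution scope" means: history lives in an in-memory map and is
// discarded when the execution finishes.
type ChatMessage = { role: "system" | "user" | "assistant" | "tool"; content: string | null };

class ExecutionScopedMemory {
  private history = new Map<string, ChatMessage[]>();

  append(sessionId: string, message: ChatMessage): void {
    const messages = this.history.get(sessionId) ?? [];
    messages.push(message);
    this.history.set(sessionId, messages);
  }

  load(sessionId: string): ChatMessage[] {
    return this.history.get(sessionId) ?? [];
  }

  // When the execution completes, the store is simply discarded.
  clear(): void {
    this.history.clear();
  }
}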

PostgreSQL Memory

PostgreSQL Memory persists conversations in a dedicated database store. It is the default memory backend for new AI Agent nodes and the recommended choice for production deployments.

Configuration

Field | Default | Description
Session ID | Auto-generated | Groups messages into conversations. Leave blank to use the default session derived from the chat session ID.
Expiry (hours) | None (no expiry) | Optional. Messages older than this value are automatically pruned. Set to 0 or leave blank for indefinite retention.
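
To illustrate how the two fields interact, the following TypeScript sketch (using the pg client) groups rows by session ID and applies an hour-based cutoff when loading history. The table and column names -- agent_memory, session_id, payload, created_at -- are assumptions made for the example, not NodeLoom's actual schema.

// Illustrative sketch only -- agent_memory and its columns are assumed names,
// not NodeLoom's schema. payload is assumed to be a jsonb column.
import { Pool } from "pg";

const pool = new Pool(); // connection details come from the standard PG* environment variables

async function appendMessage(sessionId: string, message: object): Promise<void> {
  await pool.query(
    "INSERT INTO agent_memory (session_id, payload, created_at) VALUES ($1, $2, now())",
    [sessionId, JSON.stringify(message)]
  );
}

async function loadHistory(sessionId: string, expiryHours?: number): Promise<object[]> {
  // With no expiry configured (or expiry set to 0), the whole conversation is returned.
  const sql = expiryHours
    ? "SELECT payload FROM agent_memory WHERE session_id = $1 AND created_at > now() - make_interval(hours => $2) ORDER BY created_at"
    : "SELECT payload FROM agent_memory WHERE session_id = $1 ORDER BY created_at";
  const params = expiryHours ? [sessionId, expiryHours] : [sessionId];
  const { rows } = await pool.query(sql, params);
  return rows.map((row) => row.payload);
}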

Default for Simple Flow

When you add an AI Agent node in the Simple Flow editor, a PostgreSQL Memory sub-node is auto-added for you. You do not need to add or connect it manually.

Redis Memory

Redis Memory stores conversations in Redis with a configurable TTL. It offers the fastest read/write performance but relies on Redis persistence settings for durability.

Configuration

Field | Default | Description
Session ID | Auto-generated | Groups messages into conversations. Same behaviour as PostgreSQL Memory.
TTL (seconds) | 3600 | Time-to-live for conversation keys. After expiry, Redis automatically deletes the conversation data.
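
The TypeScript sketch below (using the ioredis client) shows the TTL behaviour in practice: each write refreshes the expiry, and Redis drops the conversation once the TTL lapses. The key layout (memory:<sessionId>) is an assumption for the example, not NodeLoom's actual key scheme.

// Illustrative sketch only -- the key layout is assumed, not NodeLoom's implementation.
import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379
const TTL_SECONDS = 3600;  // matches the node's default TTL

async function appendMessage(sessionId: string, message: object): Promise<void> {
  const key = `memory:${sessionId}`;
  await redis.rpush(key, JSON.stringify(message));
  await redis.expire(key, TTL_SECONDS); // refresh the TTL on every write
}

async function loadHistory(sessionId: string): Promise<object[]> {
  const raw = await redis.lrange(`memory:${sessionId}`, 0, -1);
  return raw.map((entry) => JSON.parse(entry));
}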

Redis persistence

If your Redis instance is not configured with disk persistence, conversation data will be lost on Redis restart. For production use, ensure Redis persistence is enabled or use PostgreSQL Memory instead.

Window Buffer Memory

Window Buffer Memory wraps any memory backend and applies a sliding window that retains only the last N messages. Older messages are discarded when the window limit is reached.

This is useful for long-running conversations where unbounded history would consume too many tokens and increase LLM costs. The window size is measured in individual messages (both user and assistant messages count).

Configuration

Field | Default | Description
Window Size | 20 | Maximum number of messages to retain. When exceeded, the oldest messages are dropped.
Session ID | Auto-generated | Groups messages into conversations.
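
The trimming itself is straightforward. The TypeScript sketch below shows the idea (illustrative only, not NodeLoom's code): with the default window size of 20, only the 20 most recent messages survive.

// Conceptual sketch of the sliding window -- illustrative only.
type ChatMessage = { role: string; content: string | null };

function applyWindow(history: ChatMessage[], windowSize = 20): ChatMessage[] {
  if (history.length <= windowSize) return history;
  // Drop the oldest messages, keeping only the last `windowSize`.
  return history.slice(history.length - windowSize);
}

// Example: a 25-message conversation is trimmed to its 20 most recent messages.
const conversation: ChatMessage[] = Array.from({ length: 25 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: `message ${i + 1}`,
}));
console.log(applyWindow(conversation).length); // 20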

Canvas Validation

The Canvas editor enforces the memory requirement at design time. If an AI Agent node does not have a memory node connected to its memory handle:

  • An amber validation badge appears on the AI Agent node.
  • The node configuration panel shows a red-bordered warning explaining that memory is required.
  • Execution is blocked -- the workflow will not run until a memory node is connected.

What Gets Stored

Memory nodes store the full conversation history in an OpenAI-compatible message format. This includes:

  • User messages -- the raw text sent by the user.
  • Assistant messages -- the agent's final responses.
  • Tool call messages -- records of which tools the agent invoked, with what arguments, and what results were returned.
  • System prompts -- if the agent has a system prompt, it is prepended to the history on each turn (not stored repeatedly).

Stored conversation format
// Example stored conversation (OpenAI-compatible format)
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "What time is it in Tokyo?" },
  { "role": "assistant", "content": null, "tool_calls": [
    { "id": "call_1", "function": { "name": "get_current_time", "arguments": "{\"timezone\": \"Asia/Tokyo\"}" } }
  ]},
  { "role": "tool", "tool_call_id": "call_1", "content": "2026-02-17T22:30:00+09:00" },
  { "role": "assistant", "content": "It is currently 10:30 PM in Tokyo (JST)." }
]
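
If you consume stored history from your own code, the TypeScript shape below mirrors the example above. It is a sketch derived from that example, not an official or exhaustive schema.

// Sketch of the message shapes shown in the example above -- illustrative, not an official schema.
interface ToolCall {
  id: string;
  function: {
    name: string;
    arguments: string; // JSON-encoded arguments, e.g. "{\"timezone\": \"Asia/Tokyo\"}"
  };
}

type StoredMessage =
  | { role: "system" | "user"; content: string }
  | { role: "assistant"; content: string | null; tool_calls?: ToolCall[] }
  | { role: "tool"; tool_call_id: string; content: string };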

Choosing a Backend

Criteria | Recommendation
Production agent with long-lived sessions | PostgreSQL Memory -- durable, with no required configuration.
High-throughput, short-lived sessions | Redis Memory -- fastest reads, automatic cleanup via TTL.
Cost-conscious long conversations | Window Buffer Memory -- caps token usage by limiting the context window.
Testing and prototyping | Simple Memory -- no setup, ephemeral by design.