# Memory

Memory nodes store conversation history for AI Agents, enabling multi-turn context across user interactions. Every AI Agent requires exactly one memory node.

> **Memory is required.** An AI Agent without a connected memory node fails Canvas validation and will not run.
## Memory Nodes at a Glance
NodeLoom ships with four memory backends. Each one stores the same conversation data but differs in persistence, performance, and configuration:
| Node | Storage | Persistence | Best For |
|---|---|---|---|
| Simple Memory | In-memory (execution scope) | Cleared when the execution completes | One-off tasks, testing, and stateless workflows where history is not needed across runs. |
| PostgreSQL Memory | PostgreSQL database | Persistent across executions. Optional expiry (TTL in hours). | Production agents that need durable, long-lived conversation history. Default for new agents. |
| Redis Memory | Redis | Automatic expiration via TTL. Data is lost if Redis restarts without persistence. | High-throughput agents where sub-millisecond reads matter and short-lived history is acceptable. |
| Window Buffer Memory | Sliding window (backed by any storage) | Keeps only the last N messages, discarding older ones. | Long-running conversations where you want to limit token usage by bounding context size. |
## Simple Memory
Simple Memory stores conversation history in memory for the duration of the execution. It is the lightest option (no external dependencies, no configuration) but all data is lost when the execution completes.
Use Simple Memory when your agent handles single-turn interactions or when you are prototyping a workflow and do not yet need persistent history.
> **No configuration needed.** Simple Memory has no settings; connect it to the agent's memory handle and it works.
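Conceptually, Simple Memory behaves like a plain in-process list that lives and dies with the execution. A minimal sketch (the class and method names are illustrative, not NodeLoom's internal API):

```python
class SimpleMemory:
    """Ephemeral conversation store: history exists only for one execution."""

    def __init__(self):
        # Held in process memory; discarded along with the execution's objects.
        self.messages = []

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def load(self):
        return list(self.messages)

mem = SimpleMemory()
mem.add("user", "Hello")
mem.add("assistant", "Hi! How can I help?")
print(len(mem.load()))  # 2 while the execution runs; nothing survives it
```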
## PostgreSQL Memory
PostgreSQL Memory persists conversations in a dedicated database store. It is the default memory backend for new AI Agent nodes and the recommended choice for production deployments.
### Configuration
| Field | Default | Description |
|---|---|---|
| Session ID | Auto-generated | Groups messages into conversations. Leave blank to use the default session derived from the chat session ID. |
| Expiry (hours) | None (no expiry) | Optional. Messages older than this value are automatically pruned. Set to 0 or leave blank for indefinite retention. |
> **Default for Simple Flow.** New AI Agent nodes are created with PostgreSQL Memory attached by default.
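The hour-based expiry works by pruning rows older than the cutoff on each access. The sketch below illustrates the idea using SQLite in place of PostgreSQL; the table and column names are assumptions for illustration, not NodeLoom's actual schema:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE chat_messages (session_id TEXT, role TEXT, content TEXT, created_at TEXT)"
)

def add_message(session_id, role, content, created_at=None):
    ts = (created_at or datetime.now(timezone.utc)).isoformat()
    conn.execute("INSERT INTO chat_messages VALUES (?, ?, ?, ?)",
                 (session_id, role, content, ts))

def prune_expired(expiry_hours):
    """Delete messages older than expiry_hours; 0/None keeps everything."""
    if not expiry_hours:
        return
    cutoff = (datetime.now(timezone.utc) - timedelta(hours=expiry_hours)).isoformat()
    # UTC ISO timestamps with identical format compare correctly as strings.
    conn.execute("DELETE FROM chat_messages WHERE created_at < ?", (cutoff,))

add_message("s1", "user", "old", datetime.now(timezone.utc) - timedelta(hours=48))
add_message("s1", "user", "recent")
prune_expired(24)  # Expiry (hours) = 24 drops the 48-hour-old message
rows = conn.execute("SELECT content FROM chat_messages").fetchall()
print(rows)  # [('recent',)]
```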
## Redis Memory
Redis Memory stores conversations in Redis with a configurable TTL. It offers the fastest read/write performance but relies on Redis persistence settings for durability.
### Configuration
| Field | Default | Description |
|---|---|---|
| Session ID | Auto-generated | Groups messages into conversations. Same behaviour as PostgreSQL Memory. |
| TTL (seconds) | 3600 | Time-to-live for conversation keys. After expiry, Redis automatically deletes the conversation data. |
> **Redis persistence.** Durability depends on your Redis configuration: without RDB snapshots or AOF enabled, conversation data is lost when Redis restarts.
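The TTL semantics mirror Redis `SETEX`: each save resets the key's expiry clock, and an expired key reads back as if it never existed. A self-contained sketch of that behaviour (this mimics Redis in plain Python rather than using redis-py; names are illustrative):

```python
import json

class RedisLikeMemory:
    """Illustrative TTL store mimicking Redis SETEX semantics."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, serialized messages)

    def save(self, session_id, messages, now):
        # Every save refreshes the TTL, like SETEX on the conversation key.
        self._store[f"chat:{session_id}"] = (now + self.ttl, json.dumps(messages))

    def load(self, session_id, now):
        entry = self._store.get(f"chat:{session_id}")
        if entry is None or now >= entry[0]:
            return []  # expired keys behave as if deleted
        return json.loads(entry[1])

mem = RedisLikeMemory(ttl_seconds=3600)
mem.save("s1", [{"role": "user", "content": "hi"}], now=0)
print(mem.load("s1", now=10))    # within TTL -> the saved messages
print(mem.load("s1", now=4000))  # past TTL -> []
```

Passing `now` explicitly keeps the example deterministic; real Redis tracks expiry with its own clock.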
## Window Buffer Memory
Window Buffer Memory wraps any memory backend and applies a sliding window that retains only the last N messages. Older messages are discarded when the window limit is reached.
This is useful for long-running conversations where unbounded history would consume too many tokens and increase LLM costs. The window size is measured in individual messages (both user and assistant messages count).
### Configuration
| Field | Default | Description |
|---|---|---|
| Window Size | 20 | Maximum number of messages to retain. When exceeded, the oldest messages are dropped. |
| Session ID | Auto-generated | Groups messages into conversations. |
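The sliding window itself is a simple truncation: keep the last N messages, drop the rest. A sketch of that trimming step (function name is illustrative):

```python
def apply_window(messages, window_size=20):
    """Keep only the last window_size messages (user and assistant both count)."""
    return messages[-window_size:] if window_size else messages

# 30-message history: the default window of 20 drops the 10 oldest messages.
history = [
    {"role": "user" if i % 2 == 0 else "assistant", "content": f"msg {i}"}
    for i in range(30)
]
trimmed = apply_window(history, window_size=20)
print(len(trimmed), trimmed[0]["content"])  # 20 msg 10
```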
## Canvas Validation
The Canvas editor enforces the memory requirement at design time. If an AI Agent node does not have a memory node connected to its memory handle:
- An amber validation badge appears on the AI Agent node.
- The node configuration panel shows a red-bordered warning explaining that memory is required.
- Execution is blocked. The workflow will not run until a memory node is connected.
## What Gets Stored
Memory nodes store the full conversation history in an OpenAI-compatible message format. This includes:
- User messages: the raw text sent by the user.
- Assistant messages: the agent's final responses.
- Tool call messages: records of which tools the agent invoked, with what arguments, and what results were returned.
- System prompts: if the agent has a system prompt, it is prepended to the history on each turn (not stored repeatedly).
Here is an example stored conversation in the OpenAI-compatible format:

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "What time is it in Tokyo?" },
  { "role": "assistant", "content": null, "tool_calls": [
    { "id": "call_1", "function": { "name": "get_current_time", "arguments": "{\"timezone\": \"Asia/Tokyo\"}" } }
  ]},
  { "role": "tool", "tool_call_id": "call_1", "content": "2026-02-17T22:30:00+09:00" },
  { "role": "assistant", "content": "It is currently 10:30 PM in Tokyo (JST)." }
]
```

## Choosing a Backend
| Criteria | Recommendation |
|---|---|
| Production agent with long-lived sessions | PostgreSQL Memory. Durable history with optional expiry; the default for new agents. |
| High-throughput, short-lived sessions | Redis Memory. Fastest reads, automatic cleanup via TTL. |
| Cost-conscious long conversations | Window Buffer Memory. Caps token usage by limiting context window. |
| Testing and prototyping | Simple Memory. No setup, ephemeral by design. |