AI & Intelligence Nodes
NodeLoom's AI nodes let you integrate large language models, build autonomous agents, classify and route data intelligently, perform sentiment analysis, and power RAG pipelines, all without writing code.
Provider flexibility
All AI nodes share the same provider options: OpenAI, Anthropic, Gemini, Azure OpenAI, or any OpenAI-compatible custom endpoint. Configure credentials once and you can switch providers without rebuilding the rest of the workflow.
Core AI Nodes
The six primary AI nodes for intelligence tasks:
- Send prompts to an LLM and receive completions. Supports OpenAI, Anthropic, Gemini, Azure OpenAI, and custom OpenAI-compatible endpoints. Configure temperature, max tokens, system prompts, and streaming.
- Deploy an autonomous agent that reasons and takes actions. Supports ReAct, Conversational, and Tools Only modes with configurable memory backends and tool access.
- Route workflow execution based on LLM analysis. The model evaluates input data and selects a branch, enabling intelligent conditional logic without hardcoded rules.
- Categorize input into predefined labels. Provide category definitions and the model assigns the best match with a confidence score.
- Analyze text for emotional tone. Returns overall sentiment (positive, negative, neutral) along with a detailed emotion breakdown and confidence scores.
- Generate vector embeddings for text. Used to power RAG pipelines, semantic search, similarity matching, and clustering workflows.
AI Chat
The AI Chat node sends a prompt to a large language model and returns the completion. Use it for text generation, summarization, translation, extraction, and any task that benefits from LLM reasoning.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| provider | string | "openai" | LLM provider to use: openai, anthropic, gemini, azure_openai, or custom. |
| model | string | "gpt-4o" | Model identifier. Options depend on the selected provider. |
| systemPrompt | string | "" | Optional system-level instruction that sets the model's behavior and context. |
| userPrompt | string | required | The user prompt to send to the model. Supports expression interpolation. |
| temperature | number | 0.7 | Controls randomness. 0 = deterministic, 1 = creative. Range: 0 to 1. |
| maxTokens | number | 1024 | Maximum number of tokens in the model's response. |
| customEndpoint | string | "" | URL for custom OpenAI-compatible endpoints. Only used when provider is "custom". |
| customModel | string | "" | Model name for custom endpoints. Only used when provider is "custom". |
Example

```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "systemPrompt": "You are a helpful customer support agent. Be concise and friendly.",
  "userPrompt": "Summarize this ticket: {{ $json.ticketBody }}",
  "temperature": 0.3,
  "maxTokens": 512
}
```

Expression interpolation
Prompt fields support expressions such as {{ $json.ticketBody }}, which are resolved against the incoming item at runtime, so you can build prompts from upstream data.
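For self-hosted backends, the same node can target any OpenAI-compatible server via the customEndpoint and customModel parameters. A minimal sketch, assuming a local Ollama server (Ollama exposes an OpenAI-compatible API at /v1 on its default port); the URL and model name here are illustrative, not defaults:

```json
{
  "provider": "custom",
  "customEndpoint": "http://localhost:11434/v1",
  "customModel": "llama3.1",
  "userPrompt": "Translate to French: {{ $json.text }}",
  "temperature": 0.2,
  "maxTokens": 256
}
```

The provider must be set to "custom" for customEndpoint and customModel to take effect; both are ignored otherwise.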
AI Agent
The AI Agent node deploys an autonomous agent that can reason, use tools, and maintain conversation history. It iterates in a loop (thinking, acting, observing) until it reaches an answer or hits the maximum iteration limit.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| agentType | string | "react" | Agent execution mode: react (Reason-Act loop), conversational (chat with memory), or tools_only (direct tool invocation). |
| provider | string | "openai" | LLM provider: openai, anthropic, gemini, azure_openai, or custom. |
| model | string | "gpt-4o" | Model identifier for the agent's reasoning. |
| systemPrompt | string | "" | System instructions defining the agent's role, constraints, and personality. |
| maxIterations | number | 10 | Maximum number of reasoning iterations before the agent stops. Prevents infinite loops. |
| temperature | number | 0.7 | Controls randomness in the agent's responses. Lower values produce more consistent behavior. |
| maxTokens | number | 2048 | Maximum tokens per agent response. |
| memoryEnabled | boolean | true | Whether to persist conversation history across interactions. Requires a connected memory node. |
| memorySessionKey | string | "" | Key to isolate memory sessions. Use expressions (e.g. {{ $json.userId }}) for per-user memory. |
| tools | string[] | [] | List of connected tool node IDs the agent can invoke. Tools are auto-discovered from connected nodes. |
Example

```json
{
  "agentType": "react",
  "provider": "openai",
  "model": "gpt-4o",
  "systemPrompt": "You are a research assistant. Use the search tool to find information and the database tool to store results.",
  "maxIterations": 15,
  "temperature": 0.5,
  "maxTokens": 2048,
  "memoryEnabled": true,
  "memorySessionKey": "{{ $json.sessionId }}"
}
```

AI Agent Modes
The AI Agent node supports three execution modes, each suited to different use cases:
| Mode | Behavior | Best For |
|---|---|---|
| ReAct | Reason-Act loop: the agent thinks, selects a tool, observes the result, and repeats until it has an answer. | Complex multi-step tasks requiring tool use and reasoning |
| Conversational | Maintains conversation history with memory. The agent responds naturally and can use tools when needed. | Chat-based workflows, customer support, interactive assistants |
| Tools Only | The agent directly invokes tools based on the prompt without explicit reasoning steps. | Deterministic tool execution, API orchestration |
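As an illustration of the Tools Only mode described above, a minimal configuration might look like the sketch below. The "weather lookup" tool referenced in the system prompt is a hypothetical connected tool node, not a built-in:

```json
{
  "agentType": "tools_only",
  "provider": "openai",
  "model": "gpt-4o",
  "systemPrompt": "Call the weather lookup tool with the city name from the input.",
  "maxIterations": 3,
  "temperature": 0,
  "memoryEnabled": false
}
```

Because tools_only skips explicit reasoning steps, a low maxIterations and a temperature of 0 keep tool invocation deterministic.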
Vector Store
The Vector Store node stores, queries, and manages vector embeddings in a vector database. It is the core building block for Retrieval-Augmented Generation (RAG) pipelines, enabling semantic search over your own documents and data.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| vectorProvider | string | required | Vector database backend: pinecone, weaviate, qdrant, or chroma. |
| operation | string | "query" | Operation to perform: query (search), upsert (store/update), or delete. |
| collectionName | string | required | Name of the vector collection (index) to operate on. |
| embeddingModel | string | "text-embedding-3-small" | Embedding model used to vectorize text. Must match the model used at index time. |
| queryText | string | "" | The search query text. Used with the query operation. |
| topK | number | 5 | Number of top results to return from a query. |
| similarityThreshold | number | 0.7 | Minimum similarity score (0 to 1) for results to be included. |
| documentId | string | "" | Unique identifier for upserting or deleting a specific document. |
| content | string | "" | Text content to embed and store. Used with the upsert operation. |
| metadata | object | {} | Key-value metadata attached to stored documents for filtering. |
| filter | object | {} | Metadata filter applied to queries. Syntax depends on the vector provider. |
Example
// Upsert a document
{
"vectorProvider": "pinecone",
"operation": "upsert",
"collectionName": "knowledge-base",
"embeddingModel": "text-embedding-3-small",
"documentId": "doc-{{ $json.id }}",
"content": "{{ $json.pageContent }}",
"metadata": {
"source": "{{ $json.url }}",
"category": "{{ $json.category }}"
}
}
// Query for similar documents
{
"vectorProvider": "pinecone",
"operation": "query",
"collectionName": "knowledge-base",
"embeddingModel": "text-embedding-3-small",
"queryText": "{{ $json.userQuestion }}",
"topK": 5,
"similarityThreshold": 0.75,
"filter": {
"category": "{{ $json.selectedCategory }}"
}
}Embedding model consistency
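The third operation, delete, removes a stored document by its identifier. A minimal sketch using the documentId parameter from the table above:

```json
{
  "vectorProvider": "pinecone",
  "operation": "delete",
  "collectionName": "knowledge-base",
  "documentId": "doc-{{ $json.id }}"
}
```

No embedding model or query text is needed for deletion; the document is located by its ID alone.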
Memory Nodes
Memory nodes provide conversation persistence for AI Agents. Attach a memory node to give your agent the ability to recall previous interactions.
- In-memory conversation storage. Fast and simple, but data is lost when the execution completes. Best for single-turn or short-lived agents.
- Persist conversation history in PostgreSQL. Durable and queryable, ideal for production agents that need long-term memory across sessions.
- Store conversation history in Redis with configurable TTL. High-performance option with automatic expiration for session-scoped memory.
- Keep only the last N messages in context. Prevents token limits from being exceeded while maintaining recent conversation context.
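For the Redis option, session scoping and expiration might be configured along these lines. The field names below (sessionKey, ttlSeconds) are illustrative assumptions, not documented parameters:

```json
{
  "sessionKey": "{{ $json.userId }}",
  "ttlSeconds": 3600
}
```

Pairing a per-user session key with a TTL lets each user's history expire independently once the session goes idle.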
Memory is required
When memoryEnabled is true on an AI Agent, a memory node must be connected to the agent; without one, conversation history cannot be persisted across interactions.
RAG Nodes
Retrieval-Augmented Generation (RAG) nodes let you build pipelines that ground LLM responses in your own data. Ingest documents, split them into chunks, generate embeddings, store them in a vector database, and retrieve relevant context at query time.
- Store and retrieve vector embeddings from a vector database. Supports Pinecone, Weaviate, Qdrant, and Chroma as backends. Configure similarity metrics and top-K retrieval.
- Ingest documents from various sources (files, URLs, APIs) and convert them to text for downstream processing. Supports PDF, DOCX, HTML, Markdown, and plain text.
- Split long documents into smaller chunks for embedding. Configurable chunk size, overlap, and splitting strategy (character, token, sentence, or recursive).
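A splitter configuration covering the options listed above might look like the sketch below; the exact field names (chunkSize, chunkOverlap, strategy) are illustrative assumptions:

```json
{
  "chunkSize": 1000,
  "chunkOverlap": 200,
  "strategy": "recursive"
}
```

An overlap of roughly 10-20% of the chunk size is a common starting point, so that sentences spanning a chunk boundary remain retrievable from either side.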
Supported Providers
AI nodes work with multiple LLM and embedding providers. Configure provider credentials once and reference them across all your workflows.
| Provider | Models | Features |
|---|---|---|
| OpenAI | GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo, text-embedding-3-* | Chat, embeddings, function calling, streaming |
| Anthropic | Claude Opus, Sonnet, Haiku | Chat, tool use, streaming, extended context |
| Google Gemini | Gemini 2.0, 1.5 Pro, 1.5 Flash | Chat, embeddings, multimodal, streaming |
| Azure OpenAI | Your deployed Azure models | Same as OpenAI with Azure enterprise compliance |
| Custom | Any OpenAI-compatible endpoint | Self-hosted models, Ollama, vLLM, LiteLLM |