AI & Intelligence Nodes

NodeLoom's AI nodes let you integrate large language models, build autonomous agents, classify and route data intelligently, perform sentiment analysis, and power RAG pipelines, all without writing code.

Provider flexibility

AI nodes support multiple LLM providers. Configure your preferred provider (OpenAI, Anthropic, Google Gemini, Azure OpenAI, or a custom endpoint) through credentials, and switch between them without changing your workflow.
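For example, moving an AI Chat node from OpenAI to Anthropic only means changing the `provider` and `model` fields; the prompt and the rest of the workflow stay untouched. A sketch using the AI Chat parameters documented on this page (model names are illustrative):

```json
// Same node, two providers: only "provider" and "model" change
{ "provider": "openai",    "model": "gpt-4o",                    "userPrompt": "{{ $json.text }}" }
{ "provider": "anthropic", "model": "claude-sonnet-4-20250514", "userPrompt": "{{ $json.text }}" }
```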

Core AI Nodes

The six primary AI nodes for intelligence tasks.

AI Chat

Send prompts to an LLM and receive completions. Supports OpenAI, Anthropic, Gemini, Azure OpenAI, and custom API-compatible endpoints. Configure temperature, max tokens, system prompts, and streaming.

AI Agent

Deploy an autonomous agent that reasons and takes actions. Supports ReAct, Conversational, and Tools Only modes with configurable memory backends and tool access.

AI Decision

Route workflow execution based on LLM analysis. The model evaluates input data and selects a branch, enabling intelligent conditional logic without hardcoded rules.

AI Classifier

Categorize input into predefined labels. Provide category definitions and the model assigns the best match with a confidence score.

Sentiment Analysis

Analyze text for emotional tone. Returns overall sentiment (positive, negative, neutral) along with a detailed emotion breakdown and confidence scores.

Embedding

Generate vector embeddings for text. Used to power RAG pipelines, semantic search, similarity matching, and clustering workflows.

AI Chat

The AI Chat node sends a prompt to a large language model and returns the completion. Use it for text generation, summarization, translation, extraction, and any task that benefits from LLM reasoning.

Parameters

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| provider | string | "openai" | LLM provider to use: openai, anthropic, gemini, azure_openai, or custom. |
| model | string | "gpt-4o" | Model identifier. Options depend on the selected provider. |
| systemPrompt | string | "" | Optional system-level instruction that sets the model's behavior and context. |
| userPrompt | string | required | The user prompt to send to the model. Supports expression interpolation. |
| temperature | number | 0.7 | Controls randomness. 0 = deterministic, 1 = creative. Range: 0 to 1. |
| maxTokens | number | 1024 | Maximum number of tokens in the model's response. |
| customEndpoint | string | "" | URL for custom OpenAI-compatible endpoints. Only used when provider is "custom". |
| customModel | string | "" | Model name for custom endpoints. Only used when provider is "custom". |

Example

AI Chat configuration
```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "systemPrompt": "You are a helpful customer support agent. Be concise and friendly.",
  "userPrompt": "Summarize this ticket: {{ $json.ticketBody }}",
  "temperature": 0.3,
  "maxTokens": 512
}
```

Expression interpolation

Use {{ $json.fieldName }} syntax in your prompts to inject dynamic data from upstream nodes. This works in both system and user prompts.
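As a sketch, an AI Chat node that personalizes both prompts from upstream data (the field names `customerName` and `message` are placeholders for whatever your upstream node emits):

```json
{
  "systemPrompt": "You are replying to {{ $json.customerName }}. Be concise.",
  "userPrompt": "Draft a reply to this message: {{ $json.message }}"
}
```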

AI Agent

The AI Agent node deploys an autonomous agent that can reason, use tools, and maintain conversation history. It iterates in a loop (thinking, acting, observing) until it reaches an answer or hits the maximum iteration limit.

Parameters

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| agentType | string | "react" | Agent execution mode: react (Reason-Act loop), conversational (chat with memory), or tools_only (direct tool invocation). |
| provider | string | "openai" | LLM provider: openai, anthropic, gemini, azure_openai, or custom. |
| model | string | "gpt-4o" | Model identifier for the agent's reasoning. |
| systemPrompt | string | "" | System instructions defining the agent's role, constraints, and personality. |
| maxIterations | number | 10 | Maximum number of reasoning iterations before the agent stops. Prevents infinite loops. |
| temperature | number | 0.7 | Controls randomness in the agent's responses. Lower values produce more consistent behavior. |
| maxTokens | number | 2048 | Maximum tokens per agent response. |
| memoryEnabled | boolean | true | Whether to persist conversation history across interactions. Requires a connected memory node. |
| memorySessionKey | string | "" | Key to isolate memory sessions. Use expressions (e.g. {{ $json.userId }}) for per-user memory. |
| tools | string[] | [] | List of connected tool node IDs the agent can invoke. Tools are auto-discovered from connected nodes. |

Example

AI Agent configuration
```json
{
  "agentType": "react",
  "provider": "openai",
  "model": "gpt-4o",
  "systemPrompt": "You are a research assistant. Use the search tool to find information and the database tool to store results.",
  "maxIterations": 15,
  "temperature": 0.5,
  "maxTokens": 2048,
  "memoryEnabled": true,
  "memorySessionKey": "{{ $json.sessionId }}"
}
```

AI Agent Modes

The AI Agent node supports three execution modes, each suited to different use cases:

| Mode | Behavior | Best For |
| --- | --- | --- |
| ReAct | Reason-Act loop: the agent thinks, selects a tool, observes the result, and repeats until it has an answer. | Complex multi-step tasks requiring tool use and reasoning |
| Conversational | Maintains conversation history with memory. The agent responds naturally and can use tools when needed. | Chat-based workflows, customer support, interactive assistants |
| Tools Only | The agent directly invokes tools based on the prompt without explicit reasoning steps. | Deterministic tool execution, API orchestration |
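For instance, a Conversational-mode support assistant could be configured as in the sketch below, using the parameters documented above. Per-user memory isolation comes from the memorySessionKey expression; the company name is a placeholder:

```json
{
  "agentType": "conversational",
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "systemPrompt": "You are a friendly support assistant for Acme Co.",
  "maxIterations": 5,
  "memoryEnabled": true,
  "memorySessionKey": "{{ $json.userId }}"
}
```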

Vector Store

The Vector Store node stores, queries, and manages vector embeddings in a vector database. It is the core building block for Retrieval-Augmented Generation (RAG) pipelines, enabling semantic search over your own documents and data.

Parameters

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| vectorProvider | string | required | Vector database backend: pinecone, weaviate, qdrant, or chroma. |
| operation | string | "query" | Operation to perform: query (search), upsert (store/update), or delete. |
| collectionName | string | required | Name of the vector collection (index) to operate on. |
| embeddingModel | string | "text-embedding-3-small" | Embedding model used to vectorize text. Must match the model used at index time. |
| queryText | string | "" | The search query text. Used with the query operation. |
| topK | number | 5 | Number of top results to return from a query. |
| similarityThreshold | number | 0.7 | Minimum similarity score (0 to 1) for results to be included. |
| documentId | string | "" | Unique identifier for upserting or deleting a specific document. |
| content | string | "" | Text content to embed and store. Used with the upsert operation. |
| metadata | object | {} | Key-value metadata attached to stored documents for filtering. |
| filter | object | {} | Metadata filter applied to queries. Syntax depends on the vector provider. |

Example

Vector Store configuration
```json
// Upsert a document
{
  "vectorProvider": "pinecone",
  "operation": "upsert",
  "collectionName": "knowledge-base",
  "embeddingModel": "text-embedding-3-small",
  "documentId": "doc-{{ $json.id }}",
  "content": "{{ $json.pageContent }}",
  "metadata": {
    "source": "{{ $json.url }}",
    "category": "{{ $json.category }}"
  }
}

// Query for similar documents
{
  "vectorProvider": "pinecone",
  "operation": "query",
  "collectionName": "knowledge-base",
  "embeddingModel": "text-embedding-3-small",
  "queryText": "{{ $json.userQuestion }}",
  "topK": 5,
  "similarityThreshold": 0.75,
  "filter": {
    "category": "{{ $json.selectedCategory }}"
  }
}
```

Embedding model consistency

Always use the same embedding model for both upserting and querying. Mixing different models will produce incorrect similarity scores because vector dimensions and representations differ between models.
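The delete operation uses the same parameter set; a minimal sketch that removes a single document by ID (the collection name and ID pattern are illustrative):

```json
{
  "vectorProvider": "pinecone",
  "operation": "delete",
  "collectionName": "knowledge-base",
  "documentId": "doc-{{ $json.id }}"
}
```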

Memory Nodes

Memory nodes provide conversation persistence for AI Agents. Attach a memory node to give your agent the ability to recall previous interactions.

Simple Memory

In-memory conversation storage. Fast and simple, but data is lost when the execution completes. Best for single-turn or short-lived agents.

PostgreSQL Memory

Persist conversation history in PostgreSQL. Durable and queryable, ideal for production agents that need long-term memory across sessions.

Redis Memory

Store conversation history in Redis with configurable TTL. High-performance option with automatic expiration for session-scoped memory.
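As an illustration only (the exact field names vary by memory node and are hypothetical here), a Redis memory scoped per user with a one-hour expiry might look like:

```json
// Hypothetical field names, for illustration
{
  "ttlSeconds": 3600,
  "sessionKey": "{{ $json.userId }}"
}
```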

Window Buffer Memory

Keep only the last N messages in context. Prevents token limits from being exceeded while maintaining recent conversation context.

Memory is required

The AI Agent node requires a memory node to be connected. Even for stateless agents, attach a Simple Memory node as the minimum configuration.

RAG Nodes

Retrieval-Augmented Generation (RAG) nodes let you build pipelines that ground LLM responses in your own data. Ingest documents, split them into chunks, generate embeddings, store them in a vector database, and retrieve relevant context at query time.

Vector Store

Store and retrieve vector embeddings from a vector database. Supports Pinecone, Weaviate, Qdrant, and Chroma as backends. Configure similarity metrics and top-K retrieval.

Document Loader

Ingest documents from various sources (files, URLs, APIs) and convert them to text for downstream processing. Supports PDF, DOCX, HTML, Markdown, and plain text.

Text Splitter

Split long documents into smaller chunks for embedding. Configurable chunk size, overlap, and splitting strategy (character, token, sentence, or recursive).
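A typical RAG ingestion flow is Document Loader → Text Splitter → Vector Store (upsert). As an illustrative sketch of a splitter configuration (the field names are hypothetical, but the options mirror those above: strategy, chunk size, overlap):

```json
// Hypothetical field names, for illustration
{
  "strategy": "recursive",
  "chunkSize": 1000,
  "chunkOverlap": 200
}
```

A modest overlap between adjacent chunks helps preserve context that would otherwise be cut at a chunk boundary.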

Supported Providers

AI nodes work with multiple LLM and embedding providers. Configure provider credentials once and reference them across all your workflows.

| Provider | Models | Features |
| --- | --- | --- |
| OpenAI | GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo, text-embedding-3-* | Chat, embeddings, function calling, streaming |
| Anthropic | Claude Opus, Sonnet, Haiku | Chat, tool use, streaming, extended context |
| Google Gemini | Gemini 2.0, 1.5 Pro, 1.5 Flash | Chat, embeddings, multimodal, streaming |
| Azure OpenAI | Your deployed Azure models | Same as OpenAI with Azure enterprise compliance |
| Custom | Any OpenAI-compatible endpoint | Self-hosted models, Ollama, vLLM, LiteLLM |
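For the custom provider, point customEndpoint at any OpenAI-compatible server and name the model with customModel (both parameters are documented under AI Chat). A sketch for a local Ollama instance, which exposes an OpenAI-compatible API under /v1 by default; the URL and model name are examples:

```json
{
  "provider": "custom",
  "customEndpoint": "http://localhost:11434/v1",
  "customModel": "llama3.1",
  "userPrompt": "{{ $json.prompt }}"
}
```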