AI & Intelligence Nodes

NodeLoom's AI nodes let you integrate large language models, build autonomous agents, classify and route data intelligently, perform sentiment analysis, and power RAG pipelines -- all without writing code.

Provider flexibility

AI nodes support multiple LLM providers. Configure your preferred provider (OpenAI, Anthropic, Google Gemini, Azure OpenAI, or a custom endpoint) through credentials, and switch between them without changing your workflow.

Core AI Nodes

The six primary AI nodes for intelligence tasks.

AI Chat

Send prompts to an LLM and receive completions. Supports OpenAI, Anthropic, Gemini, Azure OpenAI, and custom OpenAI-compatible endpoints. Configure temperature, max tokens, system prompts, and streaming.
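Under the hood, a node like this typically assembles a request for an OpenAI-compatible chat-completions endpoint. As a rough sketch (the field names follow the public OpenAI API; NodeLoom's internal wire format may differ, and `build_chat_request` is a hypothetical helper):

```python
def build_chat_request(prompt: str,
                       system: str = "You are a helpful assistant.",
                       model: str = "gpt-4o",
                       temperature: float = 0.7,
                       max_tokens: int = 512,
                       stream: bool = False) -> dict:
    """Assemble a chat-completion payload from node parameters.
    Illustrative only -- not NodeLoom's actual implementation."""
    return {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "stream": stream,
        "messages": [
            {"role": "system", "content": system},  # system prompt from node config
            {"role": "user", "content": prompt},    # incoming workflow data
        ],
    }

payload = build_chat_request("Summarize this ticket in one sentence.")
```

The same payload shape works against any OpenAI-compatible backend, which is why switching providers doesn't require changing the workflow.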

AI Agent

Deploy an autonomous agent that reasons and takes actions. Supports ReAct, Conversational, and Tools Only modes with configurable memory backends and tool access.

AI Decision

Route workflow execution based on LLM analysis. The model evaluates input data and selects a branch, enabling intelligent conditional logic without hardcoded rules.
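The routing step itself reduces to mapping the model's chosen label onto one of the configured branches, with a fallback when the output doesn't match any of them. A minimal sketch (the `route` function and branch names are illustrative, not NodeLoom's API):

```python
def route(llm_label: str, branches: list[str], default: str) -> str:
    """Map an LLM's chosen label onto a configured branch.
    Falls back to a default branch when the model's output
    doesn't match anything -- LLM output is never trusted blindly."""
    label = llm_label.strip().lower()
    for branch in branches:
        if branch.lower() == label:
            return branch
    return default

branches = ["refund", "escalate", "auto-reply"]
choice = route("  Escalate ", branches, default="auto-reply")
```

Normalizing and validating the model's answer against the fixed branch list is what keeps the conditional logic reliable even when the model's wording varies.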

AI Classifier

Categorize input into predefined labels. Provide category definitions and the model assigns the best match with a confidence score.

Sentiment Analysis

Analyze text for emotional tone. Returns overall sentiment (positive, negative, neutral) along with a detailed emotion breakdown and confidence scores.

Embedding

Generate vector embeddings for text. Used to power RAG pipelines, semantic search, similarity matching, and clustering workflows.
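Semantic search and similarity matching both rank embeddings by cosine similarity. A self-contained sketch of the metric (toy 3-dimensional vectors stand in for real embeddings, which have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors:
    1.0 = same direction (very similar), 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical toy "embeddings" score ~1.0.
score = cosine_similarity([0.2, 0.8, 0.1], [0.2, 0.8, 0.1])
```

Because the metric compares direction rather than magnitude, texts with similar meaning score highly even when their raw vectors differ in scale.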

AI Agent Modes

The AI Agent node supports three execution modes, each suited to different use cases:

| Mode | Behavior | Best For |
|------|----------|----------|
| ReAct | Reason-Act loop: the agent thinks, selects a tool, observes the result, and repeats until it has an answer. | Complex multi-step tasks requiring tool use and reasoning |
| Conversational | Maintains conversation history with memory. The agent responds naturally and can use tools when needed. | Chat-based workflows, customer support, interactive assistants |
| Tools Only | The agent directly invokes tools based on the prompt without explicit reasoning steps. | Deterministic tool execution, API orchestration |
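The ReAct loop above can be sketched in a few lines. This is an illustrative stand-in, not NodeLoom's implementation: `llm` represents whatever model client the platform wires in (here scripted for the demo), and `tools` is a plain dict standing in for the tool registry.

```python
def react_agent(question, llm, tools, max_steps=5):
    """Minimal ReAct loop: think -> act -> observe, repeated until
    the model emits a final answer or the step budget runs out."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)                # model decides the next step
        if "answer" in step:
            return step["answer"]             # final answer: stop the loop
        observation = tools[step["tool"]](step["input"])   # act
        transcript += (f"\nAction: {step['tool']}({step['input']})"
                       f"\nObservation: {observation}")    # observe
    return None  # gave up after max_steps

# Scripted demo: a fake "model" that first requests a tool, then answers.
scripted = [{"tool": "add", "input": (2, 3)}, {"answer": "5"}]
fake_llm = lambda transcript: scripted.pop(0)
result = react_agent("What is 2 + 3?", fake_llm, {"add": lambda a: a[0] + a[1]})
```

The key property is the growing transcript: each tool observation is fed back to the model, so later reasoning steps can build on earlier results.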

Memory Nodes

Memory nodes provide conversation persistence for AI Agents. Attach a memory node to give your agent the ability to recall previous interactions.

Simple Memory

In-memory conversation storage. Fast and simple, but data is lost when the execution completes. Best for single-turn or short-lived agents.

PostgreSQL Memory

Persist conversation history in PostgreSQL. Durable and queryable, ideal for production agents that need long-term memory across sessions.

Redis Memory

Store conversation history in Redis with configurable TTL. High-performance option with automatic expiration for session-scoped memory.

Window Buffer Memory

Keep only the last N messages in context. Prevents token limits from being exceeded while maintaining recent conversation context.
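The mechanic is a fixed-size buffer that silently drops the oldest turns. A minimal sketch of the idea (illustrative class, not NodeLoom's actual implementation):

```python
from collections import deque

class WindowBufferMemory:
    """Keep only the last N messages; older turns are dropped so the
    prompt stays under the model's context limit."""

    def __init__(self, window_size: int = 4):
        # deque with maxlen evicts the oldest entry automatically
        self.messages = deque(maxlen=window_size)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def context(self) -> list[dict]:
        """Return the retained messages, oldest first."""
        return list(self.messages)

mem = WindowBufferMemory(window_size=2)
for i in range(5):
    mem.add("user", f"message {i}")
```

After five additions only the two most recent messages remain, which is exactly the trade-off: bounded token usage at the cost of forgetting older context.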

Memory is required

The AI Agent node requires a memory node to be connected. Even for stateless agents, attach a Simple Memory node as the minimum configuration.

RAG Nodes

Retrieval-Augmented Generation (RAG) nodes let you build pipelines that ground LLM responses in your own data. Ingest documents, split them into chunks, generate embeddings, store them in a vector database, and retrieve relevant context at query time.

Vector Store

Store and retrieve vector embeddings from a vector database. Supports Pinecone, Weaviate, Qdrant, and Chroma as backends. Configure similarity metrics and top-K retrieval.
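Top-K retrieval means ranking every stored vector by similarity to the query and keeping the k best matches. A brute-force sketch of that idea (real backends like Pinecone or Qdrant use approximate indexes instead of a full scan; `top_k` is an illustrative function, not NodeLoom's API):

```python
import math

def top_k(query_vec, store, k=2):
    """Rank stored (doc_id, vector) pairs by cosine similarity to the
    query and return the ids of the k closest documents."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))
    # Sort the whole store by similarity, best first (brute force).
    ranked = sorted(store, key=lambda item: cos(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

store = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.9, 0.1])]
hits = top_k([1.0, 0.0], store, k=2)
```

In a RAG pipeline the returned documents become the context injected into the LLM prompt at query time.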

Document Loader

Ingest documents from various sources (files, URLs, APIs) and convert them to text for downstream processing. Supports PDF, DOCX, HTML, Markdown, and plain text.

Text Splitter

Split long documents into smaller chunks for embedding. Configurable chunk size, overlap, and splitting strategy (character, token, sentence, or recursive).
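The character strategy with overlap can be sketched directly; each chunk repeats the tail of the previous one so a sentence cut at a boundary still appears whole in one chunk. (Illustrative function, not NodeLoom's implementation.)

```python
def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Character-based splitter: slide a window of chunk_size across
    the text, stepping by (chunk_size - overlap) so consecutive
    chunks share `overlap` characters of context."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# 250 characters with size 100 / overlap 20 yields 4 chunks.
chunks = split_text("a" * 250, chunk_size=100, overlap=20)
```

Token- and sentence-based strategies follow the same windowing pattern but measure the window in tokens or sentence boundaries instead of characters.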

Supported Providers

AI nodes work with multiple LLM and embedding providers. Configure provider credentials once and reference them across all your workflows.

| Provider | Models | Features |
|----------|--------|----------|
| OpenAI | GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo, text-embedding-3-* | Chat, embeddings, function calling, streaming |
| Anthropic | Claude Opus, Sonnet, Haiku | Chat, tool use, streaming, extended context |
| Google Gemini | Gemini 2.0, 1.5 Pro, 1.5 Flash | Chat, embeddings, multimodal, streaming |
| Azure OpenAI | Your deployed Azure models | Same as OpenAI with Azure enterprise compliance |
| Custom | Any OpenAI-compatible endpoint | Self-hosted models, Ollama, vLLM, LiteLLM |