AI & Intelligence Nodes
NodeLoom's AI nodes let you integrate large language models, build autonomous agents, classify and route data intelligently, perform sentiment analysis, and power RAG pipelines, all without writing code.
Provider flexibility: AI nodes are provider-agnostic. Configure credentials once and you can swap models or providers without rewiring a workflow (see Supported Providers below).
Core AI Nodes
NodeLoom provides six primary AI nodes for intelligence tasks:
- Send prompts to an LLM and receive completions. Supports OpenAI, Anthropic, Gemini, Azure OpenAI, and custom API-compatible endpoints. Configure temperature, max tokens, system prompts, and streaming (see the sketch after this list).
- Deploy an autonomous agent that reasons and takes actions. Supports ReAct, Conversational, and Tools Only modes with configurable memory backends and tool access.
- Route workflow execution based on LLM analysis. The model evaluates input data and selects a branch, enabling intelligent conditional logic without hardcoded rules.
- Categorize input into predefined labels. Provide category definitions and the model assigns the best match with a confidence score.
- Analyze text for emotional tone. Returns overall sentiment (positive, negative, or neutral) along with a detailed emotion breakdown and confidence scores.
- Generate vector embeddings for text. Used to power RAG pipelines, semantic search, similarity matching, and clustering workflows.
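NodeLoom configures all of this visually, but it can help to see what the prompt node does on your behalf. Below is a minimal sketch using the OpenAI Python SDK; the model name and prompts are placeholders, and other providers follow the same shape.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Roughly what the node sends: a system prompt, the input data,
# and sampling parameters, with streaming enabled.
stream = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any configured chat model works
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize this ticket in one sentence: ..."},
    ],
    temperature=0.2,
    max_tokens=256,
    stream=True,
)

# Streaming yields the completion token by token.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```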
AI Agent Modes
The AI Agent node supports three execution modes, each suited to different use cases:
| Mode | Behavior | Best For |
|---|---|---|
| ReAct | Reason-Act loop: the agent thinks, selects a tool, observes the result, and repeats until it has an answer. | Complex multi-step tasks requiring tool use and reasoning |
| Conversational | Maintains conversation history with memory. The agent responds naturally and can use tools when needed. | Chat-based workflows, customer support, interactive assistants |
| Tools Only | The agent directly invokes tools based on the prompt without explicit reasoning steps. | Deterministic tool execution, API orchestration |
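To make the ReAct row concrete, here is a stripped-down sketch of the Reason-Act loop. The `call_llm` helper and the single tool are hypothetical stand-ins for whatever the agent node actually wires up.

```python
import json

def call_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for a call to your configured provider."""
    raise NotImplementedError

TOOLS = {
    "search_orders": lambda q: f"3 orders matched '{q}'",  # stand-in tool
}

def react_agent(task: str, max_steps: int = 5) -> str:
    messages = [
        {"role": "system", "content": (
            "Think step by step. Reply with JSON: "
            '{"thought": "...", "tool": "name or null", '
            '"input": "...", "answer": "string or null"}'
        )},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        raw = call_llm(messages)                          # Reason: think, pick a tool
        step = json.loads(raw)
        if step.get("answer"):                            # the agent has its answer
            return step["answer"]
        observation = TOOLS[step["tool"]](step["input"])  # Act
        messages.append({"role": "assistant", "content": raw})
        messages.append({"role": "user",                  # Observe, then repeat
                         "content": f"Observation: {observation}"})
    return "Stopped after max_steps without an answer."
```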
Memory Nodes
Memory nodes provide conversation persistence for AI Agents. Attach a memory node to give your agent the ability to recall previous interactions.
- In-memory conversation storage. Fast and simple, but data is lost when the execution completes. Best for single-turn or short-lived agents.
- Persist conversation history in PostgreSQL. Durable and queryable, ideal for production agents that need long-term memory across sessions.
- Store conversation history in Redis with a configurable TTL. A high-performance option with automatic expiration for session-scoped memory.
- Keep only the last N messages in context. Prevents token limits from being exceeded while maintaining recent conversation context (a sketch of this windowing idea follows the list).
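The windowed buffer is the easiest of the four to picture. A minimal sketch of the idea, assuming messages are plain role/content dictionaries:

```python
from collections import deque

class WindowBufferMemory:
    """Keeps only the last `window` messages; older turns are dropped,
    which bounds prompt size and keeps the agent under token limits."""

    def __init__(self, window: int = 10):
        self.messages = deque(maxlen=window)  # deque evicts the oldest automatically

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def context(self) -> list[dict]:
        return list(self.messages)

memory = WindowBufferMemory(window=4)
for i in range(6):
    memory.add("user", f"message {i}")
print([m["content"] for m in memory.context()])
# ['message 2', 'message 3', 'message 4', 'message 5']
```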
Memory is required for Conversational mode: attach one of the memory nodes above before running an agent in that mode, since without one the agent cannot recall earlier turns.
RAG Nodes
Retrieval-Augmented Generation (RAG) nodes let you build pipelines that ground LLM responses in your own data. Ingest documents, split them into chunks, generate embeddings, store them in a vector database, and retrieve relevant context at query time.
- Store and retrieve vector embeddings from a vector database. Supports Pinecone, Weaviate, Qdrant, and Chroma as backends. Configure similarity metrics and top-K retrieval.
- Ingest documents from various sources (files, URLs, APIs) and convert them to text for downstream processing. Supports PDF, DOCX, HTML, Markdown, and plain text.
- Split long documents into smaller chunks for embedding. Configurable chunk size, overlap, and splitting strategy (character, token, sentence, or recursive). An end-to-end sketch of these stages follows this list.
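Off the canvas, the pipeline these three nodes form looks roughly like the sketch below. The `embed` function is a hypothetical stand-in for your configured embedding provider, the input file is a placeholder, and the splitter shown is the simple character strategy.

```python
import math

def split(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Character splitting: fixed-size windows that overlap, so sentences
    cut at a chunk boundary still appear whole in one chunk."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

def embed(texts: list[str]) -> list[list[float]]:
    """Hypothetical stand-in for your configured embedding provider."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query: str, chunks: list[str], vectors, top_k: int = 3) -> list[str]:
    """Top-K retrieval by cosine similarity, as a vector store node would do it."""
    [q] = embed([query])
    scored = sorted(zip(chunks, vectors), key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [chunk for chunk, _ in scored[:top_k]]

# Ingest once: load, split, embed, store...
chunks = split(open("handbook.txt").read())  # placeholder document
vectors = embed(chunks)
# ...then ground each LLM prompt in the retrieved context at query time.
context = retrieve("What is the refund policy?", chunks, vectors)
```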
Supported Providers
AI nodes work with multiple LLM and embedding providers. Configure provider credentials once and reference them across all your workflows.
| Provider | Models | Features |
|---|---|---|
| OpenAI | GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo, text-embedding-3-* | Chat, embeddings, function calling, streaming |
| Anthropic | Claude Opus, Sonnet, Haiku | Chat, tool use, streaming, extended context |
| Google Gemini | Gemini 2.0, 1.5 Pro, 1.5 Flash | Chat, embeddings, multimodal, streaming |
| Azure OpenAI | Your deployed Azure models | Same as OpenAI with Azure enterprise compliance |
| Custom | Any OpenAI-compatible endpoint | Self-hosted models, Ollama, vLLM, LiteLLM |
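The Custom row works because those servers speak the same wire protocol as OpenAI. For instance, pointing the OpenAI Python SDK at a local Ollama instance is just a base URL change (assuming Ollama's default port and a pulled `llama3` model):

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API at /v1; the api_key is ignored,
# but the SDK requires one to be set.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # assumes `ollama pull llama3` has been run
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(response.choices[0].message.content)
```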