AI Agents Overview

NodeLoom AI Agents combine large language models with workflow tools to create intelligent, autonomous assistants that can reason, act, and converse with users in real time.

New to AI Agents?

If you have never built an AI Agent before, start by creating a new workflow, adding a Chat Trigger node, an AI Agent node, and a Chat Reply node. Connect them in order and you will have a working conversational agent in under a minute.
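
NodeLoom workflows are built visually on the Canvas, but the minimal wiring can be sketched in code to make the shape clear. The dictionary layout and helper below are illustrative only, not NodeLoom's actual export format:

```python
# Illustrative sketch of the minimal agent workflow wiring.
# Node names mirror the Canvas nodes; the dict layout is hypothetical,
# not NodeLoom's real export format.
minimal_agent_workflow = {
    "nodes": [
        {"id": "trigger", "type": "Chat Trigger"},
        {"id": "agent", "type": "AI Agent"},
        {"id": "reply", "type": "Chat Reply"},
    ],
    "connections": [
        ("trigger", "agent"),  # user message flows into the agent
        ("agent", "reply"),    # agent output flows to the reply node
    ],
}

def connected_in_order(workflow):
    """Check that the nodes form a single chain, in declaration order."""
    ids = [n["id"] for n in workflow["nodes"]]
    return all((a, b) in workflow["connections"]
               for a, b in zip(ids, ids[1:]))
```

The same chain-shaped wiring underlies every conversational agent, however many tools or memory nodes you attach later.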

Agent Types

Every AI Agent node has a type setting that controls how the agent processes user messages and decides when to use tools. NodeLoom supports three agent types:

  • ReAct -- Reasoning + Acting loop. The agent thinks step-by-step, decides which tool to call, observes the result, and repeats until it has enough information to answer. Best for complex multi-step tasks that require planning, e.g. research or data gathering across multiple APIs.
  • Conversational -- Chat-focused. The agent prioritises natural conversation and only invokes tools when explicitly needed by the user's request. Best for customer support bots, FAQ assistants, and general-purpose chat interfaces.
  • Tools Only -- Direct tool execution without reasoning. The agent immediately maps the user's intent to a tool call and returns the result with minimal narration. Best for action-oriented integrations where speed matters more than explanation, e.g. quick lookups and form submissions.
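
The ReAct think/act/observe cycle can be sketched as a short loop. This is a conceptual sketch under assumed interfaces (`ask_llm` and the `tools` mapping are hypothetical stand-ins), not NodeLoom's agent runtime:

```python
# Minimal sketch of a ReAct-style loop. `ask_llm` is assumed to return
# either a tool request or a final answer; both names are illustrative.
def react_loop(ask_llm, tools, question, max_steps=5):
    """Think -> act -> observe until the model emits a final answer."""
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        decision = ask_llm("\n".join(transcript))
        if decision["action"] == "final_answer":
            return decision["content"]
        # Otherwise call the chosen tool and feed the observation back
        # into the next reasoning step.
        observation = tools[decision["action"]](decision["content"])
        transcript.append(f"Observation: {observation}")
    return "I could not finish within the step budget."
```

The step budget is what keeps a ReAct agent from looping indefinitely when no tool result satisfies it.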

Default type

New AI Agent nodes default to ReAct because it offers the best balance of reasoning quality and tool utilisation for most workflows.

How Agents Work

At a high level, every AI Agent workflow follows a four-step execution flow:

1. Chat Trigger receives the user message

The workflow begins when a user sends a message through the Agent Chat interface, a widget embed, or a webhook. The Chat Trigger node captures the message and passes it downstream.

2. Gather context

The AI Agent node loads conversation history from its connected Memory node, retrieves the system prompt, and assembles the list of available tools from any nodes connected to its tools handle.

3. LLM generates a response

The assembled context is sent to the configured LLM provider (OpenAI, Anthropic, Google Gemini, Azure OpenAI, or a custom endpoint). Depending on the agent type, the model may reason internally, call one or more tools, observe the results, and iterate before producing a final answer.

4. Chat Reply sends the response back

The Chat Reply node streams the final answer back to the user in real time. The full conversation (including any tool calls) is persisted in the Memory node for future turns.
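
The four-step flow can be condensed into one conceptual function. Every name here (`run_agent_turn`, `ListMemory`, the `llm` callable) is an assumption made for illustration; NodeLoom handles these steps internally:

```python
class ListMemory:
    """Toy in-memory backend standing in for a connected Memory node."""
    def __init__(self):
        self.turns = []
    def load(self):
        return list(self.turns)
    def save(self, user, assistant):
        self.turns.append((user, assistant))

def run_agent_turn(message, memory, llm, tools, system_prompt):
    # 1. Chat Trigger has already delivered `message` downstream.
    # 2. Gather context: history, system prompt, and available tools.
    context = {"system": system_prompt, "history": memory.load(),
               "message": message, "tools": list(tools)}
    # 3. The LLM produces a final answer (tool iterations elided here).
    answer = llm(context)
    # 4. Chat Reply streams `answer` back; the turn is persisted so the
    #    next turn sees it in history.
    memory.save(message, answer)
    return answer
```

Note that persistence happens after the reply, which is why a missing Memory node breaks context on the very next turn.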

Memory Is Required

Every AI Agent node must have a Memory node connected to its memory handle. Memory stores the conversation history so the agent can maintain context across turns. Without a Memory node, the Canvas will display a validation error and the workflow will not execute.
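
The validation rule is simple enough to state as a check. The node representation below is hypothetical; the Canvas performs this validation for you:

```python
# Sketch of the memory-handle validation rule (field names are
# assumptions, not NodeLoom's internal node schema).
def validate_agent_node(node):
    """Return a list of validation errors; empty means the node is valid."""
    errors = []
    if node.get("memory") is None:
        errors.append("AI Agent requires a Memory node on its memory handle")
    return errors
```
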

NodeLoom provides four memory backends -- Simple Memory, PostgreSQL Memory, Redis Memory, and Window Buffer Memory -- each with different persistence and performance characteristics. See the Memory page for details.

Connecting Tools

Tools give your agent the ability to interact with external services. Connect any supported action node (Gmail, Slack, HTTP Request, Google Sheets, and more) to the agent's tools handle. The agent automatically derives each tool's name, description, and input schema from the connected node's configuration.
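
Conceptually, deriving a tool from a connected node looks like the sketch below. The configuration fields (`name`, `description`, `params`) are assumptions for illustration, not the actual node schema:

```python
# Hypothetical sketch of deriving a tool spec from a connected node's
# configuration. Field names are illustrative assumptions.
def derive_tool(node):
    """Build a tool spec: name, description, and input schema."""
    return {
        "name": node["name"].lower().replace(" ", "_"),
        "description": node.get("description",
                                f"Call the {node['name']} node"),
        "schema": {p["key"]: p["type"] for p in node.get("params", [])},
    }
```

The derived name and schema are what the LLM sees when deciding whether and how to call the tool, which is why the Agent Tool Wrapper's ability to customise them matters.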

For advanced use cases, wrap a node in an Agent Tool Wrapper to customise the tool name, description, and schema, or to expose multiple tools from a single node. See the Tools page for the full list of supported nodes and patterns.

Built-in Tools

Every AI Agent includes two built-in tools that are always available, even when no external tool nodes are connected:

  • get_current_time -- Returns the current date and time in the workflow's configured timezone. Example usage: "What time is it?" or "Schedule this for tomorrow".
  • calculate -- Evaluates a mathematical expression and returns the result. Supports standard arithmetic, parentheses, and common math functions. Example usage: "What is 15% of 2340?" or "Convert 72F to Celsius".
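
A tool like calculate is typically implemented as a restricted expression evaluator rather than a raw eval. The sketch below shows the idea using Python's ast module; it is not NodeLoom's implementation, and the supported function set is an assumption:

```python
import ast
import math
import operator

# Whitelisted operators and functions; anything else is rejected.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}
_FUNCS = {"sqrt": math.sqrt, "abs": abs, "round": round}

def calculate(expression):
    """Safely evaluate an arithmetic expression via the AST."""
    def eval_node(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](eval_node(node.left),
                                       eval_node(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](eval_node(node.operand))
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            return _FUNCS[node.func.id](*[eval_node(a) for a in node.args])
        raise ValueError("unsupported expression")
    return eval_node(ast.parse(expression, mode="eval").body)
```

Walking the AST with a whitelist means "15% of 2340" becomes `0.15 * 2340` safely, while attempts to smuggle in attribute access or imports raise an error instead of executing.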

Multiple Providers per Node

Each AI Agent node can be configured with its own LLM provider and model. This means you can have multiple AI Agent nodes in the same workflow -- each powered by a different model. For example, one agent might use GPT-4o for complex reasoning while another uses Claude 3.5 Haiku for fast, lightweight responses.

Provider and model settings are configured per-node in the node configuration panel. Workspace administrators can also set team-level defaults from the workspace AI settings. See the Providers page for the full list of supported providers and models.
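
The override order (node settings beat workspace defaults) can be sketched as a simple merge. The keys and default values here are illustrative assumptions, not the real configuration schema:

```python
# Hypothetical workspace-level defaults; node-level settings win.
WORKSPACE_DEFAULTS = {"provider": "openai", "model": "gpt-4o"}

def resolve_model_settings(node_settings, defaults=WORKSPACE_DEFAULTS):
    """Merge node settings over workspace defaults, ignoring blanks."""
    overrides = {k: v for k, v in node_settings.items() if v}
    return {**defaults, **overrides}
```
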

Error Sanitization

When an agent encounters an error during tool execution or LLM inference, NodeLoom automatically sanitizes the raw API error before surfacing it to the user. This means:

  • Raw provider error messages (which may contain internal details, request IDs, or stack traces) are replaced with user-friendly descriptions.
  • Rate-limit and quota errors are translated into actionable messages that tell the user to try again later.
  • Tool execution failures include the tool name and a generic failure reason without leaking credential or endpoint details.
  • All sanitized errors are still logged in full detail in the execution inspector for debugging purposes.
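
The sanitization rules above amount to pattern-matching raw errors into safe messages. This sketch is illustrative; the real patterns, wording, and logging hook are NodeLoom internals:

```python
# Sketch of error sanitization (patterns and messages are assumptions).
# The raw message is assumed to be logged in full elsewhere, as the
# execution inspector does.
def sanitize_error(raw_message, tool_name=None):
    """Map a raw provider/tool error to a user-friendly message."""
    lowered = raw_message.lower()
    if "rate limit" in lowered or "quota" in lowered:
        return "The AI provider is busy right now. Please try again shortly."
    if tool_name is not None:
        # Name the tool, but never echo endpoints or credentials.
        return f"The {tool_name} tool failed to complete. Please try again."
    return "The agent hit an unexpected error while generating a response."
```
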

Custom error messages

If you need to override the default error messages, you can wrap your agent workflow in a Try/Catch node and handle errors manually in the catch branch.
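
The Try/Catch pattern mirrors ordinary exception handling. The Canvas expresses this with nodes; the function below is only a conceptual stand-in with illustrative names:

```python
# Conceptual sketch of wrapping an agent turn in Try/Catch so the
# catch branch returns your own message instead of the default one.
def run_with_custom_errors(agent_turn, fallback_message):
    try:
        return agent_turn()
    except Exception:
        # Catch branch: substitute a custom, user-facing message.
        return fallback_message
```
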

Next Steps

  • Memory -- choose a memory backend for your agent.
  • Tools -- connect workflow nodes as agent tools.
  • Agent Chat -- interact with your agent in a dedicated chat interface.
  • Providers -- configure LLM providers and models.