Agent Chat

The Agent Chat is a dedicated interface for conversing with your AI Agents in real time. It supports session management, streaming responses, thought display, tool call indicators, and per-session model selection.

Chat Interface

The Agent Chat lives at /agent-chat in the NodeLoom dashboard. It provides a full-screen conversational UI similar to ChatGPT or Claude, with a sidebar for session management and a main panel for the active conversation.

Key elements of the interface:

  • Session sidebar -- lists all chat sessions with search, create, and delete actions.
  • Message area -- displays the conversation with distinct styles for user messages, assistant responses, tool calls, and system messages.
  • Input bar -- text input with send button, model selector, and system prompt editor.
  • Thought panel -- collapsible panel showing the agent's internal reasoning steps (for ReAct agents).

Session Management

Conversations are organized into sessions. Each session maps to a unique memory context, meaning the agent retains history within a session but starts fresh in a new one.

  • Create session -- Click the "+" button in the sidebar to start a new conversation. You can optionally set a title.
  • List sessions -- The sidebar shows all sessions for the current workspace, sorted by most recent activity.
  • Search sessions -- Use the search bar at the top of the sidebar to filter sessions by title or message content.
  • Delete session -- Hover over a session and click the delete icon. This permanently removes the session and its conversation history from memory.

Session persistence

Session data is stored in the agent's configured memory backend. If the agent uses PostgreSQL Memory, sessions persist indefinitely (unless a TTL is set). With Redis Memory, sessions expire based on the configured TTL.
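The TTL behaviour above can be sketched as a small expiry check, assuming a session record that tracks its last activity. The field names here are illustrative for this sketch, not NodeLoom's actual schema:

```typescript
// Illustrative session record: ttlSeconds = null models the PostgreSQL
// Memory default (persist indefinitely); a number models a Redis-style TTL.
interface SessionRecord {
  id: string;
  lastActivityMs: number;    // epoch milliseconds of the most recent message
  ttlSeconds: number | null; // null = no expiry configured
}

function isSessionExpired(session: SessionRecord, nowMs: number): boolean {
  if (session.ttlSeconds === null) return false; // no TTL: never expires
  return nowMs - session.lastActivityMs > session.ttlSeconds * 1000;
}
```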

Real-Time Streaming

Agent responses are streamed token-by-token to the chat interface in real time. This means you see the response being generated progressively, just like in commercial AI chat products.

Each token is delivered as the LLM generates it, giving immediate feedback. When the response completes, the full message is finalized and the session is updated.
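Progressive rendering amounts to appending each streamed token to a draft message and finalizing the draft when the stream ends. A minimal sketch, with a plain async iterable standing in for the real transport (which this page does not specify):

```typescript
// Consume a token stream, building up the draft message chunk by chunk.
async function accumulate(tokens: AsyncIterable<string>): Promise<string> {
  let draft = "";
  for await (const token of tokens) {
    draft += token; // a real UI would re-render the draft here
  }
  return draft;     // finalized message, ready to persist to the session
}

// Stand-in for a streamed response.
async function* demoStream(): AsyncGenerator<string> {
  yield* ["Hel", "lo, ", "world!"];
}
```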

Connection resilience

If the connection drops mid-stream, the client will automatically reconnect and fetch any missed messages from the server. The full response is always persisted server-side regardless of client connectivity.
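The catch-up step can be sketched as follows, assuming the client remembers the id of the last message it rendered; a plain array stands in for the server's response, since the fetch API itself is not specified here:

```typescript
interface ChatMessage {
  id: number;   // monotonically increasing message id (an assumption)
  text: string;
}

// After reconnecting, keep only the messages the client has not yet seen.
function missedMessages(all: ChatMessage[], lastSeenId: number): ChatMessage[] {
  return all.filter((m) => m.id > lastSeenId);
}
```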

Thought Display

When using a ReAct agent, the chat interface shows the agent's internal reasoning process in a collapsible thought panel. This gives visibility into:

  • What the agent is thinking at each step of its reasoning loop.
  • Which tool the agent decided to call and why.
  • How the agent interpreted the tool's response.
  • When the agent decided it had enough information to produce a final answer.

The thought panel is collapsed by default and can be expanded by clicking the "Show reasoning" toggle on any assistant message. This keeps the chat clean while making the reasoning accessible when needed.
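The reasoning steps above can be modelled as a small discriminated union; the variant and field names are assumptions for this sketch, not NodeLoom's wire format:

```typescript
// One step in a ReAct loop: think, act, observe, or answer.
type ReactStep =
  | { kind: "thought"; text: string }
  | { kind: "action"; tool: string; input: unknown }
  | { kind: "observation"; text: string }
  | { kind: "final"; text: string };

// The thought panel shows everything except the final answer,
// which is rendered as the assistant message itself.
function panelSteps(steps: ReactStep[]): ReactStep[] {
  return steps.filter((s) => s.kind !== "final");
}
```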

Tool Call Indicators

When the agent invokes a tool during a conversation, the chat interface displays a visual indicator showing:

  • Tool name -- the name of the tool being called.
  • Status -- a loading spinner while the tool is executing, a check mark on success, or an error icon on failure.
  • Execution time -- how long the tool took to execute.
  • Expandable details -- click the indicator to see the full input arguments and output returned by the tool.
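The indicator's lifecycle reduces to a small state transition: it starts in a loading state and settles to success or error, recording the elapsed time. A sketch with illustrative types (not NodeLoom's internal API):

```typescript
type ToolCallStatus = "running" | "success" | "error";

interface ToolCallIndicator {
  tool: string;          // tool name shown in the chat
  status: ToolCallStatus;
  startedMs: number;     // when execution began
  elapsedMs?: number;    // filled in once the call settles
}

// Transition a running indicator to its settled state.
function settle(call: ToolCallIndicator, ok: boolean, nowMs: number): ToolCallIndicator {
  return { ...call, status: ok ? "success" : "error", elapsedMs: nowMs - call.startedMs };
}
```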

Model Selection

Each chat session can use a different LLM provider and model. The model selector in the input bar lets you choose from all configured providers in your workspace. Changing the model mid-conversation is supported -- the agent will use the new model for subsequent messages while retaining the full conversation history.

  • Model (per session) -- The LLM model to use for this conversation. Defaults to the workspace AI default.
  • Provider (per session) -- The AI provider (OpenAI, Anthropic, Google, Azure, Custom). Automatically set when a model is selected.
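The default-model fallback and provider auto-selection described above might look like the following sketch. The model-name prefixes are illustrative examples only, not an exhaustive or authoritative mapping:

```typescript
// Fall back to the workspace default when the session has no explicit model.
function resolveModel(sessionModel: string | null, workspaceDefault: string): string {
  return sessionModel ?? workspaceDefault;
}

// Hypothetical prefix-based provider inference for this sketch.
const PROVIDER_BY_PREFIX: Record<string, string> = {
  "gpt-": "OpenAI",
  "claude-": "Anthropic",
  "gemini-": "Google",
};

function inferProvider(model: string): string {
  for (const [prefix, provider] of Object.entries(PROVIDER_BY_PREFIX)) {
    if (model.startsWith(prefix)) return provider;
  }
  return "Custom"; // anything unrecognized falls back to a custom provider
}
```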

System Prompts

You can configure a system prompt per conversation to guide the agent's behaviour. The system prompt editor is accessible from the input bar and supports multi-line text.

System prompts are prepended to the conversation history on every turn, giving the agent consistent instructions throughout the session. Changing the system prompt mid-conversation takes effect immediately on the next message.

Example system prompt
You are a customer support agent for Acme Corp.
- Always greet the user by name if available.
- Look up order details before answering shipping questions.
- Escalate billing issues to the human support team.
- Never share internal pricing or discount codes.
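Because the system prompt is prepended on every turn rather than stored once, editing it mid-conversation affects the very next request. A minimal sketch of that per-turn assembly, with illustrative message types (not NodeLoom's internal API):

```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the message list for one turn: system prompt first (if set),
// then the accumulated history, then the new user input.
function buildTurn(systemPrompt: string | null, history: Message[], userInput: string): Message[] {
  const messages: Message[] = [];
  if (systemPrompt) messages.push({ role: "system", content: systemPrompt });
  return [...messages, ...history, { role: "user", content: userInput }];
}
```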

Workflow Tool Integration

The Agent Chat is backed by a real NodeLoom workflow. This means the agent can use any workflow node as a tool -- Gmail for sending emails, Slack for posting messages, HTTP Request for calling APIs, Google Sheets for reading data, and more.

To add tools to your agent chat workflow:

  • Open the workflow that powers the agent chat session.
  • Add action nodes (Gmail, Slack, HTTP Request, etc.) to the canvas.
  • Connect them to the AI Agent node's tools handle.
  • The agent will immediately have access to these tools in the chat interface.

See the Tools page for details on direct connection vs. Agent Tool Wrapper patterns.

Credentials required

Tool nodes connected to the agent must have valid credentials configured. If a tool's credentials are missing or expired, the agent will receive a sanitized error message when attempting to use it.
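Sanitizing means the real exception stays server-side while the agent only learns that the tool is unavailable. A hypothetical sketch of that boundary (the function and message wording are assumptions, not NodeLoom's implementation):

```typescript
// Log the raw error server-side; hand the agent a generic message that
// names the tool but leaks no tokens, secrets, or stack traces.
function sanitizeToolError(tool: string, err: Error): string {
  console.error(`tool ${tool} failed:`, err.message); // server-side log only
  return `Tool "${tool}" is unavailable: credentials are missing or expired.`;
}
```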