Providers

NodeLoom supports multiple AI providers out of the box. Each AI Agent node can be configured with its own provider and model, allowing you to mix and match models within the same workflow.

Supported Providers

The following providers are supported natively. Each requires an API key or credential configured in the NodeLoom credentials store.

OpenAI

The most widely used provider. Supports function calling and streaming across all models; JSON mode is available on GPT-3.5 Turbo and GPT-4 Turbo or later.

| Model | Context Window | Notes |
| --- | --- | --- |
| GPT-4o | 128K tokens | Flagship multimodal model. Best overall quality. |
| GPT-4o Mini | 128K tokens | Smaller, faster, and cheaper variant of GPT-4o. |
| GPT-4 Turbo | 128K tokens | Previous generation. Still strong for complex reasoning. |
| GPT-4 | 8K tokens | Original GPT-4. Smaller context window. |
| GPT-3.5 Turbo | 16K tokens | Fastest and cheapest OpenAI model. Good for simple tasks. |
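For instance, pinning an AI Agent node to GPT-4o Mini for a lightweight task might look like the following sketch. The field names mirror the custom-provider example later on this page and are illustrative, not a guaranteed schema:

```json
{
  "provider": "openai",
  "model": "gpt-4o-mini",
  "temperature": 0.3
}
```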

Anthropic

Claude models are known for strong instruction following, long-context support, and safety. All models support tool use.

| Model | Context Window | Notes |
| --- | --- | --- |
| Claude 3.5 Sonnet | 200K tokens | Best balance of capability and speed. Recommended default. |
| Claude 3.5 Haiku | 200K tokens | Fastest Claude model. Ideal for high-throughput, simple tasks. |
| Claude 3 Opus | 200K tokens | Most capable for complex reasoning and analysis. |

Google Gemini

Google's Gemini models support multimodal input and function calling. Authentication uses a Google AI API key.

| Model | Context Window | Notes |
| --- | --- | --- |
| Gemini 1.5 Pro | 1M tokens | Largest context window available. Great for long-document analysis. |
| Gemini 1.5 Flash | 1M tokens | Optimised for speed. Lower latency than 1.5 Pro. |
| Gemini 1.0 Pro | 32K tokens | Previous generation. Smaller context window. |

Azure OpenAI

Enterprise deployment of OpenAI models through Microsoft Azure. Offers the same model capabilities as OpenAI but with Azure's compliance, network isolation, and regional deployment options.

| Field | Description |
| --- | --- |
| Endpoint | Your Azure OpenAI resource endpoint (e.g. https://my-resource.openai.azure.com). |
| Deployment Name | The name of your model deployment in Azure. |
| API Version | Azure API version string (e.g. 2024-02-15-preview). |
| API Key | Your Azure OpenAI API key, stored in the NodeLoom credentials store. |

Same models, different auth

Azure OpenAI uses the same GPT models as OpenAI but authenticates through Azure AD or API keys and routes traffic through your Azure subscription. Model availability depends on your Azure deployment.
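Combining the fields above, an Azure OpenAI provider entry might look like the following sketch. The deployment name is hypothetical, and the field names are an assumption modelled on the custom-provider example below:

```json
{
  "provider": "azure-openai",
  "endpoint": "https://my-resource.openai.azure.com",
  "deploymentName": "gpt-4o-production",
  "apiVersion": "2024-02-15-preview",
  "apiKey": "<stored in the NodeLoom credentials store>"
}
```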

Custom (OpenAI-Compatible)

Connect any OpenAI-compatible API endpoint. This works with self-hosted models (Ollama, vLLM, LM Studio), third-party providers (Together AI, Fireworks, Groq), or any service that implements the OpenAI chat completions API.

| Field | Description |
| --- | --- |
| Base URL | The API base URL (e.g. http://localhost:11434/v1 for Ollama). |
| Model ID | The model identifier to pass in the API request. |
| API Key | Optional. Required by most hosted providers, not needed for local models. |

Custom provider configuration

```json
// Example: Using a local Ollama instance
{
  "provider": "custom",
  "baseUrl": "http://localhost:11434/v1",
  "model": "llama3:70b",
  "apiKey": ""
}
```
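Because the endpoint implements the OpenAI chat completions API, a request can be built with nothing beyond the standard library. A minimal sketch, with error handling omitted and the payload shape following the OpenAI chat completions spec:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str, api_key: str = ""):
    """Build an OpenAI-compatible chat completions request.

    Works against any endpoint implementing the OpenAI chat API,
    e.g. a local Ollama instance at http://localhost:11434/v1.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:  # local models (Ollama, LM Studio) typically need no key
        headers["Authorization"] = f"Bearer {api_key}"
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(body).encode(),
        headers=headers,
        method="POST",
    )

req = build_chat_request("http://localhost:11434/v1", "llama3:70b", "Hello")
# urllib.request.urlopen(req) would then send it to the local server.
```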

Per-Node Provider Selection

Every AI Agent node has its own provider and model settings in the node configuration panel. This means different nodes in the same workflow can use entirely different models.

Common patterns include:

  • Routing by complexity. Use a fast, cheap model (GPT-4o Mini or Claude 3.5 Haiku) for initial classification, then route complex queries to a more capable model (GPT-4o or Claude 3 Opus).
  • Provider redundancy. Configure a primary agent with OpenAI and a fallback agent with Anthropic. If the primary fails, a Try/Catch node can redirect to the fallback.
  • Cost optimisation. Use expensive models only for high-value tasks and cheaper models for routine operations within the same workflow.
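The first pattern can be sketched as a routing function. The model names match the tables above, but the word-count heuristic is purely illustrative; a real workflow would typically use a cheap classification call instead:

```python
CHEAP_MODEL = "gpt-4o-mini"   # fast, inexpensive
CAPABLE_MODEL = "gpt-4o"      # higher quality, higher cost

def pick_model(query: str, word_threshold: int = 50) -> str:
    """Route short queries to the cheap model, longer ones to the capable model."""
    return CHEAP_MODEL if len(query.split()) <= word_threshold else CAPABLE_MODEL
```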

Workspace AI Defaults

Workspace administrators can set team-level AI defaults from the workspace settings page. These defaults apply to all new AI Agent nodes created in the workspace and to the Agent Chat interface.

| Setting | Default | Description |
| --- | --- | --- |
| Default Provider | OpenAI | The provider pre-selected for new AI Agent nodes. |
| Default Model | GPT-4o | The model pre-selected for new AI Agent nodes. |
| Temperature | 0.7 | Controls randomness. Lower values (0.0-0.3) for factual tasks, higher (0.7-1.0) for creative tasks. |
| Max Tokens | 4096 | Maximum number of tokens in the model's response. Does not affect input context length. |

Override at any level

Workspace defaults are just starting values. Individual nodes can always override the provider, model, temperature, and max tokens in their own configuration. Per-session overrides in Agent Chat take highest priority.

Credential Setup

Each provider requires an API key stored in the NodeLoom credentials store. To add a provider credential:

  1. Navigate to Settings > Credentials in the dashboard.
  2. Click Add Credential and select the provider type (e.g. "OpenAI API Key").
  3. Enter your API key and save. The credential is encrypted at rest using AES-256.
  4. When configuring an AI Agent node, select the credential from the dropdown. The node will use this credential for all API requests.
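Once saved, a node configuration references the credential rather than embedding the raw key. A hypothetical sketch, where the `credentialId` field name is an assumption:

```json
{
  "provider": "openai",
  "model": "gpt-4o",
  "credentialId": "openai-team-key"
}
```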

Key security

API keys are encrypted at rest and never exposed in the frontend. They are only decrypted server-side at execution time. Avoid sharing credentials across workspaces unless using the workspace-level credential sharing feature.