Providers
NodeLoom supports multiple AI providers out of the box. Each AI Agent node can be configured with its own provider and model, allowing you to mix and match models within the same workflow.
Supported Providers
The following providers are supported natively. Each requires an API key or credential configured in the NodeLoom credentials store.
OpenAI
The most widely used provider. All models support function calling and streaming; JSON mode is available on GPT-3.5 Turbo and GPT-4 Turbo and later.
| Model | Context Window | Notes |
|---|---|---|
| GPT-4o | 128K tokens | Flagship multimodal model. Best overall quality. |
| GPT-4o Mini | 128K tokens | Smaller, faster, and cheaper variant of GPT-4o. |
| GPT-4 Turbo | 128K tokens | Previous generation. Still strong for complex reasoning. |
| GPT-4 | 8K tokens | Original GPT-4. Smaller context window. |
| GPT-3.5 Turbo | 16K tokens | Fastest and cheapest OpenAI model. Good for simple tasks. |
Anthropic
Claude models are known for strong instruction following, long-context support, and safety. All models support tool use.
| Model | Context Window | Notes |
|---|---|---|
| Claude 3.5 Sonnet | 200K tokens | Best balance of capability and speed. Recommended default. |
| Claude 3.5 Haiku | 200K tokens | Fastest Claude model. Ideal for high-throughput, simple tasks. |
| Claude 3 Opus | 200K tokens | Most capable for complex reasoning and analysis. |
Google Gemini
Google's Gemini models support multimodal input and function calling. Authentication uses a Google AI API key.
| Model | Context Window | Notes |
|---|---|---|
| Gemini 1.5 Pro | 1M tokens | Largest context window available. Great for long-document analysis. |
| Gemini 1.5 Flash | 1M tokens | Optimised for speed. Lower latency than 1.5 Pro. |
| Gemini 1.0 Pro | 32K tokens | Previous generation. Smaller context window. |
Azure OpenAI
Enterprise deployment of OpenAI models through Microsoft Azure. Offers the same model capabilities as OpenAI but with Azure's compliance, network isolation, and regional deployment options.
| Field | Description |
|---|---|
| Endpoint | Your Azure OpenAI resource endpoint (e.g. https://my-resource.openai.azure.com). |
| Deployment Name | The name of your model deployment in Azure. |
| API Version | Azure API version string (e.g. 2024-02-15-preview). |
| API Key | Your Azure OpenAI API key, stored in the NodeLoom credentials store. |
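Putting these fields together, an Azure OpenAI node configuration might look like the following sketch. The field names mirror the custom-provider example later in this page; the resource name, deployment name, and key placeholder are illustrative, not values from a real deployment.

```
// Illustrative sketch — endpoint, deployment, and key values are placeholders
{
  "provider": "azure-openai",
  "endpoint": "https://my-resource.openai.azure.com",
  "deploymentName": "gpt-4o-prod",
  "apiVersion": "2024-02-15-preview",
  "apiKey": "<reference to a credential in the NodeLoom credentials store>"
}
```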
Same models, different auth: Azure deployments serve the same underlying models as OpenAI, so prompts and workflows carry over unchanged; only the endpoint, deployment, and credential configuration differ.
Custom (OpenAI-Compatible)
Connect any OpenAI-compatible API endpoint. This works with self-hosted models (Ollama, vLLM, LM Studio), third-party providers (Together AI, Fireworks, Groq), or any service that implements the OpenAI chat completions API.
| Field | Description |
|---|---|
| Base URL | The API base URL (e.g. http://localhost:11434/v1 for Ollama). |
| Model ID | The model identifier to pass in the API request. |
| API Key | Optional. Required by most hosted providers, not needed for local models. |
```
// Example: Using a local Ollama instance
{
  "provider": "custom",
  "baseUrl": "http://localhost:11434/v1",
  "model": "llama3:70b",
  "apiKey": ""
}
```

Per-Node Provider Selection
Every AI Agent node has its own provider and model settings in the node configuration panel. This means different nodes in the same workflow can use entirely different models.
Common patterns include:
- Routing by complexity. Use a fast, cheap model (GPT-4o Mini or Claude 3.5 Haiku) for initial classification, then route complex queries to a more capable model (GPT-4o or Claude 3 Opus).
- Provider redundancy. Configure a primary agent with OpenAI and a fallback agent with Anthropic. If the primary fails, a Try/Catch node can redirect to the fallback.
- Cost optimisation. Use expensive models only for high-value tasks and cheaper models for routine operations within the same workflow.
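The routing-by-complexity pattern can be sketched as two AI Agent nodes with different providers and a conditional edge between them. The node and edge schema below is illustrative only (it is not NodeLoom's actual export format), and the model ID strings are assumptions based on the display names in the tables above.

```
// Illustrative sketch — node/edge fields and model IDs are hypothetical
{
  "nodes": [
    { "id": "classify", "type": "ai-agent", "provider": "openai",    "model": "gpt-4o-mini" },
    { "id": "answer",   "type": "ai-agent", "provider": "anthropic", "model": "claude-3-5-sonnet" }
  ],
  "edges": [
    { "from": "classify", "to": "answer", "condition": "complexity == 'high'" }
  ]
}
```

The same shape works for the redundancy pattern: swap the condition for a Try/Catch branch that routes to the fallback agent on failure.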
Workspace AI Defaults
Workspace administrators can set team-level AI defaults from the workspace settings page. These defaults apply to all new AI Agent nodes created in the workspace and to the Agent Chat interface.
| Setting | Default | Description |
|---|---|---|
| Default Provider | OpenAI | The provider pre-selected for new AI Agent nodes. |
| Default Model | GPT-4o | The model pre-selected for new AI Agent nodes. |
| Temperature | 0.7 | Controls randomness. Lower values (0.0-0.3) for factual tasks, higher (0.7-1.0) for creative tasks. |
| Max Tokens | 4096 | Maximum number of tokens in the model's response. Does not affect input context length. |
Override at any level: workspace defaults apply only when a node is created; any individual AI Agent node can override the provider, model, temperature, and max tokens in its own configuration panel.
Credential Setup
Each provider requires an API key stored in the NodeLoom credentials store. To add a provider credential:
- Navigate to Settings > Credentials in the dashboard.
- Click Add Credential and select the provider type (e.g. "OpenAI API Key").
- Enter your API key and save. The credential is encrypted at rest using AES-256.
- When configuring an AI Agent node, select the credential from the dropdown. The node will use this credential for all API requests.
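Once saved, a node references the credential rather than embedding the key itself. A hypothetical node configuration using a stored credential might look like this (the `credential` field name and value are illustrative):

```
// Illustrative sketch — the "credential" field name and its value are placeholders
{
  "provider": "openai",
  "model": "gpt-4o",
  "credential": "team-openai-key"
}
```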
Key security: API keys are stored encrypted at rest (AES-256) in the NodeLoom credentials store; nodes reference credentials by name rather than storing keys directly in workflow definitions.