Data & Integration Nodes

Data and integration nodes handle the core plumbing of your workflows: making API calls, querying databases, reading and writing files, transforming data, and connecting to cloud services.

Core Data Nodes

The seven foundational nodes for data processing and integration.

HTTP Request

Make HTTP/HTTPS requests to any API. Supports GET, POST, PUT, PATCH, DELETE with headers, query params, body (JSON, form, raw), authentication (Bearer, Basic, API Key, OAuth2), and response parsing.

Database SQL

Execute SQL queries against PostgreSQL or MySQL databases. Supports parameterized queries, transactions, and result mapping. Connect using stored credentials.

File

Read and write files on the local filesystem. All file operations are sandboxed to a dedicated directory, preventing access to the rest of the filesystem. Supports text, binary, CSV, and JSON formats.

JSON Transform

Transform JSON data using expressions, mapping rules, or JSONata queries. Reshape, filter, and restructure data between nodes.

Merge

Combine data from multiple upstream branches into a single output. Supports append, merge by key, inner join, outer join, and zip modes.

Set

Create or modify variables in the workflow data. Set static values, compute expressions, rename fields, or transform data types.

MCP Tool

Call tools exposed via the Model Context Protocol (MCP). Connect to any MCP-compatible server and invoke its tools with typed inputs and outputs.

HTTP Request

The HTTP Request node is the most versatile integration node. It can call any REST API, GraphQL endpoint, or webhook URL.

Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `method` | string | `"GET"` | HTTP method: GET, POST, PUT, PATCH, DELETE, HEAD, or OPTIONS. |
| `url` | string | required | Full URL to request. Supports expression interpolation (e.g. `https://api.example.com/{{ $json.path }}`). |
| `headers` | object | `{}` | Custom request headers as key-value pairs. |
| `body` | string or object | `""` | Request body. Format depends on Content-Type: JSON object, form-data, URL-encoded, or raw text. |
| `authentication` | string | `"none"` | Authentication method: `none`, `bearer`, `basic`, `apiKey`, or `oauth2`. |
| `bearerToken` | string | `""` | Bearer token value. Shown when `authentication` is `"bearer"`. |
| `username` | string | `""` | Basic auth username. Shown when `authentication` is `"basic"`. |
| `password` | string | `""` | Basic auth password. Shown when `authentication` is `"basic"`. |
| `apiKeyName` | string | `""` | Name of the API key header or query parameter. Shown when `authentication` is `"apiKey"`. |
| `apiKeyValue` | string | `""` | API key value. Shown when `authentication` is `"apiKey"`. |
| `timeout` | number | `30000` | Request timeout in milliseconds. |
| `followRedirects` | boolean | `true` | Whether to automatically follow HTTP 3xx redirects. |
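Several parameters (`url`, `headers`, `body`) accept `{{ $json.field }}` expressions. A minimal sketch of how such placeholder resolution might work, in illustrative Python only; `interpolate` is a hypothetical helper, not NodeLoom's actual expression engine, and it handles only top-level fields:

```python
import re

def interpolate(template: str, json_data: dict) -> str:
    """Replace {{ $json.field }} placeholders with values from the item's JSON."""
    def resolve(match: re.Match) -> str:
        field = match.group(1)
        return str(json_data.get(field, ""))
    # Matches {{ $json.<field> }} with optional surrounding whitespace;
    # top-level field names only (no nested paths).
    return re.sub(r"\{\{\s*\$json\.(\w+)\s*\}\}", resolve, template)

url = interpolate(
    "https://api.example.com/users/{{ $json.userId }}/orders",
    {"userId": 42},
)
```

Here `url` resolves to `https://api.example.com/users/42/orders` before the request is sent.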

Example

HTTP Request configuration
{
  "method": "POST",
  "url": "https://api.example.com/users/{{ $json.userId }}/orders",
  "headers": {
    "Content-Type": "application/json",
    "X-Request-Id": "{{ $json.requestId }}"
  },
  "body": {
    "product": "{{ $json.productName }}",
    "quantity": "{{ $json.qty }}",
    "priority": "high"
  },
  "authentication": "bearer",
  "bearerToken": "{{ $credentials.apiToken }}",
  "timeout": 15000,
  "followRedirects": true
}

Database SQL

Execute SQL queries against relational databases. Always use parameterized queries to prevent SQL injection.

Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `operation` | string | `"query"` | SQL operation type: `query` (SELECT), `insert`, `update`, `delete`, or `raw` (execute arbitrary SQL). |
| `query` | string | required | SQL query string. Use `$1`, `$2`, ... as parameter placeholders. |
| `parameters` | array | `[]` | Ordered array of parameter values that replace the `$1`, `$2`, ... placeholders in the query. |
| `credentialId` | string | required | ID of the stored database credential (PostgreSQL or MySQL connection details). |

Example

Database SQL configuration
{
  "operation": "query",
  "query": "SELECT id, name, email FROM users WHERE status = $1 AND created_at > $2 ORDER BY created_at DESC LIMIT 100",
  "parameters": ["active", "{{ $json.sinceDate }}"],
  "credentialId": "{{ $credentials.productionDb }}"
}

SQL injection prevention

Always use parameterized queries ($1, $2) instead of string concatenation. NodeLoom does not allow raw expression interpolation inside SQL query strings.
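The same principle can be demonstrated outside NodeLoom with any SQL driver. A minimal sketch using Python's built-in sqlite3 module (which uses `?` placeholders rather than `$1`, `$2`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", (1, "Ada"))
conn.execute("INSERT INTO users VALUES (?, ?)", (2, "x' OR '1'='1"))

malicious = "x' OR '1'='1"

# Unsafe: string concatenation lets the input rewrite the WHERE clause,
# so this query matches every row in the table.
leaked = conn.execute(
    "SELECT id, name FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: the driver binds the value separately from the SQL text,
# so only the literal string is matched.
rows = conn.execute(
    "SELECT id, name FROM users WHERE name = ?", (malicious,)
).fetchall()
```

The concatenated query returns every user, while the parameterized query matches only the row whose name is literally `x' OR '1'='1`.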

MCP Tool

The MCP Tool node connects to servers implementing the Model Context Protocol. MCP provides a standardized way for AI applications to access tools, data sources, and prompts. The node supports two transports:

  • HTTP — Connect to a remote MCP server via Streamable HTTP or SSE. Best for hosted MCP servers.
  • Stdio — Spawn a local MCP server process and communicate via stdin/stdout. Best for official MCP servers such as @modelcontextprotocol/server-slack and @modelcontextprotocol/server-github.

Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `transport` | string | `"http"` | Transport protocol: `"http"` for remote servers, `"stdio"` for local process-based servers. |
| `serverUrl` | string | required (HTTP) | URL of the MCP server. Only used with HTTP transport. |
| `command` | string | required (stdio) | Command to start the MCP server process. Only used with stdio transport. |
| `toolName` | string | required | Name of the tool to invoke (e.g. `"slack_post_message"`, `"search_documents"`). |
| `toolInput` | object | `{}` | Input parameters for the tool as JSON. Supports `{{ $json.field }}` expressions. |
| `timeout` | number | `30` | Maximum time (seconds) to wait for a tool response. |
| `retryOnError` | boolean | `false` | Whether to retry the tool call if it fails. |
| `maxRetries` | number | `3` | Maximum number of retry attempts when `retryOnError` is true. |
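The `retryOnError` and `maxRetries` parameters follow conventional retry semantics: one initial attempt plus up to `maxRetries` re-attempts. A sketch of the assumed behavior (fixed delay between attempts, illustrative only; NodeLoom's actual retry policy may differ):

```python
import time

def call_with_retry(invoke, max_retries: int = 3, retry_on_error: bool = True,
                    delay_seconds: float = 1.0):
    """Invoke a tool, re-attempting up to max_retries times on failure."""
    attempts = 1 + (max_retries if retry_on_error else 0)
    last_error = None
    for attempt in range(attempts):
        try:
            return invoke()
        except Exception as exc:
            last_error = exc
            if attempt < attempts - 1:
                time.sleep(delay_seconds)
    raise last_error

# Example: a flaky call that succeeds on the third attempt.
calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = call_with_retry(flaky, max_retries=3, delay_seconds=0)
```

With `retryOnError` false, the first failure propagates immediately; with it true, only the last error is raised after all attempts are exhausted.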

Credentials

MCP credentials are stored securely in the credential store (encrypted at rest) rather than in the workflow definition.

| Transport | Credential Type | What It Stores |
|-----------|-----------------|----------------|
| HTTP | MCP Server (HTTP) | Auth token (Bearer) for authenticating with the MCP server. |
| Stdio | MCP Server (Stdio) | Environment variables as a JSON object (e.g. API tokens, team IDs). These are injected into the server process at startup. |

Create credentials in Settings → Credentials, then select them in the MCP Tool node configuration panel.

HTTP Example

MCP Tool — HTTP transport
{
  "transport": "http",
  "serverUrl": "https://mcp.example.com/tools",
  "toolName": "search_documents",
  "toolInput": {
    "query": "{{ $json.searchQuery }}",
    "limit": 10
  },
  "timeout": 15,
  "retryOnError": true,
  "maxRetries": 2
}

Stdio Example (Slack MCP)

MCP Tool — Stdio transport with Slack MCP server
{
  "transport": "stdio",
  "command": "npx -y @modelcontextprotocol/server-slack",
  "toolName": "slack_post_message",
  "toolInput": {
    "channel_id": "C01ABCDEF",
    "text": "{{ $json.message }}"
  },
  "timeout": 15
}

Compatible MCP Servers

Any MCP server that supports the stdio transport works out of the box. Popular servers include Slack, GitHub, Google Drive, SQLite, and Filesystem. See the MCP server directory for a full list.

Use with AI Agent

MCP tools can also be attached to an AI Agent node, allowing the agent to autonomously decide when to invoke MCP tools during its reasoning loop.

File Operations

The File node reads and writes files within a sandboxed directory. This prevents workflows from accessing sensitive system files.

Sandbox restriction

All file operations are restricted to a sandboxed directory. Attempting to access paths outside this directory will result in an error.
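A sandbox check of this kind is typically implemented by resolving the requested path and verifying it still lies under the allowed root. A minimal sketch (illustrative only; `resolve_sandboxed` is a hypothetical helper, not NodeLoom's actual code):

```python
import os

def resolve_sandboxed(sandbox_root: str, user_path: str) -> str:
    """Resolve user_path inside sandbox_root, rejecting escapes like '../'."""
    root = os.path.realpath(sandbox_root)
    candidate = os.path.realpath(os.path.join(root, user_path))
    # commonpath equals the root only if candidate is inside it;
    # this also rejects absolute paths, since join discards the root.
    if os.path.commonpath([root, candidate]) != root:
        raise PermissionError(f"path escapes sandbox: {user_path}")
    return candidate

path = resolve_sandboxed("/tmp/sandbox", "data/report.csv")
# resolve_sandboxed("/tmp/sandbox", "../etc/passwd")  -> raises PermissionError
```

Resolving with realpath first is what defeats `../` traversal and symlink tricks; comparing raw string prefixes would not.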

Merge Modes

The Merge node supports several strategies for combining data from multiple branches:

| Mode | Behavior |
|------|----------|
| Append | Concatenate all items from all branches into a single array |
| Merge by Key | Match items across branches by a shared key field and combine their properties |
| Inner Join | Only include items that have a matching key in all branches |
| Outer Join | Include all items from all branches, filling missing fields with null |
| Zip | Pair items by index position across branches |
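Three of these modes can be sketched in plain Python. This is an assumed simplification of the node's behavior, for intuition only:

```python
def merge_append(*branches):
    """Append: concatenate all items from all branches."""
    return [item for branch in branches for item in branch]

def merge_by_key(key, *branches):
    """Merge by Key: combine properties of items sharing the same key value."""
    merged = {}
    for branch in branches:
        for item in branch:
            merged.setdefault(item[key], {}).update(item)
    return list(merged.values())

def merge_zip(*branches):
    """Zip: pair items by index position; stops at the shortest branch."""
    return [list(group) for group in zip(*branches)]

a = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bo"}]
b = [{"id": 1, "email": "ada@example.com"}]
```

With these inputs, Merge by Key on `id` yields one combined record for id 1 and passes id 2 through unchanged, while Zip pairs only the first items of each branch.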

Cloud Services

Dedicated nodes for interacting with cloud infrastructure and managed services.

AWS S3

Upload, download, list, copy, and delete objects in Amazon S3 buckets. Supports presigned URLs and multipart uploads.

AWS Lambda

Invoke AWS Lambda functions synchronously or asynchronously. Pass input payloads and receive function responses.

AWS SQS

Send messages to and receive messages from Amazon SQS queues. Supports standard and FIFO queues.

Google Cloud Storage

Upload, download, list, and delete objects in Google Cloud Storage buckets.

Google Pub/Sub

Publish messages to Google Cloud Pub/Sub topics. Configure message attributes and ordering keys.

MongoDB

Query, insert, update, and delete documents in MongoDB collections. Supports aggregation pipelines and bulk operations.

Redis

Execute Redis commands such as GET, SET, HGET, HSET, LPUSH, RPOP, and more. Configure key expiration and data serialization.