Monitoring API
Monitor AI token usage, detect anomalies, track model drift, analyze sentiment, and manage scheduled reports. Includes both team-scoped endpoints and admin-level rate-limit monitoring.
SDK token authentication
All requests must include an SDK token in the Authorization header (`Authorization: Bearer sdk_...`). Each token has a configurable RBAC role that determines what it can access. Create tokens in Settings → Observability SDK.

Team Monitoring Endpoints
These endpoints are available to team members and provide visibility into AI usage and alert management.
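As a sketch of how a client might attach the SDK token (the base URL and token value below are hypothetical, not part of the API):

```python
import urllib.request

BASE_URL = "https://api.example.com"  # hypothetical base URL
SDK_TOKEN = "sdk_abc123"              # created in Settings → Observability SDK

def build_request(path: str) -> urllib.request.Request:
    """Build a GET request carrying the SDK bearer token."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={"Authorization": f"Bearer {SDK_TOKEN}"},
    )

req = build_request("/api/monitoring/teams/team-1/config")
```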
Get Monitoring Config
GET /api/monitoring/teams/:teamId/config
Get the monitoring configuration for a team.

```json
{
  "anomalyDetectionEnabled": true,
  "driftAlertEnabled": true,
  "sentimentTrackingEnabled": false,
  "alertThresholds": {
    "tokenUsageSpike": "<number>",
    "errorRateThreshold": "<number>",
    "latencyThreshold": "<number>"
  },
  "notificationChannels": ["email", "slack"]
}
```

Update Monitoring Config
PATCH /api/monitoring/teams/:teamId/config
Update the monitoring configuration.
Accepts the same fields as the GET response. Only provided fields are updated.
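Because only provided fields are updated, a client can send a sparse body. A minimal sketch of building such a partial-update payload (the helper name is illustrative):

```python
def build_config_patch(**fields):
    """Keep only explicitly provided (non-None) fields for a partial update."""
    return {k: v for k, v in fields.items() if v is not None}

body = build_config_patch(
    anomalyDetectionEnabled=True,
    sentimentTrackingEnabled=None,  # omitted: left unchanged server-side
    alertThresholds={"tokenUsageSpike": 2.5},
)
```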
Get Token Usage
GET /api/monitoring/teams/:teamId/token-usage
Get aggregated token usage for a team.
| Parameter | Type | Required | Description |
|---|---|---|---|
| from | ISO 8601 | No | Start date (default: 30 days ago) |
| to | ISO 8601 | No | End date (default: now) |
```json
{
  "totalInputTokens": 1250000,
  "totalOutputTokens": 890000,
  "totalCost": 12.45,
  "period": {
    "from": "2026-01-17T00:00:00.000Z",
    "to": "2026-02-17T00:00:00.000Z"
  }
}
```

Get Daily Usage
GET /api/monitoring/teams/:teamId/token-usage/daily
Get the daily token usage breakdown.

```json
{
  "days": [
    {
      "date": "2026-02-17",
      "inputTokens": 45000,
      "outputTokens": 32000,
      "cost": 0.42,
      "executionCount": 128
    }
  ]
}
```

Get Usage by Model
GET /api/monitoring/teams/:teamId/token-usage/by-model
Get token usage broken down by AI model.

```json
{
  "models": [
    {
      "model": "gpt-4o",
      "inputTokens": 800000,
      "outputTokens": 600000,
      "cost": 8.40,
      "executionCount": 320
    },
    {
      "model": "claude-sonnet-4-20250514",
      "inputTokens": 450000,
      "outputTokens": 290000,
      "cost": 4.05,
      "executionCount": 180
    }
  ]
}
```

Bulk Acknowledge Alerts
POST /api/monitoring/teams/:teamId/alerts/bulk-acknowledge
Acknowledge multiple anomaly, drift, or sentiment alerts at once.
| Field | Type | Required | Description |
|---|---|---|---|
| alertIds | UUID[] | Yes | Array of alert IDs to acknowledge |
| type | string | Yes | Alert type: `ANOMALY`, `DRIFT`, or `SENTIMENT` |
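When acknowledging a large number of alerts, a client may want to split the ID list into several requests. A sketch of building such payloads (the batch size is a client-side choice, not an API limit stated here):

```python
VALID_TYPES = {"ANOMALY", "DRIFT", "SENTIMENT"}

def bulk_ack_payloads(alert_ids, alert_type, batch_size=100):
    """Split a large alert-ID list into bulk-acknowledge request bodies."""
    if alert_type not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}")
    return [
        {"alertIds": alert_ids[i:i + batch_size], "type": alert_type}
        for i in range(0, len(alert_ids), batch_size)
    ]

payloads = bulk_ack_payloads([f"uuid-{n}" for n in range(250)], "ANOMALY")
```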
Request:

```json
{
  "alertIds": ["uuid-1", "uuid-2", "uuid-3"],
  "type": "ANOMALY"
}
```

Response:

```json
{
  "acknowledged": 3,
  "message": "Alerts acknowledged successfully"
}
```

Export Monitoring Data
GET /api/monitoring/teams/:teamId/export
Export monitoring data as CSV or JSON.
| Parameter | Type | Required | Description |
|---|---|---|---|
| format | string | No | Export format: csv or json (default: json) |
| from | ISO 8601 | No | Start date |
| to | ISO 8601 | No | End date |
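Since all three parameters are optional, a client only needs to include the ones it sets. A sketch of building the export URL (`from_` avoids the Python keyword; omitted parameters fall back to the server defaults above):

```python
from urllib.parse import urlencode

def export_url(team_id, fmt=None, from_=None, to=None):
    """Build the export path, query-encoding only the provided parameters."""
    params = {"format": fmt, "from": from_, "to": to}
    query = urlencode({k: v for k, v in params.items() if v is not None})
    path = f"/api/monitoring/teams/{team_id}/export"
    return f"{path}?{query}" if query else path

url = export_url("team-1", fmt="csv", from_="2026-01-01T00:00:00Z")
```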
Scheduled Reports
GET /api/monitoring/teams/:teamId/scheduled-reports
List all scheduled monitoring reports.

POST /api/monitoring/teams/:teamId/scheduled-reports
Create a new scheduled report.

PUT /api/monitoring/teams/:teamId/scheduled-reports/:id
Update a scheduled report.

DELETE /api/monitoring/teams/:teamId/scheduled-reports/:id
Delete a scheduled report.
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Report name |
| schedule | string | Yes | Cron expression (5-field Unix format) |
| recipients | string[] | Yes | Email addresses to send the report to |
| metrics | string[] | Yes | Metrics to include (token_usage, anomalies, drift, sentiment) |
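A quick client-side sanity check on the schedule field, as a sketch: it validates only the 5-field shape (minute, hour, day-of-month, month, day-of-week), not each field's syntax.

```python
def split_cron_fields(expr: str) -> dict:
    """Split a 5-field Unix cron expression into named fields.

    Checks the field count only; individual field syntax is not validated.
    """
    parts = expr.split()
    if len(parts) != 5:
        raise ValueError("schedule must have exactly 5 cron fields")
    names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    return dict(zip(names, parts))

# "0 9 * * 1" from the example below: 09:00 every Monday
fields = split_cron_fields("0 9 * * 1")
```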
```json
{
  "name": "Weekly AI Usage Report",
  "schedule": "0 9 * * 1",
  "recipients": ["[email protected]"],
  "metrics": ["token_usage", "anomalies"]
}
```

Workflow Eval Config
Override team-level LLM evaluation settings for individual workflows. When a workflow has its own config, it takes priority over the team defaults.
Get Workflow Eval Config
GET /api/monitoring/workflow/:workflowId/eval-config
Get the evaluation configuration for a specific workflow. Returns empty defaults if no override exists.
```json
{
  "id": "uuid",
  "workflowId": "uuid",
  "evalEnabled": true,
  "evalProvider": "openai",
  "evalModel": "gpt-4o",
  "evalCredentialId": "uuid",
  "evalSamplingRate": 50,
  "evalDimensions": ["groundedness", "relevance", "safety"],
  "evalFailureThreshold": 3.00,
  "evalNotifyFailures": true,
  "createdAt": "2026-03-13T10:00:00Z",
  "updatedAt": "2026-03-13T10:00:00Z"
}
```

Update Workflow Eval Config
PUT /api/monitoring/workflow/:workflowId/eval-config
Create or update the evaluation config for a workflow.
| Field | Type | Required | Description |
|---|---|---|---|
| evalEnabled | boolean | No | Enable or disable evaluations for this workflow |
| evalProvider | string | No | Judge provider: openai, anthropic, gemini, azure, or custom |
| evalModel | string | No | Judge model identifier |
| evalCredentialId | UUID | No | Credential ID for the judge provider |
| evalSamplingRate | integer | No | Percentage of executions to evaluate (0-100) |
| evalDimensions | string[] | No | Evaluation criteria to enable |
| evalFailureThreshold | decimal | No | Composite score below which a result is considered failed (1.00-5.00) |
| evalNotifyFailures | boolean | No | Trigger incident playbooks on low scores |
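One plausible reading of evalSamplingRate is a per-execution random draw: roughly that percentage of executions gets evaluated. A sketch under that assumption (the actual server-side sampling strategy is not documented here):

```python
import random

def should_evaluate(sampling_rate: int, rng: random.Random) -> bool:
    """Evaluate roughly sampling_rate% of executions via a uniform draw."""
    return rng.uniform(0, 100) < sampling_rate

# At rate 50, about half of 10,000 simulated executions are sampled.
rng = random.Random(42)
sampled = sum(should_evaluate(50, rng) for _ in range(10_000))
```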
Delete Workflow Eval Config
DELETE /api/monitoring/workflow/:workflowId/eval-config
Remove the workflow-level eval config, reverting to team defaults.
Returns 204 No Content on success.
Batch Evaluation
Run pre-deploy batch evaluations against a set of test cases using an LLM judge.
Start Batch Evaluation
POST /api/workflows/:workflowId/batch-eval
Start a new batch evaluation run.
Request:

```json
{
  "name": "Pre-deploy test v2.1",
  "judgeProvider": "openai",
  "judgeModel": "gpt-4o",
  "judgeCredentialId": "uuid",
  "evalDimensions": ["groundedness", "relevance", "safety"],
  "passThreshold": 3.0,
  "testCases": [
    {
      "input": { "message": "What is NodeLoom?" },
      "output": "NodeLoom is an AI agent operations platform.",
      "expectedOutput": "NodeLoom is a platform for deploying and monitoring AI agents."
    }
  ]
}
```

Response:

```json
{
  "id": "uuid",
  "workflowId": "uuid",
  "name": "Pre-deploy test v2.1",
  "status": "PENDING",
  "totalCases": 1,
  "completedCases": 0,
  "passedCases": 0,
  "failedCases": 0
}
```

List Batch Evaluation Runs
GET /api/workflows/:workflowId/batch-eval
List all batch evaluation runs for a workflow.
| Parameter | Type | Required | Description |
|---|---|---|---|
| page | integer | No | Page number (default: 0) |
| size | integer | No | Page size (default: 20) |
Get Batch Evaluation Run
GET /api/workflows/:workflowId/batch-eval/:runId
Get a single batch evaluation run with result summaries.
Get Batch Evaluation Results
GET /api/workflows/:workflowId/batch-eval/:runId/results
Get detailed per-case results for a batch evaluation run.
```json
[
  {
    "id": "uuid",
    "testCaseIndex": 0,
    "testInput": { "message": "What is NodeLoom?" },
    "actualOutput": "NodeLoom is an AI agent operations platform.",
    "expectedOutput": "NodeLoom is a platform for deploying and monitoring AI agents.",
    "groundednessScore": 4,
    "relevanceScore": 5,
    "safetyScore": 5,
    "compositeScore": 4.67,
    "reasoning": {
      "groundedness": "Response is well-grounded in the product description.",
      "relevance": "Directly answers the question.",
      "safety": "No safety concerns."
    },
    "goldStandardMatch": true,
    "goldStandardReasoning": "Both outputs correctly describe NodeLoom as an AI platform.",
    "status": "COMPLETED"
  }
]
```

Error Codes
| Status | Meaning |
|---|---|
| 400 | Invalid query parameters or date range |
| 403 | Insufficient permissions (team or admin access required) |
| 404 | Team or report not found |