Monitoring API

Monitor AI token usage, detect anomalies, track model drift, analyze sentiment, and manage scheduled reports. Includes both team-scoped endpoints and admin-level rate-limit monitoring.

SDK token authentication

These endpoints can be authenticated with SDK tokens (Authorization: Bearer sdk_...). Each token has a configurable RBAC role that determines what it can access. Create tokens in Settings → Observability SDK.
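As a sketch, an authenticated request can be built like this (the base URL, token value, and team ID are placeholders):

```python
import urllib.request

# Placeholder values -- substitute your own SDK token, base URL, and team ID.
SDK_TOKEN = "sdk_example_token"
BASE_URL = "https://nodeloom.example.com"

def monitoring_request(path: str) -> urllib.request.Request:
    """Build a GET request for a monitoring endpoint with SDK-token auth."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={"Authorization": f"Bearer {SDK_TOKEN}"},
    )

req = monitoring_request("/api/monitoring/teams/team-123/config")
```

A token whose RBAC role lacks access to a given endpoint will receive a 403 response.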

Team Monitoring Endpoints

These endpoints are available to team members and provide visibility into AI usage and alert management.

Get Monitoring Config

GET
/api/monitoring/teams/:teamId/config

Get the monitoring configuration for a team

200 OK
{
  "anomalyDetectionEnabled": true,
  "driftAlertEnabled": true,
  "sentimentTrackingEnabled": false,
  "alertThresholds": {
    "tokenUsageSpike": "<number>",
    "errorRateThreshold": "<number>",
    "latencyThreshold": "<number>"
  },
  "notificationChannels": ["email", "slack"]
}

Update Monitoring Config

PUT
/api/monitoring/teams/:teamId/config

Update the monitoring configuration

Accepts the same fields as the GET response. Only provided fields are updated.
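For example, a partial update that enables sentiment tracking and adjusts one threshold (values illustrative) might look like:

```json
{
  "sentimentTrackingEnabled": true,
  "alertThresholds": {
    "errorRateThreshold": 0.05
  }
}
```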

Get Token Usage

GET
/api/monitoring/teams/:teamId/token-usage

Get aggregated token usage for a team

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| from | ISO 8601 | No | Start date (default: 30 days ago) |
| to | ISO 8601 | No | End date (default: now) |

200 OK
{
  "totalInputTokens": 1250000,
  "totalOutputTokens": 890000,
  "totalCost": 12.45,
  "period": {
    "from": "2026-01-17T00:00:00.000Z",
    "to": "2026-02-17T00:00:00.000Z"
  }
}

Get Daily Usage

GET
/api/monitoring/teams/:teamId/token-usage/daily

Get daily token usage breakdown

200 OK
{
  "days": [
    {
      "date": "2026-02-17",
      "inputTokens": 45000,
      "outputTokens": 32000,
      "cost": 0.42,
      "executionCount": 128
    }
  ]
}
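The daily breakdown can be rolled up client-side when you need period totals; a minimal sketch over sample data shaped like the response above:

```python
def summarize_days(days: list[dict]) -> dict:
    """Sum daily token counts and cost into period totals."""
    return {
        "totalInputTokens": sum(d["inputTokens"] for d in days),
        "totalOutputTokens": sum(d["outputTokens"] for d in days),
        "totalCost": round(sum(d["cost"] for d in days), 2),
    }

sample = [
    {"date": "2026-02-16", "inputTokens": 40000, "outputTokens": 30000, "cost": 0.38},
    {"date": "2026-02-17", "inputTokens": 45000, "outputTokens": 32000, "cost": 0.42},
]
totals = summarize_days(sample)
```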

Get Usage by Model

GET
/api/monitoring/teams/:teamId/token-usage/by-model

Get token usage broken down by AI model

200 OK
{
  "models": [
    {
      "model": "gpt-4o",
      "inputTokens": 800000,
      "outputTokens": 600000,
      "cost": 8.40,
      "executionCount": 320
    },
    {
      "model": "claude-sonnet-4-20250514",
      "inputTokens": 450000,
      "outputTokens": 290000,
      "cost": 4.05,
      "executionCount": 180
    }
  ]
}
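One common use of this breakdown is attributing spend by model; a sketch computing each model's share of total cost from the fields above:

```python
def cost_share(models: list[dict]) -> dict[str, float]:
    """Percentage of total cost attributable to each model, one decimal place."""
    total = sum(m["cost"] for m in models)
    return {m["model"]: round(100 * m["cost"] / total, 1) for m in models}

share = cost_share([
    {"model": "gpt-4o", "cost": 8.40},
    {"model": "claude-sonnet-4-20250514", "cost": 4.05},
])
```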

Bulk Acknowledge Alerts

POST
/api/monitoring/teams/:teamId/alerts/bulk-acknowledge

Acknowledge multiple anomaly, drift, or sentiment alerts at once

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| alertIds | UUID[] | Yes | Array of alert IDs to acknowledge |
| type | string | Yes | Alert type: ANOMALY, DRIFT, or SENTIMENT |
Request
{
  "alertIds": ["uuid-1", "uuid-2", "uuid-3"],
  "type": "ANOMALY"
}
200 OK
{
  "acknowledged": 3,
  "message": "Alerts acknowledged successfully"
}
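If you have more alerts than the server accepts in one call (the per-request limit of 100 here is an assumption, not documented above), splitting into multiple request bodies is straightforward:

```python
def batch_payloads(alert_ids: list[str], alert_type: str,
                   limit: int = 100) -> list[dict]:
    """Split a large acknowledgement into request bodies of at most `limit` IDs."""
    return [
        {"alertIds": alert_ids[i:i + limit], "type": alert_type}
        for i in range(0, len(alert_ids), limit)
    ]

payloads = batch_payloads([f"uuid-{n}" for n in range(250)], "ANOMALY")
```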

Export Monitoring Data

GET
/api/monitoring/teams/:teamId/export

Export monitoring data as CSV or JSON

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| format | string | No | Export format: csv or json (default: json) |
| from | ISO 8601 | No | Start date |
| to | ISO 8601 | No | End date |

Scheduled Reports

GET
/api/monitoring/teams/:teamId/scheduled-reports

List all scheduled monitoring reports

POST
/api/monitoring/teams/:teamId/scheduled-reports

Create a new scheduled report

PUT
/api/monitoring/teams/:teamId/scheduled-reports/:id

Update a scheduled report

DELETE
/api/monitoring/teams/:teamId/scheduled-reports/:id

Delete a scheduled report

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Report name |
| schedule | string | Yes | Cron expression (5-field Unix format) |
| recipients | string[] | Yes | Email addresses to send the report to |
| metrics | string[] | Yes | Metrics to include (token_usage, anomalies, drift, sentiment) |
Request
{
  "name": "Weekly AI Usage Report",
  "schedule": "0 9 * * 1",
  "recipients": ["[email protected]"],
  "metrics": ["token_usage", "anomalies"]
}
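Before submitting a report, a loose client-side sanity check of the 5-field cron format can catch obvious mistakes. This sketch only checks field count and permitted characters, not full cron semantics (ranges, step bounds, and so on are left to the server):

```python
import re

# Permitted characters per cron field: digits, *, /, comma, and hyphen.
FIELD = re.compile(r"^[\d*/,\-]+$")

def looks_like_cron(schedule: str) -> bool:
    """Loose check that a schedule string has five plausible cron fields."""
    fields = schedule.split()
    return len(fields) == 5 and all(FIELD.match(f) for f in fields)
```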

Workflow Evaluation Config

Override team-level LLM evaluation settings for individual workflows. When a workflow has its own config, it takes priority over the team defaults.

Get Workflow Eval Config

GET
/api/monitoring/workflow/:workflowId/eval-config

Get the evaluation configuration for a specific workflow. Returns empty defaults if no override exists.

200 OK
{
  "id": "uuid",
  "workflowId": "uuid",
  "evalEnabled": true,
  "evalProvider": "openai",
  "evalModel": "gpt-4o",
  "evalCredentialId": "uuid",
  "evalSamplingRate": 50,
  "evalDimensions": ["groundedness", "relevance", "safety"],
  "evalFailureThreshold": 3.00,
  "evalNotifyFailures": true,
  "createdAt": "2026-03-13T10:00:00Z",
  "updatedAt": "2026-03-13T10:00:00Z"
}

Update Workflow Eval Config

PUT
/api/monitoring/workflow/:workflowId/eval-config

Create or update the evaluation config for a workflow

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| evalEnabled | boolean | No | Enable or disable evaluations for this workflow |
| evalProvider | string | No | Judge provider: openai, anthropic, gemini, azure, or custom |
| evalModel | string | No | Judge model identifier |
| evalCredentialId | UUID | No | Credential ID for the judge provider |
| evalSamplingRate | integer | No | Percentage of executions to evaluate (0-100) |
| evalDimensions | string[] | No | Evaluation criteria to enable |
| evalFailureThreshold | decimal | No | Composite score below which a result is considered failed (1.00-5.00) |
| evalNotifyFailures | boolean | No | Trigger incident playbooks on low scores |
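For example, a request that evaluates half of all executions with an OpenAI judge (values illustrative) might look like:

```json
{
  "evalEnabled": true,
  "evalProvider": "openai",
  "evalModel": "gpt-4o",
  "evalSamplingRate": 50,
  "evalDimensions": ["groundedness", "relevance"],
  "evalFailureThreshold": 3.00
}
```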

Delete Workflow Eval Config

DELETE
/api/monitoring/workflow/:workflowId/eval-config

Remove the workflow-level eval config, reverting to team defaults

Returns 204 No Content on success.

Batch Evaluation

Run pre-deploy batch evaluations against a set of test cases using an LLM judge.

Start Batch Evaluation

POST
/api/workflows/:workflowId/batch-eval

Start a new batch evaluation run

Request
{
  "name": "Pre-deploy test v2.1",
  "judgeProvider": "openai",
  "judgeModel": "gpt-4o",
  "judgeCredentialId": "uuid",
  "evalDimensions": ["groundedness", "relevance", "safety"],
  "passThreshold": 3.0,
  "testCases": [
    {
      "input": { "message": "What is NodeLoom?" },
      "output": "NodeLoom is an AI agent operations platform.",
      "expectedOutput": "NodeLoom is a platform for deploying and monitoring AI agents."
    }
  ]
}
200 OK
{
  "id": "uuid",
  "workflowId": "uuid",
  "name": "Pre-deploy test v2.1",
  "status": "PENDING",
  "totalCases": 1,
  "completedCases": 0,
  "passedCases": 0,
  "failedCases": 0
}

List Batch Evaluation Runs

GET
/api/workflows/:workflowId/batch-eval

List all batch evaluation runs for a workflow

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| page | integer | No | Page number (default: 0) |
| size | integer | No | Page size (default: 20) |

Get Batch Evaluation Run

GET
/api/workflows/:workflowId/batch-eval/:runId

Get a single batch evaluation run with result summaries

Get Batch Evaluation Results

GET
/api/workflows/:workflowId/batch-eval/:runId/results

Get detailed per-case results for a batch evaluation run

200 OK
[
  {
    "id": "uuid",
    "testCaseIndex": 0,
    "testInput": { "message": "What is NodeLoom?" },
    "actualOutput": "NodeLoom is an AI agent operations platform.",
    "expectedOutput": "NodeLoom is a platform for deploying and monitoring AI agents.",
    "groundednessScore": 4,
    "relevanceScore": 5,
    "safetyScore": 5,
    "compositeScore": 4.67,
    "reasoning": {
      "groundedness": "Response is well-grounded in the product description.",
      "relevance": "Directly answers the question.",
      "safety": "No safety concerns."
    },
    "goldStandardMatch": true,
    "goldStandardReasoning": "Both outputs correctly describe NodeLoom as an AI platform.",
    "status": "COMPLETED"
  }
]
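The compositeScore in the example above is consistent with an arithmetic mean of the per-dimension scores rounded to two decimals (an assumption; the exact weighting is not specified here):

```python
def composite_score(scores: dict[str, int]) -> float:
    """Mean of the per-dimension judge scores, rounded to two decimals."""
    return round(sum(scores.values()) / len(scores), 2)

result = composite_score({"groundedness": 4, "relevance": 5, "safety": 5})
```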

Error Codes

| Status | Meaning |
| --- | --- |
| 400 | Invalid query parameters or date range |
| 403 | Insufficient permissions (team or admin access required) |
| 404 | Team or report not found |