# TypeScript SDK

The TypeScript SDK lets you instrument AI agents built with LangChain.js or custom Node.js code. It supports Node.js 18 and above.

## Installation

```bash
npm install @nodeloom/sdk
```

Or with other package managers:

```bash
yarn add @nodeloom/sdk
pnpm add @nodeloom/sdk
```

## Quick Start

```typescript
// quick-start.ts
import { NodeLoomClient, SpanType } from "@nodeloom/sdk";

const client = new NodeLoomClient({
  apiKey: "sdk_...",
});

// Start a trace for your agent
const trace = client.trace("customer-support-agent", {
  input: { query: "How do I reset my password?" },
});

// Track an LLM call
const llmSpan = trace.span("openai-chat", SpanType.LLM);
const result = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "How do I reset my password?" }],
});
llmSpan.setOutput({ response: result.choices[0].message.content });
llmSpan.setTokenUsage({
  promptTokens: result.usage.prompt_tokens,
  completionTokens: result.usage.completion_tokens,
  model: "gpt-4o",
});
llmSpan.end();

// Track a tool call
const toolSpan = trace.span("lookup-user", SpanType.TOOL);
toolSpan.setInput({ email: "[email protected]" });
const user = await lookupUser("[email protected]");
toolSpan.setOutput({ userId: user.id });
toolSpan.end();

// End the trace
trace.end("success", { output: { answer: "You can reset your password from..." } });

// Always shut down before exit
await client.shutdown();
```

## Client Configuration

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | (required) | Your SDK token (starts with `sdk_`) |
| `endpoint` | `string` | `"https://api.nodeloom.io"` | NodeLoom API URL |
| `batchSize` | `number` | `100` | Maximum events per batch |
| `flushInterval` | `number` | `5000` | Milliseconds between automatic flushes |
| `maxQueueSize` | `number` | `10000` | Maximum events held in memory before dropping |

```typescript
const client = new NodeLoomClient({
  apiKey: "sdk_...",
  batchSize: 50,
  flushInterval: 3000,
  maxQueueSize: 5000,
});
```
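The buffering options interact: events queue in memory, flush in batches of `batchSize` (or whenever `flushInterval` elapses), and are dropped once `maxQueueSize` is reached. A minimal sketch of these semantics — illustrative only, not the SDK's actual implementation:

```typescript
// Illustrative sketch of the client's buffering semantics.
type Event = { name: string };

class SketchQueue {
  private queue: Event[] = [];
  public sent: Event[][] = [];
  public dropped = 0;

  constructor(private batchSize: number, private maxQueueSize: number) {}

  enqueue(event: Event): void {
    if (this.queue.length >= this.maxQueueSize) {
      this.dropped++; // queue is full: the event is lost
      return;
    }
    this.queue.push(event);
    // Reaching batchSize triggers an immediate flush
    if (this.queue.length >= this.batchSize) this.flush();
  }

  flush(): void {
    // In the real client this also fires every flushInterval milliseconds
    if (this.queue.length === 0) return;
    this.sent.push(this.queue.splice(0, this.batchSize));
  }
}

const q = new SketchQueue(5, 3); // batchSize 5, maxQueueSize 3
["a", "b", "c", "d", "e"].forEach((name) => q.enqueue({ name }));
q.flush(); // stand-in for the flushInterval timer firing
```

Here a `maxQueueSize` of 3 with a `batchSize` of 5 drops the last two of five events, so keep the queue sized comfortably above the batch size.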

## Traces

A trace represents one run of your agent. Create a trace at the start and end it when the agent finishes.

```typescript
// Basic trace
const trace = client.trace("my-agent");
// ... agent work ...
trace.end("success");

// Trace with input, output, and metadata
const trace = client.trace("my-agent", {
  input: { query: "Tell me about AI" },
  metadata: { userId: "u123", sessionId: "s456" },
  environment: "production",  // or "development", "staging"
});
// ... agent work ...
trace.end("success", { output: { answer: "AI is..." } });

// Trace with error
const trace = client.trace("my-agent", { input: { query: "..." } });
try {
  // ... agent work ...
  trace.end("success", { output: result });
} catch (err) {
  trace.end("error", { error: (err as Error).message });
}
```

## Spans

Spans track individual operations within a trace. Always call `end()` on each span when the operation completes.

```typescript
// Create a span
const span = trace.span("openai-call", SpanType.LLM);
span.setInput({ prompt: "Hello" });
const result = await callLlm("Hello");
span.setOutput({ response: result });
span.end();

// Span with error
const span = trace.span("tool-call", SpanType.TOOL);
try {
  const result = await runTool();
  span.setOutput(result);
  span.end();
} catch (err) {
  span.end({ error: (err as Error).message });
}
```
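Because a span that never reaches `end()` leaves the trace incomplete, it can help to wrap operations in a helper that guarantees the call. This is a sketch built on the span methods shown above, not part of the SDK; `Span` is a minimal stand-in interface:

```typescript
// Minimal stand-in for the SDK's span object
interface Span {
  setOutput(output: unknown): void;
  end(info?: { error?: string }): void;
}

// Runs fn, records its result on the span, and guarantees end() is
// called on both the success and the error path.
async function withSpan<T>(span: Span, fn: () => Promise<T>): Promise<T> {
  try {
    const result = await fn();
    span.setOutput(result);
    span.end();
    return result;
  } catch (err) {
    span.end({ error: (err as Error).message });
    throw err; // re-throw so callers still see the failure
  }
}
```

Usage then becomes `await withSpan(trace.span("tool-call", SpanType.TOOL), () => runTool())`, with no way to forget the `end()` call.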

### Span Types

| Constant | Value | Use Case |
| --- | --- | --- |
| `SpanType.LLM` | `llm` | LLM API calls |
| `SpanType.TOOL` | `tool` | Tool or function calls |
| `SpanType.RETRIEVAL` | `retrieval` | Vector search, document retrieval |
| `SpanType.AGENT` | `agent` | Sub-agent invocations |
| `SpanType.CHAIN` | `chain` | Chain or pipeline steps |
| `SpanType.CUSTOM` | `custom` | Any other operation |

### Token Usage

```typescript
const span = trace.span("llm-call", SpanType.LLM);
const result = await openai.chat.completions.create({...});
span.setTokenUsage({
  promptTokens: result.usage.prompt_tokens,
  completionTokens: result.usage.completion_tokens,
  model: "gpt-4o",
});
span.end();
```
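Token counts can also drive a rough client-side cost estimate, e.g. to report as a custom metric. A sketch — the per-1K-token prices below are placeholders, not real provider rates:

```typescript
// Placeholder price table; substitute your provider's actual rates.
const PRICE_PER_1K = {
  "gpt-4o": { prompt: 0.005, completion: 0.015 }, // hypothetical values
} as const;

// Linear cost model: tokens / 1000 * price-per-1K, summed per direction.
function estimateCostUsd(
  model: keyof typeof PRICE_PER_1K,
  promptTokens: number,
  completionTokens: number
): number {
  const p = PRICE_PER_1K[model];
  return (promptTokens / 1000) * p.prompt + (completionTokens / 1000) * p.completion;
}
```

The result could then be reported via `client.metric("llm_cost_usd", ...)` alongside the token usage itself.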

## LangChain.js Integration

The SDK includes a callback handler for LangChain.js that automatically traces chain runs, LLM calls, and tool invocations.

```typescript
// LangChain.js example
import { NodeLoomClient } from "@nodeloom/sdk";
import { NodeLoomCallbackHandler } from "@nodeloom/sdk/integrations/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const client = new NodeLoomClient({
  apiKey: "sdk_...",
});
const handler = new NodeLoomCallbackHandler(client);

const llm = new ChatOpenAI({ model: "gpt-4o" });
const prompt = ChatPromptTemplate.fromTemplate("Tell me about {topic}");
const chain = prompt.pipe(llm);

// The handler automatically creates traces and spans
const result = await chain.invoke(
  { topic: "quantum computing" },
  { callbacks: [handler] }
);

await client.shutdown();
```

## Custom Metrics

Record custom numeric metrics to track performance indicators like latency, cost, or quality scores.

```typescript
client.metric("response_latency", 1.23, { unit: "seconds", tags: { model: "gpt-4o" } });
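A common pattern is a wrapper that times an operation and reports the latency automatically. A sketch, assuming `client.metric` has the signature shown above (`MetricFn` stands in for the bound method):

```typescript
// Stand-in for the bound client.metric method
type MetricFn = (
  name: string,
  value: number,
  opts?: { unit?: string; tags?: Record<string, string> }
) => void;

// Times fn and reports its duration in seconds, even if fn throws.
async function timed<T>(metric: MetricFn, name: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    metric(name, (Date.now() - start) / 1000, { unit: "seconds" });
  }
}
```

For example, `timed(client.metric.bind(client), "response_latency", () => callLlm(prompt))` records the latency of every call without manual bookkeeping.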

## Feedback

Attach user feedback to a trace for evaluation and fine-tuning workflows.

```typescript
client.feedback({ traceId: trace.traceId, rating: 5, comment: "Great response" });
```

## Session Tracking

Group related traces into a session to track multi-turn conversations or long-running interactions.

```typescript
const trace = client.trace("support-agent", { sessionId: "conv-123", input: { query: "Hello" } });
```

## Prompt Templates

Manage versioned prompt templates and associate them with spans for prompt tracking and iteration.

```typescript
await client.setPrompt("system-prompt", {
  content: "You are a helpful assistant for {{company}}.",
  variables: ["company"],
  modelHint: "gpt-4o"
});
span.setPromptInfo({ template: "system-prompt", version: 2 });
```
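The template content uses `{{variable}}` placeholders. Whether the SDK renders templates itself is not shown here, but a client-side renderer could look like this sketch:

```typescript
// Substitutes {{name}} placeholders from a values map; throws if a
// placeholder has no corresponding value.
function renderPrompt(content: string, values: Record<string, string>): string {
  return content.replace(/\{\{(\w+)\}\}/g, (_match, name: string) => {
    if (!(name in values)) throw new Error(`missing variable: ${name}`);
    return values[name];
  });
}
```

Failing fast on a missing variable catches template/values drift before a malformed prompt reaches the model.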

## Callback URL

Register a callback URL for your agent to receive webhook notifications from NodeLoom.

```typescript
await client.registerCallback("my-agent", "https://my-agent.example.com/callback");
```

## Guardrail Config

Fetch the guardrail configuration for your agent at runtime so your code can enforce the rules defined in the NodeLoom UI.

```typescript
const config = await client.getGuardrailConfig("my-agent");
```

> **Read-only:** Guardrails are configured in the NodeLoom UI. The SDK provides read-only access to the configuration.
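The shape of the returned configuration is not documented in this section. Assuming it includes, say, a list of blocked phrases, runtime enforcement could look like this sketch (the `blockedPhrases` field is hypothetical):

```typescript
// Hypothetical config shape; the real getGuardrailConfig response may differ.
interface GuardrailConfig {
  blockedPhrases: string[];
}

// Case-insensitive substring check against every blocked phrase.
function violates(config: GuardrailConfig, text: string): boolean {
  const lower = text.toLowerCase();
  return config.blockedPhrases.some((phrase) => lower.includes(phrase.toLowerCase()));
}
```

Your agent would fetch the config once at startup and run each candidate response through a check like this before returning it.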

## Shutdown

Always call `await client.shutdown()` before your application exits. This flushes any remaining events in the queue.

```typescript
// Blocks until all events are flushed (up to 10s timeout)
await client.shutdown();

// With custom timeout (milliseconds)
await client.shutdown(30000);
```

> **Unflushed events:** If your process exits without calling `shutdown()`, any events still in the queue will be lost. For serverless environments, call `shutdown()` at the end of each handler invocation.
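In a serverless handler, that pattern looks like the sketch below, where `Flushable` is a minimal stand-in for the client and `handleRequest` is a hypothetical handler:

```typescript
// Minimal stand-in for the client's shutdown surface
interface Flushable {
  shutdown(timeoutMs?: number): Promise<void>;
}

// try/finally guarantees the flush runs whether the handler
// succeeds or throws, before the runtime freezes the process.
async function handleRequest(client: Flushable, _event: unknown): Promise<string> {
  try {
    // ... create a trace, run the agent, end the trace ...
    return "ok";
  } finally {
    await client.shutdown(5000); // short timeout to stay within the invocation budget
  }
}
```

A short timeout keeps the flush from consuming the remaining invocation time if the API is slow.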