# TypeScript SDK
The TypeScript SDK lets you instrument AI agents built with LangChain.js or custom Node.js code. It supports Node.js 18 and above.
## Installation

```bash
npm install @nodeloom/sdk
```

Or with other package managers:

```bash
yarn add @nodeloom/sdk
pnpm add @nodeloom/sdk
```

## Quick Start
```typescript
import { NodeLoomClient, SpanType } from "@nodeloom/sdk";

const client = new NodeLoomClient({
  apiKey: "sdk_...",
});

// Start a trace for your agent
const trace = client.trace("customer-support-agent", {
  input: { query: "How do I reset my password?" },
});

// Track an LLM call
const llmSpan = trace.span("openai-chat", SpanType.LLM);
const result = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "How do I reset my password?" }],
});
llmSpan.setOutput({ response: result.choices[0].message.content });
llmSpan.setTokenUsage({
  promptTokens: result.usage.prompt_tokens,
  completionTokens: result.usage.completion_tokens,
  model: "gpt-4o",
});
llmSpan.end();

// Track a tool call
const toolSpan = trace.span("lookup-user", SpanType.TOOL);
toolSpan.setInput({ email: "user@example.com" });
const user = await lookupUser("user@example.com");
toolSpan.setOutput({ userId: user.id });
toolSpan.end();

// End the trace
trace.end("success", { output: { answer: "You can reset your password from..." } });

// Always shut down before exit
await client.shutdown();
```

## Client Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | (required) | Your SDK token (starts with `sdk_`) |
| `endpoint` | `string` | `"https://api.nodeloom.io"` | NodeLoom API URL |
| `batchSize` | `number` | `100` | Maximum events per batch |
| `flushInterval` | `number` | `5000` | Milliseconds between automatic flushes |
| `maxQueueSize` | `number` | `10000` | Maximum events held in memory before events are dropped |
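In practice the API key is usually read from the environment rather than hard-coded. Below is a minimal sketch of building the options object from environment variables; the `NODELOOM_*` variable names are our own convention (not something the SDK reads automatically), and the fallbacks match the defaults in the table above:

```typescript
// Build client options from the environment, falling back to the
// documented defaults. The NODELOOM_* variable names are assumptions.
function loadClientOptions(env: NodeJS.ProcessEnv = process.env) {
  const apiKey = env.NODELOOM_API_KEY;
  if (!apiKey) {
    throw new Error("NODELOOM_API_KEY is not set");
  }
  return {
    apiKey,
    endpoint: env.NODELOOM_ENDPOINT ?? "https://api.nodeloom.io",
    batchSize: Number(env.NODELOOM_BATCH_SIZE ?? 100),
    flushInterval: Number(env.NODELOOM_FLUSH_INTERVAL ?? 5000),
    maxQueueSize: Number(env.NODELOOM_MAX_QUEUE_SIZE ?? 10000),
  };
}

// const client = new NodeLoomClient(loadClientOptions());
```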
```typescript
const client = new NodeLoomClient({
  apiKey: "sdk_...",
  batchSize: 50,
  flushInterval: 3000,
  maxQueueSize: 5000,
});
```

## Traces
A trace represents one run of your agent. Create a trace at the start and end it when the agent finishes.
```typescript
// Basic trace
const trace = client.trace("my-agent");
// ... agent work ...
trace.end("success");
```
```typescript
// Trace with input, output, and metadata
const trace = client.trace("my-agent", {
  input: { query: "Tell me about AI" },
  metadata: { userId: "u123", sessionId: "s456" },
  environment: "production", // or "development", "staging"
});
// ... agent work ...
trace.end("success", { output: { answer: "AI is..." } });
```
```typescript
// Trace with error
const trace = client.trace("my-agent", { input: { query: "..." } });
try {
  // ... agent work ...
  trace.end("success", { output: result });
} catch (err) {
  trace.end("error", { error: (err as Error).message });
}
```

## Spans
Spans track individual operations within a trace. Always call end() on each span when the operation completes.
```typescript
// Create a span
const span = trace.span("openai-call", SpanType.LLM);
span.setInput({ prompt: "Hello" });
const result = await callLlm("Hello");
span.setOutput({ response: result });
span.end();
```
```typescript
// Span with error
const span = trace.span("tool-call", SpanType.TOOL);
try {
  const result = await runTool();
  span.setOutput(result);
  span.end();
} catch (err) {
  span.end({ error: (err as Error).message });
}
```

## Span Types
| Constant | Value | Use Case |
|---|---|---|
| `SpanType.LLM` | `llm` | LLM API calls |
| `SpanType.TOOL` | `tool` | Tool or function calls |
| `SpanType.RETRIEVAL` | `retrieval` | Vector search, document retrieval |
| `SpanType.AGENT` | `agent` | Sub-agent invocations |
| `SpanType.CHAIN` | `chain` | Chain or pipeline steps |
| `SpanType.CUSTOM` | `custom` | Any other operation |
## Token Usage

Record token counts on an LLM span with `setTokenUsage()`:
```typescript
const span = trace.span("llm-call", SpanType.LLM);
const result = await openai.chat.completions.create({ /* ... */ });
span.setTokenUsage({
  promptTokens: result.usage.prompt_tokens,
  completionTokens: result.usage.completion_tokens,
  model: "gpt-4o",
});
span.end();
```

## LangChain.js Integration
The SDK includes a callback handler for LangChain.js that automatically traces chain runs, LLM calls, and tool invocations.
```typescript
import { NodeLoomClient } from "@nodeloom/sdk";
import { NodeLoomCallbackHandler } from "@nodeloom/sdk/integrations/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const client = new NodeLoomClient({
  apiKey: "sdk_...",
});
const handler = new NodeLoomCallbackHandler(client);

const llm = new ChatOpenAI({ model: "gpt-4o" });
const prompt = ChatPromptTemplate.fromTemplate("Tell me about {topic}");
const chain = prompt.pipe(llm);

// The handler automatically creates traces and spans
const result = await chain.invoke(
  { topic: "quantum computing" },
  { callbacks: [handler] }
);

await client.shutdown();
```

## Custom Metrics
Record custom numeric metrics to track performance indicators like latency, cost, or quality scores.
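For example, an approximate per-request cost can be derived from token usage and recorded as a metric. This helper is our own sketch built on the `client.metric()` call below, and the per-token prices are placeholders, not real rates:

```typescript
// Record an approximate request cost as a custom metric.
// The per-token prices are illustrative placeholders, not real pricing.
function recordCost(
  client: {
    metric(
      name: string,
      value: number,
      opts?: { unit?: string; tags?: Record<string, string> },
    ): void;
  },
  usage: { promptTokens: number; completionTokens: number },
  model: string,
): number {
  const cost = usage.promptTokens * 0.000005 + usage.completionTokens * 0.000015;
  client.metric("request_cost", cost, { unit: "usd", tags: { model } });
  return cost;
}
```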
```typescript
client.metric("response_latency", 1.23, { unit: "seconds", tags: { model: "gpt-4o" } });
```

## Feedback
Attach user feedback to a trace for evaluation and fine-tuning workflows.
```typescript
client.feedback({ traceId: trace.traceId, rating: 5, comment: "Great response" });
```

## Session Tracking
Group related traces into a session to track multi-turn conversations or long-running interactions.
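For example, each turn of a conversation can start its own trace while reusing the conversation's `sessionId`. The `handleTurn` helper and `TraceClient` interface below are our own sketch; only the `trace()` call mirrors the SDK:

```typescript
// Minimal shape of the client method this sketch uses, assumed from
// the examples in this document.
interface TraceClient {
  trace(
    name: string,
    options: { sessionId: string; input: unknown },
  ): { end(status: string, opts?: { output?: unknown }): void };
}

// One conversation turn: its own trace, but the shared sessionId
// groups all turns together in NodeLoom.
async function handleTurn(
  client: TraceClient,
  sessionId: string,
  query: string,
  answer: () => Promise<string>,
): Promise<string> {
  const trace = client.trace("support-agent", { sessionId, input: { query } });
  const reply = await answer();
  trace.end("success", { output: { answer: reply } });
  return reply;
}

// Both turns share session "conv-123":
// await handleTurn(client, "conv-123", "Hello", ...);
// await handleTurn(client, "conv-123", "Tell me more", ...);
```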
```typescript
const trace = client.trace("support-agent", { sessionId: "conv-123", input: { query: "Hello" } });
```

## Prompt Templates
Manage versioned prompt templates and associate them with spans for prompt tracking and iteration.
```typescript
await client.setPrompt("system-prompt", {
  content: "You are a helpful assistant for {{company}}.",
  variables: ["company"],
  modelHint: "gpt-4o",
});

span.setPromptInfo({ template: "system-prompt", version: 2 });
```

## Callback URL
Register a callback URL for your agent to receive webhook notifications from NodeLoom.
```typescript
await client.registerCallback("my-agent", "https://my-agent.example.com/callback");
```

## Guardrail Config
Fetch the guardrail configuration for your agent at runtime so your code can enforce the rules defined in the NodeLoom UI.
```typescript
const config = await client.getGuardrailConfig("my-agent"); // read-only
```

## Shutdown
Always call `await client.shutdown()` before your application exits. This flushes any remaining events in the queue.
```typescript
// Blocks until all events are flushed (up to 10s timeout)
await client.shutdown();

// With a custom timeout (milliseconds)
await client.shutdown(30000);
```

Unflushed events are discarded if the process exits before `shutdown()` completes.
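In a long-running service, it can help to hook process signals so the flush also happens on interrupts. A sketch; the choice of signals and the exit code are ours, and only the `shutdown()` method comes from the SDK:

```typescript
// Flush pending events before the process stops in response to a signal.
// Only the shutdown(timeoutMs) method documented above is assumed.
function shutdownOnSignals(client: { shutdown(timeoutMs?: number): Promise<void> }) {
  for (const signal of ["SIGINT", "SIGTERM"] as const) {
    process.once(signal, async () => {
      await client.shutdown(10000);
      process.exit(0);
    });
  }
}

// shutdownOnSignals(client);
```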