# Java SDK
The Java SDK lets you instrument AI agents built with Spring AI, LangChain4j, or custom Java code. It supports Java 11 and above.
## Installation

### Maven
```xml
<dependency>
  <groupId>io.nodeloom</groupId>
  <artifactId>nodeloom-sdk</artifactId>
  <version>0.6.0</version>
</dependency>
```

### Gradle

```groovy
implementation 'io.nodeloom:nodeloom-sdk:0.6.0'
```

## Quick Start
```java
import io.nodeloom.sdk.NodeLoom;
import io.nodeloom.sdk.Trace;
import io.nodeloom.sdk.Span;
import io.nodeloom.sdk.SpanType;

import java.util.Map;

public class QuickStart {
    public static void main(String[] args) {
        NodeLoom client = NodeLoom.builder()
                .apiKey("sdk_...")
                .build();

        // Using try-with-resources for automatic cleanup
        try (Trace trace = client.trace("customer-support-agent")
                .input(Map.of("query", "How do I reset my password?"))
                .start()) {

            // Track an LLM call
            try (Span span = trace.span("openai-call", SpanType.LLM)) {
                // ... call your LLM ...
                span.setOutput(Map.of("response", "You can reset your password..."));
                span.setTokenUsage(150, 200, "gpt-4o");
            }

            // Track a tool call
            try (Span span = trace.span("lookup-user", SpanType.TOOL)) {
                span.setInput(Map.of("email", "[email protected]"));
                // ... call your tool ...
                span.setOutput(Map.of("userId", "u123"));
            }

            // Trace ends successfully when try-with-resources closes
            trace.setOutput(Map.of("answer", "You can reset your password from..."));
        }

        // Always close before exit
        client.close();
    }
}
```

## Client Configuration
The client uses a builder pattern for configuration:
| Method | Type | Default | Description |
|---|---|---|---|
| `.apiKey()` | `String` | (required) | Your SDK token (starts with `sdk_`) |
| `.endpoint()` | `String` | `"https://api.nodeloom.io"` | NodeLoom API URL |
| `.batchSize()` | `int` | `100` | Maximum events per batch |
| `.flushIntervalMs()` | `long` | `5000` | Milliseconds between automatic flushes |
| `.maxQueueSize()` | `int` | `10000` | Maximum events held in memory before dropping |
```java
NodeLoom client = NodeLoom.builder()
        .apiKey("sdk_...")
        .batchSize(50)
        .flushIntervalMs(3000)
        .maxQueueSize(5000)
        .build();
```

## Traces
Traces implement `AutoCloseable`, so you can use try-with-resources for automatic cleanup. When the trace closes, it sends a `trace_end` event with a success status. If an exception propagates, it records the error instead.
```java
// Try-with-resources (recommended)
try (Trace trace = client.trace("my-agent")
        .input(Map.of("query", "..."))
        .environment("production")
        .metadata(Map.of("userId", "u123"))
        .start()) {
    // ... agent work ...
    trace.setOutput(Map.of("answer", "..."));
}

// Manual management
Trace trace = client.trace("my-agent")
        .input(Map.of("query", "..."))
        .start();
try {
    // ... agent work ...
    trace.end("success", Map.of("answer", "..."));
} catch (Exception e) {
    trace.end("error", e.getMessage());
}
```

## Spans
Spans also implement `AutoCloseable` and record their timing automatically.
```java
// Try-with-resources (recommended)
try (Span span = trace.span("openai-call", SpanType.LLM)) {
    span.setInput(Map.of("prompt", "Hello"));
    String result = callLlm("Hello");
    span.setOutput(Map.of("response", result));
    span.setTokenUsage(150, 200, "gpt-4o");
}
// If an exception occurs inside the try block, the span records the error

// Manual management
Span span = trace.span("tool-call", SpanType.TOOL);
span.setInput(Map.of("query", "search term"));
try {
    Object result = runTool();
    span.setOutput(Map.of("result", result));
    span.end();
} catch (Exception e) {
    span.endWithError(e.getMessage());
}
```

## Span Types
| Constant | Value | Use Case |
|---|---|---|
| `SpanType.LLM` | `llm` | LLM API calls |
| `SpanType.TOOL` | `tool` | Tool or function calls |
| `SpanType.RETRIEVAL` | `retrieval` | Vector search, document retrieval |
| `SpanType.AGENT` | `agent` | Sub-agent invocations |
| `SpanType.CHAIN` | `chain` | Chain or pipeline steps |
| `SpanType.CUSTOM` | `custom` | Any other operation |
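As a sketch of how the retrieval and sub-agent types are used (reusing the `trace` object from Quick Start; the search and sub-agent calls are placeholders):

```java
// Retrieval step: vector search over a document store
try (Span span = trace.span("vector-search", SpanType.RETRIEVAL)) {
    span.setInput(Map.of("query", "password reset"));
    // ... run your vector search here ...
    span.setOutput(Map.of("documentCount", 3));
}

// Sub-agent step: delegate part of the task to another agent
try (Span span = trace.span("billing-agent", SpanType.AGENT)) {
    span.setInput(Map.of("task", "check refund status"));
    // ... invoke the sub-agent ...
    span.setOutput(Map.of("status", "refund pending"));
}
```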
## Token Usage
```java
try (Span span = trace.span("llm-call", SpanType.LLM)) {
    // ... call your LLM ...
    // Arguments: promptTokens, completionTokens, model
    span.setTokenUsage(150, 200, "gpt-4o");
}
```

## Custom Metrics
Record custom numeric metrics to track performance indicators like latency, cost, or quality scores.
```java
client.metric("response_latency", 1.23, "seconds", Map.of("model", "gpt-4o"));
```

## Feedback
Attach user feedback to a trace for evaluation and fine-tuning workflows.
```java
client.feedback(trace.getTraceId(), 5, "Great response", List.of("accurate"));
```

## Session Tracking
Group related traces into a session to track multi-turn conversations or long-running interactions.
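For example, two turns of the same conversation can share a session by passing the same `sessionId` on each trace (a sketch using the `TraceOptions` builder shown in this section):

```java
String sessionId = "conv-123";

// Turn 1
var greeting = client.trace("support-agent", TraceOptions.builder()
        .sessionId(sessionId)
        .input(Map.of("query", "Hello"))
        .build());
// ... handle the turn, then end the trace ...

// Turn 2: a new trace, grouped into the same session
var followUp = client.trace("support-agent", TraceOptions.builder()
        .sessionId(sessionId)
        .input(Map.of("query", "I forgot my password"))
        .build());
```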
```java
var trace = client.trace("support-agent", TraceOptions.builder()
        .sessionId("conv-123")
        .input(Map.of("query", "Hello"))
        .build());
```

## Prompt Templates
Manage versioned prompt templates and associate them with spans for prompt tracking and iteration.
```java
client.setPrompt("system-prompt", PromptOptions.builder()
        .content("You are a helpful assistant for {{company}}.")
        .variables(List.of("company"))
        .modelHint("gpt-4o")
        .build());

// Associate prompt "system-prompt", version 2, with a span
span.setPromptInfo("system-prompt", 2);
```

## Callback URL
Register a callback URL for your agent to receive webhook notifications from NodeLoom.
```java
client.registerCallback("my-agent", "https://my-agent.example.com/callback");
```

## Guardrail Config
Fetch the guardrail configuration for your agent at runtime so your code can enforce the rules defined in the NodeLoom UI.
```java
Map<String, Object> config = client.getGuardrailConfig("my-agent");
```

The returned configuration is read-only.
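The keys in the returned map depend on the guardrails you configured in the UI. As a hedged sketch of enforcement, assuming a config containing a hypothetical `maxOutputTokens` entry:

```java
import java.util.Map;

public class GuardrailCheck {

    /** Returns true if the completion stays within the configured token budget. */
    static boolean withinTokenBudget(Map<String, Object> config, int completionTokens) {
        Object limit = config.get("maxOutputTokens"); // hypothetical key, for illustration
        if (!(limit instanceof Number)) {
            return true; // no limit configured
        }
        return completionTokens <= ((Number) limit).intValue();
    }

    public static void main(String[] args) {
        // In real code this map would come from client.getGuardrailConfig("my-agent")
        Map<String, Object> config = Map.of("maxOutputTokens", 512);
        System.out.println(withinTokenBudget(config, 200));  // within budget
        System.out.println(withinTokenBudget(config, 1024)); // over budget
    }
}
```

Checking the budget before emitting a response lets your agent honor limits changed in the UI without a redeploy.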
## Shutdown
Always call `client.close()` before your application exits. This flushes remaining events and shuts down the internal thread pool.
```java
// Blocks until all events are flushed (up to 10s timeout)
client.close();

// With custom timeout
client.close(30, TimeUnit.SECONDS);
```

### Spring Boot integration
Call `client.close()` from a method annotated with `@PreDestroy`, or implement `DisposableBean`, to ensure a graceful shutdown.
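As a sketch of the `@PreDestroy` approach (assuming Spring Boot 3's `jakarta` annotations; the `NodeLoomConfig` class name and `NODELOOM_API_KEY` environment variable are illustrative):

```java
import io.nodeloom.sdk.NodeLoom;
import jakarta.annotation.PreDestroy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class NodeLoomConfig {

    private final NodeLoom client = NodeLoom.builder()
            .apiKey(System.getenv("NODELOOM_API_KEY"))
            .build();

    // Expose the client as a singleton bean for injection elsewhere
    @Bean
    public NodeLoom nodeLoom() {
        return client;
    }

    // Flush pending events and stop the SDK's thread pool on context shutdown
    @PreDestroy
    public void shutdown() {
        client.close();
    }
}
```

Alternatively, `@Bean(destroyMethod = "close")` lets Spring invoke `close()` for you when the application context shuts down.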