# Go SDK
The Go SDK lets you instrument AI agents built with Go. It supports Go 1.21 and above.
## Installation

```shell
go get github.com/nodeloom/nodeloom-sdk-go
```

## Quick Start
```go
package main

import (
	nodeloom "github.com/nodeloom/nodeloom-sdk-go"
)

func main() {
	client := nodeloom.New("sdk_...")
	defer client.Close()

	// Start a trace
	trace := client.Trace("customer-support-agent",
		nodeloom.WithInput(map[string]any{"query": "How do I reset my password?"}),
	)

	// Track an LLM call
	span := trace.Span("openai-call", nodeloom.SpanTypeLLM)
	// ... call your LLM ...
	span.SetOutput(map[string]any{"response": "You can reset your password..."})
	span.SetTokenUsage(150, 200, "gpt-4o")
	span.End()

	// Track a tool call
	toolSpan := trace.Span("lookup-user", nodeloom.SpanTypeTool)
	toolSpan.SetInput(map[string]any{"email": "user@example.com"})
	// ... call your tool ...
	toolSpan.SetOutput(map[string]any{"userId": "u123"})
	toolSpan.End()

	// End the trace
	trace.End(nodeloom.StatusSuccess,
		nodeloom.WithOutput(map[string]any{"answer": "You can reset your password from..."}),
	)
}
```

## Client Configuration
The client uses the functional options pattern:
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` (1st arg) | `string` | (required) | Your SDK token (starts with `sdk_`) |
| `WithEndpoint()` | `string` | `"https://api.nodeloom.io"` | NodeLoom API URL |
| `WithBatchSize()` | `int` | `100` | Maximum events per batch |
| `WithFlushInterval()` | `time.Duration` | `5s` | Duration between automatic flushes |
| `WithMaxQueueSize()` | `int` | `10000` | Maximum events held in memory before dropping |
```go
client := nodeloom.New("sdk_...",
	nodeloom.WithBatchSize(50),
	nodeloom.WithFlushInterval(3*time.Second),
	nodeloom.WithMaxQueueSize(5000),
)
defer client.Close()
```

## Traces
A trace represents one run of your agent. Use `defer client.Close()` to ensure all events are flushed on exit.
Basic trace:

```go
trace := client.Trace("my-agent")
// ... agent work ...
trace.End(nodeloom.StatusSuccess)
```

Trace with options:

```go
trace := client.Trace("my-agent",
	nodeloom.WithInput(map[string]any{"query": "Tell me about AI"}),
	nodeloom.WithMetadata(map[string]any{"userId": "u123"}),
	nodeloom.WithEnvironment("production"),
)
// ... agent work ...
trace.End(nodeloom.StatusSuccess,
	nodeloom.WithOutput(map[string]any{"answer": "AI is..."}),
)
```

Trace with error handling:

```go
trace := client.Trace("my-agent",
	nodeloom.WithInput(map[string]any{"query": "..."}),
)
result, err := runAgent()
if err != nil {
	trace.EndWithError(err)
} else {
	trace.End(nodeloom.StatusSuccess,
		nodeloom.WithOutput(result),
	)
}
```

## Spans
Spans track individual operations within a trace. Always call `End()` (or `EndWithError()`) on each span.
Create a span:

```go
span := trace.Span("openai-call", nodeloom.SpanTypeLLM)
span.SetInput(map[string]any{"prompt": "Hello"})
result, err := callLLM("Hello")
if err != nil {
	span.EndWithError(err)
} else {
	span.SetOutput(map[string]any{"response": result})
	span.End()
}
```

Span with token usage:

```go
span := trace.Span("llm-call", nodeloom.SpanTypeLLM)
// ... call your LLM ...
span.SetTokenUsage(150, 200, "gpt-4o")
span.SetOutput(map[string]any{"response": result})
span.End()
```

### Span Types
| Constant | Value | Use Case |
|---|---|---|
| `SpanTypeLLM` | `llm` | LLM API calls |
| `SpanTypeTool` | `tool` | Tool or function calls |
| `SpanTypeRetrieval` | `retrieval` | Vector search, document retrieval |
| `SpanTypeAgent` | `agent` | Sub-agent invocations |
| `SpanTypeChain` | `chain` | Chain or pipeline steps |
| `SpanTypeCustom` | `custom` | Any other operation |
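The same create/`SetInput`/`SetOutput`/`End` pattern shown above applies to every span type; only the type constant changes. A retrieval sketch, where `searchVectors` stands in for your own retrieval layer and is not part of the SDK:

```go
span := trace.Span("vector-search", nodeloom.SpanTypeRetrieval)
span.SetInput(map[string]any{"query": "password reset", "topK": 5})
docs, err := searchVectors("password reset", 5) // hypothetical helper
if err != nil {
	span.EndWithError(err)
} else {
	span.SetOutput(map[string]any{"documents": len(docs)})
	span.End()
}
```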
## Custom Metrics
Record custom numeric metrics to track performance indicators like latency, cost, or quality scores.
```go
client.Metric("response_latency", 1.23, nodeloom.MetricOpts{
	Unit: "seconds",
	Tags: map[string]string{"model": "gpt-4o"},
})
```

## Feedback
Attach user feedback to a trace for evaluation and fine-tuning workflows.
```go
client.Feedback(trace.TraceID, 5, "Great response", []string{"accurate"})
```

## Session Tracking
Group related traces into a session to track multi-turn conversations or long-running interactions.
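Since the session ID is just a string you pass on each trace, a multi-turn conversation amounts to reusing it across traces. A minimal sketch (the queries and turn handling are illustrative):

```go
sessionID := "conv-123"
for _, query := range []string{"Hello", "How do I reset my password?"} {
	trace := client.Trace("support-agent",
		nodeloom.WithSessionID(sessionID),
		nodeloom.WithInput(map[string]any{"query": query}),
	)
	// ... handle the turn ...
	trace.End(nodeloom.StatusSuccess)
}
```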
```go
trace := client.Trace("support-agent",
	nodeloom.WithSessionID("conv-123"),
	nodeloom.WithInput(map[string]any{"query": "Hello"}),
)
```

## Prompt Templates
Manage versioned prompt templates and associate them with spans for prompt tracking and iteration.
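Templates use `{{variable}}` placeholders. This page doesn't document a rendering helper, so substitution is sketched here in plain Go; `renderPrompt` is our own local helper, not a NodeLoom API:

```go
package main

import (
	"fmt"
	"strings"
)

// renderPrompt fills {{name}} placeholders in a template from vars.
// Local helper for illustration only, not part of the NodeLoom SDK.
func renderPrompt(template string, vars map[string]string) string {
	pairs := make([]string, 0, len(vars)*2)
	for name, value := range vars {
		pairs = append(pairs, "{{"+name+"}}", value)
	}
	return strings.NewReplacer(pairs...).Replace(template)
}

func main() {
	prompt := renderPrompt(
		"You are a helpful assistant for {{company}}.",
		map[string]string{"company": "Acme"},
	)
	fmt.Println(prompt) // You are a helpful assistant for Acme.
}
```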
```go
client.SetPrompt("system-prompt", nodeloom.PromptOpts{
	Content:   "You are a helpful assistant for {{company}}.",
	Variables: []string{"company"},
	ModelHint: "gpt-4o",
})

span.SetPromptInfo("system-prompt", 2)
```

## Callback URL
Register a callback URL for your agent to receive webhook notifications from NodeLoom.
```go
client.RegisterCallback("my-agent", "https://my-agent.example.com/callback")
```

## Guardrail Config
Fetch the guardrail configuration for your agent at runtime so your code can enforce the rules defined in the NodeLoom UI.
```go
config, err := client.GetGuardrailConfig("my-agent")
```

The configuration is read-only: it is defined in the NodeLoom UI and fetched here so your code can enforce it.
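What the config contains depends on the rules you define in the UI, and its fields aren't documented on this page; the `MaxOutputTokens` field below is purely hypothetical, shown only to illustrate the fetch-then-enforce pattern:

```go
// Sketch only: the GuardrailConfig field used here is hypothetical,
// not a documented part of the NodeLoom SDK.
config, err := client.GetGuardrailConfig("my-agent")
if err != nil {
	log.Printf("guardrails unavailable, proceeding without: %v", err)
	return
}
if config.MaxOutputTokens > 0 { // hypothetical field
	// Clamp your model's max-tokens parameter to the configured limit.
	maxTokens = min(maxTokens, config.MaxOutputTokens)
}
```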
## Shutdown
Always close the client to flush remaining events. The idiomatic Go pattern is to use `defer`:
```go
client := nodeloom.New("sdk_...")
defer client.Close() // flushes remaining events on exit
```

### Context-aware shutdown

Use `CloseWithContext(ctx)` to respect cancellation deadlines.
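For example, to cap shutdown time with a deadline (the five-second budget is an arbitrary choice, and the error handling assumes `CloseWithContext` returns an `error`):

```go
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := client.CloseWithContext(ctx); err != nil {
	log.Printf("nodeloom: shutdown incomplete: %v", err)
}
```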