AxCrew

Telemetry (OpenTelemetry)

Enable comprehensive observability with OpenTelemetry integration

AxCrew provides optional OpenTelemetry integration. Pass custom tracer and meter instances to monitor agent operations, track performance, and analyze behavior across your crew.

Features

  • Distributed Tracing: Track agent execution flows, function calls, and dependencies
  • Metrics Collection: Monitor token usage, costs, latency, and error rates
  • Multiple Exporters: Support for console, Jaeger, Prometheus, and other OpenTelemetry backends

Setup

Install OpenTelemetry dependencies:

npm install @opentelemetry/api @opentelemetry/sdk-trace-node @opentelemetry/sdk-metrics

Optional: Install exporters for enhanced visualization:

# For Jaeger tracing UI
npm install @opentelemetry/exporter-jaeger
 
# For Prometheus metrics
npm install @opentelemetry/exporter-prometheus

Basic Configuration

import { AxCrew } from '@amitdeshmukh/ax-crew';
import { trace, metrics } from '@opentelemetry/api';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { MeterProvider } from '@opentelemetry/sdk-metrics';
import { ConsoleSpanExporter, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
 
// Initialize OpenTelemetry
const tracerProvider = new NodeTracerProvider({
  spanProcessors: [new SimpleSpanProcessor(new ConsoleSpanExporter())]
});
tracerProvider.register();
 
const meterProvider = new MeterProvider();
metrics.setGlobalMeterProvider(meterProvider);
 
// Get tracer and meter instances
const tracer = trace.getTracer('my-app');
const meter = metrics.getMeter('my-app');
 
// Pass to AxCrew
const crew = new AxCrew(
  config,
  AxCrewFunctions,
  undefined,
  {
    telemetry: {
      tracer,
      meter
    }
  }
);
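Note that a `MeterProvider` constructed with no readers records metrics but never exports them. To actually see metric output, attach a reader — a minimal sketch, assuming a recent `@opentelemetry/sdk-metrics` (older versions use `meterProvider.addMetricReader(...)` instead of the `readers` constructor option):

```typescript
import { metrics } from '@opentelemetry/api';
import {
  MeterProvider,
  PeriodicExportingMetricReader,
  ConsoleMetricExporter
} from '@opentelemetry/sdk-metrics';

// Export collected metrics to the console every 10 seconds.
// Without a reader, recorded metrics are never emitted anywhere.
const meterProvider = new MeterProvider({
  readers: [
    new PeriodicExportingMetricReader({
      exporter: new ConsoleMetricExporter(),
      exportIntervalMillis: 10_000
    })
  ]
});
metrics.setGlobalMeterProvider(meterProvider);
```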

What Gets Traced

When telemetry is enabled, AxCrew automatically instruments:

  • Agent Execution: Both forward() and streamingForward() calls create spans with timing and metadata
  • Function Calls: Tool/function invocations are traced with parameters and results
  • Provider Information: Model name, provider, and configuration details
  • Token Metrics: Input/output tokens and estimated costs
  • Errors: Exceptions and failures are captured with full context

Note: Telemetry is injected into the underlying AxAI instance, so all LLM calls (both synchronous and streaming) are automatically instrumented.
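Beyond the automatic instrumentation, you can wrap higher-level workflow steps in your own spans; spans created inside the callback become children via OpenTelemetry's context propagation. A sketch using the standard `@opentelemetry/api` (the `Writer` agent name is illustrative, matching the examples in this guide):

```typescript
import { trace, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('my-app');

// Wrap a workflow step in a parent span; spans emitted by AxCrew's
// automatic instrumentation nest under it in the trace view.
async function tracedWorkflow(topic: string) {
  return tracer.startActiveSpan('generate-article', async (span) => {
    try {
      span.setAttribute('workflow.topic', topic);
      const writer = crew.agents.get('Writer'); // assumes a Writer agent exists
      const { article } = await writer.forward({ topic, research: '' });
      span.setStatus({ code: SpanStatusCode.OK });
      return article;
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```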

Advanced Configuration with Jaeger

For enhanced visualization, export traces to Jaeger:

import { JaegerExporter } from '@opentelemetry/exporter-jaeger';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { SimpleSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/sdk-trace-base';
 
const tracerProvider = new NodeTracerProvider({
  spanProcessors: [
    new SimpleSpanProcessor(new ConsoleSpanExporter()),
    new SimpleSpanProcessor(new JaegerExporter({
      endpoint: 'http://localhost:14268/api/traces'
    }))
  ]
});
tracerProvider.register();
 
// ... rest of setup

Start Jaeger with Docker:

docker run -d --name jaeger \
  -p 16686:16686 \
  -p 14268:14268 \
  jaegertracing/all-in-one:latest

View traces at: http://localhost:16686
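On the metrics side, the Prometheus exporter can replace the console reader. A sketch assuming `@opentelemetry/exporter-prometheus`, where `PrometheusExporter` acts as a metric reader and serves a scrape endpoint (by default on port 9464 at `/metrics`):

```typescript
import { metrics } from '@opentelemetry/api';
import { MeterProvider } from '@opentelemetry/sdk-metrics';
import { PrometheusExporter } from '@opentelemetry/exporter-prometheus';

// PrometheusExporter doubles as a MetricReader: it starts an HTTP server
// and serves collected metrics for Prometheus to scrape.
const prometheusReader = new PrometheusExporter({ port: 9464 }, () => {
  console.log('Prometheus scrape endpoint: http://localhost:9464/metrics');
});

const meterProvider = new MeterProvider({ readers: [prometheusReader] });
metrics.setGlobalMeterProvider(meterProvider);
```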

Complete Example

import { AxCrew, AxCrewFunctions } from '@amitdeshmukh/ax-crew';
import { trace, metrics } from '@opentelemetry/api';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { MeterProvider } from '@opentelemetry/sdk-metrics';
import { ConsoleSpanExporter, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
 
// Setup OpenTelemetry
const tracerProvider = new NodeTracerProvider({
  spanProcessors: [new SimpleSpanProcessor(new ConsoleSpanExporter())]
});
tracerProvider.register();
 
const meterProvider = new MeterProvider();
metrics.setGlobalMeterProvider(meterProvider);
 
const tracer = trace.getTracer('axcrew-demo');
const meter = metrics.getMeter('axcrew-demo');
 
// Agent configuration
const config = {
  crew: [
    {
      name: "Researcher",
      description: "Researches topics",
      signature: "topic:string -> research:string",
      provider: "openai",
      providerKeyName: "OPENAI_API_KEY",
      ai: { model: "gpt-4", temperature: 0 }
    },
    {
      name: "Writer",
      description: "Writes content based on research",
      signature: "topic:string, research:string -> article:string",
      provider: "anthropic",
      providerKeyName: "ANTHROPIC_API_KEY",
      ai: { model: "claude-3-sonnet", temperature: 0.7 },
      agents: ["Researcher"]
    }
  ]
};
 
// Create crew with telemetry
const crew = new AxCrew(config, AxCrewFunctions, undefined, {
  telemetry: { tracer, meter }
});
 
await crew.addAllAgents();
 
// Run a workflow - all operations will be traced
const writer = crew.agents.get('Writer');
const { article } = await writer.forward({ 
  topic: "Quantum Computing",
  research: "" // Will be filled by Researcher sub-agent
});
 
console.log("Article generated:", article.slice(0, 200) + "...");
 
// Metrics are automatically collected
const crewMetrics = crew.getCrewMetrics();
console.log("Crew metrics:", crewMetrics);

Run the example:

# With console output only
npm run dev examples/telemetry-demo.ts
 
# With Jaeger (start Jaeger first)
docker run -d --name jaeger -p 16686:16686 -p 14268:14268 jaegertracing/all-in-one:latest
npm run dev examples/telemetry-demo.ts
# Open http://localhost:16686 to view traces

AxCrewOptions Interface

interface AxCrewOptions {
  debug?: boolean;
  telemetry?: {
    tracer?: any;  // OpenTelemetry Tracer instance
    meter?: any;   // OpenTelemetry Meter instance
  }
}

Best Practices

  1. Production Setup: Use appropriate exporters for your infrastructure (Jaeger, Zipkin, Cloud providers)
  2. Sampling: Configure sampling strategies to control trace volume in production
  3. Context Propagation: OpenTelemetry automatically propagates trace context across agent calls
  4. Custom Attributes: Extend traces with custom attributes specific to your use case
  5. Performance: Telemetry adds minimal overhead when properly configured
