Overview
Distributed tracing provides end-to-end visibility into how requests are processed across multiple services. When a request comes in, Acontext automatically creates a trace that follows the request through:
- acontext-api: HTTP API layer (Go service)
- acontext-core: Core business logic (Python service)
- Database operations: SQL queries and transactions
- Cache operations: Redis interactions
- Storage operations: S3 blob storage
- Message queue: RabbitMQ message processing
- LLM operations: Embedding and completion calls
Traces are automatically collected when OpenTelemetry is enabled in your deployment. The system uses Jaeger as the trace backend for storage and visualization.
How It Works
Acontext uses OpenTelemetry to instrument both the API and Core services.
Automatic Instrumentation
The following operations are automatically traced:
- HTTP requests: All API endpoints are instrumented with request/response details
- Database queries: SQL operations are traced with query details
- Cache operations: Redis get/set operations
- Storage operations: S3 upload/download operations
- Message processing: Async message queue operations
- LLM calls: Embedding and completion API calls
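As a rough illustration of how this kind of automatic instrumentation is typically wired up in a Python service, the sketch below uses the standard OpenTelemetry instrumentor packages. The framework and library choices (FastAPI, Redis, SQLAlchemy) and the OTLP exporter are assumptions for the example, not a description of acontext-core's actual setup.

```python
# Hypothetical setup sketch: enable OpenTelemetry auto-instrumentation for a
# Python service. Library choices (FastAPI, Redis, SQLAlchemy) are assumptions.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.instrumentation.redis import RedisInstrumentor
from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor
from fastapi import FastAPI

# Register a tracer provider that exports spans to an OTLP endpoint
# (for example an OpenTelemetry Collector or Jaeger's OTLP receiver).
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

app = FastAPI()

# Each instrumentor creates spans automatically for its library:
FastAPIInstrumentor.instrument_app(app)   # HTTP requests
RedisInstrumentor().instrument()          # cache operations
SQLAlchemyInstrumentor().instrument()     # database queries
```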
Cross-Service Tracing
When a request flows from acontext-api to acontext-core, the trace context is automatically propagated using OpenTelemetry's trace context headers. This creates a unified trace showing the complete request flow across both services.
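For illustration, cross-service propagation works by injecting the current trace context (the W3C traceparent header) into the outgoing HTTP request so the downstream service continues the same trace. A minimal Python sketch, where the endpoint URL and the use of the requests library are assumptions:

```python
# Minimal propagation sketch: inject the current trace context into the
# headers of an outgoing HTTP call so the downstream service joins the trace.
# The URL and the use of `requests` are illustrative assumptions.
import requests
from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("call-acontext-core"):
    headers = {}
    inject(headers)  # adds the W3C `traceparent` header for the current span
    response = requests.post(
        "http://acontext-core:8000/process",  # hypothetical endpoint
        json={"session_id": "abc123"},
        headers=headers,
    )
```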

Traces viewer showing distributed traces with hierarchical span visualization
Viewing Traces
Dashboard Traces Viewer
Access the traces viewer from the dashboard to see all traces in your system:
- Time range filtering: Filter traces by time ranges (15 minutes, 1 hour, 6 hours, 24 hours, or 7 days)
- Auto-refresh: Automatically refreshes every 30 seconds
- Hierarchical visualization: Expand traces to view nested spans showing the complete request flow
- Service identification: Color-coded spans distinguish between services (acontext-api in teal, acontext-core in blue)
- HTTP method badges: Quickly identify request types
- Duration visualization: Visual timeline bars show relative execution times
- Trace ID: Copy trace IDs to correlate with logs and metrics
Jaeger UI
For advanced trace analysis, you can access Jaeger UI directly. The traces viewer provides a link to open each trace in Jaeger, where you can:
- View detailed span attributes and tags
- Analyze trace dependencies and service maps
- Filter and search traces by various criteria
- Compare trace performance over time
Configuration
Tracing is configured through environment variables. The following settings control tracing behavior:
Core Service (Python)
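As a hedged sketch of what such configuration can look like for the Python service, the example below reads the standard OpenTelemetry SDK environment variables (OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_SERVICE_NAME, OTEL_TRACES_SAMPLER_ARG); Acontext's actual variable names and defaults may differ, so treat this as illustrative only.

```python
# Illustrative only: configure the Python tracer from standard OpenTelemetry
# environment variables. Acontext's actual variable names may differ.
import os

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.trace.sampling import ParentBasedTraceIdRatio
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

endpoint = os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://jaeger:4317")
ratio = float(os.getenv("OTEL_TRACES_SAMPLER_ARG", "1.0"))  # 1.0 = trace everything

provider = TracerProvider(
    resource=Resource.create(
        {"service.name": os.getenv("OTEL_SERVICE_NAME", "acontext-core")}
    ),
    sampler=ParentBasedTraceIdRatio(ratio),
)
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint=endpoint)))
trace.set_tracer_provider(provider)
```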
API Service (Go)
Understanding Traces
Trace Structure
Each trace consists of:
- Root span: The initial request entry point (usually an HTTP endpoint)
- Child spans: Operations performed during request processing
- Nested spans: Operations that are part of larger operations
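To make that structure concrete, here is a small hedged Python sketch of how such a hierarchy is produced with manually created spans; the operation names are invented for illustration.

```python
# Illustrative span hierarchy: a root span for the request, a child span for
# business logic, and a nested span for a database call. Names are made up.
from opentelemetry import trace

tracer = trace.get_tracer("acontext-example")

with tracer.start_as_current_span("GET /api/v1/session"):        # root span
    with tracer.start_as_current_span("load_session"):           # child span
        with tracer.start_as_current_span("db.query.sessions"):  # nested span
            pass  # the SQL query would run here
```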
Span Information
Each span contains:
- Operation name: The operation being performed (e.g., GET /api/v1/session/:session_id/get_learning_status)
- Service name: Which service performed the operation (acontext-api or acontext-core)
- Duration: How long the operation took
- Tags: Additional metadata (HTTP method, status codes, error information)
- Timestamps: When the operation started and ended
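For reference, tags and error information end up on a span through its attributes and status. The sketch below shows the general OpenTelemetry pattern; the attribute keys are examples, not the exact attributes Acontext emits.

```python
# Illustrative sketch: attach tags (attributes) and error status to a span.
# The attribute key and error handling are examples only.
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer("acontext-example")

with tracer.start_as_current_span("cache.get") as span:
    span.set_attribute("cache.key", "session:abc123")
    try:
        value = None  # the cache lookup would happen here
    except Exception as exc:
        span.record_exception(exc)
        span.set_status(Status(StatusCode.ERROR))
        raise
```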
Service Colors
In the traces viewer, spans are color-coded by service:
- Teal: acontext-api operations
- Blue: acontext-core operations
- Gray: Other services or unknown operations
Use Cases
Performance debugging
Identify slow operations and bottlenecks in your system by analyzing trace durations. Expand traces to see which specific operation is taking the most time.
- Open the traces viewer in the dashboard
- Filter by time range to focus on recent requests
- Look for traces with long durations
- Expand the trace to see which span is slow
- Check the operation name and service to identify the bottleneck
Error investigation
When an error occurs, use the trace ID to correlate logs and understand the full request flow that led to the error.
- Find the error in your logs and note the trace ID
- Search for the trace ID in the traces viewer
- Expand the trace to see the complete request flow
- Identify which service and operation failed
- Check span tags for error details
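This workflow assumes the trace ID appears in your logs. One common way to achieve that, shown here as a hedged Python sketch (the log format is an example, not Acontext's actual one), is to pull the trace ID from the active span when logging:

```python
# Illustrative sketch: include the active trace ID in log records so a log
# line can be looked up in the traces viewer. The log format is an example.
import logging

from opentelemetry import trace
from opentelemetry.trace import format_trace_id

logger = logging.getLogger("acontext-example")


def log_with_trace_id(message: str) -> None:
    span_context = trace.get_current_span().get_span_context()
    trace_id = format_trace_id(span_context.trace_id)  # 32-char hex string
    logger.error("%s trace_id=%s", message, trace_id)
```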
Service dependency analysis
Understand how your services interact by analyzing trace flows. See which services call which other services and how frequently.
- View traces in Jaeger UI for advanced analysis
- Use Jaeger’s service map view to visualize dependencies
- Analyze trace patterns to understand service communication
Performance optimization
Compare trace durations before and after optimizations to measure improvements.
- Note trace durations for specific operations before optimization
- Make your optimizations
- Compare new trace durations to verify improvements
- Use trace data to identify the next optimization target
Best Practices
Use sampling in production
Configure a sampling ratio (e.g., 0.1 for 10%) to reduce storage costs while maintaining observability.
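For example, with the OpenTelemetry Python SDK a parent-based ratio sampler keeps roughly 10% of new traces while still honoring sampling decisions made by upstream callers. This is a sketch of the general approach, not necessarily how Acontext exposes the setting:

```python
# Illustrative 10% sampling setup using the OpenTelemetry Python SDK.
# ParentBasedTraceIdRatio respects the caller's sampling decision and samples
# roughly 10% of traces that start in this service.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBasedTraceIdRatio

trace.set_tracer_provider(TracerProvider(sampler=ParentBasedTraceIdRatio(0.1)))
```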
Correlate with logs
Use trace IDs from traces to find related log entries and get complete context for debugging.
Monitor trace volume
Watch trace collection rates to ensure your sampling ratio is appropriate for your traffic volume.
Set up alerts
Configure alerts based on trace durations to catch performance regressions early.