This page focuses on configuration. For a complete, runnable example, see the full working example repository: Spring AI + LangWatch (OpenTelemetry) example.
Prerequisites
- Java 17 or later
- An OpenAI API key (if you use the OpenAI provider via Spring AI)
- A LangWatch API key
Setup
1. Set required environment variables
Export your provider API keys as environment variables used by your app.
In production, store these values in your platform's secret manager. Never store secrets in source control.
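For example, on macOS or Linux you might export them in your shell. The variable names below are illustrative — match them to whatever your configuration actually references:

```shell
# Illustrative variable names — align them with what your application.yaml references.
export OPENAI_API_KEY="sk-..."    # consumed by Spring AI's OpenAI provider
export LANGWATCH_API_KEY="..."    # used in the OTLP Authorization header sent to LangWatch
```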
2. Configure the OpenTelemetry exporter to LangWatch
Configure OpenTelemetry and Spring AI in your `src/main/resources/application.yaml` so your app captures and sends traces directly to LangWatch.
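A minimal sketch of the relevant `application.yaml` entries, assuming the OpenTelemetry Spring Boot starter is on the classpath. Property names can differ slightly between starter and Spring AI versions, so verify them against the reference docs for the versions you use:

```yaml
otel:
  service:
    name: my-spring-ai-app            # hypothetical service name — use your own
  exporter:
    otlp:
      endpoint: https://app.langwatch.ai/api/otel
      protocol: http/protobuf
      headers:
        Authorization: Bearer ${LANGWATCH_API_KEY}

spring:
  ai:
    chat:
      observations:
        # Opt in to capturing prompt/completion content; property names
        # vary by Spring AI version — check its observability reference.
        include-prompt: true
        include-completion: true
```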
3. Start your Spring Boot application as usual
Run your application the way you normally do (IDE, Gradle, Maven, or a container). No special commands are required beyond your standard start procedure.
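For instance, with the standard build tool wrappers:

```shell
# Maven
./mvnw spring-boot:run
# or Gradle
./gradlew bootRun
```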
After your application handles AI calls via Spring AI, traces will appear in your LangWatch workspace.
What gets traced
- HTTP requests handled by your Spring Boot application
- AI model calls performed via Spring AI (e.g., OpenAI)
- Prompt and completion content, when capture is enabled/configured
- Performance metrics and errors/exceptions
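As an illustration, a handler like the following (a hypothetical controller sketch using Spring AI's `ChatClient` — not taken from the example repository) is the kind of code that produces the spans listed above:

```java
// Hypothetical controller sketch — class name, endpoint, and parameter are illustrative.
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ChatController {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder for the active provider (e.g., OpenAI).
    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    // Each request produces an HTTP server span; the model call below
    // produces the Spring AI client spans that are exported to LangWatch.
    @GetMapping("/chat")
    String chat(@RequestParam String question) {
        return chatClient.prompt()
                .user(question)
                .call()
                .content();
    }
}
```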
Monitoring
Once configured:
- Visit your LangWatch dashboard to explore spans and AI-specific attributes
- Analyze model performance, usage, and costs
- Investigate failures with full trace context
Troubleshooting
I don't see any traces in LangWatch
- Authorization header: Ensure `Authorization: Bearer <your-langwatch-key>` is set under `otel.exporter.otlp.headers`.
- Endpoint URL: Confirm the endpoint is `https://app.langwatch.ai/api/otel` and the protocol is `http/protobuf`.
- Network egress: Verify your environment can reach LangWatch (egress/proxy/firewall settings).
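To rule out networking problems, a quick connectivity check from the host running your app (assuming `curl` is available) can help — any HTTP status code in the response, even an auth error, means egress to LangWatch works, while a timeout points to proxy or firewall settings:

```shell
# Prints the HTTP status code; a timeout or connection error indicates an egress problem.
curl -sS -o /dev/null -w "%{http_code}\n" https://app.langwatch.ai/api/otel
```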
Spring AI calls aren't producing spans
- Provider configuration: Ensure your Spring AI provider (e.g., OpenAI) is properly configured and invoked by your code.
- Sampling: Check OpenTelemetry sampling configuration if you’ve customized it; overly aggressive sampling can drop spans.
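If you have customized sampling, one way to restore full tracing while debugging is via the standard OpenTelemetry SDK property — the sampler names below are the standard SDK values:

```yaml
otel:
  traces:
    # Default sampler; keeps every span. Use a traceidratio sampler only
    # if you accept that some spans will be dropped.
    sampler: parentbased_always_on
```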
For a complete implementation showing controllers, Spring AI configuration, and OpenTelemetry setup, see the
full working example repository.