- Automatically instrument your code with LangWatch tracing for any framework (OpenAI, Agno, Mastra, DSPy, and more)
- Create and manage prompts using LangWatch’s prompt management system
- Set up evaluations to test and monitor your LLM outputs
- Debug production issues by retrieving and analyzing traces from your dashboard
- Add labels, metadata, and custom tracking following LangWatch best practices
Setup
1. Get your LangWatch API key
Get your API key from the LangWatch dashboard.
2. Configure MCP in your editor
Instructions vary slightly by editor (Cursor, Claude Code, and others). For Cursor:
- Open Cursor Settings
- Navigate to the Tools and MCP section in the sidebar
- Add the LangWatch MCP server:
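If your editor reads MCP servers from a JSON file (Cursor uses `mcp.json`), the entry might look like the sketch below. The command and package name (`npx` and `@langwatch/mcp-server`) are illustrative assumptions; use the exact server command from the LangWatch docs:

```json
{
  "mcpServers": {
    "langwatch": {
      "command": "npx",
      "args": ["-y", "@langwatch/mcp-server"],
      "env": {
        "LANGWATCH_API_KEY": "your-api-key-here"
      }
    }
  }
}
```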
3. Start using it
Open your AI assistant chat (e.g.,
Cmd/Ctrl + I in Cursor, or Cmd/Ctrl + Shift + P > “Claude Code: Open Chat” in Claude Code) and ask it to help with LangWatch tasks.
Usage Examples
Instrument Your Code with LangWatch
Simply ask your AI assistant to add LangWatch tracking to your existing code. It will:
- Fetch the relevant LangWatch documentation for your framework
- Add the necessary imports and setup code
- Wrap your functions with @langwatch.trace() decorators
- Configure automatic tracking for your LLM calls
- Add labels and metadata following best practices
Create Prompts with Prompt Management
Ask your AI assistant to set up prompt management for your project.
Debug Production Issues
When you encounter an issue in production, ask your AI to investigate:
- Retrieve recent traces from your LangWatch dashboard
- Analyze the spans and identify problematic steps
- Suggest fixes based on the trace data
- Update your code with the fixes

Set Up Evaluations
Ask your AI assistant to add evaluations to your LLM outputs.
Advanced: Self-Building AI Agents
The LangWatch MCP can even help AI agents instrument themselves while they are being built, enabling self-improving systems that track and debug their own behavior.
MCP Tools Reference
The MCP server provides the following tools that your AI assistant can use:
fetch_langwatch_docs
Fetches LangWatch documentation pages to understand how to implement features.
Parameters:
- url (optional): The full URL of a specific doc page. If not provided, fetches the docs index.
get_latest_traces
Retrieves the latest LLM traces from your LangWatch dashboard.
Parameters:
- pageOffset (optional): Page offset for pagination
- daysBackToSearch (optional): Number of days back to search. Defaults to 1.
get_trace_by_id
Retrieves a specific trace by its ID for detailed debugging.
Parameters:
- id: The trace ID to retrieve
list_traces_by_user_id
Lists traces filtered by user ID.
Parameters:
- userId: The user ID to filter by
- pageSize (optional): Number of traces per page
- pageOffset (optional): Page offset for pagination
- daysBackToSearch (optional): Number of days back to search
list_traces_by_thread_id
Lists traces filtered by thread/session ID.
Parameters:
- threadId: The thread/session ID to filter by
- pageSize (optional): Number of traces per page
- pageOffset (optional): Page offset for pagination
- daysBackToSearch (optional): Number of days back to search
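Each of these tools is invoked through a standard MCP `tools/call` JSON-RPC request. As an illustration, a call to `get_latest_traces` covering the last two days would look roughly like this (message shape per the MCP specification; the `id` and argument values are examples):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_latest_traces",
    "arguments": {
      "pageOffset": 0,
      "daysBackToSearch": 2
    }
  }
}
```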
Your AI assistant will automatically choose the right tools based on your request. You don’t need to call these tools manually.