Commit 49c1cb1

shah-siddd authored and gustavocidornelas committed
feat: integrated litellm for tracing
1 parent d4fe26f commit 49c1cb1

File tree

5 files changed: +1198, -0 lines

examples/tracing/litellm/litellm_tracing.ipynb

Lines changed: 182 additions & 0 deletions
@@ -0,0 +1,182 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openlayer-ai/openlayer-python/blob/main/examples/tracing/litellm/litellm_tracing.ipynb)\n",
+    "\n",
+    "\n",
+    "# <a id=\"top\">LiteLLM monitoring quickstart</a>\n",
+    "\n",
+    "This notebook illustrates how to get started monitoring LiteLLM completions with Openlayer.\n",
+    "\n",
+    "LiteLLM provides a unified interface to call 100+ LLM APIs using the same input/output format. This integration allows you to trace and monitor completions across all supported providers through a single interface.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!pip install openlayer litellm\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 1. Set the environment variables\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "\n",
+    "import litellm\n",
+    "\n",
+    "# Set your API keys for the providers you want to use\n",
+    "os.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY_HERE\"\n",
+    "os.environ[\"ANTHROPIC_API_KEY\"] = \"YOUR_ANTHROPIC_API_KEY_HERE\"  # Optional\n",
+    "os.environ[\"GROQ_API_KEY\"] = \"YOUR_GROQ_API_KEY_HERE\"  # Optional\n",
+    "\n",
+    "# Openlayer env variables\n",
+    "os.environ[\"OPENLAYER_API_KEY\"] = \"YOUR_OPENLAYER_API_KEY_HERE\"\n",
+    "os.environ[\"OPENLAYER_INFERENCE_PIPELINE_ID\"] = \"YOUR_OPENLAYER_INFERENCE_PIPELINE_ID_HERE\"\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 2. Enable LiteLLM tracing\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from openlayer.lib import trace_litellm\n",
+    "\n",
+    "# Enable tracing for all LiteLLM completions\n",
+    "trace_litellm()\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 3. Use LiteLLM normally - tracing happens automatically!\n",
+    "\n",
+    "### Basic completion with OpenAI\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Basic completion with OpenAI GPT-4\n",
+    "response = litellm.completion(\n",
+    "    model=\"gpt-4\",\n",
+    "    messages=[\n",
+    "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
+    "        {\"role\": \"user\", \"content\": \"What is the capital of France?\"}\n",
+    "    ],\n",
+    "    temperature=0.7,\n",
+    "    max_tokens=100,\n",
+    "    inference_id=\"litellm-openai-example-1\"  # Optional: custom inference ID\n",
+    ")\n",
+    "\n",
+    "print(f\"Response: {response.choices[0].message.content}\")\n",
+    "print(f\"Model used: {response.model}\")\n",
+    "print(f\"Tokens used: {response.usage.total_tokens}\")\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Multi-provider comparison\n",
+    "\n",
+    "One of LiteLLM's key features is the ability to switch easily between providers. Let's trace completions from different providers:\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Test the same prompt with different models/providers\n",
+    "prompt = \"Explain quantum computing in simple terms.\"\n",
+    "messages = [{\"role\": \"user\", \"content\": prompt}]\n",
+    "\n",
+    "models_to_test = [\n",
+    "    \"gpt-3.5-turbo\",  # OpenAI\n",
+    "    \"claude-3-haiku-20240307\",  # Anthropic (if API key is set)\n",
+    "    \"groq/llama-3.1-8b-instant\",  # Groq (if API key is set)\n",
+    "]\n",
+    "\n",
+    "for model in models_to_test:\n",
+    "    try:\n",
+    "        print(f\"\\n--- Testing {model} ---\")\n",
+    "        response = litellm.completion(\n",
+    "            model=model,\n",
+    "            messages=messages,\n",
+    "            temperature=0.5,\n",
+    "            max_tokens=150,\n",
+    "            inference_id=f\"multi-provider-{model.replace('/', '-')}\"\n",
+    "        )\n",
+    "\n",
+    "        print(f\"Model: {response.model}\")\n",
+    "        print(f\"Response: {response.choices[0].message.content[:200]}...\")\n",
+    "        print(f\"Tokens: {response.usage.total_tokens}\")\n",
+    "\n",
+    "    except Exception as e:\n",
+    "        print(f\"Failed to test {model}: {e}\")\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 4. View your traces\n",
+    "\n",
+    "Once you've run the examples above, you can:\n",
+    "\n",
+    "1. **Visit your Openlayer dashboard** to see all the traced completions\n",
+    "2. **Analyze performance** across different models and providers\n",
+    "3. **Monitor costs** and token usage\n",
+    "4. **Debug issues** with detailed request/response logs\n",
+    "5. **Compare models** side by side\n",
+    "\n",
+    "The traces will include:\n",
+    "- **Request details**: Model, parameters, messages\n",
+    "- **Response data**: Generated content, token counts, latency\n",
+    "- **Provider information**: Which underlying service was used\n",
+    "- **Custom metadata**: Any additional context you provide\n",
+    "\n",
+    "For more information, check out:\n",
+    "- [Openlayer Documentation](https://docs.openlayer.com/)\n",
+    "- [LiteLLM Documentation](https://docs.litellm.ai/)\n",
+    "- [LiteLLM Supported Models](https://docs.litellm.ai/docs/providers)\n"
+   ]
+  }
+ ],
+ "metadata": {
+  "language_info": {
+   "name": "python"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
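
For quick reference, the notebook's flow condenses to a short standalone script. This is a sketch that uses only the calls demonstrated above; the placeholder keys and the example inference_id are stand-ins to replace with real values.

    # Condensed sketch of the notebook above (replace the placeholder
    # keys with real values before running).
    import os

    import litellm

    from openlayer.lib import trace_litellm

    # Provider and Openlayer credentials
    os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY_HERE"
    os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
    os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"] = "YOUR_OPENLAYER_INFERENCE_PIPELINE_ID_HERE"

    # Patch litellm once, before any completions are made
    trace_litellm()

    # From here on, completions are traced automatically
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
        inference_id="quickstart-example-1",  # optional Openlayer parameter
    )
    print(response.choices[0].message.content)

Because trace_litellm() patches litellm globally, any later litellm.completion call in the same process is traced without further code changes.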

src/openlayer/lib/__init__.py

Lines changed: 35 additions & 0 deletions
@@ -13,6 +13,7 @@
     "trace_bedrock",
     "trace_oci_genai",
     "trace_oci",  # Alias for backward compatibility
+    "trace_litellm",
     "update_current_trace",
     "update_current_step",
     # User and session context functions
@@ -156,3 +157,37 @@ def trace_oci_genai(client, estimate_tokens: bool = True):
 # --------------------------------- OCI GenAI -------------------------------- #
 # Alias for backward compatibility
 trace_oci = trace_oci_genai
+
+
+# --------------------------------- LiteLLM ---------------------------------- #
+def trace_litellm():
+    """Enable tracing for LiteLLM completions.
+
+    This function patches litellm.completion to automatically trace all completions
+    made through the LiteLLM library, which provides a unified interface to 100+ LLM APIs.
+
+    Example:
+        >>> import litellm
+        >>> from openlayer.lib import trace_litellm
+        >>>
+        >>> # Enable tracing
+        >>> trace_litellm()
+        >>>
+        >>> # Use LiteLLM normally - tracing happens automatically
+        >>> response = litellm.completion(
+        ...     model="gpt-3.5-turbo",
+        ...     messages=[{"role": "user", "content": "Hello!"}],
+        ...     inference_id="custom-id-123"  # Optional Openlayer parameter
+        ... )
+    """
+    # pylint: disable=import-outside-toplevel
+    try:
+        import litellm
+    except ImportError:
+        raise ImportError(
+            "litellm is required for LiteLLM tracing. Install with: pip install litellm"
+        )
+
+    from .integrations import litellm_tracer
+
+    return litellm_tracer.trace_litellm()
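
The actual patching logic lives in openlayer.lib.integrations.litellm_tracer, one of the five changed files but not shown in this view. As a rough illustration of the approach the docstring describes, a wrapper along the following lines would time each call and capture the Openlayer-specific inference_id before delegating to the original litellm.completion. This is a hypothetical sketch, not the shipped implementation; the names _original_completion and _traced_completion are illustrative, and the real tracer reports to Openlayer rather than printing.

    # Hypothetical sketch of monkey-patching litellm.completion for tracing.
    # Assumes only that litellm exposes a module-level completion() function.
    import time

    import litellm

    _original_completion = litellm.completion


    def _traced_completion(*args, **kwargs):
        """Time the call and capture the optional Openlayer inference_id."""
        inference_id = kwargs.pop("inference_id", None)  # Openlayer-only kwarg
        start = time.perf_counter()
        response = _original_completion(*args, **kwargs)
        latency_ms = (time.perf_counter() - start) * 1000
        # The real tracer would send the model, messages, response, token
        # usage, latency, and inference_id to Openlayer instead of printing.
        model = kwargs.get("model", args[0] if args else "?")
        print(f"traced {model} in {latency_ms:.0f} ms (inference_id={inference_id})")
        return response


    litellm.completion = _traced_completion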

0 commit comments
