12 changes: 12 additions & 0 deletions agent-framework/user-guide/agents/agent-observability.md
@@ -332,6 +332,18 @@ This trace shows:

Our repository contains a number of samples that demonstrate these capabilities; see the [observability samples folder](https://github.com/microsoft/agent-framework/tree/main/python/samples/getting_started/observability) on GitHub, which also includes samples for zero-code telemetry.

## Third-party observability integrations

Traces generated by Agent Framework can be exported to any OpenTelemetry-compatible backend of your choice.

### MLflow

[MLflow](https://mlflow.org/) is a popular open-source platform that provides observability and reproducibility for LLM applications. Agent Framework can export traces to MLflow's OTLP endpoint to keep a durable record of workflow runs, inputs/outputs, and derived metrics.

![MLflow Traces](https://mlflow.org/docs/latest/images/llms/tracing/microsoft-agent-framework-tracing.png)

See [MLflow Microsoft Agent Framework integration](https://mlflow.org/docs/latest/genai/tracing/integrations/listing/microsoft-agent-framework/) for how to set up MLflow to collect traces from Agent Framework.
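
If you prefer to wire up the export yourself rather than follow the guide, the sketch below shows one way to point the OpenTelemetry Python SDK at a local MLflow Tracking Server. This is a minimal sketch, not the definitive setup: the `/v1/traces` path and the `x-mlflow-experiment-id` header are assumptions about MLflow's OTLP ingestion, so verify both against the MLflow guide linked above.

```python
# Minimal sketch: route OpenTelemetry spans to MLflow over OTLP/HTTP.
# Assumptions (verify in the MLflow guide): the Tracking Server runs at
# http://127.0.0.1:5000, ingests OTLP at /v1/traces, and routes spans to
# an experiment via the x-mlflow-experiment-id header.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="http://127.0.0.1:5000/v1/traces",
    headers={"x-mlflow-experiment-id": "0"},  # "0" is MLflow's default experiment
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Any Agent Framework code that emits OpenTelemetry spans from here on
# is exported to MLflow in batches.
```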

::: zone-end

## Next steps
12 changes: 12 additions & 0 deletions agent-framework/user-guide/workflows/observability.md
@@ -49,6 +49,18 @@ For example:

![Span Relationships](./resources/images/workflow-trace.png)

## Third-party observability integrations

Traces generated by Agent Framework can be exported to any OpenTelemetry-compatible backend of your choice.

### MLflow

[MLflow](https://mlflow.org/) is a popular open-source platform that provides observability and reproducibility for LLM applications. Agent Framework can export traces to MLflow's OTLP endpoint to keep a durable record of workflow runs, inputs/outputs, and derived metrics.

![MLflow Traces](https://mlflow.org/docs/latest/images/llms/tracing/microsoft-agent-framework-tracing.png)

See [MLflow Microsoft Agent Framework integration](https://mlflow.org/docs/latest/genai/tracing/integrations/listing/microsoft-agent-framework/) for how to set up MLflow to collect traces from Agent Framework.
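
Because the export travels over standard OTLP, you can often configure it without touching code, using the OpenTelemetry exporter environment variables. A hedged sketch, assuming a local MLflow Tracking Server ingesting OTLP at `/v1/traces` (confirm the endpoint path and header in the MLflow guide above):

```bash
# OTEL_EXPORTER_OTLP_TRACES_* are standard OpenTelemetry variables; the
# MLflow-specific values (path and header) are assumptions to verify.
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://127.0.0.1:5000/v1/traces"
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="x-mlflow-experiment-id=0"
```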

## Next steps

- [Learn how to use agents in workflows](./using-agents.md) to build intelligent workflows.
@@ -8,5 +8,7 @@
href: telemetry-with-aspire-dashboard.md
- name: 'Example: Azure AI Foundry Tracing'
href: telemetry-with-azure-ai-foundry-tracing.md
- name: 'Example: MLflow Tracing'
href: telemetry-with-mlflow.md
- name: 'Advanced telemetry with Semantic Kernel'
href: telemetry-advanced.md
@@ -73,6 +73,24 @@ Semantic Kernel follows the [OpenTelemetry Semantic Convention](https://opentele
> [!Note]
> Currently, the [Semantic Conventions for Generative AI](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/gen-ai/README.md) are in experimental status. Semantic Kernel strives to follow the OpenTelemetry Semantic Convention as closely as possible and to provide a consistent and meaningful observability experience for AI solutions.

## Third-party observability integrations

Traces generated by Semantic Kernel can be exported to any OpenTelemetry-compatible backend of your choice.

### MLflow

[MLflow](https://mlflow.org/) is a popular open-source platform that provides observability and reproducibility for LLM applications. MLflow supports Semantic Kernel with a one-line automatic tracing setup.

```python
import mlflow

mlflow.semantic_kernel.autolog()
```

![MLflow Traces](https://mlflow.org/docs/latest/images/llms/tracing/semantic-kernel-tracing.png)

See [Telemetry with MLflow](telemetry-with-mlflow.md) for more details on how to set up MLflow to collect traces from Semantic Kernel.

## Next steps

Now that you have a basic understanding of observability in Semantic Kernel, you can learn more about how to output telemetry data to the console or use APM tools to visualize and analyze telemetry data.
@@ -0,0 +1,94 @@
---
title: Telemetry with MLflow Tracing
description: Collect Semantic Kernel traces in MLflow using autologging.
zone_pivot_groups: programming-languages
author: TaoChenOSU
ms.topic: conceptual
ms.author: taochen
ms.date: 11/25/2025
ms.service: semantic-kernel
---

# Inspection of telemetry data with MLflow

[MLflow](https://mlflow.org/) provides tracing for LLM applications and includes a built‑in integration for Microsoft Semantic Kernel. With a single line of code, you can capture spans from Semantic Kernel and browse them in the MLflow UI alongside parameters, metrics, and artifacts.

## Prerequisites

- Python 3.10, 3.11, or 3.12.
- An LLM provider. The example below uses Azure OpenAI chat completions.
- The MLflow UI or a Tracking Server (a local Tracking Server is shown below).

## Setup

::: zone pivot="programming-language-python"

### 1) Install packages

```bash
pip install semantic-kernel mlflow
```

### 2) Start the MLflow Tracking Server (local)

```bash
mlflow server --port 5000 --backend-store-uri sqlite:///mlflow.db
```

### 3) Create a simple Semantic Kernel script and enable MLflow autologging

Create `telemetry_mlflow_quickstart.py` with the content below and fill in the environment variables for your Azure OpenAI deployment.

```python
import os
import asyncio
import mlflow

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

# One-line enablement of MLflow tracing for Semantic Kernel
mlflow.semantic_kernel.autolog()

# Set the tracking URI and experiment name (optional)
mlflow.set_tracking_uri("http://127.0.0.1:5000")
mlflow.set_experiment("telemetry-mlflow-quickstart")


async def main():
    # Configure the kernel and add an Azure OpenAI chat service
    kernel = Kernel()
    kernel.add_service(AzureChatCompletion(
        api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
        endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
        deployment_name=os.environ.get("AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"),
    ))

    # Issue a simple prompt; MLflow records spans automatically
    answer = await kernel.invoke_prompt("Why is the sky blue in one sentence?")
    print(answer)


if __name__ == "__main__":
    asyncio.run(main())
```

Run the script:

```bash
python telemetry_mlflow_quickstart.py
```

### 4) Inspect traces in MLflow

Open the MLflow UI (default at `http://127.0.0.1:5000`). Navigate to the Traces view to see spans emitted by Semantic Kernel, including function execution and model calls.

![MLflow Traces](https://mlflow.org/docs/latest/images/llms/tracing/semantic-kernel-tracing.png)
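
If you want to pull traces out for offline analysis rather than browse them in the UI, recent MLflow releases expose a fluent search API. A minimal sketch; the returned DataFrame's column layout varies across MLflow versions, so inspect the columns before relying on specific field names:

```python
import mlflow

# Point at the same server and experiment used by the quickstart script.
mlflow.set_tracking_uri("http://127.0.0.1:5000")
mlflow.set_experiment("telemetry-mlflow-quickstart")

# Returns a pandas DataFrame of traces (requires pandas to be installed).
traces = mlflow.search_traces(max_results=5)
print(traces.columns.tolist())  # column names differ across MLflow versions
print(traces.head())
```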

::: zone-end

## Next steps

- Explore the [Observability overview](./index.md) for additional exporters and patterns.
- Review [Advanced telemetry with Semantic Kernel](./telemetry-advanced.md) to customize signals and attributes.
- Visit [MLflow Semantic Kernel integration](https://mlflow.org/docs/latest/genai/tracing/integrations/listing/semantic-kernel/) for more detailed information on how to use MLflow with Semantic Kernel.