
Commit 66fd0d3

Update structured output docs for model providers (#97)
* Update structured output docs for model providers
* Update structured output docs for model providers
* Update examples
* Refactor custom_model_provider a bit more
1 parent 0cc8634 commit 66fd0d3

File tree

7 files changed: +347 / -2 lines changed

docs/user-guide/concepts/model-providers/amazon-bedrock.md

Lines changed: 38 additions & 0 deletions
@@ -511,6 +511,44 @@ response = agent("If a train travels at 120 km/h and needs to cover 450 km, how
> **Note**: Not all models support structured reasoning output. Check the [inference reasoning documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-reasoning.html) for details on supported models.

### Structured Output

Amazon Bedrock models support structured output through their tool calling capabilities. When you use [`Agent.structured_output()`](../../../api-reference/agent.md#strands.agent.agent.Agent.structured_output), the Strands SDK converts your Pydantic models to Bedrock's tool specification format.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models import BedrockModel
from typing import List, Optional

class ProductAnalysis(BaseModel):
    """Analyze product information from text."""
    name: str = Field(description="Product name")
    category: str = Field(description="Product category")
    price: float = Field(description="Price in USD")
    features: List[str] = Field(description="Key product features")
    rating: Optional[float] = Field(description="Customer rating 1-5", ge=1, le=5)

bedrock_model = BedrockModel()

agent = Agent(model=bedrock_model)

result = agent.structured_output(
    ProductAnalysis,
    """
    Analyze this product: The UltraBook Pro is a premium laptop computer
    priced at $1,299. It features a 15-inch 4K display, 16GB RAM, 512GB SSD,
    and 12-hour battery life. Customer reviews average 4.5 stars.
    """
)

print(f"Product: {result.name}")
print(f"Category: {result.category}")
print(f"Price: ${result.price}")
print(f"Features: {result.features}")
print(f"Rating: {result.rating}")
```

## Troubleshooting

### Model access issue

docs/user-guide/concepts/model-providers/anthropic.md

Lines changed: 47 additions & 0 deletions
@@ -58,6 +58,53 @@ The `model_config` configures the underlying model selected for inference. The s
If you encounter the error `ModuleNotFoundError: No module named 'anthropic'`, this means you haven't installed the `anthropic` dependency in your environment. To fix, run `pip install 'strands-agents[anthropic]'`.

## Advanced Features

### Structured Output

Anthropic's Claude models support structured output through their tool calling capabilities. When you use [`Agent.structured_output()`](../../../api-reference/agent.md#strands.agent.agent.Agent.structured_output), the Strands SDK converts your Pydantic models to Anthropic's tool specification format.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models.anthropic import AnthropicModel

class BookAnalysis(BaseModel):
    """Analyze a book's key information."""
    title: str = Field(description="The book's title")
    author: str = Field(description="The book's author")
    genre: str = Field(description="Primary genre or category")
    summary: str = Field(description="Brief summary of the book")
    rating: int = Field(description="Rating from 1-10", ge=1, le=10)

model = AnthropicModel(
    client_args={
        "api_key": "<KEY>",
    },
    max_tokens=1028,
    model_id="claude-3-7-sonnet-20250219",
    params={
        "temperature": 0.7,
    }
)

agent = Agent(model=model)

result = agent.structured_output(
    BookAnalysis,
    """
    Analyze this book: "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
    It's a science fiction comedy about Arthur Dent's adventures through space
    after Earth is destroyed. It's widely considered a classic of humorous sci-fi.
    """
)

print(f"Title: {result.title}")
print(f"Author: {result.author}")
print(f"Genre: {result.genre}")
print(f"Rating: {result.rating}")
```

## References

- [API](../../../api-reference/models.md)

docs/user-guide/concepts/model-providers/custom_model_provider.md

Lines changed: 108 additions & 2 deletions
@@ -2,6 +2,40 @@
Strands Agents SDK provides an extensible interface for implementing custom model providers, allowing organizations to integrate their own LLM services while keeping implementation details private to their codebase.

## Model Provider Functionality

Custom model providers in Strands Agents support two primary interaction modes:

### Conversational Interaction
The standard conversational mode in which agents exchange messages with the model. This is the default interaction pattern used when you call an agent directly:

```python
agent = Agent(model=your_custom_model)
response = agent("Hello, how can you help me today?")
```

This invokes the underlying model provided to the agent.

### Structured Output
A specialized mode that returns type-safe, validated responses using [Pydantic](https://docs.pydantic.dev/latest/concepts/models/) models instead of raw text. This enables reliable data extraction and processing:

```python
from pydantic import BaseModel

class PersonInfo(BaseModel):
    name: str
    age: int
    occupation: str

result = agent.structured_output(
    PersonInfo,
    "Extract info: John Smith is a 30-year-old software engineer"
)
# Returns a validated PersonInfo object
```

Both modes work through the same underlying model provider interface, with structured output using tool calling capabilities to ensure schema compliance.
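To make the tool-calling mechanism concrete, here is a rough sketch of the idea rather than the SDK's actual conversion code: a Pydantic model is serialized into a JSON-schema tool specification that the provider receives, and the model's tool-call arguments are constrained to that schema. The `tool_spec` dictionary shape below is illustrative; the exact format varies by provider.

```python
from pydantic import BaseModel

class PersonInfo(BaseModel):
    name: str
    age: int
    occupation: str

# Pydantic emits a JSON schema for the model. A provider-specific tool
# specification is built from a schema like this, and the tool-call input
# the model produces is later validated against the same model.
tool_spec = {
    "name": "PersonInfo",
    "description": "Extract structured person information",
    "inputSchema": {"json": PersonInfo.model_json_schema()},
}

print(tool_spec["inputSchema"]["json"]["required"])  # ['name', 'age', 'occupation']
```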
## Model Provider Architecture

Strands Agents uses an abstract `Model` class that defines the standard interface all model providers must implement:
@@ -254,9 +288,58 @@ Now that you have mapped the Strands Agents input to your models request, use th
        yield chunk
```

### 5. Structured Output Support

To support structured output in your custom model provider, implement a `structured_output()` method that invokes your model with a tool specification and parses the JSON tool input it returns into the requested Pydantic model. Below is an example of what this might look like for a Bedrock model: we invoke the model with a tool spec and check whether the response contains a `toolUse` block.

```python
T = TypeVar('T', bound=BaseModel)

@override
def structured_output(
    self, output_model: Type[T], prompt: Messages, callback_handler: Optional[Callable] = None
) -> T:
    """Get structured output using tool calling."""

    # Convert Pydantic model to tool specification
    tool_spec = convert_pydantic_to_tool_spec(output_model)

    # Use existing converse method with tool specification
    response = self.converse(messages=prompt, tool_specs=[tool_spec])

    # Process streaming response
    for event in process_stream(response, prompt):
        if callback_handler and "callback" in event:
            callback_handler(**event["callback"])
        else:
            stop_reason, messages, _, _ = event["stop"]

    # Validate tool use response
    if stop_reason != "tool_use":
        raise ValueError("No valid tool use found in the model response.")

    # Extract tool use output
    content = messages["content"]
    for block in content:
        if block.get("toolUse") and block["toolUse"]["name"] == tool_spec["name"]:
            return output_model(**block["toolUse"]["input"])

    raise ValueError("No valid tool use input found in the response.")
```

**Implementation Suggestions:**

1. **Tool Integration**: Use your existing `converse()` method with tool specifications to invoke your model
2. **Response Validation**: Use `output_model(**data)` to validate the response (a sketch of this pattern follows the list)
3. **Error Handling**: Provide clear error messages for parsing and validation failures
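The validation and error-handling suggestions can be combined into a small helper. This is a hedged sketch rather than SDK code: `output_model` is the Pydantic class passed to `structured_output()`, and `data` stands in for whatever dictionary your provider returned as the tool-call input.

```python
from typing import Any, Dict, Type, TypeVar

from pydantic import BaseModel, ValidationError

T = TypeVar("T", bound=BaseModel)

def parse_tool_output(output_model: Type[T], data: Dict[str, Any]) -> T:
    """Validate a provider's tool-call arguments against the requested Pydantic model."""
    try:
        # output_model(**data) raises ValidationError if fields are missing or mistyped
        return output_model(**data)
    except ValidationError as err:
        # Re-raise with a clear message so callers know the model's output
        # did not match the requested schema.
        raise ValueError(
            f"Model output failed {output_model.__name__} validation: {err}"
        ) from err
```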
For detailed structured output usage patterns, see the [Structured Output documentation](../agents/structured-output.md).

### 6. Use Your Custom Model Provider

Once implemented, you can use your custom model provider in your applications for regular agent invocation:

```python
from strands import Agent
```
@@ -280,6 +363,29 @@ agent = Agent(model=custom_model)
```python
response = agent("Hello, how are you today?")
```

Or you can use the `structured_output` feature to generate structured output:

```python
from strands import Agent
from your_org.models.custom_model import Model as CustomModel
from pydantic import BaseModel, Field

class PersonInfo(BaseModel):
    name: str = Field(description="Full name")
    age: int = Field(description="Age in years")
    occupation: str = Field(description="Job title")

model = CustomModel(api_key="key", model_id="model")

agent = Agent(model=model)

result = agent.structured_output(PersonInfo, "John Smith is a 30-year-old engineer.")

print(f"Name: {result.name}")
print(f"Age: {result.age}")
print(f"Occupation: {result.occupation}")
```

## Key Implementation Considerations

### 1. Message Formatting

docs/user-guide/concepts/model-providers/litellm.md

Lines changed: 40 additions & 0 deletions
@@ -57,6 +57,46 @@ The `model_config` configures the underlying model selected for inference. The s
If you encounter the error `ModuleNotFoundError: No module named 'litellm'`, this means you haven't installed the `litellm` dependency in your environment. To fix, run `pip install 'strands-agents[litellm]'`.

## Advanced Features

### Structured Output

LiteLLM supports structured output by proxying requests to underlying model providers that support tool calling. The availability of structured output depends on the specific model and provider you're using through LiteLLM.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models.litellm import LiteLLMModel

class BookAnalysis(BaseModel):
    """Analyze a book's key information."""
    title: str = Field(description="The book's title")
    author: str = Field(description="The book's author")
    genre: str = Field(description="Primary genre or category")
    summary: str = Field(description="Brief summary of the book")
    rating: int = Field(description="Rating from 1-10", ge=1, le=10)

model = LiteLLMModel(
    model_id="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
)

agent = Agent(model=model)

result = agent.structured_output(
    BookAnalysis,
    """
    Analyze this book: "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
    It's a science fiction comedy about Arthur Dent's adventures through space
    after Earth is destroyed. It's widely considered a classic of humorous sci-fi.
    """
)

print(f"Title: {result.title}")
print(f"Author: {result.author}")
print(f"Genre: {result.genre}")
print(f"Rating: {result.rating}")
```
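Because availability depends on the model LiteLLM routes to, a defensive call pattern can be useful. This is a minimal sketch under the assumption that an unsupported model surfaces as a raised exception; the exact exception type depends on the underlying provider and SDK version.

```python
try:
    result = agent.structured_output(
        BookAnalysis,
        "Analyze this book: \"The Hitchhiker's Guide to the Galaxy\" by Douglas Adams.",
    )
    print(f"Title: {result.title}")
except Exception as err:  # exact exception type varies by provider/model
    # Fall back to a plain conversational call if the routed model
    # does not support tool calling / structured output.
    print(f"Structured output unavailable: {err}")
    response = agent("Summarize \"The Hitchhiker's Guide to the Galaxy\" in one paragraph.")
```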
## References

- [API](../../../api-reference/models.md)

docs/user-guide/concepts/model-providers/llamaapi.md

Lines changed: 41 additions & 0 deletions
@@ -63,6 +63,47 @@ The `model_config` configures the underlying model selected for inference. The s
If you encounter the error `ModuleNotFoundError: No module named 'llamaapi'`, this means you haven't installed the `llamaapi` dependency in your environment. To fix, run `pip install 'strands-agents[llamaapi]'`.

## Advanced Features

### Structured Output

Llama API models support structured output through their tool calling capabilities. When you use [`Agent.structured_output()`](../../../api-reference/agent.md#strands.agent.agent.Agent.structured_output), the Strands SDK converts your Pydantic models to tool specifications that Llama models can understand.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models.llamaapi import LlamaAPIModel

class BookAnalysis(BaseModel):
    """Analyze a book's key information."""
    title: str = Field(description="The book's title")
    author: str = Field(description="The book's author")
    genre: str = Field(description="Primary genre or category")
    summary: str = Field(description="Brief summary of the book")
    rating: int = Field(description="Rating from 1-10", ge=1, le=10)

model = LlamaAPIModel(
    client_args={"api_key": "<KEY>"},
    model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)

agent = Agent(model=model)

result = agent.structured_output(
    BookAnalysis,
    """
    Analyze this book: "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
    It's a science fiction comedy about Arthur Dent's adventures through space
    after Earth is destroyed. It's widely considered a classic of humorous sci-fi.
    """
)

print(f"Title: {result.title}")
print(f"Author: {result.author}")
print(f"Genre: {result.genre}")
print(f"Rating: {result.rating}")
```

## References

- [API](../../../api-reference/models.md)

docs/user-guide/concepts/model-providers/ollama.md

Lines changed: 39 additions & 0 deletions
@@ -191,6 +191,45 @@ creative_agent = Agent(model=creative_model)
```python
factual_agent = Agent(model=factual_model)
```

### Structured Output

Ollama supports structured output for models that have tool calling capabilities. When you use [`Agent.structured_output()`](../../../api-reference/agent.md#strands.agent.agent.Agent.structured_output), the Strands SDK converts your Pydantic models to tool specifications that compatible Ollama models can understand.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models.ollama import OllamaModel

class BookAnalysis(BaseModel):
    """Analyze a book's key information."""
    title: str = Field(description="The book's title")
    author: str = Field(description="The book's author")
    genre: str = Field(description="Primary genre or category")
    summary: str = Field(description="Brief summary of the book")
    rating: int = Field(description="Rating from 1-10", ge=1, le=10)

ollama_model = OllamaModel(
    host="http://localhost:11434",
    model_id="llama3",
)

agent = Agent(model=ollama_model)

result = agent.structured_output(
    BookAnalysis,
    """
    Analyze this book: "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
    It's a science fiction comedy about Arthur Dent's adventures through space
    after Earth is destroyed. It's widely considered a classic of humorous sci-fi.
    """
)

print(f"Title: {result.title}")
print(f"Author: {result.author}")
print(f"Genre: {result.genre}")
print(f"Rating: {result.rating}")
```

## Tool Support

[Ollama models that support tool use](https://ollama.com/search?c=tools) can use tools through Strands's tool system:
