2 changes: 1 addition & 1 deletion docs/examples/README.md
Original file line number Diff line number Diff line change
@@ -41,7 +41,7 @@ Available Python examples:
- [CLI Reference Agent](python/cli-reference-agent.md) - Example of Command-line reference agent implementation
- [File Operations](python/file_operations.md) - Example of agent with file manipulation capabilities
- [MCP Calculator](python/mcp_calculator.md) - Example of agent with Model Context Protocol capabilities
- [Meta Tooling](python/meta_tooling.md) - Example of Agent with Meta tooling capabilities
- [Meta Tooling](python/meta_tooling.md) - Example of agent with Meta tooling capabilities
- [Multi-Agent Example](python/multi_agent_example/multi_agent_example.md) - Example of a multi-agent system
- [Weather Forecaster](python/weather_forecaster.md) - Example of a weather forecasting agent with http_request capabilities

2 changes: 1 addition & 1 deletion docs/examples/cdk/deploy_to_ec2/package.json
@@ -1,7 +1,7 @@
{
"name": "deploy_to_ec2",
"version": "0.1.0",
"description": "CDK TypeScript project to deploy a sample Agent to EC2",
"description": "CDK TypeScript project to deploy a sample agent to EC2",
"private": true,
"bin": {
"cdk-app": "bin/cdk-app.js"
2 changes: 1 addition & 1 deletion docs/examples/cdk/deploy_to_fargate/package.json
@@ -1,7 +1,7 @@
{
"name": "deploy_to_lambda",
"version": "0.1.0",
"description": "CDK TypeScript project to deploy a sample Agent Lambda function",
"description": "CDK TypeScript project to deploy a sample agent Lambda function",
"private": true,
"bin": {
"cdk-app": "bin/cdk-app.js"
32 changes: 16 additions & 16 deletions docs/examples/python/agents_workflows.md
@@ -1,4 +1,4 @@
# Agentic Workflow: Research Assistant - Multi-Agent Collaboration Example
# Agentic workflow: Research Assistant - Multi-agent Collaboration Example

This [example](https://github.com/strands-agents/docs/blob/main/docs/examples/python/agents_workflow.py) shows how to create a multi-agent workflow using Strands agents to perform web research, fact-checking, and report generation. It demonstrates specialized agent roles working together in sequence to process information.

@@ -21,9 +21,9 @@ The `http_request` tool enables the agent to make HTTP requests to retrieve info

The Research Assistant example implements a three-agent workflow where each agent has a specific role and works with other agents to complete tasks that require multiple steps of processing:

1. **Researcher Agent**: Gathers information from web sources using http_request tool
2. **Analyst Agent**: Verifies facts and identifies key insights from research findings
3. **Writer Agent**: Creates a final report based on the analysis
1. **Researcher agent**: Gathers information from web sources using http_request tool
2. **Analyst agent**: Verifies facts and identifies key insights from research findings
3. **Writer agent**: Creates a final report based on the analysis

## Code Structure and Implementation

@@ -32,7 +32,7 @@ The Research Assistant example implements a three-agent workflow where each agen
Each agent in the workflow is created with a system prompt that defines its role:

```python
# Researcher Agent with web capabilities
# Researcher agent with web capabilities
researcher_agent = Agent(
system_prompt=(
"You are a Researcher Agent that gathers information from the web. "
@@ -44,7 +44,7 @@ researcher_agent = Agent(
tools=[http_request]
)

# Analyst Agent for verification and insight extraction
# Analyst agent for verification and insight extraction
analyst_agent = Agent(
callback_handler=None,
system_prompt=(
@@ -55,7 +55,7 @@ analyst_agent = Agent(
),
)

# Writer Agent for final report creation
# Writer agent for final report creation
writer_agent = Agent(
system_prompt=(
"You are a Writer Agent that creates clear reports. "
@@ -72,19 +72,19 @@ The workflow is orchestrated through a function that passes information between

```python
def run_research_workflow(user_input):
# Step 1: Researcher Agent gathers web information
# Step 1: Researcher agent gathers web information
researcher_response = researcher_agent(
f"Research: '{user_input}'. Use your available tools to gather information from reliable sources.",
)
research_findings = str(researcher_response)

# Step 2: Analyst Agent verifies facts
# Step 2: Analyst agent verifies facts
analyst_response = analyst_agent(
f"Analyze these findings about '{user_input}':\n\n{research_findings}",
)
analysis = str(analyst_response)

# Step 3: Writer Agent creates report
# Step 3: Writer agent creates report
final_report = writer_agent(
f"Create a report on '{user_input}' based on this analysis:\n\n{analysis}"
)
@@ -94,12 +94,12 @@ def run_research_workflow(user_input):

### 3. Output Suppression

The example suppresses intermediate outputs during the initialization of the agents, showing users only the final result from the `Writer Agent`:
The example suppresses intermediate outputs during the initialization of the agents, showing users only the final result from the `Writer agent`:

```python
researcher_agent = Agent(
system_prompt=(
"You are a Researcher Agent that gathers information from the web. "
"You are a Researcher agent that gathers information from the web. "
"1. Determine if the input is a research query or factual claim "
"2. Use your research tools (http_request, retrieve) to find relevant information "
"3. Include source URLs and keep findings under 500 words"
@@ -113,9 +113,9 @@ Without this suppression, the default [callback_handler](https://github.com/stra

```python
print("\nProcessing: '{user_input}'")
print("\nStep 1: Researcher Agent gathering web information...")
print("\nStep 1: Researcher agent gathering web information...")
print("Research complete")
print("Passing research findings to Analyst Agent...\n")
print("Passing research findings to Analyst agent...\n")
```

## Sample Queries and Responses
@@ -202,8 +202,8 @@ print("Passing research findings to Analyst Agent...\n")
Here are some ways to extend this agents workflow example:

1. **Add User Feedback Loop**: Allow users to ask for more detail after receiving the report
2. **Implement Parallel Research**: Modify the Researcher Agent to gather information from multiple sources simultaneously
3. **Add Visual Content**: Enhance the Writer Agent to include images or charts in the report
2. **Implement Parallel Research**: Modify the Researcher agent to gather information from multiple sources simultaneously
3. **Add Visual Content**: Enhance the Writer agent to include images or charts in the report
4. **Create a Web Interface**: Build a web UI for the workflow
5. **Add Memory**: Implement session memory so the system remembers previous research sessions

2 changes: 1 addition & 1 deletion docs/examples/python/mcp_calculator.md
@@ -36,7 +36,7 @@ def add(x: int, y: int) -> int:
mcp.run(transport="streamable-http")
```

### Now, connect the server to the Strands Agent
### Now, connect the server to the Strands agent

Now let's walk through how to connect a Strands agent to our MCP server:

2 changes: 1 addition & 1 deletion docs/examples/python/memory_agent.md
@@ -8,7 +8,7 @@ This [example](https://github.com/strands-agents/docs/blob/main/docs/examples/py
| ------------------ | ------------------------------------------ |
| **Tools Used** | mem0_memory, use_llm |
| **Complexity** | Intermediate |
| **Agent Type** | Single Agent with Memory Management |
| **Agent Type** | Single agent with Memory Management |
| **Interaction** | Command Line Interface |
| **Key Focus** | Memory Operations & Contextual Responses |

2 changes: 1 addition & 1 deletion docs/examples/python/meta_tooling.md
@@ -38,7 +38,7 @@ agent = Agent(
```

- `editor`: Tool used to write code directly to a file named `"custom_tool_X.py"`, where "X" is the index of the tool being created.
- `load_tool`: Tool used to load the tool so the Agent can use it.
- `load_tool`: Tool used to load the tool so the agent can use it.
- `shell`: Tool used to execute the tool.

#### 2. Agent System Prompt outlines a strict guideline for naming, structure, and creation of the new tools.
2 changes: 1 addition & 1 deletion docs/examples/python/meta_tooling.py
@@ -5,7 +5,7 @@
This example demonstrates Strands Agents' advanced meta-tooling capabilities - the ability of an agent
to create, load, and use custom tools dynamically at runtime.

It creates custom tools using the Agent's built-in tools for file operations and implicit tool calling.
It creates custom tools using the agent's built-in tools for file operations and implicit tool calling.
"""

import os
2 changes: 1 addition & 1 deletion docs/examples/python/weather_forecaster.md
@@ -9,7 +9,7 @@ This [example](https://github.com/strands-agents/docs/blob/main/docs/examples/py
| **Tool Used** | http_request |
| **API** | National Weather Service API (no key required) |
| **Complexity** | Beginner |
| **Agent Type** | Single Agent |
| **Agent Type** | Single agent |
| **Interaction** | Command Line Interface |

## Tool Overview
2 changes: 1 addition & 1 deletion docs/user-guide/concepts/agents/prompts.md
@@ -4,7 +4,7 @@ In the Strands Agents SDK, system prompts and user messages are the primary way

## System Prompts

System prompts provide high-level instructions to the model about its role, capabilities, and constraints. They set the foundation for how the model should behave throughout the conversation. You can specify the system prompt when initializing an Agent:
System prompts provide high-level instructions to the model about its role, capabilities, and constraints. They set the foundation for how the model should behave throughout the conversation. You can specify the system prompt when initializing an agent:

```python
from strands import Agent
4 changes: 2 additions & 2 deletions docs/user-guide/concepts/experimental/agent-config.md
@@ -147,7 +147,7 @@ The `config_to_agent` function accepts:
- `**kwargs`: Additional [Agent constructor parameters](../../../../api-reference/agent/#strands.agent.agent.Agent.__init__) that override config values

```python
# Override config values with valid Agent parameters
# Override config values with valid agent parameters
agent = config_to_agent(
"/path/to/config.json",
name="Data Analyst"
@@ -157,7 +157,7 @@ agent = config_to_agent(
## Best Practices

1. **Override when needed**: Use kwargs to override configuration values dynamically
2. **Leverage Agent defaults**: Only specify configuration values you want to override
2. **Leverage agent defaults**: Only specify configuration values you want to override
3. **Use standard tool formats**: Follow Agent class conventions for tool specifications
4. **Handle errors gracefully**: Catch FileNotFoundError and JSONDecodeError for robust applications

2 changes: 1 addition & 1 deletion docs/user-guide/concepts/multi-agent/agent-to-agent.md
@@ -69,7 +69,7 @@ a2a_server.serve()

The `A2AServer` constructor accepts several configuration options:

- `agent`: The Strands Agent to wrap with A2A compatibility
- `agent`: The Strands agent to wrap with A2A compatibility
- `host`: Hostname or IP address to bind to (default: "127.0.0.1")
- `port`: Port to bind to (default: 9000)
- `version`: Version of the agent (default: "0.0.1")
2 changes: 1 addition & 1 deletion docs/user-guide/concepts/multi-agent/graph.md
@@ -137,7 +137,7 @@ def only_if_research_successful(state):
builder.add_edge("research", "analysis", condition=only_if_research_successful)
```

When multiple conditional edges converge on a single node, the target node executes as soon as any one of the incoming conditional edges is satisfied. The node doesn't wait for all predecessor nodes to complete, just the first one whose condition evaluates to true.
When multiple conditional edges converge on a single node, the target node executes as soon as the condition of any one of the incoming conditional edges is satisfied. The node doesn't wait for all predecessor nodes to complete, just the first one whose condition evaluates to true.

## Nested Multi-Agent Patterns

2 changes: 1 addition & 1 deletion docs/user-guide/concepts/tools/executors.md
@@ -23,7 +23,7 @@ Assuming the model returns `weather_tool` and `time_tool` use requests, the `Con

### Sequential Behavior

On certain prompts, the model may decide to return one tool use request at a time. Under these circumstances, the tools will execute sequentially. Concurrency is only achieved if the model returns multiple tool use requests in a single response. Certain models however offer additional abilities to coherce a desired behavior. For example, Anthropic exposes an explicit parallel tool use setting ([docs](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/implement-tool-use#parallel-tool-use)).
On certain prompts, the model may decide to return one tool use request at a time. Under these circumstances, the tools will execute sequentially. Concurrency is only achieved if the model returns multiple tool use requests in a single response. Certain models however offer additional abilities to coerce a desired behavior. For example, Anthropic exposes an explicit parallel tool use setting ([docs](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/implement-tool-use#parallel-tool-use)).

## Sequential Executor

8 changes: 4 additions & 4 deletions docs/user-guide/deploy/operating-agents-in-production.md
@@ -133,10 +133,10 @@ Built-in guides are available for several AWS services:

For production deployments, implement comprehensive monitoring:

1. **Tool Execution Metrics**: Monitor execution time and error rates for each tool
2. **Token Usage**: Track token consumption for cost optimization
3. **Response Times**: Monitor end-to-end response times
4. **Error Rates**: Track and alert on agent errors
1. **Tool Execution Metrics**: Monitor execution time and error rates for each tool.
2. **Token Usage**: Track token consumption for cost optimization.
3. **Response Times**: Monitor end-to-end response times.
4. **Error Rates**: Track and alert on agent errors.

Consider integrating with AWS CloudWatch for metrics collection and alerting.

4 changes: 2 additions & 2 deletions docs/user-guide/observability-evaluation/logs.md
@@ -153,7 +153,7 @@ In addition to standard logging, Strands Agents SDK provides a callback system f
- **Logging**: Internal operations, debugging, errors (not typically visible to end users)
- **Callbacks**: User-facing output, streaming responses, tool execution notifications

The callback system is configured through the `callback_handler` parameter when creating an Agent:
The callback system is configured through the `callback_handler` parameter when creating an agent:

```python
from strands.handlers.callback_handler import PrintingCallbackHandler
@@ -168,7 +168,7 @@ You can create custom callback handlers to process streaming events according to

## Best Practices

1. **Configure Early**: Set up logging configuration before initializing the Agent
1. **Configure Early**: Set up logging configuration before initializing the agent
2. **Appropriate Levels**: Use INFO for normal operation and DEBUG for troubleshooting
3. **Structured Log Format**: Use the structured log format shown in examples for better parsing
4. **Performance**: Be mindful of logging overhead in production environments
2 changes: 1 addition & 1 deletion docs/user-guide/observability-evaluation/observability.md
@@ -97,4 +97,4 @@ With these components in place, a continuous improvement flywheel emerges which

## Conclusion

Effective observability is crucial for developing agents which reliably complete customers’ tasks. The key to success is treating observability not as an afterthought, but as a core component of agent engineering from day one. This investment will pay dividends in improved reliability, faster development cycles, and better customer experiences.
Effective observability is crucial for developing agents that reliably complete customers’ tasks. The key to success is treating observability not as an afterthought, but as a core component of agent engineering from day one. This investment will pay dividends in improved reliability, faster development cycles, and better customer experiences.
2 changes: 1 addition & 1 deletion docs/user-guide/safety-security/pii-redaction.md
@@ -85,7 +85,7 @@ print(result)
langfuse.flush()
```

#### Complete example with a Strands Agent
#### Complete example with a Strands agent

```python
from strands import Agent