diff --git a/docs/examples/README.md b/docs/examples/README.md
index 75d0a589..3da8a6f2 100644
--- a/docs/examples/README.md
+++ b/docs/examples/README.md
@@ -41,7 +41,7 @@ Available Python examples:
 - [CLI Reference Agent](python/cli-reference-agent.md) - Example of Command-line reference agent implementation
 - [File Operations](python/file_operations.md) - Example of agent with file manipulation capabilities
 - [MCP Calculator](python/mcp_calculator.md) - Example of agent with Model Context Protocol capabilities
-- [Meta Tooling](python/meta_tooling.md) - Example of Agent with Meta tooling capabilities
+- [Meta Tooling](python/meta_tooling.md) - Example of agent with Meta tooling capabilities
 - [Multi-Agent Example](python/multi_agent_example/multi_agent_example.md) - Example of a multi-agent system
 - [Weather Forecaster](python/weather_forecaster.md) - Example of a weather forecasting agent with http_request capabilities

diff --git a/docs/examples/cdk/deploy_to_ec2/package.json b/docs/examples/cdk/deploy_to_ec2/package.json
index 8a971b77..9c6552e1 100644
--- a/docs/examples/cdk/deploy_to_ec2/package.json
+++ b/docs/examples/cdk/deploy_to_ec2/package.json
@@ -1,7 +1,7 @@
 {
   "name": "deploy_to_ec2",
   "version": "0.1.0",
-  "description": "CDK TypeScript project to deploy a sample Agent to EC2",
+  "description": "CDK TypeScript project to deploy a sample agent to EC2",
   "private": true,
   "bin": {
     "cdk-app": "bin/cdk-app.js"
diff --git a/docs/examples/cdk/deploy_to_fargate/package.json b/docs/examples/cdk/deploy_to_fargate/package.json
index 6c58b85b..94f273d2 100644
--- a/docs/examples/cdk/deploy_to_fargate/package.json
+++ b/docs/examples/cdk/deploy_to_fargate/package.json
@@ -1,7 +1,7 @@
 {
   "name": "deploy_to_lambda",
   "version": "0.1.0",
-  "description": "CDK TypeScript project to deploy a sample Agent Lambda function",
+  "description": "CDK TypeScript project to deploy a sample agent Lambda function",
   "private": true,
   "bin": {
     "cdk-app": "bin/cdk-app.js"
diff --git a/docs/examples/python/agents_workflows.md b/docs/examples/python/agents_workflows.md
index 51331e75..9e0ffce1 100644
--- a/docs/examples/python/agents_workflows.md
+++ b/docs/examples/python/agents_workflows.md
@@ -1,4 +1,4 @@
-# Agentic Workflow: Research Assistant - Multi-Agent Collaboration Example
+# Agentic workflow: Research Assistant - Multi-agent Collaboration Example
 
 This [example](https://github.com/strands-agents/docs/blob/main/docs/examples/python/agents_workflow.py) shows how to create a multi-agent workflow using Strands agents to perform web research, fact-checking, and report generation. It demonstrates specialized agent roles working together in sequence to process information.
 
@@ -21,9 +21,9 @@ The `http_request` tool enables the agent to make HTTP requests to retrieve info
 
 The Research Assistant example implements a three-agent workflow where each agent has a specific role and works with other agents to complete tasks that require multiple steps of processing:
 
-1. **Researcher Agent**: Gathers information from web sources using http_request tool
-2. **Analyst Agent**: Verifies facts and identifies key insights from research findings
-3. **Writer Agent**: Creates a final report based on the analysis
+1. **Researcher agent**: Gathers information from web sources using http_request tool
+2. **Analyst agent**: Verifies facts and identifies key insights from research findings
+3. **Writer agent**: Creates a final report based on the analysis
 
 ## Code Structure and Implementation
 
@@ -32,7 +32,7 @@ The Research Assistant example implements a three-agent workflow where each agen
 Each agent in the workflow is created with a system prompt that defines its role:
 
 ```python
-# Researcher Agent with web capabilities
+# Researcher agent with web capabilities
 researcher_agent = Agent(
     system_prompt=(
         "You are a Researcher Agent that gathers information from the web. "
@@ -44,7 +44,7 @@ researcher_agent = Agent(
     tools=[http_request]
 )
 
-# Analyst Agent for verification and insight extraction
+# Analyst agent for verification and insight extraction
 analyst_agent = Agent(
     callback_handler=None,
     system_prompt=(
@@ -55,7 +55,7 @@ analyst_agent = Agent(
     ),
 )
 
-# Writer Agent for final report creation
+# Writer agent for final report creation
 writer_agent = Agent(
     system_prompt=(
         "You are a Writer Agent that creates clear reports. "
@@ -72,19 +72,19 @@ The workflow is orchestrated through a function that passes information between
 
 ```python
 def run_research_workflow(user_input):
-    # Step 1: Researcher Agent gathers web information
+    # Step 1: Researcher agent gathers web information
     researcher_response = researcher_agent(
         f"Research: '{user_input}'. Use your available tools to gather information from reliable sources.",
     )
     research_findings = str(researcher_response)
 
-    # Step 2: Analyst Agent verifies facts
+    # Step 2: Analyst agent verifies facts
     analyst_response = analyst_agent(
         f"Analyze these findings about '{user_input}':\n\n{research_findings}",
     )
     analysis = str(analyst_response)
 
-    # Step 3: Writer Agent creates report
+    # Step 3: Writer agent creates report
     final_report = writer_agent(
         f"Create a report on '{user_input}' based on this analysis:\n\n{analysis}"
     )
@@ -94,12 +94,12 @@ def run_research_workflow(user_input):
 
 ### 3. Output Suppression
 
-The example suppresses intermediate outputs during the initialization of the agents, showing users only the final result from the `Writer Agent`:
+The example suppresses intermediate outputs during the initialization of the agents, showing users only the final result from the `Writer agent`:
 
 ```python
 researcher_agent = Agent(
     system_prompt=(
-        "You are a Researcher Agent that gathers information from the web. "
+        "You are a Researcher agent that gathers information from the web. "
         "1. Determine if the input is a research query or factual claim "
         "2. Use your research tools (http_request, retrieve) to find relevant information "
         "3. Include source URLs and keep findings under 500 words"
@@ -113,9 +113,9 @@ Without this suppression, the default [callback_handler](https://github.com/stra
 
 ```python
 print("\nProcessing: '{user_input}'")
-print("\nStep 1: Researcher Agent gathering web information...")
+print("\nStep 1: Researcher agent gathering web information...")
 print("Research complete")
-print("Passing research findings to Analyst Agent...\n")
+print("Passing research findings to Analyst agent...\n")
 ```
 
 ## Sample Queries and Responses
@@ -202,8 +202,8 @@
 Here are some ways to extend this agents workflow example:
 
 1. **Add User Feedback Loop**: Allow users to ask for more detail after receiving the report
-2. **Implement Parallel Research**: Modify the Researcher Agent to gather information from multiple sources simultaneously
-3. **Add Visual Content**: Enhance the Writer Agent to include images or charts in the report
+2. **Implement Parallel Research**: Modify the Researcher agent to gather information from multiple sources simultaneously
+3. **Add Visual Content**: Enhance the Writer agent to include images or charts in the report
 4. **Create a Web Interface**: Build a web UI for the workflow
 5. **Add Memory**: Implement session memory so the system remembers previous research sessions
 
diff --git a/docs/examples/python/mcp_calculator.md b/docs/examples/python/mcp_calculator.md
index 6bf087ca..09520ecd 100644
--- a/docs/examples/python/mcp_calculator.md
+++ b/docs/examples/python/mcp_calculator.md
@@ -36,7 +36,7 @@ def add(x: int, y: int) -> int:
 mcp.run(transport="streamable-http")
 ```
 
-### Now, connect the server to the Strands Agent
+### Now, connect the server to the Strands agent
 
 Now let's walk through how to connect a Strands agent to our MCP server:
 
diff --git a/docs/examples/python/memory_agent.md b/docs/examples/python/memory_agent.md
index e3035aae..eacf5178 100644
--- a/docs/examples/python/memory_agent.md
+++ b/docs/examples/python/memory_agent.md
@@ -8,7 +8,7 @@ This [example](https://github.com/strands-agents/docs/blob/main/docs/examples/py
 | ------------------ | ------------------------------------------ |
 | **Tools Used**     | mem0_memory, use_llm                        |
 | **Complexity**     | Intermediate                                |
-| **Agent Type**     | Single Agent with Memory Management         |
+| **Agent Type**     | Single agent with Memory Management         |
 | **Interaction**    | Command Line Interface                      |
 | **Key Focus**      | Memory Operations & Contextual Responses    |
 
diff --git a/docs/examples/python/meta_tooling.md b/docs/examples/python/meta_tooling.md
index 1fbbd3e0..4d37f01f 100644
--- a/docs/examples/python/meta_tooling.md
+++ b/docs/examples/python/meta_tooling.md
@@ -38,7 +38,7 @@ agent = Agent(
 ```
 
  - `editor`: Tool used to write code directly to a file named `"custom_tool_X.py"`, where "X" is the index of the tool being created.
- - `load_tool`: Tool used to load the tool so the Agent can use it.
+ - `load_tool`: Tool used to load the tool so the agent can use it.
  - `shell`: Tool used to execute the tool.
 
 #### 2. Agent System Prompt outlines a strict guideline for naming, structure, and creation of the new tools.
diff --git a/docs/examples/python/meta_tooling.py b/docs/examples/python/meta_tooling.py
index 1e759977..e7129ce8 100644
--- a/docs/examples/python/meta_tooling.py
+++ b/docs/examples/python/meta_tooling.py
@@ -5,7 +5,7 @@
 This example demonstrates Strands Agents' advanced meta-tooling capabilities - the ability of
 an agent to create, load, and use custom tools dynamically at runtime.
 
-It creates custom tools using the Agent's built-in tools for file operations and implicit tool calling.
+It creates custom tools using the agent's built-in tools for file operations and implicit tool calling.
""" import os diff --git a/docs/examples/python/weather_forecaster.md b/docs/examples/python/weather_forecaster.md index 1ead0d8e..e4813b9e 100644 --- a/docs/examples/python/weather_forecaster.md +++ b/docs/examples/python/weather_forecaster.md @@ -9,7 +9,7 @@ This [example](https://github.com/strands-agents/docs/blob/main/docs/examples/py | **Tool Used** | http_request | | **API** | National Weather Service API (no key required) | | **Complexity** | Beginner | -| **Agent Type** | Single Agent | +| **Agent Type** | Single agent | | **Interaction** | Command Line Interface | ## Tool Overview diff --git a/docs/user-guide/concepts/agents/prompts.md b/docs/user-guide/concepts/agents/prompts.md index 2eb5c60e..fba46d98 100644 --- a/docs/user-guide/concepts/agents/prompts.md +++ b/docs/user-guide/concepts/agents/prompts.md @@ -4,7 +4,7 @@ In the Strands Agents SDK, system prompts and user messages are the primary way ## System Prompts -System prompts provide high-level instructions to the model about its role, capabilities, and constraints. They set the foundation for how the model should behave throughout the conversation. You can specify the system prompt when initializing an Agent: +System prompts provide high-level instructions to the model about its role, capabilities, and constraints. They set the foundation for how the model should behave throughout the conversation. You can specify the system prompt when initializing an agent: ```python from strands import Agent diff --git a/docs/user-guide/concepts/experimental/agent-config.md b/docs/user-guide/concepts/experimental/agent-config.md index 7440fcdb..71194ff1 100644 --- a/docs/user-guide/concepts/experimental/agent-config.md +++ b/docs/user-guide/concepts/experimental/agent-config.md @@ -147,7 +147,7 @@ The `config_to_agent` function accepts: - `**kwargs`: Additional [Agent constructor parameters](../../../../api-reference/agent/#strands.agent.agent.Agent.__init__) that override config values ```python -# Override config values with valid Agent parameters +# Override config values with valid agent parameters agent = config_to_agent( "/path/to/config.json", name="Data Analyst" @@ -157,7 +157,7 @@ agent = config_to_agent( ## Best Practices 1. **Override when needed**: Use kwargs to override configuration values dynamically -2. **Leverage Agent defaults**: Only specify configuration values you want to override +2. **Leverage agent defaults**: Only specify configuration values you want to override 3. **Use standard tool formats**: Follow Agent class conventions for tool specifications 4. 
 4. **Handle errors gracefully**: Catch FileNotFoundError and JSONDecodeError for robust applications
 
diff --git a/docs/user-guide/concepts/multi-agent/agent-to-agent.md b/docs/user-guide/concepts/multi-agent/agent-to-agent.md
index f0b0625e..c0544b09 100644
--- a/docs/user-guide/concepts/multi-agent/agent-to-agent.md
+++ b/docs/user-guide/concepts/multi-agent/agent-to-agent.md
@@ -69,7 +69,7 @@ a2a_server.serve()
 
 The `A2AServer` constructor accepts several configuration options:
 
-- `agent`: The Strands Agent to wrap with A2A compatibility
+- `agent`: The Strands agent to wrap with A2A compatibility
 - `host`: Hostname or IP address to bind to (default: "127.0.0.1")
 - `port`: Port to bind to (default: 9000)
 - `version`: Version of the agent (default: "0.0.1")
diff --git a/docs/user-guide/concepts/multi-agent/graph.md b/docs/user-guide/concepts/multi-agent/graph.md
index f03625e6..82deaebd 100644
--- a/docs/user-guide/concepts/multi-agent/graph.md
+++ b/docs/user-guide/concepts/multi-agent/graph.md
@@ -137,7 +137,7 @@ def only_if_research_successful(state):
 builder.add_edge("research", "analysis", condition=only_if_research_successful)
 ```
 
-When multiple conditional edges converge on a single node, the target node executes as soon as any one of the incoming conditional edges is satisfied. The node doesn't wait for all predecessor nodes to complete, just the first one whose condition evaluates to true.
+When multiple conditional edges converge on a single node, the target node executes as soon as the condition of any one of the incoming conditional edges is satisfied. The node doesn't wait for all predecessor nodes to complete, just the first one whose condition evaluates to true.
 
 ## Nested Multi-Agent Patterns
 
diff --git a/docs/user-guide/concepts/tools/executors.md b/docs/user-guide/concepts/tools/executors.md
index 8b4d3c13..4b4353b1 100644
--- a/docs/user-guide/concepts/tools/executors.md
+++ b/docs/user-guide/concepts/tools/executors.md
@@ -23,7 +23,7 @@ Assuming the model returns `weather_tool` and `time_tool` use requests, the `Con
 
 ### Sequential Behavior
 
-On certain prompts, the model may decide to return one tool use request at a time. Under these circumstances, the tools will execute sequentially. Concurrency is only achieved if the model returns multiple tool use requests in a single response. Certain models however offer additional abilities to coherce a desired behavior. For example, Anthropic exposes an explicit parallel tool use setting ([docs](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/implement-tool-use#parallel-tool-use)).
+On certain prompts, the model may decide to return one tool use request at a time. Under these circumstances, the tools will execute sequentially. Concurrency is only achieved if the model returns multiple tool use requests in a single response. Certain models however offer additional abilities to coerce a desired behavior. For example, Anthropic exposes an explicit parallel tool use setting ([docs](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/implement-tool-use#parallel-tool-use)).
 
 ## Sequential Executor
 
diff --git a/docs/user-guide/deploy/operating-agents-in-production.md b/docs/user-guide/deploy/operating-agents-in-production.md
index 97e95e06..4da83729 100644
--- a/docs/user-guide/deploy/operating-agents-in-production.md
+++ b/docs/user-guide/deploy/operating-agents-in-production.md
@@ -133,10 +133,10 @@ Built-in guides are available for several AWS services:
 
 For production deployments, implement comprehensive monitoring:
 
-1. **Tool Execution Metrics**: Monitor execution time and error rates for each tool
-2. **Token Usage**: Track token consumption for cost optimization
-3. **Response Times**: Monitor end-to-end response times
-4. **Error Rates**: Track and alert on agent errors
+1. **Tool Execution Metrics**: Monitor execution time and error rates for each tool.
+2. **Token Usage**: Track token consumption for cost optimization.
+3. **Response Times**: Monitor end-to-end response times.
+4. **Error Rates**: Track and alert on agent errors.
 
 Consider integrating with AWS CloudWatch for metrics collection and alerting.
 
diff --git a/docs/user-guide/observability-evaluation/logs.md b/docs/user-guide/observability-evaluation/logs.md
index 2607744c..51b7f695 100644
--- a/docs/user-guide/observability-evaluation/logs.md
+++ b/docs/user-guide/observability-evaluation/logs.md
@@ -153,7 +153,7 @@ In addition to standard logging, Strands Agents SDK provides a callback system f
 - **Logging**: Internal operations, debugging, errors (not typically visible to end users)
 - **Callbacks**: User-facing output, streaming responses, tool execution notifications
 
-The callback system is configured through the `callback_handler` parameter when creating an Agent:
+The callback system is configured through the `callback_handler` parameter when creating an agent:
 
 ```python
 from strands.handlers.callback_handler import PrintingCallbackHandler
@@ -168,7 +168,7 @@ You can create custom callback handlers to process streaming events according to
 
 ## Best Practices
 
-1. **Configure Early**: Set up logging configuration before initializing the Agent
+1. **Configure Early**: Set up logging configuration before initializing the agent
 2. **Appropriate Levels**: Use INFO for normal operation and DEBUG for troubleshooting
 3. **Structured Log Format**: Use the structured log format shown in examples for better parsing
 4. **Performance**: Be mindful of logging overhead in production environments
diff --git a/docs/user-guide/observability-evaluation/observability.md b/docs/user-guide/observability-evaluation/observability.md
index ed552692..b8c7ac29 100644
--- a/docs/user-guide/observability-evaluation/observability.md
+++ b/docs/user-guide/observability-evaluation/observability.md
@@ -97,4 +97,4 @@ With these components in place, a continuous improvement flywheel emerges which
 
 ## Conclusion
 
-Effective observability is crucial for developing agents which reliably complete customers’ tasks. The key to success is treating observability not as an afterthought, but as a core component of agent engineering from day one. This investment will pay dividends in improved reliability, faster development cycles, and better customer experiences.
+Effective observability is crucial for developing agents that reliably complete customers’ tasks. The key to success is treating observability not as an afterthought, but as a core component of agent engineering from day one. This investment will pay dividends in improved reliability, faster development cycles, and better customer experiences.
diff --git a/docs/user-guide/safety-security/pii-redaction.md b/docs/user-guide/safety-security/pii-redaction.md
index ec35802e..58aa3873 100644
--- a/docs/user-guide/safety-security/pii-redaction.md
+++ b/docs/user-guide/safety-security/pii-redaction.md
@@ -85,7 +85,7 @@ print(result)
 langfuse.flush()
 ```
 
-#### Complete example with a Strands Agent
+#### Complete example with a Strands agent
 
 ```python
 from strands import Agent