Commit dcbfe63

fix(oss): genai tool calling example (#1367)
1 parent eff24ac commit dcbfe63

File tree

2 files changed: +19 -18 lines changed


src/oss/langchain/models.mdx

Lines changed: 3 additions & 3 deletions
@@ -576,11 +576,11 @@ sequenceDiagram
 :::

 :::python
-To make tools that you have defined available for use by a model, you must bind them using @[`bind_tools()`][BaseChatModel.bind_tools]. In subsequent invocations, the model can choose to call any of the bound tools as needed.
+To make tools that you have defined available for use by a model, you must bind them using @[`bind_tools`][BaseChatModel.bind_tools]. In subsequent invocations, the model can choose to call any of the bound tools as needed.
 :::

 :::js
-To make tools that you have defined available for use by a model, you must bind them using @[`bindTools()`][BaseChatModel.bindTools]. In subsequent invocations, the model can choose to call any of the bound tools as needed.
+To make tools that you have defined available for use by a model, you must bind them using @[`bindTools`][BaseChatModel.bindTools]. In subsequent invocations, the model can choose to call any of the bound tools as needed.
 :::

 Some model providers offer built-in tools that can be enabled via model or invocation parameters (e.g. [`ChatOpenAI`](/oss/integrations/chat/openai), [`ChatAnthropic`](/oss/integrations/chat/anthropic)). Check the respective [provider reference](/oss/integrations/providers/overview) for details.
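A minimal sketch of the `bind_tools` flow documented above, reusing the `ChatGoogleGenerativeAI` model and `@tool` decorator that appear in this commit's second file; the `get_weather` tool is illustrative:

```python
from langchain.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI


@tool
def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."


# bind_tools returns a new model object with the tool schemas attached;
# subsequent invocations may emit tool calls for any bound tool.
model_with_tools = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash-lite"
).bind_tools([get_weather])

ai_msg = model_with_tools.invoke("What's the weather in Boston?")
print(ai_msg.tool_calls)
# e.g. [{'name': 'get_weather', 'args': {'location': 'Boston'}, 'id': '...', 'type': 'tool_call'}]
```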
@@ -639,7 +639,7 @@ for (const tool_call of toolCalls) {
 ```
 :::

-When binding user-defined tools, the model's response includes a **request** to execute a tool. When using a model separately from an [agent](/oss/langchain/agents), it is up to you to perform the requested action and return the result back to the model for use in subsequent reasoning. Note that when using an [agent](/oss/langchain/agents), the agent loop will handle the tool execution loop for you.
+When binding user-defined tools, the model's response includes a **request** to execute a tool. When using a model separately from an [agent](/oss/langchain/agents), it is up to you to execute the requested tool and return the result back to the model for use in subsequent reasoning. When using an [agent](/oss/langchain/agents), the agent loop will handle the tool execution loop for you.

 Below, we show some common ways you can use tool calling.
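The manual execute-and-return loop that the revised paragraph describes looks roughly like this: a sketch reusing `model_with_tools` and `get_weather` from the snippet above, with an explicit `ToolMessage` linking each result to its request via `tool_call_id`:

```python
from langchain.messages import HumanMessage, ToolMessage

messages = [HumanMessage("What's the weather in Boston?")]
ai_msg = model_with_tools.invoke(messages)  # model emits a tool-call request
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    # Run the tool with the model-generated arguments and return the
    # result, tied back to the request through tool_call_id.
    messages.append(
        ToolMessage(
            content=get_weather.invoke(tool_call["args"]),
            tool_call_id=tool_call["id"],
        )
    )

final_response = model_with_tools.invoke(messages)  # model reasons over the result
```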

src/oss/python/integrations/chat/google_generative_ai.mdx

Lines changed: 16 additions & 15 deletions
@@ -260,6 +260,7 @@ You can equip the model with tools to call.

 ```python
 from langchain.tools import tool
+from langchain.messages import HumanMessage, ToolMessage
 from langchain_google_genai import ChatGoogleGenerativeAI

@@ -269,33 +270,33 @@ def get_weather(location: str) -> str:
     return "It's sunny."


-# Initialize the model and bind the tool
-llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash-lite")
-llm_with_tools = llm.bind_tools([get_weather])
+# Initialize and bind (potentially multiple) tools to the model
+model_with_tools = ChatGoogleGenerativeAI(model="gemini-2.5-flash-lite").bind_tools([get_weather])

-# Invoke the model with a query that should trigger the tool
-query = "What's the weather in San Francisco?"
-ai_msg = llm_with_tools.invoke(query)
+# Step 1: Model generates tool calls
+messages = [HumanMessage("What's the weather in Boston?")]
+ai_msg = model_with_tools.invoke(messages)
+messages.append(ai_msg)

 # Check the tool calls in the response
 print(ai_msg.tool_calls)

-# Example tool call message would be needed here if you were actually running the tool
-from langchain.messages import ToolMessage
+# Step 2: Execute tools and collect results
+for tool_call in ai_msg.tool_calls:
+    # Execute the tool with the generated arguments
+    tool_result = get_weather.invoke(tool_call)
+    messages.append(tool_result)

-tool_message = ToolMessage(
-    content=get_weather(*ai_msg.tool_calls[0]["args"]),
-    tool_call_id=ai_msg.tool_calls[0]["id"],
-)
-llm_with_tools.invoke([ai_msg, tool_message]) # Example of passing tool result back
+# Step 3: Pass results back to model for final response
+final_response = model_with_tools.invoke(messages)

 ```

 ```output
-[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'a6248087-74c5-4b7c-9250-f335e642927c', 'type': 'tool_call'}]
+[{'name': 'get_weather', 'args': {'location': 'Boston'}, 'id': 'fb91e46d-e3f7-445b-a62f-50ae024bcdac', 'type': 'tool_call'}]
 ```

 ```output
-AIMessage(content="OK. It's sunny in San Francisco.", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.5-flash-lite', 'safety_ratings': []}, id='run-ac5bb52c-e244-4c72-9fbc-fb2a9cd7a72e-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_read': 0}})
+AIMessage(content='The weather in Boston is sunny.', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.5-flash-lite', 'safety_ratings': [], 'model_provider': 'google_genai'}, id='lc_run--3fb38729-285b-4b43-aa3e-499cbc910544-0', usage_metadata={'input_tokens': 83, 'output_tokens': 7, 'total_tokens': 90, 'input_token_details': {'cache_read': 0}})
 ```

 ## Structured output
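One detail behind Step 2 in the new example: invoking a `@tool` with the whole tool-call dict (rather than just its `args`) returns a ready-made `ToolMessage`, which is why `tool_result` can be appended to `messages` directly. A quick check of that behavior, with an illustrative id:

```python
# What Step 2 produces: a ToolMessage carrying the tool output and the
# matching tool_call_id (the id below is illustrative, not from a real run).
tool_result = get_weather.invoke(
    {"name": "get_weather", "args": {"location": "Boston"},
     "id": "example-id", "type": "tool_call"}
)
print(type(tool_result).__name__)  # ToolMessage
print(tool_result.content)         # It's sunny.
print(tool_result.tool_call_id)    # example-id
```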
