 // creates cached content AFTER uploading is finished
 const cachedContent = await cacheManager.create({
-  model: "models/gemini-1.5-flash-001",
+  model: "models/gemini-2.5-flash",
   displayName: displayName,
   systemInstruction: "You are an expert video analyzer, and your job is to answer " +
     "the user's query based on the video file you have access to.",
@@ -594,7 +594,6 @@ await model.invoke("Summarize the video");
 **Note**

-- Context caching supports both Gemini 1.5 Pro and Gemini 1.5 Flash. Context caching is only available for stable models with fixed versions (for example, gemini-1.5-pro-001). You must include the version postfix (for example, the -001 in gemini-1.5-pro-001).
 - The minimum input token count for context caching is 32,768, and the maximum is the same as the maximum for the given model.
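To make the cached-content flow above concrete outside of a diff, here is a minimal Python sketch using the `google-generativeai` SDK's `caching` module. The file path, display name, and TTL are illustrative, and whether a given model supports caching depends on your SDK and model version:

```python
import datetime

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="...")  # placeholder key

# Upload the large file once; video files may need a moment to finish processing.
video_file = genai.upload_file(path="lecture.mp4")  # illustrative path

# Create the cached content AFTER uploading is finished, mirroring the JS snippet.
cache = caching.CachedContent.create(
    model="models/gemini-2.5-flash",  # must satisfy the 32,768-token minimum noted above
    display_name="video-analyzer-cache",
    system_instruction=(
        "You are an expert video analyzer, and your job is to answer "
        "the user's query based on the video file you have access to."
    ),
    contents=[video_file],
    ttl=datetime.timedelta(minutes=30),
)

# Later queries run against the cache instead of re-sending the video each time.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
print(model.generate_content("Summarize the video").text)
```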
File: src/oss/javascript/integrations/llms/google_vertex_ai.mdx (1 addition, 1 deletion)
@@ -11,7 +11,7 @@ You may be looking for [this page instead](/oss/integrations/chat/google_vertex_
 </Warning>

-[Google Vertex](https://cloud.google.com/vertex-ai) is a service that exposes all foundation models available in Google Cloud, like `gemini-1.5-pro`, `gemini-1.5-flash`, etc.
+[Google Vertex](https://cloud.google.com/vertex-ai) is a service that exposes all foundation models available in Google Cloud, like `gemini-2.5-pro`, `gemini-2.5-flash`, etc.

 This will help you get started with VertexAI completion models (LLMs) using LangChain. For detailed documentation on `VertexAI` features and configuration options, please refer to the [API reference](https://api.js.langchain.com/classes/langchain_google_vertexai.VertexAI.html).
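This page covers the JS client; for readers on the Python side, the analogous completion-model class lives in the `langchain-google-vertexai` package. A minimal sketch (the model name is illustrative, and Google Cloud credentials are assumed to be configured):

```python
from langchain_google_vertexai import VertexAI

# Assumes application-default credentials, e.g. via
# `gcloud auth application-default login`.
llm = VertexAI(model_name="gemini-2.5-flash")  # illustrative model name
print(llm.invoke("Tell me a fun fact about hummingbirds."))
```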
File: src/oss/javascript/integrations/providers/google.mdx (2 additions, 2 deletions)
@@ -9,7 +9,7 @@ and [AI Studio](https://aistudio.google.com/)
 ### Gemini Models

-Access Gemini models such as `gemini-1.5-pro` and `gemini-2.0-flex` through the [`ChatGoogleGenerativeAI`](/oss/integrations/chat/google_generative_ai),
+Access Gemini models such as `gemini-2.5-pro` and `gemini-2.0-flex` through the [`ChatGoogleGenerativeAI`](/oss/integrations/chat/google_generative_ai),
 or if using VertexAI, via the [`ChatVertexAI`](/oss/integrations/chat/google_vertex_ai) class.

 <Tip>
@@ -97,7 +97,7 @@ import { ChatVertexAI } from "@langchain/google-vertexai";
 // import { ChatVertexAI } from "@langchain/google-vertexai-web";
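For a Python view of the same two entry points, a sketch (class names from the `langchain-google-genai` and `langchain-google-vertexai` packages; credentials are assumed to be configured):

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_vertexai import ChatVertexAI

# Gemini Developer API route (assumes GOOGLE_API_KEY is set):
gemini = ChatGoogleGenerativeAI(model="gemini-2.5-pro")

# Vertex AI route (assumes Google Cloud application-default credentials):
vertex = ChatVertexAI(model="gemini-2.5-pro")

print(gemini.invoke("Hello!").content)
```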
File: src/oss/python/integrations/chat/anthropic.mdx (5 additions, 6 deletions)
@@ -1,9 +1,8 @@
 ---
 title: ChatAnthropic
+description: Get started using Anthropic [chat models](/oss/langchain/models) in LangChain.
 ---

-This guide provides a quick overview for getting started with Claude [chat models](/oss/langchain/models).
-
 You can find information about Anthropic's latest models, their costs, context windows, and supported input types in the [Claude](https://docs.claude.com/en/docs/about-claude/models/overview) docs.

 <Tip>
@@ -49,7 +48,7 @@ To access Anthropic (Claude) models you'll need to install the `langchain-anthro
 ### Credentials

-Head to [console.anthropic.com/](https://console.anthropic.com) to sign up for Anthropic and generate an API key. Once you've done this set the `ANTHROPIC_API_KEY` environment variable:
+Head to the [Claude console](https://console.anthropic.com) to sign up and generate a Claude API key. Once you've done this set the `ANTHROPIC_API_KEY` environment variable:

 ```python
 import getpass
@@ -145,7 +144,7 @@ response.content
 'type': 'tool_use'}]
 ```

-Using `content_blocks` will render the content in a standard format that is consistent across other model providers:
+Using `content_blocks` will render the content in a standard format that is consistent across other model providers. Read more about [content blocks](/oss/langchain/messages#standard-content-blocks).
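As a quick illustration of what that standard format gives you, a sketch (assumes a recent langchain-core where messages expose a `content_blocks` property; the model name is illustrative):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-sonnet-4-5")  # illustrative model name
response = llm.invoke("What is 3^3?")

# `response.content` is Anthropic-shaped; `response.content_blocks`
# normalizes the same data into provider-agnostic blocks.
for block in response.content_blocks:
    print(block["type"])
```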
-Anthropic supports [caching](https://docs.claude.com/en/docs/build-with-claude/prompt-caching) of [elements of your prompts](https://docs.claude.com/en/docs/build-with-claude/prompt-caching#what-can-be-cached), including messages, tool definitions, tool results, images and documents. This allows you to re-use large documents, instructions, [few-shot documents](/langsmith/create-few-shot-evaluators), and other data to reduce latency and costs.
+Anthropic supports [caching](https://docs.claude.com/en/docs/build-with-claude/prompt-caching) of elements of your prompts, including messages, tool definitions, tool results, images and documents. This allows you to re-use large documents, instructions, [few-shot documents](/langsmith/create-few-shot-evaluators), and other data to reduce latency and costs.

 To enable caching on an element of a prompt, mark its associated content block using the `cache_control` key. See examples below:
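To make the `cache_control` mechanics concrete, a minimal sketch (the model name and document text are placeholders):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-sonnet-4-5")  # illustrative model name

long_document = "..."  # imagine several thousand tokens of reference text

messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": f"Answer questions using this document:\n\n{long_document}",
                # Marks this block for Anthropic's prompt cache, so repeat
                # requests re-use it instead of re-processing it.
                "cache_control": {"type": "ephemeral"},
            }
        ],
    },
    {"role": "user", "content": "What does section 2 say?"},
]

response = llm.invoke(messages)
# Cache writes and hits are reported in the response's usage metadata.
print(response.usage_metadata)
```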
@@ -400,7 +399,7 @@ Second:
 ### Tools

-```python
+```python expandable
 from langchain_anthropic import convert_to_anthropic_tool
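For context on that import, a sketch of what `convert_to_anthropic_tool` produces, assuming it accepts a plain Python function the way `convert_to_openai_tool` does:

```python
from langchain_anthropic import convert_to_anthropic_tool

def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

# Produces a dict with `name`, `description`, and `input_schema` keys
# in the shape Anthropic's tool-use API expects.
tool_def = convert_to_anthropic_tool(get_weather)
print(tool_def["name"])  # "get_weather"
```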
File: src/oss/python/integrations/chat/azure_chat_openai.mdx (7 additions, 2 deletions)
@@ -1,9 +1,8 @@
 ---
 title: AzureChatOpenAI
+description: Get started using OpenAI [chat models](/oss/langchain/models) via Azure in LangChain.
 ---

-This guide provides a quick overview for getting started with OpenAI [chat models](/oss/langchain/models) on Azure.
-
 You can find information about Azure OpenAI's latest models and their costs, context windows, and supported input types in the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).

 <Info>
@@ -32,6 +31,12 @@ You can find information about Azure OpenAI's latest models and their costs, con
 features, or head to the @[`AzureChatOpenAI`] API reference.
 </Note>

+<Tip>
+**API Reference**
+
+For detailed documentation of all features and configuration options, head to the @[`AzureChatOpenAI`] API reference.
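For orientation, a minimal initialization sketch (the deployment name, API version, and endpoint are placeholders for your own Azure resource):

```python
import os

from langchain_openai import AzureChatOpenAI

os.environ["AZURE_OPENAI_API_KEY"] = "..."  # placeholder
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource>.openai.azure.com/"

llm = AzureChatOpenAI(
    azure_deployment="<your-deployment-name>",  # placeholder
    api_version="2024-10-21",  # pick a version your resource supports
)
print(llm.invoke("Hello!").content)
```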
File: src/oss/python/integrations/chat/deepseek.mdx (9 additions, 3 deletions)
@@ -1,12 +1,18 @@
 ---
 title: ChatDeepSeek
+description: Get started using DeepSeek [chat models](/oss/langchain/models) in LangChain.
 ---

-This will help you get started with DeepSeek's hosted [chat models](/oss/langchain/models). For detailed documentation of all ChatDeepSeek features and configurations head to the [API reference](https://python.langchain.com/api_reference/deepseek/chat_models/langchain_deepseek.chat_models.ChatDeepSeek.html).
+This will help you get started with DeepSeek's hosted [chat models](/oss/langchain/models).

 <Tip>
-**DeepSeek's models are open source and can be run locally (e.g. in [Ollama](./ollama.ipynb)) or on other inference providers (e.g. [Fireworks](./fireworks.ipynb), [Together](./together.ipynb)) as well.**
+**API Reference**
+
+For detailed documentation of all features and configuration options, head to the @[`ChatDeepSeek`] API reference.
+</Tip>
+
+<Tip>
+**DeepSeek's models are open source and can be run locally (e.g. in [Ollama](./ollama.ipynb)) or on other inference providers (e.g. [Fireworks](./fireworks.ipynb), [Together](./together.ipynb)) as well.**
 </Tip>

 ## Overview
@@ -15,7 +21,7 @@ This will help you get started with DeepSeek's hosted [chat models](/oss/langcha
 | Class | Package | Local | Serializable |[JS support](https://js.langchain.com/docs/integrations/chat/deepseek)| Downloads | Version |
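As a getting-started companion to this page, a minimal `ChatDeepSeek` sketch (API key value is a placeholder):

```python
import os

from langchain_deepseek import ChatDeepSeek

os.environ["DEEPSEEK_API_KEY"] = "..."  # placeholder

llm = ChatDeepSeek(model="deepseek-chat")  # "deepseek-reasoner" is the reasoning variant
print(llm.invoke("Hello!").content)
```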