
Commit fc754d8

Authored by bart0401 and mdrxy
Update retriever imports to use langchain_classic for v1 compatibility (#1196)
The `langchain.retrievers` module was removed in LangChain 1.0+. All retriever imports have been updated to use `langchain_classic`, as documented in the v1 migration guide. Fixes #1195

## Summary

Updated outdated retriever imports that don't work in LangChain 1.0+. Changed `from langchain.retrievers` to `from langchain_classic.retrievers` as per the v1 migration guide.

## Changes

31 files updated across integrations:

- 9 document_transformers files
- 7 retrievers files
- 12 providers files
- 2 vectorstores files
- 1 document_loaders file

Before:

```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.multi_query import MultiQueryRetriever
```

After:

```python
from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
from langchain_classic.retrievers.multi_query import MultiQueryRetriever
```

All changes follow the pattern in the [v1 migration guide](https://docs.langchain.com/oss/migrate/langchain-v1#langchain-classic).

Co-authored-by: bart0401 <bart0401@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
1 parent: 8368b2f · commit: fc754d8

31 files changed: +39 −39 lines
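The rewrite across all 31 files is mechanical, so it can be sketched as a small helper that maps the flat `langchain.retrievers` imports to their `langchain_classic` submodule paths. The `SUBMODULES` table below is an assumption covering only the classes touched in this commit (the full mapping in the v1 migration guide is broader), and `migrate_import` is an illustrative sketch, not the script actually used for this change.

```python
import re

# Partial mapping from flat `langchain.retrievers` class names to their
# `langchain_classic` submodule paths (assumption: only classes seen in
# this commit; the v1 migration guide lists the complete set).
SUBMODULES = {
    "ContextualCompressionRetriever": "contextual_compression",
    "MultiQueryRetriever": "multi_query",
    "SelfQueryRetriever": "self_query.base",
    "MultiVectorRetriever": "multi_vector",
}

def migrate_import(line: str) -> str:
    """Rewrite a `from langchain.retrievers...` import line for LangChain v1."""
    m = re.match(r"from langchain\.retrievers(\.[\w.]+)? import (\w+)(.*)", line)
    if not m:
        return line  # not a retriever import; leave untouched
    submodule, name, rest = m.groups()
    if submodule is None:
        # Flat import: look up the class's new submodule path.
        submodule = "." + SUBMODULES.get(name, name.lower())
    return f"from langchain_classic.retrievers{submodule} import {name}{rest}"
```

For example, `migrate_import("from langchain.retrievers import ContextualCompressionRetriever")` yields the `langchain_classic.retrievers.contextual_compression` form shown in the diffs below, while non-retriever imports pass through unchanged.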

src/oss/python/integrations/callbacks/uptrain.mdx

Lines changed: 3 additions & 3 deletions
```diff
@@ -62,9 +62,9 @@ NOTE: that you can also install `faiss-gpu` instead of `faiss-cpu` if you want t
 from getpass import getpass
 
 from langchain.chains import RetrievalQA
-from langchain.retrievers import ContextualCompressionRetriever
-from langchain.retrievers.document_compressors import FlashrankRerank
-from langchain.retrievers.multi_query import MultiQueryRetriever
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
+from langchain_classic.retrievers.document_compressors import FlashrankRerank
+from langchain_classic.retrievers.multi_query import MultiQueryRetriever
 from langchain_community.callbacks.uptrain_callback import UpTrainCallbackHandler
 from langchain_community.document_loaders import TextLoader
 from langchain_community.vectorstores import FAISS
```

src/oss/python/integrations/document_loaders/docugami.mdx

Lines changed: 2 additions & 2 deletions
````diff
@@ -204,7 +204,7 @@ We can use a self-querying retriever to improve our query accuracy, using this a
 
 ```python
 from langchain.chains.query_constructor.schema import AttributeInfo
-from langchain.retrievers.self_query.base import SelfQueryRetriever
+from langchain_classic.retrievers.self_query.base import SelfQueryRetriever
 from langchain_chroma import Chroma
 
 EXCLUDE_KEYS = ["id", "xpath", "structure"]
@@ -322,7 +322,7 @@ CHUNK 21b4d9517f7ccdc0e3a028ce5043a2a0: page_content='1.1 Landlord.\n <Landlord>
 ```
 
 ```python
-from langchain.retrievers.multi_vector import MultiVectorRetriever, SearchType
+from langchain_classic.retrievers.multi_vector import MultiVectorRetriever, SearchType
 from langchain.storage import InMemoryStore
 from langchain_chroma import Chroma
 from langchain_openai import OpenAIEmbeddings
````

src/oss/python/integrations/document_transformers/cross_encoder_reranker.mdx

Lines changed: 2 additions & 2 deletions
````diff
@@ -58,8 +58,8 @@ pretty_print_docs(docs)
 Now let's wrap our base retriever with a `ContextualCompressionRetriever`. `CrossEncoderReranker` uses `HuggingFaceCrossEncoder` to rerank the returned results.
 
 ```python
-from langchain.retrievers import ContextualCompressionRetriever
-from langchain.retrievers.document_compressors import CrossEncoderReranker
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
+from langchain_classic.retrievers.document_compressors import CrossEncoderReranker
 from langchain_community.cross_encoders import HuggingFaceCrossEncoder
 
 model = HuggingFaceCrossEncoder(model_name="BAAI/bge-reranker-base")
````

src/oss/python/integrations/document_transformers/dashscope_rerank.mdx

Lines changed: 1 addition & 1 deletion
````diff
@@ -263,7 +263,7 @@ And with an unwavering resolve that freedom will always triumph over tyranny.
 Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the `DashScopeRerank` to rerank the returned results.
 
 ```python
-from langchain.retrievers import ContextualCompressionRetriever
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
 from langchain_community.document_compressors.dashscope_rerank import DashScopeRerank
 
 compressor = DashScopeRerank()
````

src/oss/python/integrations/document_transformers/google_cloud_vertexai_rerank.mdx

Lines changed: 1 addition & 1 deletion
````diff
@@ -64,7 +64,7 @@ Your 1 documents have been split into 266 chunks
 
 ```python
 import pandas as pd
-from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
 from langchain_google_community.vertex_rank import VertexAIRank
 
 # Instantiate the VertexAIReranker with the SDK manager
````

src/oss/python/integrations/document_transformers/infinity_rerank.mdx

Lines changed: 1 addition & 1 deletion
````diff
@@ -280,7 +280,7 @@ Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll
 
 ```python
 from infinity_client import Client
-from langchain.retrievers import ContextualCompressionRetriever
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
 from langchain_community.document_compressors.infinity_rerank import InfinityRerank
 
 client = Client(base_url="http://localhost:7997")
````

src/oss/python/integrations/document_transformers/jina_rerank.mdx

Lines changed: 1 addition & 1 deletion
````diff
@@ -65,7 +65,7 @@ pretty_print_docs(docs)
 Now let's wrap our base retriever with a ContextualCompressionRetriever, using Jina Reranker as a compressor.
 
 ```python
-from langchain.retrievers import ContextualCompressionRetriever
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
 from langchain_community.document_compressors import JinaRerank
 
 compressor = JinaRerank()
````

src/oss/python/integrations/document_transformers/openvino_rerank.mdx

Lines changed: 1 addition & 1 deletion
````diff
@@ -291,7 +291,7 @@ Metadata: {'source': '../../how_to/state_of_the_union.txt', 'id': 40}
 Now let's wrap our base retriever with a `ContextualCompressionRetriever`, using `OpenVINOReranker` as a compressor.
 
 ```python
-from langchain.retrievers import ContextualCompressionRetriever
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
 from langchain_community.document_compressors.openvino_rerank import OpenVINOReranker
 
 model_name = "BAAI/bge-reranker-large"
````

src/oss/python/integrations/document_transformers/rankllm-reranker.mdx

Lines changed: 2 additions & 2 deletions
````diff
@@ -270,7 +270,7 @@ RankZephyr performs listwise reranking for improved retrieval quality but requir
 
 ```python
 import torch
-from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
 from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank
 
 torch.cuda.empty_cache()
@@ -567,7 +567,7 @@ One America.
 Retrieval + Reranking with RankGPT
 
 ```python
-from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
 from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank
 
 compressor = RankLLMRerank(top_n=3, model="gpt", gpt_model="gpt-4o-mini")
````

src/oss/python/integrations/document_transformers/volcengine_rerank.mdx

Lines changed: 1 addition & 1 deletion
````diff
@@ -295,7 +295,7 @@ To disable this warning, you can either:
 Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the `VolcengineRerank` to rerank the returned results.
 
 ```python
-from langchain.retrievers import ContextualCompressionRetriever
+from langchain_classic.retrievers.contextual_compression import ContextualCompressionRetriever
 from langchain_community.document_compressors.volcengine_rerank import VolcengineRerank
 
 compressor = VolcengineRerank()
````

0 commit comments

Comments
 (0)