`README.md`

Or open our intro notebook in Google Colab.
By default, DSPy installs the latest `openai` from pip. However, if you installed an older version from before OpenAI changed their API (`openai~=0.28.1`), the library will use that just fine. Both are supported.

For the optional (alphabetically sorted) [Chromadb](https://github.com/chroma-core/chroma), [Qdrant](https://github.com/qdrant/qdrant), [Marqo](https://github.com/marqo-ai/marqo), Pinecone, [Snowflake](https://github.com/snowflakedb/snowpark-python), [Weaviate](https://github.com/weaviate/weaviate),
or [Milvus](https://github.com/milvus-io/milvus) retrieval integration(s), include the extra(s) below:
```
pip install dspy-ai[chromadb] # or [qdrant] or [marqo] or [mongodb] or [pinecone] or [snowflake] or [weaviate] or [milvus]
```
## 2) Documentation
The DSPy documentation is divided into **tutorials** (step-by-step illustrations of solving a task), **guides** (how to use specific parts of the API), and **examples** (self-contained programs that illustrate usage).
- [DSPy talk at ScaleByTheBay Nov 2023](https://www.youtube.com/watch?v=Dt3H2ninoeY).
- [DSPy webinar with MLOps Learners](https://www.youtube.com/watch?v=im7bCLW2aM4), a bit longer with Q&A.
- Hands-on overviews of DSPy by the community: [DSPy Explained! by Connor Shorten](https://www.youtube.com/watch?v=41EfOY0Ldkc), [DSPy explained by code_your_own_ai](https://www.youtube.com/watch?v=ycfnKPxBMck), [DSPy Crash Course by AI Bites](https://youtu.be/5-zgASQKkKQ?si=3gnmVouT5_rpk_nu), [DSPy Paper Explained by Unify](https://youtu.be/kFB8kFchCH4?si=FuM6L5H5lweanckz)
- Interviews: [Weaviate Podcast in-person](https://www.youtube.com/watch?v=CDung1LnLbY), and you can find 6-7 other remote podcasts on YouTube from a few different perspectives/audiences.
- **Tracing in DSPy** with Arize Phoenix: [Tutorial for tracing your prompts and the steps of your DSPy programs](https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/tracing/dspy_tracing_tutorial.ipynb)
- [DSPy: Not Your Average Prompt Engineering](https://jina.ai/news/dspy-not-your-average-prompt-engineering), on why it's crucial for the future of prompt engineering, and why it can be challenging for prompt engineers to learn.

### `GroqLM` Constructor

The constructor initializes the base class `LM` and verifies the provided arguments, such as the `api_key` for the GROQ API. The `kwargs` attribute is initialized with default values for relevant text generation parameters needed for communicating with the GROQ API, such as `temperature`, `max_tokens`, `top_p`, `frequency_penalty`, `presence_penalty`, and `n`.
```python
class GroqLM(LM):
    ...  # constructor and methods elided in this excerpt
```

### Methods

#### `__call__(self, prompt: str, only_completed: bool = True, return_sorted: bool = False, **kwargs) -> List[Dict[str, Any]]`

Internally, the method handles the specifics of preparing the request prompt and corresponding payload to obtain the response.
After generation, the completed content can be accessed via `choice["message"]["content"]`.

**Parameters:**
- `prompt` (_str_): Prompt to send to GROQ.
- `only_completed` (_bool_, _optional_): Flag to return only completed responses and ignore completions truncated due to length. Defaults to True.
- `return_sorted` (_bool_, _optional_): Flag to sort the completion choices using the returned averaged log-probabilities. Defaults to False.
- `**kwargs`: Additional keyword arguments for the completion request.

**Returns:**
- `List[Dict[str, Any]]`: List of completion choices.
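
A minimal usage sketch follows, assuming `GroqLM` is exposed on the top-level `dspy` namespace and that you hold a valid GROQ API key; the model name is illustrative:

```python
import dspy

# Assumptions: GroqLM is importable from the top-level dspy namespace,
# and "mixtral-8x7b-32768" is a model available to your GROQ account.
lm = dspy.GroqLM(model="mixtral-8x7b-32768", api_key="your-groq-api-key")

# Make it the default LM for DSPy modules.
dspy.settings.configure(lm=lm)

# Calling the client directly returns its completion choices for the prompt.
completions = lm("What is the capital of France?")
print(completions)
```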

### `SnowflakeRM` Constructor

Initialize an instance of the `SnowflakeRM` class, with the option to use `e5-base-v2` or `snowflake-arctic-embed-m` embeddings, or any other Snowflake Cortex supported embeddings model.
```python
SnowflakeRM(
    snowflake_table_name: str,
    snowflake_credentials: dict,
    k: int = 3,
    embeddings_field: str,
    embeddings_text_field: str,
    embeddings_model: str = "e5-base-v2",
)
```
**Parameters:**
- `snowflake_table_name (str)`: The name of the Snowflake table containing embeddings.
- `snowflake_credentials (dict)`: The connection parameters needed to initialize a Snowflake Snowpark Session.
- `k (int, optional)`: The number of top passages to retrieve. Defaults to 3.
- `embeddings_field (str)`: The name of the column in the Snowflake table containing the embeddings.
- `embeddings_text_field (str)`: The name of the column in the Snowflake table containing the passages.
- `embeddings_model (str)`: The model to be used to convert text to embeddings. Defaults to `e5-base-v2`.

### Methods

#### `forward(query_or_queries: Union[str, List[str]], k: Optional[int] = None) -> dspy.Prediction`

Search the Snowflake table for the top `k` passages matching the given query or queries, using embeddings generated via the default `e5-base-v2` model or the specified `embeddings_model`.

**Parameters:**
- `query_or_queries` (_Union[str, List[str]]_): The query or list of queries to search for.
- `k` (_Optional[int]_, _optional_): The number of results to retrieve. If not specified, defaults to the value set during initialization.

**Returns:**
- `dspy.Prediction`: Contains the retrieved passages, each represented as a `dotdict` with schema `[{"id": str, "score": float, "long_text": str, "metadatas": dict}]`.
### Quickstart
To support passage retrieval, SnowflakeRM assumes that a Snowflake table has been created and populated with the passages in the column `embeddings_text_field` and their embeddings in the column `embeddings_field`.

SnowflakeRM uses the `e5-base-v2` embeddings model by default, but any Snowflake Cortex supported embeddings model can be specified.
#### Default `e5-base-v2` Embeddings
```python
from dspy.retrieve.snowflake_rm import SnowflakeRM
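
# The lines below complete the truncated quickstart as a sketch: the
# connection parameters, table name, and column names are placeholders
# you must adapt to your own Snowflake setup.
connection_parameters = {
    "account": "<your_account>",
    "user": "<your_user>",
    "password": "<your_password>",
    "role": "<your_role>",
    "warehouse": "<your_warehouse>",
    "database": "<your_database>",
    "schema": "<your_schema>",
}

retriever_model = SnowflakeRM(
    snowflake_table_name="<your_table>",
    snowflake_credentials=connection_parameters,
    embeddings_field="<embeddings_column>",
    embeddings_text_field="<passages_column>",
)

# Retrieve the top 5 passages for a query.
results = retriever_model("Explore the meaning of life", k=5)
for result in results:
    print("Document:", result.long_text, "\n")
```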

`docs/docs/building-blocks/6-optimizers.md`
DSPy programs consist of multiple calls to LMs, stacked together as DSPy modules. Each DSPy module has three kinds of internal parameters: the LM weights, the instructions, and demonstrations of the input/output behavior.

Given a metric, DSPy can optimize all three of these with multi-stage optimization algorithms. These can combine gradient descent (for LM weights) and discrete LM-driven optimization, i.e. for crafting/updating instructions and for creating/validating demonstrations. DSPy Demonstrations are like few-shot examples, but they're far more powerful. They can be created from scratch, given your program, and their creation and selection can be optimized in many effective ways.

In many cases, we found that compiling yields better prompts than humans write. Not because DSPy optimizers are more creative than humans, but simply because they can try more things, much more systematically, and tune the metrics directly.
## What DSPy Optimizers are currently available?
<!-- The following diagram was generated by: -->
<!-- 1. Running symilar on the teleprompter module to extract the python hierarchy as a Graphviz dot file -->
<!-- 2. Hand-editing the resulting dot file to remove classes that are not teleprompters/optimizers (e.g., classes for data structures manipulated by optimizers). -->
<!-- 3. Using dot to compile the `.dot` file into a PNG -->
<!-- Robert Goldman [2024/05/11:rpg] -->
![Subclasses of Teleprompter](figures/teleprompter-classes.png)

All of these can be accessed via `from dspy.teleprompt import *`.
#### Automatic Few-Shot Learning
These optimizers extend the signature by automatically generating and including **optimized** examples within the prompt sent to the model, implementing few-shot learning.
1. **`LabeledFewShot`**: Simply constructs few-shot examples (demos) from provided labeled input and output data points. Requires `k` (number of examples for the prompt) and `trainset` to randomly select `k` examples from.
2. **`BootstrapFewShot`**: Uses a `teacher` module (which defaults to your program) to generate complete demonstrations for every stage of your program, along with labeled examples in `trainset`. Parameters include `max_labeled_demos` (the number of demonstrations randomly selected from the `trainset`) and `max_bootstrapped_demos` (the number of additional examples generated by the `teacher`). The bootstrapping process employs the metric to validate demonstrations, including only those that pass the metric in the "compiled" prompt; see the sketch after this list. Advanced: Supports using a `teacher` program that is a *different* DSPy program that has compatible structure, for harder tasks.
3. **`BootstrapFewShotWithRandomSearch`**: Applies `BootstrapFewShot` several times with random search over generated demonstrations, and selects the best program over the optimization. Parameters mirror those of `BootstrapFewShot`, with the addition of `num_candidate_programs`, which specifies the number of random programs evaluated over the optimization, including the uncompiled program, the `LabeledFewShot`-optimized program, the `BootstrapFewShot`-compiled program with unshuffled examples, and `num_candidate_programs` `BootstrapFewShot`-compiled programs with randomized example sets.
4. **`BootstrapFewShotWithOptuna`**: Applies `BootstrapFewShot` with Optuna optimization across demonstration sets, running trials to maximize evaluation metrics and selecting the best demonstrations.
5. **`KNNFewShot`**: Selects demonstrations through the k-Nearest Neighbors algorithm to pick a diverse set of examples from different clusters. It vectorizes the examples and clusters them, using the cluster centers with `BootstrapFewShot` for the bootstrapping/selection process. This is useful when there's a lot of data over random spaces: using KNN helps optimize the `trainset` for `BootstrapFewShot`. See [this notebook](https://github.com/stanfordnlp/dspy/blob/main/examples/knn.ipynb) for an example.
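
As a sketch of the API these few-shot optimizers share (a minimal example under stated assumptions: `my_metric`, `my_program`, and `trainset` are placeholders you define elsewhere):

```python
from dspy.teleprompt import BootstrapFewShot

# Placeholders: my_metric scores (example, prediction) pairs, my_program is a
# dspy.Module, and trainset is a list of dspy.Example objects.
optimizer = BootstrapFewShot(metric=my_metric, max_bootstrapped_demos=4, max_labeled_demos=4)
compiled_program = optimizer.compile(my_program, trainset=trainset)
```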
#### Automatic Instruction Optimization
These optimizers produce optimal instructions for the prompt and, in the case of MIPRO, also optimize the set of few-shot demonstrations.

6. **`COPRO`**: Generates and refines new instructions for each step, and optimizes them with coordinate ascent (hill-climbing using the metric function and the `trainset`). Parameters include `depth`, which is the number of iterations of prompt improvement the optimizer runs over; see the sketch after this list.
7. **`MIPRO`**: Generates instructions *and* few-shot examples in each step. The instruction generation is data-aware and demonstration-aware. Uses Bayesian Optimization to effectively search over the space of generation instructions/demonstrations across your modules.
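
A minimal `COPRO` sketch under the same placeholder assumptions (`my_metric`, `my_program`, `trainset`); the `eval_kwargs` values shown are illustrative evaluation settings:

```python
from dspy.teleprompt import COPRO

# depth controls the number of instruction-refinement iterations.
optimizer = COPRO(metric=my_metric, depth=3)

# Evaluation settings passed through to the internal evaluator; values are illustrative.
eval_kwargs = dict(num_threads=4, display_progress=True)
compiled_program = optimizer.compile(my_program, trainset=trainset, eval_kwargs=eval_kwargs)
```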
#### Automatic Finetuning
This optimizer is used to fine-tune the underlying LLM(s).

8. **`BootstrapFinetune`**: Distills a prompt-based DSPy program into weight updates (for smaller LMs). The output is a DSPy program that has the same steps, but where each step is conducted by a finetuned model instead of a prompted LM.
#### Program Transformations
9. **`Ensemble`**: Ensembles a set of DSPy programs and either uses the full set or randomly samples a subset into a single program.

```python
from dspy.teleprompt import BootstrapFewShotWithRandomSearch

# Set up the optimizer: we want to "bootstrap" (i.e., self-generate) 8-shot examples of your program's steps.
# The optimizer will repeat this 10 times (plus some initial attempts) before selecting its best attempt on the devset.
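
# The original snippet is truncated here; the lines below are a plausible
# completion sketch with placeholder names (my_metric, my_program, trainset).
config = dict(max_bootstrapped_demos=8, num_candidate_programs=10)

optimizer = BootstrapFewShotWithRandomSearch(metric=my_metric, **config)
optimized_program = optimizer.compile(my_program, trainset=trainset)
```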