README.md: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ Domain-specific embeddings can significantly improve the quality of vector repre
 
 ## Contents
 - `sentence-transformer/`: This directory contains a Jupyter notebook demonstrating how to fine-tune a sentence embedding model using the Multiple Negatives Ranking Loss technique. This loss function is recommended when your training data contains only positive pairs, for example pairs of similar texts such as paraphrases, duplicate questions, (query, response) pairs, or (source_language, target_language) pairs.
-We are using the Multiple Negatives Ranking Loss function because we are utilizing Bedrock FAQ as the training data, which consists of pairs of questions and answers.
+We are using the [Multiple Negatives Ranking Loss function](https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) because we are utilizing [Bedrock FAQ](https://aws.amazon.com/bedrock/faqs/) as the training data, which consists of pairs of questions and answers.
 The code in this directory is used in the AWS blog post "Improve RAG accuracy with fine-tuned embedding models on Amazon SageMaker".
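
With Multiple Negatives Ranking Loss, each (question, answer) pair is a positive example, and the answers of the other pairs in the same training batch act as in-batch negatives, so no explicitly labeled negatives are needed. Below is a minimal sketch of such a fine-tuning run using the sentence-transformers `fit` API; the base checkpoint, the two FAQ pairs, and the output path are illustrative placeholders, not the actual Bedrock FAQ data or the settings from the repository's notebook.

```python
# Minimal sketch: fine-tune a sentence embedding model with
# MultipleNegativesRankingLoss on positive (question, answer) pairs.
# The checkpoint, pairs, and output path are placeholders, not the
# actual Bedrock FAQ dataset used in the notebook.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from any pretrained sentence-transformers checkpoint.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Each InputExample holds one positive pair; within a batch, the
# answers of all other pairs serve as negatives for a given question.
train_examples = [
    InputExample(texts=[
        "What is Amazon Bedrock?",
        "Amazon Bedrock is a fully managed service that provides access "
        "to foundation models through a single API.",
    ]),
    InputExample(texts=[
        "Is my data used to train the underlying models?",
        "No. Customer content is not used to train the underlying models.",
    ]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model)

# A short training pass; the real notebook would tune epochs,
# batch size, and warmup for its dataset.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
model.save("finetuned-embedding-model")
```

Because every additional pair in a batch contributes one more negative, Multiple Negatives Ranking Loss generally benefits from larger batch sizes.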