best working prompt text #1819

@Amitt1412

Description

Hello folks,

I used my PDFs to build the corpus and a set of question-answer pairs, and I implemented the DSPy flow. Could you please help me understand the following:

1. Why is retrieval happening at all, instead of sending the chunk directly to the model? Our main aim is to get the response generated by the LLM as per the business need, and the retrieval step may miss the chunk that the question actually came from. This is the search function I'm using (a direct-context sketch follows the code):

```python
import functools
import torch
from litellm import embedding as Embed  # assumed import; matches the .data[0]['embedding'] access below

# `corpus` (a list of {'text': ...} dicts) and 'index.pt' come from the earlier indexing step.
index = torch.load('index.pt', weights_only=True)
max_characters = 4000  # >98th percentile of document lengths

@functools.lru_cache(maxsize=None)
def search(query, k=5):
    # Embed the query, score it against every document embedding, and return
    # the text of the k highest-scoring documents, truncated to max_characters.
    query_embedding = torch.tensor(Embed(input=query, model="text-embedding-3-small").data[0]['embedding'])
    topk_scores, topk_indices = torch.matmul(index, query_embedding).topk(k)
    topK = [dict(score=score.item(), **corpus[idx]) for idx, score in zip(topk_indices, topk_scores)]
    return [doc['text'][:max_characters] for doc in topK]
```
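
For contrast, here is roughly what "sending the chunk directly" would look like: a module that takes the known source chunk as context instead of retrieving it. This is only a sketch; the `DirectContextQA` name and its signature string are illustrative, not part of the tutorial code.

```python
import dspy

class DirectContextQA(dspy.Module):
    """Illustrative sketch: answer from a chunk passed in by the caller,
    bypassing the embedding search entirely."""

    def __init__(self):
        super().__init__()
        self.respond = dspy.ChainOfThought("context, question -> response")

    def forward(self, context, question):
        # `context` is the known source chunk, supplied directly rather
        # than looked up via search().
        return self.respond(context=context, question=question)
```
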
2. I want to save the best prompt once optimization finishes; what is the command to do that? This is how I run the optimizer (a save/load sketch follows the code):

```python
tp = dspy.MIPROv2(metric=metric, auto="medium", num_threads=24,
                  num_candidates=2)  # use fewer threads if your rate limit is small

optimized_rag = tp.compile(RAG(), trainset=trainset, valset=valset, num_trials=20,
                           max_bootstrapped_demos=2, max_labeled_demos=2,
                           requires_permission_to_run=False)
```
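
For reference, what I am hoping for is something along the lines of DSPy's `Module.save()` / `Module.load()`, sketched below; the file name is just a placeholder.

```python
# Sketch of persisting the optimized program via DSPy's Module.save()/load();
# "optimized_rag.json" is a placeholder path.
optimized_rag.save("optimized_rag.json")

# Later: rebuild the module and restore the optimized instructions/demos.
rag = RAG()
rag.load("optimized_rag.json")
```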

Suggestions would be appreciated.
