Commit 484a492

Preetam Joshi (pjoshi30) authored:

Updating documentation for the .detect() API in the client (#13)

* Updating documentation in the client

Co-authored-by: Preetam Joshi <info@aimon.ai>

1 parent 28b1e4e commit 484a492

File tree

1 file changed: +20 -22 lines changed


aimon/client.py

Lines changed: 20 additions & 22 deletions
@@ -146,29 +146,27 @@ def detect(self, data_to_send: List[Dict[str, Any]], config=Config()):
     "score": A score indicating the probability that the whole "generated_text" is hallucinated
     "sentences": An array of objects where each object contains a sentence level hallucination "score" and
                  the "text" of the sentence.
-    "quality_metrics": A collection of quality metrics for the response of the LLM
-    "results": A dict containing results of response quality detectors like conciseness and completeness
-    "conciseness": This detector checks whether or not the response had un-necessary information
-                   for the given query and the context documents
-    "reasoning": An explanation of the score that was provided.
-    "score": A probability score of how concise the response is for the user query and context documents.
-    "completeness": This detector checks whether or not the response was complete enough for the
+    "conciseness": This detector checks whether the response had un-necessary information
+                   for the given query and the context documents. It includes the following fields:
+    "reasoning": An explanation of the score that was provided.
+    "score": A probability score of how concise the response is for the user query and context documents.
+    "completeness": This detector checks whether the response was complete enough for the
     given query and context documents
-    "reasoning": An explanation of the score that was provided.
-    "score": A probability score of how complete the response is for the user query and context documents.
-    "instruction_adherence": This detector checks whether the response followed the specified instructions.
-    Results are returned in this JSON format
-    ```json
-    {
-        "instruction_adherence": [
-            {
-                "instruction": "<String>",
-                "adherence": "<Boolean>",
-                "detailed_explanation": "<String>"
-            }
-        ]
-    }
-    ```
+    "reasoning": An explanation of the score that was provided.
+    "score": A probability score of how complete the response is for the user query and context documents.
+    "instruction_adherence": This detector checks whether the response followed the specified instructions.
+    Results are returned in this JSON format:
+    ```json
+    {
+        "instruction_adherence": [
+            {
+                "instruction": "<String>",           # The instruction provided by the user
+                "adherence": "<Boolean>",            # Whether the response adhered to the instruction
+                "detailed_explanation": "<String>"   # A detailed explanation of the adherence
+            }
+        ]
+    }
+    ```
     "toxicity": Indicates whether there was toxic content in the response. It uses 6 different label types for this.
     "identity_hate": The response contained hateful content that calls out real or perceived "identity factors" of an individual or a group.
     "insult": The response contained insulting content.
