Commit b6c19d7

Update guardrails.rst
1 parent c36cf9a commit b6c19d7

1 file changed: +1 -1 lines changed


docs/source/user_guide/large_language_model/guardrails.rst

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ For example, to use the `toxicity measurement <https://huggingface.co/spaces/eva
     # Only allow content with toxicity score less than 0.2
     toxicity = HuggingFaceEvaluation(path="toxicity", threshold=0.2)

-By default, it uses the `facebook/roberta-hate-speech-dynabench-r4-target<https://huggingface.co/facebook/roberta-hate-speech-dynabench-r4-target>`_ model. You may use a custom model by specifying the ``load_args`` and ``compute_args``. For example, to use the ``DaNLP/da-electra-hatespeech-detection`` model:
+By default, it uses the `facebook/roberta-hate-speech-dynabench-r4-target <https://huggingface.co/facebook/roberta-hate-speech-dynabench-r4-target>`_ model. You may use a custom model by specifying the ``load_args`` and ``compute_args``. For example, to use the ``DaNLP/da-electra-hatespeech-detection`` model:

 .. code-block:: python3
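
For context, the paragraph touched by this commit sits directly before a Python example in guardrails.rst. Below is a minimal sketch of what that guardrail usage might look like, assuming the HuggingFaceEvaluation class referenced in the diff; the import path and the load_args/compute_args keyword names (config_name, toxic_label) are assumptions based on the Hugging Face evaluate "toxicity" measurement, not something confirmed by this commit.

    # Sketch of the guardrail usage the changed paragraph refers to.
    # Assumption: HuggingFaceEvaluation lives in the package's guardrails
    # module; the exact import path is not shown in this diff.
    from ads.llm.guardrails.huggingface import HuggingFaceEvaluation

    # Default model: facebook/roberta-hate-speech-dynabench-r4-target.
    # Only allow content with a toxicity score below 0.2.
    toxicity = HuggingFaceEvaluation(path="toxicity", threshold=0.2)

    # Custom model via load_args/compute_args. The keyword names below
    # (config_name, toxic_label) are assumptions based on the Hugging Face
    # evaluate "toxicity" measurement, not confirmed by this commit.
    toxicity_da = HuggingFaceEvaluation(
        path="toxicity",
        threshold=0.2,
        load_args={"config_name": "DaNLP/da-electra-hatespeech-detection"},
        compute_args={"toxic_label": "offensive"},
    )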
