docs/source/user_guide/large_language_model/training_llm.rst (+1 −1)
@@ -15,7 +15,7 @@ This page shows an example of fine-tuning the `Llama 2 <https://ai.meta.com/llam
 We recommend running the training on a private subnet.
 In this example, internet access is needed to download the source code and the pre-trained model.
 
-The `llama-recipes <llama-recipes>`_ repository contains example code to fine-tune llama2 model.
+The `llama-recipes <https://github.com/facebookresearch/llama-recipes>`_ repository contains example code to fine-tune llama2 model.
 The example `fine-tuning script <https://github.com/facebookresearch/llama-recipes/blob/1aecd00924738239f8d86f342b36bacad180d2b3/examples/finetuning.py>`_ supports both full parameter fine-tuning
 and `Parameter-Efficient Fine-Tuning (PEFT) <https://huggingface.co/blog/peft>`_.
 
 With ADS, you can start the training job by taking the source code directly from Github with no code change.