# 💫 StarCoder
# What is this about?
💫 StarCoder is a language model (LM) trained on source code and natural language text. Its training data incorporates more than 80 different programming languages as well as text extracted from GitHub issues and commits and from notebooks. This repository showcases how we get an overview of this LM's capabilities.
# Table of Contents
1. [Fine-tuning](#fine-tuning)
   - [Step by step installation with conda](#step-by-step-installation-with-conda)
Here, we showcase how we can fine-tune this LM on a specific downstream task.
## Step by step installation with conda
```
wandb login
```
Now that everything is done, you can clone the repository and get into the corresponding directory.
## Datasets
💫 StarCoder can be fine-tuned to perform multiple downstream tasks. Our interest here is to fine-tune StarCoder in order to make it follow instructions. [Instruction fine-tuning](https://arxiv.org/pdf/2109.01652.pdf) has gained a lot of attention recently as it proposes a simple framework that teaches language models to align their outputs with human needs. That procedure requires the availability of quality instruction datasets, which contain multiple `instruction - answer` pairs. Unfortunately, such datasets are not ubiquitous, but thanks to Hugging Face 🤗's [datasets](https://github.com/huggingface/datasets) library we have access to some good proxies. To fine-tune cheaply and efficiently, we use Hugging Face 🤗's [PEFT](https://github.com/huggingface/peft) as well as Tim Dettmers' [bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
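To make the "cheaply and efficiently" part concrete, here is a minimal sketch of how a StarCoder checkpoint could be loaded in 8-bit with bitsandbytes and wrapped with a LoRA adapter via PEFT. The checkpoint name, LoRA hyperparameters and target modules below are illustrative assumptions, not settings prescribed by this repository's fine-tuning script.

```python
# Minimal sketch: 8-bit loading (bitsandbytes) + a LoRA adapter (PEFT).
# Checkpoint name, rank and target modules are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "bigcode/starcoder"  # assumed checkpoint; replace with the one you use

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # bitsandbytes 8-bit quantization to reduce GPU memory
    device_map="auto",   # spread the layers over the available devices
)

lora_config = LoraConfig(
    r=16,                # illustrative rank; tune to your compute budget
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_proj", "c_attn", "q_attn"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices will be updated
```

Only the adapter weights are trained, which keeps the memory footprint of fine-tuning far below that of full-parameter training.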
### Code Alpaca (CA)
[Code Alpaca](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K) is a dataset of about 20K `prompt - completion` pairs generated with the technique presented in the [self-instruct](https://arxiv.org/abs/2212.10560) paper. Each prompt describes a task requested by a user, and the corresponding completion is the answer to that task as generated by `text-davinci-003`.
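As a quick sanity check before training, the dataset can be pulled with the 🤗 datasets library and a single pair inspected; the `prompt` and `completion` column names follow the description above but are worth verifying against the dataset card.

```python
# Minimal sketch: load Code Alpaca and inspect one prompt - completion pair.
from datasets import load_dataset

dataset = load_dataset("HuggingFaceH4/CodeAlpaca_20K", split="train")

example = dataset[0]
print(example["prompt"])      # the task the user asks for
print(example["completion"])  # the answer produced by text-davinci-003
```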
To execute the fine-tuning script, run the following command:
### Stack Exchange (SE)

[Stack Exchange](https://en.wikipedia.org/wiki/Stack_Exchange) is a well-known network of Q&A websites on topics in diverse fields. It is a place where a user can ask a question and obtain answers from other users. Those answers are scored and ranked based on their quality. [Stack exchange instruction](https://huggingface.co/datasets/ArmelR/stack-exchange-instruction) is a dataset that was obtained by scraping the site in order to build a collection of Q&A pairs. A language model can then be fine-tuned on that dataset so that it develops strong and diverse question-answering skills.
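Because this scraped collection is much larger than Code Alpaca, streaming is a convenient way to peek at it without downloading everything. The snippet below deliberately avoids assuming particular split or column names; it simply iterates a few examples so you can check the fields against the dataset card (you may need to pass a specific configuration or `data_dir` if the card defines several subsets).

```python
# Minimal sketch: stream a few Q&A pairs instead of downloading the full dataset.
from itertools import islice
from datasets import load_dataset

# streaming=True iterates lazily; nothing is materialized on disk.
streamed = load_dataset("ArmelR/stack-exchange-instruction", streaming=True)
split = next(iter(streamed.values()))  # take whichever split the dataset exposes first

for example in islice(split, 3):
    print(example)  # inspect the question / answer fields before wiring up training
```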
To execute the fine-tuning script, run the following command: