Commit de8d9d8

Update README.md: the `quantize` flag is no longer available; `quantize_type` assumes the role of the original flag. (#97)
1 parent 4e7085c commit de8d9d8

File tree: 1 file changed (+2 −3 lines)


README.md

Lines changed: 2 additions & 3 deletions
````diff
@@ -69,10 +69,9 @@ Need to manually modify the `config.json` in the checkpoint folder to make it a
 ```bash
 export input_ckpt_dir=Original llama weights directory
 export output_ckpt_dir=The output directory
-export quantize=True #whether to quantize
 export model_name="llama-3" # or "llama-2", "gemma"
-export quantize_type="int8_per_channel" # Availabe quantize type: {"int8", "int4"} x {"per_channel", "blockwise"}
-python -m convert_checkpoints --model_name=$model_name --input_checkpoint_dir=$input_ckpt_dir --output_checkpoint_dir=$output_ckpt_dir --quantize=$quantize --quantize_type=$quantize_type
+export quantize_type="int8_per_channel" # Availabe quantize type: {"int8", "int4"} x {"per_channel", "blockwise"}, setting this will quantize the weights
+python -m convert_checkpoints --model_name=$model_name --input_checkpoint_dir=$input_ckpt_dir --output_checkpoint_dir=$output_ckpt_dir --quantize_type=$quantize_type
 ```
 
 
````
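After this commit, the conversion script no longer takes a standalone `--quantize` flag; passing `--quantize_type` by itself enables quantization. A minimal sketch of the updated invocation, with placeholder paths you must replace (the sketch composes and echoes the command rather than running it, since `convert_checkpoints` requires the repo's environment):

```shell
# Placeholders: point these at your actual checkpoint directories.
input_ckpt_dir=/path/to/original/llama/weights
output_ckpt_dir=/path/to/output
model_name="llama-3"              # or "llama-2", "gemma"
quantize_type="int8_per_channel"  # setting this quantizes the weights

# Compose the updated command; note the old --quantize=True flag is gone,
# and --quantize_type alone now controls quantization.
cmd="python -m convert_checkpoints --model_name=$model_name --input_checkpoint_dir=$input_ckpt_dir --output_checkpoint_dir=$output_ckpt_dir --quantize_type=$quantize_type"
echo "$cmd"
```

Omitting `quantize_type` entirely should produce an unquantized checkpoint, matching the old `quantize=False` behavior.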
