Commit 6bc2a38

Update README.md
1 parent 3839ae4 commit 6bc2a38


README.md

Lines changed: 4 additions & 4 deletions
@@ -36,7 +36,7 @@ checkpoint = "bigcode/starcoder"
 device = "cuda" # for GPU usage or "cpu" for CPU usage
 
 tokenizer = AutoTokenizer.from_pretrained(checkpoint)
-model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True).to(device)
+model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
 
 inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
 outputs = model.generate(inputs)
@@ -45,10 +45,10 @@ print(tokenizer.decode(outputs[0]))
 or
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
-model_ckpt = "bigcode/starcoder"
+checkpoint = "bigcode/starcoder"
 
-model = AutoModelForCausalLM.from_pretrained(model_ckpt)
-tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
+model = AutoModelForCausalLM.from_pretrained(checkpoint)
+tokenizer = AutoTokenizer.from_pretrained(checkpoint)
 
 pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
 print( pipe("def hello():") )
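For reference, a minimal sketch of the first README snippet as it reads after this commit. The import line and the `checkpoint` assignment are assumed to sit just above the diffed hunk (the hunk header shows `checkpoint = "bigcode/starcoder"`); everything else comes directly from the diff.

```python
# Sketch of the updated snippet (post-commit); assumes `transformers` and `torch`
# are installed and a CUDA device is available ("cpu" also works).
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# trust_remote_code=True is no longer passed after this change.
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

The second (pipeline-based) snippet is functionally unchanged; the commit only renames `model_ckpt` to `checkpoint` for consistency with the first snippet.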
