1 parent 576502e commit e25ab3a
README.md
```diff
@@ -47,7 +47,7 @@ checkpoint = "bigcode/starcoder"
 device = "cuda" # for GPU usage or "cpu" for CPU usage
 
 tokenizer = AutoTokenizer.from_pretrained(checkpoint)
-# to save memory consider using fp16 or bf16 by specifying torch.dtype=torch.float16 for example
+# to save memory consider using fp16 or bf16 by specifying torch_dtype=torch.float16 for example
 model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
 
 inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
```
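For reference, here is a minimal runnable sketch of the README snippet with the corrected keyword actually applied: `torch_dtype` is a keyword argument of `from_pretrained`, not an attribute access like `torch.dtype`. The imports and the final generate/decode lines are assumptions added to make the sketch self-contained; they are not part of the hunk above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# to save memory, load the weights in fp16 (or pass torch.bfloat16 for bf16);
# note this is the torch_dtype keyword, not the attribute torch.dtype
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.float16
).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)

# assumed continuation, for a runnable end-to-end example
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```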