This repository was archived by the owner on Oct 25, 2024. It is now read-only.

Commit ab787f8

[NeuralChat] Fix magicoder model tokenizer issue (#1075)
* fix magicoder tokenizer issue

Signed-off-by: lvliang-intel <liang1.lv@intel.com>

1 parent 8f75eb1 commit ab787f8

File tree

1 file changed (+2 / -1 lines)


intel_extension_for_transformers/neural_chat/models/model_utils.py

Lines changed: 2 additions & 1 deletion

@@ -441,7 +441,8 @@ def load_model(
         try:
             tokenizer = AutoTokenizer.from_pretrained(
                 tokenizer_name,
-                use_fast=False if config.model_type == "llama" else True,
+                use_fast=False if (re.search("llama", model_name, re.IGNORECASE)
+                    or re.search("neural-chat-7b-v2", model_name, re.IGNORECASE)) else True,
                 use_auth_token=hf_access_token,
                 trust_remote_code=True if (re.search("qwen", model_name, re.IGNORECASE) or \
                     re.search("chatglm", model_name, re.IGNORECASE)) else False,
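The change replaces the `config.model_type == "llama"` check with a regex match on the model name. One reading of the fix: llama-architecture derivatives such as Magicoder report `model_type == "llama"` and were previously forced onto the slow tokenizer, while name-based matching limits the slow path to the models that actually need it. A minimal, illustrative sketch of the new selection logic (the standalone function name is hypothetical; the real logic is inline in `load_model()` in model_utils.py):

```python
import re


def needs_slow_tokenizer(model_name: str) -> bool:
    """Return True when AutoTokenizer should be created with use_fast=False.

    Mirrors the condition added in this commit: match on the model *name*
    rather than config.model_type, so llama-architecture derivatives like
    Magicoder are no longer forced onto the slow tokenizer.
    """
    return bool(
        re.search("llama", model_name, re.IGNORECASE)
        or re.search("neural-chat-7b-v2", model_name, re.IGNORECASE)
    )


# Example model names (repo IDs here are illustrative):
# needs_slow_tokenizer("meta-llama/Llama-2-7b-hf")      -> True
# needs_slow_tokenizer("Intel/neural-chat-7b-v2")       -> True
# needs_slow_tokenizer("ise-uiuc/Magicoder-S-DS-6.7B")  -> False
```

Note that `re.search` matches anywhere in the string and `re.IGNORECASE` makes the match case-insensitive, so both `Llama-2` and `llama` style names hit the slow-tokenizer branch.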

0 commit comments