Commit 803f958

Merge branch 'temp-branch'

2 parents: c41bf84 + 1066dd0

File tree

2 files changed: +3 −3 lines

README.md

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
-# MERGED 1.5 Version. macOS TEST VERSION
+# MERGED 1.5.1 macOS Merged Version
 
 This is a development version and I have not added many changes I had planned. Please feel free to use at your own risk as there may be bugs not yet found.
 
@@ -41,7 +41,7 @@ While the focus of this branch is to enhance macOS and Apple Silicon support, I
 
 Anyone who would like to assist with supporting Apple Silicon, let me know. There is much to do and I can only do so much by myself.
 
-- [MERGED 1.5 Version. macOS TEST VERSION](#merged-15-version--macos-test-version)
+- [MERGED 1.5.1 macOS Merged Version](#merged-151-macos-merged-version)
 - [Features](#features)
 - [Installation](#installation)
 - [Downloading models](#downloading-models)

modules/models.py

Lines changed: 1 addition & 1 deletion
@@ -258,7 +258,7 @@ def llamacpp_loader(model_name):
     if path.is_file():
         model_file = path
     else:
-        model_file = list(Path(f'{shared.args.model_dir}/{model_name}').glob('*ggml*'))[0]
+        model_file = list(Path(f'{shared.args.model_dir}/{model_name}').glob('*gguf'))[0]
 
     logger.info(f"llama.cpp weights detected: {model_file}\n")
     model, tokenizer = LlamaCppModel.from_pretrained(model_file)
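The models.py change swaps the glob pattern from `'*ggml*'` to `'*gguf'`, so the loader now picks up llama.cpp's newer GGUF weight files when the model name refers to a directory rather than a file. A minimal sketch of that fallback lookup, with a hypothetical `resolve_model_file` helper standing in for the body of `llamacpp_loader`:

```python
from pathlib import Path

def resolve_model_file(model_dir: str, model_name: str) -> Path:
    """Hypothetical helper mirroring the fallback in llamacpp_loader."""
    path = Path(model_dir) / model_name
    if path.is_file():
        # A direct file path was given; use it as-is.
        return path
    # Otherwise treat the name as a directory and take the first GGUF
    # weight file inside it (the commit changes '*ggml*' to '*gguf' here).
    matches = sorted(Path(f'{model_dir}/{model_name}').glob('*gguf'))
    if not matches:
        raise FileNotFoundError(f'no *gguf file under {path}')
    return matches[0]
```

Note that the original code indexes `[0]` directly, which raises a bare `IndexError` when the directory holds no matching weights; the sketch surfaces that case as a `FileNotFoundError` instead.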

0 commit comments
