2 files changed: +6 −33 lines

**examples/README.md**

````diff
@@ -6,6 +6,8 @@ Run the examples in this directory with:
 python3 examples/<example>.py
 ```
 
+See [ollama/docs/api.md](https://github.com/ollama/ollama/blob/main/docs/api.md) for full API documentation
+
 ### Chat - Chat with a model
 - [chat.py](chat.py)
 - [async-chat.py](async-chat.py)
@@ -50,12 +52,8 @@ Requirement: `pip install tqdm`
 
 
 ### Ollama Create - Create a model from a Modelfile
-```python
-python create.py <model> <modelfile>
-```
 - [create.py](create.py)
 
-See [ollama/docs/modelfile.md](https://github.com/ollama/ollama/blob/main/docs/modelfile.md) for more information on the Modelfile format.
 
 
 ### Ollama Embed - Generate embeddings with a model
````
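
With the argument parsing gone from the example (see the create.py diff below), the removed `python create.py <model> <modelfile>` usage snippet no longer applies; the example now runs with no arguments, like the others, via `python3 examples/create.py`. For reference, here is the new flow as a self-contained, commented sketch. It assumes a local Ollama server is running and that the `llama3.2` base model has already been pulled; `my-assistant` is just the example's chosen name for the derived model:

```python
from ollama import Client

# Connect to the local Ollama server (default: http://localhost:11434).
client = Client()

# Create a new model derived from an existing base model, with no Modelfile.
# `from_` names the base model; `model` names the model being created.
response = client.create(model='my-assistant', from_='llama3.2', stream=False)
print(response.status)  # typically 'success' once the model has been created
```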

**examples/create.py**

```diff
@@ -1,30 +1,5 @@
-import sys
+from ollama import Client
 
-from ollama import create
-
-
-args = sys.argv[1:]
-if len(args) == 2:
-  # create from local file
-  path = args[1]
-else:
-  print('usage: python create.py <name> <filepath>')
-  sys.exit(1)
-
-# TODO: update to real Modelfile values
-modelfile = f"""
-FROM {path}
-"""
-example_modelfile = """
-FROM llama3.2
-# sets the temperature to 1 [higher is more creative, lower is more coherent]
-PARAMETER temperature 1
-# sets the context window size to 4096, this controls how many tokens the LLM can use as context to generate the next token
-PARAMETER num_ctx 4096
-
-# sets a custom system message to specify the behavior of the chat assistant
-SYSTEM You are Mario from super mario bros, acting as an assistant.
-"""
-
-for response in create(model=args[0], modelfile=modelfile, stream=True):
-  print(response['status'])
+client = Client()
+response = client.create(model='my-assistant', from_='llama3.2', stream=False)
+print(response.status)
```
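
The deleted example Modelfile does not represent lost functionality: its directives map onto keyword arguments of the new `create` API, and passing `stream=True` restores the old progress-printing loop. A sketch of that mapping, assuming the `system` and `parameters` keywords accepted by `Client.create` in current ollama-python releases (neither appears in this diff, and the `mario` model name is made up for illustration):

```python
from ollama import Client

client = Client()

# Rough equivalent of the deleted example Modelfile:
#   FROM llama3.2
#   PARAMETER temperature 1  (higher is more creative, lower is more coherent)
#   PARAMETER num_ctx 4096   (how many tokens of context the model may use)
#   SYSTEM You are Mario from super mario bros, acting as an assistant.
for progress in client.create(
  model='mario',  # hypothetical name for the derived model
  from_='llama3.2',
  system='You are Mario from super mario bros, acting as an assistant.',
  parameters={'temperature': 1, 'num_ctx': 4096},
  stream=True,  # yields status updates, like the old `for response in create(...)` loop
):
  print(progress.status)
```

If the installed client version differs, `help(Client.create)` lists the accepted keywords; server-side behavior is covered by the api.md link added to the README above.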