@@ -29,6 +29,7 @@ Run local LLMs on iGPU, APU and CPU (AMD, Intel, and Qualcomm (Coming Soon)). E
 - [Launch Chatbot Web UI](#launch-chatbot-web-ui)
 - [Launch Model Management UI](#launch-model-management-ui)
 - [Compile OpenAI-API Compatible Server into Windows Executable](#compile-openai-api-compatible-server-into-windows-executable)
+- [Prebuilt Binary (Alpha)](#prebuilt-openai-api-compatible-windows-executable-alpha)
 - [Acknowledgements](#acknowledgements)
 
 ## Supported Models (Quick Start)
@@ -59,39 +60,39 @@ Run local LLMs on iGPU, APU and CPU (AMD, Intel, and Qualcomm (Coming Soon)). E
 
   1. Custom Setup:
 
-     - **XPU**: Requires anaconda environment. `conda create -n ellm python=3.10 libuv; conda activate llm`.
+     - **IPEX(XPU)**: Requires anaconda environment. `conda create -n ellm python=3.10 libuv; conda activate ellm`.
      - **DirectML**: If you are using a Conda environment, install additional dependencies: `conda install conda-forge::vs2015_runtime`.
 
   2. Install the embeddedllm package. `$env:ELLM_TARGET_DEVICE='directml'; pip install -e .`. Note: currently supports `cpu`, `directml` and `cuda`.
 
      - **DirectML:** `$env:ELLM_TARGET_DEVICE='directml'; pip install -e .[directml]`
      - **CPU:** `$env:ELLM_TARGET_DEVICE='cpu'; pip install -e .[cpu]`
      - **CUDA:** `$env:ELLM_TARGET_DEVICE='cuda'; pip install -e .[cuda]`
-     - **XPU:** `$env:ELLM_TARGET_DEVICE='xpu'; pip install -e .[xpu]`
+     - **IPEX:** `$env:ELLM_TARGET_DEVICE='ipex'; python setup.py develop`
      - **With Web UI**:
        - **DirectML:** `$env:ELLM_TARGET_DEVICE='directml'; pip install -e .[directml,webui]`
        - **CPU:** `$env:ELLM_TARGET_DEVICE='cpu'; pip install -e .[cpu,webui]`
        - **CUDA:** `$env:ELLM_TARGET_DEVICE='cuda'; pip install -e .[cuda,webui]`
-       - **XPU:** `$env:ELLM_TARGET_DEVICE='xpu'; pip install -e .[xpu,webui]`
+       - **IPEX:** `$env:ELLM_TARGET_DEVICE='ipex'; python setup.py develop; pip install -r requirements-webui.txt`
 
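Taken together, the Windows DirectML route above boils down to a short PowerShell session. The sketch below is illustrative only: the environment name, the model path, and the final launch line are placeholders, with `ellm_server --model_path` borrowed from the executable usage shown further down.

```powershell
# Sketch of the Windows + DirectML setup described above (placeholders, not a verified script).
conda create -n ellm python=3.10 libuv
conda activate ellm
conda install conda-forge::vs2015_runtime        # extra dependency when working inside Conda

# Install embeddedllm from the repository root with the DirectML target (add the webui extra for the UI).
$env:ELLM_TARGET_DEVICE = 'directml'
pip install -e .[directml,webui]

# Launch the OpenAI-API-compatible server; the flag mirrors the executable usage below.
ellm_server --model_path <path\to\model\weight>
```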
 - **Linux**
 
   1. Custom Setup:
 
-     - **XPU**: Requires anaconda environment. `conda create -n ellm python=3.10 libuv; conda activate llm`.
+     - **IPEX(XPU)**: Requires anaconda environment. `conda create -n ellm python=3.10 libuv; conda activate ellm`.
      - **DirectML**: If you are using a Conda environment, install additional dependencies: `conda install conda-forge::vs2015_runtime`.
 
   2. Install the embeddedllm package. `ELLM_TARGET_DEVICE='directml' pip install -e .`. Note: currently supports `cpu`, `directml` and `cuda`.
 
      - **DirectML:** `ELLM_TARGET_DEVICE='directml' pip install -e .[directml]`
      - **CPU:** `ELLM_TARGET_DEVICE='cpu' pip install -e .[cpu]`
      - **CUDA:** `ELLM_TARGET_DEVICE='cuda' pip install -e .[cuda]`
-     - **XPU:** `ELLM_TARGET_DEVICE='xpu' pip install -e .[xpu]`
+     - **IPEX:** `ELLM_TARGET_DEVICE='ipex' python setup.py develop`
      - **With Web UI**:
        - **DirectML:** `ELLM_TARGET_DEVICE='directml' pip install -e .[directml,webui]`
        - **CPU:** `ELLM_TARGET_DEVICE='cpu' pip install -e .[cpu,webui]`
        - **CUDA:** `ELLM_TARGET_DEVICE='cuda' pip install -e .[cuda,webui]`
-       - **XPU:** `ELLM_TARGET_DEVICE='xpu' pip install -e .[xpu,webui]`
+       - **IPEX:** `ELLM_TARGET_DEVICE='ipex' python setup.py develop; pip install -r requirements-webui.txt`
 
 ### Launch OpenAI API Compatible Server
 
@@ -131,9 +132,19 @@ It is an interface that allows you to download and deploy OpenAI API compatible
 ## Compile OpenAI-API Compatible Server into Windows Executable
 
 1. Install `embeddedllm`.
-2. Install PyInstaller: `pip install pyinstaller`.
+2. Install PyInstaller: `pip install pyinstaller==6.9.0`.
 3. Compile Windows Executable: `pyinstaller .\ellm_api_server.spec`.
 4. You can find the executable in the `dist\ellm_api_server` directory.
+5. Use it like `ellm_server`. `.\ellm_api_server.exe --model_path <path/to/model/weight>`.
+
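For convenience, steps 2 through 5 above can be chained in a single PowerShell session. The sketch below assumes `embeddedllm` is already installed and that `ellm_api_server.spec` sits in the current directory; the model path is a placeholder.

```powershell
# Sketch: build the standalone server and run it (assumes embeddedllm is installed
# and ellm_api_server.spec is in the current directory).
pip install pyinstaller==6.9.0
pyinstaller .\ellm_api_server.spec

# PyInstaller writes the build to dist\ellm_api_server; invoke it like ellm_server.
.\dist\ellm_api_server\ellm_api_server.exe --model_path <path\to\model\weight>
```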
+## Prebuilt OpenAI API Compatible Windows Executable (Alpha)
+You can find the prebuilt OpenAI API Compatible Windows Executable on the Releases page.
+
+*Powershell/Terminal Usage (Use it like `ellm_server`)*:
+```powershell
+.\ellm_api_server.exe --model_path <path/to/model/weight>
+```
+
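Because the executable exposes an OpenAI-compatible API, a quick smoke test from PowerShell might look like the sketch below. The port, the `/v1/chat/completions` route, and the model name are assumptions here: substitute whatever your server actually reports on startup.

```powershell
# Sketch: query the running server over its OpenAI-compatible HTTP API.
# Port, route, and model name are placeholders -- adjust them to your deployment.
$port = 6979
$body = @{
    model    = "<deployed-model-name>"
    messages = @(
        @{ role = "user"; content = "Say hello in one sentence." }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "http://localhost:$port/v1/chat/completions" `
    -ContentType "application/json" `
    -Body $body
```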
 
 ## Acknowledgements
 