Commit 12ea863

Update README and fix deps

1 parent 6f51b4c commit 12ea863
2 files changed (+22 −23 lines)

README.md

Lines changed: 21 additions & 22 deletions
@@ -20,32 +20,27 @@ Optimum ExecuTorch enables efficient deployment of transformer models using Meta
 
 ## ⚡ Quick Installation
 
-### 1. Create a virtual environment
-Install [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html) on your machine. Then, create a virtual environment to manage our dependencies.
-```
-conda create -n optimum-executorch python=3.11
-conda activate optimum-executorch
-```
-
-### 2. Install optimum-executorch from source
-```
-git clone https://github.com/huggingface/optimum-executorch.git
-cd optimum-executorch
-pip install '.[dev]'
+To install the latest stable version:
+```bash
+pip install optimum-executorch
 ```
 
-- 🔜 Install from pypi coming soon...
+<details>
+<summary>Other installation options</summary>
 
-### 3. Install dependencies in dev mode
-
-To access every available optimization and experiment with the newest features, run:
-```
-python install_dev.py
-```
+```
+# Install from source for development in editable mode
+pip install -e '.[dev]'
 
-This script will install `executorch`, `torch`, `torchao`, `transformers`, etc. from nightly builds or from source to access the latest models and optimizations.
+# Install from source, using the most recent nightly Torch and Transformers
+# dependencies. When a new model is released and enabled in Optimum ExecuTorch,
+# it will usually be available here first, since it requires recent Transformers from source.
+python install_dev.py
 
-To leave an existing ExecuTorch installation untouched, run `install_dev.py` with `--skip_override_torch` to prevent it from being overwritten.
+# Leave an existing ExecuTorch installation and other torch dependencies untouched.
+python install_dev.py --skip_override_torch
+```
+</details>
 
 ## 🎯 Quick Start
 
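Whichever install path is chosen, it can be sanity-checked without importing any heavy dependencies. A minimal standard-library sketch; note that `is_installed` is a hypothetical helper, and `optimum` as the installed top-level package name is an assumption based on the project name, not something this commit confirms:

```python
import importlib.util


def is_installed(top_level_module: str) -> bool:
    """Return True if `top_level_module` can be located without importing it."""
    return importlib.util.find_spec(top_level_module) is not None


# After `pip install optimum-executorch`, the `optimum` namespace package
# should be importable (assumed name, see lead-in above).
if is_installed("optimum"):
    print("optimum is available")
else:
    print("optimum not installed; run one of the install commands above")
```

`find_spec` only locates the module on `sys.path`, so this check stays fast even when the package pulls in large dependencies like `torch`.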
@@ -127,7 +122,11 @@ Optimum transformer models utilize:
 - A **custom KV cache** that uses a custom op for efficient in-place cache update on CPU, boosting performance by **2.5x** compared to default static KV cache.
 
 ### Backends Delegation
-Currently, **Optimum-ExecuTorch** supports the [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack.html) for CPU and [CoreML Backend](https://docs.pytorch.org/executorch/stable/backends-coreml.html) for GPU on Apple devices.
+Currently, **Optimum-ExecuTorch** supports the following backends:
+- [XNNPACK](https://pytorch.org/executorch/main/backends-xnnpack.html) - the most broadly supported backend; works with all models.
+- [CoreML](https://docs.pytorch.org/executorch/stable/backends-coreml.html)
+- CUDA
+- [Metal](https://docs.pytorch.org/executorch/stable/backends/mps/mps-overview.html) - currently only available with `executorch >= 1.1.0.dev20251017`. Please install that nightly separately to use the Metal backend.
 
 For a comprehensive overview of all backends supported by ExecuTorch, please refer to the [ExecuTorch Backend Overview](https://pytorch.org/executorch/main/backends-overview.html).
 
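The Metal bullet in the hunk above gates the backend on a minimum ExecuTorch nightly. PEP 440 dev versions compare as expected with `packaging`, so a runtime gate can be sketched as follows (`supports_metal` is a hypothetical helper for illustration, not part of optimum-executorch):

```python
from packaging.version import Version

# Minimum ExecuTorch nightly for the Metal backend, per the README change above.
METAL_MIN_VERSION = Version("1.1.0.dev20251017")


def supports_metal(installed_version: str) -> bool:
    """Hypothetical check: is this ExecuTorch version new enough for Metal?"""
    return Version(installed_version) >= METAL_MIN_VERSION


print(supports_metal("1.0.0"))              # False: predates the pinned nightly
print(supports_metal("1.1.0.dev20251017"))  # True: exactly the pinned nightly
print(supports_metal("1.1.0"))              # True: final releases sort after .dev
```

Note that under PEP 440 a `.devN` pre-release sorts *before* the corresponding final release, which is why a plain `1.1.0` passes the gate.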
pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,6 @@ dependencies = [
     "executorch>=1.0.0",
     "transformers==4.56.1",
     "pytorch-tokenizers>=1.0.1",
-    "accelerate>=0.26.0",
 ]
 
 [project.optional-dependencies]
@@ -46,6 +45,7 @@ dev = [
     "tiktoken",
     "black~=23.1",
     "ruff==0.4.4",
+    "mistral-common",
 ]
 
 [project.urls]
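For reference, the two dependency sections read as follows after this commit. This fragment is reconstructed only from the lines visible in the hunks above; entries outside the hunks' context are not shown, and the four-space indentation is assumed:

```toml
dependencies = [
    "executorch>=1.0.0",
    "transformers==4.56.1",
    "pytorch-tokenizers>=1.0.1",
]

[project.optional-dependencies]
dev = [
    "tiktoken",
    "black~=23.1",
    "ruff==0.4.4",
    "mistral-common",
]
```

Net effect: `accelerate` is dropped from the required runtime dependencies, while `mistral-common` is added to the `dev` extra only.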
