Optimum ExecuTorch enables efficient deployment of transformer models using Meta's ExecuTorch.
## ⚡ Quick Installation
### 1. Create a virtual environment
Install [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html) on your machine. Then, create a virtual environment to manage our dependencies.
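For example, assuming conda is already installed, an environment could be created and activated as follows (the environment name and Python version below are illustrative choices, not requirements stated in this README):

```shell
# Create a fresh environment; the name and Python version are illustrative.
conda create -n optimum-executorch python=3.11 -y
conda activate optimum-executorch
```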
To access every available optimization and experiment with the newest features, run:

```
# Install from source for development in editable mode.
pip install -e '.[dev]'

# Install from source, using the most recent nightly Torch and Transformers dependencies.
# When a new model is released and enabled in Optimum ExecuTorch, it will usually be
# available here first, since it requires installing recent Transformers from source.
python install_dev.py

# Leave an existing ExecuTorch installation and other torch dependencies untouched.
python install_dev.py --skip_override_torch
```
- A **custom KV cache** that uses a custom op for efficient in-place cache update on CPU, boosting performance by **2.5x** compared to default static KV cache.
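The advantage of in-place cache updates can be illustrated with a toy sketch. The class below is purely hypothetical pure Python for exposition; the actual custom KV cache operates on tensors through a custom ExecuTorch op, which is where the 2.5x speedup comes from:

```python
# Conceptual sketch only: a pre-allocated ("static") KV cache updated in place.
# The real Optimum-ExecuTorch implementation uses tensors and a custom CPU op.
class StaticKVCache:
    def __init__(self, max_len: int, head_dim: int):
        # Allocate the full cache once up front; shape [max_len, head_dim].
        self.keys = [[0.0] * head_dim for _ in range(max_len)]
        self.pos = 0

    def update(self, new_key):
        # Write at the current position in place: no reallocation and no copy
        # of previously cached entries, unlike concatenation-based caches that
        # rebuild the cache on every decoding step.
        self.keys[self.pos] = new_key
        self.pos += 1
        return self.pos
```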
### Backends Delegation
Currently, **Optimum-ExecuTorch** supports the following backends:

- [XNNPACK](https://pytorch.org/executorch/main/backends-xnnpack.html) - the most broadly supported backend; it works with all models.
- [Metal](https://docs.pytorch.org/executorch/stable/backends/mps/mps-overview.html) - currently only available with `executorch >= 1.1.0.dev20251017`. Please install that nightly separately to use the Metal backend.
For a comprehensive overview of all backends supported by ExecuTorch, please refer to the [ExecuTorch Backend Overview](https://pytorch.org/executorch/main/backends-overview.html).
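The Metal backend's nightly version gate can also be checked programmatically. The helper below is a simplified, hypothetical sketch (not part of Optimum-ExecuTorch); a real check should use `packaging.version.Version`, which implements PEP 440 ordering in full:

```python
# Hypothetical helper: is an installed ExecuTorch build recent enough for the
# Metal backend, which needs executorch >= 1.1.0.dev20251017?
def parse_version(version: str):
    # Split "1.1.0.dev20251017" into ((1, 1, 0), 20251017). A release with no
    # ".devN" suffix sorts after every dev build of the same release, which
    # matches PEP 440 ordering of dev releases.
    release, _, dev = version.partition(".dev")
    nums = tuple(int(part) for part in release.split("."))
    return nums, int(dev) if dev else float("inf")

def supports_metal(installed: str, minimum: str = "1.1.0.dev20251017") -> bool:
    return parse_version(installed) >= parse_version(minimum)
```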