<!-- Copyright 2025 The SANA-Video Authors and HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. -->

# SanaVideoPipeline

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
  <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white">
</div>

[SANA-Video: Efficient Video Generation with Block Linear Diffusion Transformer](https://huggingface.co/papers/2509.24695) from NVIDIA and MIT HAN Lab, by Junsong Chen, Yuyang Zhao, Jincheng Yu, Ruihang Chu, Junyu Chen, Shuai Yang, Xianbang Wang, Yicheng Pan, Daquan Zhou, Huan Ling, Haozhe Liu, Hongwei Yi, Hao Zhang, Muyang Li, Yukang Chen, Han Cai, Sanja Fidler, Ping Luo, Song Han, Enze Xie.

The abstract from the paper is:

*We introduce SANA-Video, a small diffusion model that can efficiently generate videos up to 720x1280 resolution and minute-length duration. SANA-Video synthesizes high-resolution, high-quality and long videos with strong text-video alignment at a remarkably fast speed, deployable on RTX 5090 GPU. Two core designs ensure our efficient, effective and long video generation: (1) Linear DiT: We leverage linear attention as the core operation, which is more efficient than vanilla attention given the large number of tokens processed in video generation. (2) Constant-Memory KV cache for Block Linear Attention: we design block-wise autoregressive approach for long video generation by employing a constant-memory state, derived from the cumulative properties of linear attention. This KV cache provides the Linear DiT with global context at a fixed memory cost, eliminating the need for a traditional KV cache and enabling efficient, minute-long video generation. In addition, we explore effective data filters and model training strategies, narrowing the training cost to 12 days on 64 H100 GPUs, which is only 1% of the cost of MovieGen. Given its low cost, SANA-Video achieves competitive performance compared to modern state-of-the-art small diffusion models (e.g., Wan 2.1-1.3B and SkyReel-V2-1.3B) while being 16x faster in measured latency. Moreover, SANA-Video can be deployed on RTX 5090 GPUs with NVFP4 precision, accelerating the inference speed of generating a 5-second 720p video from 71s to 29s (2.4x speedup). In summary, SANA-Video enables low-cost, high-quality video generation. [this https URL](https://github.com/NVlabs/SANA).*

This pipeline was contributed by the SANA Team. The original codebase can be found [here](https://github.com/NVlabs/Sana). The original weights can be found under [hf.co/Efficient-Large-Model](https://hf.co/collections/Efficient-Large-Model/sana-video).

Available models:

| Model | Recommended dtype |
|:-----:|:-----------------:|
| [`Efficient-Large-Model/SANA-Video_2B_480p_diffusers`](https://huggingface.co/Efficient-Large-Model/SANA-Video_2B_480p_diffusers) | `torch.bfloat16` |

Refer to [this](https://huggingface.co/collections/Efficient-Large-Model/sana-video) collection for more information.

Note: The recommended dtype above is for the transformer weights. The text encoder and VAE weights must stay in `torch.bfloat16` or `torch.float32` for the model to work correctly. Please refer to the inference example below to see how to load the model with the recommended dtype.

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on video quality depending on the model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [`SanaVideoPipeline`] for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SanaVideoTransformer3DModel, SanaVideoPipeline
from diffusers.utils import export_to_video
from transformers import AutoModel, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = AutoModel.from_pretrained(
    "Efficient-Large-Model/SANA-Video_2B_480p_diffusers",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = SanaVideoTransformer3DModel.from_pretrained(
    "Efficient-Large-Model/SANA-Video_2B_480p_diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = SanaVideoPipeline.from_pretrained(
    "Efficient-Large-Model/SANA-Video_2B_480p_diffusers",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

motion_score = 30
prompt = "Evening, backlight, side lighting, soft light, high contrast, mid-shot, centered composition, clean solo shot, warm color. A young Caucasian man stands in a forest, golden light glimmers on his hair as sunlight filters through the leaves. He wears a light shirt, wind gently blowing his hair and collar, light dances across his face with his movements. The background is blurred, with dappled light and soft tree shadows in the distance. The camera focuses on his lifted gaze, clear and emotional."
negative_prompt = "A chaotic sequence with misshapen, deformed limbs in heavy motion blur, sudden disappearance, jump cuts, jerky movements, rapid shot changes, frames out of sync, inconsistent character shapes, temporal artifacts, jitter, and ghosting effects, creating a disorienting visual experience."
prompt = prompt + f" motion score: {motion_score}."

output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=6.0,
    num_inference_steps=50,
).frames[0]
export_to_video(output, "sana-video-output.mp4", fps=16)
```
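
The `motion score: N.` suffix appended to the prompt above is SANA-Video's plain-text convention for requesting a given amount of motion. A tiny helper (hypothetical, not part of the diffusers API) makes the convention explicit and keeps the suffix format consistent across prompts:

```python
def add_motion_score(prompt: str, score: int) -> str:
    """Append SANA-Video's motion-score suffix to a text prompt."""
    return f"{prompt} motion score: {score}."

# Higher scores request stronger motion; the example above uses 30.
prompt = add_motion_score("A young man stands in a forest.", 30)
print(prompt)  # A young man stands in a forest. motion score: 30.
```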

## SanaVideoPipeline

[[autodoc]] SanaVideoPipeline
  - all
  - __call__

## SanaVideoPipelineOutput

[[autodoc]] pipelines.sana.pipeline_sana_video.SanaVideoPipelineOutput