
Commit 1614cf9

Add support for VaultGemma (#1413)
1 parent df905b4

4 files changed: +11 −0

README.md

Lines changed: 1 addition & 0 deletions
@@ -439,6 +439,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
 1. **Ultravox** (from Fixie.ai) released with the repository [fixie-ai/ultravox](https://github.com/fixie-ai/ultravox) by the Fixie.ai team.
 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://huggingface.co/papers/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://huggingface.co/papers/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
+1. **[VaultGemma](https://huggingface.co/docs/transformers/main/model_doc/vaultgemma)** (from Google) released with the technical report [VaultGemma: A Differentially Private Gemma Model](https://services.google.com/fh/files/blogs/vaultgemma_tech_report.pdf) by the VaultGemma Google team.
 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://huggingface.co/papers/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://huggingface.co/papers/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
 1. **[ViTMatte](https://huggingface.co/docs/transformers/model_doc/vitmatte)** (from HUST-VL) released with the paper [ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers](https://huggingface.co/papers/2305.15272) by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.

docs/snippets/6_supported-models.snippet

Lines changed: 1 addition & 0 deletions
@@ -153,6 +153,7 @@
 1. **Ultravox** (from Fixie.ai) released with the repository [fixie-ai/ultravox](https://github.com/fixie-ai/ultravox) by the Fixie.ai team.
 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://huggingface.co/papers/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://huggingface.co/papers/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
+1. **[VaultGemma](https://huggingface.co/docs/transformers/main/model_doc/vaultgemma)** (from Google) released with the technical report [VaultGemma: A Differentially Private Gemma Model](https://services.google.com/fh/files/blogs/vaultgemma_tech_report.pdf) by the VaultGemma Google team.
 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://huggingface.co/papers/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://huggingface.co/papers/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
 1. **[ViTMatte](https://huggingface.co/docs/transformers/model_doc/vitmatte)** (from HUST-VL) released with the paper [ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers](https://huggingface.co/papers/2305.15272) by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.

src/configs.js

Lines changed: 1 addition & 0 deletions
@@ -137,6 +137,7 @@ function getNormalizedConfig(config) {
         case 'qwen3':
         case 'gemma':
         case 'gemma2':
+        case 'vaultgemma':
         case 'gemma3_text':
         case 'gemma3n_text':
         case 'glm':
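
Because VaultGemma shares the Gemma 2 architecture, the config change is a single fall-through: the `'vaultgemma'` case joins the existing Gemma-family branch of `getNormalizedConfig`. A minimal sketch of the pattern, assuming the branch maps the same config keys as its Gemma neighbours (the mapped key names below are an assumption, not copied from this diff):

```js
// Sketch only: the key names are assumed from the surrounding
// Gemma-family cases in src/configs.js, which this diff does not show.
function normalizedKeys(config) {
    const mapping = {};
    switch (config.model_type) {
        case 'gemma':
        case 'gemma2':
        case 'vaultgemma': // added by this commit: falls through to the shared branch
        case 'gemma3_text':
            mapping.num_heads = 'num_key_value_heads';
            mapping.num_layers = 'num_hidden_layers';
            mapping.dim_kv = 'head_dim';
            break;
    }
    return mapping;
}

// normalizedKeys({ model_type: 'vaultgemma' }) and
// normalizedKeys({ model_type: 'gemma2' }) produce identical mappings.
```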

src/models.js

Lines changed: 8 additions & 0 deletions
@@ -4711,6 +4711,12 @@ export class Gemma2Model extends Gemma2PreTrainedModel { }
 export class Gemma2ForCausalLM extends Gemma2PreTrainedModel { }
 //////////////////////////////////////////////////

+//////////////////////////////////////////////////
+// VaultGemma models
+export class VaultGemmaPreTrainedModel extends PreTrainedModel { }
+export class VaultGemmaModel extends VaultGemmaPreTrainedModel { }
+export class VaultGemmaForCausalLM extends VaultGemmaPreTrainedModel { }
+//////////////////////////////////////////////////

 //////////////////////////////////////////////////
 // Gemma3 models

@@ -7853,6 +7859,7 @@ const MODEL_MAPPING_NAMES_DECODER_ONLY = new Map([
     ['cohere', ['CohereModel', CohereModel]],
     ['gemma', ['GemmaModel', GemmaModel]],
     ['gemma2', ['Gemma2Model', Gemma2Model]],
+    ['vaultgemma', ['VaultGemmaModel', VaultGemmaModel]],
     ['gemma3_text', ['Gemma3Model', Gemma3Model]],
     ['helium', ['HeliumModel', HeliumModel]],
     ['glm', ['GlmModel', GlmModel]],

@@ -7962,6 +7969,7 @@ const MODEL_FOR_CAUSAL_LM_MAPPING_NAMES = new Map([
     ['cohere', ['CohereForCausalLM', CohereForCausalLM]],
     ['gemma', ['GemmaForCausalLM', GemmaForCausalLM]],
     ['gemma2', ['Gemma2ForCausalLM', Gemma2ForCausalLM]],
+    ['vaultgemma', ['VaultGemmaForCausalLM', VaultGemmaForCausalLM]],
     ['gemma3_text', ['Gemma3ForCausalLM', Gemma3ForCausalLM]],
     ['helium', ['HeliumForCausalLM', HeliumForCausalLM]],
     ['glm', ['GlmForCausalLM', GlmForCausalLM]],
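
With the classes defined and registered in both auto-model mappings, VaultGemma checkpoints load through the library's standard `pipeline` path. A minimal usage sketch, assuming an ONNX-converted VaultGemma checkpoint is published on the Hub (the model id below is hypothetical):

```js
import { pipeline } from '@huggingface/transformers';

// Placeholder model id: substitute a real ONNX VaultGemma repository.
const generator = await pipeline('text-generation', 'onnx-community/vaultgemma-1b-ONNX');

const output = await generator('What is differential privacy?', {
    max_new_tokens: 64,
});
console.log(output[0].generated_text);
```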
