Commit 20a2be1

Add gMLP-S weights, 79.6 top-1

1 parent: 85f894e

File tree

2 files changed (+6, -1 lines)

README.md

Lines changed: 3 additions & 0 deletions

@@ -23,6 +23,9 @@ I'm fortunate to be able to dedicate significant time and money of my own suppor
 
 ## What's New
 
+### June 23, 2021
+* Reproduce gMLP model training, `gmlp_s16_224` trained to 79.6 top-1, matching [paper](https://arxiv.org/abs/2105.08050).
+
 ### June 20, 2021
 * Release Vision Transformer 'AugReg' weights from [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
 * .npz weight loading support added, can load any of the 50K+ weights from the [AugReg series](https://console.cloud.google.com/storage/browser/vit_models/augreg)

timm/models/mlp_mixer.py

Lines changed: 3 additions & 1 deletion

@@ -129,7 +129,9 @@ def _cfg(url='', **kwargs):
         mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD),
 
     gmlp_ti16_224=_cfg(),
-    gmlp_s16_224=_cfg(),
+    gmlp_s16_224=_cfg(
+        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gmlp_s16_224_raa-10536d42.pth',
+    ),
     gmlp_b16_224=_cfg(),
 )

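With the checkpoint URL registered in the `gmlp_s16_224` default config, the new weights can be pulled through timm's usual `create_model` interface. A minimal usage sketch, assuming a timm install that includes this commit and network access to the release checkpoint; the dummy input and shape check are illustrative only:

# Minimal sketch: load the newly added gMLP-S weights through timm.
# Assumes a timm version containing this commit and network access to the
# checkpoint referenced in the _cfg url above.
import torch
import timm

model = timm.create_model('gmlp_s16_224', pretrained=True)
model.eval()

# Forward a dummy 224x224 RGB image; the output is ImageNet-1k class logits.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # expected: torch.Size([1, 1000])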