
Commit ca9b078 (committed)

Update README.md and docs. Version bumped to 0.4.3

1 parent 6853b07 · commit ca9b078

File tree

6 files changed (+108, -36 lines)


README.md

Lines changed: 18 additions & 32 deletions
@@ -2,6 +2,15 @@
 
 ## What's New
 
+### Feb 10, 2021
+* More model archs, incl a flexible ByobNet backbone ('Bring-your-own-blocks')
+  * GPU-Efficient-Networks (https://github.com/idstcv/GPU-Efficient-Networks), impl in `byobnet.py`
+  * RepVGG (https://github.com/DingXiaoH/RepVGG), impl in `byobnet.py`
+  * classic VGG (from torchvision, impl in `vgg.py`)
+* Refinements to normalizer layer arg handling and normalizer+act layer handling in some models
+* Default AMP mode changed to native PyTorch AMP instead of APEX. Issues not being fixed with APEX. Native works with `--channels-last` and `--torchscript` model training, APEX does not.
+* Fix a few bugs introduced since last pypi release
+
 ### Feb 8, 2021
 * Add several ResNet weights with ECA attention. 26t & 50t trained @ 256, test @ 320. 269d train @ 256, fine-tune @ 320, test @ 352.
   * `ecaresnet26t` - 79.88 top-1 @ 320x320, 79.08 @ 256x256
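The AMP default flip in the Feb 10 notes is the main user-facing behavior change in this release. For reference, a minimal sketch of a native-AMP training step using standard `torch.cuda.amp` (illustrative only, not code from this commit):

```python
import torch
from torch.cuda.amp import autocast, GradScaler

# Minimal native PyTorch AMP training step (sketch, not timm's train.py).
model = torch.nn.Linear(512, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = GradScaler()  # scales the loss to avoid fp16 gradient underflow

def train_step(images, targets):
    optimizer.zero_grad()
    with autocast():  # forward pass runs in mixed precision
        loss = torch.nn.functional.cross_entropy(model(images), targets)
    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(optimizer)         # unscales grads, skips step on inf/NaN
    scaler.update()                # adapts the loss scale for the next step
    return loss.detach()
```

Unlike the APEX path, this native path also works with `--channels-last` and `--torchscript` model training, per the changelog entry above.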
@@ -118,30 +127,6 @@ Bunch of changes:
 * Some import cleanup and classifier reset changes, all models will have classifier reset to nn.Identity on reset_classifier(0) call
 * Prep for 0.1.28 pip release
 
-### May 12, 2020
-* Add ResNeSt models (code adapted from https://github.com/zhanghang1989/ResNeSt, paper https://arxiv.org/abs/2004.08955)
-
-### May 3, 2020
-* Pruned EfficientNet B1, B2, and B3 (https://arxiv.org/abs/2002.08258) contributed by [Yonathan Aflalo](https://github.com/yoniaflalo)
-
-### May 1, 2020
-* Merged a number of excellent contributions in the ResNet model family over the past month
-  * BlurPool2D and resnetblur models initiated by [Chris Ha](https://github.com/VRandme), I trained resnetblur50 to 79.3.
-  * TResNet models and SpaceToDepth, AntiAliasDownsampleLayer layers by [mrT23](https://github.com/mrT23)
-  * ecaresnet (50d, 101d, light) models and two pruned variants using pruning as per https://arxiv.org/abs/2002.08258 by [Yonathan Aflalo](https://github.com/yoniaflalo)
-* 200 pretrained models in total now with updated results csv in results folder
-
-### April 5, 2020
-* Add some newly trained MobileNet-V2 models trained with latest h-params, rand augment. They compare quite favourably to EfficientNet-Lite
-  * 3.5M param MobileNet-V2 100 @ 73%
-  * 4.5M param MobileNet-V2 110d @ 75%
-  * 6.1M param MobileNet-V2 140 @ 76.5%
-  * 5.8M param MobileNet-V2 120d @ 77.3%
-
-### March 18, 2020
-* Add EfficientNet-Lite models w/ weights ported from [Tensorflow TPU](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/lite)
-* Add RandAugment trained ResNeXt-50 32x4d weights with 79.8 top-1. Trained by [Andrew Lavin](https://github.com/andravin) (see Training section for hparams)
-
 ## Introduction
 
 Py**T**orch **Im**age **M**odels (`timm`) is a collection of image models, layers, utilities, optimizers, schedulers, data-loaders / augmentations, and reference training / validation scripts that aim to pull together a wide variety of SOTA models with ability to reproduce ImageNet training results.
@@ -150,7 +135,7 @@ The work of many others is present here. I've tried to make sure all source mate
 
 ## Models
 
-All model architecture families include variants with pretrained weights. The are variants without any weights. Help training new or better weights is always appreciated. Here are some example [training hparams](https://rwightman.github.io/pytorch-image-models/training_hparam_examples) to get you started.
+All model architecture families include variants with pretrained weights. There are specific model variants without any weights; it is NOT a bug. Help training new or better weights is always appreciated. Here are some example [training hparams](https://rwightman.github.io/pytorch-image-models/training_hparam_examples) to get you started.
 
 A full version of the list below with source links can be found in the [documentation](https://rwightman.github.io/pytorch-image-models/models/).
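The "variants without any weights" note above can be checked programmatically with the public `timm.list_models()` API; a small sketch:

```python
import timm

# Compare all registered model names against those with pretrained weights.
# The difference is exactly the weight-free variants the README note refers to.
all_names = timm.list_models()
with_weights = timm.list_models(pretrained=True)
weightless = sorted(set(all_names) - set(with_weights))
print(f"{len(all_names)} architectures, {len(with_weights)} with weights")
print(weightless[:5])  # a few variants still awaiting trained weights
```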

@@ -170,6 +155,7 @@ A full version of the list below with source links can be found in the [document
 * MNASNet B1, A1 (Squeeze-Excite), and Small - https://arxiv.org/abs/1807.11626
 * MobileNet-V2 - https://arxiv.org/abs/1801.04381
 * Single-Path NAS - https://arxiv.org/abs/1904.02877
+* GPU-Efficient Networks - https://arxiv.org/abs/2006.14090
 * HRNet - https://arxiv.org/abs/1908.07919
 * Inception-V3 - https://arxiv.org/abs/1512.00567
 * Inception-ResNet-V2 and Inception-V4 - https://arxiv.org/abs/1602.07261
@@ -178,6 +164,7 @@ A full version of the list below with source links can be found in the [document
 * NF-RegNet / NF-ResNet - https://arxiv.org/abs/2101.08692
 * PNasNet - https://arxiv.org/abs/1712.00559
 * RegNet - https://arxiv.org/abs/2003.13678
+* RepVGG - https://arxiv.org/abs/2101.03697
 * ResNet/ResNeXt
   * ResNet (v1b/v1.5) - https://arxiv.org/abs/1512.03385
   * ResNeXt - https://arxiv.org/abs/1611.05431
@@ -261,9 +248,10 @@ The root folder of the repository contains reference train, validation, and infe
 
 One of the greatest assets of PyTorch is the community and their contributions. A few of my favourite resources that pair well with the models and components here are listed below.
 
-### Training / Frameworks
-* PyTorch Lightning - https://github.com/PyTorchLightning/pytorch-lightning
-* fastai - https://github.com/fastai/fastai
+### Object Detection, Instance and Semantic Segmentation
+* Detectron2 - https://github.com/facebookresearch/detectron2
+* Segmentation Models (Semantic) - https://github.com/qubvel/segmentation_models.pytorch
+* EfficientDet (Obj Det, Semantic soon) - https://github.com/rwightman/efficientdet-pytorch
 
 ### Computer Vision / Image Augmentation
 * Albumentations - https://github.com/albumentations-team/albumentations
@@ -276,10 +264,8 @@ One of the greatest assets of PyTorch is the community and their contributions.
 ### Metric Learning
 * PyTorch Metric Learning - https://github.com/KevinMusgrave/pytorch-metric-learning
 
-### Object Detection, Instance and Semantic Segmentation
-* Detectron2 - https://github.com/facebookresearch/detectron2
-* Segmentation Models (Semantic) - https://github.com/qubvel/segmentation_models.pytorch
-* EfficientDet (Obj Det, Semantic soon) - https://github.com/rwightman/efficientdet-pytorch
+### Training / Frameworks
+* fastai - https://github.com/fastai/fastai
 
 ## Licenses

docs/archived_changes.md

Lines changed: 24 additions & 0 deletions
@@ -1,5 +1,29 @@
 # Archived Changes
 
+### May 12, 2020
+* Add ResNeSt models (code adapted from https://github.com/zhanghang1989/ResNeSt, paper https://arxiv.org/abs/2004.08955)
+
+### May 3, 2020
+* Pruned EfficientNet B1, B2, and B3 (https://arxiv.org/abs/2002.08258) contributed by [Yonathan Aflalo](https://github.com/yoniaflalo)
+
+### May 1, 2020
+* Merged a number of excellent contributions in the ResNet model family over the past month
+  * BlurPool2D and resnetblur models initiated by [Chris Ha](https://github.com/VRandme), I trained resnetblur50 to 79.3.
+  * TResNet models and SpaceToDepth, AntiAliasDownsampleLayer layers by [mrT23](https://github.com/mrT23)
+  * ecaresnet (50d, 101d, light) models and two pruned variants using pruning as per https://arxiv.org/abs/2002.08258 by [Yonathan Aflalo](https://github.com/yoniaflalo)
+* 200 pretrained models in total now with updated results csv in results folder
+
+### April 5, 2020
+* Add some newly trained MobileNet-V2 models trained with latest h-params, rand augment. They compare quite favourably to EfficientNet-Lite
+  * 3.5M param MobileNet-V2 100 @ 73%
+  * 4.5M param MobileNet-V2 110d @ 75%
+  * 6.1M param MobileNet-V2 140 @ 76.5%
+  * 5.8M param MobileNet-V2 120d @ 77.3%
+
+### March 18, 2020
+* Add EfficientNet-Lite models w/ weights ported from [Tensorflow TPU](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/lite)
+* Add RandAugment trained ResNeXt-50 32x4d weights with 79.8 top-1. Trained by [Andrew Lavin](https://github.com/andravin) (see Training section for hparams)
+
 ### April 5, 2020
 * Add some newly trained MobileNet-V2 models trained with latest h-params, rand augment. They compare quite favourably to EfficientNet-Lite
   * 3.5M param MobileNet-V2 100 @ 73%

docs/changes.md

Lines changed: 50 additions & 0 deletions
@@ -1,5 +1,55 @@
 # Recent Changes
 
+### Feb 10, 2021
+* More model archs, incl a flexible ByobNet backbone ('Bring-your-own-blocks')
+  * GPU-Efficient-Networks (https://github.com/idstcv/GPU-Efficient-Networks), impl in `byobnet.py`
+  * RepVGG (https://github.com/DingXiaoH/RepVGG), impl in `byobnet.py`
+  * classic VGG (from torchvision, impl in `vgg.py`)
+* Refinements to normalizer layer arg handling and normalizer+act layer handling in some models
+* Default AMP mode changed to native PyTorch AMP instead of APEX. Issues not being fixed with APEX. Native works with `--channels-last` and `--torchscript` model training, APEX does not.
+* Fix a few bugs introduced since last pypi release
+
+### Feb 8, 2021
+* Add several ResNet weights with ECA attention. 26t & 50t trained @ 256, test @ 320. 269d train @ 256, fine-tune @ 320, test @ 352.
+  * `ecaresnet26t` - 79.88 top-1 @ 320x320, 79.08 @ 256x256
+  * `ecaresnet50t` - 82.35 top-1 @ 320x320, 81.52 @ 256x256
+  * `ecaresnet269d` - 84.93 top-1 @ 352x352, 84.87 @ 320x320
+* Remove separate tiered (`t`) vs tiered_narrow (`tn`) ResNet model defs, all `tn` changed to `t` and `t` models removed (`seresnext26t_32x4d` only model w/ weights that was removed).
+* Support model default_cfgs with separate train vs test resolution `test_input_size` and remove extra `_320` suffix ResNet model defs that were just for test.
+
+### Jan 30, 2021
+* Add initial "Normalization Free" NF-RegNet-B* and NF-ResNet model definitions based on [paper](https://arxiv.org/abs/2101.08692)
+
+### Jan 25, 2021
+* Add ResNetV2 Big Transfer (BiT) models w/ ImageNet-1k and 21k weights from https://github.com/google-research/big_transfer
+* Add official R50+ViT-B/16 hybrid models + weights from https://github.com/google-research/vision_transformer
+* ImageNet-21k ViT weights are added w/ model defs and representation layer (pre logits) support
+  * NOTE: ImageNet-21k classifier heads were zero'd in original weights, they are only useful for transfer learning
+* Add model defs and weights for DeiT Vision Transformer models from https://github.com/facebookresearch/deit
+* Refactor dataset classes into ImageDataset/IterableImageDataset + dataset specific parser classes
+* Add Tensorflow-Datasets (TFDS) wrapper to allow use of TFDS image classification sets with train script
+  * Ex: `train.py /data/tfds --dataset tfds/oxford_iiit_pet --val-split test --model resnet50 -b 256 --amp --num-classes 37 --opt adamw --lr 3e-4 --weight-decay .001 --pretrained -j 2`
+* Add improved .tar dataset parser that reads images from .tar, folder of .tar files, or .tar within .tar
+  * Run validation on full ImageNet-21k directly from tar w/ BiT model: `validate.py /data/fall11_whole.tar --model resnetv2_50x1_bitm_in21k --amp`
+* Models in this update should be stable w/ possible exception of ViT/BiT, possibility of some regressions with train/val scripts and dataset handling
+
+### Jan 3, 2021
+* Add SE-ResNet-152D weights
+  * 256x256 val, 0.94 crop top-1 - 83.75
+  * 320x320 val, 1.0 crop - 84.36
+* Update results files
+
+### Dec 18, 2020
+* Add ResNet-101D, ResNet-152D, and ResNet-200D weights trained @ 256x256
+  * 256x256 val, 0.94 crop (top-1) - 101D (82.33), 152D (83.08), 200D (83.25)
+  * 288x288 val, 1.0 crop - 101D (82.64), 152D (83.48), 200D (83.76)
+  * 320x320 val, 1.0 crop - 101D (83.00), 152D (83.66), 200D (84.01)
+
+### Dec 7, 2020
+* Simplify EMA module (ModelEmaV2), compatible with fully torchscripted models
+* Misc fixes for SiLU ONNX export, default_cfg missing from Feature extraction models, Linear layer w/ AMP + torchscript
+* PyPi release @ 0.3.2 (needed by EfficientDet)
+
 ### Oct 30, 2020
 * Test with PyTorch 1.7 and fix a small top-n metric view vs reshape issue.
 * Convert newly added 224x224 Vision Transformer weights from official JAX repo. 81.8 top-1 for B/16, 83.1 L/16.
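Several entries above (TFDS, tar parsing) already carry runnable commands; the Dec 7 `ModelEmaV2` item does not, so here is a minimal usage sketch, assuming the `timm.utils.ModelEmaV2` API of this era (a `decay` argument, an `update()` method, averaged weights exposed on `.module`):

```python
import torch
import timm
from timm.utils import ModelEmaV2  # simplified EMA module from the Dec 7 entry

model = timm.create_model('resnet18', num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
ema = ModelEmaV2(model, decay=0.999)  # holds an exponential moving average copy

for _ in range(3):  # toy training loop on random data
    loss = model(torch.randn(8, 3, 224, 224)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema.update(model)  # fold the freshly updated weights into the EMA copy

# ema.module is the averaged model, typically the one validated/checkpointed
print(type(ema.module).__name__)
```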

docs/models.md

Lines changed: 12 additions & 0 deletions
@@ -31,6 +31,10 @@ The validation results for the pretrained weights can be found [here](results.md
 * My PyTorch code: https://github.com/rwightman/pytorch-dpn-pretrained
 * Reference code: https://github.com/cypw/DPNs
 
+## GPU-Efficient Networks [[byobnet.py](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/byobnet.py)]
+* Paper: `Neural Architecture Design for GPU-Efficient Networks` - https://arxiv.org/abs/2006.14090
+* Reference code: https://github.com/idstcv/GPU-Efficient-Networks
+
 ## HRNet [[hrnet.py](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/hrnet.py)]
 * Paper: `Deep High-Resolution Representation Learning for Visual Recognition` - https://arxiv.org/abs/1908.07919
 * Code: https://github.com/HRNet/HRNet-Image-Classification
@@ -82,6 +86,10 @@ The validation results for the pretrained weights can be found [here](results.md
 * Paper: `Designing Network Design Spaces` - https://arxiv.org/abs/2003.13678
 * Reference code: https://github.com/facebookresearch/pycls/blob/master/pycls/models/regnet.py
 
+## RepVGG [[byobnet.py](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/byobnet.py)]
+* Paper: `Making VGG-style ConvNets Great Again` - https://arxiv.org/abs/2101.03697
+* Reference code: https://github.com/DingXiaoH/RepVGG
+
 ## ResNet, ResNeXt [[resnet.py](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/resnet.py)]
 
 * ResNet (V1B)
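Both new `byobnet.py` families documented in this file (GPU-Efficient Networks above, RepVGG here) are reachable through the usual timm factory functions. A quick sketch — the variant names below are examples and should be checked against `list_models()` output for the installed version:

```python
import timm
import torch

# Enumerate the byobnet.py-based families added in 0.4.3
# (name patterns assumed; verify against your installed version).
print(timm.list_models('repvgg*'))  # RepVGG variants
print(timm.list_models('gernet*'))  # GPU-Efficient Networks (GENet) variants

# 'repvgg_b1' is one example variant; pick any name from the lists above.
model = timm.create_model('repvgg_b1', pretrained=False, num_classes=10)
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 10])
```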
@@ -136,6 +144,10 @@ NOTE: I am deprecating this version of the networks, the new ones are part of `r
 * Paper: `TResNet: High Performance GPU-Dedicated Architecture` - https://arxiv.org/abs/2003.13630
 * Code: https://github.com/mrT23/TResNet
 
+## VGG [[vgg.py](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vgg.py)]
+* Paper: `Very Deep Convolutional Networks For Large-Scale Image Recognition` - https://arxiv.org/pdf/1409.1556.pdf
+* Reference code: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py
+
 ## Vision Transformer [[vision_transformer.py](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py)]
 * Paper: `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - https://arxiv.org/abs/2010.11929
 * Reference code and pretrained weights: https://github.com/google-research/vision_transformer

docs/scripts.md

Lines changed: 3 additions & 3 deletions
@@ -10,9 +10,9 @@ The variety of training args is large and not all combinations of options (or ev
 
 To train an SE-ResNet34 on ImageNet, locally distributed, 4 GPUs, one process per GPU w/ cosine schedule, random-erasing prob of 50% and per-pixel random value:
 
-`./distributed_train.sh 4 /data/imagenet --model seresnet34 --sched cosine --epochs 150 --warmup-epochs 5 --lr 0.4 --reprob 0.5 --remode pixel --batch-size 256 -j 4`
+`./distributed_train.sh 4 /data/imagenet --model seresnet34 --sched cosine --epochs 150 --warmup-epochs 5 --lr 0.4 --reprob 0.5 --remode pixel --batch-size 256 --amp -j 4`
 
-NOTE: NVIDIA APEX should be installed to run in per-process distributed via DDP or to enable AMP mixed precision with the --amp flag
+NOTE: It is recommended to use PyTorch 1.7+ w/ PyTorch native AMP and DDP instead of APEX AMP. `--amp` defaults to native AMP as of timm ver 0.4.3. `--apex-amp` will force use of APEX components if they are installed.
 
 ## Validation / Inference Scripts

@@ -24,4 +24,4 @@ To validate with the model's pretrained weights (if they exist):
 
 To run inference from a checkpoint:
 
-`python inference.py /imagenet/validation/ --model mobilenetv3_large_100 --checkpoint ./output/model_best.pth.tar`
+`python inference.py /imagenet/validation/ --model mobilenetv3_large_100 --checkpoint ./output/train/model_best.pth.tar`
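From Python rather than the CLI, roughly the same checkpoint load can be done via `timm.create_model`'s `checkpoint_path` argument; a sketch (the checkpoint path is the illustrative one from the command above, not a file that ships with the repo):

```python
import timm
import torch

# Approximate Python equivalent of the inference.py invocation above.
# checkpoint_path loads weights from a train-script checkpoint into the model.
model = timm.create_model(
    'mobilenetv3_large_100',
    num_classes=1000,
    checkpoint_path='./output/train/model_best.pth.tar',
)
model.eval()

with torch.no_grad():
    probs = model(torch.randn(1, 3, 224, 224)).softmax(dim=-1)
print(probs.argmax(dim=-1))  # predicted class index
```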

timm/version.py

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-__version__ = '0.4.2'
+__version__ = '0.4.3'
