* The Hugging Face Hub (https://huggingface.co/timm) is now the primary source for `timm` weights. Model cards include links to papers, original source, and license.
* Previous 0.6.x can be cloned from the [0.6.x](https://github.com/rwightman/pytorch-image-models/tree/0.6.x) branch or installed via pip by pinning the version.
### Feb 19, 2024
* Next-ViT models added. Adapted from https://github.com/bytedance/Next-ViT
* HGNet and PP-HGNetV2 models added. Adapted from https://github.com/PaddlePaddle/PaddleClas by [SeeFun](https://github.com/seefun)
* Removed setup.py, moved to pyproject.toml based build supported by PDM
* Add updated model EMA impl using `_for_each` for less overhead
* Support device args in train script for non GPU devices
* Other misc fixes and small additions
* Release 0.9.16
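The updated EMA impl above applies the standard exponential-moving-average update to every model tensor; `timm` batches that update with PyTorch's fused `_foreach_` ops to avoid a Python loop per tensor. A minimal pure-Python sketch of the underlying rule (function name is illustrative, not the `timm` API):

```python
def ema_update(ema_params, model_params, decay=0.9998):
    """EMA rule, element-wise: ema <- decay * ema + (1 - decay) * param.

    timm's updated implementation applies this across all model tensors at
    once with fused torch._foreach_* ops instead of a per-tensor Python loop.
    """
    return [decay * e + (1.0 - decay) * p
            for e, p in zip(ema_params, model_params)]

# Toy example with scalar "parameters" converging toward [1.0, 1.0]:
ema = [0.0, 2.0]
for _ in range(3):
    ema = ema_update(ema, [1.0, 1.0], decay=0.5)
print(ema)  # [0.875, 1.125] - each value halves its distance to 1.0 per step
```

With `decay=0.5` the math is easy to check by hand; real training uses a decay close to 1 (e.g. 0.9998) so the EMA weights track a slow average of the training weights.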
### Jan 8, 2024
Datasets & transform refactoring
* HuggingFace streaming (iterable) dataset support (`--dataset hfids:org/dataset`)
* Add DaViT models. Supports `features_only=True`. Adapted from https://github.com/dingmyu/davit by [Fredo](https://github.com/fffffgggg54).
* Use a common NormMlpClassifierHead across MaxViT, ConvNeXt, DaViT
* Add EfficientFormer-V2 model, update EfficientFormer, and refactor LeViT (closely related architectures). Weights on HF hub.
  * New EfficientFormer-V2 arch, significant refactor from original at (https://github.com/snap-research/EfficientFormer). Supports `features_only=True`.
  * Minor updates to EfficientFormer.
* Refactor LeViT models to stages, add `features_only=True` support to new `conv` variants, weight remap required.
* Move ImageNet meta-data (synsets, indices) from `/results` to [`timm/data/_info`](timm/data/_info/).
* Add ImageNetInfo / DatasetInfo classes to provide labelling for various ImageNet classifier layouts in `timm`
* Cleanup some popular models to better support arg passthrough / merge with model configs, more to go.
### Jan 5, 2023
* ConvNeXt-V2 models and weights added to existing `convnext.py`
  * Paper: [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](http://arxiv.org/abs/2301.00808)
  * Reference impl: https://github.com/facebookresearch/ConvNeXt-V2 (NOTE: weights currently CC-BY-NC)
## Introduction
Py**T**orch **Im**age **M**odels (`timm`) is a collection of image models, layers, utilities, optimizers, schedulers, data-loaders / augmentations, and reference training / validation scripts that aims to pull together a wide variety of SOTA models with the ability to reproduce ImageNet training results.