1 parent 6e894fc commit 65b5cfd
README.md
@@ -1,4 +1,4 @@
-# Flat and anneal lr scheduler in pytorch
+## Flat and anneal lr scheduler in pytorch

 `warmup_method`:
 * `linear`
@@ -12,10 +12,10 @@

 * `exp`

-# Usage:
+## Usage:
 See `test_flat_and_anneal()`.

-# Convention
+## Convention
 * The scheduler should be applied by iteration (or by batch) instead of by epoch.
 * `anneal_point` and `steps` are the percentages of the total iterations.
 * `init_warmup_lr = warmup_factor * base_lr`