
Commit d52eada

Valahaar and Borda authored
Fix mistake in total steps computation for GLUE notebook (#142)
* fix to total_steps formula
* Update .meta.yml
* drop cpu

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
1 parent 226bb97 commit d52eada

File tree

2 files changed: +4, -5 lines changed

lightning_examples/text-transformers/.meta.yml

Lines changed: 2 additions & 3 deletions
@@ -1,9 +1,9 @@
 title: Finetune Transformers Models with PyTorch Lightning
 author: PL team
 created: 2021-01-31
-updated: 2021-12-03
+updated: 2022-02-08
 license: CC BY-SA
-build: 2
+build: 0
 tags:
   - Text
 description: |
@@ -17,5 +17,4 @@ requirements:
   - scikit-learn
   - torchtext>=0.9
 accelerator:
-  - CPU
   - GPU

lightning_examples/text-transformers/text-transformers.py

Lines changed: 2 additions & 2 deletions
@@ -224,8 +224,8 @@ def setup(self, stage=None) -> None:
 
         # Calculate total steps
         tb_size = self.hparams.train_batch_size * max(1, self.trainer.gpus)
-        ab_size = self.trainer.accumulate_grad_batches * float(self.trainer.max_epochs)
-        self.total_steps = (len(train_loader.dataset) // tb_size) // ab_size
+        ab_size = tb_size * self.trainer.accumulate_grad_batches
+        self.total_steps = int((len(train_loader.dataset) / ab_size) * float(self.trainer.max_epochs))
 
     def configure_optimizers(self):
         """Prepare optimizer and schedule (linear warmup and decay)"""

0 commit comments
