doc/api/training/smp_versions/v1.2.0 (1 file changed, +3 −3)

@@ -140,16 +140,16 @@ This API document assumes you use the following import statements in your training script
 computation. ``bucket_cap_mb`` controls the bucket size in megabytes
 (MB).

-- ``trace_memory_usage`` (default: False): When set to True, the library attempts
+- ``trace_memory_usage`` (default: False): When set to True, the library attempts
   to measure memory usage per module during tracing. If this is disabled,
   memory usage will be estimated through the sizes of tensors returned from
   the module.

-- ``broadcast_buffers`` (default: True): Flag to be used with ``ddp=True``.
+- ``broadcast_buffers`` (default: True): Flag to be used with ``ddp=True``.
   This parameter is forwarded to the underlying ``DistributedDataParallel`` wrapper.
   Please see `broadcast_buffer <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`__.

-- ``gradient_as_bucket_view (PyTorch 1.7 only)`` (default: False): To be
+- ``gradient_as_bucket_view (PyTorch 1.7 only)`` (default: False): To be
   used with ``ddp=True``. This parameter is forwarded to the underlying
   ``DistributedDataParallel`` wrapper. Please see `gradient_as_bucket_view <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`__.
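The parameters in this hunk, together with ``bucket_cap_mb``, are documented as options of the library's ``DistributedModel`` wrapper. Below is a minimal sketch of how they might be passed, assuming the standard ``import smdistributed.modelparallel.torch as smp`` import that this API document refers to; the model and initialization shown are illustrative placeholders, not part of the diff.

.. code-block:: python

    import torch
    import smdistributed.modelparallel.torch as smp

    # Initialize the library. ``broadcast_buffers`` and
    # ``gradient_as_bucket_view`` only take effect when the library is
    # configured with ``ddp=True`` (assumed here, set via the library's
    # configuration rather than in this snippet).
    smp.init()

    model = torch.nn.Linear(128, 128)  # placeholder model

    # Wrap the model; each keyword corresponds to a parameter documented above.
    model = smp.DistributedModel(
        model,
        bucket_cap_mb=25,               # gradient bucket size in MB (value illustrative)
        trace_memory_usage=True,        # measure per-module memory during tracing
        broadcast_buffers=True,         # forwarded to DistributedDataParallel
        gradient_as_bucket_view=False,  # PyTorch 1.7 only; forwarded to DDP
    )

Because the last two flags are simply forwarded to the underlying ``DistributedDataParallel`` wrapper, their semantics are those of the linked PyTorch documentation rather than anything specific to this library.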