
Commit 0ea3168

Babakk (Graphcore) authored and georgepaw committed
Adding within_replica_ops to TF python API docs page.
Summary: Fixes T58535
Test Plan: CI
Reviewers: #tensorflow, #framework_ip_review_-_any_oss_or_third-party_code_use_has_been_approved, jamiep, georgep
Reviewed By: #tensorflow, #framework_ip_review_-_any_oss_or_third-party_code_use_has_been_approved, jamiep, georgep
Maniphest Tasks: T58535
Differential Revision: https://phabricator.sourcevertex.net/D63449
1 parent e352077 commit 0ea3168

File tree

2 files changed: +12, -5 lines

tensorflow/compiler/plugin/poplar/docs/api.rst

Lines changed: 4 additions & 0 deletions
@@ -276,6 +276,10 @@ It is also possible to access the operators via the
   :members:
   :imported-members:
 
+.. automodule:: tensorflow.python.ipu.within_replica_ops
+  :members:
+  :imported-members:
+
 .. Poprand
 
 .. automodule:: tensorflow.python.ipu.rand_ops

tensorflow/python/ipu/ops/within_replica_ops.py

Lines changed: 8 additions & 5 deletions
@@ -28,7 +28,7 @@ def all_reduce(input_shards, op): #pylint: disable=missing-type-doc,missing-ret
   on the results, so each shard contains all the reduced results. Inputs
   are 0 padded to the same size. Example:
 
-  .. code-block: none
+  .. code-block:: none
 
   Input: IPU0 [x0, y0]
          IPU1 [x1, y1, z1]
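The padding-and-reduce behaviour described in that docstring can be sketched without IPU hardware. The following NumPy simulation is only an illustration of the documented semantics, not the real `within_replica_ops.all_reduce` op; the function name `simulated_all_reduce` and its list-of-arrays interface are hypothetical stand-ins for per-shard tensors.

```python
# Hedged sketch: a NumPy simulation of the documented all_reduce semantics.
# The real op runs across IPU shards; here `shards` is simply a list of
# 1-D arrays standing in for the per-IPU inputs.
import numpy as np


def simulated_all_reduce(shards, op=np.add):
    """Zero-pad every shard to the longest length, then reduce elementwise."""
    width = max(len(s) for s in shards)
    padded = [
        np.pad(np.asarray(s, dtype=float), (0, width - len(s)))
        for s in shards
    ]
    reduced = padded[0]
    for s in padded[1:]:
        reduced = op(reduced, s)
    # Every shard receives the same fully reduced result.
    return [reduced.copy() for _ in shards]


out = simulated_all_reduce([[1.0, 2.0], [10.0, 20.0, 30.0]])
# Each "IPU" now holds [11.0, 22.0, 30.0]: the shorter input was
# zero-padded before the elementwise sum.
```

This mirrors the docstring's example, where IPU0's two-element input is padded to match IPU1's three-element input before the reduction.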
@@ -105,14 +105,17 @@ def all_gather(input_shards, axis=0): #pylint: disable=missing-type-doc,missing
   Perform an all gather for a list of sharded tensors within a replica.
 
   Args:
-    input_shards: the sharded input tensors to gather. These are expected to
+    input_shards: The sharded input tensors to gather. These are expected to
       be supplied in incrementing sharded order, so that input_shards[0] is on
-      shard0 and input_shard[i] is on shard i. Additionally these tensors must
+      shard 0 and input_shard[i] is on shard i. Additionally these tensors must
       all be of the same type and of the same rank.
-    :
+    axis: `input_shards` are flattened to rank 1 prior to being gathered and
+      reshaped on return. This argument specifies the axis that the gathered
+      elements should be added to.
+
   Returns:
     A tuple of tensors that contains a copy of the data for each shard. Element
-    i is the `Tensor` mapped to shard i. Each sub tensor is of shape
+    i is the tensor mapped to shard i. Each sub-tensor is of shape
     `tf.concat(input_shards, axis=axis)`.
   """
   _validate_inputs(input_shards)
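The updated `all_gather` docstring says every shard ends up holding `tf.concat(input_shards, axis=axis)`. That contract can be sketched as a NumPy simulation; again this is only an illustration of the documented result, not the real IPU op, and the helper name `simulated_all_gather` is hypothetical.

```python
# Hedged sketch: a NumPy stand-in for the documented all_gather contract.
# Per the docstring, element i of the returned tuple is the concatenation
# of all input shards along `axis`, replicated to every shard.
import numpy as np


def simulated_all_gather(input_shards, axis=0):
    """Return one copy per shard of the shards concatenated along `axis`."""
    gathered = np.concatenate([np.asarray(s) for s in input_shards],
                              axis=axis)
    return tuple(gathered.copy() for _ in input_shards)


shards = [np.array([[1, 2]]), np.array([[3, 4]])]
result = simulated_all_gather(shards, axis=0)
# result[i] equals np.concatenate(shards, axis=0) for every shard i.
```

The real op additionally flattens each shard to rank 1 before the gather and reshapes on return, which this sketch skips since the observable result is the same concatenated tensor on every shard.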

0 commit comments