Commit 9e3e04a: Working on documentation
1 parent 2dae0ca commit 9e3e04a

4 files changed: +23 −11 lines changed

docs/source/global.rst

Lines changed: 12 additions & 3 deletions
@@ -252,8 +252,10 @@ Multidimensional distributed arrays
 
 The procedure discussed above remains the same for any type of array, of any
 dimensionality. With mpi4py-fft we can distribute any array of arbitrary dimensionality
-using an arbitrary number of processor groups. How to distribute is completely
-configurable through the classes in the :mod:`.pencil` module.
+using any number of processor groups. We only require that the number of processor
+groups is at least one less than the number of dimensions, since one axis must
+remain aligned. Apart from this the distribution is completely configurable through
+the classes in the :mod:`.pencil` module.
 
 We denote a global :math:`d`-dimensional array as :math:`u_{j_0, j_1, \ldots, j_{d-1}}`,
 where :math:`j_m\in\textbf{j}_m` for :math:`m=[0, 1, \ldots, d-1]`.
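The block decomposition described in the changed paragraph can be illustrated without MPI at all. The sketch below is a minimal, hypothetical helper (plain numpy, not part of the mpi4py-fft API) that computes the slice of the global array a single process would own, given a processor grid in which aligned axes have group size 1:

```python
import numpy as np

def local_slice(global_shape, grid, coords):
    """Return the slice of the global array owned by the process at
    position `coords` in the processor grid `grid`.  Axes where
    grid[i] == 1 are aligned (not distributed).  Illustrative helper,
    not part of mpi4py-fft."""
    slices = []
    for n, p, c in zip(global_shape, grid, coords):
        q, r = divmod(n, p)
        # the first r processes along an axis get one extra element
        start = c * q + min(c, r)
        stop = start + q + (1 if c < r else 0)
        slices.append(slice(start, stop))
    return tuple(slices)

# A 4-dimensional global array distributed over 3 processor groups
# (P0, P1, P2), each of size 2; axis 3 stays aligned.
global_shape = (8, 8, 8, 8)
grid = (2, 2, 2, 1)

u = np.arange(np.prod(global_shape)).reshape(global_shape)
s = local_slice(global_shape, grid, (0, 1, 0, 0))
print(u[s].shape)  # each process owns a (4, 4, 4, 8) block
```

With an even split as here every process owns a block of the same shape; the `min(c, r)` term handles axes that do not divide evenly.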
@@ -263,7 +265,7 @@ than one processor group, the groups are indexed, like :math:`P_0, P_1` etc.
 
 Lets illustrate using a 4-dimensional array with 3 processor groups. Let the
 array be aligned only in axis 3 first (:math:`u_{j_0/P_0, j_1/P_1, j_2/P_2, j_3}`),
-and then redistributed for alignment along axes 2, 1 and finally 0. Mathematically,
+and then redistribute for alignment along axes 2, 1 and finally 0. Mathematically,
 we will now be executing the three following global redistributions:
 
 .. math::
@@ -273,6 +275,13 @@ we will now be executing the three following global redistributions:
     u_{j_0/P_0, j_1, j_2/P_1, j_3/P_2} \xleftarrow[P_1]{2 \rightarrow 1} u_{j_0/P_0, j_1/P_1, j_2, j_3/P_2} \\
     u_{j_0, j_1/P_0, j_2/P_1, j_3/P_2} \xleftarrow[P_0]{1 \rightarrow 0} u_{j_0/P_0, j_1, j_2/P_1, j_3/P_2}
 
+Note that in the first step it is only processor group :math:`P_2` that is
+active in the redistribution, and the output (left hand side) is now aligned
+in axis 2. This can be seen since there is no processor group there to
+share the :math:`j_2` index.
+In the second step processor group :math:`P_1` is the active one, and
+in the final step :math:`P_0`.
+
 Now, it is not necessary to use three processor groups just because we have a
 four-dimensional array. We could just as well have been using 2 or 1. The advantage
 of using more groups is that you can then use more processors in total. Assuming
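The three redistributions added in the hunk above can be checked mechanically: writing each distribution as a process-grid placement (group size 1 for the aligned axis), each step swaps exactly one processor group between two adjacent axes, which is why only that group is active. A small standalone sketch (pure Python, illustrative names only):

```python
# The three global redistributions written as process-grid placements
# over the four axes; a 1 means the axis is aligned (undistributed).
P0, P1, P2 = 2, 2, 2

steps = [
    (P0, P1, P2, 1),  # u_{j0/P0, j1/P1, j2/P2, j3}: aligned in axis 3
    (P0, P1, 1, P2),  # after 3 -> 2: P2 now shares axis 3, axis 2 aligned
    (P0, 1, P1, P2),  # after 2 -> 1: P1 moves down, axis 1 aligned
    (1, P0, P1, P2),  # after 1 -> 0: P0 moves down, axis 0 aligned
]

for before, after in zip(steps, steps[1:]):
    # only the two axes exchanged by the active group change place
    moved = [i for i in range(4) if before[i] != after[i]]
    print(moved)  # prints [2, 3], then [1, 2], then [0, 1]
```

Each printed pair matches one arrow in the math above: the active group (:math:`P_2`, then :math:`P_1`, then :math:`P_0`) exchanges data only along the two axes it touches; all other groups keep their placement.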

docs/source/howtocite.rst

Lines changed: 5 additions & 5 deletions
@@ -7,13 +7,13 @@ Please cite mpi4py-fft using
 
     @article{jpdc_fft,
        author = {{Dalcin, Lisandro and Mortensen, Mikael and Keyes, David E}},
-       year = 2019,
+       year = {{2019}},
        title = {{Fast parallel multidimensional FFT using advanced MPI}},
        journal = {{Journal of Parallel and Distributed Computing}},
-       volume = in press
+       volume = {{in press}}
    }
    @electronic{mpi4py-fft,
-       author = {{Lisandro Dalcin and Mikael Mortensen}},
-       title = {{mpi4py-fft}},
-       url = {https://bitbucket.org/mpi4py/mpi4py-fft}
+       author = {{Lisandro Dalcin and Mikael Mortensen}},
+       title = {{mpi4py-fft}},
+       url = {{https://bitbucket.org/mpi4py/mpi4py-fft}}
    }

docs/source/installation.rst

Lines changed: 4 additions & 2 deletions
@@ -37,7 +37,7 @@ is not installed. This can be achieved with, e.g.,
 
 ::
 
-    conda create --name mpi4py-fft -c conda-forge mpi4py-fft mpich nomkl
+    conda create --name mpi4py-fft -c conda-forge mpi4py-fft mpich nomkl h5py=*=mpi*
 
 Note that the nomkl package makes sure that numpy is installed without
 mkl, whereas mpich here chooses this backend over openmpi.
@@ -50,11 +50,13 @@ any version of mpi4py-fft hosted on `pypi`_ using `pip`_
 
     pip install mpi4py-fft
 
-whereas the following will install the latest version from github
+whereas either one of the following will install the latest version
+from github
 
 ::
 
     pip install git+https://bitbucket.org/mpi4py/mpi4py-fft@master
+    pip install https://bitbucket.org/mpi4py/mpi4py-fft/get/master.zip
 
 You can also build mpi4py-fft yourselves from the top directory,
 after cloning or forking

mpi4py_fft/distributedarray.py

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
 import os
+from numbers import Number
 import numpy as np
 from mpi4py import MPI
 from .pencil import Pencil, Subcomm
@@ -85,7 +86,7 @@ def __new__(cls, global_shape, subcomm=None, val=None, dtype=np.float,
         if rank > 0:
             subshape = global_shape[:rank] + subshape
         obj = np.ndarray.__new__(cls, subshape, dtype=dtype, buffer=buffer)
-        if buffer is None and isinstance(val, int):
+        if buffer is None and isinstance(val, Number):
            obj.fill(val)
         obj._p0 = p0
         obj._rank = rank
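The change from `int` to `numbers.Number` widens the fill check so that float and complex initial values are honored, not just integers. A minimal standalone sketch of the effect (plain numpy, with a hypothetical `make_array` helper standing in for the `DistArray.__new__` logic):

```python
from numbers import Number
import numpy as np

def make_array(shape, val=None, dtype=float, buffer=None):
    """Create an array and fill it when an initial value is given.
    Mirrors the corrected check: any scalar Number triggers the fill.
    Illustrative stand-in, not the mpi4py-fft class itself."""
    obj = np.ndarray.__new__(np.ndarray, shape, dtype=dtype, buffer=buffer)
    if buffer is None and isinstance(val, Number):
        obj.fill(val)
    return obj

a = make_array((3, 3), val=1.5)               # float values now work
b = make_array((2,), val=2j, dtype=complex)   # so do complex values
print(a[0, 0], b[0])
```

With the old `isinstance(val, int)` check, `val=1.5` would silently leave the array uninitialized, since both `float` and `complex` fail that test while registering as `numbers.Number`.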
