
Commit 3126858

Minor fixes
1 parent 9b81243 commit 3126858

File tree

3 files changed (+69 lines, -51 lines)


docs/jacobi_tutorial.jl

Lines changed: 30 additions & 26 deletions
@@ -1,6 +1,6 @@
-# # Getting started
+# # Jacobi method
 #
-# Welcome to this "Getting Started" tutorial on using the PartitionedArrays.jl package. In this guide, you'll learn how to implement a parallel version of the one-dimensional Jacobi method using PartitionedArrays. Before you start, please make sure to have installed the following packages:
+# In this tutorial, you'll learn how to implement a parallel version of the one-dimensional Jacobi method using PartitionedArrays. Before you start, please make sure to have installed the following packages:
 # ```julia
 # using Pkg
 # Pkg.add("PartitionedArrays")
@@ -97,18 +97,19 @@ jacobi(10,100)
 
 # ## Parallel version
 # Next, we will implement a parallelized version of Jacobi method using partitioned arrays. The parallel function will take the number of processes $p$ as an additional argument.
-
-function jacobi_par(n,niters,p)
-
-end
+# ```julia
+# function jacobi_par(n,niters,p)
+#     # TODO
+# end
+# ```
 
 using PartitionedArrays
 
 # Define the grid size `n` and the number of iterations `niters`. We also specify the number of processors as 3.
 
 n = 10
 niters = 100
-p = 3
+p = 3;
 
 # The following line creates an array of Julia type `LinearIndices`. This array holds linear indices of a specified range and shape ([documentation](https://docs.julialang.org/en/v1/base/arrays/#Base.LinearIndices)).
 
@@ -122,9 +123,11 @@ ranks = LinearIndices((p,))
 ranks = DebugArray(LinearIndices((p,)))
 
 # To demonstrate that `DebugArray` emulates the limitations of `MPIArray`, run the following code. It is expected to throw an error, since indexing is not permitted.
-# ```julia
-# ranks[1]
-# ```
+try
+    ranks[1]
+catch e
+    println(e)
+end
 
 
 # ### Partition the data
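Since direct indexing is disallowed, the supported pattern, for `DebugArray` and `MPIArray` alike, is to act on the ranks through `map`. A small illustration (not part of the commit):

```julia
# Each process sees only its own rank value inside the map body.
map(ranks) do rank
    println("Hello from rank $rank of $p")
end
```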
@@ -157,34 +160,36 @@ map(ghost_to_owner, row_partition)
 u = pzeros(Float64,row_partition)
 
 # Note that, like `DebugArray`, a `PVector` represents an array whose elements are distributed (i.e. partitioned) across processes, and indexing is disabled here as well. Therefore, the following examples are expected to raise an error.
-# ```julia
-# u[1]
-# u[end]
-# ```
+
+try
+    u[1]
+    u[end]
+catch e
+    println(e)
+end
 
 
 # To view the local values of a partitioned vector, use method `partition` or `local_values`.
 
 partition(u)
 
 # Partition returns a `DebugArray`, so again indexing, such as in the following examples, is not permitted.
-# ```julia
-# partition(u)[1][1]
-# partition(u)[end][end]
-# ```
 
+try
+    partition(u)[1][1]
+    partition(u)[end][end]
+catch e
+    println(e)
+end
 
-# ### Initialize boundary conditions
-# The values of the partition are still all 0, so next we need to set the correct boundary conditions.
-# ```julia
-# u[1] = -1
-# u[end] = 1
-# ```
 
+# ### Initialize boundary conditions
+# The values of the partition are still all 0, so next we need to set the correct boundary conditions: $u(0) = -1$ and $u(L)= 1$.
+#
+#
 # Since `PVector` is distributed, one process cannot access the values that are owned by other processes, so we need to find a different approach. Each process can set the boundary conditions locally. This is possible with the following piece of code. Using Julia function `map`, the function `set_bcs` is executed locally by each process on its locally available part of `partition(u)`. These local partitions are standard Julia `Vector`s and are allowed to be indexed.
 
 function set_bcs(my_u,rank)
-    @show rank
     if rank == 1
         my_u[1] = 1
     end
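The hunk is cut off inside `set_bcs`, but the surrounding prose spells out how it is meant to be applied: `map` hands each process its local vector from `partition(u)` together with its rank. The call would look roughly like this (an illustration based on that description, not code shown in the diff):

```julia
# Run the local boundary-condition setter on every process, then inspect the result.
map(set_bcs, partition(u), ranks)
partition(u)
```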
@@ -206,7 +211,6 @@ partition(u)
 # Remember that to perform the Jacobi update, alternate writing to one data structure `u` and another `u_new` was required. Hence, we need to create a second data structure to hold a copy of our partition. Using Julia function `copy`, the new object has the same type and values as the original data structure `u`.
 
 u_new = copy(u)
-
 partition(u_new)
 
 # ### Communication of ghost cell values
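The diff for this file ends at the heading on ghost-cell communication. In PartitionedArrays that exchange is performed by `consistent!`, which copies each owned value into the matching ghost entries of neighbouring parts; the snippet below is a hedged sketch of that step, not content of this commit.

```julia
# Start the exchange and wait for it to complete before the ghost entries of u are read.
t = consistent!(u)
wait(t)
partition(u)   # ghost values now mirror their owners
```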

docs/make.jl

Lines changed: 3 additions & 1 deletion
@@ -18,7 +18,9 @@ makedocs(
     "Introduction" => "index.md",
     "usage.md",
     "examples.md",
-    "jacobi_tutorial.md",
+    "Tutorials" =>[
+        "jacobi_tutorial.md",
+    ],
     "Reference" =>[
         "reference/backends.md",
         "reference/arraymethods.md",

docs/src/jacobi_tutorial.md

Lines changed: 36 additions & 24 deletions
@@ -2,9 +2,9 @@
 EditURL = "../jacobi_tutorial.jl"
 ```
 
-# Getting started
+# Jacobi method
 
-Welcome to this "Getting Started" tutorial on using the PartitionedArrays.jl package. In this guide, you'll learn how to implement a parallel version of the one-dimensional Jacobi method using PartitionedArrays. Before you start, please make sure to have installed the following packages:
+In this tutorial, you'll learn how to implement a parallel version of the one-dimensional Jacobi method using PartitionedArrays. Before you start, please make sure to have installed the following packages:
 ```julia
 using Pkg
 Pkg.add("PartitionedArrays")
@@ -101,12 +101,13 @@ Thus, the algorithm is usually implemented following two main phases at each ite
 
 ## Parallel version
 Next, we will implement a parallelized version of Jacobi method using partitioned arrays. The parallel function will take the number of processes $p$ as an additional argument.
-
-````@example jacobi_tutorial
+```julia
 function jacobi_par(n,niters,p)
-
+    # TODO
 end
+```
 
+````@example jacobi_tutorial
 using PartitionedArrays
 ````
 
@@ -115,7 +116,8 @@ Define the grid size `n` and the number of iterations `niters`. We also specify
 ````@example jacobi_tutorial
 n = 10
 niters = 100
-p = 3
+p = 3;
+nothing #hide
 ````
 
 The following line creates an array of Julia type `LinearIndices`. This array holds linear indices of a specified range and shape ([documentation](https://docs.julialang.org/en/v1/base/arrays/#Base.LinearIndices)).
@@ -134,9 +136,14 @@ ranks = DebugArray(LinearIndices((p,)))
 ````
 
 To demonstrate that `DebugArray` emulates the limitations of `MPIArray`, run the following code. It is expected to throw an error, since indexing is not permitted.
-```julia
-ranks[1]
-```
+
+````@example jacobi_tutorial
+try
+    ranks[1]
+catch e
+    println(e)
+end
+````
 
 ### Partition the data
 
@@ -180,10 +187,15 @@ u = pzeros(Float64,row_partition)
 ````
 
 Note that, like `DebugArray`, a `PVector` represents an array whose elements are distributed (i.e. partitioned) across processes, and indexing is disabled here as well. Therefore, the following examples are expected to raise an error.
-```julia
-u[1]
-u[end]
-```
+
+````@example jacobi_tutorial
+try
+    u[1]
+    u[end]
+catch e
+    println(e)
+end
+````
 
 To view the local values of a partitioned vector, use method `partition` or `local_values`.
 
@@ -192,23 +204,24 @@ partition(u)
 ````
 
 Partition returns a `DebugArray`, so again indexing, such as in the following examples, is not permitted.
-```julia
-partition(u)[1][1]
-partition(u)[end][end]
-```
+
+````@example jacobi_tutorial
+try
+    partition(u)[1][1]
+    partition(u)[end][end]
+catch e
+    println(e)
+end
+````
 
 ### Initialize boundary conditions
-The values of the partition are still all 0, so next we need to set the correct boundary conditions.
-```julia
-u[1] = -1
-u[end] = 1
-```
+The values of the partition are still all 0, so next we need to set the correct boundary conditions: $u(0) = -1$ and $u(L)= 1$.
+
 
 Since `PVector` is distributed, one process cannot access the values that are owned by other processes, so we need to find a different approach. Each process can set the boundary conditions locally. This is possible with the following piece of code. Using Julia function `map`, the function `set_bcs` is executed locally by each process on its locally available part of `partition(u)`. These local partitions are standard Julia `Vector`s and are allowed to be indexed.
 
 ````@example jacobi_tutorial
 function set_bcs(my_u,rank)
-    @show rank
     if rank == 1
         my_u[1] = 1
     end
@@ -234,7 +247,6 @@ Remember that to perform the Jacobi update, alternate writing to one data struct
 
 ````@example jacobi_tutorial
 u_new = copy(u)
-
 partition(u_new)
 ````
 