docs/jacobi_tutorial.jl (30 additions, 26 deletions)
@@ -1,6 +1,6 @@
-# # Getting started
+# # Jacobi method
 #
-# Welcome to this "Getting Started" tutorial on using the PartitionedArrays.jl package. In this guide, you'll learn how to implement a parallel version of the one-dimensional Jacobi method using PartitionedArrays. Before you start, please make sure to have installed the following packages:
+# In this tutorial, you'll learn how to implement a parallel version of the one-dimensional Jacobi method using PartitionedArrays. Before you start, please make sure to have installed the following packages:
 # ```julia
 # using Pkg
 # Pkg.add("PartitionedArrays")
@@ -97,18 +97,19 @@ jacobi(10,100)
 
 # ## Parallel version
 # Next, we will implement a parallelized version of the Jacobi method using partitioned arrays. The parallel function will take the number of processes $p$ as an additional argument.
-
-function jacobi_par(n,niters,p)
-
-end
+# ```julia
+# function jacobi_par(n,niters,p)
+#     # TODO
+# end
+# ```
 
 using PartitionedArrays
 
 # Define the grid size `n` and the number of iterations `niters`. We also specify the number of processors as 3.
 
 n = 10
 niters = 100
-p = 3
+p = 3;
 
 # The following line creates an array of Julia type `LinearIndices`. This array holds linear indices of a specified range and shape ([documentation](https://docs.julialang.org/en/v1/base/arrays/#Base.LinearIndices)).
 
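For reference, the serial `jacobi(10,100)` invoked in the hunk header above is not shown in this diff. A minimal sketch of what such a serial version plausibly looks like, assuming the standard 1D Laplace stencil and the Dirichlet values $u(0) = -1$, $u(L) = 1$ described later in the tutorial:

```julia
# Hypothetical reconstruction, not part of this diff: Jacobi iteration for
# the 1D Laplace equation with Dirichlet boundary values -1 and 1.
function jacobi(n, niters)
    u = zeros(n + 2)         # n interior points plus the two boundary points
    u[1] = -1; u[end] = 1    # boundary conditions
    u_new = copy(u)
    for _ in 1:niters
        for i in 2:(n + 1)
            u_new[i] = 0.5 * (u[i - 1] + u[i + 1])  # average of the neighbors
        end
        u, u_new = u_new, u  # swap the two buffers instead of copying
    end
    u
end

jacobi(10, 100)
```

The parallel `jacobi_par` sketched in the placeholder above keeps this structure but distributes `u` across the `p` processes.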
@@ -122,9 +123,11 @@ ranks = LinearIndices((p,))
 ranks = DebugArray(LinearIndices((p,)))
 
 # To demonstrate that `DebugArray` emulates the limitations of `MPIArray`, run the following code. It is expected to throw an error, since indexing is not permitted.
…
 # Note that, like `DebugArray`, a `PVector` represents an array whose elements are distributed (i.e. partitioned) across processes, and indexing is disabled here as well. Therefore, the following examples are expected to raise an error.
-# ```julia
-# u[1]
-# u[end]
-# ```
+
+try
+    u[1]
+    u[end]
+catch e
+    println(e)
+end
 
 # To view the local values of a partitioned vector, use the method `partition` or `local_values`.
 
 partition(u)
 
 # `partition` returns a `DebugArray`, so again indexing, such as in the following examples, is not permitted.
-# ```julia
-# partition(u)[1][1]
-# partition(u)[end][end]
-# ```
 
+try
+    partition(u)[1][1]
+    partition(u)[end][end]
+catch e
+    println(e)
+end
 
-# ### Initialize boundary conditions
-# The values of the partition are still all 0, so next we need to set the correct boundary conditions.
-# ```julia
-# u[1] = -1
-# u[end] = 1
-# ```
 
+# ### Initialize boundary conditions
+# The values of the partition are still all 0, so next we need to set the correct boundary conditions: $u(0) = -1$ and $u(L) = 1$.
+#
+#
 
 # Since `PVector` is distributed, one process cannot access the values that are owned by other processes, so we need to find a different approach. Each process can set the boundary conditions locally. This is possible with the following piece of code. Using the Julia function `map`, the function `set_bcs` is executed locally by each process on its locally available part of `partition(u)`. These local partitions are standard Julia `Vector`s and may be indexed.
 
 function set_bcs(my_u,rank)
-    @show rank
     if rank == 1
         my_u[1] = 1
     end
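One sanity check worth noting for either implementation: with the Dirichlet values above, the exact steady solution of the 1D Laplace problem is the linear ramp

$$u(x) = -1 + \frac{2x}{L},$$

so after enough Jacobi iterations the entries of `u` should approach evenly spaced values between $-1$ and $1$.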
@@ -206,7 +211,6 @@ partition(u)
 # Remember that performing the Jacobi update required alternating writes between one data structure `u` and another `u_new`. Hence, we need to create a second data structure to hold a copy of our partition. Using the Julia function `copy`, the new object has the same type and values as the original data structure `u`.
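Spelled out with the tutorial's names, that step is a single line:

```julia
u_new = copy(u)  # a new PVector with the same type and values as u
```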
docs/src/jacobi_tutorial.md (36 additions, 24 deletions)
@@ -2,9 +2,9 @@
 EditURL = "../jacobi_tutorial.jl"
 ```
 
-# Getting started
+# Jacobi method
 
-Welcome to this "Getting Started" tutorial on using the PartitionedArrays.jl package. In this guide, you'll learn how to implement a parallel version of the one-dimensional Jacobi method using PartitionedArrays. Before you start, please make sure to have installed the following packages:
+In this tutorial, you'll learn how to implement a parallel version of the one-dimensional Jacobi method using PartitionedArrays. Before you start, please make sure to have installed the following packages:
 ```julia
 using Pkg
 Pkg.add("PartitionedArrays")
@@ -101,12 +101,13 @@ Thus, the algorithm is usually implemented following two main phases at each ite
 
 ## Parallel version
 Next, we will implement a parallelized version of the Jacobi method using partitioned arrays. The parallel function will take the number of processes $p$ as an additional argument.
-
-````@example jacobi_tutorial
+```julia
 function jacobi_par(n,niters,p)
-
+    # TODO
 end
+```
 
+````@example jacobi_tutorial
 using PartitionedArrays
 ````
 
@@ -115,7 +116,8 @@ Define the grid size `n` and the number of iterations `niters`. We also specify
 ````@example jacobi_tutorial
 n = 10
 niters = 100
-p = 3
+p = 3;
+nothing #hide
 ````
 
 The following line creates an array of Julia type `LinearIndices`. This array holds linear indices of a specified range and shape ([documentation](https://docs.julialang.org/en/v1/base/arrays/#Base.LinearIndices)).
…
 To demonstrate that `DebugArray` emulates the limitations of `MPIArray`, run the following code. It is expected to throw an error, since indexing is not permitted.
-```julia
-ranks[1]
-```
+
+````@example jacobi_tutorial
+try
+    ranks[1]
+catch e
+    println(e)
+end
+````
 
 ### Partition the data
 
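As a small plain-Julia illustration of `LinearIndices` itself (independent of PartitionedArrays): unlike the wrapped `DebugArray` above, a bare `LinearIndices` object is an ordinary array and can be indexed freely.

```julia
ids = LinearIndices((3,))  # linear indices for a length-3 shape
collect(ids)               # [1, 2, 3]
ids[2]                     # 2, since plain indexing is allowed here
```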
@@ -180,10 +187,15 @@ u = pzeros(Float64,row_partition)
 ````
 
 Note that, like `DebugArray`, a `PVector` represents an array whose elements are distributed (i.e. partitioned) across processes, and indexing is disabled here as well. Therefore, the following examples are expected to raise an error.
-```julia
-u[1]
-u[end]
-```
+
+````@example jacobi_tutorial
+try
+    u[1]
+    u[end]
+catch e
+    println(e)
+end
+````
 
 To view the local values of a partitioned vector, use the method `partition` or `local_values`.
 
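Because global indexing is disabled, one way to inspect the distributed data that stays within the rules is to `map` a function over the local pieces, just as the tutorial does later with `set_bcs`. A small sketch, assuming the `u` from the `@example` blocks above:

```julia
map(length, partition(u))   # how many entries each rank owns
map(sum, local_values(u))   # a per-rank reduction, computed locally
```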
@@ -192,23 +204,24 @@ partition(u)
 ````
 
 `partition` returns a `DebugArray`, so again indexing, such as in the following examples, is not permitted.
-```julia
-partition(u)[1][1]
-partition(u)[end][end]
-```
+
+````@example jacobi_tutorial
+try
+    partition(u)[1][1]
+    partition(u)[end][end]
+catch e
+    println(e)
+end
+````
 
 ### Initialize boundary conditions
-The values of the partition are still all 0, so next we need to set the correct boundary conditions.
-```julia
-u[1] = -1
-u[end] = 1
-```
+The values of the partition are still all 0, so next we need to set the correct boundary conditions: $u(0) = -1$ and $u(L) = 1$.
+
 
 Since `PVector` is distributed, one process cannot access the values that are owned by other processes, so we need to find a different approach. Each process can set the boundary conditions locally. This is possible with the following piece of code. Using the Julia function `map`, the function `set_bcs` is executed locally by each process on its locally available part of `partition(u)`. These local partitions are standard Julia `Vector`s and may be indexed.
 
 ````@example jacobi_tutorial
 function set_bcs(my_u,rank)
-    @show rank
     if rank == 1
         my_u[1] = 1
     end
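Written out, the call that the paragraph above describes is presumably the following, with each rank receiving its own local vector together with its rank id:

```julia
map(set_bcs, partition(u), ranks)  # runs set_bcs locally on every rank
```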
@@ -234,7 +247,6 @@ Remember that to perform the Jacobi update, alternate writing to one data struct