Commit 4820e8d

multiprocessing

1 parent 0ba5301 commit 4820e8d

File tree

7 files changed: +109 −1 lines changed

Project.toml

Lines changed: 1 addition & 0 deletions

@@ -9,6 +9,7 @@ CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
 Cairo = "159f3aea-2a34-519c-b102-8c37f9878175"
 Compose = "a81c6b42-2e10-5240-aca2-a61377ecd94b"
 DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
+Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
 FFTW = "7a1cc6ca-52ef-59f5-83cd-3a7055c09341"
 Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"

docs/src/performancetips.md

Lines changed: 38 additions & 1 deletion

@@ -135,7 +135,44 @@ julia> lineplot(hamming_distribution(samples, samples))
 ⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀80⠀
 ```
 
-(To be written.)
+## Multiprocessing
+Submodule `GraphTensorNetworks.SimpleMultiprocessing` provides a [`multiprocess_run`](@ref) function for simple multiprocessing jobs.
+Suppose we want to find the independence polynomial for multiple graphs with 4 processes.
+We can create a file, e.g. named `run.jl`, with the following content
+
+```julia
+using Distributed, GraphTensorNetworks.SimpleMultiprocessing
+using Random, GraphTensorNetworks  # to avoid multi-precompiling
+@everywhere using Random, GraphTensorNetworks
+
+results = multiprocess_run(collect(1:10)) do seed
+    Random.seed!(seed)
+    n = 10
+    @info "Graph size $n x $n, seed= $seed"
+    g = random_diagonal_coupled_graph(n, n, 0.8)
+    gp = Independence(g; optimizer=TreeSA(), simplifier=MergeGreedy())
+    res = solve(gp, GraphPolynomial())[]
+    return res
+end
+
+println(results)
+```
+
+One can run this script with the following command
+```bash
+$ julia -p4 run.jl
+      From worker 3:    [ Info: running argument 4 on device 3
+      From worker 4:    [ Info: running argument 2 on device 4
+      From worker 5:    [ Info: running argument 3 on device 5
+      From worker 2:    [ Info: running argument 1 on device 2
+      From worker 3:    [ Info: Graph size 10 x 10, seed= 4
+      From worker 4:    [ Info: Graph size 10 x 10, seed= 2
+      From worker 5:    [ Info: Graph size 10 x 10, seed= 3
+      From worker 2:    [ Info: Graph size 10 x 10, seed= 1
+      From worker 4:    [ Info: running argument 5 on device 4
+...
+```
+You will see a vector of polynomials printed out.
 
 ## Make use of GPUs
 To upload the computation to the GPU, you just need to use CUDA and provide a new keyword argument.
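The `run.jl` script above maps one job per random seed onto a pool of worker processes. The same map-over-inputs pattern can be sketched with Python's standard library; this is an illustrative analogue only (the `job` function is a toy stand-in for the independence-polynomial computation, not part of GraphTensorNetworks):

```python
# Illustrative analogue of the `run.jl` script using Python's multiprocessing.
from multiprocessing import Pool
import random

def job(seed):
    # Seed a private RNG per job, mirroring `Random.seed!(seed)` in run.jl.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(10))  # stand-in workload

if __name__ == "__main__":
    with Pool(processes=4) as pool:              # like `julia -p4`
        results = pool.map(job, range(1, 11))    # one job per seed
    print(results)
```

One difference: `Pool.map` returns results in input order, whereas the log above shows jobs completing in whatever order the workers finish.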

docs/src/ref.md

Lines changed: 5 additions & 0 deletions

@@ -137,6 +137,11 @@ random_square_lattice_graph
 
 One can also use `random_regular_graph` and `smallgraph` in [Graphs](https://github.com/JuliaGraphs/Graphs.jl) to build special graphs.
 
+#### Multiprocessing
+```@docs
+GraphTensorNetworks.SimpleMultiprocessing.multiprocess_run
+```
+
 #### Shortcuts
 ```@docs
 max_size

src/GraphTensorNetworks.jl

Lines changed: 1 addition & 0 deletions

@@ -65,6 +65,7 @@ include("bounding.jl")
 include("visualize.jl")
 include("interfaces.jl")
 include("deprecate.jl")
+include("multiprocessing.jl")
 
 using Requires
 function __init__()

src/multiprocessing.jl

Lines changed: 54 additions & 0 deletions

@@ -0,0 +1,54 @@
+module SimpleMultiprocessing
+using Distributed
+export multiprocess_run
+
+function do_work(f, jobs, results) # define the work function everywhere
+    while true
+        job = take!(jobs)
+        @info "running argument $job on device $(Distributed.myid())"
+        res = f(job)
+        put!(results, res)
+    end
+end
+
+"""
+    multiprocess_run(func, inputs::AbstractVector)
+
+Execute function `func` on `inputs` using multiple processes.
+
+Example
+---------------------------
+Suppose we have a file `run.jl` with the following contents
+```julia
+using GraphTensorNetworks.SimpleMultiprocessing
+
+results = multiprocess_run(x->x^2, randn(8))
+```
+
+In a terminal, you may run the script with 4 processes by typing
+```bash
+\$ julia -p4 run.jl
+      From worker 2:    [ Info: running argument -0.17544008350172655 on device 2
+      From worker 5:    [ Info: running argument 0.34578117779452555 on device 5
+      From worker 3:    [ Info: running argument 2.0312551239727705 on device 3
+      From worker 4:    [ Info: running argument -0.7319353419291961 on device 4
+      From worker 2:    [ Info: running argument 0.013132180639054629 on device 2
+      From worker 3:    [ Info: running argument 0.9960101782201602 on device 3
+      From worker 4:    [ Info: running argument -0.5613942832743966 on device 4
+      From worker 5:    [ Info: running argument 0.39460402723831134 on device 5
+```
+"""
+function multiprocess_run(func, inputs::AbstractVector{T}) where T
+    n = length(inputs)
+    jobs = RemoteChannel(()->Channel{T}(n));
+    results = RemoteChannel(()->Channel{Any}(n));
+    for i in 1:n
+        put!(jobs, inputs[i])
+    end
+    for p in workers() # start tasks on the workers to process requests in parallel
+        remote_do(do_work, p, func, jobs, results)
+    end
+    return Any[take!(results) for i=1:n]
+end
+
+end
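The channel-based design above (a shared `jobs` queue drained by a `do_work` loop on each worker, with results funneled into a second queue) can be sketched with Python's `multiprocessing` queues. This is a hypothetical mirror for illustration, with names chosen to match the Julia code rather than any real API; note that, as in the Julia version, results come back in completion order rather than input order:

```python
# A sketch of the same job-queue pattern in Python (hypothetical mirror of
# `multiprocess_run`; not part of the GraphTensorNetworks package).
from multiprocessing import Process, Queue

def do_work(f, jobs, results):
    # Each worker repeatedly takes a job, applies f, and reports the result.
    while True:
        job = jobs.get()
        if job is None:        # sentinel: no more work for this worker
            break
        results.put(f(job))

def multiprocess_run(func, inputs, nworkers=4):
    jobs, results = Queue(), Queue()
    for x in inputs:
        jobs.put(x)
    for _ in range(nworkers):  # one stop sentinel per worker
        jobs.put(None)
    procs = [Process(target=do_work, args=(func, jobs, results))
             for _ in range(nworkers)]
    for p in procs:
        p.start()
    out = [results.get() for _ in inputs]  # completion order, like take!(results)
    for p in procs:
        p.join()
    return out
```

One difference worth noting: the Julia `do_work` loops forever on `take!(jobs)`, so idle workers simply block once the queue drains, whereas this sketch shuts workers down with sentinels.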

test/multiprocessing.jl

Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+using GraphTensorNetworks.SimpleMultiprocessing, Test
+
+@testset "multiprocessing" begin
+    results = multiprocess_run(x->x^2, collect(1:5))
+    @test results == [1, 2, 3, 4, 5] .^ 2
+end

test/runtests.jl

Lines changed: 4 additions & 0 deletions

@@ -37,6 +37,10 @@ end
     include("visualize.jl")
 end
 
+@testset "multiprocessing" begin
+    include("multiprocessing.jl")
+end
+
 # --------------
 # doctests
 # --------------

0 commit comments