⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀80⠀
```

## Multiprocessing
Submodule `GraphTensorNetworks.SimpleMultiprocessing` provides a function [`multiprocess_run`](@ref) for simple multi-processing jobs.
Suppose we want to compute the independence polynomial for multiple graphs using 4 processes.
We can create a script file, e.g. named `run.jl`, with the following content:

```julia
using Distributed, GraphTensorNetworks.SimpleMultiprocessing
using Random, GraphTensorNetworks  # load on the master first to avoid multiple precompilation
@everywhere using Random, GraphTensorNetworks

results = multiprocess_run(collect(1:10)) do seed
    Random.seed!(seed)
    n = 10
    @info "Graph size $n x $n, seed = $seed"
    g = random_diagonal_coupled_graph(n, n, 0.8)
    gp = Independence(g; optimizer=TreeSA(), simplifier=MergeGreedy())
    res = solve(gp, GraphPolynomial())[]
    return res
end

println(results)
```
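For readers curious what such a helper does, here is a minimal, hypothetical sketch built only on the `Distributed` standard library; it is not the library's actual implementation, and the name `my_multiprocess_run` is made up for illustration:

```julia
using Distributed

# Distribute one job per input element over the available worker
# processes; with no extra workers, pmap falls back to the master.
function my_multiprocess_run(f, jobs)
    pmap(jobs) do job
        @info "running argument $job on device $(myid())"
        f(job)
    end
end

# Results come back in input order, regardless of which worker ran what.
results = my_multiprocess_run(x -> x^2, collect(1:5))
println(results)  # prints [1, 4, 9, 16, 25]
```

Starting Julia with `-p 4` would spread these jobs over the four workers, mirroring the log lines shown below.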

One can run this script with the following command:
```bash
$ julia -p4 run.jl
From worker 3: [ Info: running argument 4 on device 3
From worker 4: [ Info: running argument 2 on device 4
From worker 5: [ Info: running argument 3 on device 5
From worker 2: [ Info: running argument 1 on device 2
From worker 3: [ Info: Graph size 10 x 10, seed = 4
From worker 4: [ Info: Graph size 10 x 10, seed = 2
From worker 5: [ Info: Graph size 10 x 10, seed = 3
From worker 2: [ Info: Graph size 10 x 10, seed = 1
From worker 4: [ Info: running argument 5 on device
...
```
You will see a vector of polynomials printed out.

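If you prefer not to pass `-p` on the command line, worker processes can also be added from within the script itself. A minimal sketch using only the `Distributed` standard library:

```julia
using Distributed

# Add 4 local worker processes, equivalent to starting julia with `-p 4`.
addprocs(4)

# Packages the workers need must be loaded with @everywhere *after* addprocs,
# otherwise the new processes will not have them in scope.
@everywhere using Random

println("worker ids: ", workers())
```

Either way, `@everywhere` must run after all workers exist, which is why the `run.jl` script above loads the packages on the master first and then broadcasts the `using` statement.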
## Make use of GPUs
To offload the computation to a GPU, you just need to use CUDA and provide an extra keyword argument.