Here, the `problem` is an `IndependentSet` instance; it contains the tensor network with the contraction tree for the target graph (stored in the `code` field).
Similarly, one can select tensor network structures for solving other problems such as `MaximalIS`, `MaxCut`, `Matching`, `Coloring{K}`, `PaintShop` and `set_packing`, for example as sketched below.
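The snippet below is only a sketch, reusing the `graph` defined earlier on this page: other problem types follow the same constructor pattern and accept the same `optimizer`/`simplifier` keyword arguments.

```julia
# Sketch, assuming the same `graph` as above: a max-cut tensor network is
# constructed the same way, and `solve` queries it with the same properties.
julia> cut_problem = MaxCut(graph; optimizer=TreeSA());

julia> solve(cut_problem, SizeMax())
```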
If you want to enumerate all MISs, we highly recommend using the bounded version to save computational effort. One can also save the configurations to disk and load them back later (a sketch is given after the next code block).
```julia
julia> cs = solve(problem, "configs max (bounded)")[1].c  # the `c` field is a `ConfigEnumerator`
```
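The save/load snippet below is only a sketch: it assumes the package's `save_configs`/`load_configs` helpers and a hypothetical file name `configs.dat`; check `?save_configs` and `?load_configs` in the REPL for the exact keyword arguments of your package version.

```julia
# Sketch only: `configs.dat` is a hypothetical file name; the keyword arguments
# follow the package docstrings and may differ between versions.
julia> save_configs("configs.dat", cs; format=:binary)

julia> cs2 = load_configs("configs.dat"; format=:binary, bitlength=nv(graph))
```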
The contraction order of the tensor network can be tuned by specifying a contraction order optimizer when constructing the problem instance:

```julia
julia> problem = IndependentSet(graph; optimizer=TreeSA(sc_target=20, sc_weight=1.0, rw_weight=3.0,
           ntrials=10, βs=0.01:0.1:15.0, niters=20), simplifier=MergeGreedy());
```
The keyword argument `optimizer` decides the contraction order optimizer of the tensor network.
Here, we choose the `TreeSA` optimizer to optimize the tensor network contraction tree; it is a local search based algorithm.
It is one of the state-of-the-art tensor network contraction order optimizers; one may check [arXiv: 2108.05665](https://arxiv.org/abs/2108.05665) to learn more about the algorithm.
Other optimizers include

* [`GreedyMethod`](@ref) (default, fastest in searching speed but worst in contraction complexity)
* [`TreeSA`](@ref)
* [`KaHyParBipartite`](@ref)
* [`SABipartite`](@ref)

One can type `?TreeSA` in a Julia REPL for more information about how to configure the hyper-parameters of the `TreeSA` method.
If `sc_target` (the target space complexity) is set aggressively low, the optimizer may print warnings about not reaching the target; do not panic, it still returns the best contraction order it found.
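As a quick comparison (a sketch, reusing the `graph` defined earlier on this page), one can rebuild the problem with the default `GreedyMethod` and inspect how the contraction complexity changes:

```julia
# Sketch: GreedyMethod (the default) finds a contraction order much faster than
# TreeSA, but the order it finds usually has a larger time/space complexity.
julia> problem_greedy = IndependentSet(graph; optimizer=GreedyMethod());

julia> timespacereadwrite_complexity(problem_greedy)
```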

The `simplifier` keyword argument is less important; it is a preprocessing routine to improve the searching speed of the `optimizer`.

The returned instance `problem` contains a field `code` that specifies the tensor network contraction order. For an independent set problem, the optimal contraction time and space complexity is ``2^{{\rm tw}(G)}``, where ``{\rm tw}(G)`` is the [tree-width](https://en.wikipedia.org/wiki/Treewidth) of ``G``.
One can check the time, space and read-write complexity with the following function.
```julia
julia> timespacereadwrite_complexity(problem)
```

The return values are the `log2` values of the number of iterations, the number of elements in the largest tensor and the number of read-write operations to tensor elements.
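For a rough feeling of what these numbers mean, here is a sketch; the concrete values are hypothetical and depend on the graph and on the random seed of `TreeSA`.

```julia
# Hypothetical illustration: destructure the (time, space, read-write) triple and
# turn the space complexity into an estimated size of the largest tensor.
julia> tc, sc, rw = timespacereadwrite_complexity(problem);

julia> 2^sc * sizeof(Float64) / 2^20   # largest-tensor memory in MiB, assuming 8-byte elements
```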
## GEMM for Tropical numbers
You can speed up the Tropical number matrix multiplication when computing `SizeMax()` by using the Tropical GEMM routines implemented in package [`TropicalGEMM.jl`](https://github.com/TensorBFS/TropicalGEMM.jl/).
```julia
julia> using BenchmarkTools

julia> @btime solve(problem, SizeMax())
  91.630 ms (19203 allocations: 23.72 MiB)
0-dimensional Array{TropicalF64, 0}:
53.0ₜ

julia> using TropicalGEMM

julia> @btime solve(problem, SizeMax())
  8.960 ms (18532 allocations: 17.01 MiB)
0-dimensional Array{TropicalF64, 0}:
53.0ₜ
```

`TropicalGEMM` pirates the `LinearAlgebra.mul!` interface, hence it takes effect as soon as it is loaded with `using`.
The GEMM routine speeds up the CPU computation by about one order of magnitude; with multi-threading it can be even faster.
Benchmarks show that the performance of `TropicalGEMM` is close to the theoretical optimum.
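The snippet below is a sketch of what this type piracy means in practice: once `TropicalGEMM` is loaded, plain matrix multiplication of Tropical matrices (built here with `TropicalNumbers`, an assumed dependency) goes through the fast kernel, so no code in this package needs to change.

```julia
# Sketch only: construct two random Tropical matrices and multiply them; after
# `using TropicalGEMM`, the same `*` call runs through the pirated
# `LinearAlgebra.mul!` fast path.
julia> using TropicalNumbers, TropicalGEMM

julia> A = Tropical.(randn(1000, 1000)); B = Tropical.(randn(1000, 1000));

julia> A * B;
```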
## Make use of GPUs

To offload the computation to a GPU, you just need to load CUDA and pass one extra keyword argument to `solve`.

```julia
julia> using CUDA
[ Info: OMEinsum loaded the CUDA module successfully
```
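The keyword itself is not shown in this excerpt; as a sketch based on the package documentation, the GPU path is selected by passing `usecuda=true` to `solve` (this assumes a functional CUDA.jl installation):

```julia
# Sketch: ask `solve` to run the tensor network contraction on a CUDA GPU.
julia> solve(problem, SizeMax(), usecuda=true)
```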
# Let us construct the problem instance with an optimized tensor network contraction order as below.
problem = IndependentSet(graph; optimizer=TreeSA());
# In the input arguments of [`IndependentSet`](@ref), the `optimizer` is for optimizing the contraction orders.
# Here we use the local search based optimizer `TreeSA`.
# The returned instance `problem` contains a field `code` that specifies the tensor network contraction order.
# The optimal contraction time and space complexity of an independent set problem is ``2^{{\rm tw}(G)}``,
# where ``{\rm tw}(G)`` is the [tree-width](https://en.wikipedia.org/wiki/Treewidth) of ``G``.
# One can check the time, space and read-write complexity with the following function.
timespacereadwrite_complexity(problem)
# The return values are the `log2` values of the number of iterations, the number of elements in the largest tensor and the number of read-write operations to tensor elements.
# For more information about the performance, please check the [Performance Tips](@ref).