
Commit c275072

committed
initial
1 parent 7e884de commit c275072

File tree

5 files changed: +134 −102 lines changed


docs/Manifest.toml

Lines changed: 0 additions & 100 deletions
This file was deleted.

docs/Project.toml

Lines changed: 1 addition & 0 deletions
@@ -1,3 +1,4 @@
 [deps]
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
 TensorInference = "c2297e78-99bd-40ad-871d-f50e56b81012"

docs/make.jl

Lines changed: 18 additions & 2 deletions
@@ -1,10 +1,20 @@
 using TensorInference
-using Documenter
+using TensorInference: OMEinsum, OMEinsumContractionOrders
+using Documenter, Literate
+
+# Literate
+for each in readdir(pkgdir(TensorInference, "examples"))
+    input_file = pkgdir(TensorInference, "examples", each)
+    endswith(input_file, ".jl") || continue
+    @info "building" input_file
+    output_dir = pkgdir(TensorInference, "docs", "src", "generated")
+    Literate.markdown(input_file, output_dir; name=each[1:end-3], execute=false)
+end
 
 DocMeta.setdocmeta!(TensorInference, :DocTestSetup, :(using TensorInference); recursive=true)
 
 makedocs(;
-    modules=[TensorInference],
+    modules=[TensorInference, OMEinsumContractionOrders],
     authors="Jin-Guo Liu, Martin Roa Villescas",
     repo="https://github.com/TensorBFS/TensorInference.jl/blob/{commit}{path}#{line}",
     sitename="TensorInference.jl",
@@ -16,7 +26,13 @@ makedocs(;
     ),
     pages=[
         "Home" => "index.md",
+        "Examples" => [
+            "Asia network" => "generated/asia.md",
+        ],
+        "Performance Tips" => "performance.md",
+        "References" => "ref.md",
     ],
+    doctest = false,
 )
 
 deploydocs(;

docs/src/performance.md

Lines changed: 92 additions & 0 deletions
# Performance Tips

## Optimize contraction orders

Let us use the UAI 2014 inference problem `Promedus_14` as an example.
```julia
julia> using TensorInference, Artifacts

julia> function get_instance_filepaths(problem_name::AbstractString, task::AbstractString)
           model_filepath = joinpath(artifact"uai2014", task, problem_name * ".uai")
           evidence_filepath = joinpath(artifact"uai2014", task, problem_name * ".uai.evid")
           solution_filepath = joinpath(artifact"uai2014", task, problem_name * ".uai." * task)
           return model_filepath, evidence_filepath, solution_filepath
       end

julia> model_filepath, evidence_filepath, solution_filepath = get_instance_filepaths("Promedus_14", "MAR")

julia> instance = read_instance(model_filepath; evidence_filepath, solution_filepath)
```

Next, we select the tensor network contraction order optimizer.
```julia
julia> optimizer = TreeSA(ntrials = 1, niters = 5, βs = 0.1:0.1:100)
```

Here, we choose the local-search-based [`TreeSA`](@ref) algorithm, which often finds the smallest time/space complexity and supports slicing.
One can type `?TreeSA` in a Julia REPL for more information about how to configure its hyper-parameters;
the algorithm itself is explained in detail in [arXiv: 2108.05665](https://arxiv.org/abs/2108.05665).
Alternative tensor network contraction order optimizers include
* [`GreedyMethod`](@ref) (default; fastest to run, but yields the worst contraction complexity)
* [`KaHyParBipartite`](@ref)
* [`SABipartite`](@ref)

```julia
julia> tn = TensorNetworkModel(instance; optimizer)
```
The returned object `tn` contains a field `code` that specifies the tensor network with an optimized contraction order. To check the contraction complexity, type
```julia
julia> contraction_complexity(tn)
```

The returned object contains the log2 values of the number of multiplications, the number of elements in the largest tensor during contraction, and the number of read-write operations on tensor elements.
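For intuition, the space-complexity entry converts directly to memory: a log2 value of 16 means the largest intermediate tensor has `2^16` elements. A quick back-of-the-envelope check (assuming `Float64` storage, 8 bytes per element):

```julia
# log2 space complexity of 16 ⇒ largest tensor holds 2^16 elements;
# at 8 bytes per Float64 element that is 0.5 MiB of peak tensor storage.
julia> 2^16 * 8 / 1024^2
0.5
```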

```julia
julia> p1 = probability(tn)
```

## Slicing technique

For large-scale applications, it is also possible to slice over certain degrees of freedom to reduce the space complexity, i.e.
loop and accumulate over certain degrees of freedom so that one can have a smaller tensor network inside the loop due to the removal of these degrees of freedom.
In the [`TreeSA`](@ref) optimizer, one can set `nslices` to a value larger than zero to turn on this feature.

```julia
julia> tn = TensorNetworkModel(instance; optimizer=TreeSA());

julia> contraction_complexity(tn)
(20.856518235241687, 16.0, 18.88208476145812)
```

As a comparison, we slice over 5 degrees of freedom, which can reduce the space complexity by at most 5.
In this application, the slicing achieves the largest possible space complexity reduction of 5, while the time and read-write complexities increase by less than 1,
i.e. the peak memory usage is reduced by a factor of ``32``, while the (theoretical) computing time is increased by a factor of ``< 2``.
```julia
julia> tn = TensorNetworkModel(instance; optimizer=TreeSA(nslices=5));

julia> timespacereadwrite_complexity(tn)
(21.134967710592804, 11.0, 19.84529401927876)
```
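The factor-32 memory saving quoted above follows directly from the difference in log2 space complexities:

```julia
# space complexity drops from 16.0 to 11.0 (log2 scale),
# so the largest intermediate tensor shrinks by 2^5 = 32×
julia> 2^(16.0 - 11.0)
32.0
```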

## GEMM for Tropical numbers
No extra effort is required to enjoy the BLAS-level speed provided by [`TropicalGEMM`](https://github.com/TensorBFS/TropicalGEMM.jl).
The benchmark in the `TropicalGEMM` repo shows that its performance is close to the theoretical optimum.
A GPU implementation is under development in the GitHub repo [`CuTropicalGEMM.jl`](https://github.com/ArrogantGao/CuTropicalGEMM.jl) as part of the [Open Source Promotion Plan summer program](https://summer-ospp.ac.cn/).
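As a minimal sketch of what `TropicalGEMM` accelerates (the `Tropical` element type comes from TensorBFS's `TropicalNumbers.jl`; the matrix sizes here are arbitrary): simply loading the package dispatches the max-plus matrix product to its fast kernel.

```julia
julia> using TropicalNumbers, TropicalGEMM  # loading TropicalGEMM speeds up `*` on Tropical matrices

julia> A, B = Tropical.(randn(1000, 1000)), Tropical.(randn(1000, 1000));

julia> C = A * B;  # max-plus matrix product, accelerated by the GEMM kernel
```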

## Working with GPUs
To offload the computation to a GPU, simply add `using CUDA` before calling the `solve` function, and set the keyword argument `usecuda` to `true`.
```julia
julia> using CUDA
[ Info: OMEinsum loaded the CUDA module successfully

julia> marginals(tn; usecuda = true)
```

Functions that support the `usecuda` keyword argument include
* [`probability`](@ref)
* [`log_probability`](@ref)
* [`marginals`](@ref)
* [`most_probable_config`](@ref)

## Benchmarks
Please check our [paper (link to be added)]().

docs/src/ref.md

Lines changed: 23 additions & 0 deletions
# References

## TensorInference
```@autodocs
Modules = [TensorInference]
Order = [:function, :type]
```

## Tensor Network
```@docs
optimize_code
getixsv
getiyv
contraction_complexity
estimate_memory
@ein_str
GreedyMethod
TreeSA
SABipartite
KaHyParBipartite
MergeVectors
MergeGreedy
```
