
Commit b4c9e8d

2 parents: 0b8fd2e + 201a3a0

14 files changed: +109 -107 lines


docs/make.jl

Lines changed: 1 addition & 0 deletions
@@ -44,6 +44,7 @@ makedocs(;
     ],
     "Topics" => [
         "Save and load solutions" => "tutorials/saveload.md"
+        "Sum product tree representation" => "sumproduct.md"
         "Weighted problems" => "tutorials/weighted.md"
         "Open degree of freedoms" => "tutorials/open.md"
     ],

docs/src/index.md

Lines changed: 2 additions & 2 deletions
@@ -8,8 +8,8 @@ This package implements generic tensor networks to compute *solution space prope
 The *solution space properties* include
 * The maximum/minimum solution sizes,
 * The number of solutions at certain sizes,
-* The enumeration of solutions at certian sizes.
-* The direct sampling of solutions at certian sizes.
+* The enumeration of solutions at certain sizes.
+* The direct sampling of solutions at certain sizes.

 The solvable problems include [Independent set problem](@ref), [Maximal independent set problem](@ref), [Cutting problem (Spin-glass problem)](@ref), [Vertex matching problem](@ref), [Binary paint shop problem](@ref), [Coloring problem](@ref), [Dominating set problem](@ref), [Set packing problem](@ref) and [Set covering problem](@ref).
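As a rough illustration of how these properties are queried in practice, here is a hedged sketch using only API names that appear elsewhere in this diff (`solve`, `IndependentSet`, `SizeMax`, `CountingMax`, `ConfigsAll`); the graph `smallgraph(:petersen)` from Graphs.jl is chosen purely for illustration:

```julia
using GenericTensorNetworks, Graphs

# An independent-set problem on a small example graph.
problem = IndependentSet(smallgraph(:petersen))

solve(problem, SizeMax())[]       # maximum solution size (the independence number)
solve(problem, CountingMax())[]   # number of solutions at the maximum size
solve(problem, ConfigsAll())[]    # enumerate all solutions as bit strings
```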

docs/src/performancetips.md

Lines changed: 26 additions & 74 deletions
@@ -13,33 +13,32 @@ julia> problem = IndependentSet(graph; optimizer=TreeSA(
            sc_target=20, sc_weight=1.0, rw_weight=3.0, ntrials=10, βs=0.01:0.1:15.0, niters=20), simplifier=MergeGreedy());
 ```

-The [`IndependentSet`](@ref) constructor maps a independent set problem to a tensor network with optimized contraction order.
+The [`IndependentSet`](@ref) constructor maps an independent set problem to a tensor network with optimized contraction order.
 The key word argument `optimizer` specifies the contraction order optimizer of the tensor network.
-Here, we choose the local search based [`TreeSA`](@ref) algorithm,
-which is one of the state of the art contraction order optimizer detailed in [arXiv: 2108.05665](https://arxiv.org/abs/2108.05665).
-One can type `?TreeSA` in a Julia REPL for more information about how to configure the hyper-parameters of the `TreeSA` method.
-Alternative tensor network contraction orders optimizers include
+Here, we choose the local search based [`TreeSA`](@ref) algorithm, which often finds the smallest time/space complexity and supports slicing.
+One can type `?TreeSA` in a Julia REPL for more information about how to configure the hyper-parameters of the [`TreeSA`](@ref) method,
+while the detailed algorithm explanation is in [arXiv: 2108.05665](https://arxiv.org/abs/2108.05665).
+Alternative tensor network contraction order optimizers include
 * [`GreedyMethod`](@ref) (default, fastest in searching speed but worst in contraction complexity)
-* [`TreeSA`](@ref) (often best in contraction complexity, supports slicing)
 * [`KaHyParBipartite`](@ref)
 * [`SABipartite`](@ref)

-The keyword argument `simplifier` is for preprocessing sub-routine to improve the searching speed of the contraction order finding.
+The keyword argument `simplifier` specifies the preprocessor used to improve the searching speed of the contraction order finding.
 For example, the `MergeGreedy()` here "contracts" tensors greedily whenever the contraction result has a smaller space complexity.
 It can remove all vertex tensors (vectors) before entering the contraction order optimization algorithm.

-The returned instance `problem` contains a field `code` that specifies the tensor network contraction order. For an independent set problem, its contraction time space complexity is ``2^{{\rm tw}(G)}``, where ``{\rm tw(G)}`` is the [tree-width](https://en.wikipedia.org/wiki/Treewidth) of ``G``.
-One can check the time, space and read-write complexity with the following function.
+The returned object `problem` contains a field `code` that specifies the tensor network with optimized contraction order.
+For an independent set problem, the optimal contraction time/space complexity is ``\sim 2^{{\rm tw}(G)}``, where ``{\rm tw}(G)`` is the [tree-width](https://en.wikipedia.org/wiki/Treewidth) of ``G``.
+One can check the time, space and read-write complexity with the [`timespacereadwrite_complexity`](@ref) function.

 ```julia
 julia> timespacereadwrite_complexity(problem)
 (21.90683335864693, 17.0, 20.03588509836998)
 ```

-The return values are `log2` of the the number of iterations, the number elements in the largest tensor during contraction and the number of read-write operations to tensor elements.
-In this example, the number of `+` and `*` operations are both ``\sim 2^{21.9}``
-and the number of read-write operations are ``\sim 2^{20}``.
-The largest tensor size is ``2^{17}``, one can check the element size by typing
+The return values are `log2` values of the number of multiplications, the number of elements in the largest tensor during contraction, and the number of read-write operations to tensor elements.
+In this example, the number of `*` operations is ``\sim 2^{21.9}``, the number of read-write operations is ``\sim 2^{20}``, and the largest tensor size is ``2^{17}``.
+One can check the element size by typing
 ```julia
 julia> sizeof(TropicalF64)
 8
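For orientation, the snippet above is part of a larger setup; a self-contained sketch of that setup might look as follows. The concrete graph used in the manual is not shown in this hunk, so the random regular graph below is only illustrative.

```julia
using GenericTensorNetworks, Graphs

graph = random_regular_graph(100, 3)   # illustrative graph; the manual's graph is not shown here

# Map the independent set problem to a tensor network; TreeSA optimizes the
# contraction order and MergeGreedy pre-contracts cheap tensors.
problem = IndependentSet(graph; optimizer=TreeSA(
        sc_target=20, sc_weight=1.0, rw_weight=3.0, ntrials=10, βs=0.01:0.1:15.0, niters=20),
    simplifier=MergeGreedy());

# log2 of: number of multiplications, largest tensor size, read-write operations.
timespacereadwrite_complexity(problem)
```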
@@ -55,20 +54,19 @@ julia> sizeof(TruncatedPoly{5,Float64,Float64})
 ```

 One can use [`estimate_memory`](@ref) to get a good estimation of peak memory in bytes.
+For example, to compute the graph polynomial, the peak memory can be estimated as follows.
 ```julia
 julia> estimate_memory(problem, GraphPolynomial(; method=:finitefield))
 297616

 julia> estimate_memory(problem, GraphPolynomial(; method=:polynomial))
 71427840
 ```
-It means one only needs 298 KB memory to find the graph polynomial with the finite field approach,
-but needs 71 MB memory to find the graph polynomial using the [`Polynomial`](https://juliamath.github.io/Polynomials.jl/stable/polynomials/polynomial/#Polynomial-2) type.
+The finite field approach only requires 298 KB memory, while using the [`Polynomial`](https://juliamath.github.io/Polynomials.jl/stable/polynomials/polynomial/#Polynomial-2) number type requires 71 MB memory.

 !!! note
-    * The actual run time memory can be several times larger than the size of the maximum tensor.
-    There is no constant bound for the factor, an empirical value for it is 3x.
-    * For mutable types like [`Polynomial`](https://juliamath.github.io/Polynomials.jl/stable/polynomials/polynomial/#Polynomial-2) and [`ConfigEnumerator`](@ref), the `sizeof` function does not measure the actual element size.
+    * The actual run time memory can be several times larger than the size of the maximum tensor, so [`estimate_memory`](@ref) is more accurate in estimating the peak memory.
+    * For mutable element types like [`ConfigEnumerator`](@ref), none of the memory estimation functions measures the actual memory usage correctly.

 ## Slicing

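The slicing hunk below only discusses the outcome; a minimal sketch of how slicing is typically switched on is given here, assuming `TreeSA` accepts an `nslices` keyword (check `?TreeSA`; that keyword does not appear in this diff):

```julia
# Hedged sketch: ask the contraction order optimizer to slice 5 indices,
# assuming the `nslices` keyword of TreeSA (see ?TreeSA to confirm).
problem = IndependentSet(graph; optimizer=TreeSA(sc_target=20, nslices=5))

# With 5 sliced indices the largest tensor shrinks by a factor of 2^5 = 32,
# at the cost of looping over the 32 slices during contraction.
timespacereadwrite_complexity(problem)
```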
@@ -98,7 +96,7 @@ In this application, the slicing achieves the largest possible space complexity
 i.e. the peak memory usage is reduced by a factor ``32``, while the (theoretical) computing time is increased by a factor ``< 2``.

 ## GEMM for Tropical numbers
-You can speed up the Tropical number matrix multiplication when computing `SizeMax()` by using the Tropical GEMM routines implemented in package [`TropicalGEMM.jl`](https://github.com/TensorBFS/TropicalGEMM.jl/).
+One can speed up the Tropical number matrix multiplication when computing the solution space property [`SizeMax`](@ref)`()` by using the Tropical GEMM routines implemented in the package [`TropicalGEMM`](https://github.com/TensorBFS/TropicalGEMM.jl/).

 ```julia
 julia> using BenchmarkTools
@@ -116,65 +114,19 @@ julia> @btime solve(problem, SizeMax())
 53.0
 ```

-The `TropicalGEMM` pirates the `LinearAlgebra.mul!` interface, hence it takes effect upon using.
-The GEMM routine can speed up the computation on CPU for one order, with multi-threading, it can be even faster.
-Benchmark shows the performance of `TropicalGEMM` is close to the theoretical optimal value.
-
-## Sum product representation for configurations
-[`SumProductTree`](@ref) (an alias of [`SumProductTree`](@ref) with [`StaticElementVector`](@ref) as its data type) can save a lot memory for you to store exponential number of configurations in polynomial space.
-It is a sum-product expression tree to store [`ConfigEnumerator`](@ref) in a lazy style, configurations can be extracted by depth first searching the tree with the `Base.collect` method. Although it is space efficient, it is in general not easy to extract information from it.
-This tree structure supports directed sampling so that one can get some statistic properties from it with an intermediate effort.
-
-For example, if we want to check some property of an intermediate scale graph, one can type
-```julia
-julia> graph = random_regular_graph(70, 3)
-
-julia> problem = IndependentSet(graph; optimizer=TreeSA());
-
-julia> tree = solve(problem, ConfigsAll(; tree_storage=true))[];
-16633909006371
-```
-If one wants to store these configurations, he will need a hard disk of size 256 TB!
-However, this sum-product binary tree structure supports efficient and unbiased direct sampling.
-
-```julia
-samples = generate_samples(tree, 1000);
-```
-
-With these samples, one can already compute useful properties like distribution of hamming distance (see [`hamming_distribution`](@ref)).
-
-```julia
-julia> using UnicodePlots
-
-julia> lineplot(hamming_distribution(samples, samples))
-(UnicodePlots output omitted here: a line plot of the pair-wise Hamming distance distribution; it is reproduced in full in the new docs/src/sumproduct.md below)
-```
+The `TropicalGEMM` package pirates the `LinearAlgebra.mul!` interface, hence it takes effect upon using.
+The above example shows more than 10x speed up on a single thread CPU, which can be even faster if [Julia multi-threading](https://docs.julialang.org/en/v1/manual/multi-threading/) is turned on.
+The benchmark in the `TropicalGEMM` repo shows this performance is close to the theoretical optimal value.
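A sketch of the usage pattern described above: the only step needed is `using TropicalGEMM`, which overloads the matrix multiplication kernels for Tropical element types (timings are machine dependent and not reproduced here):

```julia
using BenchmarkTools, GenericTensorNetworks

@btime solve(problem, SizeMax())   # baseline with the generic Julia kernels

using TropicalGEMM                 # overloads LinearAlgebra.mul! for Tropical numbers
@btime solve(problem, SizeMax())   # the same call now runs on the optimized GEMM kernels
```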

 ## Multiprocessing
-Submodule `GenericTensorNetworks.SimpleMutiprocessing` provides a function [`GenericTensorNetworks.SimpleMultiprocessing.multiprocess_run`](@ref) function for simple multi-processing jobs.
+Submodule `GenericTensorNetworks.SimpleMultiprocessing` provides a function [`GenericTensorNetworks.SimpleMultiprocessing.multiprocess_run`](@ref) for simple multi-processing jobs.
+It is not directly related to `GenericTensorNetworks`, but it is very convenient to have.
 Suppose we want to find the independence polynomial for multiple graphs with 4 processes.
 We can create a file, e.g. named `run.jl` with the following content

 ```julia
 using Distributed, GenericTensorNetworks.SimpleMultiprocessing
-using Random, GenericTensorNetworks # to avoid multi-precompiling
+using Random, GenericTensorNetworks # to avoid multi-precompilation
 @everywhere using Random, GenericTensorNetworks

 results = multiprocess_run(collect(1:10)) do seed
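The hunk cuts off inside `run.jl`; a hedged sketch of a complete script follows. The body of the `do` block is not shown in this diff, so the graph constructor and the computed property below are illustrative (and `Graphs` is added to the `using` lines only for that constructor):

```julia
using Distributed, GenericTensorNetworks.SimpleMultiprocessing
using Random, Graphs, GenericTensorNetworks  # to avoid multi-precompilation
@everywhere using Random, Graphs, GenericTensorNetworks

results = multiprocess_run(collect(1:10)) do seed
    Random.seed!(seed)
    graph = random_regular_graph(20, 3)             # illustrative graph for this seed
    problem = IndependentSet(graph; optimizer=TreeSA())
    solve(problem, GraphPolynomial())[]             # the independence polynomial
end

println(results)
```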
@@ -207,7 +159,7 @@ $ julia -p4 run.jl
 You will see a vector of polynomials printed out.

 ## Make use of GPUs
-To upload the computing to GPU, you just add need to use CUDA, and offer a new key word argument.
+To upload the computation to GPU, you just add `using CUDA` before calling the `solve` function, and set the keyword argument `usecuda` to `true`.
 ```julia
 julia> using CUDA
 [ Info: OMEinsum loaded the CUDA module successfully
@@ -217,9 +169,9 @@ julia> solve(problem, SizeMax(), usecuda=true)
 53.0ₜ
 ```

-CUDA backended properties are
+Solution space properties computable on a GPU include
 * [`SizeMax`](@ref) and [`SizeMin`](@ref)
 * [`CountingAll`](@ref)
 * [`CountingMax`](@ref) and [`CountingMin`](@ref)
 * [`GraphPolynomial`](@ref)
-* [`SingleConfigMax`](@ref) and [`SingleConfigMin`](@ref)
+* [`SingleConfigMax`](@ref) and [`SingleConfigMin`](@ref)

docs/src/sumproduct.md

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
+# Sum product representation for configurations
+[`SumProductTree`](@ref) can use polynomial memory to store an exponential number of configurations.
+It is a sum-product expression tree that stores a [`ConfigEnumerator`](@ref) in a lazy style, where configurations can be extracted by depth first searching the tree with the `Base.collect` method.
+Although it is space efficient, it is in general not easy to extract information from it due to the exponentially large configuration space.
+Direct sampling is one of its most important operations, with which one can obtain statistical properties from it with moderate effort. For example, if we want to check some property of an intermediate scale graph, one can type
+```julia
+julia> graph = random_regular_graph(70, 3)
+
+julia> problem = IndependentSet(graph; optimizer=TreeSA());
+
+julia> tree = solve(problem, ConfigsAll(; tree_storage=true))[];
+16633909006371
+```
+If one wanted to store these configurations explicitly, a hard disk of size 256 TB would be needed!
+However, this sum-product binary tree structure supports efficient and unbiased direct sampling.
+
+```julia
+samples = generate_samples(tree, 1000);
+```
+
+With these samples, one can already compute useful properties like the Hamming distance distribution (see [`hamming_distribution`](@ref)).
+
+```julia
+julia> using UnicodePlots
+
+julia> lineplot(hamming_distribution(samples, samples))
+┌────────────────────────────────────────┐
+100000 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢰⠹⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡎⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡇⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠸⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⣇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢰⠃⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡞⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⣇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⢰⠁⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⠀⡼⠀⠀⠀⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+│⠀⠀⠀⠀⠀⠀⠀⠀⢠⠇⠀⠀⠀⠀⠀⠀⠀⢳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
+0 │⢀⣀⣀⣀⣀⣀⣀⣀⠎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⢄⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⠀⠀⠀⠀│
+└────────────────────────────────────────┘
+⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀80⠀
+```
+
+Here, the ``x``-axis is the Hamming distance and the ``y``-axis is the count of pair-wise Hamming distances.
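The text above mentions that configurations can be extracted with `Base.collect`; a small hedged sketch is given below. Full enumeration is only feasible for small solution spaces, and `smallgraph(:petersen)` is an illustrative choice, not from the manual:

```julia
using GenericTensorNetworks, Graphs

small_problem = IndependentSet(smallgraph(:petersen))
small_tree = solve(small_problem, ConfigsAll(; tree_storage=true))[]

configs = collect(small_tree)   # depth-first traversal of the sum-product tree
length(configs)                 # number of independent sets of the Petersen graph
```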

examples/Coloring.jl

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@ problem = Coloring{3}(graph);

 # ### Theory (can skip)
 # Type [`Coloring`](@ref) can be used for constructing the tensor network with optimized contraction order for a coloring problem.
-# Let us use 3-colouring problem defined on vertices as an example.
+# Let us use the 3-coloring problem defined on vertices as an example.
 # For a vertex ``v``, we define the degree of freedoms ``c_v\in\{1,2,3\}`` and a vertex tensor labelled by it as
 # ```math
 # W(v) = \left(\begin{matrix}
@@ -42,7 +42,7 @@ problem = Coloring{3}(graph);
 # x & x & 1
 # \end{matrix}\right).
 # ```
-# The number of possible colouring can be obtained by contracting this tensor network by setting vertex tensor elements ``r_v, g_v`` and ``b_v`` to 1.
+# The number of possible colorings can be obtained by contracting this tensor network by setting vertex tensor elements ``r_v, g_v`` and ``b_v`` to 1.

 # ## Solving properties
 # ##### counting all possible coloring
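The hunk ends at the heading that introduces the counting step; a minimal hedged sketch of what that step looks like, using the API names that appear elsewhere in this diff (the actual lines of examples/Coloring.jl are not shown here):

```julia
# Invalid colorings contribute 0 to the contraction, so CountingAll()
# counts exactly the valid 3-colorings of `graph`.
num_of_colorings = solve(problem, CountingAll())[]
```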

examples/DominatingSet.jl

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ problem = DominatingSet(graph; optimizer=TreeSA());
 # ### Theory (can skip)
 # Let ``G=(V,E)`` be the target graph that we want to solve.
 # The tensor network representation map a vertex ``v\in V`` to a boolean degree of freedom ``s_v\in\{0, 1\}``.
-# We defined the restriction on a vertex and its neighbouring vertices ``N(v)``:
+# We define the restriction on a vertex and its neighboring vertices ``N(v)``:
 # ```math
 # T(x_v)_{s_1,s_2,\ldots,s_{|N(v)|},s_v} = \begin{cases}
 # 0 & s_1=s_2=\ldots=s_{|N(v)|}=s_v=0,\\
@@ -37,7 +37,7 @@ problem = DominatingSet(graph; optimizer=TreeSA());
 # \end{cases}
 # ```
 # where ``w_v`` is the weight of vertex ``v``.
-# This tensor means if both ``v`` and its neighbouring vertices are not in ``D``, i.e., ``s_1=s_2=\ldots=s_{|N(v)|}=s_v=0``,
+# This tensor means that if ``v`` and all of its neighboring vertices are not in ``D``, i.e., ``s_1=s_2=\ldots=s_{|N(v)|}=s_v=0``,
 # this configuration is forbidden because ``v`` is not adjacent to any member in the set.
 # otherwise, if ``v`` is in ``D``, it has a contribution ``x_v^{w_v}`` to the final result.
 # One can check the contraction time space complexity of a [`DominatingSet`](@ref) instance by typing:
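The call that follows this sentence lies outside the hunk; a hedged guess based on the function used for the same purpose in docs/src/performancetips.md:

```julia
# Hedged: the actual line in examples/DominatingSet.jl is not shown in this diff.
timespacereadwrite_complexity(problem)
```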
