Commit 472df53

resolve pr comments
1 parent 59930d8 commit 472df53

16 files changed: +94 -116 lines changed

benchmark/bench_map.jl

Lines changed: 2 additions & 3 deletions
@@ -6,11 +6,10 @@ using Artifacts
 
 const SUITE = BenchmarkGroup()
 
-model_filepath, evidence_filepath, _, solution_filepath = get_instance_filepaths("Promedus_14", "MAR")
-problem = read_instance(model_filepath; evidence_filepath, solution_filepath)
+problem = problem_from_artifact("uai2014", "MAR", "Promedus", 14)
 
 optimizer = TreeSA(ntrials = 1, niters = 2, βs = 1:0.1:40)
-tn = TensorNetworkModel(problem; optimizer)
+tn = TensorNetworkModel(read_model(problem); optimizer, evidence=get_evidence(problem))
 SUITE["map"] = @benchmarkable most_probable_config(tn)
 
 end # module
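For readers following the rename, the new artifact-based flow used by this benchmark can be sketched as below. This is an illustration only: it assumes the `uai2014` artifact is installed, and it uses `read_evidence`, the accessor exported in `src/TensorInference.jl` (the benchmark file itself calls `get_evidence`, which does not appear in the export list in this commit).

```julia
using TensorInference

# Select the Promedus_14 MAR instance from the uai2014 artifact,
# replacing the old manual filepath plumbing.
problem = problem_from_artifact("uai2014", "MAR", "Promedus", 14)

# Parse the model and fold the observed evidence into the network.
optimizer = TreeSA(ntrials = 1, niters = 2, βs = 1:0.1:40)
tn = TensorNetworkModel(read_model(problem); optimizer,
                        evidence = read_evidence(problem))
```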

benchmark/bench_mar.jl

Lines changed: 2 additions & 2 deletions
@@ -8,8 +8,8 @@ using Artifacts
 
 const SUITE = BenchmarkGroup()
 
-model_filepath, evidence_filepath, _, solution_filepath = get_instance_filepaths("Promedus_14", "MAR")
-problem = read_instance(model_filepath; evidence_filepath, solution_filepath)
+model_filepath, evidence_filepath, _, solution_filepath = get_model_filepaths("Promedus_14", "MAR")
+problem = read_model(model_filepath; evidence_filepath, solution_filepath)
 
 optimizer = TreeSA(ntrials = 1, niters = 5, βs = 0.1:0.1:100)
 tn1 = TensorNetworkModel(problem; optimizer)

benchmark/bench_mmap.jl

Lines changed: 2 additions & 2 deletions
@@ -6,8 +6,8 @@ using Artifacts
 
 const SUITE = BenchmarkGroup()
 
-model_filepath, evidence_filepath, _, solution_filepath = get_instance_filepaths("Promedus_14", "MAR")
-problem = read_instance(model_filepath; evidence_filepath, solution_filepath)
+model_filepath, evidence_filepath, _, solution_filepath = get_model_filepaths("Promedus_14", "MAR")
+problem = read_model(model_filepath; evidence_filepath, solution_filepath)
 optimizer = TreeSA(ntrials = 1, niters = 2, βs = 1:0.1:40)
 
 # Does not marginalize any var

docs/src/api/public.md

Lines changed: 3 additions & 4 deletions
@@ -42,7 +42,7 @@ MMAPModel
 RescaledArray
 TensorNetworkModel
 ArtifactProblemSpec
-UAIInstance
+UAIModel
 ```
 
 ## Functions
@@ -58,13 +58,12 @@ most_probable_config
 probability
 dataset_from_artifact
 problem_from_artifact
-read_instance
+read_model
 read_evidence
 read_solution
 read_queryvars
-read_instance_file
+read_model_file
 read_evidence_file
-read_solution_file
 read_td_file
 sample
 ```

docs/src/performance.md

Lines changed: 19 additions & 39 deletions
@@ -1,27 +1,16 @@
 # Performance Tips
 ## Optimize contraction orders
 
-Let us use the independent set problem on 3-regular graphs as an example.
-```julia
-julia> using TensorInference, Artifacts, Pkg
-
-julia> Pkg.ensure_artifact_installed("uai2014", pkgdir(TensorInference, "test", "Artifacts.toml"));
-
-julia> function get_instance_filepaths(problem_name::AbstractString, task::AbstractString)
-           model_filepath = joinpath(artifact"uai2014", task, problem_name * ".uai")
-           evidence_filepath = joinpath(artifact"uai2014", task, problem_name * ".uai.evid")
-           solution_filepath = joinpath(artifact"uai2014", task, problem_name * ".uai." * task)
-           return model_filepath, evidence_filepath, solution_filepath
-       end
-
-julia> model_filepath, evidence_filepath, solution_filepath = get_instance_filepaths("Promedus_14", "MAR")
-
-julia> instance = read_instance(model_filepath; evidence_filepath, solution_filepath)
+Let us use a problem instance from the "Promedus" dataset of the UAI 2014 competition as an example.
+```@repl performance
+using TensorInference
+problem = problem_from_artifact("uai2014", "MAR", "Promedus", 11)
+model, evidence = read_model(problem), read_evidence(problem);
 ```
 
 Next, we select the tensor network contraction order optimizer.
-```julia
-julia> optimizer = TreeSA(ntrials = 1, niters = 5, βs = 0.1:0.1:100)
+```@repl performance
+optimizer = TreeSA(ntrials = 1, niters = 5, βs = 0.1:0.3:100)
 ```
 
 Here, we choose the local search based [`TreeSA`](@ref) algorithm, which often finds the smallest time/space complexity and supports slicing.
@@ -32,41 +21,32 @@ Alternative tensor network contraction order optimizers include
 * [`KaHyParBipartite`](@ref)
 * [`SABipartite`](@ref)
 
-```julia
-julia> tn = TensorNetworkModel(instance; optimizer)
+```@repl performance
+tn = TensorNetworkModel(model; optimizer, evidence);
 ```
 The returned object `tn` contains a field `code` that specifies the tensor network with optimized contraction order. To check the contraction complexity, please type
-```julia
-julia> contraction_complexity(problem)
+```@repl performance
+contraction_complexity(tn)
 ```
 
 The returned object contains log2 values of the number of multiplications, the number elements in the largest tensor during contraction and the number of read-write operations to tensor elements.
 
-```julia
-julia> p1 = probability(tn)
+```@repl performance
+probability(tn)
 ```
 
 ## Slicing technique
 
 For large scale applications, it is also possible to slice over certain degrees of freedom to reduce the space complexity, i.e.
 loop and accumulate over certain degrees of freedom so that one can have a smaller tensor network inside the loop due to the removal of these degrees of freedom.
 In the [`TreeSA`](@ref) optimizer, one can set `nslices` to a value larger than zero to turn on this feature.
-
-```julia
-julia> tn = TensorNetworkModel(instance; optimizer=TreeSA());
-
-julia> contraction_complexity(tn)
-(20.856518235241687, 16.0, 18.88208476145812)
-```
-
-As a comparision we slice over 5 degrees of freedom, which can reduce the space complexity by at most 5.
+As a comparison we slice over 5 degrees of freedom, which can reduce the space complexity by at most 5.
 In this application, the slicing achieves the largest possible space complexity reduction 5, while the time and read-write complexity are only increased by less than 1,
 i.e. the peak memory usage is reduced by a factor ``32``, while the (theoretical) computing time is increased by at a factor ``< 2``.
-```
-julia> tn = TensorNetworkModel(instance; optimizer=TreeSA(nslices=5));
-
-julia> timespacereadwrite_complexity(problem)
-(21.134967710592804, 11.0, 19.84529401927876)
+```@repl performance
+optimizer = TreeSA(ntrials = 1, niters = 5, βs = 0.1:0.3:100, nslices=5)
+tn = TensorNetworkModel(model; optimizer, evidence);
+contraction_complexity(tn)
 ```
 
 ## GEMM for Tropical numbers
@@ -80,7 +60,7 @@ To upload the computation to GPU, you just add `using CUDA` before calling the `
 julia> using CUDA
 [ Info: OMEinsum loaded the CUDA module successfully
 
-julia> marginals(tn; usecuda = true)
+julia> marginals(tn; usecuda = true);
 ```
 
 Functions support `usecuda` keyword argument includes
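Putting the documentation changes above together, a hedged end-to-end sketch (assuming the `uai2014` artifact is available and the renamed API from this commit) that compares the unsliced and sliced contraction complexities:

```julia
using TensorInference

problem = problem_from_artifact("uai2014", "MAR", "Promedus", 11)
model, evidence = read_model(problem), read_evidence(problem)

# Without slicing: the smallest complexity TreeSA can find.
tn = TensorNetworkModel(model; optimizer = TreeSA(), evidence)

# With 5 slices: space complexity can drop by up to 5 (a factor of 2^5 = 32
# in peak memory), at a modest increase in time and read-write cost.
tn5 = TensorNetworkModel(model; optimizer = TreeSA(nslices = 5), evidence)

contraction_complexity(tn), contraction_complexity(tn5)
```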

examples/asia/main.jl

Lines changed: 4 additions & 4 deletions
@@ -51,12 +51,12 @@ using TensorInference
 # Load the ASIA network model from the `asia.uai` file located in the examples
 # directory. See [Model file format (.uai)](@ref) for a description of the
 # format of this file.
-instance = read_instance_file(pkgdir(TensorInference, "examples", "asia", "asia.uai"))
+model = read_model_file(pkgdir(TensorInference, "examples", "asia", "asia.uai"))
 
 # ---
 
 # Create a tensor network representation of the loaded model.
-tn = TensorNetworkModel(instance)
+tn = TensorNetworkModel(model)
 
 # ---
 
@@ -78,7 +78,7 @@ get_vars(tn)
 # Set an evidence: Assume that the "X-ray" result (variable 7) is positive.
 # Since setting an evidence may affect the contraction order of the tensor
 # network, recompute it.
-tn = TensorNetworkModel(instance, evidence = Dict(7 => 0))
+tn = TensorNetworkModel(model, evidence = Dict(7 => 0))
 
 # ---
 
@@ -103,7 +103,7 @@ logp, cfg = most_probable_config(tn)
 # Compute the most probable values of certain variables (e.g., 4 and 7) while
 # marginalizing over others. This is known as Maximum a Posteriori (MAP)
 # estimation.
-mmap = MMAPModel(instance, evidence=Dict(7=>0), queryvars=[4,7])
+mmap = MMAPModel(model, evidence=Dict(7=>0), queryvars=[4,7])
 
 # ---
 

src/Core.jl

Lines changed: 8 additions & 8 deletions
@@ -23,16 +23,16 @@ $(TYPEDEF)
 * `cards` is a vector of cardinalities for variables,
 * `factors` is a vector of factors,
 """
-struct UAIInstance{ET, FT <: Factor{ET}}
+struct UAIModel{ET, FT <: Factor{ET}}
     nvars::Int
     nclique::Int
     cards::Vector{Int}
     factors::Vector{FT}
 end
 
-Base.show(io::IO, ::MIME"text/plain", uai::UAIInstance) = Base.show(io, uai)
-function Base.show(io::IO, uai::UAIInstance)
-    println(io, "UAIInstance(nvars = $(uai.nvars), nclique = $(uai.nclique))")
+Base.show(io::IO, ::MIME"text/plain", uai::UAIModel) = Base.show(io, uai)
+function Base.show(io::IO, uai::UAIModel)
+    println(io, "UAIModel(nvars = $(uai.nvars), nclique = $(uai.nclique))")
     println(io, " variables :")
     println(io, " factors : ")
     for (k, f) in enumerate(uai.factors)
@@ -89,16 +89,16 @@ end
 $(TYPEDSIGNATURES)
 """
 function TensorNetworkModel(
-    instance::UAIInstance;
+    model::UAIModel;
     openvars = (),
     evidence = Dict{Int,Int}(),
     optimizer = GreedyMethod(),
     simplifier = nothing
 )::TensorNetworkModel
     return TensorNetworkModel(
-        1:(instance.nvars),
-        instance.cards,
-        instance.factors;
+        1:(model.nvars),
+        model.cards,
+        model.factors;
         openvars,
         evidence,
         optimizer,
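A hedged usage sketch of the renamed struct. The `Factor{T, N}(scope, table)` constructor form is taken from the `src/utils.jl` diff below; whether `Factor` is exported is not shown in this commit, so it is module-qualified here.

```julia
using TensorInference

# A tiny hand-built model: one binary variable with a unary factor [0.4, 0.6].
factor = TensorInference.Factor{Float64, 1}((1,), [0.4, 0.6])
model = UAIModel(1, 1, [2], [factor])   # nvars, nclique, cards, factors

# The patched method dispatches on UAIModel instead of UAIInstance.
tn = TensorNetworkModel(model)
marginals(tn)
```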

src/TensorInference.jl

Lines changed: 2 additions & 2 deletions
@@ -18,9 +18,9 @@ export RescaledArray
 export contraction_complexity, TreeSA, GreedyMethod, KaHyParBipartite, SABipartite, MergeGreedy, MergeVectors
 
 # read and load uai files
-export read_instance_file, read_td_file, read_evidence_file, read_solution_file
+export read_model_file, read_td_file, read_evidence_file
 export problem_from_artifact, ArtifactProblemSpec
-export read_instance, UAIInstance, read_evidence, read_solution, read_queryvars, dataset_from_artifact
+export read_model, UAIModel, read_evidence, read_solution, read_queryvars, dataset_from_artifact
 
 # marginals
 export TensorNetworkModel, get_vars, get_cards, log_probability, probability, marginals

src/mmap.jl

Lines changed: 2 additions & 2 deletions
@@ -58,9 +58,9 @@ end
 """
 $(TYPEDSIGNATURES)
 """
-function MMAPModel(instance::UAIInstance; openvars = (), optimizer = GreedyMethod(), queryvars, evidence = Dict{Int, Int}(), simplifier = nothing)::MMAPModel
+function MMAPModel(model::UAIModel; openvars = (), optimizer = GreedyMethod(), queryvars, evidence = Dict{Int, Int}(), simplifier = nothing)::MMAPModel
     return MMAPModel(
-        1:(instance.nvars), instance.cards, instance.factors; queryvars, evidence, optimizer, simplifier, openvars
+        1:(model.nvars), model.cards, model.factors; queryvars, evidence, optimizer, simplifier, openvars
     )
 end
 
6666

src/utils.jl

Lines changed: 25 additions & 25 deletions
@@ -1,21 +1,21 @@
 """
 $(TYPEDSIGNATURES)
 
-Parse the problem instance found in `instance_filepath` defined in the UAI instance
+Parse the problem instance found in `model_filepath` defined in the UAI model
 format. If the provided file path is empty, return `nothing`.
 
 The UAI file formats are defined in:
 https://uaicompetition.github.io/uci-2022/file-formats/
 """
-function read_instance_file(instance_filepath::AbstractString; factor_eltype = Float64)::UAIInstance
+function read_model_file(model_filepath::AbstractString; factor_eltype = Float64)::UAIModel
     # Read the uai file into an array of lines
-    str = open(instance_filepath) do file
+    str = open(model_filepath) do file
         read(file, String)
     end
-    return read_instance_from_string(str; factor_eltype)
+    return read_model_from_string(str; factor_eltype)
 end
 
-function read_instance_from_string(str::AbstractString; factor_eltype = Float64)::UAIInstance
+function read_model_from_string(str::AbstractString; factor_eltype = Float64)::UAIModel
     rawlines = split(str, "\n")
     # Filter out empty lines
     lines = filter(!isempty, rawlines)
@@ -59,7 +59,7 @@ function read_instance_from_string(str::AbstractString; factor_eltype = Float64)
     # Wrap the tables with their corresponding scopes in an array of Factor type
     factors = [Factor{factor_eltype, length(scope)}(Tuple(scope), table) for (scope, table) in zip(scopes_sorted, tables_sorted)]
 
-    return UAIInstance(nvars, ntables, cards, factors)
+    return UAIModel(nvars, ntables, cards, factors)
 end
 
 """
@@ -125,7 +125,7 @@ end
 $(TYPEDSIGNATURES)
 
 Parse the solution marginals of all variables from the UAI MAR solution file.
-The order of the variables is the same as in the instance definition.
+The order of the variables is the same as in the model definition.
 
 The UAI file formats are defined in:
 https://uaicompetition.github.io/uci-2022/file-formats/
@@ -190,8 +190,8 @@ broadcasted_content(x) = asarray(content.(x), x)
 """
 $TYPEDEF
 
-Specify the UAI instances from the artifacts.
-It can be used as the input of [`read_instance`](@ref).
+Specify the UAI models from the artifacts.
+It can be used as the input of [`read_model`](@ref).
 
 ### Fields
 $TYPEDFIELDS
@@ -217,11 +217,11 @@ end
 """
 $TYPEDSIGNATURES
 
-Read an UAI instance from an artifact.
+Read an UAI model from an artifact.
 """
-function read_instance(instance::ArtifactProblemSpec; eltype=Float64)
-    problem_name = "$(instance.problem_set)_$(instance.problem_id).uai"
-    return read_instance_file(joinpath(instance.artifact_path, instance.task, problem_name); factor_eltype = eltype)
+function read_model(problem::ArtifactProblemSpec; eltype=Float64)
+    problem_name = "$(problem.problem_set)_$(problem.problem_id).uai"
+    return read_model_file(joinpath(problem.artifact_path, problem.task, problem_name); factor_eltype = eltype)
 end
 
 """
@@ -232,21 +232,21 @@ Return the solution in the artifact.
 The UAI file formats are defined in:
 https://uaicompetition.github.io/uci-2022/file-formats/
 """
-function read_solution(instance::ArtifactProblemSpec; factor_eltype=Float64)
-    problem_name = "$(instance.problem_set)_$(instance.problem_id).uai.$(instance.task)"
-    solution_filepath = joinpath(instance.artifact_path, instance.task, problem_name)
+function read_solution(problem::ArtifactProblemSpec; factor_eltype=Float64)
+    problem_name = "$(problem.problem_set)_$(problem.problem_id).uai.$(problem.task)"
+    solution_filepath = joinpath(problem.artifact_path, problem.task, problem_name)
 
     # Read the solution file into an array of lines
     rawlines = open(solution_filepath) do file
         readlines(file)
     end
 
-    if instance.task == "MAR" || instance.task == "MAR2"
+    if problem.task == "MAR" || problem.task == "MAR2"
         return parse_mar_solution_file(rawlines; factor_eltype)
-    elseif instance.task == "MAP" || instance.task == "MMAP"
+    elseif problem.task == "MAP" || problem.task == "MMAP"
         # Return all elements except the first in the last line as a vector of integers
         return last(rawlines) |> split |> x -> x[2:end] |> x -> parse.(Int, x)
-    elseif instance.task == "PR"
+    elseif problem.task == "PR"
         # Parse the number in the last line as a floating point
         return last(rawlines) |> x -> parse(Float64, x)
     end
@@ -255,19 +255,19 @@ end
 """
 $TYPEDSIGNATURES
 """
-function read_evidence(instance::ArtifactProblemSpec)
-    problem_name = "$(instance.problem_set)_$(instance.problem_id).uai.evid"
-    evidence_filepath = joinpath(instance.artifact_path, instance.task, problem_name)
+function read_evidence(problem::ArtifactProblemSpec)
+    problem_name = "$(problem.problem_set)_$(problem.problem_id).uai.evid"
+    evidence_filepath = joinpath(problem.artifact_path, problem.task, problem_name)
     obsvars, obsvals = read_evidence_file(evidence_filepath)
     return Dict(zip(obsvars, obsvals))
 end
 
 """
 $TYPEDSIGNATURES
 """
-function read_queryvars(instance::ArtifactProblemSpec)
-    problem_name = "$(instance.problem_set)_$(instance.problem_id).uai.query"
-    query_filepath = joinpath(instance.artifact_path, instance.task, problem_name)
+function read_queryvars(problem::ArtifactProblemSpec)
+    problem_name = "$(problem.problem_set)_$(problem.problem_id).uai.query"
+    query_filepath = joinpath(problem.artifact_path, problem.task, problem_name)
     return read_query_file(query_filepath)
 end
 
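The MAP/MMAP branch of `read_solution` relies on a small pipeline over the last line of the solution file. A standard-library-only illustration, using a hypothetical solution line:

```julia
# A UAI .MAP solution file ends with a line of the form
# "<nvars> <val1> <val2> ..."; the pipeline splits it and drops the count.
rawlines = ["MAP", "8 0 1 0 1 1 0 0 1"]   # hypothetical file contents
assignment = last(rawlines) |> split |> x -> x[2:end] |> x -> parse.(Int, x)
# assignment == [0, 1, 0, 1, 1, 0, 0, 1]
```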