This repository is for benchmarking the performance of different tensor network contraction order optimizers in OMEinsumContractionOrders.
The following figure shows the results of the contraction order optimizers on the examples/quantumcircuit/codes/sycamore_53_20_0.json instance.
- Version: OMEinsumContractionOrders@v1.0.0
- Platform: Ubuntu 24.04 LTS
- Device: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
The data for the above figure is available in the jg/scan-tc branch in the examples/*/results folder.
Please check the full report (report.pdf) for more details.
make init # install all dependencies for all examples
make init-cotengra # install cotengra dependencies via uv
Prerequisites: For cotengra benchmarks, you need uv installed:
curl -LsSf https://astral.sh/uv/install.sh | sh
If you want to benchmark with the development version of OMEinsumContractionOrders, run
make dev # develop the master branch of OMEinsumContractionOrders for all examples
To switch back to the released version of OMEinsumContractionOrders, run
make free # switch back to the released version of OMEinsumContractionOrders
To update the dependencies of all examples, run
make update
make update-cotengra # update all dependencies for cotengra
Examples are defined in the examples folder. To generate contraction codes for all examples, run
make generate-codes
It will generate a file in the codes folder of each example, named *.json.
These instances are defined in the main.jl file of each example.
There is also a script to generate the contraction codes for the einsumorg package. Run
make generate-einsumorg-codes
It will generate a file in the codes folder of the einsumorg example, named *.json. It requires:
- A working Python interpreter available in your terminal.
- The instances dataset, downloaded from here and unpacked into the examples/einsumorg/instances folder.
To run benchmarks with Julia optimizers, run:
optimizer="Treewidth(alg=MF())" make run
optimizer="Treewidth(alg=MMD())" make run
optimizer="Treewidth(alg=AMF())" make run
optimizer="KaHyParBipartite(; sc_target=25)" make run
optimizer="KaHyParBipartite(; sc_target=25, imbalances=0.0:0.1:0.8)" make run
optimizer="HyperND()" make run
optimizer="HyperND(; dis=METISND(), width=50, imbalances=100:10:800)" make run
optimizer="HyperND(; dis=KaHyParND(), width=50, imbalances=100:10:800)" make runTo scan parameters for Julia optimizers:
for niters in 1 2 4 6 8 10 20 30 40 50; do optimizer="TreeSA(niters=$niters)" make run; done
for alpha in {0..10}; do optimizer="GreedyMethod(α=$alpha * 0.1)" make run; done
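For reference, here is a minimal Julia sketch of the API these runs exercise, using a small three-matrix chain as the instance. It assumes the exported OMEinsumContractionOrders names EinCode, optimize_code, TreeSA, and contraction_complexity; the comment about how the Makefile consumes the optimizer string is an assumption, not taken from the benchmark scripts.

```julia
using OMEinsumContractionOrders

# Toy instance: a chain of three matrices, contracted to a scalar.
code = EinCode([[1, 2], [2, 3], [3, 4]], Int[])   # input labels (ixs) and output labels (iy)
size_dict = Dict(1 => 2, 2 => 2, 3 => 2, 4 => 2)  # size of each index

# The `optimizer="TreeSA(niters=5)"` string presumably ends up as a call like this.
optimizer = TreeSA(niters = 5)
optcode = optimize_code(code, size_dict, optimizer)

# Time, space, and read-write complexities of the optimized contraction order.
println(contraction_complexity(optcode, size_dict))
```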
Cotengra is a pure Python implementation managed via uv. See cotengra/README.md for detailed documentation.
Quick usage:
# Basic run (default: 1 trial, minimize='flops')
method=greedy params={} make run-cotengra
method=kahypar params={} make run-cotengra
# With parameters (dict syntax, like Julia)
method=greedy params="{'max_repeats': 10}" make run-cotengra
method=greedy params="{'random_strength': 0.1, 'temperature': 0.5}" make run-cotengra
method=kahypar params="{'parts': 8, 'imbalance': 0.1}" make run-cotengra
# Different optimization objectives
method=greedy params="{'minimize': 'size'}" make run-cotengra # minimize space
method=greedy params="{'minimize': 'write'}" make run-cotengra # minimize memory writes
method=greedy params="{'minimize': 'combo'}" make run-cotengra # combo of flops+write
# Scan parameters
for n in 1 5 10 20 50; do method=greedy params="{'max_repeats': $n}" make run-cotengra; done
for p in 2 4 8 16; do method=kahypar params="{'parts': $p}" make run-cotengra; done
# Run on specific problems only
method=greedy params="{'problems': [['quantumcircuit', 'sycamore_53_20_0.json']]}" make run-cotengraList available methods and hyperparameters:
cd cotengra && uv run benchmark.py --list-methods
See cotengra/README.md for:
- Complete list of 9+ optimization methods
- Hyperparameter explanations for each method
- Advanced usage examples
- Installation troubleshooting
If you want to overwrite the existing results, run with argument overwrite=true. To remove existing results of all benchmarks, run
make clean-results
To summarize the results (a necessary step for visualization), run:
make summary
This will generate summary.json in the root folder, which contains results from both Julia optimizers (OMEinsumContractionOrders) and Python optimizers (cotengra). All cotengra optimizer names are prefixed with cotengra_ for easy identification.
To visualize the results, typst >= 0.13 is required. After installing typst, run
make report
It will generate a file named report.pdf in the root folder, which contains the benchmark report.
Alternatively, you can use VSCode with the Tinymist Typst extension to preview it directly.
The examples are defined in the examples folder. To add a new example, you need to:
- Add a new folder in the examples folder, named after the problem.
- Set up an independent environment in the new folder, and add the dependencies to its Project.toml file.
- Add a new main.jl file in the new folder, which should contain the function main(folder::String): the main function that generates the contraction codes into the target folder (a sketch follows this list). A sample JSON file is as follows:
{
  "einsum": { "ixs": [[1, 2], [2, 3], [3, 4]], "iy": [] },
  "size": { "1": 2, "2": 2, "3": 2, "4": 2 }
}
The einsum field is the contraction code with two fields, ixs (input labels) and iy (output label), and size gives the size of each tensor index.
- Edit the config.toml file to add the new example in the instances section.
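A minimal sketch of such a main.jl is given below. It assumes the JSON package for serialization and uses a hypothetical instance name chain_3.json; the existing examples may organize this differently.

```julia
# Hypothetical main.jl for a new example (sketch only).
using JSON  # assumption: any serializer producing the JSON format above works

function main(folder::String)
    instance = Dict(
        "einsum" => Dict(
            "ixs" => [[1, 2], [2, 3], [3, 4]],  # input tensor labels
            "iy"  => Int[],                     # output labels (empty: scalar result)
        ),
        "size" => Dict("1" => 2, "2" => 2, "3" => 2, "4" => 2),  # index sizes
    )
    mkpath(folder)
    open(joinpath(folder, "chain_3.json"), "w") do io  # hypothetical instance name
        JSON.print(io, instance, 2)  # pretty-print with a 2-space indent
    end
end
```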
Please open an issue on the issue tracker.