diff --git a/deployment/app_wrapper.py b/deployment/app_wrapper.py deleted file mode 100644 index 2eddb9cb..00000000 --- a/deployment/app_wrapper.py +++ /dev/null @@ -1,8 +0,0 @@ -"""This file is used to launch LNT inside a gunicorn webserver. - -This can be used for deploying on the cloud. -""" -import lnt.server.ui.app - -app = lnt.server.ui.app.App.create_standalone('lnt.cfg', - '/var/log/lnt/lnt.log') diff --git a/docs/concepts.rst b/docs/concepts.rst index aa3b34be..68d78a54 100644 --- a/docs/concepts.rst +++ b/docs/concepts.rst @@ -3,13 +3,12 @@ Concepts ======== -LNT's data model is pretty simple, and just following the :ref:`quickstart` can -get you going with performance testing. Moving beyond that, it is useful to have -an understanding of some of the core concepts in LNT. This can help you get the -most out of LNT. +LNT's data model is pretty simple and can be grasped intuitively. However, for +more advanced usage, it is useful to have an understanding of some of the core +concepts in LNT. This can help you get the most out of LNT. -Orders Machines and Tests -------------------------- +Orders, Machines and Tests +-------------------------- LNT's data model was designed to track the performance of a system in many configurations over its evolution. In LNT, an Order is the x-axis of your @@ -19,15 +18,15 @@ also be used to represent treatments, such as a/b. You can put anything you want into LNT as an order, as long as it can be sorted by Python's sort function. -A Machine in LNT is the logical bucket which results are categorized by. +A Machine in LNT is the logical bucket which results are categorized by. Comparing results from the same machine is easy, across machines is harder. Sometimes machine can literally be a machine, but more abstractly, it can be any configuration you are interested in tracking. For example, to store results -from an Arm test machine, you could have a machine call "ArmMachine"; but, you +from an Arm test machine, you could have a machine call "ArmMachine"; but, you may want to break machines up further for example "ArmMachine-Release" "ArmMachine-Debug", when you compile the thing you want to test in two modes. When doing testing of LLVM, we often string all the useful parameters of the -configuration into one machines name:: +configuration into one machines name:: --- @@ -37,7 +36,7 @@ Runs and Samples ---------------- Samples are the actual data points LNT collects. Samples have a value, and -belong to a metric, for example a 4.00 second (value) compile time (metric). +belong to a metric, for example a 4.00 second (value) compile time (metric). Runs are the unit in which data is submitted. A Run represents one run through a set of tests. A run has a Order which it was run on, a Machine it ran on, and a set of Tests that were run, and for each Test @@ -54,7 +53,7 @@ Test Suites LNT uses the idea of a Test Suite to control what metrics are collected. Simply, the test suite acts as a definition of the data that should be stored about the tests that are being run. LNT currently comes with two default test suites. -The Nightly Test Suite (NTS) (which is run far more often than nightly now), +The Nightly Test Suite (NTS) (which is run far more often than nightly now), collects 6 metrics per test: compile time, compile status, execution time, execution status, score and size. 
The Compile (compile) Test Suite, is focused on metrics for compile quality: wall, system and user compile time, compile memory usage diff --git a/docs/contents.rst b/docs/contents.rst index e1546bca..979fe9e7 100644 --- a/docs/contents.rst +++ b/docs/contents.rst @@ -8,22 +8,22 @@ Contents intro - quickstart + tests - tools + running_server - tests + importing_data + + tools concepts api - importing_data + profiles developer_guide - profiles - Indices and tables ================== @@ -39,4 +39,3 @@ Module Listing :maxdepth: 2 modules/testing - diff --git a/docs/importing_data.rst b/docs/importing_data.rst index 9f3904fc..bd1ae7a0 100644 --- a/docs/importing_data.rst +++ b/docs/importing_data.rst @@ -6,20 +6,20 @@ Importing Data Importing Data in a Text File ----------------------------- -The LNT importreport command will import data in a simple text file format. The +The ``lnt importreport`` command will import data in a simple text file format. The command takes a space separated key value file and creates an LNT report file, which can be submitted to a LNT server. Example input file:: - foo.exec 123 + foo.execution_time 123 bar.size 456 foo/bar/baz.size 789 -The format is "test-name.metric", so exec and size are valid metrics for the -test suite you are submitting to. +The format is ``test-name.metric value``, so ``execution_time`` and ``size`` must be valid +metrics for the test suite you are submitting to. Example:: - echo -n "foo.exec 25\nbar.score 24.2\nbar/baz.size 110.0\n" > results.txt + echo -n "foo.execution_time 25\nbar.score 24.2\nbar/baz.size 110.0\n" > results.txt lnt importreport --machine=my-machine-name --order=1234 --testsuite=nts results.txt report.json lnt submit http://mylnt.com/db_default/submitRun report.json @@ -28,7 +28,7 @@ Example:: LNT Report File Format ---------------------- -The lnt importreport tool is an easy way to import data into LNTs test format. +The ``lnt importreport`` tool is an easy way to import data into LNTs test format. You can also create LNTs report data directly for additional flexibility. First, make sure you've understood the underlying :ref:`concepts` used by LNT. diff --git a/docs/intro.rst b/docs/intro.rst index 342c0699..e04e7470 100644 --- a/docs/intro.rst +++ b/docs/intro.rst @@ -8,116 +8,37 @@ of two main parts, a web application for accessing and visualizing performance data, and command line utilities to allow users to generate and submit test results to the server. -The package was originally written for use in testing LLVM compiler -technologies, but is designed to be usable for the performance testing of any -software. - -If you are an LLVM developer who is mostly interested in just using LNT to run -the test-suite against some compiler, then you should fast forward to the -:ref:`quickstart` or to the information on :ref:`tests`. - -LNT uses a simple and extensible format for interchanging data between the test -producers and the server; this allows the LNT server to receive and store data -for a wide variety of applications. +The package was originally written for use in testing LLVM compiler technologies, +but is designed to be usable for the performance testing of any software. LNT uses +a simple and extensible format for interchanging data between the test producers and +the server; this allows the LNT server to receive and store data for a wide variety +of applications. 
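+
+For illustration, a report submitted to the server is a small JSON document
+along these lines (this is only a rough sketch; the field names shown follow
+the nightly test suite, and the exact schema is described in the documentation
+on importing data)::
+
+    {
+        "format_version": "2",
+        "machine": {"name": "my-machine-name"},
+        "run": {
+            "start_time": "2016-04-07T09:33:48",
+            "end_time": "2016-04-07T14:25:52",
+            "llvm_project_revision": "1234"
+        },
+        "tests": [
+            {"name": "nts.foo", "execution_time": [0.1235]}
+        ]
+    }
+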
Both the LNT client and server are written in Python, however the test data itself can be passed in one of several formats, including property lists and JSON. This makes it easy to produce test results from almost any language. +.. _installation: Installation ------------ -If you are only interested in using LNT to run tests locally, see the -:ref:`quickstart`. - -If you want to run an LNT server, you will need to perform the following -additional steps: - - 2. Create a new LNT installation:: - - lnt create path/to/install-dir - - This will create the LNT configuration file, the default database, and a - .wsgi wrapper to create the application. You can execute the generated app - directly to run with the builtin web server, or use:: - - lnt runserver path/to/install-dir - - which provides additional command line options. Neither of these servers is - recommended for production use. - - 3. Edit the generated 'lnt.cfg' file if necessary, for example to: - - a. Update the databases list. - - b. Update the public URL the server is visible at. - - c. Update the nt_emailer configuration. - - 4. Add the 'lnt.wsgi' app to your Apache configuration. You should set also - configure the WSGIDaemonProcess and WSGIProcessGroup variables if not - already done. - - If running in a virtualenv you will need to configure that as well; see the - `modwsgi wiki `_. - -For production servers, you should consider using a full DBMS like PostgreSQL. -To create an LNT instance with PostgreSQL backend, you need to do this instead: +You can install the latest stable release of LNT from PyPI. We recommend doing +that from a virtual environment:: - 1. Create an LNT database in PostgreSQL, also make sure the user has - write permission to the database:: + python3 -m venv .venv + source .venv/bin/activate + pip install llvm-lnt - CREATE DATABASE "lnt.db" +This will install the client-side tools. If you also want to run a production +server, you should instead include the server-side optional requirements:: - 2. Then create LNT installation:: + pip install "llvm-lnt[server]" - lnt create path/to/install-dir --db-dir postgresql://user@host +That's it! ``lnt`` should now be accessible from the virtual environment. - 3. Run server normally:: - - lnt runserver path/to/install-dir - -Architecture ------------- - -The LNT web app is currently implemented as a Flask WSGI web app, with Jinja2 -for the templating engine. My hope is to eventually move to a more AJAXy web -interface. - -The database layer uses SQLAlchemy for its ORM, and is typically backed by -SQLite, although I have tested on MySQL on the past, and supporting other -databases should be trivial. My plan is to always support SQLite as this allows -the possibility of developers easily running their own LNT installation for -viewing nightly test results, and to run with whatever DB makes the most sense -on the server. - -Running a LNT Server Locally ----------------------------- - -LNT can accommodate many more users in the production config. In production: -- Postgres or MySQL should be used as the database. -- A proper wsgi server should be used, in front of a proxy like Nginx or Apache. - -To install the extra packages for the server config:: - - virtualenv venv - . ./venv/bin/activate - pip install -r requirements.server.txt - lnt create path/to/data_dir --db-dir postgresql://user@host # data_dir path will be where lnt data will go. - cd deployment - # Now edit app_wrapper.py to have your path/to/data_dir path and the log-file below. 
- gunicorn app_wrapper:app --bind 0.0.0.0:8000 --workers 8 --timeout 300 --name lnt_server --log-file /var/log/lnt/lnt.log --access-logfile /var/log/lnt/gunicorn_access.log --max-requests 250000 - - -Running a LNT Server via Docker -------------------------------- - -We provide a Docker Compose setup with Docker containers that can be used to -easily bring up a fully working production server within minutes. The container -can be built and run with:: - - docker compose --file docker/compose.yaml --env-file up - -```` should be the path to a file containing environment variables -required by the containers. Please refer to the Docker Compose file for details. +If you are an LLVM developer who is mostly interested in just using LNT to run +the test-suite against some compiler, then you should fast forward to the section +on :ref:`running tests `. If you want to run your own LNT server, jump to +the section on :ref:`running a server `. Otherwise, jump to the +:ref:`table of contents ` to get started. diff --git a/docs/quickstart.rst b/docs/quickstart.rst deleted file mode 100644 index ae5af2d5..00000000 --- a/docs/quickstart.rst +++ /dev/null @@ -1,118 +0,0 @@ -.. _quickstart: - -Quickstart Guide -================ - -This quickstart guide is designed for LLVM developers who are primarily -interested in using LNT to test compilers using the LLVM test-suite. - -Installation ------------- - -You can install the latest stable release of LNT from PyPI. We recommend doing -that from a virtual environment:: - - python3 -m venv .venv - source .venv/bin/activate - pip install llvm-lnt - -This will install the client-side tools. If you also want to run a production -server, you should instead include the server-side optional requirements:: - - pip install "llvm-lnt[server]" - -That's it! ``lnt`` should now be accessible from the virtual environment. - - -Running Tests -------------- - -To execute the LLVM test-suite using LNT you use the ``lnt runtest`` -command. The information below should be enough to get you started, but see the -:ref:`tests` section for more complete documentation. - -#. Checkout the LLVM test-suite, if you haven't already:: - - git clone https://github.com/llvm/llvm-test-suite.git ~/llvm-test-suite - - You should always keep the test-suite directory itself clean (that is, never - do a configure inside your test suite). Make sure not to check it out into - the LLVM projects directory, as LLVM's configure/make build will then want to - automatically configure it for you. - -#. Execute the ``lnt runtest test-suite`` test producer, point it at the test suite and - the compiler you want to test:: - - lnt runtest test-suite \ - --sandbox /tmp/BAR \ - --cc ~/llvm.obj.64/Release+Asserts/bin/clang \ - --cxx ~/llvm.obj.64/Release+Asserts/bin/clang++ \ - --test-suite ~/llvm-test-suite \ - --cmake-cache Release - - The ``SANDBOX`` value is a path to where the test suite build products and - results will be stored (inside a timestamped directory, by default). - - We recommend adding ``--build-tool-options "-k"`` (if you are using ``make``) - or ``--build-tool-options "-k 0"`` (if you are using ``ninja``). This ensures - that the build tool carries on building even if there is a compilation - failure in one of the tests. Without these options, every test after the - compilation failure will not be compiled and will be reported as a missing - executable. - -#. On most systems, the execution time results will be a bit noisy. 
There are
-   a range of things you can do to reduce noisiness (with LNT runtest test-suite
-   command line options when available between brackets):
-
-   * Only build the benchmarks in parallel, but do the actual running of the
-     benchmark code at most one at a time. (``--threads 1 --build-threads 6``).
-     Of course, when you're also interested in the measured compile time,
-     you should also build sequentially. (``--threads 1 --build-threads 1``).
-   * When running under linux: Make lnt use linux perf to get more accurate
-     timing for short-running benchmarks (``--use-perf=1``)
-   * Pin the running benchmark to a specific core, so the OS doesn't move the
-     benchmark process from core to core. (Under linux:
-     ``--make-param="RUNUNDER=taskset -c 1"``)
-   * Only run the programs that are marked as a benchmark; some of the tests
-     in the test-suite are not intended to be used as a benchmark.
-     (``--benchmarking-only``)
-   * Make sure each program gets run multiple times, so that LNT has a higher
-     chance of recognizing which programs are inherently noisy
-     (``--multisample=5``)
-   * Disable frequency scaling / turbo boost. In case of thermal throttling it
-     can skew the results.
-   * Disable as many processes or services as possible on the target system.
-
-
-Viewing Results
----------------
-
-By default, ``lnt runtest test-suite`` will show the passes and failures after doing a
-run, but if you are interested in viewing the result data in more detail you
-should install a local LNT instance to submit the results to.
-
-You can create a local LNT instance with, e.g.::
-
-    lnt create ~/myperfdb
-
-This will create an LNT instance at ``~/myperfdb`` which includes the
-configuration of the LNT application and a SQLite database for storing the
-results.
-
-Once you have a local instance, you can either submit results directly with::
-
-    lnt import ~/myperfdb SANDBOX/test-/report.json
-
-or as part of a run with::
-
-    lnt runtest --submit ~/myperfdb nt ... arguments ...
-
-Once you have submitted results into a database, you can run the LNT web UI
-with::
-
-    lnt runserver ~/myperfdb
-
-which runs the server on ``http://localhost:8000`` by default.
-
-In the future, LNT will grow a robust set of command line tools to allow
-investigation of performance results without having to use the web UI.
diff --git a/docs/running_server.rst b/docs/running_server.rst
new file mode 100644
index 00000000..785f81e5
--- /dev/null
+++ b/docs/running_server.rst
@@ -0,0 +1,57 @@
+.. _running_server:
+
+Running a LNT Server
+====================
+
+Running a LNT server locally is easy and can be sufficient for basic tasks. To do
+so:
+
+#. Install ``lnt`` as explained in the :ref:`installation section <installation>`.
+
+#. Create a LNT installation::
+
+       lnt create path/to/installation
+
+   This will create the LNT configuration file and the default database at the
+   specified path.
+
+#. You can then run the server on that installation::
+
+       lnt runserver path/to/installation
+
+   Note that running the server in this way is not recommended for production, since
+   this server is single-threaded and uses a SQLite database.
+
+#. You are now ready to submit data to the server. See the section on :ref:`importing data <importing_data>`
+   for details.
+
+#. While the above is enough for most use cases, you can also customize your installation.
+   To do so, edit the generated ``lnt.cfg``, for example to:
+
+   a. Update the databases list.
+   b. Update the public URL the server is visible at.
+   c. Update the ``nt_emailer`` configuration.
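+
+   For illustration, the relevant settings in a generated ``lnt.cfg`` (a plain
+   Python file) look roughly like the sketch below. This is only a sketch: the
+   exact keys and defaults depend on your LNT version, so treat the comments in
+   the generated file itself as the reference::
+
+       # Directory containing the instance data, relative to this file.
+       db_dir = '.'
+
+       # a. The databases served by this instance.
+       databases = {
+           'default': {'path': 'data/lnt.db'},
+       }
+
+       # b. The public URL the server is visible at.
+       zorgURL = 'http://localhost:8000'
+
+       # c. The nt_emailer configuration (email reports disabled in this sketch).
+       nt_emailer = {
+           'enabled': False,
+           'host': None,
+           'from': None,
+           'templates': {},
+       }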
+
+
+Server Architecture
+-------------------
+
+The LNT web app is currently implemented as a Flask WSGI web app, with Jinja2
+for the templating engine. The hope is to eventually move to a more AJAXy web
+interface. The database layer uses SQLAlchemy for its ORM, and is typically
+backed by SQLite or Postgres.
+
+Running a Production Server on Docker
+-------------------------------------
+
+We provide a Docker Compose service that can be used to easily bring up a fully working
+production server within minutes. The service can be built and run with::
+
+    docker compose --file docker/compose.yaml --env-file <env-file> up
+
+``<env-file>`` should be the path to a file containing environment variables
+required by the containers. Please refer to the Docker Compose file for details.
+This service runs a LNT production web server attached to a Postgres database.
+For production use, we recommend using this service and tweaking the desired
+aspects in your custom setup (for example redirecting ports or changing volume
+binds).
diff --git a/docs/tests.rst b/docs/tests.rst
index dd769b39..dd0747bd 100644
--- a/docs/tests.rst
+++ b/docs/tests.rst
@@ -1,12 +1,90 @@
 .. _tests:
 
+Running Tests
+=============
+
+Quickstart
+----------
+
+To execute the LLVM test-suite using LNT, use the ``lnt runtest`` command. The information
+below should be enough to get you started, but see the sections below for more complete
+documentation.
+
+#. Install ``lnt`` as explained in the :ref:`installation section <installation>`.
+
+#. Make sure ``lit`` is installed, for example with ``pip install lit`` or via a
+   monorepo installation accessible in your ``$PATH``. By default, ``lnt`` will
+   look for a binary named ``llvm-lit`` in your ``$PATH``. Depending on how you
+   install ``lit``, you may have to point ``lnt`` to the right binary by using
+   the ``--use-lit <path-to-lit>`` flag in the command below.
+
+#. Checkout the LLVM test-suite, if you haven't already::
+
+       git clone https://github.com/llvm/llvm-test-suite.git llvm-test-suite
+
+   You should always keep the test-suite directory itself clean (that is, never
+   do a configure inside your test suite). Make sure not to check it out into
+   the LLVM projects directory, as LLVM's configure/make build will then want to
+   automatically configure it for you.
+
+#. Execute the ``lnt runtest test-suite`` test producer, point it at the test suite and
+   the compiler you want to test::
+
+       lnt runtest test-suite --sandbox $PWD/sandbox \
+           --cc clang \
+           --cxx clang++ \
+           --test-suite $PWD/llvm-test-suite \
+           --cmake-cache Release
+
+   The ``--sandbox`` argument is a path to where the test suite build products and
+   results will be stored (inside a timestamped directory, by default).
+
+   We recommend adding ``--build-tool-options "-k"`` (if you are using ``make``)
+   or ``--build-tool-options "-k 0"`` (if you are using ``ninja``). This ensures
+   that the build tool carries on building even if there is a compilation
+   failure in one of the tests. Without these options, every test after the
+   compilation failure will not be compiled and will be reported as a missing
+   executable.
+
+#. If you already have a LNT server instance running, you can submit these results to it
+   by passing ``--submit <server url>``.
+
+#. On most systems, the execution time results will be a bit noisy. There are
+   a range of things you can do to reduce noise:
+
+   * Only build the benchmarks in parallel, but do the actual running of the
+     benchmark code at most one at a time (use ``--threads 1 --build-threads 6``).
+ Of course, when you're also interested in the measured compile time, + you should also build sequentially (use ``--threads 1 --build-threads 1``). + * When running on linux: Make ``lnt`` use ``perf`` to get more accurate + timing for short-running benchmarks (use ``--use-perf=1``). + * Pin the running benchmark to a specific core, so the OS doesn't move the + benchmark process from core to core (on linux, use ``--make-param="RUNUNDER=taskset -c 1"``). + * Only run the programs that are marked as a benchmark; some of the tests + in the test-suite are not intended to be used as a benchmark (use ``--benchmarking-only``). + * Make sure each program gets run multiple times, so that LNT has a higher + chance of recognizing which programs are inherently noisy (use ``--multisample=5``). + * Disable frequency scaling / turbo boost. In case of thermal throttling it + can skew the results. + * Disable as many processes or services as possible on the target system. + + +Viewing Results +--------------- + +By default, ``lnt runtest test-suite`` will show the passes and failures after doing a +run, but if you are interested in viewing the result data in more detail you should install +a local LNT instance to submit the results to. See the sections on :ref:`running a server ` +and :ref:`importing data ` for instructions on how to do that. + + Test Producers -============== +-------------- On the client-side, LNT comes with a number of built-in test data producers. -This section focuses on the LLVM test-suite (aka nightly test) generator, since -it is the primary test run using the LNT infrastructure, but note that LNT also -includes tests for other interesting pieces of data, for example Clang +This documentation focuses on the LLVM test-suite (aka nightly test) generator, +since it is the primary test run using the LNT infrastructure, but note that LNT +also includes tests for other interesting pieces of data, for example Clang compile-time performance. LNT also makes it easy to add new test data producers and includes examples of @@ -14,29 +92,9 @@ custom data importers (e.g., to import buildbot build information into) and dynamic test data generators (e.g., abusing the infrastructure to plot graphs, for example). -Running a Local Server ----------------------- -It is useful to set up a local LNT server to view the results of tests, either -for personal use or to preview results before submitting them to a public -server. To set up a one-off server for testing:: - - # Create a new installation in /tmp/FOO. - $ lnt create /tmp/FOO - created LNT configuration in '/tmp/FOO' - ... - - # Run a local LNT server. - $ lnt runserver /tmp/FOO &> /tmp/FOO/runserver.log & - [2] 69694 - - # Watch the server log. - $ tail -f /tmp/FOO/runserver.log - * Running on http://localhost:8000/ - ... - -Running Tests -------------- +Built-in Tests +-------------- The built-in tests are designed to be run via the ``lnt`` tool. The following tools for working with built-in tests are available: @@ -56,8 +114,6 @@ following tools for working with built-in tests are available: runtest --help``. The following section provides specific documentation on the built-in tests. -Built-in Tests --------------- LLVM CMake test-suite ~~~~~~~~~~~~~~~~~~~~~ @@ -67,24 +123,148 @@ The llvm test-suite can be run with the ``test-suite`` built-in test. 
Running the test-suite via CMake and lit uses a different LNT test:: rm -rf /tmp/BAR - lnt runtest test-suite \ - --sandbox /tmp/BAR \ - --cc ~/llvm.obj.64/Release+Asserts/bin/clang \ - --cxx ~/llvm.obj.64/Release+Asserts/bin/clang++ \ - --use-cmake=/usr/local/bin/cmake \ - --use-lit=~/llvm/utils/lit/lit.py \ - --test-suite ~/llvm-test-suite \ + lnt runtest test-suite \ + --sandbox /tmp/BAR \ + --cc ~/llvm.obj.64/Release+Asserts/bin/clang \ + --cxx ~/llvm.obj.64/Release+Asserts/bin/clang++ \ + --use-cmake=/usr/local/bin/cmake \ + --use-lit=~/llvm/utils/lit/lit.py \ + --test-suite llvm-test-suite \ --cmake-cache Release Since the CMake test-suite uses lit to run the tests and compare their output, -LNT needs to know the path to your LLVM lit installation. The test-suite holds +LNT needs to know the path to your LLVM lit installation. The test-suite holds some common configurations in CMake caches. The ``--cmake-cache`` flag and the ``--cmake-define`` flag allow you to change how LNT configures cmake for the test-suite run. -LLVM Makefile test-suite (aka LLVM Nightly Test) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Capturing Linux perf profile info +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When using the CMake driver in the test-suite, LNT can also capture profile +information using linux perf. This can then be explored through the LNT webUI +as demonstrated at +http://blog.llvm.org/2016/06/using-lnt-to-track-performance.html . + +To capture these profiles, use command line option ``--use-perf=all``. A +typical command line using this for evaluating the performance of generated +code looks something like the following:: + + lnt runtest test-suite \ + --sandbox SANDBOX \ + --cc ~/bin/clang \ + --use-cmake=/usr/local/bin/cmake \ + --use-lit=~/llvm/utils/lit/lit.py \ + --test-suite llvm-test-suite \ + --benchmarking-only \ + --build-threads 8 \ + --threads 1 \ + --use-perf=all \ + --exec-multisample=5 \ + --run-under 'taskset -c 1' + + +Bisecting: ``--single-result`` and ``--single-result-predicate`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The LNT driver for the CMake-based test suite comes with helpers for bisecting conformance and performance changes with ``llvmlab bisect``. + +``llvmlab bisect`` is part of the ``zorg`` repository and allows easy bisection of some predicate through a build cache. The key to using ``llvmlab`` effectively is to design a good predicate command - one which exits with zero on 'pass' and nonzero on 'fail'. + +LNT normally runs one or more tests then produces a test report. It always exits with status zero unless an internal error occurred. The ``--single-result`` argument changes LNT's behaviour - it will only run one specific test and will apply a predicate to the result of that test to determine LNT's exit status. + +The ``--single-result-predicate`` argument defines the predicate to use. This is a Python expression that is executed in a context containing several pre-set variables: + + * ``status`` - Boolean passed or failed (True for passed, False for failed). + * ``exec_time`` - Execution time (note that ``exec`` is a reserved keyword in Python!) + * ``compile`` (or ``compile_time``) - Compilation time + +Any metrics returned from the test, such as "score" or "hash" are also added to the context. + +The default predicate is simply ``status`` - so this can be used to debug correctness regressions out of the box. 
More complex predicates are possible; for example ``exec_time < 3.0`` would bisect assuming that a 'good' result takes less than 3 seconds. + +Full example using ``llvmlab`` to debug a performance improvement:: + + llvmlab bisect --min-rev=261265 --max-rev=261369 \ + lnt runtest test-suite \ + --cc '%(path)s/bin/clang' \ + --sandbox SANDBOX \ + --test-suite /work/llvm-test-suite \ + --use-lit lit \ + --run-under 'taskset -c 5' \ + --cflags '-O3 -mthumb -mcpu=cortex-a57' \ + --single-result MultiSource/Benchmarks/TSVC/Expansion-flt/Expansion-flt \ + --single-result-predicate 'exec_time > 8.0' + + +Producing Diagnositic Reports +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The test-suite module can produce a diagnostic report which might be useful +for figuring out what is going on with a benchmark:: + + lnt runtest test-suite \ + --sandbox /tmp/BAR \ + --cc ~/llvm.obj.64/Release+Asserts/bin/clang \ + --cxx ~/llvm.obj.64/Release+Asserts/bin/clang++ \ + --use-cmake=/usr/local/bin/cmake \ + --use-lit=~/llvm/utils/lit/lit.py \ + --test-suite llvm-test-suite \ + --cmake-cache Release \ + --diagnose --only-test SingleSource/Benchmarks/Stanford/Bubblesort + +This will run the test-suite many times over, collecting useful information +in a report directory. The report collects many things like execution profiles, +compiler time reports, intermediate files, binary files, and build information. + + +Cross-compiling +~~~~~~~~~~~~~~~ + +The best way to run the test-suite in a cross-compiling setup with the +cmake driver is to use cmake's built-in support for cross-compiling as much as +possible. In practice, the recommended way to cross-compile is to use a cmake +toolchain file (see +https://cmake.org/cmake/help/v3.0/manual/cmake-toolchains.7.html#cross-compiling) + +An example command line for cross-compiling on an X86 machine, targeting +AArch64 linux, is:: + + lnt runtest test-suite \ + --sandbox SANDBOX \ + --test-suite /work/llvm-test-suite \ + --use-lit lit \ + --cppflags="-O3" \ + --run-under=$HOME/dev/aarch64-emu/aarch64-qemu.sh \ + --cmake-define=CMAKE_TOOLCHAIN_FILE:FILEPATH=$HOME/clang_aarch64_linux.cmake + +The key part here is the CMAKE_TOOLCHAIN_FILE define. As you're +cross-compiling, you may need a --run-under command as the produced binaries +probably won't run natively on your development machine, but something extra +needs to be done (e.g. running under a qemu simulator, or transferring the +binaries to a development board). This isn't explained further here. + +In your toolchain file, it's important to specify that the cmake variables +defining the toolchain must be cached in CMakeCache.txt, as that's where lnt +reads them from to figure out which compiler was used when needing to construct +metadata for the json report. An example is below. 
The important keywords to +make the variables appear in the CMakeCache.txt are "CACHE STRING "" FORCE":: + + $ cat clang_aarch64_linux.cmake + set(CMAKE_SYSTEM_NAME Linux ) + set(triple aarch64-linux-gnu ) + set(CMAKE_C_COMPILER /home/user/build/bin/clang CACHE STRING "" FORCE) + set(CMAKE_C_COMPILER_TARGET ${triple} CACHE STRING "" FORCE) + set(CMAKE_CXX_COMPILER /home/user/build/bin/clang++ CACHE STRING "" FORCE) + set(CMAKE_CXX_COMPILER_TARGET ${triple} CACHE STRING "" FORCE) + set(CMAKE_SYSROOT /home/user/aarch64-emu/sysroot-glibc-linaro-2.23-2016.11-aarch64-linux-gnu ) + set(CMAKE_C_COMPILER_EXTERNAL_TOOLCHAIN /home/user/aarch64-emu/gcc-linaro-6.2.1-2016.11-x86_64_aarch64-linux-gnu ) + set(CMAKE_CXX_COMPILER_EXTERNAL_TOOLCHAIN /home/user/aarch64-emu/gcc-linaro-6.2.1-2016.11-x86_64_aarch64-linux-gnu ) + + +[Deprecated] LLVM Makefile test-suite (aka LLVM Nightly Test) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. note:: The Makefile test-suite is deprecated. Consider using the cmake based ``lnt runtest test-suite`` mode instead. @@ -130,7 +310,7 @@ local build:: --cxx ~/llvm.obj.64/Release+Asserts/bin/clang++ \ --llvm-src ~/llvm \ --llvm-obj ~/llvm.obj.64 \ - --test-suite ~/llvm-test-suite \ + --test-suite llvm-test-suite \ TESTER_NAME \ -j 16 2010-04-17 23:46:40: using nickname: 'TESTER_NAME__clang_DEV__i386' @@ -254,130 +434,3 @@ environment to use for various commands: For more information, see the example tests in the LLVM test-suite repository under the ``LNT/Examples`` directory. - - - -Capturing Linux perf profile info -+++++++++++++++++++++++++++++++++ - -When using the CMake driver in the test-suite, LNT can also capture profile -information using linux perf. This can then be explored through the LNT webUI -as demonstrated at -http://blog.llvm.org/2016/06/using-lnt-to-track-performance.html . - -To capture these profiles, use command line option ``--use-perf=all``. A -typical command line using this for evaluating the performance of generated -code looks something like the following:: - - lnt runtest test-suite \ - --sandbox SANDBOX \ - --cc ~/bin/clang \ - --use-cmake=/usr/local/bin/cmake \ - --use-lit=~/llvm/utils/lit/lit.py \ - --test-suite ~/llvm-test-suite \ - --benchmarking-only \ - --build-threads 8 \ - --threads 1 \ - --use-perf=all \ - --exec-multisample=5 \ - --run-under 'taskset -c 1' - - -Bisecting: ``--single-result`` and ``--single-result-predicate`` -++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -The LNT driver for the CMake-based test suite comes with helpers for bisecting conformance and performance changes with ``llvmlab bisect``. - -``llvmlab bisect`` is part of the ``zorg`` repository and allows easy bisection of some predicate through a build cache. The key to using ``llvmlab`` effectively is to design a good predicate command - one which exits with zero on 'pass' and nonzero on 'fail'. - -LNT normally runs one or more tests then produces a test report. It always exits with status zero unless an internal error occurred. The ``--single-result`` argument changes LNT's behaviour - it will only run one specific test and will apply a predicate to the result of that test to determine LNT's exit status. - -The ``--single-result-predicate`` argument defines the predicate to use. This is a Python expression that is executed in a context containing several pre-set variables: - - * ``status`` - Boolean passed or failed (True for passed, False for failed). 
- * ``exec_time`` - Execution time (note that ``exec`` is a reserved keyword in Python!) - * ``compile`` (or ``compile_time``) - Compilation time - -Any metrics returned from the test, such as "score" or "hash" are also added to the context. - -The default predicate is simply ``status`` - so this can be used to debug correctness regressions out of the box. More complex predicates are possible; for example ``exec_time < 3.0`` would bisect assuming that a 'good' result takes less than 3 seconds. - -Full example using ``llvmlab`` to debug a performance improvement:: - - llvmlab bisect --min-rev=261265 --max-rev=261369 \ - lnt runtest test-suite \ - --cc '%(path)s/bin/clang' \ - --sandbox SANDBOX \ - --test-suite /work/llvm-test-suite \ - --use-lit lit \ - --run-under 'taskset -c 5' \ - --cflags '-O3 -mthumb -mcpu=cortex-a57' \ - --single-result MultiSource/Benchmarks/TSVC/Expansion-flt/Expansion-flt \ - --single-result-predicate 'exec_time > 8.0' - - -Producing Diagnositic Reports -+++++++++++++++++++++++++++++ - -The test-suite module can produce a diagnostic report which might be useful -for figuring out what is going on with a benchmark:: - - lnt runtest test-suite \ - --sandbox /tmp/BAR \ - --cc ~/llvm.obj.64/Release+Asserts/bin/clang \ - --cxx ~/llvm.obj.64/Release+Asserts/bin/clang++ \ - --use-cmake=/usr/local/bin/cmake \ - --use-lit=~/llvm/utils/lit/lit.py \ - --test-suite ~/llvm-test-suite \ - --cmake-cache Release \ - --diagnose --only-test SingleSource/Benchmarks/Stanford/Bubblesort - -This will run the test-suite many times over, collecting useful information -in a report directory. The report collects many things like execution profiles, -compiler time reports, intermediate files, binary files, and build information. - - -Cross-compiling -+++++++++++++++ - -The best way to run the test-suite in a cross-compiling setup with the -cmake driver is to use cmake's built-in support for cross-compiling as much as -possible. In practice, the recommended way to cross-compile is to use a cmake -toolchain file (see -https://cmake.org/cmake/help/v3.0/manual/cmake-toolchains.7.html#cross-compiling) - -An example command line for cross-compiling on an X86 machine, targeting -AArch64 linux, is:: - - lnt runtest test-suite \ - --sandbox SANDBOX \ - --test-suite /work/llvm-test-suite \ - --use-lit lit \ - --cppflags="-O3" \ - --run-under=$HOME/dev/aarch64-emu/aarch64-qemu.sh \ - --cmake-define=CMAKE_TOOLCHAIN_FILE:FILEPATH=$HOME/clang_aarch64_linux.cmake - -The key part here is the CMAKE_TOOLCHAIN_FILE define. As you're -cross-compiling, you may need a --run-under command as the produced binaries -probably won't run natively on your development machine, but something extra -needs to be done (e.g. running under a qemu simulator, or transferring the -binaries to a development board). This isn't explained further here. - -In your toolchain file, it's important to specify that the cmake variables -defining the toolchain must be cached in CMakeCache.txt, as that's where lnt -reads them from to figure out which compiler was used when needing to construct -metadata for the json report. An example is below. 
The important keywords to -make the variables appear in the CMakeCache.txt are "CACHE STRING "" FORCE":: - - $ cat clang_aarch64_linux.cmake - set(CMAKE_SYSTEM_NAME Linux ) - set(triple aarch64-linux-gnu ) - set(CMAKE_C_COMPILER /home/user/build/bin/clang CACHE STRING "" FORCE) - set(CMAKE_C_COMPILER_TARGET ${triple} CACHE STRING "" FORCE) - set(CMAKE_CXX_COMPILER /home/user/build/bin/clang++ CACHE STRING "" FORCE) - set(CMAKE_CXX_COMPILER_TARGET ${triple} CACHE STRING "" FORCE) - set(CMAKE_SYSROOT /home/user/aarch64-emu/sysroot-glibc-linaro-2.23-2016.11-aarch64-linux-gnu ) - set(CMAKE_C_COMPILER_EXTERNAL_TOOLCHAIN /home/user/aarch64-emu/gcc-linaro-6.2.1-2016.11-x86_64_aarch64-linux-gnu ) - set(CMAKE_CXX_COMPILER_EXTERNAL_TOOLCHAIN /home/user/aarch64-emu/gcc-linaro-6.2.1-2016.11-x86_64_aarch64-linux-gnu ) - - diff --git a/pyproject.toml b/pyproject.toml index 559f3b38..e397ddcd 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -128,7 +128,7 @@ commands = [ [tool.tox.env.flake8] description = "Run linter on the codebase" deps = [".[dev]"] -commands = [["flake8", "--statistics", "--exclude=./lnt/external/", "./lnt/", "./tests/", "./deployment/"]] +commands = [["flake8", "--statistics", "--exclude=./lnt/external/", "./lnt/", "./tests/"]] skip_install = true [tool.tox.env.mypy]