
Smoke testing

This covers all the areas that need to be thoroughly checked during the code freeze before a release, as well as how to test them. Each of the steps below should first be carried out in the valid, expected way and then in a malicious way, with the intent of trying to break the program. Any problems discovered should be raised as issues on the repository and, depending on severity, addressed either before or after the release.

Before doing any of these, ensure you are using an up-to-date version of the code on master and that all the settings files are using production settings (assuming you are on the production node).

Checkout

  • Ensure that all the resources are pulled down correctly from git (see the example after this list). The following should be on each node:
  • Linux:
    • QueueProcessors/
    • utils/
    • setup.py
    • requirements.txt
  • Windows (Webapp):
    • Webapp/
    • setup.py
    • requirements.txt
  • Windows (Utility machine):
    • EndOfRunMonitor/
    • utils/
    • setup.py
    • requirements.txt
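
A hedged sketch of checking this on a node (the branch and remote names are assumptions; adjust to the local setup):

git checkout master && git pull origin master
ls  # confirm the directories and files listed above for this node are present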

Unit tests

Run all the unit tests relevant to the node. Which tests to run will vary depending on which node you are testing, but they can be run with:

pytest <name_of_directory>

This is a quick and easy way to catch errors in the setup of the project.
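
For example, on the Linux node this might look like the following (a sketch only; the directories to test depend on the node, as listed under Checkout):

pytest QueueProcessors/ utils/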

Monitors

End of run monitor

  • Point the end of run monitor to a fake data archive (via the settings files)
  • Re-install the service using python isis_monitor_win_service.py install
  • Start the service
  • Update the lastrun.txt file for a given instrument and ensure that the data message is sent; this can be validated in Hawtio (see the sketch after this list)
  • Repeat this for all valid autoreduction instruments
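
A minimal sketch of these steps, assuming the service script accepts the standard pywin32 start command (otherwise start it from the Windows Services panel):

python isis_monitor_win_service.py install
python isis_monitor_win_service.py start
# Edit lastrun.txt for one instrument in the fake archive, increase the run number,
# and confirm that the corresponding message appears in Hawtio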

Queues

  • Start both of the QueueProcessors on the Linux node using QueueProcessors/restart.sh
  • Use ps aux | grep python to validate that both of these services have started.
  • Use the manual submission script scripts/manual_submission_script/manual_submission.py to check that data can be sent through from every instrument (see the sketch after this list).
  • Check the database to validate that all the runs made it there
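
A sketch of this check, assuming the manual submission script takes an instrument name and run number as shown in the Database section below (both values are placeholders here):

./QueueProcessors/restart.sh
ps aux | grep python
python scripts/manual_submission_script/manual_submission.py <INST> <run_number>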

WebApp

  • Boot up the webapp using Apache (see the production installation instructions for how to do this).
  • Ensure that the webapp is visible from outside of the local environment, e.g. by going to the URL from another machine (see the example after this list)
  • Test the basic functionality of the webapp:
    • Run inspection
    • Run resubmission
    • All the navigation works
    • New runs appear when the database is changed
  • Test that the webapp admin content is working
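
A quick external visibility check, run from another machine (the host name is a placeholder):

curl -I http://<webapp-host>/

An HTTP 200 (or a redirect to the login page) indicates the webapp is reachable from outside the local environment.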

Database

  • Back up (dump) the data first, in case a revert is required due to an improper flush:
python manage.py dumpdata > db.json
  • Check that db.json contains something that looks sensible
  • Some data should be preserved (such as user details and static data: instruments and status flags)
  • Selectively delete data from the following tables, in this order (see the sketch at the end of this section):
reduction_variables_runvariable
reduction_variables_instrumentvariable
reduction_variables_variable
reduction_viewer_datalocation
reduction_viewer_reductionlocation
reduction_viewer_reductionrun
reduction_viewer_experiment
  • Submit one run per instrument to ensure that all the options are valid on the webapp:
python scripts/manual_submission/manual_submission.py <INST> <run_number>
  • Once the database changes are validated, db.json can be removed
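
A minimal sketch of the deletion step, assuming whole tables are being cleared via the database shell that Django opens (add WHERE clauses if only some rows should be removed):

python manage.py dbshell
DELETE FROM reduction_variables_runvariable;
DELETE FROM reduction_variables_instrumentvariable;
DELETE FROM reduction_variables_variable;
DELETE FROM reduction_viewer_datalocation;
DELETE FROM reduction_viewer_reductionlocation;
DELETE FROM reduction_viewer_reductionrun;
DELETE FROM reduction_viewer_experiment;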
