
Writing regression tests

James Davies edited this page May 18, 2021 · 10 revisions

RegtestData

A class that holds state information that allows a test to communicate with bytesalad (Artifactory). Attributes point to where input and truth data files live locally and on Artifactory, and where outputs from the test live locally.

RegtestData.input
RegtestData.input_remote
RegtestData.output
RegtestData.truth
RegtestData.truth_remote

Also provides some convenience methods to get the input and truth data from Artifactory into the local directory where the test is running.

RegtestData.get_data()
RegtestData.get_truth()
RegtestData.get_asn()
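
A minimal, hypothetical sketch of this state (the real class also performs the actual Artifactory downloads, environment selection, and more):

```python
# Hypothetical sketch of the RegtestData state described above; NOT the
# real implementation, which actually fetches files from Artifactory.
class RegtestDataSketch:
    def __init__(self):
        self.input = None          # local path to the input file
        self.input_remote = None   # Artifactory path to the input file
        self.output = None         # local path to the test's output file
        self.truth = None          # local path to the truth file
        self.truth_remote = None   # Artifactory path to the truth file

    def get_data(self, remote_path):
        # Record the remote path and the local filename it would be
        # downloaded to (the real method performs the download).
        self.input_remote = remote_path
        self.input = remote_path.rsplit("/", 1)[-1]
        return self.input

    def get_truth(self, remote_path):
        # Same bookkeeping for the truth file.
        self.truth_remote = remote_path
        self.truth = remote_path.rsplit("/", 1)[-1]
        return self.truth
```

After get_data and get_truth, a test typically sets rtdata.output itself to the filename the step or pipeline is expected to write, as in the example test below.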

Fixtures that return a RegtestData instance to the test:

  • rtdata returns a fresh RegtestData instance to each test that uses it.

  • rtdata_module returns a module-scoped RegtestData instance: fixtures and tests in the module can mutate it, but only one instance exists for all the tests in the module.
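
The scoping difference can be illustrated with a self-contained sketch (a plain dict stands in for RegtestData, and the fixture names here are hypothetical, not the real rtdata/rtdata_module fixtures):

```python
import pytest

def make_state():
    # Stand-in for constructing a RegtestData instance.
    return {"output": None}

@pytest.fixture  # function-scoped, like rtdata: a fresh instance per test
def rt():
    return make_state()

@pytest.fixture(scope="module")  # like rtdata_module: one shared instance
def rt_module():
    return make_state()

def test_first(rt, rt_module):
    rt["output"] = "a.fits"
    rt_module["output"] = "a.fits"

def test_second(rt, rt_module):
    # rt was rebuilt for this test; rt_module kept test_first's mutation.
    assert rt["output"] is None
    assert rt_module["output"] == "a.fits"
```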

JSON breadcrumbs get left behind on failure

  • If a test fails, a _results.json specfile is left behind for that test; Jenkins uses it to upload the test outputs to Artifactory.
  • Additionally, a JSON specfile that allows OKification and an ASDF file serializing the test's RegtestData instance are left behind and uploaded to Artifactory in our Jenkins builds. The okify_regtest script uses these to interactively OKify tests.

An example test using the rtdata fixture:

import pytest
from astropy.io.fits.diff import FITSDiff

from jwst.pipeline.collect_pipeline_cfgs import collect_pipeline_cfgs
from jwst.stpipe import Step

@pytest.mark.bigdata
def test_miri_image2_cal(rtdata, fitsdiff_default_kwargs):
    rtdata.get_data("miri/image/jw00001001001_01101_00001_mirimage_rate.fits")

    collect_pipeline_cfgs("config")
    args = ["config/calwebb_image2.cfg", rtdata.input]
    Step.from_cmdline(args)
    rtdata.output = "jw00001001001_01101_00001_mirimage_cal.fits"

    rtdata.get_truth("truth/test_miri_image2_cal/jw00001001001_01101_00001_mirimage_cal.fits")

    fitsdiff_default_kwargs["rtol"] = 0.0001
    diff = FITSDiff(rtdata.output, rtdata.truth, **fitsdiff_default_kwargs)
    assert diff.identical, diff.report()

Instructions for writing new regression tests

Follow these steps to write tests for the new regression test software.

  1. Identify the test that needs to be created. If a Jira ticket in the JP project does not exist, then create one.

  2. Determine what data is needed to create the test, if any. This could be data that already exists in the older regression tests, data from the SDP testing suite, or data that needs to be simulated.

  3. At the same time the input data is obtained, run it through the code to produce the expected outcome (truth) data, and check that this output is as expected.

  4. Upload the data to Artifactory. The input data should be uploaded to a directory under jwst-pipeline/dev/[instrument]/[exp_type], and the truth files to a directory under jwst-pipeline/dev/truth/[name of test].

  5. Write the test. Tests live in the jwst/regtest directory of the code repository, which contains examples of testing a step or a pipeline with single or multiple outputs. The file should follow the naming convention test_[instrument]_[exp_type].py; additional information can be appended after the exposure type if it helps clarify what the test is for.

  6. Each test should check a single file. If your step or pipeline produces multiple files, run the step or pipeline in a fixture and parametrize the test over the outputs: either over outputs from different steps of a pipeline, or over multiple outputs from the same step.

  7. Test the test locally by running pytest --bigdata jwst/regtest/ -k [name of test].

  8. Make a pull request to the main repository.
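
The pattern in step 6 might look like the following self-contained sketch, where run_pipeline_once stands in for invoking the real step or pipeline (in a real test, a module-scoped fixture would instead call Step.from_cmdline on rtdata_module.input; the filenames here are hypothetical):

```python
import pytest

def run_pipeline_once():
    # Stand-in for running the real pipeline; pretend it wrote two files.
    return {"cal": "jw00001_cal.fits", "i2d": "jw00001_i2d.fits"}

@pytest.fixture(scope="module")
def pipeline_outputs():
    # Module-scoped, so the "pipeline" runs only once for all the
    # parametrized test invocations below.
    return run_pipeline_once()

@pytest.mark.parametrize("suffix", ["cal", "i2d"])
def test_one_output(pipeline_outputs, suffix):
    # One invocation per output file; each checks exactly one file.
    assert pipeline_outputs[suffix] == f"jw00001_{suffix}.fits"
```

In a real regression test, the body of test_one_output would set rtdata.output, call rtdata.get_truth, and compare with FITSDiff, as in the example test earlier on this page.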