DIST‐S1 Beta Acceptance Testing Instructions - nasa/opera-sds-pge GitHub Wiki
This page contains instructions for performing Acceptance Testing for the DIST-S1 Beta delivery from the OPERA-ADT team. These instructions pertain to the latest version of the Interface release, currently SAS v0.0.6. These instructions assume the user has Docker installed on their local machine.
Acquiring the DIST-S1 Beta Docker Image
For testing delivered images
The image is currently hosted on JPL FN-Artifactory, which requires JPL VPN access and JPL credentials. You may also need to be added to the gov.nasa.jpl.opera.adt organization.
Once you have access, the container tarball delivery is available under
general/gov/nasa/jpl/opera/adt/dist_s1/r2.1/beta/dockerimg_dist_s1_beta_0.0.6.tar
NOTE: The current delivery's tarball is very large (>17 GiB), so consider downloading it to /data/tmp/
Test inputs/golden dataset are bundled with the image.
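Before downloading, it can help to confirm the target filesystem has enough room for the tarball. A minimal sketch of such a check, assuming an ~18 GiB headroom threshold (the threshold is our choice, not part of the delivery; `df -kP` output fields are POSIX-portable):

```shell
# Check free space before downloading the >17 GiB tarball.
# target defaults to the current directory; pass e.g. /data/tmp instead.
target="${1:-.}"

# df -kP: portable output; row 2, field 4 is available space in KiB.
avail_kib=$(df -kP "$target" | awk 'NR==2 {print $4}')
needed_kib=$((18 * 1024 * 1024))   # ~18 GiB of headroom (assumed threshold)

if [ "$avail_kib" -ge "$needed_kib" ]; then
    echo "OK: $((avail_kib / 1024 / 1024)) GiB free at $target"
else
    echo "WARNING: only $((avail_kib / 1024 / 1024)) GiB free at $target"
fi
```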
Loading the image into Docker
The first step in running the DIST-S1 image is to load it into Docker via the following command:
docker load -i dockerimg_dist_s1_beta_0.0.6.tar
This should add the Docker image to your local repository with the name opera/dist-s1 and the tag beta_0.0.6. Set a shell variable with the tag for use in the commands that follow:
tag="beta_0.0.6"
For testing iterated images
When errors are found, ADT pushes test images to GHCR so we can quickly evaluate fixes before they cut a new delivery:
docker pull ghcr.io/opera-adt/dist-s1:test
docker tag ghcr.io/opera-adt/dist-s1:test opera/dist-s1:test
docker rmi ghcr.io/opera-adt/dist-s1:test
tag="test"
Preparing the test data
For the current DIST-S1 Beta delivery, a copy of the test data is bundled into the SAS Docker image for use with the built-in unit test suite. This test data can be copied to the local filesystem so we can execute the dist-s1 container in a production-like fashion. The location the files are copied to is referred to throughout these instructions as <DIST_S1_DIR>.
- Create the output directory:
  mkdir -p <DIST_S1_DIR>/output_dir
- Create the runconfig directory and ensure it is writable:
  mkdir -p <DIST_S1_DIR>/runconfig_dir
- Create the scratch directory and ensure it is writable:
  mkdir -p <DIST_S1_DIR>/scratch_dir
First, start the dist-s1 container in detached mode with an infinite loop (looping because the container would otherwise exit immediately):
container_id=$(docker run -d --rm --name dist-s1 opera/dist-s1:$tag "while true; do : ; done")
NOTE: We give the container the name "dist-s1" for easy reference in the following command.
Next, to extract the test data:
docker cp dist-s1:/home/ops/dist-s1/tests/test_data <DIST_S1_DIR>
You should see something like Successfully copied 33.1MB to <DIST_S1_DIR>/.
Lastly, terminate the running dist-s1 container:
docker kill $container_id
Move the golden dataset runconfig to the runconfig directory:
mv <DIST_S1_DIR>/test_data/cropped/sample_runconfig_10SGD_cropped.yml <DIST_S1_DIR>/runconfig_dir/runconfig.yaml
You will then need to edit <DIST_S1_DIR>/runconfig_dir/runconfig.yaml:
- For all input paths (pre_rtc_copol, pre_rtc_crosspol, post_rtc_copol, and post_rtc_crosspol), replace test_data/cropped/ with /home/ops/input_dir/ (vim command: :%s/test_data\/cropped\//\/home\/ops\/input_dir\//g)
- Replace the dst_dir path with /home/ops/scratch_dir
- Replace the product_dst_dir path with /home/ops/output_dir
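If you prefer to script these edits rather than use vim, the same three substitutions can be applied with sed. A minimal sketch, demonstrated on a toy runconfig so it is self-contained: the sample values and file names below are made up for illustration, only the key names and target paths come from the list above, and the `sed -i` syntax shown is GNU sed (BSD/macOS sed needs `sed -i ''`):

```shell
# Build a toy runconfig to demonstrate the three edits (values are illustrative).
tmp=$(mktemp -d)
cat > "$tmp/runconfig.yaml" <<'EOF'
run_config:
  pre_rtc_copol:
    - test_data/cropped/example_pre_VV.tif
  post_rtc_crosspol:
    - test_data/cropped/example_post_VH.tif
  dst_dir: old/scratch
  product_dst_dir: old/output
EOF

# 1. Point every input path at the container's mounted input_dir
sed -i 's|test_data/cropped/|/home/ops/input_dir/|g' "$tmp/runconfig.yaml"
# 2. Replace the dst_dir path (anchored at line start so product_dst_dir is untouched)
sed -i 's|^\( *dst_dir:\).*|\1 /home/ops/scratch_dir|' "$tmp/runconfig.yaml"
# 3. Replace the product_dst_dir path
sed -i 's|^\( *product_dst_dir:\).*|\1 /home/ops/output_dir|' "$tmp/runconfig.yaml"

cat "$tmp/runconfig.yaml"
```

Using `|` as the sed delimiter avoids the backslash-escaped slashes needed in the vim command.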
Executing the DIST-S1 container on the sample test data
Change directory into <DIST_S1_DIR>:
cd <DIST_S1_DIR>/
We're now ready to execute the DIST-S1 Interface delivery. Run the following command to kick off execution with the test assets and write the output products to the mounted output_dir volume:
NOTE: Docker requires absolute paths for volume arguments, so, assuming you are running from within <DIST_S1_DIR>, the $(pwd) command can be used as a shortcut.
docker run --rm -i --tty -u $(id -u):$(id -g) \
-v $(pwd)/test_data/cropped:/home/ops/input_dir \
-v $(pwd)/runconfig_dir:/home/ops/runconfig_dir \
-v $(pwd)/output_dir:/home/ops/output_dir \
-v $(pwd)/scratch_dir:/home/ops/scratch_dir \
opera/dist-s1:$tag "/opt/miniforge/envs/dist-s1-env/bin/dist-s1 run_sas --runconfig_yml_path /home/ops/runconfig_dir/runconfig.yaml"
NOTE: For the delivered 0.0.8 image, use:
docker run --rm -i --tty -u $(id -u):$(id -g) \
-v $(pwd)/test_data/cropped:/home/ops/input_dir \
-v $(pwd)/runconfig_dir:/home/ops/runconfig_dir \
-v $(pwd)/output_dir:/home/ops/output_dir \
-v $(pwd)/scratch_dir:/home/ops/scratch_dir \
--entrypoint /opt/miniforge/envs/dist-s1-env/bin/dist-s1 \
opera/dist-s1:$tag run_sas --runconfig_yml_path /home/ops/runconfig_dir/runconfig.yaml
You should see many progress bars filling your screen before the container exits.
An output product consisting of three GeoTIFFs and one PNG browse image should be created in output_dir:
$ tree output_dir/
output_dir/
└── OPERA_L3_DIST-ALERT-S1_T10SGD_20250102T015857Z_20250505T232447Z_S1_30_v0.1
├── OPERA_L3_DIST-ALERT-S1_T10SGD_20250102T015857Z_20250505T232447Z_S1_30_v0.1_DIST-GEN-STATUS-ACQ.tif
├── OPERA_L3_DIST-ALERT-S1_T10SGD_20250102T015857Z_20250505T232447Z_S1_30_v0.1_DIST-GEN-STATUS.tif
├── OPERA_L3_DIST-ALERT-S1_T10SGD_20250102T015857Z_20250505T232447Z_S1_30_v0.1_GEN-METRIC.tif
└── OPERA_L3_DIST-ALERT-S1_T10SGD_20250102T015857Z_20250505T232447Z_S1_30_v0.1.png
1 directory, 4 files
NOTE: The 20250505T232447Z portion of this sample product name is the production time and will differ for each execution of the dist-s1 container. All other portions of the file name should match.
For this Acceptance Test, ensure the same set of expected files has been created on the local machine.
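That file check can be automated by globbing for the four expected suffixes. A minimal sketch, run here against a mocked-up product directory so it is self-contained; for a real check, point prod at the product directory under <DIST_S1_DIR>/output_dir instead (the abbreviated directory name below is illustrative):

```shell
# Mock up a product directory with the four expected files (name abbreviated).
tmp=$(mktemp -d)
prod="$tmp/OPERA_L3_DIST-ALERT-S1_T10SGD_example_v0.1"
mkdir -p "$prod"
base=$(basename "$prod")
touch "$prod/${base}_DIST-GEN-STATUS-ACQ.tif" \
      "$prod/${base}_DIST-GEN-STATUS.tif" \
      "$prod/${base}_GEN-METRIC.tif" \
      "$prod/${base}.png"

# Verify one file per expected suffix exists.
missing=0
for suffix in _DIST-GEN-STATUS-ACQ.tif _DIST-GEN-STATUS.tif _GEN-METRIC.tif .png; do
    ls "$prod"/*"$suffix" > /dev/null 2>&1 || { echo "Missing: $suffix"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "All expected product files present"
```

Note that the `*_DIST-GEN-STATUS.tif` glob does not also match the ACQ layer, since that file ends in `-ACQ.tif`.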
Comparing SAS output to golden dataset
First, get the output and golden product names (for ease of reference):
GOLDEN_PRODUCT=$(find test_data/golden_datasets/10SGD/ -maxdepth 1 -mindepth 1 -type d -name 'OPERA_L3_DIST-ALERT-S1_*' | rev | cut -d / -f 1 | rev | head -1)
OUTPUT_PRODUCT=$(find output_dir/ -maxdepth 1 -mindepth 1 -type d -name 'OPERA_L3_DIST-ALERT-S1_*' | rev | cut -d / -f 1 | rev | head -1)
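The rev | cut | rev pipeline above simply strips the leading directories from the find result; basename is an equivalent, more direct spelling. A self-contained sketch on a toy directory tree (the product directory name is illustrative):

```shell
# Build a toy output_dir containing one product directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/output_dir/OPERA_L3_DIST-ALERT-S1_T10SGD_example_v0.1"

# Equivalent to: find ... | rev | cut -d / -f 1 | rev | head -1
OUTPUT_PRODUCT=$(basename "$(find "$tmp/output_dir/" -maxdepth 1 -mindepth 1 \
    -type d -name 'OPERA_L3_DIST-ALERT-S1_*' | head -1)")
echo "$OUTPUT_PRODUCT"
# → OPERA_L3_DIST-ALERT-S1_T10SGD_example_v0.1
```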
The comparison between DIST-S1 products is provided by the SAS Python module. Mount the products into a new interactive container whose entrypoint is a Python interpreter in the DIST-S1 conda environment:
docker run --rm -i --tty --entrypoint /opt/miniforge/envs/dist-s1-env/bin/python \
-v $(pwd)/test_data/golden_datasets/10SGD:/home/ops/golden_product \
-v $(pwd)/output_dir:/home/ops/test_product \
-e OUTPUT_PRODUCT=$OUTPUT_PRODUCT -e GOLDEN_PRODUCT=$GOLDEN_PRODUCT \
opera/dist-s1:$tag
Copy the following script into the interpreter:
import os
from dist_s1.data_models.output_models import ProductDirectoryData
golden_product_name = os.environ['GOLDEN_PRODUCT']
test_product_name = os.environ['OUTPUT_PRODUCT']
golden_product = ProductDirectoryData.from_product_path(f'/home/ops/golden_product/{golden_product_name}')
test_product = ProductDirectoryData.from_product_path(f'/home/ops/test_product/{test_product_name}')
if golden_product == test_product:
    print('Products match')
else:
    print('Products do not match')
If Products match is printed, the test has passed.
Sample output
$ docker run --rm -i --tty --entrypoint /opt/miniforge/envs/dist-s1-env/bin/python \
> -v $(pwd)/test_data/golden_datasets/10SGD:/home/ops/golden_product \
> -v $(pwd)/output_dir:/home/ops/test_product \
> -e OUTPUT_PRODUCT=$OUTPUT_PRODUCT -e GOLDEN_PRODUCT=$GOLDEN_PRODUCT \
> opera/dist-s1:$tag
Python 3.13.2 | packaged by conda-forge | (main, Feb 17 2025, 14:10:22) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>>
>>> import os
... from dist_s1.data_models.output_models import ProductDirectoryData
...
... golden_product_name = os.environ['GOLDEN_PRODUCT']
... test_product_name = os.environ['OUTPUT_PRODUCT']
...
... golden_product = ProductDirectoryData.from_product_path(f'/home/ops/golden_product/{golden_product_name}')
... test_product = ProductDirectoryData.from_product_path(f'/home/ops/test_product/{test_product_name}')
...
... if golden_product == test_product:
... print('Products match')
... else:
... print('Products do not match')
...
Products match
>>>
Executing the unit test suite within the DIST-S1 container
To ensure the dist-s1 container is executing as expected, we can also run its built-in Python unit test suite from within the container:
docker run --rm opera/dist-s1:$tag 'cd dist-s1 && pytest tests'
For this Acceptance Test, all tests within the suite are expected to pass. A passing pytest report should match the following:
$ docker run --rm opera/dist-s1:$tag 'cd dist-s1 && pytest tests'
============================= test session starts ==============================
platform linux -- Python 3.13.2, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/ops/dist-s1
configfile: pyproject.toml
plugins: anyio-4.8.0, cov-6.0.0, typeguard-4.4.2
collected 29 items
tests/test_credentials.py ... [ 10%]
tests/test_main.py . [ 13%]
tests/test_output_models.py . [ 17%]
tests/test_package.py . [ 20%]
tests/test_proc.py .......... [ 55%]
tests/test_runconfig_model.py . [ 58%]
tests/test_workflows.py ............ [100%]
=============================== warnings summary ===============================
../../../opt/miniforge/envs/dist-s1-env/lib/python3.13/site-packages/botocore/utils.py:670
/opt/miniforge/envs/dist-s1-env/lib/python3.13/site-packages/botocore/utils.py:670: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
current_time = datetime.datetime.utcnow()
tests/test_output_models.py::test_product_directory_data_from_product_path
/opt/miniforge/envs/dist-s1-env/lib/python3.13/site-packages/dist_s1/data_models/output_models.py:200: UserWarning: Layer DIST-GEN-STATUS arrays do not match
warn(f'Layer {layer} arrays do not match', UserWarning)
tests/test_runconfig_model.py::test_input_data_model_from_cropped_dataset
/home/ops/dist-s1/tests/test_runconfig_model.py:101: FutureWarning: The behavior of DatetimeProperties.to_pydatetime is deprecated, in a future version this will return a Series containing python datetime objects instead of an ndarray. To retain the old behavior, call `np.array` on the result
pre_acq_dts = np.array(df[ind_pre & ind_burst].acq_dt.dt.to_pydatetime())
tests/test_runconfig_model.py::test_input_data_model_from_cropped_dataset
/home/ops/dist-s1/tests/test_runconfig_model.py:102: FutureWarning: The behavior of DatetimeProperties.to_pydatetime is deprecated, in a future version this will return a Series containing python datetime objects instead of an ndarray. To retain the old behavior, call `np.array` on the result
post_acq_dts = np.array(df[ind_post & ind_burst].acq_dt.dt.to_pydatetime())
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================== 29 passed, 4 warnings in 188.46s (0:03:08) ==================