HYBRID Testing Standards and Practices - idaholab/HYBRID GitHub Wiki

Tests in Hybrid

Hybrid has two types of high-level tests: Modelica tests and Raven tests. The Modelica tests can be further broken down into system tests and unit tests.

Regression tests in Hybrid verify the consistent performance of Raven workflows and physical models, ensuring that values do not change as the result of changes in the code. The Modelica tests comprise system tests and unit tests. Unit tests exercise very small portions of code, usually individual component models or functions; they help pinpoint where larger models may be breaking. System tests are composed of several interconnected components, each of which is also tested individually as a unit. System tests exercise the underlying Dymola solution methodology as it breaks down the set of interconnected equations.

Analytic Tests

The best regression and unit tests are analytic. By this, we mean tests with outputs that have been calculated by hand and should never change, regardless of code changes. Stronger than consistency checks, analytic tests assure developers that code behavior has not changed in undesired ways.

When possible, the user should develop unit tests that are analytic in nature. Such analytic tests should be documented in the Examples file of the corresponding Modelica model within the NHES model.
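To illustrate the analytic idea, the sketch below checks a crude numerical integration of dx/dt = -x/tau against the hand-derived solution x(t) = x0*exp(-t/tau). The model, function names, and tolerances are invented for illustration only and do not correspond to any actual NHES component; real Hybrid tests exercise Modelica models through the test harness instead.

```python
import math

def simulate_first_order(tau, x0, t, dt=1e-4):
    """Forward-Euler integration of dx/dt = -x/tau, standing in for a
    simulated component model (hypothetical example)."""
    x = x0
    for _ in range(int(round(t / dt))):
        x += dt * (-x / tau)
    return x

def test_analytic_decay():
    # Hand-calculated reference: x(t) = x0 * exp(-t/tau).
    # This value never changes, regardless of code changes.
    tau, x0, t = 2.0, 10.0, 1.0
    expected = x0 * math.exp(-t / tau)
    simulated = simulate_first_order(tau, x0, t)
    assert abs(simulated - expected) / abs(expected) < 1e-3

test_analytic_decay()
```

Unlike a pure consistency check, the expected value here comes from a closed-form derivation, so a failure indicates a genuine change in behavior rather than a stale reference file.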

Testing Methodology

The testing logic is fairly straightforward and follows the step-acceptance algorithm used within Dymola. Dymola simulations require a 'tolerance' value such that |local error| < tolerance_relative * |x_i| + tolerance_absolute. Nominally, the relative and absolute tolerances are equal (the absolute term is necessary for values very close to, but not equal to, zero). The testing harness compares simulation output against the reference file value by value, computing both the relative and the absolute difference (relative to the file in the gold folder). Both differences are used because Dymola nominally sets tolerance_relative = tolerance_absolute; the same tolerance should therefore be used in the regression tests, although it is up to the user to ensure that appropriate values are chosen. The value-by-value minimum of the absolute and relative differences is then reduced to a maximum value for each variable. If every variable's maximum error is smaller than the tolerance value, the simulation passes.
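A minimal Python sketch of the comparison just described, assuming the variable's trajectories have already been extracted from the .mat files into plain lists. The function name and signature are illustrative, not the harness's actual API.

```python
def variable_passes(test_values, gold_values, tolerance):
    """For each time point, take the minimum of the absolute and relative
    differences against the gold value; reduce those to the per-variable
    maximum and compare it against the tolerance (illustrative sketch)."""
    errors = []
    for x_test, x_gold in zip(test_values, gold_values):
        abs_diff = abs(x_test - x_gold)
        # Relative difference is taken relative to the gold value; guard
        # against division by zero when the gold value is exactly zero.
        rel_diff = abs_diff / abs(x_gold) if x_gold != 0 else float("inf")
        errors.append(min(abs_diff, rel_diff))
    return max(errors) <= tolerance
```

Taking the minimum of the two differences mirrors Dymola's combined criterion: large values are judged relatively, while values near zero are judged absolutely.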

Every .mat file in the gold folder is checked for a given test. Due to some uncontrolled differences between versions of Dymola, different versions can produce different micro-level results even for identical macro-level behavior. A strict comparison against a single reference would therefore flag errors where, when manually investigated, no true errors exist. The tester instead checks all of these reference files (of which there should still be at most one per version of Dymola), and if any of the files passes, the entire test passes. The output also provides a list of differing variables from each of the reference files, meaning that some variables may be listed multiple times if they failed against each of the reference files.
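The any-reference-passes logic can be sketched as follows, again assuming variable trajectories have already been read out of the .mat files into dictionaries; the names are hypothetical, not the harness's real interface.

```python
def max_error(test_values, gold_values):
    """Per-variable maximum of min(absolute, relative) differences."""
    errs = []
    for x_test, x_gold in zip(test_values, gold_values):
        abs_diff = abs(x_test - x_gold)
        rel_diff = abs_diff / abs(x_gold) if x_gold != 0 else float("inf")
        errs.append(min(abs_diff, rel_diff))
    return max(errs)

def run_test(test_result, gold_files, tolerance):
    """Compare a simulation against every gold reference; pass if any
    reference matches. Otherwise collect the failing variables from each
    file, so a variable can be listed more than once if it fails against
    several references (illustrative sketch)."""
    all_failures = []
    for gold in gold_files:
        failures = [name for name in gold
                    if max_error(test_result[name], gold[name]) > tolerance]
        if not failures:
            return True, []
        all_failures.extend(failures)
    return False, all_failures
```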