Automating the Checking of Test Results against Expected Results

Here is a way you might use BVR decisions to define how test results are checked against expected results for any decision.

The walkthrough below uses the TLP example, but the method is the same for any decision:

(screenshot: the result-checking decision for the TLP example)

The changes collection used as input is created by the deep-diff JavaScript library, which can compare any two JSON documents. When you execute a test of the TLP decision, BizVR automatically finds the differences between the actual results and the expected results (assuming you have set up some expected results).
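If you want to see what those raw differences look like outside BizVR, here is a minimal sketch assuming Node.js and the deep-diff npm package (the sample documents are made up for illustration):

```js
// Compare expected and actual results with deep-diff (Node.js).
const { diff } = require('deep-diff');

const expected = { results: { overall_risk: 'medium' } };
const actual   = { results: { overall_risk: 'high' } };

// diff() returns undefined when the two documents are identical,
// otherwise an array with one record per difference.
const changes = diff(expected, actual) || [];

console.log(changes);
// => one record like { kind: 'E', path: ['results', 'overall_risk'], lhs: 'medium', rhs: 'high' }
```

Note that raw deep-diff records use a `kind` field and an array-valued `path`; the wrapped change structure shown below carries the same four pieces of information.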

As part of setting up a test case you will specify the decision that is to be used to check the results. It's a good idea to define this decision in the same project as the decision it is checking.

If you do not specify a decision of your own, then BVR will just use the generic deep-diff utility.

These "raw" differences are then fed to your test result comparison decision to determine if the differences are significant. You decide what is a significant difference.

The input it receives consists of a collection of changes (or differences).

The structure of the changes collection is this:

[{"change":{
     "type": "E",                             // can be E, D or N
     "path": "results -> overall_risk",       // json property affected - in the case of TLP its overall_risk
     "lhs": "medium",                         // expected value of overall_risk (medium)
     "rhs": "high"                            // actual value of overall_risk (high)
         }
}]
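Purely to show how the two shapes line up (the helper below is hypothetical, not something BizVR exposes), a raw deep-diff record could be rewrapped like this:

```js
// Hypothetical helper (not part of BizVR): rewrap a raw deep-diff record
// into the { change: { type, path, lhs, rhs } } shape shown above.
function wrapChange(record) {
  return {
    change: {
      type: record.kind,               // 'E', 'D' or 'N'
      path: record.path.join(' -> '),  // e.g. 'results -> overall_risk'
      lhs: record.lhs,                 // expected value
      rhs: record.rhs                  // actual value
    }
  };
}

// A raw deep-diff record for the example above:
const raw = { kind: 'E', path: ['results', 'overall_risk'], lhs: 'medium', rhs: 'high' };

console.log(wrapChange(raw));
// { change: { type: 'E', path: 'results -> overall_risk', lhs: 'medium', rhs: 'high' } }
```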

So instead of trying to write rules that compare all of the fields that might be in the actual and expected JSON (which would be different for every decision that creates them), we write rules about the generic change object, which always has the same four fields. The only thing we need to do in our decision table is set the path to the specific JSON property that we want to compare between the actual and expected results. Here's what a decision might look like that reviews the actual and expected results from the TLP example.
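To make the "only the path changes" point concrete, here is a small JavaScript sketch; the `approval_status` path is a made-up example for some other decision, not something BizVR defines:

```js
// A change record always has the same four fields (type, path, lhs, rhs),
// so a rule only needs to name the path it cares about.
const differsAt = path => change => change.change.path === path;

// For the TLP decision we look at overall_risk...
const tlpRule = differsAt('results -> overall_risk');

// ...and for some other decision we would only swap in a different path.
const otherRule = differsAt('results -> approval_status');

const changes = [
  { change: { type: 'E', path: 'results -> overall_risk', lhs: 'medium', rhs: 'high' } }
];

console.log(changes.some(tlpRule));   // true
console.log(changes.some(otherRule)); // false
```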

Defining Custom Rules to Compare Actual and Expected Results

In this table rule R1 just counts the number of times the property "overall_risk" is different between the actual and expected results. If there are no differences at all then R2 sets the total changes to zero. R3 counts the number of changes that resulted in a new property in the actual results that was not in the expected results. R4 counts the properties that were expected but not found in the actual results.

(screenshot: the counting decision table)
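Expressed as plain JavaScript instead of a decision table (the function and property names are illustrative, not BizVR's), those four counting rules amount to this:

```js
// Illustrative JavaScript equivalents of rules R1-R4.
function countChanges(changes) {
  return {
    // R1: differences affecting overall_risk
    overallRiskChanges: changes.filter(c => c.change.path === 'results -> overall_risk').length,
    // R2: total number of differences of any kind (zero when actual and expected match)
    totalChanges: changes.length,
    // R3: properties present in the actual results but not in the expected results (type N)
    newProperties: changes.filter(c => c.change.type === 'N').length,
    // R4: properties expected but not found in the actual results (type D)
    missingProperties: changes.filter(c => c.change.type === 'D').length
  };
}

const counts = countChanges([
  { change: { type: 'E', path: 'results -> overall_risk', lhs: 'medium', rhs: 'high' } }
]);
// => { overallRiskChanges: 1, totalChanges: 1, newProperties: 0, missingProperties: 0 }
```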

Once we have these counts (and we can count anything that's important to us) we can use a second table to check them.

(screenshot: the count-checking decision table)

In this table R1 gives a FAIL if there are more than 10 differences (of any kind). R2 suggests a REVIEW when there are 10 or fewer differences and the number of type E differences is less than 5, and so on.
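The second table boils down to threshold checks like these (again just a sketch; the PASS branch stands in for whatever further rules your own table defines):

```js
// Illustrative verdict logic mirroring the second table.
function verdict(totalChanges, typeEChanges) {
  if (totalChanges > 10) return 'FAIL';                        // R1
  if (totalChanges <= 10 && typeEChanges < 5) return 'REVIEW'; // R2
  return 'PASS';                                               // "and so on"
}

console.log(verdict(12, 3)); // FAIL
console.log(verdict(6, 2));  // REVIEW
```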

You can write any rules you want to check the actual results against the expected results.

What This Looks Like In Your Workspace

So in your TLP project space you might have this:

(screenshot: the TLP project space containing two decisions)

One decision holds your business rules, like this:

(screenshot: the TLP business-rules decision)

The other holds the rules you will use to compare actual and expected results during testing (illustrated above).

How to Compare JSON Results Using js and deep-diff