Test Development Guide - AxiomBenchmark/Axiom GitHub Wiki

Test Execution Diagram

Axiom Tests Execution

The Axiom test application suite runs local JavaScript applets whose results are sent back to the benchmark server for reporting. Initially, the benchmark server is provided information on each test application (framework, version, script file, HTML file, test function names, and test descriptions) through TestProfiles.json. The benchmark then deploys a User Test Agent, which runs each test for each framework detailed in TestProfiles.json. Within each test applet, specific tests are orchestrated, such as lifecycle tests, CSS animation tests, and child object render tests, and the resulting metrics are stored in a JSON-formatted object. Each test function within the test applet is mounted to a global "UUT" object to which the User Test Agent has access. After each test finishes running, the benchmark client sends the JSON-formatted results in a callback to the benchmark server, where they are processed by the reporting engine and displayed back to the user. On the reporting engine, the user can view their benchmark results and statistically compare them to long-run averages as well as to other benchmark runs. For more information regarding client/server communication and application execution, see the Execution Guide. To learn more about the reporting engine and the statistical analyses, see the Reporting Guide.

Writing a Test App

The Axiom test applets stored in the testapps folder perform basic lifecycle tests, with React also running some CSS animation timing tests. As stated previously, each function mounted to the global UUT object runs an individual test. However, this is not the only possible test format: the only requirement is that the test result metrics are passed to the server in a callback as a JSON object. A developer who wishes to run a single test function that performs multiple tasks can therefore do so, as sketched below.
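For instance, a minimal applet in this style might consist of a single function that times several tasks and reports every metric in one object. The sketch below is illustrative only; the busy loops stand in for real framework work, and none of the names are part of Axiom itself:

    // Hypothetical single test function that performs multiple tasks
    // and reports all of its metrics in one JSON-formatted object.
    var testAll = function(callback) {
        var results = {};
        results.test = "Combined Test";

        // Task 1: time some rendering work (stubbed with a busy loop).
        var start = performance.now();
        for (var i = 0; i < 100000; i++) {} // placeholder for real render work
        results.render = (performance.now() - start) / 1000;

        // Task 2: time some update work.
        start = performance.now();
        for (var j = 0; j < 100000; j++) {} // placeholder for real update work
        results.update = (performance.now() - start) / 1000;

        callback(results); // hand all metrics back to the benchmark agent
    };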

Test Development

First, build the test applet in Axiom and store it in the testapps folder. The application should be designed to generate performance metrics, whether for a single-run score or a long-run average. Create an instance of a framework-specific object, and provide it with the test hook functions required for the intended test. For example, to create a lifecycle test for Vue, a Vue object's built-in lifecycle functions provide opportunities to take timestamps: a performance.now() call could be placed in the created() hook and another in the mounted() hook to measure how quickly a Vue component is mounted to the web page. The lifecycle test could be extended further by adding timestamps in hooks like updated() and destroyed(). The developer can take advantage of these lifecycle hooks, or time any other functionality of the application. Axiom is designed to be as platform- and framework-agnostic as possible, so virtually any test metric passed to the callback function as a JSON object is valid and parseable by the reporting engine.
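As a sketch of that approach, the component below records millisecond deltas into a timing object using Vue 2.x lifecycle hooks. The timing object and its field names (mount, dest) are assumptions for illustration; Axiom only requires that the final metrics reach the callback as a JSON object:

    // Illustrative lifecycle timing for a Vue 2.x component. A render
    // delta (e.g. timing.rend) could be captured the same way from the
    // appropriate hooks.
    var timing = {};
    var t0;

    var vm = new Vue({
        el: "#app",
        created: function() {
            t0 = performance.now(); // instance created; start the clock
        },
        mounted: function() {
            // Time from creation to insertion into the DOM, in milliseconds.
            timing.mount = performance.now() - t0;
        },
        beforeDestroy: function() {
            t0 = performance.now(); // teardown begins
        },
        destroyed: function() {
            timing.dest = performance.now() - t0; // teardown finished
        }
    });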

UUT Mounting

Next, mount the framework-specific object and the test functions to window. The window object is a global variable in JavaScript and can be used to communicate between the applets and the benchmark client. For this reason, the framework object and the test functions must be mounted to window, specifically window.UUT, in order to be run by both the client and the benchmark agent; window.UUT is the global object through which the two communicate. For example, the Vue applet contains the lines:

window.UUT.testA = testA;
window.UUT.testB = testB;
window.UUT.testC = testC;
window.UUT.vm = vm();

In this example, testA, testB, and testC are the three test functions mounted to window.UUT, and the vm() call creates a Vue object that is globally accessible. This step is crucial for communication between the client and the applet. After each benchmark is run by the agent, the UUT object is reloaded with the test function names for the next requested framework test.
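Conceptually, the agent side of this exchange looks something like the following (the actual control flow lives inside the benchmark client; sendToServer is an illustrative placeholder, not an Axiom API):

    // Sketch of the agent invoking a mounted test through window.UUT.
    window.UUT.testA(function(results) {
        // The client forwards this JSON object to the benchmark server,
        // then moves on to the next test in the profile.
        sendToServer(results); // illustrative placeholder
    });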

Callback Results

As mentioned previously, the reporting engine requires each test function to pass the JSON object of test results using a callback function. Each test function must be declared as follows:

var testA = function(callback) { ... };

This callback allows the test applet to deliver the test results to the benchmark agent and signals the agent to start the next test. The results are passed to the callback at the end of the test function (in this case, testA):

    var results = {};
    results.test = "Lifecycle Test A";
    results.render = timing.rend / 1000;
    results.destroy = timing.dest / 1000;
    results.mount = timing.mount / 1000;
    callback(results);

In this scenario, testA is a lifecycle test for a Vue.js object, measuring the time deltas for mounting, rendering, and destroying the Vue object. These test result names (render, destroy, and mount) are displayed verbatim in the reporting engine.

Thus, the names for the performance metrics are declared within the test applet.
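Putting the pieces together, a complete lifecycle test function might look roughly like the sketch below, reusing the timing object from the earlier Vue example (the render delta is assumed to have been recorded the same way):

    // Rough sketch of a complete lifecycle test: tear down the component,
    // collect the timing deltas, and report them through the callback.
    var testA = function(callback) {
        var results = {};
        results.test = "Lifecycle Test A";

        window.UUT.vm.$destroy(); // fires the destroy hooks, filling timing.dest

        // Convert the millisecond deltas to seconds; these names must match
        // the metric names described in TestProfiles.json.
        results.mount = timing.mount / 1000;
        results.destroy = timing.dest / 1000;
        results.render = timing.rend / 1000; // assumes a rend delta was recorded
        callback(results);
    };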

One may notice that these metrics don't include details about the JavaScript framework, version, framework ID, etc. This information is stored in TestProfiles.json and is provided to the benchmark server before the benchmark agent launches and the test applets execute. The following section details how to format TestProfiles.json so that the test agent has enough information to orchestrate these tests.

Adding the app to Axiom

Test Profiles (TestProfiles.json)

[TestProfiles.json](https://github.com/AxiomBenchmark/Axiom/blob/master/TestProfiles.json), as previously mentioned, holds all the information required by the benchmark agent, the benchmark client, and the reporting engine. In this JSON file, the user sets the framework, framework ID, framework version, test function names, test descriptions, and the performance metric descriptions displayed in the reporting engine. Here is an example [TestProfiles.json](https://github.com/AxiomBenchmark/Axiom/blob/master/TestProfiles.json) object for the Vue test applet:
{
    "id" : "vue",
    "framework" : "Vue",
    "version" : "1.0.0",
    "testapp_script" : "vue.js",
    "testapp_html" : "vue.html",
    "tests" : [
        {
            "name" : "Lifecycle Test A",
            "function" : "testA",
            "descriptions" : {
                "render" : "Average of render timing",
                "mount" : "Average of mount timing",
                "destroy" : "Average of destroy timing"
            }
        },
        {
            "name" : "Lifecycle Test B",
            "function" : "testB",
            "descriptions" : {
                "mount" : "Average of mount timing",
                "destroy" : "Average of destroy timing",
                "render" : "Average of render timing",
                "update" : "Average of update timing"
            }
        },
        {
            "name" : "Child Lifecycle Test",
            "function" : "testC",
            "descriptions" : {
                "mount" : "Average of mount timing",
                "destroy" : "Average of destroy timing",
                "render" : "Average of render timing",
                "update" : "Average of update timing"
            }
        }
    ]
}

Each applet's JSON object provides the general information about the framework type and version. The id key identifies the benchmark(s) being run to the reporting engine. The testapp_script and testapp_html keys specify the framework script and HTML to load for the specific applet. The tests key details each test function mounted to UUT in a JSON object array.

Each JSON sub-object provides the name of the test, the function name (which must identically match the test function name within the test applet), and the descriptions of each performance metric. These keys serve different purposes: the function key tells the benchmark agent which test to run, whereas the name key is displayed in the reporting engine. The descriptions key can summarize the test's functionality, purpose, performance metrics, etc.; the developer chooses how to use it, as it mainly serves as a future reference for the developer and has no functional effect on the benchmark.
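To make the relationship between these keys and the agent concrete, a consumer of the file could walk a profile roughly as follows. This is a sketch, not the actual agent code, and it assumes the file's top level is an array of profile objects:

    // Sketch: iterate one profile entry from TestProfiles.json.
    var profiles = require("./TestProfiles.json"); // assumes an array at top level
    var profile = profiles[0]; // e.g. the Vue entry

    profile.tests.forEach(function(test) {
        // test.function names the applet function mounted on window.UUT;
        // test.name and test.descriptions feed the reporting engine.
        console.log("Would run " + test.function + " (" + test.name + ")");
    });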

Build Process (/package.json scripts)

Once the test application is built, it needs to be added to Axiom's build process. This step transpiles the project into a format usable by the client, usually through Webpack, Babel, or a similar tool. For all test apps, these build scripts are defined as standard npm scripts in the package.json file, as in most Node applications. The main requirement is that the build destination is the public/testapp_bin folder (subfolders allowed) so that it is accessible to connecting clients.

Consider these scripts defined in package.json that are relevant to the build process:

"scripts": {
   "postinstall": "npm run build",
   "build": "npm run build_react && npm run build_copy",
   "build_copy": "copyfiles -u 1 tests/*.js public/test_bin && copyfiles -u 1 testapps/*.html public/testapp_bin",
   "build_react": "browserify -t reactify testapps/react.js -o public/testapp_bin/react.js",
   "clean": "del-cli public/testapp_bin/*.js && del-cli public/testapp_bin/*.html"
}

We see five scripts here.

postinstall runs automatically after the install process, as a step between install and start on most PaaS (platform as a service) implementations. Defining this script with npm run build gives us a place to trigger the build process.

Looking at the build script, its first step is build_react. This is where the developer comes in, deciding what these scripts will contain. In this example, the build_react script runs Browserify with the Reactify transform, with the output directed to the /public/testapp_bin folder. This will vary from framework to framework, and there are several ways to accomplish the same thing. The naming scheme for the scripts, the build order, and any additional steps like our build_copy example are arbitrary; a hypothetical addition for a Vue applet is sketched below.
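For example, a hypothetical build_vue step for a Vue applet could be wired in like this (the script name and the use of Browserify here are illustrative; any tool that lands the output in /public/testapp_bin works):

    "scripts": {
        "build": "npm run build_react && npm run build_vue && npm run build_copy",
        "build_vue": "browserify testapps/vue.js -o public/testapp_bin/vue.js"
    }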

In summary, as long as all the defined test applications are built in the /public/testapp_bin folder by the time Axiom starts, it is a valid build process.