
About Cost Optimization Testing Tool (Version 1.0)


Cost Optimization Testing Tool:


An accessible and functionally unlimited framework for developing automated tests. It integrates easily with CI/CD tools and can test the UI and REST APIs within one test scenario, with the ability to interact with multiple databases and APIs.

We are developing a product that will fully cover your testing needs on projects of any complexity. It is no secret that high-quality software is software with complete test coverage. Our test framework for end2end testing will bring you a step closer to developing perfect software.

Installation

🚀 How to run the tool

We recommend using IntelliJ IDEA as your IDE when working with COTT.

The COTT application takes 2 arguments as input on startup:

  1. The name of the configuration xml file.

-c={configuration-file-name}.xml or --config={configuration-file-name}.xml

  2. The path to the folder with test resources containing your test scripts and configuration file.

-p={absolute-path-to-your-resources} or --path={absolute-path-to-test-resources}

Example 1: -c=cott-config.xml -p=/user/projects/test-resources

Example 2: --config=cott-config.xml --path=/user/projects/test-resources

(note that the filename and path are just examples; substitute your own filename and directories on your device)

Run from CLI

cd cost-optimization-testing-tool
mvn clean install
cd target
java -jar cott-with-dependencies.jar --config={configuration-file-name}.xml --path={absolute-path-to-test-resources}

Run using Docker (host network)

Note that you must substitute your own values for {image-name}, {configuration-file-name}, and {absolute-path-to-test-resources}.

You can pull the latest release image from packages

  • Pulling the image
docker pull ghcr.io/knubisoftofficial/cost-optimization-testing-tool:master 
docker run --rm --network=host --mount type=bind,source="{absolute-path-to-test-resources}",target="{absolute-path-to-test-resources}" "ghcr.io/knubisoftofficial/cost-optimization-testing-tool:master" "-c={configuration-file-name}.xml" "-p={absolute-path-to-test-resources}"

or you can use the run-docker-local shell script from the project root to run the docker image

docker pull ghcr.io/knubisoftofficial/cost-optimization-testing-tool:master
cd cost-optimization-testing-tool
./run-docker-local ghcr.io/knubisoftofficial/cost-optimization-testing-tool:master -c={configuration-file-name}.xml -p={absolute-path-to-test-resources}
  • Build your own image
cd cost-optimization-testing-tool
docker build . -t {image-name}
docker run --rm --network=host --mount type=bind,source="{absolute-path-to-test-resources}",target="{absolute-path-to-test-resources}" "{image-name}" "-c={configuration-file-name}.xml" "-p={absolute-path-to-test-resources}"

or you can use the run-docker-local shell script from the project root to run the docker image

cd cost-optimization-testing-tool
docker build . -t {image-name}
./run-docker-local {image-name} -c={configuration-file-name}.xml -p={absolute-path-to-test-resources}

Run via IDE (IntelliJ IDEA)

  • Option 1:
  1. Click on Add Configuration...

add_configuration.png

  2. Click on Add new, then select Application

add_new_application.png

  3. Enter the settings as in the screenshot, input your own values for

    --config={configuration-file-name}.xml --path={absolute-path-to-test-resources}

    and

    {your-working-directory} (usually this is the root of the project, should already be set by default)

settings.png

  4. Click Apply + OK

  5. Run the COTT

run.png

  • Option 2:
  1. Open src/test/java/com/knubisoft/cott/runner/TestRunner.java, right-click on the launch icon, then click on Modify Run Configuration

test_runner.png

  2. Repeat steps 3 to 5 from the first option.

🎯 Run using the site sample as the system under test

  • Clone the project with test resources

  • Run the site sample and report server

cd cott-test-resources
docker-compose -f docker-compose-site-sample.yaml up -d
docker-compose -f docker-compose-report-server.yaml up -d
  • Check if the site sample and report server started successfully

It usually takes 1-3 minutes for the site to launch.

site-sample: http://localhost:8080

report-server: http://localhost:1010

  • Run the test tool using one of the options in the "How to run the tool" section above

Use the following arguments:

--config=config-local.xml --path=/{your-part-of-path}/cott-test-resources
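For example, a complete CLI invocation would look like this (a sketch assuming the jar built in the "Run from CLI" section; the resources path is an example):

# the path below is an example - substitute your own cott-test-resources checkout location
java -jar cott-with-dependencies.jar --config=config-local.xml --path=/user/projects/cott-test-resources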



Main functions

  • 🌎 High level of cross-browser support
  • 📷 Screenshot capture of errors in a tested scenario
  • 📦 Consistent and clear tag structure out of the box
  • 🔀 UI/API testing within one test scenario
  • 🔑 Custom authorization
  • 🔌 Ability to work with multiple databases and APIs using aliases
  • 📊 Reporting Tool
  • 🔧 Unlimited Integrations

Cross Browser

COTT can launch testing scripts in such browsers as:

  • Google Chrome
  • Firefox
  • Safari
  • Edge
  • Opera

Cross Browser Testing with COTT is easy:

  • Convenient cross-browser configuration
  • Launch in 3 modes (Local, Remote, Docker)
  • <screenRecording> ability in Docker
  • Configuration flexibility for each browser
  • Versioning
  • Support for capabilities and options

Browser settings configuration structure:

    <ui enabled="true">
        <browserSettings>
            <takeScreenshotOfEachUiCommand enable="true"/>
            <webElementAutowait seconds="5"/>
            <browsers>

                <chrome enable="true" maximizedBrowserWindow="true" headlessMode="false" browserWindowSize="800x600">
                    <browserType>
                        <localBrowser/>
                    </browserType>
                    <chromeOptionsArguments>
                        <argument>--incognito</argument>
                    </chromeOptionsArguments>
                </chrome>

                <chrome enable="true" maximizedBrowserWindow="true" headlessMode="true" browserWindowSize="1920x1080">
                    <browserType>
                        <browserInDocker browserVersion="102.0" enableVNC="true">
                            <screenRecording enable="true" outputFolder="/Users/user/e2e-testing-scenarios"/>
                        </browserInDocker>
                    </browserType>
                    <chromeOptionsArguments>
                        <argument>--disable-popup-blocking</argument>
                    </chromeOptionsArguments>
                </chrome>

                <firefox enable="false" maximizedBrowserWindow="false" headlessMode="true">
                    <browserType>
                        <remoteBrowser browserVersion="101.0" remoteBrowserURL="http://localhost:4444/"/>
                    </browserType>
                </firefox>

                <edge enable="false" maximizedBrowserWindow="false" headlessMode="true">
                    <browserType>
                        <remoteBrowser browserVersion="100.0" remoteBrowserURL="http://localhost:4444/"/>
                    </browserType>
                </edge>

            </browsers>
        </browserSettings>
    </ui>

Everything listed above allows you to run test scenarios flexibly without local browsers.



Screenshots

The global configuration file has a tag called <takeScreenshotOfEachUiCommand>, a function that captures a screenshot of each step of UI scenario execution
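It is enabled inside the <browserSettings> block of the configuration shown earlier:

    <takeScreenshotOfEachUiCommand enable="true"/>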

Advantages:

  • Capturing screenshots without launching a browser
  • Capturing screenshots in docker
  • Automatic generation in scenario folder
  • Capturing exceptions

➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖

screenshot.png

➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖

This function also makes it easy to see the scenario step at which an error occurred, as <screenshotsLogging> captures the exception. Thus, we can visually track down the cause of a scenario error by running the scenario with different browser options



Consistent and clear tag structure out of the box

COTT has an easy-to-understand and functional set of tags with a uniform structure, which allows you to quickly master the skill of writing an automated test scenario


'UI script'
<ui comment="Start UI scenario">

    <input comment="Input 'Email'"
           locatorId="locator.email"
           value="[email protected]"/>

    <input comment="Input 'Password'"
           locatorId="locator.password"
           value="Testing#1"/>

    <click comment="Click on 'Log in' button" locatorId="locator.logInButton"/>

</ui>

<postgres comment="Check all user's in system" alias="Api_Alias" file="expected_2.json">
    <query>SELECT * FROM t_user</query>
</postgres>

'HTTP - request'
    <http comment="Re-login to get a new JWT token" alias="SHOPIZER">
        <post url="/api/v1/customer/login">
            <response code="200" file="expected_28.json"/>
            <body>
                <from file="request_28.json"/>
            </body>
        </post>
    </http>

⤵️⤵️⤵️

"The XSD schema makes working with tags even easier.
To compose a test scenario, you only need to set the necessary values ​​​​in the drop-down parameters"


UI/API testing within one test scenario

One of the main features of COTT is that within one test scenario, with a convenient structure and easy-to-read scripts, we can test the UI and perform REST API testing simultaneously, using a variety of approaches such as Data Driven Testing, Behavior Driven Development, Test Driven Development and others. We can also easily access the database and declare variables.

"Basically, perform end2end software testing:"

Advantages of end2end testing:

  • Coverage of all levels of the system
  • Regression
  • System description
  • CI/CD
  • Global testing report

"Tag's Structure - end2end script:"
   <ui comment="Start UI action">
        
        <click comment="Click on 'Add to cart' 'Spring in Action' book"
               locatorId="shopizer.addToCartSpringBook"/>
    </ui>


  <var comment="Create variable for shp_cart_code value " name="CART_CODE">
        <jpath>$.[0].content.[0].shp_cart_code</jpath>
    </var>


 <http comment="Make sure the order is added to the cart" alias="API">
        <get url="/api/v1/cart/{{CART_CODE}}">
            <response code="200" file="expected_18.json"/>
        </get>
    </http>


Custom authorization

In progress



Ability to work with multiple databases and APIs using aliases

When setting up a configuration file, each integration tag in the structure has a mandatory and unique alias parameter, through which the test scenario interacts with databases and APIs (a usage sketch follows the examples below).
Each service has an enabled flag: true or false

  • Configure the services you need and easily switch between them

Database Integrations

 <postgres alias="FIRST" enabled="true">
 <mongo alias="SECOND" enabled="true">
 <elasticsearch alias="THIRD" enabled="false">
 <mysql alias="FOURTH" enabled="false">

API Integrations

    <apiIntegration>
         <api alias="FIRST" url="http://localhost:4000/"/>
    </apiIntegration>

    <apiIntegration>
         <api alias="SECOND" url="http://localhost:8081/"/>
    </apiIntegration>

    <apiIntegration>
         <api alias="THIRD" url="http://localhost:8082/"/>
    </apiIntegration>
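As a sketch of how these aliases are used inside a scenario (reusing the <http> structure shown elsewhere in this wiki; the /api/v1/health endpoint and expected files are placeholders):

    <http comment="Check the first service" alias="FIRST">
        <get url="/api/v1/health">   <!-- hypothetical endpoint -->
            <response code="200" file="expected_1.json"/>
        </get>
    </http>

    <http comment="Check the second service" alias="SECOND">
        <get url="/api/v1/health">   <!-- hypothetical endpoint -->
            <response code="200" file="expected_2.json"/>
        </get>
    </http>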


Reporting Tool

COTT has the ability to generate reports both locally and on a server

Features of reports:

  • Convenient dashboard with graphs of test scenario results
  • Step-by-step analysis of each step of the test script
  • Access to screenshots of each step
  • Full stacktrace of each step
  • Availability to the entire development team

report (1) (1).gif



Unlimited Integrations

COTT supports a set of integrations out-of-the-box that you can use to configure your project. If you need a particular integration to test your software, it will not be difficult to add it and use it going forward. In this way, testing can be automated for any given project structure.

"Out-of-the-Box Integrations:"

Integrations

  • Clickhouse
  • DynamoDB
  • Elasticsearch
  • Kafka
  • MongoDB
  • MySQL
  • Oracle
  • Postgres
  • RabbitMQ
  • Redis
  • AWS (S3, SES, SQS)
  • Sendgrid


COTT is the perfect testing tool for less experienced testers who want to switch from manual to automated testing. They can start creating automated tests within a few days, without knowing a programming language. To write a test scenario, a QA specialist just needs to get familiar with the list of tags (commands) that are under the hood of the framework.

We chose XML to form the test scenario

XML does not depend on the operating system or processing environment. It represents data as a structure, which we continue to develop so that QA specialists can write scripts that are understandable to all team members.

Advantages:

  • Easy-to-read, simple form;
  • Standard coding type;
  • Ability to exchange data between any platforms;

The structure of each <tag> follows a uniform writing standard, which makes the test scenario easy to visualize and read.

We have also added a mandatory description field to each test step to encourage not only test development but also its maintenance.


COTT Structure

To start working with COTT, you need to create a directory with your project's resources. It will contain the main folders of your directory and a global-config file with the global configuration settings for your project. COTT requires a set of mandatory folders in your resources structure; these folders hold the data for testing and the test scenarios themselves.


Folder structure

'Mandatory folders':
  • 📁 data
  • 📁 locators
  • 📁 report
  • 📁 scenarios

Folder data - the root folder for test data (used to store datasets and files for migration, in various formats)

Inside it, you also have the option to separate your test files into subfolders for a readable structure and ease of use

When you first start COTT, the "data" folder contains a default folder structure for storing test data, which you can later change to suit yourself

Default folders inside data:

  • credentials
  • patches
  • variations

'Purpose' ⤵️
  • credentials - Folder for storing system user data for authorization within the test scenario
{
  "username": "Test",
  "password": "Qwerty12@"
}


  • patches - Folder to store datasets for testing (sql, csv, javascript, xlsx, partiql, bson, shell and others)
INSERT INTO t_role (id, name, description, enabled)
VALUES (1, 'ADMIN', 'Owner company', true),
       (2, 'USER', 'Admin company', true),
       (3, 'SUPERUSER', 'Lawyer company', true);

var greeting = 'Hello, World';
console.log(greeting);

var="\"The content after crated a file.\""
echo "{ \"content\":" " $var }" > ./shell-1.json
rm -f ./shell-1.json

  • variations - Folder in which a data set is created and stored for interacting with the UI (in csv format)

  • Folder locators - Folder where element locators are stored (for interaction with the UI)

  • Inside the locators folder there should be folders such as:

  • component - a folder for storing locators that refer to the footer and header elements of pages. (It is recommended to separate these locators for structure and ease of use)

  • pages - a folder in which the locators of a particular page are stored

  • (In the xml file with locators, which is located in the pages folder, it is possible to request the desired footer and header component using the tag <include>)

locator's.png

<locator locatorId="registerButton">
            <id>registerLink</id>
        </locator>

This way we can pass locators that live in the component folder into scenarios through the xml files in pages


report - Folder where the test pass report will be generated


scenarios - Folder for creating and storing test scenarios

<scenario xmlns="http://www.knubisoft.com/e2e/testing/model/scenario">

    <overview>
        <description>Demonstration of the work of the 'assert' tag</description>
        <name>Assert</name>
    </overview>

    <tags>
        <tag>UI</tag>
    </tags>


Global config file structure

global-config.xml - a file which contains the main configuration settings of your project. You have the ability to create multiple configuration files and separate testing environments.

For example:

  • global-config-local.xml
  • global-config-dev.xml
  • global-config-jenkins.xml

global-config.xml is backed by an XSD schema with an out-of-the-box set of optional tags, which greatly simplifies the settings configuration process.

The set of basic tags for configuring global-config.xml:

<stopScenarioOnFailure> 
<runScenariosByTag>
<report>
<auth authStrategy>
<ui>
<integrations>

<stopScenarioOnFailure> - a tag that controls how your test scenarios proceed on failure.

<stopScenarioOnFailure>false</stopScenarioOnFailure>

Contains a flag:

true (when running one or several test scenarios and an exception is detected in one of them, the starter is stopped, with an error output for the specific scenario and specific step)

false (when running one or several test scenarios and an exception is detected in one of them, the starter is not stopped; all scenarios and steps are executed until the starter finishes, with a further output of the exceptions for all scenarios that ran and failed)


<runScenariosByTag> - allows you to implement a global launch of test scenarios for a specific tag. It is designed to separate the launch of test scenarios.

  • Create your own filter tag for your convenience

  • Set a unique tag for each scenario

    <runScenariosByTag enable="true">
        <tag name="UI" enable="true"/>
        <tag name="API" enable="false"/>
    </runScenariosByTag>

When configuring <runScenariosByTag> - you have the ability to create your own set of tags and give them names that will be used to separate and run test scenarios by tag.


<report> - configuring local and server reporting

    <report>
        <extentReports projectName="shop">
            <htmlReportGenerator enable="true"/>
            <klovServerReportGenerator enable="true">
                <mongoDB host="localhost" port="27017"/>
                <klovServer url="http://localhost:1010"/>
            </klovServerReportGenerator>
        </extentReports>
    </report>

<auth authStrategy> - selection of the authorization strategy used in the test scenario (a config sketch follows the list below)

  • basic
  • jwt
  • OAuth2
  • custom
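As a minimal sketch, selecting a strategy in the global config would look like this (the exact spelling of the strategy value is an assumption; see the <auth> tag description in the REST API section):

    <!-- the strategy value shown here is an assumption -->
    <auth authStrategy="jwt"/>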

<ui> - is intended to set configurations of the UI part

When this tag is opened, a list of internal UI tags appears that must be filled in to set the configuration. Tags inside <ui>:

<browserSettings>
<takeScreenshotOfEachUiCommand>
<webElementAutowait>
<browsers>
<chrome>
<opera>
<safari>
<edge>
<firefox>
<browserType>
<localBrowser>
<browserInDocker>
<remoteBrowser>
<chromeOptionsArguments>
<capabilities>
<baseUrl>
  • UI config

config_ui 2 (1).gif


<integrations> - configures integrations with APIs and databases

Inside the integrations tag there is a set of out-of-the-box integrations:

<apis>
<clickhouse>
<dynamo>
<elasticsearch>
<kafka>
<mongo>
<mysql>
<oracle>
<postgres>
<rabbitmq>
<redis>
<s3>
<sendgrid>
<ses>
<sqs>
  • integration config

integration_conf.gif


When you select a specific tag, a list of parameters that must be filled in to implement the integration will be revealed

Using these tags, you can easily configure your global-config.xml file by substituting the necessary values for a quick start with the project

After the installation is completed and the global-config.xml file is formed, you can start studying the tags used for testing and writing the first test scenario.



UI - Scripts

Locators

Before describing the UI-related tags, you need to know how to create a locator and connect it to a scenario

Available types of locators:

  • id
  • class
  • xpath

To create locators you need:

  • Create a file with the name of the page or component (e.g. loginPages.xml)
  • Inside loginPages.xml, open the <locators> tag
  • Give a unique name to the locator's element
  • Set the type of the locator (id, xpath, class)
  • Place content in the selected locator

Example (a quick video of creating a locator): Animation


Structure of the tag:

<locator locatorId="firstName">   - a unique name of the locator
      <id>register_account_first-name_input</id>   - selected value of interaction
 </locator>

Wondering how the locator and the tags are connected?

Let's demonstrate the locator connection using the tag <click>

The tag that executes the ‘click’ command on the selected page element

     <click comment="Click on 'Login' button"  - a comment on the action performed
        locatorId="locators.firstName"/>  - path to the desired element

In locatorId="" we pave the way to the locator we need

We already know that all created locators are stored in a folder called locators; accordingly, we enter the name of the folder we need, and then the unique name of the locator

In our case it is:

'firstName'

This is exactly how the interaction between the locator and the tags is implemented.


Now let's learn more about all the tags that will help you test the software qualitatively.

Global tag

The <ui> tag is the main interpreter; it contains the set of all tags for interacting with the user interface, which are described below.

An example of using the tag:

 <ui comment="Start UI scripts">

        <navigate command="to" comment="Go to base page"
                  path="/shop/"/>
        
        <wait comment="Wait for visualize a click"
              time="2" unit="seconds"/>
        
        <click comment="Click on the 'Shopizer' website link which opens in a new window" 
               locatorId="shopizer.webSiteShopizer"/>

    </ui>

UI tags:

  • click
  • input
  • dropDown
  • navigate
  • assert
  • scrollTo
  • scroll
  • javascript
  • include
  • wait
  • clear
  • closeSecondTab
  • repeat
  • hovers

 <input>

A tag that executes a command to enter a value in the selected field

 <input comment="Input first name" 
     locatorId="ownerRegistration.firstName" 
     value="Mario"/>   -  in value there is the entered value 'Mario'

Also, using the <input> tag, we have the opportunity to insert an image file of the desired format (for example, add an account photo)

To implement this function, the desired image must be located in the same folder as your test scenario, for example, in a folder called scenario

<input comment="Add profile photo" 
     locatorId="userPhoto.addProfilePhotoButton"   -  selected element of interaction
     value="file:Firmino.jpg"/>   -  inside value we indicate that we want to insert a file using file:, followed by the name of the file (which is located in the scenario folder)

i.e. Firmino.jpg.

 <dropDown>

Tag for interaction with the select function

This tag is used for the select and deselect functions, with the ability to interact with multiselect

  1. Select One Value

Selects one value from the list

   <dropDown comment="Select 'Country'" locatorId="registration.country">
            <oneValue type="select" by="text" value="Spain"/>
        </dropDown>

  2. Deselect One Value

Drops one value from the list

   <dropDown comment="Deselect 'Country'" locatorId="registration.country">
            <oneValue type="deselect" by="text" value="Spain"/>
        </dropDown>

The by="" parameter offers a choice of interaction (an index-based example is shown after this section):

  • text
  • value
  • index

  3. Deselect All Values

Drops all values from the list

<dropDown comment="Deselect all values" locatorId="registration.country">
            <allValues type="deselect"/>
        </dropDown>
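For example, selecting by index instead of by text might look like this (a sketch reusing the same <oneValue> structure; the index value is illustrative):

<dropDown comment="Select the third 'Country' in the list" locatorId="registration.country">
            <oneValue type="select" by="index" value="2"/>
        </dropDown>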

<navigate>

A tag that allows you to navigate through UI pages using a URL

To use, you only need to add the path to the desired page, as by default your base URL will be used, which is specified in the configuration settings.

It’s got 3 commands:

  • to
  • back
  • reload

  1. NavigateTo

Navigation to the page mentioned in path=""

 <navigate comment="Go to register account page" 
                   command="to" 
                   path="/registerAccount"/>  - path URL

  2. NavigateBack

Return to the previous page of the scenario

<navigate comment="Go back to the previous page"
                   command="back"/>

  3. NavigateReload

Reloading the current scenario page

<navigate comment="Reload the current page"
                   command="reload"/>

<assert>

The tag that allows you to confirm that the previous action led to the desired result

  • Example:

You used the <navigate> tag with command="back" to return to the previous page

To confirm the execution of this function, the <assert> tag is used

As confirmation, you create a locatorId for an element of the target page and pass it to the <assert> tag

 <assert comment="Verify that the transition to the previous page was successful" 
       locatorId="billing.billingPositiveAssert"  - the name of element locator of the needed page
           attribute="id">    - attribute choice ( id, class, xpath, html )
           <content>
                   register_profile-photo_input    - the element itself 
          </content>
  </assert>

<scrollTo>

A tag that allows you to scroll the page to a specific element

 <scrollTo comment="Scroll to element" 
                  locatorId="footer.registerButton"/>

<scroll>

Scroll Up or Down by pixel

  • value="" - takes default value in pixels
    <scroll comment="Scroll Down" value="1024" direction="down"/>

    <scroll comment="Scroll UP" value="976" direction="up"/>

Scroll Up or Down by percent

  • value="" - has a maximum value of 100 (when scrolling as a percentage)
    <scroll comment="Scroll Down in percent" value="60" measure="percent" direction="down"/>

    <scroll comment="Scroll Up in percent" value="80" measure="percent" direction="up"/>

<javascript>

The tag that calls the code that is in the folder 'javascript'

<javascript comment="Apply javascript function" file="function.js"/>

 <include>

A tag that allows you to run a 'scenario inside a scenario' (often used to chain different scenarios together)

Implemented by passing the path to the scenario, which is located in a specific folder

The beginning of the path always starts from the scenarios folder, since all folders with created scenarios must be stored there

 <include comment="Add scenario for login to the system"
       scenario="/nameOfScenarioFolderWithSlash"/> - the path to the scenario you need to launch

 <wait>

A tag that allows you to pause the scenario to wait for a certain function that requires time. Afterwards, the scenario continues at its normal pace

 <wait comment="Waiting for loading Dashboard page" 
      time="1" unit="seconds"/>  - There is a choice: `seconds` or `milliseconds`

 <clear>

A tag that allows you to clear the entered data in the fields

 <clear comment="Clear name field"  
   locatorId="updateProfile.name"/> 

 <closeSecondTab>

A tag that allows you to close the second browser tab

 <closeSecondTab comment="Check if closing 'second tab' works correctly"/>

 <repeat>

A tag that allows you to repeat any action used in the tags above

Used both inside and outside the <ui> tag

   <repeat comment="Repeat action" times="5">  - the number of repetitions 
          <click comment="Click on 'Send button'"
            locatorId="locator.clickButton"/>
  </repeat>

 <hovers>

A tag that allows you to expand a dropdown list that supports the hover function

        <hovers comment="Open dropdown list with 'hover' function">
            <hover comment="Open drop down 'Novels' tab" locatorId="locator.novels"/>
        </hovers>


Variations

What are 'variations' in COTT?

A variation is a CSV file that contains prepared QA data for testing

Variations play an important role in writing UI automated tests. When your QA specialists are faced with testing the functionality of fields and selectors, 'variations' let them prepare the necessary data set in a short time, in tabular form inside a csv file, and easily test validation along with the main functionality of the fields and selectors.

  • Variations are effective for Positive and Negative testing

  • Data is often organized into tables, so CSV files are very easy and efficient to use

Usage example:

We have a new user registration form, which consists of the following fields:

  • First name
  • Last name
  • EmailAddress
  • Password
  • Repeat Password

The structure of this csv file has unique field names with an enumeration of values for each of them.


variations.png


When we link our csv file to the scenario, the scenario will be run 5 times in a row, since the table has 5 columns with different data inside. Each run of the scenario iterates through the values in this file.

Testing the functionality and validation of each field manually would take a lot of time for test documentation and manual checks; using Variations takes much less time and increases efficiency too

Creating such a csv file with test data is quite simple:

The CSV file consists of rows of data and delimiters that mark the column boundaries. In one such positive or negative file, QA specialists can enumerate all possible sets of values, which helps test the functionality more effectively.
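As an illustration, a hypothetical variations file for this registration form might look like the following (field names match the variation placeholders used below; the exact layout should follow the screenshot):

firstName,lastName,emailAddress,password,repPassword
Mario,Rossi,mario@example.com,Testing#1,Testing#1
Anna,Smith,anna@example.com,Qwerty12@,Qwerty12@
Anna,Smith,not-an-email,Qwerty12@,Qwerty12@
Anna,Smith,anna@example.com,Qwerty12@,Mismatch#1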


Use of variation

To include variations in a scenario, add the variations="fileName" value to the scenario


regfow.png

registerN - the name of the linked csv file


Usage example of variations in a scenario

 <input comment="Input first name"
    locatorId="registerAccount.firstName" 
     value="{{firstName}}"/>  the usage of variation in `value` 


 <input comment="Input lastname" 
    locatorId="registerAccount.lastName"
     value="{{lastName}}"/>  - the usage of variations



 <input comment="Input email" 
    locatorId="registerAccount.emailAddress"
     value="{{emailAddress}}"/>   - the usage of variations



 <input comment="Input password" 
    locatorId="registerAccount.password"
     value="{{password}}"/>   - the usage of variations



 <input comment="Repeat password" 
    locatorId="registerAccount.repeatPassword"
     value="{{repPassword}}"/>   - the usage of variations


Integration tags

Database tags:

  • Postgres
  • Mongo
  • Oracle
  • MySQL
  • Dynamo
  • ClickHouse
  • Redis
<postgres>

Database interaction tags:

   <postgres comment="Get all users from the system" alias="shopDb" file="expected_1">
        <query>SELECT * FROM t_user</query>
   </postgres>

<mongo>
  <mongo comment="Get all users from the system" alias="shopDb" file="expected_1">
      <query>SELECT * FROM t_user</query>
  </mongo>

<oracle>
   <oracle comment="Get all users from the system" alias="shopDb" file="expected_1">
      <query>SELECT * FROM t_user</query>
   </oracle>

<mysql>
   <mysql comment="Get all users from the system" alias="shopDb" file="expected_1">
       <query>SELECT * FROM t_user</query>
   </mysql>

<dynamo>
   <dynamo comment="Get all users from the system" alias="shopDb" file="expected_1">
       <query>SELECT * FROM t_user</query>
   </dynamo>

<clickhouse>
   <clickhouse comment="Get all users from the system" alias="shopDb" file="expected_1">
       <query>SELECT * FROM t_user</query>
   </clickhouse>

<redis>
   <redis comment="Get all users from the system" alias="shopDb" file="expected_1">
       <query></query>
   </redis>

Queue tags:

  • Rabbit
  • Kafka
  • SQS
<rabbit>
    <rabbit comment="Receive 2 times and send 1 time"
            alias="rabbit-one">
        <receive queue="queue">
            <file>expected_1.json</file>
        </receive>

        <receive queue="queue">
            <message>[]</message>
        </receive>

        <send routingKey="queue">
            <file>body_1.json</file>
        </send>
    </rabbit>

<kafka>
  <kafka comment="Receive then send and receive 2 time"
           alias="kafka-one">
        <send topic="queue2" correlationId="343gfrvs-dh4aakgksa-cgo60dmsw-sdf4gj62">
            <file>body_2.json</file>
        </send>

        <send topic="queue3" correlationId="dfskogdfa9sd-rekjdfnkv-sdfkjewnd8-erkfdn">
            <file>body_4.json</file>
        </send>

        <receive topic="queue2" timeoutMillis="1200">
            <value>
                [ {
                "key" : null,
                "value" : "{\n \"squadName\": \"Still rule cool\",\n \"homeTown\": \"Metro Tower\",\n \"formed\":
                2018,\n \"secretBase\": \"Rower\",\n \"active\": false\n}",
                "correlationId" : "343gfrvs-dh4aakgksa-cgo60dmsw-sdf4gj62",
                "headers" : { }
                } ]
            </value>
        </receive>
  </kafka>

<sqs>
  <sqs comment="Compare message from empty queue with file content"
         alias="queue_one"
         queue="queue">
        <receive>expected_1.json</receive>
  </sqs>

Email Services:

  • SES
  • Sendgrid
<ses>
   <ses comment="Sending a message to the email"
         alias="ses_one">
        <destination>[email protected]</destination>
        <source>[email protected]</source>
        <message>
            <body>
                <html>Amazon SES test</html>
                <text>Hello World</text>
            </body>
            <subject>TITLE</subject>
        </message>
    </ses>

<sendgrid>
    <sendgrid comment="Sending a message to the email from file"
              alias="sendgrid_one">
        <post url="mail/send">
            <response code="202"/>
            <body>
                <from file="body1.json"/>
            </body>
        </post>
    </sendgrid>

Object storage:

  • S3
<s3>
   <s3 comment="Upload json file to the bucket"
        alias="bucket"
        key="/com/tool/integration-testing/expected_2.json">
        <upload>expected_2.json</upload>
    </s3>

Search system:

  • ElasticSearch
<elasticsearch>
    <elasticsearch comment="Execute elasticsearch commands - get list of indexes"
                   alias="elastic_one">
        <get url="/_cat/indices">
            <response file="expected_1.json">
                <header name="content-length" data="0"/>
            </response>
        </get>
    </elasticsearch>


Multidatasets

COTT has the ability to migrate data to the database using extensions such as:

  • sql
  • csv
  • xlsx
  • partiql
  • bson

All test data is stored in the data folder

Migrations

Migration tags:

  • migrate

<migrate>

  • The migrate tag interacts with all existing relational databases
    <migrate comment="Add data for database" alias="mySql">
        <data>dataset.sql</data>
    </migrate>

REST API - Scripts

  • <var>
  • <http>
  • <auth>

Var

A variable is a named (or otherwise addressable) area of memory that can be used to access data. In simple words, a variable is a data store. You can put any value in it (for example, a number, a string, or another data type). Variables store data that can later be used in the program.

A variable is flexible:

  • it can store information;
  • you can extract information from it, which will not affect the value of the variable itself;
  • New data can be written into it

In order to create a variable, you must declare it (i.e. reserve a memory cell for certain data)

How to create a variable in COTT?

  • Let's say our test scenario has a <postgres> step where we access the database to display data from a specific table and retrieve an authorization token.
  <postgres comment="Get token and user_id from t_one_time_token table" file="expected_7.json" alias="central">
        <query>SELECT token, user_id
            FROM t_user_one_time_token ORDER BY ID DESC LIMIT 1
        </query>
    </postgres>
  • After executing this request, we will get an actual_7.json file with the database response, and transfer its content to expected_7.json

Declaration of variable with data extraction:

     <var comment="Created token var" name="TOKEN"> 
          <jpath>$.[0].content.[0].token</jpath> - an internal tag with which we extract the data we need from `expected_file`
     </var>
  • name="TOKEN" - the name of the declared variable <jpath>$.[0].content.[0].token</jpath> - a way to extract data from json (expected file)

An example of passing the variable in an http request

<http comment="Get user profile" alias="API">
        <get url="/api/v1/profile">
            <response code="200" file="expected_1.json"/>
            <header name="Authorization" data="Bearer {{TOKEN}}"/> - variable usage
        </get>
   </http>

HTTP Structure

The HTTP request structure has all the basic features of API testing tools

HTTP in a test scenario

  <http comment="Check the ability login system" alias="API">  - action description + alias API
      <post url="/api/v1/login"> - indication of the type of request + used url
        <response code="200" file="expected_1.json"/>  - response code + expected result
          <body> 
                  <from file="request_1.json"/> - transfer of the request body
          </body>
       </post>
   </http>

   <var comment="Get JWT token from previous expected_file" - creating the variable
         name="profile"
         path="$.body.token"/>

   <http comment="Get user profile" alias="API">
        <get url="/api/v1/profile">
            <response code="200" file="expected_1.json"/>
            <header name="Authorization" data="Bearer {{profile}}"/> - variable usage
        </get>
   </http>

Methods (see the sketch after this list):

  • GET
  • POST
  • PUT
  • PATCH
  • DELETE
  • OPTIONS
  • HEAD
  • TRACE
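Assuming the other methods follow the same structure as the <get> and <post> examples above, a <delete> request might look like this (the endpoint and expected file are illustrative):

   <http comment="Delete the user profile" alias="API">
        <delete url="/api/v1/profile">   <!-- hypothetical endpoint -->
            <response code="200" file="expected_2.json"/>
        </delete>
   </http>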

Response code

  • 1xx
  • 2xx
  • 3xx
  • 4xx
  • 5xx

Request file

{
  "attributes": [
    {
      "id": 4
    }
  ],
  "product": 1,
  "quantity": 1
}

Expected Result

{
  "body": null,
  "errors": {
    "code": [
      "Forbidden"
    ],
    "message": [
      "Access is denied"
    ]
  },
  "debugInfo": {
    "requestId": "p(any)",
    "stackTrace": null
  }
}

Comparison

actual.png


After executing each HTTP request, comparison generates an actual_file which contains the API response in json format, with the response code and data

The actual_file is generated so that the QA specialist immediately understands how the system reacted to the check. If QA is satisfied with the actual result, they transfer all the data from the actual_file to a file called expected_file (which sits in the http-request structure as the expected result of the test), so the test passes successfully.

HTTP in COTT allows you to perform smoke testing (to make sure nothing important is broken), conduct unit and integration testing, run the same tests with various sets of input data, or quickly perform any supporting actions to create test data and situations.


Create HTTP - scripts

gif.gif


Function authentication

COTT has a unique <auth> tag allowing instant authorization as a system user within a test script

In the global-config-file settings you have the option to choose an authorization strategy by opening the tag:

<auth authStrategy="">

Types of authorization in the <auth> tag

  • Basic Authentication
  • Token Authentication
  • OAuth2 Authentication
  • Custom ( in progress )

After selecting an authorization strategy in the global-config file, opening the <auth> tag in the test script performs authorization according to the selected strategy

The <auth> tag is mainly used for REST API testing, as it allows you to perform many requests within itself under a specific system user

    <auth comment="Test case for auth tag" apiAlias="SHOPIZER" credentials="jwt_user.json" loginEndpoint="/api/v1/customer/login">
        <http comment="Get all stores in system" alias="SHOPIZER">
            <get url="/api/v1/auth/customers/profile">
                <response code="200" file="expected_3.json"/>
            </get>
        </http>
    </auth>
  • apiAlias="" - API interaction alias
  • credentials="" - Authorization data file
  • loginEndpoint="" - Authorization endpoint used

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

    • where jwt_user.json is the name of the file that contains the necessary data for authorization
    • closing the tag means logging this user out within the test script

The interaction of the tag with HTTP requests simplifies REST API testing. For example, when performing many checks inside a private system, you will not need to pass the authorization token each time, since inside the tag we already act as an authorized user

  • This function makes it easy to test functionality with complex logic and a high level of privacy. <auth> lets you instantly switch from one system user to another inside the test script, so the tester can effectively cover functionality with a complex system of user permissions and rights in a short time
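As a sketch of such user switching (reusing the <auth> structure from the example above; the second credentials file and expected files are placeholders), two sequential <auth> blocks run their checks under different users:

    <auth comment="Checks as the first user" apiAlias="SHOPIZER" credentials="jwt_user.json" loginEndpoint="/api/v1/customer/login">
        <http comment="Get the first user's profile" alias="SHOPIZER">
            <get url="/api/v1/auth/customers/profile">
                <response code="200" file="expected_3.json"/>
            </get>
        </http>
    </auth>

    <auth comment="Checks as the second user" apiAlias="SHOPIZER" credentials="jwt_user_2.json" loginEndpoint="/api/v1/customer/login">
        <http comment="Get the second user's profile" alias="SHOPIZER">
            <get url="/api/v1/auth/customers/profile">
                <response code="200" file="expected_4.json"/>
            </get>
        </http>
    </auth>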


Scenario Collecting

  • Each test scenario has its own status, which is processed during assembly to run test scenarios

Scenario statuses:

<active="true">
<active="false">
<onlyThis="true">
<variations="name">

<active="true">
  • Status of an active scenario
  • Every new test scenario has active="true" status by default
  • Not required to be marked

activeTrue.png


<active="false">
  • Status of an inactive scenario
  • Test scenario with status <active="false"> will not run during the assembly

activeFalse.png


<onlyThis="true">
  • Status of an active scenario
  • If the scenario status is <onlyThis="true"> - only this scenario will run regardless of the activity of other test scenarios
  • It is possible to assign this status to several test scenarios for selective launch

onlyThisTrue.png


<variations="">
  • Status of variations
  • <variations=""> is a status indicating that this scenario uses csv - variations
  • Assigned in addition to the above statuses:

variations_status.png
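A sketch of how these statuses combine (assuming, as the screenshots suggest, that they are attributes of the <scenario> root tag; the attribute placement and file name are assumptions):

<scenario xmlns="http://www.knubisoft.com/e2e/testing/model/scenario"
          active="true" onlyThis="true" variations="registerN">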


Work description

When running test scenarios in COTT, the Scenario Collector collects all test scenarios located in the working directory and checks the scenarios' statuses

Checking for status onlyThis="true"

  • When compiling test scripts to run, if the status onlyThis="true" is specified, Scenario Runner will run only the scenarios in this status, and will ignore the launch of other active scenarios in the status active="true"
  • onlyThis="true" is the most independent scenario status

➖➖➖➖➖➖➖➖➖➖

Checking for status active="true"

  • When compiling test scenarios to run, with this status specified, Scenario Runner will run all test scenarios that are in active="true"
  • If there is at least one test with onlyThis="true" in the test scenarios directory, running the scenarios in the active="true" state will be ignored and the scenario with onlyThis="true" will be run

➖➖➖➖➖➖➖➖➖➖

Checking for status active="false"

  • When compiling test scenarios for launch, with the status active="false" set, Scenario Runner will not let these scenarios run, and will simply ignore them.
  • The status active="false" can be assigned to any scenario you don't want to run; when running all test scenarios, all scripts in this status will be ignored.

Error processing

Before checking scenario statuses, COTT performs a global initialization of all test scenarios to validate them.

  • Basic validity checks:
  • Correct syntax in all files in the directory
  • Correct structure of each tag
  • Validity of locators, and their correct transfer to the scenario
  • Validity of variations, and their correct transfer to the scenario
  • Matching of the paths and names of transferred files
  • Presence of the required files and folders to run test scenarios
  • All of the above checks are processed and initialized before each test scenario run. If an error is detected, the scenarios will not be launched; the corresponding error is output with the directory, path, file name, and the nature of the error.

Pros:

  • The user is always aware of the validity of their test scenarios and test data
  • Elimination of bugs at early stages
  • Test stability

Run scenario by <tag>

COTT can run scenarios by unique tags that you create and assign to a specific scenario

  • This feature is very useful when working with large volumes of test data: triggering scenarios by tags gives you the ability to split your test scenarios into blocks and easily switch between running them
  • Configuration example:
    <runScenariosByTag enable="true">
        <tag name="registrationFlow" enable="true"/>
        <tag name="loginFlow" enable="false"/>
        <tag name="createOrder" enable="true"/>
    </runScenariosByTag>

In this example, <runScenariosByTag enable="true"> means that tag run filtering is enabled and is ready to run on the specified tags

All test scenarios that have the registrationFlow & createOrder tag will be sent to launch when Scenario Runner starts

  • Assigning a tag to a test script:

tagScenario.png


Where <tags> assigns the tag for the test scenario

    <tags>
        <tag>registrationFlow</tag>
    </tags>

OAuth2 Authentication

In progress

Locators Plugin

In Progress

Comparison

COTT has a function of comparison

This function implies comparing the expected result with the actual one after the step is completed

To compare test results, the following files are used:

  • expected - expected test result
  • actual - actual test result

In the naming scheme expected_1.json, actual_1.json, the number corresponds to the step of the test scenario

  • The presence of an expected file is a mandatory parameter for http and postgres requests

The principle of operation on the example of postgres:

<postgres comment="Check successfully adding Product to shopping cart"
              alias="Shop" file="expected_11.json">
        <query>
            SELECT shp_cart_id, customer_id, shp_cart_code
            FROM shopping_cart
            WHERE merchant_id = 1
        </query>
    </postgres>
  • Steps:
  • Make a request specifying the expected file (with the scenario step number, in this case expected_11.json)
  • Create a file in the scenario folder with the name specified inside the postgres request (in this case, expected_11.json)
  • Leave the generated expected_11.json empty
  • Run the test scenario
  • After running the test scenario and executing this query, comparison will automatically generate an actual file with the scenario step number, see the empty expected_11.json file, and compare it with the query result received in actual_11.json. If the result of the request satisfies the user, they transfer all data from the actual file to the expected file, for further comparison and successful completion of the test scenario
  • comparison will generate the actual file only if the content of actual and expected does not match

Logs

COTT has informative and easy-to-read logs

  • Each tag has a unique display structure in the logs, thanks to the individual approach to visualizing tags in the logs
  • Uniqueness of logs:
  • Ease of perception
  • Table structure
  • Step-by-step analysis of the execution of each step of the scenario
  • Detailed analysis of the execution of each tag
  • Informative output of exceptions
  • Display of the overall result of passing scenarios

UI logs structure

  • An example of tags display:
<navigate>
<click>
<input>

uiLog's.png


  • In these logs, we can see all the specific steps in passing our test scenarios, with a display of the transmitted and used data

  • In the <navigate> tag we see:

  • Comment - a unique parameter for each tag that describes the action of the step
  • Command type - a parameter indicating the command to be executed: to, back or reload
  • URL - used navigation address
  • Execution time - a unique value for each tag that prints the execution time of each command

➖➖➖➖➖➖➖➖➖➖

  • In the <click> tag we see:
  • Comment - a unique parameter for each tag that describes the action of the step
  • Locator - passed UI interaction element, with folder name, and unique element name
  • Execution time - a unique value for each tag that prints the execution time of each command

➖➖➖➖➖➖➖➖➖➖

  • In the <input> tag we see:
  • Comment - a unique parameter for each tag that describes the action of the step
  • Locator - passed UI interaction element, with folder name, and unique element name
  • Value - output of the passed value
  • Execution time - a unique value for each tag that prints the execution time of each command

➖➖➖➖➖➖➖➖➖➖

  • This structure allows you to quickly find errors in test scenarios through high-quality, readable logs

HTTP logs structure

  • An example of displaying a <http> request:

httpLogs.png


  • In the <http> tag we see:
  • Comment - a unique parameter for each tag that describes the action of the step
  • Alias - the unique name of the API we are interacting with
  • Method - display of the http method used
  • Endpoint - used endpoint
  • Body - display of the transmitted request body
  • Status code - API response code
  • Execution time - a unique value for each tag that prints the execution time of each command

Structure of the DB logs

  • An example of displaying a <postgres> request:

postgresLog.png


  • In the <postgres> tag we see:
  • Comment - a unique parameter for each tag that describes the action of the step
  • Alias - unique alias of the database we are interacting with
  • Query - database queries
  • Execution time - a unique value for each tag that prints the execution time of each command

  • Demonstration of the logs


Data Driven Testing

Any modern software, including web-based applications, is tested for errors. The speed of identifying these errors depends not only on the tools, the number of testers and their experience, but also on the chosen approach. That approach is data-driven testing: test data is stored separately from the test cases, for example in a file or in a database. This separation logically simplifies the tests.

  • Data-Driven Testing is used in those projects where you need to test individual applications in multiple environments with large data sets and stable test cases. Typically, the following operations are performed during DDT:
    • extracting part of the test data from the storage;
    • data entry in the application form;
    • checking the results;
    • continue testing with the next set of inputs.

This method allows a QA specialist to prepare a set of test data at the early stages of development to test the functionality and logic of your project.

To create a dataset, the tester only needs to create a file with the required extension and put it in the data folder (storage for test data)

Migration to any of the databases is easy to do with the global tag <migrate>⤵️

    <migrate comment="Add data set for database" alias="postgres">
        <data>site_sample_init_data.sql</data>
    </migrate>

After the data has been uploaded to the selected database, the QA specialist can create effective test cases, perform data-based testing, and use HTTP and SQL queries to interact with the system. This method covers all sections of the code, and the system as a whole, with tests; it allows you to effectively create integration tests and conduct high-quality regression of the product.

  • Data-Driven Testing is also great for UI testing. It allows you to track down a large number of bugs at an early stage of development. It is especially effective for:
  • Detection of functionality bugs
  • Detection of unhandled exceptions when interacting with the interface
  • Detection of loss or distortion of data transmitted through interface elements

Files available for migration:

  • sql
  • csv
  • xlsx
  • partiql
  • bson

Let's write the first test script in COTT 🚀

Automation is easy

On this page, we'll take a look at how easy it is:

  • ⚙️ Set up a global configuration file
  • ▶️ Run scripts
  • 🔧 Create the first UI script
  • ⛏ Create an API script
  • 🛠 Create merge script
  • 🔍 View script logs

Recommended development environment for COTT - IntelliJ IDEA


Set up a global configuration file

Step 1:

  • Indicate the default path to the XSD schema using the tag

global_config.png ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Step 2:

  • Open a tag:
 <stopScenarioOnFailure>false</stopScenarioOnFailure> -  to manage Scenario Runner during exceptions

true (when running one or several test scenarios and an exception is detected in one of them, the starter is stopped with an error output for the specific scenario and specific step)

false (when running one or several test scenarios and an exception is detected, the starter is not stopped; all the scenarios and steps are executed until the starter finishes working, with a further output of the exceptions for all the scenarios that ran and failed)

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Step 3:

  • Open a tag:
<runScenariosByTag>

Name the tags that will be used to run scenarios by tag

  • Example:
   <runScenariosByTag enable="true">
       <tag name="positive_case" enable="true"/>
       <tag name="negative_case" enable="false"/>
   </runScenariosByTag>

Running scenarios by tags is not mandatory. You can turn this feature on and off using enable="" (true or false)

If enable="false", Scenario Runner will work as usual

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Step 4:

<ui>
  • Open <ui> tag to configure cross browser

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

You can flexibly configure one or more browser types individually

For example, you can configure multiple versions and launch modes of the Chrome browser

The number of browser configurations is unlimited

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Main tags inside the <ui> tag:

<baseUrl>http://localhost:8080</baseUrl>
  • Inside of it, there should be the URL of the tested site
<browserSettings>
  • Performs basic cross browser settings

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Tags inside <browserSettings>:

<takeScreenshotOfEachUiCommand> 
  • To enable or disable the screenshot mode
  • Has true | false flags

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<webElementAutowait>
  • Interacts with such tags as:

<click>

<input>

<assert>

<dropDown>

<navigate>

  • Designed to wait for and get elements <id>, <class>, <xpath> from the tree
  • <webElementAutowait> - works until it finds the page element you need

Has a parameter seconds="5" - to assign a timeout
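As shown in the browser settings example earlier, it is configured with a single line inside <browserSettings>:

    <webElementAutowait seconds="5"/>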

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<browsers>

Opens up browser configuration settings:

  • chrome
  • opera
  • edge
  • firefox
  • safari

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

The structure of browser configuration:

  • Chrome Local Browser
          <chrome enable="false" maximizedBrowserWindow="false" headlessMode="false" browserWindowSize="1920x1080">
                <browserType>
                    <localBrowser driverVersion="102.0.5005.27"/>
                </browserType>
                    <chromeOptionsArguments>
                        <argument>--incognito</argument>
                    </chromeOptionsArguments>
          </chrome>
  • maximizedBrowserWindow="" - whether to maximize the browser window: true | false
  • headlessMode="" - a built-in browser startup mode: true | false
  • browserWindowSize="1920x1080" - sets the browser size when maximizedBrowserWindow="false"
  • localBrowser driverVersion="" - sets a specific local browser driver version
  • <chromeOptionsArguments> - to set arguments

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • Edge in Docker Browser
                <edge enable="true" maximizedBrowserWindow="true" headlessMode="true" browserWindowSize="800x600">
                    <browserType>
                        <browserInDocker browserVersion="91.0.864.37" enableVNC="true">
                            <screenRecording enable="true" outputFolder="/Users/user/e2e-testing-scenarios"/>
                        </browserInDocker>
                    </browserType>
                    <edgeOptionsArguments>
                        <argument>--disable-extensions</argument>
                    </edgeOptionsArguments>
                </edge>
  • <browserInDocker browserVersion="" - selects the browser version to run in Docker
  • enableVNC="" - lets you connect to a remote desktop session of the dockerized browser. When this option is used, two technologies work together internally:
  • Virtual Network Computing (VNC), a graphical desktop-sharing system; the VNC server runs inside the browser container
  • noVNC, an open-source VNC web client; the connection goes through its own noVNC Docker image
  • <screenRecording enable="" - records test scenarios that run in Docker, true | false; requires enableVNC="true"
  • outputFolder="" - sets the directory where test scenario recordings are stored
  • <edgeOptionsArguments> - sets browser startup arguments
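
Putting the pieces together, a complete <ui> section might be structured as follows (a sketch only; the nesting of <browsers> inside <browserSettings> is assumed from the fragments above):
   <ui>
       <baseUrl>http://localhost:8080</baseUrl>
       <browserSettings>
           <takeScreenshotOfEachUiCommand>true</takeScreenshotOfEachUiCommand>
           <webElementAutowait seconds="5"/>
           <browsers>
               <!-- reuse the <chrome> and <edge> examples shown above -->
               <chrome enable="true" maximizedBrowserWindow="false" headlessMode="false" browserWindowSize="1920x1080">
                   <browserType>
                       <localBrowser driverVersion="102.0.5005.27"/>
                   </browserType>
               </chrome>
           </browsers>
       </browserSettings>
   </ui>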

Step:5

<integration>
  • Open up <integration> to configure integrations

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Configure several APIs

Configure several databases

Configure services

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • Examples of the configurations

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • APIsIntegration
        <apis>
            <api alias="SHOPIZER" url="http://localhost:8080/"/>
        </apis>

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • PostgresIntegration
       <postgresIntegration>
            <postgres alias="SHOPIZER" enabled="true">
                <jdbcDriver>org.postgresql.Driver</jdbcDriver>
                <username>postgres</username>
                <password>password</password>
                <connectionUrl>jdbc:postgresql://localhost:5433/SHOPIZER</connectionUrl>
                <schema>salesmanager</schema>
                <hikari>
                    <connectionTimeout>45000</connectionTimeout>
                    <idleTimeout>60000</idleTimeout>
                    <maxLifetime>180000</maxLifetime>
                    <maximumPoolSize>50</maximumPoolSize>
                    <minimumIdle>5</minimumIdle>
                    <connectionInitSql>SELECT 1</connectionInitSql>
                    <connectionTestQuery>SELECT 1</connectionTestQuery>
                    <poolName>core-postgres-db-pool</poolName>
                    <autoCommit>true</autoCommit>
                </hikari>
            </postgres>
        </postgresIntegration>

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • SendgridIntegration
        <sendgridIntegration>
            <sendgrid alias="email" enabled="true">
                <apiUrl>http://localhost:8080/</apiUrl>
                <apiKey>apiKey</apiKey>
            </sendgrid>
        </sendgridIntegration>

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • RedisIntegrations
        <redisIntegration>
            <redis alias="redis" enabled="true">
                <host>localhost</host>
                <port>6379</port>
            </redis>

            <redis alias="redis_two" enabled="true">
                <host>localhost</host>
                <port>6360</port>
            </redis>
        </redisIntegration>
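
A sketch of how these blocks might sit together inside the <integration> tag (the wrapper nesting is assumed from Step:5 above; only the inner blocks are confirmed):
   <integration>
       <apis>
           <api alias="SHOPIZER" url="http://localhost:8080/"/>
       </apis>
       <redisIntegration>
           <redis alias="redis" enabled="true">
               <host>localhost</host>
               <port>6379</port>
           </redis>
       </redisIntegration>
   </integration>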

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

After creating the configuration file, edit the configuration in the Test Runner

runner.png

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Specify the path of the file with global configurations

edit_configurations.png


➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Create UI test script

Step:1

  • Create a folder with the name of the test scenario inside the scenarios folder
  • Create a file scenario.xml inside of the folder

scenarios_1.png


Step:2

  • Open locators folder
  • Create a UI page file, e.g. registration.xml
  • Inside the created file, open the <page> tag and fill in the data

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

locatorst_xsd.png

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • Open the <locators> tag
  • Create the necessary number of locators, as in the hypothetical sketch below
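
For illustration only - a hypothetical registration.xml; the <locator> element shape, attribute names, and xpath values are assumptions, so check them against the locators schema shown above:
   <page>
       <locators>
           <locator name="emailInput" xpath="//input[@id='email']"/>
           <locator name="passwordInput" xpath="//input[@id='password']"/>
           <locator name="registerButton" xpath="//button[@id='register']"/>
       </locators>
   </page>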

Step:3

  • Open the created file scenario.xml
  • Open <scenario> tag
  • Fill in the tags suggested in the dropdown list

scenario get started.png

<description>
  • A description of the scenario under test

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<name>
  • The name of the scenario under test

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<tags>
  • Sets a tag declared in the global configuration file (used to run the scenario by tags)

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

  • Open the <ui> tag in scenario.xml and start writing your test scenario (a skeleton sketch follows below)

choseUItag.png
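
A minimal scenario.xml skeleton assembled from the tags above (the inner shape of <tags> and the sample values are assumptions added to illustrate the structure):
   <scenario>
       <description>Registration of a new user</description>
       <name>registration</name>
       <tags>
           <tag name="positive_case"/>
       </tags>
       <ui>
           <!-- UI commands (click, input, assert, dropDown, navigate) go here -->
       </ui>
   </scenario>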


YouTube Instructions UI

Instructions for working with the UI test scenario

[animated 'Typing SVG' banner linking to the YouTube video]


Create REST API script

Steps:

  • In an existing scenario or a newly created one:
  • Make sure the <ui> tag is closed or absent
  • Open the <http> tag
  • Select a request method

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

http_scenario.png


  • Specify the endpoint and the expected response code

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

http_response.png


  • Create an empty expected_file.json in the scenario folder
  • Reference expected_file.json in the http request
  • The number of expected files depends on the number of steps in the test scenario

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

expected_api.png


  • Select how the body of the request is transferred
  • Open the <body> tag

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

body_hrrp.png

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • An example of transferring the request body using the <from> file parameter
  • Create request_file.json in the scenario folder to hold the request body
  • Put the body of the request into the created request_file.json
  • Set the name request_file.json inside <from file=""/> (a sketch follows below)

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

request_file.png
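
Putting these steps together, a sketch of what the http block might look like; apart from <from file=""/>, the method tag and attribute names here are assumptions based on the screenshots above, so align them with the scenario schema:
   <http>
       <post endpoint="/api/v1/customer" responseCode="201" expected="expected_file.json" comment="Create a new customer">
           <body>
               <from file="request_file.json"/>
           </body>
       </post>
   </http>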


YouTube Instructions API

Instructions for working with the API test scenario

[animated 'Typing SVG' banner linking to the YouTube video]


Merge test (UI/API)

More possibilities:

  • UI & API in one test scenario
  • High coverage level
  • Merge mode

YouTube Instructions Merge Test

Instructions for working with the Merge test scenario

[animated 'Typing SVG' banner linking to the YouTube video]



COTT + CI/CD

In simple terms, CI/CD (Continuous Integration / Continuous Delivery) is a practice that automates the testing and delivery of new modules of a project under development to the interested parties (developers, analysts, quality engineers, end users, etc.).

COTT integrates easily with software development and testing tools that follow the CI/CD approach, such as:

Bitbucket

  • An environment for managing project repositories, documenting functionality and the results of improvements and tests, tracking errors, and working with the CI/CD pipeline

Docker

  • An automated deployment system. It supports containerization and lets you pack a project, together with its entire environment and dependencies, into a container

Jenkins

  • Lets you customize CI/CD processes to the specific requirements of product development

pipeline.png

Together, this ensures that new product functionality (driven by customer requests) ships promptly: as a rule, it is a matter of days or weeks, whereas the classical approach to developing client software could take a year. The development team also receives a pool of code alternatives, which optimizes the resources spent on solving the problem (by automating the initial testing of the functionality). Parallel testing of the functional blocks of the future system raises the quality of the product, and bottlenecks and critical issues are caught and handled early in the cycle.

With COTT and CI/CD, the Product Owner takes full control of the entire development and testing phase of the product. The testing team can improve software quality, run continuous regression testing, and monitor software quality in real time with the ability to run tests in parallel. Developers, in turn, can keep their code clean by running their branches on builds. Thus, it becomes even easier to develop the perfect software.


  • Jenkins logs have become more pleasant to read thanks to COTT's customizable log output

lognJenkins.png

Integrations out of the box

COTT supports a set of out-of-the-box integrations that you can use to configure your project. If you need a particular integration to test your software, adding it and using it later will not be difficult. In other words, testing can be automated for any given project structure.

"Out-of-the-Box Integrations:"

Integrations

  • Clickhouse
  • DynamoDB
  • Elasticsearch
  • Kafka
  • MongoDB
  • MySQL
  • Oracle
  • Postgres
  • RabbitMQ
  • Redis
  • AWS (S3, SES, SQS)
  • Sendgrid

More about integrations - Click here



Simple writing structure

The test scenario looks very clean and readable thanks to its tag structure and XML. By looking at a test scenario, any member of your team can figure out what it tests and which methods and data it uses. That is because each tag used in COTT has a mandatory 'comment' field, which lets you quickly understand what is happening (a hypothetical snippet follows the screenshot below).

simpleStructure.png
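
For instance, a hypothetical pair of UI commands; the attribute names besides 'comment' are assumptions for illustration:
   <!-- each COTT tag carries a mandatory 'comment' explaining the step -->
   <click comment="Open the registration form" locator="registerButton"/>
   <input comment="Type the user's email" locator="emailInput" value="user@test.com"/>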


Test scenarios are living test cases

New members of your team will not need to spend a lot of time wading through a large amount of test documentation; they can simply open the project in COTT and try the test cases they see in practice.

Speed of learning

COTT is quick to learn thanks to our uniform structure for writing test scenarios, and thanks to declarative programming:

  • a programming paradigm that specifies the solution to the problem - the expected result is described, not the way to achieve it

The transition from manual QA to automation QA will take your employees about three weeks, which significantly speeds up testing and raises its quality. To develop autotests, a QA specialist does not need to dig into the programming language the product is written in; a set of scripts does all the work for them.

With the help of COTT, you can build a team of automation engineers in your company who can easily handle testing functionality of any complexity.

A QA specialist will not need a bunch of additional testing tools. Instead, they only need to create a test scenario in which they can apply the necessary testing methods and approaches.

With COTT, we also provide technical documentation that speeds up the training of your specialists and helps new team members get through it quickly.

Flexibility

One of the key differences from competing products is that COTT supports connecting modules as dependencies: if you need a special one, adding it and using it later will not be a problem. We take an individual approach to each client and will be happy to develop any feature exclusively for you.

As you can see, our tool does not limit you to the functionality available out of the box.

Full test coverage

Due to the possibilities and integrations that are available in COTT, the team that has mastered this tool will be able to cover all the functionality of the project with tests without any problems.

Developers can create unit tests, which check the correctness of individual modules of the program's source code and verify that the written code works.

After that, QA specialists can start writing integration tests, which check how the system's modules interact with each other. Everything then moves into the regression-testing stage, where the steps above are repeated, with regular regression runs, until software development is successfully completed.

This approach ensures high test coverage of your product.


Reporting Tool 📊

System Stability Monitoring

Features:

  • A convenient dashboard with graphs of test scenario results
  • Use of the Reporting Tool locally and on a server
  • Step-by-step analysis of the test script
  • A full stacktrace for each step
  • Access to screenshots of each step

[animated 'Typing SVG' ticker demonstrating the Reporting Tool]


Using the Reporting Tool locally and on the server

COTT gives you the ability to view the passing statistics of your test scenarios:

  • Locally
  • On the server

To set up Reporting Tool configurations:

  • Configure <report> in global.config.file

➖➖➖➖➖➖➖➖➖➖

reportConf.png

  • <extentReports projectName=""> - Indicating a name of the project
  • <htmlReportGenerator enable=""/> - Turning on/Turning off the local report htmp - ( true | false )
  • <klovServerReportGenerator enable=""> - Turning on/Turning off the report on the server ( true | false )
  • <mongoDB host="localhost" port="27017"/> - passing your host and port to generate the report on the server
  • <klovServer url="http://localhost:1010"/> - indicating url of the server
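
Assembled from the tags above, a <report> block might look like this (the nesting and the sample projectName value are assumptions based on the screenshot, so check them against the configuration schema):
   <report>
       <extentReports projectName="my-project">
           <!-- local html report -->
           <htmlReportGenerator enable="true"/>
           <!-- report on the server via Klov + MongoDB -->
           <klovServerReportGenerator enable="false">
               <mongoDB host="localhost" port="27017"/>
               <klovServer url="http://localhost:1010"/>
           </klovServerReportGenerator>
       </extentReports>
   </report>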

Local report generation

  1. To run a local html report, make sure that <htmlReportGenerator enable="true"/>
  2. Run your test scenarios
  3. Open the report folder
  4. Open the generated report in the suggested browser:

browserReport.png

  • HTML report dashboard 📈

➖➖➖➖➖➖➖➖➖➖

htmlDashboard.png


  • The report dashboard contains all the necessary information about the tests run:

  • Number of tests run
  • Time and date of launch
  • Test results
  • Number of passed/failed steps
  • Log events
  • Timeline
  • Tags


  • Detailed exception report
  • Exceptions at each step of the test scenario

➖➖➖➖➖➖➖➖➖➖

htmlStep.png



  • Ability to view every step of the test scenario
  • Opening the screenshots of each UI step

➖➖➖➖➖➖➖➖➖➖

detailHtmlStep.png




Report generation on the server

  • To generate a report on the server, you must have a docker-compose-report file created and configured

  • Running a report

  1. To run a report on the server, make sure that <klovServerReportGenerator enable="true">
  2. Run your docker-compose-report file
  3. Go to the host specified in global.config.file, inside the tag <klovServer url="http://localhost:1010"/>
  4. Run the test scenarios

Dashboard Server Report

➖➖➖➖➖➖➖➖➖➖

dashboardServerReport.png


The dashboard page contains information about:

  • Total number of runs
  • Result of the last run
  • Number of passed tests
  • Number of failed tests
  • A general overview of your runs as a graph
  • A performance graph
  • The ability to sort all runs
  • The ability to search for a specific test scenario

  • Displaying all the runs:

➖➖➖➖➖➖➖➖➖➖

allRunServer.png


  • Ability to view every step of the test scenario

➖➖➖➖➖➖➖➖➖➖

StepServerReport.png


  • Detailed display using 'Comparison' functionality

➖➖➖➖➖➖➖➖➖➖

comparisonServer.png


  • Ability to view all the screenshots of each UI step

➖➖➖➖➖➖➖➖➖➖

screenServerReport.png


  • Full stackTrace

➖➖➖➖➖➖➖➖➖➖

stackTraceServer.png


Welcome to configuration

Detailed configuration settings are available via the animated link below 🤝


[animated 'Typing SVG' banner linking to the detailed configuration]
