version 2.0 - Gibiscus/wiki GitHub Wiki

About Cost Optimization Testing Tool


Cost Optimization Testing tool:


An accessible and virtually unlimited framework for developing automated tests. It integrates easily with CI/CD tools and can test web and REST APIs within a single test scenario, interacting with multiple databases and APIs along the way.

We are developing a product that will fully cover your needs when working with projects of any complexity. It is no secret that high-quality software is software with complete test coverage. Our framework for end-to-end testing will bring you a step closer to building near-perfect software.

Installation

🚀 How to run the tool

We recommend that you use Intellij IDEA as your IDE when working with COTT.

The COTT application takes 2 arguments as input on startup:

  1. The name of the configuration XML file:

-c={configuration-file-name}.xml or --config={configuration-file-name}.xml

  2. The path to the folder with test resources containing your test scripts and configuration file:

-p={absolute-path-to-your-resources} or --path={absolute-path-to-test-resources}

Example 1: -c=cott-config.xml -p=/user/projects/test-resources

Example 2: --config=cott-config.xml --path=/user/projects/test-resources

(note that the filename and path are just examples; use your own file name and directory layout rather than recreating these exact names)

Run from CLI

cd cost-optimization-testing-tool
mvn clean install
cd target
java -jar cott-with-dependencies.jar --config={configuration-file-name}.xml --path={absolute-path-to-test-resources}

Run using Docker (host network)

Note that you must substitute your own values for {image-name}, {configuration-file-name}, and {absolute-path-to-test-resources}.

You can pull the latest release image from packages

  • Pulling the image
docker pull ghcr.io/knubisoftofficial/cost-optimization-testing-tool:master 
docker run --rm --network=host --mount type=bind,source="{absolute-path-to-test-resources}",target="{absolute-path-to-test-resources}" "ghcr.io/knubisoftofficial/cost-optimization-testing-tool:master" "-c={configuration-file-name}.xml" "-p={absolute-path-to-test-resources}"

Alternatively, use the 'run-docker-local' shell script from the project root to run the Docker image:

docker pull ghcr.io/knubisoftofficial/cost-optimization-testing-tool:master
cd cost-optimization-testing-tool
./run-docker-local ghcr.io/knubisoftofficial/cost-optimization-testing-tool:master -c={configuration-file-name}.xml -p={absolute-path-to-test-resources}
  • Build your own image
cd cost-optimization-testing-tool
docker build . -t {image-name}
docker run --rm --network=host --mount type=bind,source="{absolute-path-to-test-resources}",target="{absolute-path-to-test-resources}" "{image-name}" "-c={configuration-file-name}.xml" "-p={absolute-path-to-test-resources}"

Alternatively, use the 'run-docker-local' shell script from the project root to run the Docker image:

cd cost-optimization-testing-tool
docker build . -t {image-name}
./run-docker-local {image-name} -c={configuration-file-name}.xml -p={absolute-path-to-test-resources}

Run via IDE (Intellij IDEA)

  • Option 1:
  1. Click on Add Configuration...

add_configuration.png

  2. Click on Add new, then select Application

add_new_application.png

  3. Enter the settings as in the screenshot, input your own values for

    --config={configuration-file-name}.xml --path={absolute-path-to-test-resources}

    and

    {your-working-directory} (usually this is the root of the project, should already be set by default)

settings.png

  4. Click Apply, then OK

  5. Run COTT

run.png

  • Option 2:
  1. Open src/test/java/com/knubisoft/cott/runner/TestRunner.java, right-click on the launch icon, then click Modify Run Configuration

test_runner.png

  2. Repeat steps 3 to 5 from the first option.

🎯 Run using site sample as a system for testing

  • Clone the project with test resources

  • Run the site sample and report server

cd cott-test-resources
docker-compose -f docker-compose-site-sample.yaml up -d
docker-compose -f docker-compose-report-server.yaml up -d
  • Check if the site sample and report server started successfully

It usually takes 1-3 minutes for the site to launch.

site-sample: http://localhost:8080

report-server: http://localhost:1010

  • Run the test tool using one of the options in the "How to run the tool" section above

Use the following arguments:

--config=config-local.xml --path=/{your-part-of-path}/cott-test-resources



Main functions

  • 🌎 High level of cross-browser support
  • 📷 Screenshot capture of errors in a tested scenario
  • 📦 Consistent and clear tag structure out of the box
  • 🔀 Web/Mobile/API/Db testing within one test scenario
  • 🔑 Custom authorization
  • 🔌 Ability to work with multiple databases and APIs using aliases
  • 📊 Reporting Tool
  • 🔧 Unlimited Integrations

Cross Browser

COTT can launch test scripts in browsers such as:

  • Google Chrome
  • Firefox
  • Safari
  • Edge
  • Opera

Cross Browser Testing with COTT is easy:

  • Convenient cross-browser configuration
  • Launch in 3 modes (Local, Remote, Docker)
  • <screenRecording> support in Docker
  • Flexible configuration of each browser
  • Versioning
  • Support for capabilities and options

Browser settings configuration structure:

    <web enabled="true">
        <baseUrl>http://localhost:4444</baseUrl>

        <browserSettings>
            <takeScreenshots enable="false"/>
            <elementAutowait seconds="3"/>

            <browsers>
                <chrome enable="true" maximizedBrowserWindow="true" headlessMode="false" browserWindowSize="800x600">
                    <browserType>
                        <localBrowser/>
                    </browserType>
                    <chromeOptionsArguments>
                        <argument>--incognito</argument>
                    </chromeOptionsArguments>
                </chrome>

                <chrome enable="true" maximizedBrowserWindow="true" headlessMode="true" browserWindowSize="1920x1080">
                    <browserType>
                        <browserInDocker browserVersion="102.0" enableVNC="true">
                            <screenRecording enable="true" outputFolder="/Users/user/e2e-testing-scenarios"/>
                        </browserInDocker>
                    </browserType>
                    <chromeOptionsArguments>
                        <argument>--disable-popup-blocking</argument>
                    </chromeOptionsArguments>
                </chrome>

                <firefox enable="false" maximizedBrowserWindow="false" headlessMode="true">
                    <browserType>
                        <remoteBrowser browserVersion="101.0" remoteBrowserURL="http://localhost:4444/"/>
                    </browserType>
                </firefox>

                <edge enable="false" maximizedBrowserWindow="false" headlessMode="true">
                    <browserType>
                        <remoteBrowser browserVersion="100.0" remoteBrowserURL="http://localhost:4444/"/>
                    </browserType>
                </edge>
            </browsers>
        </browserSettings>
    </web>

Everything listed above allows you to run test scenarios flexibly, without requiring local browsers.



Screenshots

The global configuration file has a tag called <takeScreenshots>, which enables capturing a screenshot of each step of a web scenario execution.

Advantages:

  • Capturing screenshots without launching a browser
  • Capturing screenshots in docker
  • Automatic generation in scenario folder
  • Capturing exceptions

➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖

screenshots.png

➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖

This function also makes it easy to see the scenario step at which an error occurred, as <screenshotsLogging> captures the exception. Thus, we can visually track down the cause of a scenario error by running the scenario with different browser options.



Consistent and clear tag structure out of the box

COTT has an easy-to-understand, functional set of tags with a uniform structure, which allows you to quickly master writing automated test scenarios.


'WEB script'
<web comment="Start WEB scenario">

    <input comment="Input 'Email'"
           locatorId="locator.email"
           value="[email protected]"/>

    <input comment="Input 'Password'"
           locatorId="locator.password"
           value="Testing#1"/>

    <click comment="Click on 'Log in' button" locatorId="locator.logInButton"/>

</web>

<postgres comment="Check all users in the system" alias="Api_Alias" file="expected_2.json">
    <query>SELECT * FROM t_user</query>
</postgres>
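The file referenced in file="expected_2.json" holds the expected query result that COTT compares against. A sketch of what such a file might contain (the field names and values here are purely illustrative, not taken from the project):

```json
[
  {
    "id": 1,
    "email": "test@example.com",
    "enabled": true
  }
]
```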

'HTTP - request'
    <http comment="Re-login to get a new JWT token" alias="SHOPIZER">
        <post endpoint="/api/v1/customer/login">
            <response code="200" file="expected_28.json"/>
            <body>
                <from file="request_28.json"/>
            </body>
        </post>
    </http>

⤵️⤵️⤵️

"The XSD schema makes working with tags even easier.
To compose a test scenario, you only need to set the necessary values in the drop-down parameters"


Web/Mobile/API/Db testing within one test scenario

One of the main features of COTT is that within one test scenario, with a convenient structure and easy-to-read scripts, we can test the web UI and perform REST API testing at the same time, using approaches such as Data-Driven Testing, Behavior-Driven Development, and Test-Driven Development. We can also easily query databases and declare variables.

"Basically, perform end2end software testing:"

Advantages of end2end testing:

  • Coverage of all levels of the system
  • Regression
  • System description
  • CI/CD
  • Global testing report

"Tag's Structure - end2end script:"
    <web comment="Start WEB action">

        <click comment="Click on 'Add to cart' 'Spring in Action' book"
               locatorId="shopizer.addToCartSpringBook"/>
    </web>

    <var comment="Create variable for shp_cart_code value" name="CART_CODE">
        <jpath>$.[0].content.[0].shp_cart_code</jpath>
    </var>

    <http comment="Make sure the order is added to the cart" alias="API">
        <get endpoint="/api/v1/cart/{{CART_CODE}}">
            <response code="200" file="expected_18.json"/>
        </get>
    </http>


Custom authorization

In progress



Ability to work with multiple databases and APIs using aliases

When setting up a configuration file, each integration tag in the structure has a mandatory, unique alias parameter, through which the test scenario interacts with the corresponding database or API.
Each service also has an enabled flag (true / false).

  • Configure the services you need and easily switch between them

Database Integrations

 <postgres alias="FIRST" enabled="true">
 <mongo alias="SECOND" enabled="true">
 <elasticsearch alias="THIRD" enabled="false">
 <mysql alias="FOURTH" enabled="false">
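A fuller sketch of what a single database integration entry could look like (the connection child-element names below are assumptions for illustration; consult the XSD schema for the exact structure):

```xml
<!-- hypothetical connection details for a Postgres integration -->
<postgres alias="FIRST" enabled="true">
    <jdbcUrl>jdbc:postgresql://localhost:5432/shop</jdbcUrl>
    <username>postgres</username>
    <password>postgres</password>
</postgres>
```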

API Integrations

    <apiIntegration>
         <api alias="FIRST" url="http://localhost:4000/"/>
    </apiIntegration>

    <apiIntegration>
         <api alias="SECOND" url="http://localhost:8081/"/>
    </apiIntegration>

    <apiIntegration>
         <api alias="THIRD" url="http://localhost:8082/"/>
    </apiIntegration>


Reporting Tool

COTT can generate reports both locally and on a server.

Features of reports:

  • A convenient dashboard with graphs of test scenario results
  • Step-by-step analysis of each test script
  • Access to view screenshots of each step
  • Full stacktrace of each step
  • Availability for the entire development team

report (1) (1).gif



Unlimited Integrations

COTT supports a set of integrations out-of-the-box that you can use to configure your project. If you need a certain integration to test your software, it will not be difficult to add and use it in the future. As you can imagine, we can automate the testing for a given project structure.

"Out-of-the-Box Integrations:"

Integrations

  • Clickhouse
  • DynamoDB
  • Elasticsearch
  • Kafka
  • MongoDB
  • MySQL
  • Oracle
  • Postgres
  • RabbitMQ
  • Redis
  • AWS (S3, SES, SQS)
  • Sendgrid
  • graphQL


COTT is the perfect testing tool for less experienced testers who want to switch from manual to automated testing. They can start creating automated tests within several days without knowing a programming language. To write a test scenario, a QA specialist just needs to get familiar with the list of tags (commands), which are under the hood of the framework.

We chose XML as the format for test scenarios.

XML does not depend on the operating system or processing environment. It represents data as a structure, which we continue to develop so that QA specialists can write scripts that are understandable to all team members.

Advantages:

  • Easy-to-read, simple form;
  • Standard coding type;
  • Ability to exchange data between any platforms;

The structure of each <tag> follows a uniform writing standard, which makes test scenarios easy to read and visualize.

We have also added a mandatory description field to each test step to encourage not only test development but also its maintenance.


Ease of use


COTT structure

To start working with COTT, create a directory with your project's resources. It will contain the main folders of your project and a global-config file with the global configuration settings. COTT requires a set of mandatory folders in your resources structure; these folders hold the test data and the test scenarios themselves.


Folder structure

'Mandatory folders':
  • 📁 data
  • 📁 locators
  • 📁 report
  • 📁 scenarios

Folder data - the root folder for test data (used to store datasets and files for migration, in various formats)

Inside it, you can also separate your test files into subfolders for a readable structure and ease of use.

On first start, COTT leaves a default folder structure for storing test data inside the "data" folder, which you can later change to suit yourself.

Default folders inside data:

  • credentials
  • patches
  • variations

'Purpose' ⤵️
  • credentials - Folder for storing system user data for authorization within the test scenario
{
  "username": "Test",
  "password": "Qwerty12@"
}


  • patches - folder for storing datasets for testing (sql, csv, javascript, xlsx, partiql, bson, shell, and others)
INSERT INTO t_role (id, name, description, enabled)
VALUES (1, 'ADMIN', 'Owner company', true),
       (2, 'USER', 'Admin company', true),
       (3, 'SUPERUSER', 'Lawyer company', true);

var greeting = 'Hello, World';
console.log(greeting);

var="\"The content after created a file.\""
echo "{ \"content\":" " $var }" > ./shell-1.json
rm -f ./shell-1.json

  • variations - folder in which a data set is created and stored for interacting with the web (in csv format)

  • Folder locators - Folder where element locators are stored (For interaction with WEB)

  • Inside of locators folder there should be such folders as:

  • component - a folder for storing locators that refer to the footer and header elements of the pages. (It is recommended to separate these locators for structure and ease of use)

  • pages - a folder in which the locators of a particular page are stored

  • In the xml file with locators located in the pages folder, you can request the desired footer and header component using the <include> tag.

include.png

<locator locatorId="registerButton">
            <id>registerLink</id>
        </locator>

This way, locators that are in the component folder can be passed to scenarios through the xml files in pages.
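For illustration, a pages file that pulls in a component file might look roughly like this (the file and locator names are made up, and the exact attribute layout of <include> is an assumption):

```xml
<locators>
    <!-- pull in shared header/footer locators from the component folder (attribute name assumed) -->
    <include file="component/header.xml"/>

    <locator locatorId="loginButton">
        <id>login_button</id>
    </locator>
</locators>
```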


report - folder in which the test pass report will be generated, divided by date


reportFolders.png


scenarios - Folder for creating and storing test scenarios

<scenario xmlns="http://www.knubisoft.com/e2e/testing/model/scenario">

    <overview>
        <description>Demonstration of the work of the 'assert' tag</description>
        <name>Assert</name>
    </overview>

    <tags>
        <tag>WEB</tag>
    </tags>

    <!-- test steps go here -->
</scenario>

Initial Folders Structure Generation

It is also easy to generate an initial folder structure containing starter data, a sample script, and a configured global config file for the first test run. You can add your own settings to this folder so that creating new folder configurations does not take time.

When you select a specific tag, a list of parameters that must be filled in to implement the integration will be revealed.

By filling in these tags, you can easily configure your global-config.xml file, substituting the necessary values for a quick start with the project.

After the installation is completed and the global-config.xml file is formed, you can start studying the tags used for testing and writing the first test scenario.

There are two convenient options for generating folders:

  • Generate via terminal with command: java -jar cott-with-dependencies.jar -g=/yourFolder

  • First, specify the path to the target file

targetFile.png

  • As the second step, run the command java -jar cott-with-dependencies.jar -g=/yourFolder in the terminal

  • After these steps, the folder is generated

generateJar.png

  • Generation via edit configuration: -g=/home/admin/IdeaProjects/cott-test-resources/generateFolder

To configure, go to the edit configuration dialog:

  • Now you need to specify the path to the folder.

configuration.png

  • By default: -g=/home/admin/IdeaProjects/cott-test-resources/generateFolder.

generateConfigation1.png

  • The next step is to run 'COTTStarter'

afterClick.png

After starting, you will see a message in the terminal about the successful generation of the folder, and the created folder will appear in the project.


generatePage1.png

  • After creation, you get a generated folder with ready-made settings

📁 data

📑 greating.js

📑 shell-1.sh

🗂 locators

📁 component

📁 pages

📁 report

🗂 scenarios

🗂 default

📑 expected_1.json

📑 scenario.xml

📑 global-config-example.xml

There is also an example to help you get familiar with folder generation.



Global config file structure

global-config.xml is the file that contains the main configuration settings of your project. You can create multiple configuration files and divide the testing environment between them.

For example:

  • global-config-local.xml
  • global-config-dev.xml
  • global-config-jenkins.xml

global-config.xml has an XSD schema with an optional set of out-of-the-box tags, which greatly simplifies the configuration process.

The set of basic tags for configuring global-config.xml:

<stopScenarioOnFailure>
<delayBetweenScenariosRuns> 
<runScenariosByTag>
<report>
<web>
<auth authStrategy>
<integrations>
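Put together, a minimal global config might be outlined as follows (the root element name and exact nesting are assumptions — the XSD schema is the authoritative reference):

```xml
<!-- schematic outline only; root element name is an assumption -->
<globalConfiguration>
    <stopScenarioOnFailure>false</stopScenarioOnFailure>
    <delayBetweenScenariosRuns seconds="3" enabled="true"/>
    <runScenariosByTag enable="true">
        <tag name="WEB" enable="true"/>
    </runScenariosByTag>
    <report>
        <!-- local and server reporting settings -->
    </report>
    <web enabled="true">
        <!-- browser settings -->
    </web>
    <integrations>
        <!-- database and API integrations -->
    </integrations>
</globalConfiguration>
```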

  • stopScenarioOnFailure

<stopScenarioOnFailure> - a tag that controls whether the run stops when a test scenario fails.

<stopScenarioOnFailure>false</stopScenarioOnFailure>

Contains a flag:

true ( when running one or several test scenarios, if an exception is detected in one of them, the starter stops and reports the error for the specific scenario and step )

false ( when running one or several test scenarios, if an exception is detected in one of them, the starter does not stop; all scenarios and steps are executed to the end, and afterwards exceptions are reported for all scenarios that failed )


  • delayBetweenScenariosRuns

<delayBetweenScenariosRuns> - a tag that adds a delay between scenario runs

<delayBetweenScenariosRuns seconds="3" enabled="true"/>

Parameters:

seconds - sets the delay between scenario runs

enabled="true" ( the delay between scenarios is applied )

enabled="false" ( the delay between scenarios is turned off )


  • runScenariosByTag

<runScenariosByTag> - allows you to globally launch test scenarios by a specific tag. It is designed to separate the launch of test scenarios.

  • Create your own filterTag for your convenience

  • Set a unique tag for each scenario

    <runScenariosByTag enable="true">
        <tag name="WEB" enable="true"/>
        <tag name="API" enable="false"/>
    </runScenariosByTag>

When configuring <runScenariosByTag> - you have the ability to create your own set of tags and give them names that will be used to separate and run test scenarios by tag.


  • report

<report> - configuring local and server reporting

    <report>
        <extentReports projectName="shop">
            <htmlReportGenerator enable="true"/>
            <klovServerReportGenerator enable="true">
                <mongoDB host="localhost" port="27017"/>
                <klovServer url="http://localhost:1010"/>
            </klovServerReportGenerator>
        </extentReports>
    </report>

  • auth authStrategy

<auth authStrategy> - selection of the authorization strategy used in the test scenario

  • basic
  • jwt
  • Oauth2
  • custom
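As a sketch, selecting one of the strategies might look like this (the attribute usage is an assumption based on the tag name shown above):

```xml
<!-- authStrategy accepts one of: basic, jwt, oauth2, custom (usage assumed) -->
<auth authStrategy="jwt"/>
```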

  • WEB config

<web> - is intended to set the configuration of the web part

When this tag is opened, a list of internal web tags appears that must be filled in to set the configuration. Tags inside <web>:

<browserSettings>
<takeScreenshots>
<elementAutowait>
<browsers>
<chrome>
<opera>
<safari>
<edge>
<firefox>
<browserType>
<localBrowser>
<browserInDocker>
<remoteBrowser>
<chromeOptionsArguments>
<capabilities>
<baseUrl>

config_ui 2 (1).gif


  • integrations

<integrations> - is used to set up integrations with APIs and databases

Inside the integrations tag there is a set of out-of-the-box integrations:

<apis>
<clickhouse>
<dynamo>
<elasticsearch>
<kafka>
<mongo>
<mysql>
<oracle>
<postgres>
<rabbitmq>
<redis>
<s3>
<sendgrid>
<ses>
<sqs>
<graphQl>
  • integration config

integration_conf.gif


Locators

WEB

Before describing the web-related tags, you need to know how to create a locator and connect it to a scenario

Available types of locators:

  • id
  • class
  • xpath
  • cssSelector
  • text

To create locators you need to:

  • Create a file with the name of the page or component (e.g. loginPages.xml)
  • Open the <locators> tag inside loginPages.xml
  • Give a unique name to the locator's element
  • Set the type of the locator ( id, xpath, class )
  • Place the content in the selected locator

Example ( a short video of creating a locator ) Animation
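A minimal locator file following the steps above might look like this (the page name and element values are illustrative):

```xml
<!-- loginPages.xml: one locator per element, each with a unique locatorId -->
<locators>
    <locator locatorId="email">
        <id>login_email_input</id>
    </locator>

    <locator locatorId="submitButton">
        <xpath>//button[@type='submit']</xpath>
    </locator>
</locators>
```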


Structure of the tag:

<locator locatorId="firstName">   - a unique name of the locator
      <id>register_account_first-name_input</id>   - selected value of interaction
 </locator>

Do you wonder how the locator and tags are connected?

Let's demonstrate the locator connection using the tag <click>

The tag that executes the ‘click’ command on the selected page element

     <click comment = "Click on 'Login' button"  - a comment of the action performed
        locatorId="locators.firstName"/>  - path to desired element

In locatorId="" we specify the path to the locator we need


Tags

The <web> tag is the main container holding the set of all tags for interacting with the user interface, which are described below. You can use third-party link navigation.

Tag usage example:

 <web comment="Start web scripts">

        <navigate command="to" comment="Go to base page"
                  path="/shop/"/>
        
        <wait comment="Wait for visualize a click"
              time="2" unit="seconds"/>
        
        <click comment="Click on the 'Shopizer' website link which opens in a new window" 
               locatorId="shopizer.webSiteShopizer"/>

    </web>

WEB tags

  • click
  • input
  • dropDown
  • navigate
  • assert
  • scrollTo
  • scroll
  • javascript
  • include
  • wait
  • clear
  • closeTab
  • dragAndDrop
  • repeat
  • hovers
  • image
  • switchToFrame
  • hotKey

 <input>

A tag that executes a command to enter a value in the selected field

 <input comment="Input first name"
        locatorId="ownerRegistration.firstName"
        value="Mario"/>   -  in value there is the entered value 'Mario'

Also, using the <input> tag, we have the opportunity to insert an image file of the desired format (for example, add an account photo)

To implement this function, the desired image must be located in the same folder as your test scenario, for example, in a folder called scenario

<input comment="Add profile photo" 
     locatorId="userPhoto.addProfilePhotoButton" -  selected element of interaction
  value="file:Firmino.jpg"/>   -  Inside of `value` we’re indicating that we want to insert a file using `file:` , after that we enter the name of the file (which is located in the scenario folder)

i.e. Firmino.jpg.

 <dropDown>

A tag for interacting with the select function

This tag is used for the select and deselect functions, with the ability to interact with multiselect

  1. Select One Value

Selects one value from the list

   <dropDown comment="Select 'Country'" locatorId="registration.country">
            <oneValue type="select" by="text" value="Spain"/>
        </dropDown>
  2. Deselect One Value

Drops one value from the list

   <dropDown comment="Deselect 'Country'" locatorId="registration.country">
            <oneValue type="deselect" by="text" value="Spain"/>
        </dropDown>

The by="" parameter offers a choice of interaction:

  • text
  • value
  • index
  3. Deselect All Values

Drops all values from the list

<dropDown comment="Deselect all values'" locatorId="registration.country">
            <allValues type="deselect"/>
        </dropDown>

<navigate>

A tag that allows you to navigate through WEB pages using a URL

To use, you only need to add the path to the desired page, as by default your base URL will be used, which is specified in the configuration settings.

It has 3 commands:

  • to
  • back
  • reload

  1. NavigateTo

Navigation to the page mentioned in path=""

 <navigate comment="Go to register account page" 
                  command="to" 
                  path="/registerAccount"/>  - path URL
  2. NavigateBack

Return to the previous page of the scenario

<navigate comment="Go to register account page"
                  command="back"/>
  3. NavigateReload

Reloading the current scenario page

<navigate comment="Go to register account page"
                  command="reload"/>

<assert>

A tag that allows you to confirm that the previous action led to the desired result

  • Example:

You used the <navigate> tag with command="back" to return to the previous page

To confirm the execution of this function, the <assert> tag is used

As a confirmation, create a locatorId for an element of the target page and pass it to the <assert> tag

 <assert comment="Verify that the transition to the previous page was successful" 
       locatorId="billing.billingPositiveAssert"  - the name of element locator of the needed page
           attribute="id">    - attribute choice ( type, autocomplete, name, placeholder )
           <content>
                   register_profile-photo_input    - the element itself 
          </content>
  </assert>

<scrollTo>

A tag that allows you to scroll the page to a specific element

 <scrollTo comment="Scroll to element" 
                  locatorId="footer.registerButton"/>

<scroll>

Scroll Up or Down by pixel

  • value="" - takes the scroll distance in pixels
    <scroll comment="Scroll Down" value="1024" direction="down" measure="pixel" type="page"/>

    <scroll comment="Scroll Up" value="976" direction="up" measure="pixel" type="page"/>

Scroll Up or Down by percent

  • value="" - has a maximum value of 100 (when scrolling as a percentage)
    <scroll comment="Scroll Down in percent" value="60" direction="down" measure="percent" type="page"/>

    <scroll comment="Scroll Up in percent" value="80" direction="up" measure="percent" type="page"/>

<javascript>

A tag that calls code located in the 'javascript' folder

<javascript comment="Apply javascript function" file="function.js"/>

 <include>

A tag that allows you to run a ‘scenario inside a scenario’ (often used to bundle different scenarios)

Implemented by passing the path to the scenario, which is located in a specific folder

The beginning of the path always starts with the scenarios folder, since all folders with created scenarios must be stored in this folder

 <include comment="Add scenario for login to the system"
       scenario="/nameOfScenarioFolderWithSlash"/> - a path to the scenario you need to launch

 <wait>

A tag that pauses scenario execution to allow a function that requires a wait to complete. Afterwards, the scenario continues at its normal pace

 <wait comment="Waiting for loading Dashboard page" 
      time="1" unit="seconds"/>  - There is a choice: `seconds` or `milliseconds`

 <clear>

A tag that allows you to clear the entered data in the fields

 <clear comment="Clear name field"  
   locatorId="updateProfile.name"/> 

 <closeTab>

A tag that allows you to close the current browser tab

 <closeTab comment="Check if closing 'second tab' works correctly"/>

 <dragAndDrop>

A tag that clicks on an object and drags it to another location

<dragAndDrop comment="Drag and drop action"
                     fromLocatorId="nativeDrag.dragFirst"
                     toLocatorId="nativeDrag.dropSecond"/>

It is also possible to upload any file to the site

<dragAndDrop comment="added image" toLocatorId="dragAndDrop.image" dropFile="true">
            <filePath>/home/bohdan/image.jpg</filePath>
        </dragAndDrop>

 <repeat>

A tag that allows you to repeat any action used in the tags above

Used both inside and outside the WEB tag

   <repeat comment="Repeat action" times="5">  - the number of repetitions 
          <click comment="Click on 'Send button'"
            locatorId="locator.clickButton"/>
  </repeat>

 <hovers>

A tag that allows you to expand a dropdown list that supports the hover function

        <hovers comment="Open dropdown list with 'hover' function">
            <hover comment="Open drop down 'Novels' tab" locatorId="locator.novels"/>
        </hovers>

  <image>

A tag that allows you to take and compare full-screen screenshots

Using <compareWithFullScreen>:

        <image comment="Add screenshot" file="page.png" highlightDifference="true">
            <compareWithFullScreen/>
        </image>

Also, using the tag, you can check the display of an image of a particular element

Using <compareWith locator="image.picture">:

        <image comment="Compare image" file="picture.png" highlightDifference="true">
            <compareWith locator="image.picture"/>
        </image>

<switchToFrame>

A tag that allows you to switch to an embedded frame on the page (for example, one hosting an external widget)

switchToFrame - while the tag is open, the full set of web tags can be used inside it

  • After closing the tag, you can continue to use locators inside the page
 <switchToFrame comment="Open api frame on the site"
                       locatorId="frame.page">

        <input comment="Add email the page"
               locatorId="frame.email"
               value="[email protected]"/>

               <clear comment="Clear email"
                      locatorId="frame.clear"
                      highlight="true"/>

                      <click comment="Click button"
                             locatorId="frame.button"/>

        </switchToFrame>

<hotKey>

A tag responsible for pressing individual keys or key combinations on the keyboard

hotKey - while the tag is open, the full set of hotKey sub-tags can be used

  • After closing the tag, you can continue to use locators inside the page
  • There is a division of tags without locators and with locators

Tags are used with locators

  • copy
  • cut
  • past
  • highlight
        <hotKey comment="Paste the password">
            <paste comment="Paste the password"
                   locatorId="hotKey.password"/>
        </hotKey>

Tags used without locators:

  • tab
  • space
  • backspace
  • escape
  • enter

        <hotKey comment="Press Enter">
            <enter comment="Press Enter"/>
        </hotKey>

Variations

What are ‘variations’ in COTT?

A variations file is a CSV file that contains prepared QA data for testing

Variations play an important role in writing automated WEB tests. When your QA specialists need to test the functionality of fields and selectors, variations let them quickly prepare the necessary data set in tabular form inside a CSV file and easily test both the validation and the core functionality of those fields and selectors.

  • Variations are effective for Positive and Negative testing

  • Data is often organized into tables, so CSV files are very easy and efficient to use

Usage example:

We have a new user registration form, which consists of the following fields:

  • First name
  • Last name
  • EmailAddress
  • Password
  • Repeat Password

The structure of this CSV file consists of unique field names, each with an enumeration of values.


variations.png


When we link our CSV file to the scenario, the scenario will run 5 times in a row, since the table has 5 columns with different data inside. Each run of the scenario uses the next set of values from this file.

Testing the functionality and validation of each field manually would take a lot of time for writing test documentation and performing the checks; using variations takes much less time and increases efficiency.

Creating such a CSV file with test data is quite simple:

The CSV file consists of rows of data and delimiters that mark the boundaries of the columns. In one such positive or negative file, QA specialists can enumerate all the sets of values needed to test the functionality effectively.
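The run-per-data-set behaviour can be sketched in plain Python. This is an illustration of the idea, not COTT's actual implementation: the column names come from the registration example above, the data sets are laid out as rows for simplicity, and the substitution logic for `{{...}}` placeholders is an assumption.

```python
import csv
import io
import re

# A miniature variations file; for simplicity each data set is one row
# (header row = placeholder names). This layout is an assumption.
variations_csv = """firstName,lastName,emailAddress,password,repPassword
John,Doe,[email protected],Secret1!,Secret1!
Jane,Roe,[email protected],Secret2!,Secret2!
"""

template = '<input comment="Input first name" value="{{firstName}}"/>'

def substitute(template: str, row: dict) -> str:
    # Replace every {{name}} placeholder with the value from the data set.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: row[m.group(1)], template)

# One scenario run per data set in the file.
runs = [substitute(template, row)
        for row in csv.DictReader(io.StringIO(variations_csv))]
for run in runs:
    print(run)
```

With two data rows, the scenario template is instantiated twice, once per row.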


Use of variation

To include variations in a scenario, add the attribute variations="fileName" to the scenario schema path.


variations web.png

registerN - the name of the linked csv file


Usage example of variations in a scenario

 <input comment="Input first name"
        locatorId="registerAccount.firstName"
        value="{{firstName}}"/>   - the usage of a variation in `value`


 <input comment="Input lastname"
        locatorId="registerAccount.lastName"
        value="{{lastName}}"/>


 <input comment="Input email"
        locatorId="registerAccount.emailAddress"
        value="{{emailAddress}}"/>


 <input comment="Input password"
        locatorId="registerAccount.password"
        value="{{password}}"/>


 <input comment="Repeat password"
        locatorId="registerAccount.repeatPassword"
        value="{{repPassword}}"/>


Integration tags

Database tags:

  • Postgres
  • Mongo
  • Oracle
  • MySQL
  • Dynamo
  • ClickHouse
  • Redis
<postgres>

Database interaction tags:

   <postgres comment="Get all user's from system" alias="shopDb" file="expected_1">
        <query>SELECT * FROM t_user</query>
   </postgres>

<mongo>
  <mongo comment="Get all user's from system" alias="shopDb" file="expected_1">
      <query>SELECT * FROM t_user</query>
  </mongo>

<oracle>
   <oracle comment="Get all user's from system" alias="shopDb" file="expected_1">
      <query>SELECT * FROM t_user</query>
   </oracle>

<mysql>
   <mysql comment="Get all user's from system" alias="shopDb" file="expected_1">
       <query>SELECT * FROM t_user</query>
   </mysql>

<dynamo>
   <dynamo comment="Get all user's from system" alias="shopDb" file="expected_1">
       <query>SELECT * FROM t_user</query>
   </dynamo>

<clickhouse>
   <clickhouse comment="Get all user's from system" alias="shopDb" file="expected_1">
       <query>SELECT * FROM t_user</query>
   </clickhouse>

<redis>
   <redis comment="Get all user's from system" alias="shopDb" file="expected_1">
       <query></query>
   </redis>

Queue tags:

  • Rabbit
  • Kafka
  • SQS
<rabbit>
    <rabbit comment="Receive 2 times and send 1 time"
            alias="rabbit-one">
        <receive queue="queue">
            <file>expected_1.json</file>
        </receive>

        <receive queue="queue">
            <message>[]</message>
        </receive>

        <send routingKey="queue">
            <file>body_1.json</file>
        </send>
    </rabbit>

<kafka>
  <kafka comment="Receive then send and receive 2 time"
           alias="kafka-one">
        <send topic="queue2" correlationId="343gfrvs-dh4aakgksa-cgo60dmsw-sdf4gj62">
            <file>body_2.json</file>
        </send>

        <send topic="queue3" correlationId="dfskogdfa9sd-rekjdfnkv-sdfkjewnd8-erkfdn">
            <file>body_4.json</file>
        </send>

        <receive topic="queue2" timeoutMillis="1200">
            <value>
                [ {
                "key" : null,
                "value" : "{\n \"squadName\": \"Still rule cool\",\n \"homeTown\": \"Metro Tower\",\n \"formed\":
                2018,\n \"secretBase\": \"Rower\",\n \"active\": false\n}",
                "correlationId" : "343gfrvs-dh4aakgksa-cgo60dmsw-sdf4gj62",
                "headers" : { }
                } ]
            </value>
        </receive>
    </kafka>

<sqs>
  <sqs comment="Compare message from empty queue with file content"
         alias="queue_one"
         queue="queue">
        <receive>expected_1.json</receive>
  </sqs>

Email Services:

  • SES
  • Sendgrid
<ses>
   <ses comment="Sending a message to the email"
         alias="ses_one">
        <destination>[email protected]</destination>
        <source>[email protected]</source>
        <message>
            <body>
                <html>Amazon SES test</html>
                <text>Hello World</text>
            </body>
            <subject>TITLE</subject>
        </message>
    </ses>

<sendgrid>
    <sendgrid comment="Sending a message to the email from file"
              alias="sendgrid_one">
        <post url="mail/send">
            <response code="202"/>
            <body>
                <from file="body1.json"/>
            </body>
        </post>
    </sendgrid>

Object storage:

  • S3
<s3>
   <s3 comment="Upload json file to the bucket"
        alias="bucket"
        key="/com/tool/integration-testing/expected_2.json">
        <upload>expected_2.json</upload>
    </s3>

Search system:

  • ElasticSearch
<elasticsearch>
    <elasticsearch comment="Execute elasticsearch commands - get list of indexes"
                   alias="elastic_one">
        <get url="/_cat/indices">
            <response file="expected_1.json">
                <header name="content-length" data="0"/>
            </response>
        </get>
    </elasticsearch>

Data request:

  • graphQL
<graphqlIntegration>
    <graphqlIntegration>
        <api alias="graphql" url="https://rickandmortyapi.com" enabled="true"/>
    </graphqlIntegration>


Multidatasets

COTT has the ability to migrate data to the database using files with extensions such as:

  • sql
  • csv
  • xlsx
  • partiql
  • bson

All test data is stored in the Data folder

Migrations

Migration tags:

  • migrate
<migrate>
  • migrate tag interacts with all existing relational databases
    <migrate comment="Add data for database" alias="mySql">
        <data>dataset.sql</data>
    </migrate>

REST API - Scripts

  • <var>
  • <http>
  • <auth>

Var

A variable is a named (addressable) area of memory that can be used to access data. In simple words, a variable is a data store: you can put any value into it (for example, a number, a string, or another data type). Variables store data that can later be used in the program.

A variable is flexible:

  • it can store information;
  • you can read its value, which does not affect the value of the variable itself;
  • new data can be written into it

In order to create a variable, you must declare it (i.e. reserve a memory cell for certain data)

How to create a variable in COTT?

  • Let's say our test scenario has a <postgres> step where we query the database to retrieve data from a specific table and obtain an authorization token.
  <postgres comment="Get token and user_id from t_one_time_token table" file="expected_7.json" alias="central">
        <query>SELECT token, user_id
            FROM t_user_one_time_token ORDER BY ID DESC LIMIT 1
        </query>
    </postgres>
  • After executing this query, we get an actual_1.json file with the database response, and its contents are transferred to expected_1.json

Declaration of variable with data extraction:

     <var comment="Created token var" name="TOKEN">
          <jpath>$.[0].content.[0].token</jpath>  - an internal tag with which we extract the data we need from `expected_file`
     </var>
  • name="TOKEN" - the name of the declared variable
  • <jpath>$.[0].content.[0].token</jpath> - the way to extract data from the json (expected file)
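The `jpath` expression is plain index/key navigation over the expected file. A rough Python equivalent follows; the response shape below is invented for illustration, and only the path to `token` mirrors the example:

```python
import json

# Hypothetical contents of the expected file produced by the <postgres> step.
expected = json.loads("""
[ { "content": [ { "token": "abc123", "user_id": 42 } ] } ]
""")

# $.[0].content.[0].token -> first array element, key "content",
# first array element, key "token"
token = expected[0]["content"][0]["token"]
print(token)
```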

An example of passing the variable in an http request

<http comment="Get user profile" alias="API">
        <get endpoint="/api/v1/profile">
            <response code="200" file="expected_1.json"/>
            <header name="Authorization" data="Bearer {{TOKEN}}"/> - variable usage
        </get>
   </http>

Structure HTTP

The HTTP request structure has all the basic features of API testing tools

HTTP in a test scenario

  <http comment="Check the ability to log in to the system" alias="API">  - action description + API alias
      <post endpoint="/api/v1/login">  - request type + endpoint used
        <response code="200" file="expected_1.json"/>  - response code + expected result
          <body>
                  <from file="request_1.json"/>  - transfer of the request body
          </body>
       </post>
   </http>

   <var comment="Get JWT token from previous expected_file" - creating the variable
         name="profile"
         path="$.body.token"/>

   <http comment="Get user profile" alias="API">
        <get endpoint="/api/v1/profile">
            <response code="200" file="expected_1.json"/>
            <header name="Authorization" data="Bearer {{profile}}"/> - variable usage
        </get>
   </http>

Methods:

  • GET
  • POST
  • PUT
  • PATCH
  • DELETE
  • OPTIONS
  • HEAD
  • TRACE

Response code

  • 1xx - informational
  • 2xx - success
  • 3xx - redirection
  • 4xx - client error
  • 5xx - server error

Request file

{
  "attributes": [
    {
      "id": 4
    }
  ],
  "product": 1,
  "quantity": 1
}

Expected Result

{
  "body": null,
  "errors": {
    "code": [
      "Forbidden"
    ],
    "message": [
      "Access is denied"
    ]
  },
  "debugInfo": {
    "requestId": "p(any)",
    "stackTrace": null
  }
}

Comparison

actual.png


After executing each HTTP request, the comparison generates an actual_file, which contains the API response in json format with the response code and data

The actual_file is generated so that the QA specialist can immediately see how the system reacted to the check. If QA is satisfied with the actual result, all the data from actual_file is transferred to the expected_file (which sits in the http-request structure as the expected result of the test), so that the test passes successfully.
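The `p(any)` placeholder shown in the expected file earlier suggests that the comparison can accept any actual value at a given position. A minimal sketch of such matching follows; the placeholder semantics here are an assumption for illustration, not COTT's documented behaviour:

```python
def matches(expected, actual):
    # "p(any)" in the expected file accepts any actual value (assumed semantics).
    if expected == "p(any)":
        return True
    if isinstance(expected, dict) and isinstance(actual, dict):
        return expected.keys() == actual.keys() and all(
            matches(v, actual[k]) for k, v in expected.items()
        )
    if isinstance(expected, list) and isinstance(actual, list):
        return len(expected) == len(actual) and all(
            matches(e, a) for e, a in zip(expected, actual)
        )
    return expected == actual

expected = {"debugInfo": {"requestId": "p(any)", "stackTrace": None}}
actual = {"debugInfo": {"requestId": "req-81f3", "stackTrace": None}}
print(matches(expected, actual))  # the request id differs, but p(any) accepts it
```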

HTTP in COTT allows you to perform smoke testing (and make sure that nothing important is broken), conduct unit and integration testing, run the same tests with various sets of input data, or quickly perform any supporting actions to create test data and situations.


Create HTTP - scripts

gif.gif


Function authentication

COTT has a unique <auth> tag that allows instant authorization as a system user within a test script

In the global-config-file settings you have the option to choose an authorization strategy by opening the tag:

<auth authStrategy=""

Types of authorization <auth> tag

  • Basic Authentication
  • Token Authentication
  • OAuth2 Authentication
  • Custom ( in progress )

After selecting an authorization strategy in the global-config-file, opening the <auth> tag in the test script performs authorization according to the selected strategy

The <auth> tag is mainly used for REST API testing, as it allows you to perform many requests inside it under a specific system user

    <auth comment="Test case for auth tag" apiAlias="SHOPIZER" credentials="jwt_user.json" loginEndpoint="/api/v1/customer/login">
        <http comment="Get all stores in system" alias="SHOPIZER">
            <get endpoint="/api/v1/auth/customers/profile">
                <response code="200" file="expected_3.json"/>
            </get>
        </http>
    </auth>
  • apiAlias="" - API interaction alias
  • credentials - Authorization data file
  • loginEndpoint="" - Authorization endpoint used

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

    • where jwt_user.json is the name of the file that contains the necessary data for authorization
    • closing the tag means logging this user out within the test script

The interaction of the tag and the HTTP request simplifies REST API testing. For example, when performing many checks inside the system (private endpoints), you will not need to pass the authorization token each time, since everything inside the tag already runs as an authorized user

  • This function makes it easy to test functionality with complex logic and a high level of privacy: <auth> lets you instantly switch from one system user to another inside the test script, so the tester can effectively test functionality with a complex system of user permissions and rights in a short time


Mobile Testing

Android local testing

With an easy setup and connection, you can run mobile tests using emulators or a mobile browser:

  • You can run mobile tests using the Appium server and Android Studio.
  • Convenient separation between app testing and mobile browser testing using an emulator.
  • Device information is declared clearly in the global config file.
  • Multiple tags can be used in one test script.
  • Different Android emulators from Android Studio can be used.
  • Appium settings are easy to set up and use.
  • Multiple emulators can be used in one test case via the global-config-file.

Native Configuration

  • NATIVE config
  • Open <native> tag to configure mobile application

<native> is intended to set configurations of the mobile application part

When this tag is opened, a list of internal mobile application tags appears that must be filled in to set the configuration. Tags inside the mobile application:

<appiumServerUrl>
<deviceSettings>
<takeScreenshots>
<elementAutowait>
<devices>
<device>
<udid>
<deviceName>
<appPackage>
<appActivity>

  • Ability to run different applications on any emulators

  • It is possible to create several devices in the global config file

<native enabled="true">
       <appiumServerUrl>http://127.0.0.1:4723/wd/hub</appiumServerUrl>
       <deviceSettings>
           <takeScreenshots enable="true"/>
           <elementAutowait seconds="300"/>
           <devices>
               <device platformName="android" enabled="true">
                   <udid>emulator-5554</udid>
                   <deviceName>Pixel 5 API 30</deviceName>
                   <appPackage>com.todoist</appPackage>
                   <appActivity>com.todoist.alias.HomeActivityDefault</appActivity>
               </device>
           </devices>
       </deviceSettings>
   </native>
  • <native enabled="true/false"> - adds the ability to turn the whole native configuration on/off
  • <appiumServerUrl> - the URL and port where your Appium server is running

DEVICE tag where you specify your emulator/real device information:

  • <device platformName=""> - gives you the ability to choose the device platform (android)
  • enabled - the ability to turn your device on/off
  • udid - the unique identifier of your device; you can get it from the terminal (e.g. with adb devices)
  • deviceName - the name of your device; you can see it in Android Studio or get it from the terminal

app - you can specify the path to your app here and Appium will install it on the current device for you

  • appPackage - we use com.todoist for the Todoist application; you can use another application
  • appActivity - we use com.todoist.alias.HomeActivityDefault for Todoist; you can use another application

The <native> tag is the main container holding the set of all tags for interacting with the user interface, which are described below. You can use third-party link navigation.

Tag usage example:

<native comment="Start running tests in mobile app">

        <wait comment="wait 5 sec"
              time="5"/>

        <click comment="Click 'Continue with email' the page"
               locatorId="native.chooseContinue"/>

</native>

NATIVE TAGS

  • input
  • click
  • assert
  • wait
  • clear
  • image
  • dragAndDrop
  • refresh
  • scrollTo
  • swipe
  • webView
  • navigate
 <input>

A tag that executes a command to enter a value in the selected field

 <input comment="Input first name"
        locatorId="native.firstName"
        value="Name"/>   - `value` contains the value to be entered, 'Name'

 <wait>

A tag that allows you to freeze the passage of the scenario to perform a certain function that requires a wait. Afterwards, the scenario will run at its normal pace

 <wait comment="Waiting for loading Dashboard page" 
      time="1" unit="seconds"/>  - There is a choice: `seconds` or `milliseconds`

 <clear>

A tag that allows you to clear the entered data in the fields

 <clear comment="Clear name field"  
   locatorId="updateClear.name"/> 

  <image>

A tag that allows you to take full-screen screenshots, using the inner tag compareWithFullScreen:

        <image comment="Add screenshot" file="page.png" highlightDifference="true">
            <compareWithFullScreen/>
        </image>

Also, using the inner tag compareWith locator="image.picture", you can check how the image of a specific element is displayed:

        <image comment="Compare image" file="picture.png" highlightDifference="true">
            <compareWith locator="image.picture"/>
        </image>

<assert>

A tag that allows you to check whether the data is entered or the element is displayed on the page

The <content> inner tag holds the expected value; an id or xpath locator is used

 <content>com.todoist:id/view_option_description</content>

        <assert comment="Assert plus element"
                locatorId="nativeAssert.assertOptions" attribute="resource-id">
            <content>com.todoist:id/view_option_description</content>
        </assert>

We use the W3C setting for these tags; at the moment it is only supported on Linux with Android 11 and 12 in the emulator:

  • dragAndDrop
  • scrollTo
  • scroll
  • swipe
  • refresh
<dragAndDrop>

A tag that clicks on an object and drags it to another location

<dragAndDrop comment="Drag and drop action"
                     fromLocatorId="nativeDrag.dragFirst"
                     toLocatorId="nativeDrag.dropSecond"/>

<scrollTo>

A tag that scrolls to the element identified by the selected locator

<scrollTo comment="Scroll the element"
                  locatorId="nativeScroll.data"/>

<scroll>

A tag that scrolls the page by the specified value using the selected locator

<scroll comment="Scroll the element"
                type="page" value="50"
                locator="nativeScroll.scroll"/>

<swipe>

A tag that swipes the device screen left or right

<swipe comment="Swipe on the left"
               direction="left"/>

<refresh>

The tag that allows you to refresh the page

<refresh comment="Refresh the page"/>

        <wait comment="wait 5 second" time="5"/>

<webView>

A tag that allows data to be changed in another open window (a hybrid app web view); inside the tag you can use the tags available in the mobile browser and on the web:

  • click
  • input
  • dropDown
  • navigate
  • assert
  • scrollTo
  • scroll
  • javascript
  • include
  • wait
  • clear
  • closeTab
  • repeat
  • hovers
  • image
  • switchToFrame
  • hotKey
<webView comment="Open webView the page">
            
            <input comment="Input 'test' name"
                   locatorId="webView.txt"
                   value="test"/>

        </webView>

<navigate>

A tag that allows you to interact with the default Android navigation buttons

  • back
  • home
  • overview
<navigate comment="Use navigate button home"
                  destination="home"/>

Before describing the native related tags, it is necessary to know how to create a locator and connect it to a scenario

Available types of locators:

  • id
  • xpath

To create locators you need:

  • Create a file with the name of the page or component (e.g. native.xml)
  • Inside native.xml, open the <locators> tag
  • Give a unique name to the locator element
  • Set the type of the locator (id, xpath)
  • Place the content in the selected locator

Structure of the tag:

<locator locatorId="native">   - a unique name of the locator
      <id>com.google:id/btn_welcome_email</id>   - selected value of interaction
 </locator>

How is the connection between the locator and tags made?

Let's demonstrate the locator connection using the <click> tag

The tag that executes the 'click' command on the selected page element

     <click comment="Click 'login' in the application"  - a comment of the action performed
        locatorId="locators.native"/>  - path to the desired element

In locatorId="" we specify the path to the locator we need



Run native test

Install Android Studio

  1. Install Android Studio: https://developer.android.com/studio (or download Xcode from the App Store)
  • Create an emulator in Android Studio; choose a device, for example Pixel 6

androidStudio.png

  • Next, choose the Android version

androidStudio1.png

  • Choose a name for the emulator and click Finish

androidStudio2.png

  • After these steps, you will see an emulator with the name you chose
  • Launch the emulator and install the application from the Play Market or from third-party resources

emulators.png

  • Run the downloaded application and open Logcat

  • Clear the logs on the left

  • Enter .android.intent.action.MAIN in the search field

  • Find the cmp=com value in the logs. We need this value in the config file


itentAction.png

Install Appium server

  1. Install the Appium server: https://github.com/appium/appium-desktop/releases/tag/v1.22.3-4 - you can choose a version for macOS, Windows, or Linux
  • Here you can select any host or port for the connection

  • Set the path to the SDK folder

  • Choose the chromeDriver that suits the browser in your emulator

  • We use the default host 0.0.0.0. The default port is 4723, but you can choose another port that is convenient for you; go to the configuration to change it


appiumSettings.png

  • Specify the ANDROID_HOME path; we need the final path /home/admin/Android/Sdk

  • This folder can be found after installing Android Studio

  • After adding the path, click Save and Restart


home.png


If there is a driver error when launching the emulator for a mobile browser, the following will help you

/home/admin/node_modules/appium/node_modules/appium-chromedriver/chromedriver/linux/chromedriver_linux64_v74.0.3729


byPath.png

  • Now save your changes with Save As Preset; your saved config will show up under Presets

presets.png


  • After this step we need to start the Appium server

AppuimStart.png


Scenario for native tag

  • The structure of the script file is very simple, the same as a WEB or API test
  • All you need to do is open the native tag and you can start writing
  • Convenient selection of tags for application testing
  • Testing any android applications of your choice
  • Easy connection through the global config file

Step:1

  • Create a folder with the name of the test scenario inside the scenarios folder
  • Create a file scenario.xml inside of the folder

nativeFolder.png


Step:2

  • Open locators folder
  • Create a file named after the native page, e.g. registration.xml
  • Inside the created file - open the <page> tag - fill in the data

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

nativeLocators.png

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • Open the tag <locators>
  • Create the necessary number of locators

Step:3

  • Open the created file scenario.xml
  • Open <scenario> tag
  • Populate the list of dropdown tags

native.png

 <description>
  • A description of the tested scenario

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<name>
  • A name of the tested scenario

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<tags>
  • Setting the tag written in global-config-file.xml ( To run the scenario by tags )

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

  • Open up native tag in scenario.xml and start writing your tested scenario

nativeScenario.png


How to use uiautomatorviewer

  • To interact, you will need to run the uiautomatorviewer file
  • The file path is inside the Android folder: /home/admin/Android/Sdk/tools/bin
  • For Linux users, you need to make the file executable so it can run as an application

linuxUser.png


Thanks to uiautomatorviewer you can select locators by simply clicking on one of the elements. You can also:

  • 📂 You can upload saved screenshots from a folder.
  • 📲 You can take a screenshot of the screen, provided that Android Studio is running.
  • 🗂 You can save screenshots in folders

After launching the emulator and the application in Android Studio, take a screenshot of the device in uiautomatorviewer

avtomatorUI.png

After taking a screenshot of the screen, you have the opportunity to select locators

  • For the <id> locator, select the resource-id value; just copy and paste it into the locator

recourseID.png

  • Example of using resource <id> locator:
    <locator locatorId="name">
         <id>com.todoist:id/fab</id>
    </locator>
  • It is also possible to use <xpath>; this locator is divided into two types:
  1. Using // with a class and [@content-desc='name'].
  <locator locatorId="name">
       <xpath>//android.widget.nameClass[@content-desc='name']</xpath>
  </locator>

classWithContentDesc.png

  2. Using // with a class and [@text='name'].
  <locator locatorId="name">
       <xpath>//android.widget.nameClass[@text='name']</xpath>
  </locator>

textWithClass.png

  • An example test case can be viewed on our YouTube channel.



mobileBrowser Configuration

  • MOBILEBROWSER config

<mobileBrowser> is intended to set configurations of the mobile browser part

When this tag is opened, a list of internal mobile browser tags appears that must be filled in to set the configuration. Tags inside the mobile browser:

<baseUrl>
<appiumServerUrl>
<deviceSettings>
<takeScreenshots>
<elementAutowait>
<devices>
<device>
<udid>
<deviceName>

  • Ability to run different applications on any emulators

  • It is possible to create several devices in the global config file

<mobilebrowser enabled="true">
       <baseUrl>http://10.0.2.2:4000</baseUrl>
       <deviceSettings>
           <takeScreenshots enable="false"/>
           <elementAutowait seconds="300"/>
           <devices>
               <device platformName="android" enabled="true">
                   <udid>emulator-5554</udid>
                   <deviceName>Pixel 5 API 29</deviceName>
               </device>
           </devices>
       </deviceSettings>
   </mobilebrowser>
  • <mobilebrowser enabled="true/false"> - adds the ability to turn the whole mobile browser configuration on/off
  • <baseUrl> - specify the path to your local port

DEVICE tag where you specify your emulator/real device information:

  • <device platformName=""> - gives you the ability to choose the device platform (android)
  • enabled - the ability to turn your device on/off
  • udid - the unique identifier of your device; you can get it from the terminal (e.g. with adb devices)
  • deviceName - the name of your device; you can see it in Android Studio or get it from the terminal

mobileBrowser

Before describing the mobileBrowser related tags, it is necessary to know how to create a locator and connect it to a scenario

Available types of locators:

  • id
  • xpath

To create locators you need:

  • Create a file with the name of the page or component (e.g. mobileBrowser.xml)
  • Inside mobileBrowser.xml, open the <locators> tag
  • Give a unique name to the locator element
  • Set the type of the locator (id, xpath)
  • Place the content in the selected locator

Structure of the tag:

<locator locatorId="mobileBrowser">   - a unique name of the locator
      <xpath>//android.widget.ImageView[@content-desc='More options']</xpath>   - selected value of interaction
 </locator>

How is the connection between the locator and tags made?

Let's demonstrate the locator connection using the <click> tag

The tag that executes the 'click' command on the selected page element

     <click comment="Click on 'Login' button"  - a comment of the action performed
        locatorId="locators.mobileBrowser"/>  - path to the desired element

In locatorId="" we specify the path to the locator we need

We already know that all created locators are stored in a folder called locators; accordingly, we enter the name of the file we need and then the unique name of the locator

In our case it is:

  • mobileBrowser

This is exactly how we implement the interaction between the locator and tags.
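The lookup described above can be sketched with Python's standard XML parser. The file contents and the resolution rule ("file name, dot, locator name") are assumptions made for illustration, not COTT's actual implementation:

```python
import xml.etree.ElementTree as ET

# A miniature locator file, as it might appear in the locators folder
# (mobileBrowser.xml). Contents are invented for this example.
MOBILE_BROWSER_XML = """
<page>
  <locators>
    <locator locatorId="email">
      <xpath>//android.widget.EditText[@content-desc='Email']</xpath>
    </locator>
  </locators>
</page>
"""

FILES = {"mobileBrowser": MOBILE_BROWSER_XML}

def resolve(reference: str) -> str:
    # "mobileBrowser.email" -> file "mobileBrowser.xml", locator "email"
    file_name, locator_id = reference.split(".", 1)
    root = ET.fromstring(FILES[file_name])
    for locator in root.iter("locator"):
        if locator.get("locatorId") == locator_id:
            # Return the text of the inner <id>/<xpath> element.
            return list(locator)[0].text.strip()
    raise KeyError(reference)

print(resolve("mobileBrowser.email"))
```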

Now let's learn more about all the tags that will help you test the software qualitatively.


Tags

The <mobileBrowser> tag is the main container holding the set of all tags for interacting with the user interface, which are described below. You can use third-party link navigation.

Tag usage example:

 <mobileBrowser comment="Start web scripts">
        
        <wait comment="Wait for visualize a click"
              time="5" unit="seconds"/>
        
        <click comment="Log in email the mobile browser" 
               locatorId="mobileBrowser.email"/>

    </mobileBrowser>

mobileBrowser tag

  • click
  • input
  • dropDown
  • navigate
  • assert
  • scrollTo
  • scroll
  • javascript
  • include
  • wait
  • clear
  • closeTab
  • repeat
  • hovers
  • image
  • switchToFrame
  • hotKey

 <input>

A tag that executes a command to enter a value in the selected field

 <input comment="Input first name"
        locatorId="ownerRegistration.firstName"
        value="Mario"/>   - `value` contains the value to be entered, 'Mario'

Also, using the <input> tag, we have the opportunity to insert an image file of the desired format (for example, add an account photo)

To implement this function, the desired image must be located in the same folder as your test scenario, for example, in a folder called scenario

<input comment="Add profile photo" 
     locatorId="userPhoto.addProfilePhotoButton" -  selected element of interaction
  value="file:Firmino.jpg"/>   -  Inside of `value` we’re indicating that we want to insert a file using `file:` , after that we enter the name of the file (which is located in the scenario folder)

i.e. Firmino.jpg.

 <dropDown>

Tag of interaction with the select function

This tag is used for the select and deselect functions, with the ability to interact with a multiselect

  1. Select One Value

Selects one value from the list

   <dropDown comment="Select 'Country'" locatorId="registration.country">
            <oneValue type="select" by="text" value="Spain"/>
        </dropDown>
  2. Deselect One Value

Drops one value from the list

   <dropDown comment="Deselect 'Country'" locatorId="registration.country">
            <oneValue type="deselect" by="text" value="Spain"/>
        </dropDown>

Параметр by= "" - parameter offers a choice of interaction:

  • text
  • value
  • index
  3. Deselect All Values

Drops all values from the list

<dropDown comment="Deselect all values" locatorId="registration.country">
            <allValues type="deselect"/>
        </dropDown>

<navigate>

A tag that allows you to navigate through WEB pages using a URL

To use, you only need to add the path to the desired page, as by default your base URL will be used, which is specified in the configuration settings.

It has 3 commands:

  • to
  • back
  • reload

1. NavigateTo

Navigation to the page mentioned in path=""

 <navigate comment="Go to register account page" 
                  command="to" 
                  path="/registerAccount"/>  - the URL path
  2. NavigateBack

Return to the previous page of the scenario

<navigate comment="Go back to the previous page"
                  command="back"/>
  3. NavigateReload

Reloading the current scenario page

<navigate comment="Reload the current page"
                  command="reload"/>

<assert>

A tag that allows you to confirm that the previous action led to the desired result

  • Example:

You used the <navigate> tag with command="back" to return to the previous page

To confirm the execution of this function, the <assert> tag is used

As confirmation, create a locatorId for an element of the target page and pass it to the <assert> tag

 <assert comment="Verify that the transition to the previous page was successful" 
       locatorId="billing.billingPositiveAssert"  - the locator of an element on the target page
           attribute="id">    - the attribute to check ( type, autocomplete, name, placeholder )
           <content>
                   register_profile-photo_input    - the expected attribute value 
          </content>
  </assert>

<scrollTo>

A tag that allows you to scroll the page to a specific element

 <scrollTo comment="Scroll to element" 
                  locatorId="footer.registerButton"/>

<scroll>

Scroll Up or Down by pixel

  • value="" - the scroll distance, in pixels
    <scroll comment="Scroll Down" value="1024" direction="down" measure="pixel" type="page"/>

    <scroll comment="Scroll Up" value="976" direction="up" measure="pixel" type="page"/>

Scroll Up or Down by percent

  • value="" - has a maximum value of 100 (when scrolling as a percentage)
    <scroll comment="Scroll Down in percent" value="60" direction="down" measure="percent" type="page"/>

    <scroll comment="Scroll Up in percent" value="80" direction="up" measure="percent" type="page"/>

<javascript>

A tag that runs a script located in the 'javascript' folder

<javascript comment="Apply javascript function" file="function.js"/>

 <include>

A tag that allows you to run a ‘scenario inside a scenario’ (often used to bundle different scenarios)

Implemented by passing the path to the scenario, which is located in a specific folder

The beginning of the path always starts with the scenarios folder, since all folders with created scenarios must be stored in this folder

 <include comment="Add scenario for login to the system"
       scenario="/nameOfScenarioFolderWithSlash"/> - a path to the scenario you need to launch

 <wait>

A tag that pauses the scenario to allow an action that requires a wait to complete. Afterwards, the scenario continues at its normal pace

 <wait comment="Waiting for loading Dashboard page" 
      time="1" unit="seconds"/>  - There is a choice: `seconds` or `milliseconds`

 <clear>

A tag that allows you to clear the entered data in the fields

 <clear comment="Clear name field"  
   locatorId="updateProfile.name"/> 

 <closeTab>

A tag that allows you to close the current browser tab

 <closeTab comment="Check if closing 'second tab' works correctly"/>

 <repeat>

A tag that allows you to repeat any action used in the tags above

Used both inside and outside the WEB tag

   <repeat comment="Repeat action" times="5">  - the number of repetitions 
          <click comment="Click on 'Send button'"
            locatorId="locator.clickButton"/>
  </repeat>

 <hovers>

A tag that allows you to expand a dropdown list that supports the hover function

        <hovers comment="Open dropdown list with 'hover' function">
            <hover comment="Open drop down 'Novels' tab" locatorId="locator.novels"/>
        </hovers>

  <image>

A tag that allows you to take full-screen screenshots and compare them using <compareWithFullScreen>:

        <image comment="Add screenshot" file="page.png" highlightDifference="true">
            <compareWithFullScreen/>
        </image>

You can also use the tag to check how the image of a specific element is displayed, using <compareWith locator="image.picture">:

        <image comment="Compare image" file="picture.png" highlightDifference="true">
            <compareWith locator="image.picture"/>
        </image>

<switchToFrame>

A tag that allows you to switch to a frame (for example, one embedding an external API) on the page

switchToFrame - while the tag is open, the full global set of web tags can be used inside it

  • After closing the tag, you can continue to use locators on the main page
 <switchToFrame comment="Open api frame on the site"
                       locatorId="frame.page">

        <input comment="Add email the page"
               locatorId="frame.email"
               value="[email protected]"/>

               <clear comment="Clear email"
                      locatorId="frame.clear"
                      highlight="true"/>

                      <click comment="Click button"
                             locatorId="frame.button"/>

        </switchToFrame>

<hotKey>

A tag responsible for pressing individual keys or key combinations on the keyboard

hotKey - while the tag is open, the full global set of hotKey sub-tags can be used inside it

  • After closing the tag, you can continue to use locators on the page
  • Sub-tags are divided into those used with locators and those used without

Sub-tags used with locators:

  • copy - Allows you to copy the selected element
  • cut - Allows you to cut the selected element
  • paste - Allows you to paste the selected element
  • highlight - Allows you to highlight the selected element
<hotKey comment="Paste the password">
            <paste comment="Paste the password"
                   locatorId="hotKey.password"/>
        </hotKey>

And the same approach works without a locator:

  • tab - Allows you to move focus to the next element
  • space - Allows you to enter a space on the current line
  • backspace - Allows you to delete the previous character
  • escape - Allows you to press Escape
  • enter - Allows you to press Enter
 <hotKey comment="click enter">
            <enter comment="click enter"/>
        </hotKey>

Run mobileBrowser test

The mobileBrowser tag connects directly to a mobile emulator and runs tests in Chrome, navigating to the sites under test.

  • The structure of the script file is very simple, the same as a WEB or API test
  • Connects to any Android emulator
  • Works with any version of Chrome via chromedriver
  • All you need to do is open the mobileBrowser tag in the global config, and you can start writing the test
  • API, WEB and database validation can be used in one scenario

Step:1

  • Create a folder with the name of the test scenario inside the scenarios folder
  • Create a file scenario.xml inside of the folder

mobileBrowser.png


Step:2

  • Open locators folder
  • Create a mobileBrowser page file in the format: registration.xml
  • Inside the created file - open the <page> tag - fill in the data

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

mobileBrowserLocator.png

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • Open the tag <locators>
  • Create the necessary number of locators

Step:3

  • Open the created file scenario.xml
  • Open <scenario> tag
  • Fill in the tags described below

mobileBrowserScenario.png

 <description>
  • A description of the tested scenario

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<name>
  • The name of the tested scenario

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<tags>
  • Assigns a tag declared in global-config-file.xml ( to run scenarios by tag )

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

  • Open up mobileBrowser tag in scenario.xml and start writing your tested scenario
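Putting Steps 1-3 together, a scenario.xml skeleton might look like the sketch below. The locator IDs, values, and tag name are hypothetical; the exact set of available tags depends on your configuration:

```xml
<scenario>
    <description>Log in via the mobile browser</description>
    <name>mobileLogin</name>
    <tags>
        <tag>mobileFlow</tag>
    </tags>
    <mobileBrowser comment="Start mobile browser test">
        <navigate comment="Open login page" command="to" path="/login"/>
        <input comment="Enter email" locatorId="mobileBrowser.email" value="[email protected]"/>
        <click comment="Submit the form" locatorId="mobileBrowser.loginButton"/>
    </mobileBrowser>
</scenario>
```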

mobileBrowserTags.png


  • Set up your Docker and specify your local connection settings: from the emulator, the host machine is reachable at 10.0.2.2:{port}
  • In the settings you can also specify a different port to run web and mobile tests
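For example, if the site under test runs on port 8080 on the host machine, the base URL in the configuration could point at the emulator's host alias (the port here is illustrative):

```xml
<baseUrl>http://10.0.2.2:8080</baseUrl>
```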

dockerUrl.png

  • After connecting, you will be able to access the site specified in your settings
  • Here you can use the same set of locators as in the regular web version
  • Locators are used the same way as in WEB tests: id, class, xpath

mobileConnect.png

  • You can look at an example of our created test script

Test video mobile browser


Android Remote testing

In progress

iOS local testing

Native Configuration

mobileBrowser Configuration

iOS Remote testing

In progress



Websocket testing

Required config 🔧


To test websockets, you need to add a <websockets> block to the <integrations> section of the config. <integrations> is a command located in the global config file structure that is responsible for integration settings.

<integrations>
        <websockets>
            <api alias="TESTER" url="ws://localhost:8080/ws-app" enabled="true" stomp="true"/>
            <api alias="DISABLED" url="http://example" enabled="false" stomp="false"/>
        </websockets>
 </integrations>

When the command is opened, it takes the following arguments:

  • api alias="myAlias" - The alias of the websocket config that you added to the config

  • url="https://Link of the resource.com/" - Link of the resource you want to subscribe to

  • enabled="false/true" - Gives the ability to enable or disable the given config

  • stomp="false/true" - Specify true or false depending on whether the resource you are subscribing to uses the STOMP protocol



Standard websocket command in a scenario (without using the STOMP protocol)

<websocket> is the main (parent) command in which all websocket commands are executed. The disconnect argument specifies whether to close the connection after the main tag is closed (default disconnect="true"). The alias argument is the same as in the websocket integration config.

Inside the main <websocket>...</websocket> command you can use the following commands:

<receive comment="Check there are no other messages" count="1" timeoutMillis="3000">
   <message>[]</message> 
</receive> 
  • The receive command is used to check the messages received from the server. It has the following arguments:
  • comment="Your comment" - Used to comment your command.
  • count="1" - Indicates the number of first messages that you want to compare.
  • timeoutMillis="3000" - The maximum time to wait for a message, in milliseconds.
  • <message>[]</message> - The message you expect to get from the server, or a file containing that message: <file>expected_25.json</file>

<send comment="Send 'subscribe' message">
            <message>
                {
                "event" : "subscribe"            
               }
            </message>
        
       </send>
  • The send command is used to send a message to the server. It has the following arguments:
  • comment="Your comment" - Used to comment your command.
  • <message>[]</message> - The message to send to the server, or a file containing that message: expected_35.json

  • Sample script for testing websockets:
<websocket comment="Send and receive messages via websocket" alias="TESTER" disconnect="true">

        <receive comment="Check there are no other messages">
            <message>[]</message>
        </receive>

        <send comment="Send 'subscribe' message">
            <message>
                {
                "event" : "subscribe",
                "pair" : [ "XBT/EUR" ],
                "subscription" : { "name" : "ticker"}
                }
            </message>
        
       </send>
        
       <receive comment="Receive 'subscription response' message" count="1">
            <file>expected_25.json</file>
       </receive>
    </websocket> 



Scenario for a resource that uses the STOMP protocol

The main (parent) <websocket> command works the same as above: the disconnect argument specifies whether to close the connection after the main tag is closed (default disconnect="true"), and the alias argument matches the websocket integration config.

If the resource you want to test against uses the STOMP protocol, all commands inside the <websocket>...</websocket> command must be placed in the <stomp>...</stomp> subcommand. Inside it you can use the following commands:

 <subscribe comment="Subscribe to topic" topic="/topic/server"/>
  • The subscribe command is used to subscribe to a specific topic from which you want to receive messages. It has the following arguments:
  • comment="Your comment" - Used to comment your command.
  • topic=" " - Here you specify the topic from which you want to receive messages.
<receive comment="Receive 'ping response' message" topic="/topic/ping" count="1" timeoutMillis="100">
      <message>[{"value" : "ping message"}]</message>
 </receive>
  • The receive command is used to check the messages received from the server. It has the following arguments:
  • comment="Your comment" - Used to comment your command.
  • topic="/topic/ping" - Here you specify the topic from which you want to receive messages.
  • count="1" - Indicates the number of first messages that you want to compare.
  • timeoutMillis="3000" - The maximum time to wait for a message, in milliseconds.
  • <message>[]</message> - The message you expect to get from the server, or a file containing that message: <file>expected_28.json</file>
 <send comment="Send 'ping' message" endpoint="/app/ping">
          <message>ping message</message>
 </send>
  • The send command is used to send a message to the server. It has the following arguments:
  • comment="Your comment" - Used to comment your command.
  • endpoint="/app/ping" - Here you specify the endpoint where you want to send messages.
  • <message>[]</message> - The message to send to the server, or a file containing that message: expected_35.json
  • Sample script for testing websockets of a resource that uses the STOMP protocol:
  <websocket comment="Connect to stomp websocket api" alias="TESTER" disconnect="false">
        <stomp>

            <subscribe comment="Subscribe to topic" topic="/topic/server"/>

            <receive comment="Receive 'server periodic' messages" topic="/topic/server" count="3">
                <file>expected_3.json</file>
            </receive>

            <send comment="Send 'ping' message" endpoint="/app/ping">
                <message>ping message</message>
            </send>
            
        </stomp>
    </websocket>





Features

Scenario Collecting

  • Each test scenario has its own status, which is processed when assembling test scenarios to run

Scenario statuses:

<active="true">
<active="false">
<onlyThis="true">
<variations="name">

<active="true">
  • Status of an active scenario
  • Every new test scenario has active="true" status by default
  • Not required to be marked

scenario1.png


<active="false">
  • Status of an inactive scenario
  • Test scenario with status <active="false"> will not run during the assembly

activeFalse.png


<onlyThis="true">
  • Status of an active scenario
  • If the scenario status is <onlyThis="true"> - only this scenario will run regardless of the activity of other test scenarios
  • It is possible to assign this status to several test scenarios for selective launch

onlyThisTrue.png


<variations="">
  • Status of variations
  • <variations=""> is a status indicating that this scenario uses CSV variations
  • Assigned in addition to the above statuses:

variations.png
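Assuming the statuses are attributes on the <scenario> element, as the screenshots above suggest, a scenario header combining them might look like this (the variations file name is hypothetical):

```xml
<scenario active="true" onlyThis="false" variations="registrationData.csv">
    <!-- scenario steps -->
</scenario>
```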


Work description

When running test scenarios in COTT, Scenario Collector gathers all test scenarios located in the working directory and checks their statuses

Checking for status onlyThis="true"

  • When compiling test scenarios to run, if the status onlyThis="true" is specified, Scenario Runner will only run the scenarios that have this status and will ignore other active scenarios with the status <active="true">
  • onlyThis="true" - is the most independent scenario status

➖➖➖➖➖➖➖➖➖➖

Checking for status active="true"

  • When compiling test scenarios to run with this status, Scenario Runner will run all test scenarios that are in active="true"
  • If there is at least one scenario with onlyThis="true" in the test scenarios directory, all scenarios in the active="true" state will be ignored and the scenarios with onlyThis="true" will be run

➖➖➖➖➖➖➖➖➖➖

Checking for status active="false"

  • When compiling test scenarios for launch, Scenario Runner will not run scenarios with the status active="false" and will simply ignore them
  • The status active="false" can be assigned to any scenario you don't want to run; when running all test scenarios, every scenario with this status will be ignored

Error processing

Before checking scenario statuses, COTT performs a global validation of all test scenarios.

  • Basic validity checks:
  • Correct syntax of all files in the directory
  • Correct structure of each tag
  • Validity of locators and their correct transfer to the scenario
  • Validity of variations and their correct transfer to the scenario
  • Matching paths and names of transferred files
  • Presence of the files and folders required to run test scenarios

All of the above checks are processed and initialized before each test scenario run. If an error is detected, the scenarios will not be launched, and the corresponding error is output with the directory, path, file name, and the nature of the error.

Pros:

  • The user is always aware of the validity of their test scenarios and test data
  • Elimination of bugs on early stages
  • Test stability

Run scenario by <tag>

COTT can run scenarios by unique tags that you create and assign to specific scenarios

  • This feature is very useful when working with large volumes of test data, as using scenario triggering by tags gives you the ability to split your test scenarios into blocks, and easily switch between running them
  • Configuration example:
    <runScenariosByTag enable="true">
        <tag name="registrationFlow" enable="true"/>
        <tag name="loginFlow" enable="false"/>
        <tag name="createOrder" enable="true"/>
    </runScenariosByTag>

In this example, <runScenariosByTag enable="true"> means that tag run filtering is enabled and is ready to run on the specified tags

All test scenarios that have the registrationFlow or createOrder tag will be launched when Scenario Runner starts

  • Assigning a tag to a test scenario:

navigate2.png


Where <tags> - is assigning the tag for the test scenario

    <tags>
        <tag>registrationFlow</tag>
    </tags>

Oauth2 Authentication

In progress

Comparison

COTT has a function of comparison

This function implies comparing the expected result with the actual one after the step is completed

To compare test results, the following files are used:

  • expected - expected test result
  • actual - actual test result

The files are named expected_1.json, actual_1.json, etc., where the number corresponds to the step of the test scenario

  • The presence of an expected file is a mandatory parameter for http and postgres requests

The principle of operation on the example of postgres:

<postgres comment="Check successfully adding Product to shopping cart"
              alias="Shop" file="expected_11.json">
        <query>
            SELECT shp_cart_id, customer_id, shp_cart_code
            FROM shopping_cart
            WHERE merchant_id = 1
        </query>
    </postgres>
  • Steps:
  • Make a request specifying the expected file (with the scenario step number, in this case expected_11.json)
  • Create a file in the scenario folder with the name specified inside the postgres request (in this case, expected_11.json)
  • Leave the created expected_11.json empty
  • Run the test scenario
  • After running the test scenario and executing this query, comparison will automatically generate an actual file with the scenario step number, see the empty expected_11.json, and compare it with the query result received in actual_11.json. If the result of the request satisfies the user, they can transfer all data from the actual file to the expected file, for further comparison and successful completion of the test scenario
  • comparison - will generate the actual file only if the content of actual and expected does not match

Logs

COTT has informative and easy to read logs

  • Each tag has a unique display structure in the logs, due to the individual approach and visualization of tags in the logs
  • Uniqueness of logs:
  • Ease of perception
  • Table structure
  • Step by step analysis of the execution of each step of the scenario
  • Detailed analysis of the execution of each tag
  • Informative output of exceptions
  • Displaying the overall result of passing scenarios

WEB logs structure

  • An example of tags display:
<navigate>
<click>
<input>

web log's sturture.png


  • In these logs, we can see all the specific steps in passing our test scenarios, with a display of the transmitted and used data

  • In the <navigate> tag we see:

  • Comment - a unique parameter for each tag that describes the action of the step
  • Command type - a parameter indicating the command to be executed: to, back, or reload
  • URL - used navigation address
  • Execution time - a unique value for each tag that prints the execution time of each command

➖➖➖➖➖➖➖➖➖➖

  • In the <click> tag we see:
  • Comment - a unique parameter for each tag that describes the action of the step
  • Locator - passed WEB interaction element, with folder name, and unique element name
  • Execution time - a unique value for each tag that prints the execution time of each command

➖➖➖➖➖➖➖➖➖➖

  • In the <input> tag we see:
  • Comment - a unique parameter for each tag that describes the action of the step
  • Locator - passed WEB interaction element, with folder name, and unique element name
  • Value - output of the passed value
  • Execution time - a unique value for each tag that prints the execution time of each command

➖➖➖➖➖➖➖➖➖➖

  • This structure allows you to quickly find errors in test scenarios through clear, high-quality logs

HTTP logs structure

  • An example of displaying a <http> request:

httpLogs.png


  • In the <http> tag we see:
  • Comment - a unique parameter for each tag that describes the action of the step
  • Alias - the unique name of the API we are interacting with
  • Method - display of the http method used
  • Endpoint - used endpoint
  • Body - display of the transmitted request body
  • Status code - API response code
  • Execution time - a unique value for each tag that prints the execution time of each command

Structure of the DB logs

  • An example of displaying a <postgres> request:

postgresLog.png


  • In the <postgres> tag we see:
  • Comment - a unique parameter for each tag that describes the action of the step
  • Alias - unique alias of the database we are interacting with
  • Query - database queries
  • Execution time - a unique value for each tag that prints the execution time of each command

  • Demonstration of the logs


Data Driven Testing

Any modern software, including web-based applications, is tested for errors. The speed of identifying these errors depends not only on the tools, the number of testers and their experience, but also on the chosen approach.

This approach is data-driven testing. With it, test data is stored separately from test cases, for example, in a file or in a database. This separation logically simplifies the tests.

  • Data-Driven Testing is used in those projects where you need to test individual applications in multiple environments with large data sets and stable test cases. Typically, the following operations are performed during DDT:
    • extracting part of the test data from the storage;
    • data entry in the application form;
    • checking the results;
    • continue testing with the next set of inputs.

This method allows a QA specialist to prepare a set of test data at the early stages of development to test the functionality and logic of your project.

To create a dataset, the tester only needs to create a file with the required extension and put it in the data folder (storage for test data)

Migration to any of the databases is easy to do with the global tag <migrate>⤵️

    <migrate comment="Add data set for database" alias="postgres">
        <data>site_sample_init_data.sql</data>
    </migrate>

After the data has been uploaded to the selected database, the QA specialist can create effective test cases, perform data-based testing and use http, SQL queries to interact with the system. This method covers all sections of the code and the system as a whole with tests, allows you to effectively create integration tests and conduct high-quality regression of the product.

  • Data-Driven Testing is also great for WEB testing. It allows you to track down a large number of bugs at an early stage of development. It is especially effective for:
  • Detection of functionality bugs
  • Detection of unhandled exceptions when interacting with the interface
  • Detection of loss or distortion of data transmitted through interface elements

Files available for migration:

  • sql
  • csv
  • xlsx
  • partiql
  • bson

Getting Started

Let's write the first test script in COTT 🚀

Automation is easy

On this page, we'll take a look at how easy it is:

  • ⚙️ Set up a global configuration file
  • ▶️ Run scripts
  • 🔧 Create the first WEB script
  • ⛏ Create an API script
  • 🛠 Create merge script
  • 🔍 View script logs

Recommended development environment for COTT - IntelliJ IDEA


Set up a global configuration file

Step:1

  • Indicate the default path to the XSD schema using the tag

global_config.png ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖
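A common way to point an XML file at its XSD schema is the xsi:noNamespaceSchemaLocation attribute on the root element. The root element name and schema file name below are hypothetical; use the ones from your COTT distribution:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<global-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:noNamespaceSchemaLocation="global-config.xsd">
    <!-- configuration tags described in the steps below -->
</global-config>
```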

Step:2

  • Open a tag:
 <stopScenarioOnFailure>false</stopScenarioOnFailure> -  to manage Scenario Runner when an exception occurs

true (when running one or more test scenarios, if an exception is detected in one of them, the runner is stopped with an error output indicating the specific scenario and step)

false (when running one or more test scenarios, if an exception is detected, the runner is not stopped; all scenarios and steps are executed until the runner finishes, and exceptions are then reported for all scenarios that failed)

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Step:3

  • Open a tag:
<runScenariosByTag>

Name the tags that will be used to run scenarios by tag

  • Example:
   <runScenariosByTag enable="true">
       <tag name="positive_case" enable="true"/>
       <tag name="negative_case" enable="false"/>
   </runScenariosByTag>

Running scenarios by tags is not mandatory. You can turn this feature on and off using enable="true|false"

If enable="false" - Scenario Runner will work as usual

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Step:4

<web>
  • Open <web> tag to configure cross browser

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

You have the opportunity to choose flexible configuration settings for one or more types of browsers individually

For example, you can configure multiple versions and launch modes of the Chrome browser

The number of configurations for the browsers is unlimited

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Main tags for opening <web> tag:

<baseUrl>http://localhost:8080</baseUrl>
  • Inside of it, there should be the URL of the tested site
<browserSettings>
  • Performs basic cross browser settings

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Tags inside <browserSettings>:

<takeScreenshots> 
  • To enable or disable the screenshot mode
  • Has true | false flags

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<elementAutowait>
  • Interacts with such tags as:

  • click
  • input
  • assert
  • dropDown
  • navigate

  • Designed to wait for and get elements <id>, <class>, <xpath> from the tree
  • <elementAutowait> - waits until it finds the page element you need

Has a parameter seconds="5" - to assign a timeout
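A minimal <browserSettings> fragment combining the two settings above might look like this sketch (the element placement is assumed from the descriptions):

```xml
<browserSettings>
    <takeScreenshots>true</takeScreenshots>
    <elementAutowait seconds="5"/>
</browserSettings>
```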

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<browsers>

Opens up browser configuration settings:

  • chrome
  • opera
  • edge
  • firefox
  • safari

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

The structure of browser configuration:

  • Chrome Local Browser
          <chrome enable="false" maximizedBrowserWindow="false" headlessMode="false" browserWindowSize="1920x1080">
                <browserType>
                    <localBrowser driverVersion="102.0.5005.27"/>
                </browserType>
                    <chromeOptionsArguments>
                        <argument>--incognito</argument>
                    </chromeOptionsArguments>
          </chrome>
  • maximizedBrowserWindow="" - whether to maximize the browser window, true | false
  • headlessMode="" - built-in headless browser startup mode, true | false
  • browserWindowSize="1920x1080" - to set the browser size when maximizedBrowserWindow="false"
  • localBrowser driverVersion="" - to set a specific local browser driver version
  • <chromeOptionsArguments> - to set arguments

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • Edge in Docker Browser
                <edge enable="true" maximizedBrowserWindow="true" headlessMode="true" browserWindowSize="800x600">
                    <browserType>
                        <browserInDocker browserVersion="91.0.864.37" enableVNC="true">
                            <screenRecording enable="true" outputFolder="/Users/user/e2e-testing-scenarios"/>
                        </browserInDocker>
                    </browserType>
                    <edgeOptionsArguments>
                        <argument>--disable-extensions</argument>
                    </edgeOptionsArguments>
                </edge>
  • <browserInDocker browserVersion="" - to select a driver version to run in docker
  • enableVNC="" - allows you to connect to a remote desktop session of the dockerized browser. When using this option, two different technologies are used internally:
  • Virtual Network Computing (VNC) is a graphical desktop-sharing system; the VNC server runs in the browser container.
  • noVNC is an open-source VNC web client; its own noVNC Docker image is used to connect via noVNC.
  • <screenRecording enable="" - a mode for recording test scenarios that run in docker; has flags true | false and requires enableVNC="true"
  • outputFolder="" - to set a specific directory to store the test scenario recordings
  • <edgeOptionsArguments> - to set arguments

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Step:5

<integration>
  • Open up <integration> to configure integrations

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Configure several APIs

Configure several DBs

Configure services

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • Examples of the configurations

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • APIsIntegration
        <apis>
            <api alias="SHOPIZER" url="http://localhost:8080/"/>
        </apis>

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • PostgresIntegration
       <postgresIntegration>
            <postgres alias="SHOPIZER" enabled="true">
                <jdbcDriver>org.postgresql.Driver</jdbcDriver>
                <username>postgres</username>
                <password>password</password>
                <connectionUrl>jdbc:postgresql://localhost:5433/SHOPIZER</connectionUrl>
                <schema>salesmanager</schema>
                <hikari>
                    <connectionTimeout>45000</connectionTimeout>
                    <idleTimeout>60000</idleTimeout>
                    <maxLifetime>180000</maxLifetime>
                    <maximumPoolSize>50</maximumPoolSize>
                    <minimumIdle>5</minimumIdle>
                    <connectionInitSql>SELECT 1</connectionInitSql>
                    <connectionTestQuery>SELECT 1</connectionTestQuery>
                    <poolName>core-postgres-db-pool</poolName>
                    <autoCommit>true</autoCommit>
                </hikari>
            </postgres>
        </postgresIntegration>

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • SendgridIntegration
        <sendgridIntegration>
            <sendgrid alias="email" enabled="true">
                <apiUrl>http://localhost:8080/</apiUrl>
                <apiKey>apiKey</apiKey>
            </sendgrid>
        </sendgridIntegration>

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • RedisIntegrations
        <redisIntegration>
            <redis alias="redis" enabled="true">
                <host>localhost</host>
                <port>6379</port>
            </redis>

            <redis alias="redis_two" enabled="true">
                <host>localhost</host>
                <port>6360</port>
            </redis>
        </redisIntegration>

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

After creating the configuration file, open Edit Configuration in the Test Runner

runner.png

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Specify the path to the file with global configurations

edit_configurations.png


➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

Create WEB test-script

Step:1

  • Create a folder with the name of the test scenario inside the scenarios folder
  • Create a scenario.xml file inside that folder

webScript.png


Step:2

  • Open the locators folder
  • Create a file named after the WEB page, e.g. registration.xml
  • Inside the created file, open the <page> tag and fill in the data

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

scenario1.png

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • Open the tag <locators>
  • Create the necessary number of locators
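A locator file for the registration page might look like the sketch below. The locator names, attribute names, and XPath values are assumptions for illustration; check them against the schema your COTT version expects:

```xml
<page>
    <locators>
        <!-- Each locator maps a readable name to an element on the page -->
        <locator name="emailInput" value="//input[@id='email']"/>
        <locator name="passwordInput" value="//input[@id='password']"/>
        <locator name="submitButton" value="//button[@type='submit']"/>
    </locators>
</page>
```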

Step:3

  • Open the created scenario.xml file
  • Open the <scenario> tag
  • Fill in the tags from the dropdown list

tagsList.png

 <description>
  • A description of the tested scenario

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<name>
  • A name of the tested scenario

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

<tags>
  • Sets a tag defined in global-config-file.xml ( used to run scenarios by tag )
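Combining the three tags above, the header of a scenario.xml might start like this (the text values and the tag names inside <tags> are illustrative assumptions):

```xml
<scenario>
    <description>Registration of a new user via the WEB form</description>
    <name>user-registration</name>
    <tags>
        <tag>smoke</tag>
    </tags>
    <!-- WEB or http steps follow here -->
</scenario>
```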

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

  • Open the WEB tag in scenario.xml and start writing your test scenario

openWebTags.png


YouTube Instructions WEB

Instructions for working with the WEB test scenario

'To start, click on the running line'

⤵️

Typing SVG

⤴️


Create REST - API script

Steps:

  • In an existing scenario or a newly created one:
  • Make sure that the WEB tag is closed or absent
  • Open the http tag
  • Select a request method

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

http_scenario.png


  • Specify the endpoint and expected response code

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

indicateEndpoint2.png


  • Create an empty expected_file.json in the scenario folder
  • Reference expected_file.json in the http request
  • The number of expected files depends on the number of steps in the tested scenario

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

fullEndpoint.png


  • Select a transfer type for the request body
  • Open the <body> tag

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

bodyFromEndpoint.png

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

  • An example of transferring the request body using the <from> file parameter
  • Create request_file.json (in the scenario folder) to hold the request body
  • Put the request body into the created request_file.json
  • Set the request_file.json name inside <from file=""/>
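The steps above can be sketched as a single http step. The tag and attribute names below loosely follow the screenshots and are assumptions; verify them against your COTT version:

```xml
<http>
    <!-- POST request with the expected response and the body taken from a file -->
    <post endpoint="/api/v1/customers" responseCode="200">
        <response file="expected_file.json"/>
        <body>
            <from file="request_file.json"/>
        </body>
    </post>
</http>
```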

➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖ ➖

⤵️⤵️⤵️

requestEndpoint.png


YouTube Instructions API

Instructions for working with the API test scenario

'To start, click on the running line'

⤵️

Typing SVG

⤴️


Merge test (WEB/API)

More possibilities

WEB & API in one tested scenario

High coverage level

Merge mode

YouTube Instructions Merge Test

Instructions for working with the Merge test scenario

'To start, click on the running line'

⤵️

Typing SVG

⤴️



COTT + CI/CD

In simple terms, CI/CD (Continuous Integration / Continuous Delivery) is a technology for automating the testing and delivery of new modules of a project under development to interested parties (developers, analysts, quality engineers, end users, etc.).

COTT integrates easily with software development and testing tools that follow the CI/CD approach, such as:

Bitbucket

  • The environment lets you manage project repositories, document functionality and the results of improvements and tests, track errors, and work with the CI/CD pipeline

Docker

  • A containerization platform that lets you package the project together with its entire environment and dependencies into a container, simplifying automated deployment

Jenkins

  • Customize CI/CD processes for specific product development requirements
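As an illustration of COTT in a Jenkins pipeline, the CLI invocation from the installation section could be wrapped in a declarative stage like this. The paths and jar name come from the docs; the stage layout itself is a sketch, not a verified pipeline:

```groovy
pipeline {
    agent any
    stages {
        stage('Build COTT') {
            steps {
                // Build the tool and its dependency jar
                sh 'mvn clean install'
            }
        }
        stage('Run E2E tests') {
            steps {
                // Run the test scenarios kept in the repository
                sh 'java -jar target/cott-with-dependencies.jar --config=cott-config.xml --path=$WORKSPACE/test-resources'
            }
        }
    }
}
```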

pipeline.png

Together, this ensures that new product functionality is delivered promptly (in response to customer requests), typically within days or weeks, whereas the classical approach to developing client software could take a year. In addition, the development team receives a pool of code alternatives, which optimizes the cost of resources for solving the problem by automating the initial testing of the functionality. Parallel testing of the system's functional blocks enhances product quality: bottlenecks and critical issues are caught and handled early in the cycle.

With the help of COTT and CI/CD development, the Product Owner takes full control of the entire development and testing phase of the product. This allows the testing team to improve software quality, run continuous regression testing, and monitor software quality in real time with the ability to run parallel tests. Moreover, it allows developers to keep their code clean by running their branches on builds. Thus, developing perfect software becomes even easier.


  • Logs in Jenkins are more pleasant to read thanks to COTT's customizable logging

lognJenkins.png

Integrations out of the box

COTT supports a set of integrations out-of-the-box that you can use to configure your project. If you need a certain integration to test your software, it will not be difficult to add and use it in the future. As you can imagine, we can automate the testing for a given project structure.

"Out-of-the-Box Integrations:"

Integrations

  • Clickhouse
  • DynamoDB
  • Elasticsearch
  • Kafka
  • MongoDB
  • MySQL
  • Oracle
  • Postgres
  • RabbitMQ
  • Redis
  • AWS (S3, SES, SQS)
  • Sendgrid

More about integrations - Click here


Simple writing structure

The test scenario has a very clean and readable look due to the structure of tags and XML. By looking at this test scenario, any member of your team will be able to figure out what this scenario is testing, and what methods and data are used for this. That is because each tag that is used in COTT has a mandatory 'comment' field, which allows you to quickly understand what is happening.

simpleStructure.png


Test scenarios are living test cases

New members of your team will not need to spend a lot of time sorting through a large amount of test documentation; they can simply open the project in COTT and apply the test cases they see in practice.

Speed of learning

COTT is easy to learn thanks to our unique and uniform structure for writing test scenarios, as well as to declarative programming:

  • a programming paradigm that specifies what the solution to a problem should be: the expected result is described, rather than the way to obtain it

The transition from Manual QA to Automation QA will take your employees about 3 weeks, which will significantly speed up the process and improve the quality of testing. To develop autotests, QA specialists will not need to delve into the programming language the product is written in; they will have a set of scripts that do all the work for them.

With the help of COTT, you can build a team of automation engineers in your company who can easily handle testing functionality of any complexity.

A QA specialist will not need a bunch of additional testing tools. Instead, they only need to create a test scenario within which they can implement the necessary testing methods and approaches.

With COTT, we also provide you with technical documentation that will help speed up the training process for your specialists, and help new members of your team get through the training quickly.

Flexibility

One of the key differences from competing products is that COTT supports connecting modules as dependencies: if you need a specific one, it will not be a problem to add it and use it in the future. We take an individual approach to each client and will be happy to develop any feature that only you will have.

As we can see, by using our tool you will not be limited only to the functionality that is available in the box.

Full test coverage

Due to the possibilities and integrations that are available in COTT, the team that has mastered this tool will be able to cover all the functionality of the project with tests without any problems.

Developers can create unit tests, which check the correctness of individual modules of the program's source code and verify that the written code works.

After that, QA specialists can start writing integration tests that check how the system's modules interact with each other. All of this then moves into the regression testing stage, where the above actions are repeated with regular regression tests until software development is successfully completed.

This approach will ensure high test coverage of your product.


Reporting Tool 📊

System Stability Monitoring

Peculiarities:

  • Convenient dashboard with graphs of the results of passing test scenarios
  • Using the Reporting Tool locally and on the server
  • Step-by-step analysis of each step of the test script
  • Full stacktrace of each step
  • Access to view screenshots of each step

'To see how the Reporting Tool works, click on the ticker'

⤵️

Typing SVG

⤴️


Using the Reporting Tool locally and on the server

COTT gives you the ability to view the passing statistics of your test scenarios:

  • Locally
  • On the server

To set up Reporting Tool configurations:

  • Configure <report> in global.config.file

➖➖➖➖➖➖➖➖➖➖

reportConf.png

  • <extentReports projectName=""> - indicates the project name
  • <htmlReportGenerator enable=""/> - turns the local html report on/off ( true | false )
  • <klovServerReportGenerator enable=""> - turns the server report on/off ( true | false )
  • <mongoDB host="localhost" port="27017"/> - your host and port for generating the report on the server
  • <klovServer url="http://localhost:1010"/> - indicates the url of the server
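Putting the report elements above together, a <report> block might look like the sketch below. The nesting is an assumption based on the screenshot; verify it against your configuration file:

```xml
<report>
    <extentReports projectName="shopizer-e2e">
        <!-- Local html report -->
        <htmlReportGenerator enable="true"/>
        <!-- Server report via Klov, backed by MongoDB -->
        <klovServerReportGenerator enable="true">
            <mongoDB host="localhost" port="27017"/>
            <klovServer url="http://localhost:1010"/>
        </klovServerReportGenerator>
    </extentReports>
</report>
```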

Local report generation

  1. To run a local html report, make sure that <htmlReportGenerator enable="true"/>
  2. Run your test scenarios
  3. Open the report folder
  4. Open the generated report in the suggested browser:

browserReport.png

  • Dashboard html - Report 📈

➖➖➖➖➖➖➖➖➖➖

htmlDashboard.png


  • The report dashboard contains all the necessary information about the tests passed

  • Number of running tests
  • Time and launch date
  • Test results
  • Number of passed/failed steps
  • Log events
  • Timeline
  • Tags


  • Detailed report Exception
  • Exceptions on each step of the test scenario

➖➖➖➖➖➖➖➖➖➖

htmlStep.png



  • Ability to view every step of the test scenario
  • Opening the screenshots of each WEB step

➖➖➖➖➖➖➖➖➖➖

detailHtmlStep.png




Report generation on the server

  • To generate a report on the server, you must have a docker-compose-report file created and configured

  • Running a report

  1. To run a report on the server, make sure that <klovServerReportGenerator enable="true">
  2. Run your docker-compose-report.file
  3. Go to the specified host in global.config.file - inside the tag <klovServer url="http://localhost:1010"/>
  4. Run the test scenarios
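A minimal docker-compose-report file could look like the sketch below, assuming the publicly available Klov image and a MongoDB container. The image names, tags, and port mapping are assumptions; align them with your setup and with the <klovServer url="..."/> value in your configuration:

```yaml
version: "3"
services:
  mongo:
    image: mongo:4.4
    ports:
      - "27017:27017"
  klov:
    image: anshooarora/klov:latest
    environment:
      # Point Klov at the mongo service above
      - SPRING_DATA_MONGODB_HOST=mongo
    ports:
      # Expose Klov on the port referenced by <klovServer url="http://localhost:1010"/>
      - "1010:80"
    depends_on:
      - mongo
```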

Dashboard Server Report

➖➖➖➖➖➖➖➖➖➖

dashboardServerReport.png


The dashboard page contains information about:

  • Total number of runs
  • Result of the last run
  • Number of tests passed
  • Number of failed tests
  • A general overview of your runs in the form of a graph
  • Performance graph
  • Ability to sort all runs
  • Ability to search for a specific test scenario

  • Displaying all the runs:

➖➖➖➖➖➖➖➖➖➖

allRunServer.png


  • Ability to view every step of the test scenario

➖➖➖➖➖➖➖➖➖➖

StepServerReport.png


  • Detailed display using 'Comparison' functionality

➖➖➖➖➖➖➖➖➖➖

comparisonServer.png


Ability to view all the screenshots of each WEB step

➖➖➖➖➖➖➖➖➖➖

screenServerReport.png


  • Full stackTrace

➖➖➖➖➖➖➖➖➖➖

stackTraceServer.png


Settings

Welcome to configuration

Detailed configuration settings are in the running line below 🤝


'Click here ⬇️'

⤵️

Typing SVG

⤴️
