Home - abeedal/Abeedal GitHub Wiki
Welcome to the Abeedal wiki!
Microservices
Microservices are a software architecture pattern where an application is built as a collection of small, independently deployable services that are focused on specific business capabilities. Each microservice runs in its own process and communicates with other microservices using lightweight protocols such as HTTP or message queues. Microservices are designed to be loosely coupled and highly scalable, allowing for easier development, deployment, and maintenance of complex software systems.
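As a minimal sketch of the idea (the service name, port handling and payload below are illustrative, not from this wiki): one small process exposes a single business capability over plain HTTP, and another process, or a test, consumes it with a lightweight request.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy "inventory" microservice: one small, independently deployable
# process exposing a single business capability over HTTP.
class InventoryHandler(BaseHTTPRequestHandler):
    STOCK = {"widget": 3}  # hypothetical data

    def do_GET(self):
        if self.path.startswith("/stock/"):
            item = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"item": item, "qty": self.STOCK.get(item, 0)})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

def serve():
    # Port 0 lets the OS pick a free port; the server runs in its own thread,
    # standing in for "runs in its own process" in this single-file sketch.
    server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Another service (or a test) consumes it over the same lightweight protocol:
def get_quantity(port, item):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/stock/{item}") as r:
        return json.loads(r.read())["qty"]
```

In a real system each service would be a separate deployable with its own lifecycle; the loose coupling comes from the two sides agreeing only on the HTTP contract, not on shared code.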
Spotlight Sports Group is a media company that focuses on racing, football and sport, with a particular focus on content that informs bettors.
Responsibilities

- Define and execute manual testing within one or more projects
- Defect and issue tracking management with tools such as JIRA
- Follow and suggest improvements to best practices and processes defined within the QA team
- Attend daily stand-ups and communicate progress and status to team and QA
- Identify tests and product areas that are suitable for automation
- Assist in designing, implementing and executing automation tests (JavaScript) for mobile & web platforms
- Integrating test automation into the CI/CD pipeline
- Investigating and helping resolve manual & automation testing challenges
- Work with the QA lead and senior team members to improve manual & automated testing coverage
- Knowledge / experience with automation frameworks (Java, JavaScript)
- Excellent communication skills, verbal & written
- Good understanding of manual testing approaches (e.g. exploratory testing)
- Working knowledge of test automation frameworks (Cypress, Cucumber, Selenium, Appium)
- Good understanding of the standard web stack (HTML, CSS, JavaScript)
- Working knowledge of BDD tools, approaches and structures (Cucumber, Gherkin)
- Good understanding of mobile application testing (Android, iOS, device emulation)
- Experience of working in an Agile development environment (Scrum, Kanban)
- Self-motivated team player with a strong desire to learn new skills and grow, whilst also being able to self-manage
QA Tester – Role Description
- Can combine the mindset of a tester with both the desire and ability to put those tests into code for full automation, while being happy to perform a degree of manual, particularly exploratory, testing.
- Experience in using web testing frameworks, for example Cypress.
- Can describe the benefits of automating testing throughout the tech stack, and at multiple points in the path to production, and have experience of the same. For example, automated API testing.
- Familiar with distributed version control systems like Git and can describe the branching strategies you have used on previous and current projects.
- Can give examples of what you consider to be good practices in automated testing, and how you have tried to implement those practices in previous roles.
- Happy working in an Agile delivery environment, interacting directly with other team members and stakeholders on a daily basis.
- Can describe how we could move from the test ice-cream cone anti-pattern to the test pyramid.
```hcl
expose_internally = true
traefik_enabled   = true

traefik_additional_routers = [
  {
    name    = "pickswise_legacy_api"
    rule    = "Host(`pickswise-beffe.traefik.$${environment}.b2b.rp-cloudinfra.com`) && PathPrefix(`/wp-json/`)"
    service = "pickswise_wordpress_api@traefikee"
  },
  {
    name = "predictions_endpoint_override"
    rule = "Host(`pickswise-beffe.traefik.$${environment}.b2b.rp-cloudinfra.com`) && Path(`/wp-json/pw/v1/predictions{slash:[/]?}`)"
  },
]
```
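The intent of the second router above is to override the generic `/wp-json/` prefix route for the exact predictions path: a `Path()` matcher matches the full path exactly (the `{slash:[/]?}` placeholder allows an optional trailing slash), while `PathPrefix()` matches anything under the prefix. As a rough illustration of that matching logic only, not Traefik's actual engine:

```python
import re

# Toy sketch of the two router rules (NOT Traefik's real implementation):
# the exact Path() rule wins over the broader PathPrefix() rule.
PREDICTIONS = re.compile(r"^/wp-json/pw/v1/predictions/?$")

def route(path):
    if PREDICTIONS.match(path):
        return "predictions_endpoint_override"
    if path.startswith("/wp-json/"):
        return "pickswise_legacy_api"
    return None  # no router matches
```

So a request to `/wp-json/pw/v1/predictions` hits the override, while any other `/wp-json/...` path falls through to the legacy WordPress API router.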
Step 1: Create a GitHub repository

Step 2: Clone the repository

GitHub Token
ghp_************ (personal access token redacted)
https://github.com/abeedal/postman-newman-example.git
Push an existing repository from the command line:

```shell
git remote add origin https://github.com/abeedal/Example1.git
git branch -M main
git push -u origin main
```

Or create a new repository on the command line:

```shell
echo "# Example1" >> README.md
git init
git add README.md
git commit -m "first commit"
git branch -M main
git remote add origin https://github.com/abeedal/Example1.git
git push -u origin main
```

Stage and commit subsequent changes:

```shell
git add .
git commit -m "test"
```
Glide - Managing Articles and Collections
Glide CMS URL: https://dev-pub.nonprod-racingpost.gcpp.io/dashboard

Articles

- Creating a new Article:
- Select "Article" in the Main menu
- Select "Write" 
- Select the type of Article which we want to create (most of the time it will be Standard) 
- On this screen we must fill in the information for the Article. The Headline (field 1) is required and is visible when the article is opened. The Promo Title and Promo Subtitle (field 2) are the fields displayed on the News Index screen along with the selected image (field 3). We can also add additional information such as an author, an additional message, etc.
- When all changes are done, we must select the save icon button in the bottom-right corner (field 4)
- The status of our Article is now "Draft"; we must change it to "Ready" and select the save icon button again (the save icon button becomes active when changes have been made)
- The next step is to navigate to the "Produce" tab (field 1), where we have to add "Taxonomies" (field 2) and save (field 3). Our Article is now ready, but to be available for adding to a collection we must publish it (field 4).
- NOTE: An article is not visible anywhere until it is added to a collection
Collections
- Creating a new Collection:
- Select "Collections" in the Main menu
- Select "Add new" 
- Select the type of Collection that we want to create
- On this screen we must fill in the main information for the Collection. The Headline is required.
- The next step is to add the items we want to display in this Collection via Add new (field 1). A pop-up window is displayed where we select what to add: an Article or a System widget (field 2). Another pop-up window is then presented with a search panel that offers suggestions as we type (field 3). When we find our article, select it and press Save. We can also add additional information such as an author, a picture, etc.
- We also have to set the number of articles at the bottom of the screen (field 4)
- When all changes are done, we must select the save icon button in the bottom-right corner (field 5). The last step is to Publish our new Collection from the Produce tab; it will then be visible.
How to test Firebase integration/events through DebugView
NOTE: Keep in mind that when DebugView is "ON", the events logged won't be counted in the "Events" or "Dashboard" reports.

What is DebugView? It is a Firebase option for testing events live. Events dispatched from a device are usually not logged immediately; DebugView is for testing only, and it forces the device to log events with a very short delay. It works for Android and iOS, which means it also works with React Native.

iOS: You need to configure the scheme that is used when the build starts. Add '-FIRDebugEnabled' as an argument, then open the Firebase Console, go to the DebugView tab, and rebuild the app. NOTE: This works on the Simulator. NOTE: If something is wrong and you are not seeing the events in DebugView, then something is wrong with your configuration. Check that you are using the 'GoogleService-Info.plist' with all the schemes that you have in the project.

iOS AppCenter: To test the app on a real device you need a build with a pre-configured scheme containing the argument above. You can run a build by editing the build configuration in AppCenter: try using appcenter_uat.

- Ask a developer to configure a scheme if no scheme is configured already.

Android: To run in debug mode it is recommended to deploy to a real device directly from a PC, because you will need a bridge connection with 'adb'.

- Open the Firebase Console - DebugView tab
- Deploy from the PC with: react-native run-android --variant='ProdDebug'
- Run the following command from the same terminal: adb shell setprop debug.firebase.analytics.app ent.racingpost.janus
- NOTE: This debug behaviour persists until you run: adb shell setprop debug.firebase.analytics.app .none. (to stop debugging)
- NOTE: If something is wrong, check 'adb' with: adb devices
How to send notifications
https://eu-dashboard.swrve.com/

Swrve is an integrated platform supporting every aspect of the mobile marketing experience: in-app communications, push campaigns, and A/B testing. We use Swrve to send notifications and keep a history of events.
By following the steps below you'll be able to send test push notifications to an Android device.

1. Log in at https://eu-dashboard.swrve.com/ with the racingpost account and select Google - Android Sandbox.
2. Check that the Google Cloud Messaging Server Key is valid, i.e. that the Server Key in the box matches the key configured for the app (value redacted).
3. Add the device to which you will send notifications: Settings → QA devices → Add QA Device.

Note: You have to open the app (RacingPost) on your device.

3.1 Confirm the message that you have opened the app on your device.
3.2 Add the name of your device.
There are two ways to send notifications.
- First way: Settings → Integration Settings → Push Notifications → Select QA device → Send Test Push
- Second way: Campaigns → select one of the created campaigns, find your device and click Test Push
- If a notification is sent successfully, a green ✓ appears next to Test Push.

Note: If you don't want to use the created campaigns, you can create a new one.
DB Connections

In order to connect to the RP DB, you'll need the correct credentials. There is one config.xml for each API that you'll be using. Open the file and find the element:

In DBeaver, create a new connection with the same credentials as in the config.xml:
Cost of defects
There are various empirical models that demonstrate that the later in the project life cycle a defect is found, the more expensive it is to resolve.
The most common is the standard curve, shown below, which implies that the cost of fixing a defect rises steadily through each phase.

There is also "Schach's summary", which assigns very distinct and specific values to each phase.

While these approaches to defect cost analysis often hold true, they are generic and may not represent the true costs within a project / business.

Both of these approaches can quickly be made more accurate by fully considering the roles involved in resolving a defect at each stage/phase of a delivery.

For example, the table below shows the resources required to remove a single defect at each stage/phase of a standard delivery. It also shows an implied specific unit of "COST" for each resource.

Using this table we can calculate the true saving / cost-effectiveness of finding a defect earlier in the process.
| Resource | Assumed Unit "COST" per defect | Design | Requirements/Use Case Definition | Refinement sessions | Unit Test / CI | Automation Testing | System Testing | Performance analysis | User Acceptance Testing | Production/Live System |
|---|---|---|---|---|---|---|---|---|---|---|
| Designer | 10 | X | X | X | X | X | X | X | X | X |
| Product Owner | 10 | X | X | X | X | X | X | X | X | X |
| Business Analyst | 10 | | X | X | X | X | X | X | X | X |
| Developer | 10 | | | X | X | X | X | X | X | X |
| Automated Tests | 5 | | | | | X | X | | X | X |
| Manual Tester | 10 | | | | | | X | | X | X |
| Performance tester | 10 | | | | | | | X | X | X |
| Dev Ops | 10 | | | | | | | X | X | X |
| UA Tester (Business User/SME) | 10 | | | | | | | | X | X |
| Customer Service | 10 | | | | | | | | | X |
| TOTAL | | 20 | 30 | 40 | 40 | 45 | 55 | 60 | 85 | 95 |
| No. of defects found in Prod | 1 | | | | | | | | | |
| Cumulative total unit "cost" per fix × defects in live | | 20 | 30 | 40 | 40 | 45 | 55 | 60 | 85 | 95 |
| Assumed unit saving per defect per phase | | 75 | 65 | 55 | 55 | 50 | 40 | 35 | 10 | 0 |
| Potential % saving | | 79% | 68% | 58% | 58% | 53% | 42% | 37% | 11% | 0% |
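The saving rows follow directly from the cumulative unit costs: the saving for a phase is the production-phase cost minus the cost at that phase, and the percentage is relative to the production cost. A small sketch of the arithmetic, using the values from the table:

```python
# Cumulative unit "cost" to fix a defect found at each phase (from the table).
phases = ["Design", "Requirements", "Refinement", "Unit Test/CI",
          "Automation", "System", "Performance", "UAT", "Production"]
cost = [20, 30, 40, 40, 45, 55, 60, 85, 95]
prod_cost = cost[-1]

# Saving per defect = production cost minus the cost at the phase where the
# defect was actually found; % saving is expressed relative to production cost.
saving = [prod_cost - c for c in cost]
pct_saving = [round(100 * s / prod_cost) for s in saving]

for name, s, p in zip(phases, saving, pct_saving):
    print(f"{name}: save {s} units ({p}%)")
```

Multiplying each figure by the expected number of production defects gives the project-level saving.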
A template to enable us to calculate project/business-specific costs can be found here:
Defect Cost analysis - Template.xlsx
Assumptions
- Most resources cost the same number of units per defect (in this case 10 units), except automation runs, which cost only 5, as less manual interaction means less resource
- Units are not solely monetary (i.e. they may be units of effort)
- All defects can be found at any stage (i.e. this does not account for defects introduced by defect fixes)
- Only the resources listed are expected to be involved in fixes; this does not account for extra resources from unlisted or unknown business areas
- All previous resources involved in delivering the system to a phase need to be involved in order to fix a defect identified at that phase
Affiliate Product Team Testing - Lighthouse Tests
Using the Node CLI
Created by Alex Chircu (Unlicensed), Aug 07

The Node CLI provides the most flexibility in how Lighthouse runs can be configured and reported. Users who want more advanced usage, or want to run Lighthouse in an automated fashion, should use the Node CLI.
Lighthouse requires Node 10 LTS (10.13) or later.
Installation:

```shell
npm install -g lighthouse
```

Run it:

```shell
lighthouse https://airhorner.com/
```
By default, Lighthouse writes the report to an HTML file. You can control the output format by passing flags.
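For example, the `--output` and `--output-path` flags select the report format and destination (check `lighthouse --help` for the full list). A small sketch of a helper that assembles such a command line, so runs can be scripted; the function name and defaults here are illustrative:

```python
import subprocess

def lighthouse_cmd(url, fmt="json", out=None, chrome_flags=None):
    """Build a Lighthouse CLI invocation; fmt may be html, json or csv."""
    cmd = ["lighthouse", url, "--output", fmt]
    if out:
        cmd += ["--output-path", out]
    if chrome_flags:
        cmd += [f"--chrome-flags={chrome_flags}"]
    return cmd

# Running it requires `npm install -g lighthouse` and Chrome on the machine:
# subprocess.run(lighthouse_cmd("https://uat-beta.pickswise.com/",
#                               fmt="json", out="report.json"))
```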
Develop
Read on for the basics of hacking on Lighthouse. Also, see Contributing for detailed information.
Setup
```shell
git clone https://github.com/GoogleChrome/lighthouse
cd lighthouse
yarn
yarn build-all
```

Run:

```shell
node lighthouse-cli http://example.com
# append --chrome-flags="--no-sandbox --headless --disable-gpu" if you run into problems connecting to Chrome
```
Running the Lighthouse tests like this is a bit more convenient than running them from the dev tools or from the browser.
Running it in Git Bash, from the lighthouse directory:

```shell
node lighthouse-cli https://uat-beta.pickswise.com/
```
The report is usually an HTML file like the ones below:
News page nba on UAT PW:
uat-beta.pickswise.com_2020-08-06_14-27-41.report.html

Home page on UAT PW:

uat-beta.pickswise.com_2020-08-06_14-32-00.report.html
Test Cycle creation and Naming

Once the Test Cycle folder structure and naming is in place, we need to start creating Test Cycles so we can execute tests.

New Story/Task Test Cycle creation
Test Cycles for new stories will live inside the Sprint/Release folders and contain the test cases we wish to execute. We have a number of options available to us on how we structure cycles, we can:
- Create a cycle per story or ticket. This gives us the ability to create cycles beforehand or, if necessary, as stories are released to QA during the sprint. It ensures each cycle maps to a specific deliverable within the sprint and can easily be cloned to later sprints should the relevant deliverable slip across sprints/releases. This does mean that we could have a large number of cycles in a release, but it allows us to manage each story individually and enables us to report on progress at a granular level should we need to.
- Create a cycle for each functional/application area (feature) affected. This reduces the number of cycles created. It is best done at the start of the sprint, when we know what stories are planned to be delivered, but it can be difficult to manage should a story not be completed by the end of the sprint/release, as it can be complicated to pick apart the tests that need to move to the next sprint when a ticket is not completed.
- Create a single cycle to cover all stories in a sprint. This is best done at the start of a sprint and is best suited to detailed reporting over a whole sprint/release (actual execution vs planned being the best metric it supports).

Option one is the preferred approach, but all three are viable depending on the delivery approach of the project. It is also recommended that test cycles are created at the start of the sprint, so we can better report on planned vs actual, clearly see what is slipping, and clearly understand how much testing is needed as the sprint/release nears its end.

TODO: UPDATE DOC TO COVER ALL CREATION OPTIONS

Option 1 - Cycle per story/ticket
Test Cycle Naming Convention
Once the folder for the current / upcoming sprint is created, we add the test cycles. As this option focuses on a cycle per story, it makes sense that the test cycle is named the same as the story/task it will contain tests for, and includes the Jira ticket reference. Only the tests related to the story will be included in this cycle, e.g.

Option 2: TBD
Test Cycle Naming Convention
Once the folder for the current / upcoming sprint is created, we add the test cycles.

Option 3: TBD
Test Cycle Naming Convention
Once the folder for the current / upcoming sprint is created then we add the test cycles.
Janus - Performance Testing
PlanIT Workshop planning
What do we think PlanIT could offer us?
- Large scale end to end testing on a live size system.
- Multi-device testing on a wide range of client devices.
- Expert insight: telling us which areas they think could be bottlenecks, any tech choices that may have performance/legal/support impacts, and potential scalability issues.

PlanIT Request
TODO: PlanIT have asked for some technical information about our app: what it is, and what tech and architecture we are using. This will probably involve sharing the following with an explanation:
Web and Mobile Application Architecture | Component diagram

PlanIT asked of us: "As a reminder, this 1-hour workshop is to help Planit understand the architecture of the Racing Post system and the scope of the performance testing requirement, so that we can provide an estimate of the effort and costs. We will need representatives attending who can cover questions we may have of the following roles:
- Business Analyst
- Product Owner
- Test Lead
- Environments Manager
- Project Manager
- Systems Architect
- Database team representative
- API team representative
- Application Performance Monitoring team representative

We will need to investigate the following topics concerning the system to be tested:
- Details of the application architecture and environment. It would be ideal if we could see a solution diagram prior to the workshop
- What plans have been made for test data?
- Are there any existing performance related issues or pain points that we need to consider?
- Do any SLAs exist for the solution, both internal to RP and with the 3rd parties?
- Is there any Application Performance Monitoring (APM) in production and test?
- Does RP have any existing load farm for driving performance testing?”
Exploratory testing in Zephyr
This is the step-by-step process for adding Test Charters to Zephyr Scale
Upon completion, every User Story is to have Exploratory testing included. The extent of the Exploratory testing varies from feature to feature, e.g. stories that introduce minor changes, like a new background colour or just another string displayed, are not suitable for extensive Exploratory testing. On the other hand, stories introducing/amending features that affect multiple screens/pages require extensive Exploratory testing on top of the User Acceptance tests that are performed against the Acceptance Criteria of the user story. Test case names will follow the naming convention from:
Confluence Space Structure. Two folders are created, one for the APP team and one for the Web team. For the Mission of the Test Charter, we will use the Objective field. Test Charters are to be added as test cases using the Plain text type for the Test Script. All other necessary fields can be found in https://racingpost.atlassian.net/projects/JP?selectedItem=com.atlassian.plugins.atlassian-connect-plugin:com.kanoah.test-manager__main-project-page#!/testCase/JP-T1156 and, just by cloning this test, the template is ready to be worked on. For more information about the actual execution of the Test Charters refer to
Generating feature files
Before the implementation of the Zephyr Scale test management tool in Jira, all A/C were created as tests and added to a feature file within the automation framework. These tests were then tagged as technical debt (INT_TODO), and the team then created the appropriate code (step definitions) for the tests to execute, tagging the scenarios accordingly as per the
Automation Tagging Strategy.
This meant that there were a lot of scenarios that the automation framework needed to parse but never run.
With the implementation of Zephyr scale, we will be able to refine our process for creating feature files and better manage the scenarios that are included in them so that there will be no manual or technical debt scenarios in the framework.
Identifying Technical debt
There will now be 2 clear forms of technical debt going forward
- Unrecognised technical debt: these are test scenarios that are still in Draft / Ready status that have yet to be evaluated for suitability for automation, see the Reviewing test cases for automation process to see how these will be managed
- Recognised technical debt: these are test scenarios that have been through the review process and now carry the status of 'Automation Candidate'.

This process purely focuses on how we manage recognised technical debt and generate the appropriate feature files to enable automated test execution within the test framework.

Exporting scenarios for automation
Within Zephyr scale test scenarios will still be grouped by feature as per the
Test case folder structure and naming conventions. This is designed to fully replicate the structure of the automation project code (repo) base, so that we can easily map automated tests back to the relevant feature/scenario in Zephyr Scale. Once we have identified a feature in Zephyr Scale that has one or more scenarios falling into recognised technical debt, and we are ready to begin automating those scenarios, we need to export those cases into a feature file and add it to our repo for development; this is something we can do directly from Zephyr Scale.

Steps to export scenarios as feature files
- Select the Feature folder in the test structure to see the related tests 
- Select the Filter option to show 
- Select Add Criteria 
- Select Status from the Add Criteria drop down 
- In the newly added criteria , select ‘Automation candidate’ 
- This will filter the displayed list to show only test scenarios with a status of Automation Candidate.
- At this point you can select one or more scenarios, which will enable some extra actions on the view 
- Select More → Export feature files (BDD - Gherkin)
- This will download a zip file that contains a .feature file for each selected test.
- Open the relevant automation repo and create a new .feature file that matches the name of the feature folder in Zephyr scale
- Add a Feature: header in the new .feature file in the repo and add the scenario description; this should be in the Objective field of any of the exported tests
- Copy and paste the scenarios from each of the feature files in the zip into the new .feature file in the repo, making sure to exclude the pre-populated Feature header and to include the @testcase-ID tag
- Save the .feature file and you are ready to begin automation

Aligning scenarios in Scale with Automation
There are a number of tasks that we will need to complete in order to ensure that the scenarios in Scale match those in the framework.

Test Step updates
It is often the case that the steps of a scenario will change from when they are created in Scale to what is finally used in automation, to allow for common language use or to make automation easier to complete. When this occurs you must update the original scenario in Scale. To do this we do the following:
- Open the original test in Scale
- Select the New version Button 
- Confirm new version in the pop up dialog 
- Select the test script Tab and update the BDD Gherkin
- Click Save

Add new test cases
Often when completing automation we find that we can add extra tests that will increase coverage and help us better judge the quality of the application under test. In this case we must ensure that the new test case is added to the appropriate feature folder and is linked to the appropriate story. There are two ways this can be done:
- Manually: follow the steps here: New test case using the form
- On execution: When tests are executed via Jenkins we utilise the zephyr scale plugin which can be configured to add new tests to the project when they are run as part of an automated test cycle
There is a known issue with wdio and cucumber where the test output is not correctly formatted and therefore not parsed as expected, so right now we are unable to update/record tests in Zephyr Scale via method 2.

Update the scenario status
Once a test has been automated, reviewed, and is successfully executing in an automation suite, we must update the scenario status to 'Automated' within Zephyr Scale. This enables us to calculate our automation coverage and measure it against manual coverage and technical debt. To update the status of a test in Zephyr Scale:
- Open the test record (ensuring you are looking at the latest version)
- Select the Details Tab
- Select the Status drop-down (it will be showing as Automation Candidate)
- Select Automated
- Save the test
Test Cycles
Test Cycles are how we manage test runs in Zephyr Scale. They contain a collection of one or more tests and can be made up of multiple test executions. Test Cycles are used in two ways:

- Manual testing
- Automation

The information contained within a test cycle can differ depending on which way the cycle is used.

- Quality Assurance 09 - Zephyr Scale Test Cycles
Test Cycle folder structure and Naming
Test cycles are managed in their own separate area of Zephyr Scale; this can be accessed by selecting the 'Test Cycles' link at the top of the page.

Base Folder Structure
Test Cycles can be managed in a folder structure in the same manner as Test Cases, and as Scale is set up individually for each Jira project, it is vital that we define a common base structure for each area. When setting up a project for test execution we need to ensure that we can easily identify and report against specific sprints, so we understand what testing has been executed. With this in mind, we should use the following simple structure within the Jira project for Test Cycles.
- Deliverable Product
  - Automated Cycles
  - Sprints/Releases
    - Sprint/Release n
    - Sprint/Release n+1

Test Cycle Folder Naming Convention
Deliverable Product level: This is the base folder for a specific deliverable that identifies what the folders and cycles below it are relevant to. Examples:
- Janus App
- Janus Web
- Pickswise App
- Pickswise Web
- My Racing
- Free Super Tips

Note: Separating test cycles into the correct deliverable is a must when a Jira project contains more than one deliverable product (e.g. Janus & Affiliates).

Automated Cycles: This folder needs to exist so that, as we grow our automated testing capabilities, we have a catch-all folder for each product deliverable containing the relevant automated test results and executions.

Sprints/Releases parent level: Within each project there will be a containing folder called
- Sprints for Scrum based projects
- Releases for Kanban based projects

Individual sprint/release level: These folders are where we will create the relevant test cycles for a specific sprint or release. The folder will be named for the sprint/release it relates to. Example:
- Sprint 73
- Sprint 74
- Release 1.2
- Release API v4

Test Cycle Level: Test Cycles will not be folders but the actual cycles that need to be executed. These need to be sensibly named to clearly identify what is being tested, and naming may differ considerably from project to project. Naming conventions for these are covered in the next section:
- Quality Assurance 09 - Zephyr Scale Test Cycles
Test Cycle creation and Naming
Once the Test Cycle folder structure and naming is in place, we need to start creating Test Cycles so we can execute tests.

New Story/Task Test Cycle creation
Test Cycles for new stories will live inside the Sprint/Release folders and contain the test cases we wish to execute. We have a number of options for how we structure cycles; we can:
- Create a cycle per story or ticket. This gives us the ability to create cycles beforehand or, if necessary, as stories are released to QA during the sprint. It ensures each cycle maps to a specific deliverable within the sprint and can easily be cloned to later sprints should the relevant deliverable slip across sprints/releases. It does mean we could have a large number of cycles in a release, but it allows us to manage each story individually and enables us to report on progress at a granular level should we need to.
- Create a cycle for each functional/application area (feature) affected. This reduces the number of cycles created and is best done at the start of the sprint, when we know which stories are planned to be delivered. It can be difficult to manage should a story not be completed by the end of the sprint/release, as it is complicated to pick apart the tests that need to move to the next sprint when a ticket is not completed.
- Create a single cycle to cover all stories in a sprint. This is best done at the start of a sprint and is best suited to detailed reporting over a whole sprint/release (actual vs planned execution being the best metric it supports).

Option one is the preferred approach, but all three are viable depending on the delivery approach of the project. It is also recommended that test cycles are created at the start of the sprint so we can better report on planned vs actual, clearly see what is slipping, and understand how much testing is needed as the sprint/release nears its end.

TODO: UPDATE DOC TO COVER ALL CREATION OPTIONS

Option 1 - Cycle per story/ticket
Test Cycle Naming Convention
Once the folder for the current/upcoming sprint is created, we add the test cycles. As this option focuses on a cycle per story, it makes sense for the test cycle to be named the same as the story/task it will contain tests for, including the Jira ticket reference. Only the tests related to the story will be included in this cycle.

Option 2: TBD
Test Cycle Naming Convention
Once the folder for the current / upcoming sprint is created then we add the test cycles. Option 3: TBD
Test Cycle Naming Convention
Once the folder for the current / upcoming sprint is created then we add the test cycles.
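As an aside, the option 1 convention (a cycle named after its story, including the Jira key) is simple enough to script if cycles are ever created via automation. The helper below is purely illustrative; the 'KEY - summary' pattern and the function name are assumptions, not Zephyr Scale requirements.

```python
def build_cycle_name(ticket_key: str, summary: str) -> str:
    """Derive a per-story test cycle name from its Jira ticket.

    The 'KEY - summary' pattern is an illustrative convention,
    not a Zephyr Scale requirement.
    """
    return f"{ticket_key} - {summary.strip()}"

# Hypothetical story from a sprint backlog:
print(build_cycle_name("JP-1234", "Add selection to betslip "))
```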
Janus Native App API Test Planning and Design
Overview
This document details the planning and design of the second phase of Janus Native app testing, the back end API system. This enables the large number of virtual end users to be uncoupled from the need to have many physical or emulated mobile devices running the front end app. This is an application level test design, intended to replicate the system under use. Individual component tests already exist for each API.
This is intended to be a ‘living’ document, to be maintained until the testing is an integral part of the CI and development process.
Scope
The test should treat all components relied on, either directly via the API or indirectly as dependencies, as part of the system under test, with the caveat that any connected bookmaker bet and price systems are outside Racing Post’s control.
This test has a wider scope than the Services performance tests, which test the individual bet and racing services.
Tooling
Performance Test Tool
The API testing is expected to be carried out using Apache JMeter. This tool has a number of advantages:
- Open Source licence, so no costs.
- Easy to get a basic test working which can be easily extended.
- GUI workspace with no coding for most functionality.
- Can add coded modules for non-standard functionality in a range of scripting languages, including JavaScript.
- Has test agent functionality for distributed loads.
- Has built in results analytics tools for graphs and metrics creation.
- Has an active community creating plugins.
- Supports a range of request protocols and DB access via JDBC.
Test Environment
Target System
It was thought that the performance test environment would be created on AWS, like other test and development machines; past experience has shown AWS boxes provide a stable, consistent environment for performance testing.
However, investigation showed the system is actually hosted on Amazon’s Fargate Docker container service. This is a very different methodology from AWS' dedicated virtual servers: it allows containers to be created as stand-alone instances with specified memory and CPU allocations. In the production environment this allows easy auto-scaling of nodes in component clusters, which then attach to load balancers (using simple round-robin load distribution and non-sticky sessions). In development and test environments scaling is disabled. If we wanted to test scaling on UAT it can be enabled manually, although the next redeployment would overwrite that setting.
Fortunately system resources assigned to each container are ring-fenced and dedicated so the risk that performance will vary between identical runs should be minimal.
The current plan is to reuse the existing UAT environment for initial performance testing. As this type of testing does not run constantly, this avoids the cost of an under-utilised dedicated environment.
Currently the only monitoring on the Fargate-hosted services is via Amazon CloudWatch; however, work is planned to add Datadog (and therefore Grafana) to the container services.
Test Rig
The test agents should ideally be deployed within the same data centre as the target system, but with Fargate being used to host Janus this may not be practical. Regardless, multiple agent machines will need to be provisioned, although the specification required is not high. If DevOps is amenable, the agents could be provisioned as cheap AWS spot instances and decommissioned after each test phase to save money. The simplest solution, however, may be an established estate of JMeter agents shared between RP performance testers.
While test agents are relatively easy to deploy there are some established prerequisites before use:
- Java is installed on the agent server.
- JMeter is deployed on the server (the same version as the computer acting as controller).
- Any supplemental files are manually deployed (test data, JAR files for handwritten Java, JMeter plugins used in the test).
After the agents are provisioned their IP addresses need to be added to the controller’s jmeter.properties file under the remote_hosts property (more details here).
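For illustration, the controller-side configuration might look like this (the agent IP addresses are placeholders):

```properties
# jmeter.properties on the controller
# Comma-separated list of provisioned agent hosts (placeholder IPs)
remote_hosts=10.0.1.11,10.0.1.12
```

The distributed run can then be launched headless from the controller with `jmeter -n -t plan.jmx -r -l results.jtl`, where `-r` starts the test on all hosts listed under `remote_hosts`.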
TODO Establish JMeter test agent plan with other PT/SDET resources.
Test Data
Test data could initially be based on functional test data sets; however, a much larger range of user bookmaker accounts would be required, and these are handed out sparingly by operators. A stub service will therefore need to be created. This too can be developed iteratively: an initial simple version takes the requested bet and turns it around as a successful placement response, and more complex replies, such as insufficient funds or a changed price, can then be added to the mix.
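A first iteration of that stub could be a few lines of Python: accept the posted bet and echo it back as a successful placement. This is a minimal sketch; the field names (`selection`, `stake`, `requested_odds`) are assumptions, as the real bookmaker schema is not documented here.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_placement_response(bet: dict) -> dict:
    """Turn a requested bet straight into a successful placement (stub behaviour).

    Field names are illustrative; the real bookmaker schema may differ. Later
    iterations can branch into 'insufficient funds' or 'price changed' replies.
    """
    return {
        "status": "PLACED",
        "selection": bet.get("selection"),
        "stake": bet.get("stake"),
        "odds": bet.get("requested_odds"),  # echo the requested price back unchanged
    }

class StubBetHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        bet = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(build_placement_response(bet)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve the stub locally (blocks until interrupted):
#   HTTPServer(("127.0.0.1", 8080), StubBetHandler).serve_forever()
```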
The UAT environment currently uses the Legacy DB server. This is a replica of production that receives updates in real time.
Test Design
Methodology
Since JMeter mimics the app’s requests for hundreds of users, the scenarios in the script need to realistically replicate the app’s sequences of interactions with external systems. There are two key ways to do this.
The first is to understand the external API and how the stateless requests relate to each other: how to make initial requests and extract data to be used as parameters for subsequent requests. In RP's case the core APIs are documented:
- Racing data https://horses-api.apps.uat.global.rp-cloudinfra.com/horses/documentation
- Betslips https://betslip-api.ecs.dev.janus.rp-cloudinfra.com/swagger/#/
The second method is to carry out a ‘man in the middle’ capture on the tester’s device to record the app's actual requests, as detailed here. This sequence is then used as a skeleton for a test script, where results from one request are parsed for parameters for the subsequent call.
In reality the best approach is a blend of the two: a captured test run for the core data flow, then using the published documentation to create realistic variations on that snapshot of a user’s behaviour.
Requests in JMeter should be modularised for use in multiple scenarios within the test script. This will help when complexity is added later.
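The chaining pattern — parse one response, feed parameters into the next request — looks like this in outline. The endpoints and JSON field names below are invented for illustration; in JMeter the same job is done with JSON extractors and variable references rather than Python.

```python
import json

def extract_race_ids(race_list_response: str) -> list[str]:
    """Pull race identifiers out of a hypothetical race-list payload."""
    payload = json.loads(race_list_response)
    return [race["id"] for race in payload.get("races", [])]

def next_request_url(base_url: str, race_id: str) -> str:
    """Build the follow-up 'race detail' request from an extracted parameter."""
    return f"{base_url}/races/{race_id}"

# Simulated first response; a real run would GET this from the racing API.
first_response = '{"races": [{"id": "r101"}, {"id": "r102"}]}'
for race_id in extract_race_ids(first_response):
    print(next_request_url("https://example-api.invalid", race_id))
```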
Scenarios
As Janus is being developed in an iterative, Lean way the testing should be developed in a similar style. Initial simple scenarios, based on captured request flows, can then be extended with further variations. Scenarios should be prioritised by considered risk. High volume screens take priority over screens with little traffic. Complex screens take priority over simple ones.
| Stage | Screens/Requests | Description | Jira Ticket |
|---|---|---|---|
| 1 | Racing home page, Individual race, Horse details | Basic user navigation: starting at the home page, choosing a race, then choosing a runner in that race. | JP-2389 |
| 2 | Add a bet from the horse detail screen and view the betslip | Using the screen modules from stage 1 | JP-3572: Performance - Add selection from horse detail screen to betslip CLOSED |
| 3 | Log into a bookmaker and place a bet | Using the screen modules from stage 1 and 2 | Deprecated to avoid stressing Bookmaker systems. |
| 4 | Using configurable variety create a mixed scenario based on DIU analysis of user behaviour | e.g. some users just view a few race and runner details with no bets. Other users build a betslip with 1 to 3 legs, with a mix of single and each way bets. Also maybe have subscribers log into the app and look at news posts. | |
| 5 | Integrate JMeter test into CI | | |
| 6 | Create user documentation for running and maintaining tests | | |
Cadence
The testing will be carried out at least every UAT release. However additional testing can be added for development tickets identified as potential performance risks at the test stage. Ideally performance testing will be “pushed left” into the continuous integration process.
AWS Network Stress Test Alert
If the test load exceeds the maximum network load defined in the EC2 Testing Policy, each full load test (but not low-load smoke tests) will need to be pre-authorised with Amazon. The AWS Stress Test Intake form needs to be completed and submitted before a full load test is run.
Further Non-Functional Testing
This section details further testing which depends on having an established performance test in place.
Resilience Testing
Resilience testing can use the load of the performance test to verify that a component failure does not stop request processing and data retrieval.
TODO Create resilience test planning child doc.
Soak Testing
A soak test uses an extended run of the performance test (12+ hours) to reveal any subtle cumulative software issues. Typically these will be issues like memory leaks, internal reserved storage size exceeded and accumulated unfinished threads.
TODO Create soak test planning child doc.
Stress Testing
Stress testing uses a gradually increasing performance test load to identify the rate that causes the test environment to fail. This is useful because identifying the weakest point means mitigation against that type of failure can be designed and built in. This may be easier on a non-scaling Fargate environment.
TODO Create stress test planning child doc.
Scalability Testing
Usually scalability testing involves performance testing on a range of different environment specifications to see how capacity changes, often with a view to extrapolating results for future hosting. In Janus' case the ability to spin up more Fargate containers makes server capacity near-infinite. However, some fixed resources (e.g. DB servers) won’t scale easily, and parallelising servers often brings diminishing returns in capacity.
TODO Create scalability test planning child doc.
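The diminishing-returns point can be made concrete with a standard capacity model such as the Universal Scalability Law (not something currently used on the project); the contention and crosstalk coefficients below are purely illustrative.

```python
def usl_relative_capacity(n: int, alpha: float = 0.05, beta: float = 0.001) -> float:
    """Universal Scalability Law: relative capacity of n parallel nodes.

    alpha models contention on shared resources (e.g. the non-scaling DB);
    beta models coherency/crosstalk costs. Both values are illustrative.
    """
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

# Doubling the node count yields less than double the throughput:
for n in (1, 2, 4, 8, 16):
    print(n, round(usl_relative_capacity(n), 2))
```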
Further functionality will be added to scenarios, such as Racing Post subscriber log-in and viewing of the news screen, but race card navigation and bet placement are likely to be the highest-throughput areas. There should also be a mix of users on simulated iOS and Android devices. Scripts are expected to be developed using the JMeter GUI environment (no agents) on the tester's MacBook, running them with a tiny load against UAT. The DIU don’t have information about user journeys through the legacy app, but we do have Google Analytics data for the legacy web site, which may give us guidance on the request mix.
User Load
DIU have been unable to supply measurements of concurrently connected users on production; the metrics they have supplied for the legacy apps show users per minute. It is fair to assume that most users remain on the app for a minute or more as they view race details. Metrics for an average day show that for most of the afternoon iOS users per minute do not go below 5,000, with one sharp peak reaching around 9,000. Android users for the same period don’t go much below 1,500 and peak at about 3,300 per minute. As the UAT environment does not dynamically scale like production, it is unlikely to support production levels of load; it is usually advisable to ramp up load over a few tests and see what the test environment will support. On a technical note, the JMeter documentation says each agent should run no more than 400 threads (virtual users), so each multiple (or part thereof) of 400 users will require a further agent server.
Test Duration
During script development a run of a few minutes normally suffices to reveal scripting issues, but for an actual test we want to give it time to generate the full range of scenario variations. A run of 15 minutes should be sufficient.
Planned Timeline
| Step | Man Days Estimate | Notes |
|---|---|---|
| Create Stage 1 script (detailed above) | 5 | App request capture needs to be carried out for the first time, so plenty of request analysis and learning is expected. Authentication/access issues are also anticipated. Script dev to be carried out against UAT. |
| Add Stage 2 functionality | 3 | Issues found and resolved in Stage 1 mean a lower time for subsequent scripting. |
| Provision and smoke test JMeter agent server | 1 | May be able to use existing Paris servers (see Scott Redden’s comment below). Jira ticket: JP-3178: Give Performance Test the capacity to provision temporary AWS servers to act as JMeter agent servers CLOSED |
| Run load tests | 3 | Carry out a series of tests against UAT exploring load capacity and performance. Report results to the Janus team and get feedback on areas for improvement or further investigation (feeding into stages 3 & 4). |
| Add Stage 3 functionality | 3 | Having enough test accounts for the number of users may be an issue. This step may be started while the output of the previous phase is being discussed. |
| Carry out full load tests against UAT | 1 | As this could be near the Beta release there may be demand for the UAT environment. Some of the team input from above may be implemented here. |
| Add Stage 4 | TBD | This will include some of the team input from above. It is expected to be post Beta release, forming an ongoing improvement process using feedback in each sprint. |
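As a rough sizing sketch, the peak user figures above and the 400-threads-per-agent guideline imply the following agent count. Treating each user-per-minute as one concurrent virtual user is a deliberate simplification, not an established conversion.

```python
import math

THREADS_PER_AGENT = 400  # JMeter guideline for threads per agent

def agents_needed(virtual_users: int) -> int:
    """Each full or partial multiple of 400 threads needs its own agent server."""
    return math.ceil(virtual_users / THREADS_PER_AGENT)

# Peak figures from the DIU metrics (iOS ~9000/min, Android ~3300/min),
# assuming one virtual user per user-per-minute:
print(agents_needed(9000 + 3300))
```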
Janus Client App Device Testing Manual
- Overview
- Tooling
- Process
- Running the Manual Test
- Results Collection
- Results Reporting
- Cadence
- Troubleshooting
Overview
One of the primary drivers of the Janus project is the on-device performance of the current legacy app, which was reported to have a detrimental effect on the user experience. While whole-system performance contributes to this, the on-device app is the component nearest to the user, so we need to ensure it has good responsiveness.
Tooling
After research (documented here), the Apptim tool was chosen for its low cost and its ability to test on both Android and iOS. The tool has Windows and macOS versions available to download.
Apptim Dependencies
Apptim has a guide covering common issues with Android and iOS.
These instructions presume the testing is being done on a macOS computer.
Android
macOS requires Android drivers for Apptim to work. The easiest approach is to install the Android File Transfer application for Mac, which also makes transferring APK files easy.
iOS
Apptim uses the Apple Xcode library to interact with the phone, and Xcode is also needed to install the test app (see below). It can be installed from the App Store.
Process
Janus App Installation on Test Device
The deployment builds are found in the RP AppCenter area. Get the latest UAT builds if testing a release or at a sprint end.
Android
Simply transfer the APK file to the handset over the USB cable connection, then run it from the file browser. You may need to go into the device settings to allow sideloading of apps from outside the Google Play store, and to change the USB connection setting from ‘power only’ to ‘power and data’.
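For repeatable installs, the same sideload can be scripted via `adb`, which ships with the Android platform tools. The helper below is a sketch; the function name and APK filename are invented for illustration.

```python
def adb_install_command(apk_path: str, replace_existing: bool = True) -> list[str]:
    """Build the adb sideload command; -r replaces an existing install, keeping app data."""
    cmd = ["adb", "install"]
    if replace_existing:
        cmd.append("-r")
    cmd.append(apk_path)
    return cmd

# Hypothetical build name from AppCenter; with the handset connected over USB, run:
#   subprocess.run(adb_install_command("janus-uat.apk"), check=True)
print(adb_install_command("janus-uat.apk"))
```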
iOS
iOS is much stricter about installing apps that are not from the App Store. Start the Xcode dev tool and, with the handset connected by USB, select the menu item Window->Devices and Simulators. The device should appear here; if not, check that the device and MacBook trust each other. You can then drag the IPA file onto this window to start the installation process.
The app needs its developer certificate accepted before the device will run it. In Settings choose General->Profiles and Device Management and tap the Centurycomm Limited option to accept it.
Running the Manual Test
Most of the process is identical. However, for Android the developer settings must be unlocked (how varies by handset manufacturer) and ‘Allow USB debugging’ must be enabled in the developer settings. When connected via USB you may also need to change the connection rights from ‘Charge only’ to ‘File transfer’ to allow Apptim to retrieve system logs.
Before running a test ensure the app and backend are working by starting the app and navigating around some screens. This also helps ensure that any caches have been warmed. Also log into a bookmaker so selections can be added to the betslip during the test.
Ensure no other apps are active when carrying out the test to reduce competition for system resources.
Running on Apptim
- Connect the phone to the laptop via USB.
- Start Apptim.
- Ensure you are logged into the janus-team workspace, using the folder in the top right corner of the home page. This ensures colleagues in that group can see the results in Apptim.
- Select the Start a new test icon. Apptim will search for connected devices. If the test handset is not found refer to the troubleshooting section at the bottom of this page. After selecting the test handset Apptim searches the device and presents a list of applications to test against.
- Select the Janus app. Be careful if multiple releases are installed on the device that the correct one is selected.
- Apptim then asks for a test run name. This will be used in the result archive to identify this test run.
- Apptim starts the app on the phone; this is not a manual step. When started the tester carries out user actions in the app (see User Interactions below).
- When the manual interaction is complete press the End Session button. A message appears while Apptim pulls its test logging off the device into its internal archive.
- On the result screen there will be a box for the session report. Do not click in the link icon (shaped like a chain); this will upload the report to the Apptim site. Click in the monitor icon to view the report locally.
- The report has various sections (explained in the Apptim Documentation). The key one for the reporting is the logs section where preserved system logs can be downloaded.
User Interactions - Original Process
These steps were used for the Apptim testing from the start of performance testing. They will be superceded by the extended process below.
In the pre-test preperation gog in to two bookmakers instead of one and use Best odds. It is understood this will create the most work for the common layer when retrieving prices.
- Apptim opens Janus app, arrives on home screen.
- Switch to racing screen.
- Expand a race meet
- Open a race.
- Open the top horse details
- Expand the breeding section
- Scroll to the bottom of the runner history.
- Return to race
- Repeat steps 4 to 8 for the second and third horses.
- Add the third horse to the betslip by pressing the price button. The first time this is done will bring up the current logged in bookmaker message box to be actioned away.
- Repeat 3 to 10 for two different race meets.
- Go to the betslip via the main menu button
- Add 1.00 in the treble stake box
- Go to the News Home via the main menu button.
- Open an article. Scroll to the bottom of the article and then go back to News Home.
- Repeat 15 for 4 more articles, ideally from different sections.
User Interactions - Extended Process
The user actions are intended to exercise key paths in the app multiple times. Over time extra functionality has been added that was not being exercised by the original test process. Adding extra steps makes comparisons in the trend document harder. Therefore a ‘big bang’ step will be made with multiple functionality added in one go. Two tests will be carried out on one release, one with the old process and one with this new one. This will provide information on the impact of the additional steps.
- Apptim opens Janus app, arrives on home screen.
- Switch to racing screen.
- On Racing Home go back to the ‘previous day’.
- View 3 results from 3 different meets. After picking the first result navigate to the other two using the meets pull down on the result screen.
- Return to today’s races with the ‘today’ button.
- Use the search functionality to find and view a horse’s statistics from the magnifyimng glass icon on Race Home. Search for “King”. Choose one suggested result to view.
- Expand a meeting to list races.
- Open a race.
- View each of today’s races with Tips & Spotlights switched on (activated on the first race viewed).
- Open the Verdict & Predictor.
- Open the top horse details
- Expand the breeding section
- Scroll to the bottom of the runner history.
- Return to race
- Repeat steps 11 to 14 for the second and third horses.
- Add the third horse to the betslip by pressing the price button. The first time this is done will bring up the current logged in bookmaker message box to be actioned away.
- Repeat 7 to 16 for two different race meets.
- Go to the betslip via the main menu button
- Add 1.00 in the treble stake box
- Go to the News Home via the main menu button.
- Open an article. Scroll to the bottom of the article and then go back to News Home.
- Repeat step 21 for 4 more articles, ideally from different sections.
Results Collection
The graphs in Apptim are a good at illustrating performance but are not so useful for comparing between app releases. Specific measurements need to be collated for reporting and trend analysis. These metrics are extracted from logs downloaded from Apptim. The logs are easiest to analyse if loaded into a Google Sheet which has column average/sum/max functionality at the bottom of the sheet
Some metrics are not available from iOS testing.
Round values to two decimal places.
| Metric(units) | Log File (Name Prefixed by datetime) | Log File Column |
|---|---|---|
| Drawn Frames | renderinfo.tsv | Frames (column sum) |
| Janked frames (over 16ms between redraws) (Android Only) | renderinfo.tsv | max_jank (column sum) |
| Max Frame Draw Time (ms) (Android Only) | renderinfo.tsv | max_frame (column max) |
| Avg. Frames per Second (redraws) | renderinfo.tsv | fps (column average) |
| Max CPU usage(%) | cpuinfo.tsv | cpu (column max) |
| Average CPU usage(%) | cpuinfo.tsv | cpu (column average) |
| Max Native Heap Usage(kB) | meminfo.tsv | native_heap_size (column max) |
| Average Native Heap Usage(kB) | meminfo.tsv | native_heap_size (column average) |
| Proportional Size Set (max) | meminfo.tsv | pss (column max) |
| PSS(avg) | meminfo.tsv | pss (column average) |
- Overview
- Tooling
- Process
- Running the Manual Test
- Results Collection
- Results Reporting
- Cadence
- Troubleshooting
Overview
One of the primary drivers of the Janus project is the on-device performance of the current legacy app, which was reported to have a detrimental effect on the user experience. While whole-system performance contributes to this, the on-device app is the component nearest to the user, so we need to ensure it has good responsiveness.
Tooling
After research (documented here), the Apptim tool was chosen for its low cost and its ability to test on both Android and iOS. The tool has Windows and macOS versions available to download.
Apptim Dependencies
Apptim has a guide for common issues it has with Android and iOS. These instructions presume the testing is being done on a macOS computer.
Android
macOS requires Android drivers for Apptim to work. The easiest route is to install the Android File Transfer application for Mac, which also makes transferring APK files easy.
iOS
Apptim uses the Apple Xcode libraries to interact with the phone, and Xcode is also needed for installation of the test app (see below). It can be installed from the App Store.
Process
Janus App Installation on Test Device
The deployment builds are found in the RP AppCenter area. Get the latest UAT builds if testing a release or at a sprint end.
Android
Simply transfer the APK file to the handset over the USB cable connection, then run it from the file browser. You may need to go into the device settings to allow sideloading of apps that are not from the Google Play store, and to change the USB connection setting from ‘power only’ to ‘power and data’.
iOS
iOS is much stricter about installing apps that are not from the App Store. Start the Xcode dev tool. With the handset connected by USB, select the menu item Window->Devices and Simulators. The device should appear here; if not, check that the device and MacBook both trust each other. You can then click and drag the IPA file onto this window to start the installation process. The app needs to have its developer certificate accepted before the device will run it. In Settings choose General->Profiles and Device Management and tap the Centurycomm Limited option to accept it.
Running the Manual Test
Most of the process is identical for Android and iOS. However, for Android the developer settings must be unlocked (how to do this varies by handset manufacturer) and ‘Allow USB debugging’ must be enabled in the developer settings. When connected via USB you may need to change the connection mode from ‘Charge only’ to ‘File transfer’ so Apptim can retrieve system logs.
Before running a test, ensure the app and backend are working by starting the app and navigating around some screens. This also helps ensure that any caches have been warmed. Also log in to a bookmaker so selections can be added to the betslip during the test.
Ensure no other apps are active when carrying out the test, to reduce competition for system resources.
Running on Apptim
- Connect the phone to the laptop via USB.
- Start Apptim.
- Ensure you are logged into the janus-team workspace, using the folder in the top right corner of the home page. This ensures colleagues in that group can see the results in Apptim.
- Select the Start a new test icon. Apptim will search for connected devices. If the test handset is not found refer to the troubleshooting section at the bottom of this page. After selecting the test handset Apptim searches the device and presents a list of applications to test against.
- Select the Janus app. Be careful if multiple releases are installed on the device that the correct one is selected.
- Apptim then asks for a test run name. This will be used in the result archive to identify this test run.
- Apptim starts the app on the phone; this is not a manual step. When started the tester carries out user actions in the app (see User Interactions below).
- When the manual interaction is complete press the End Session button. A message appears while Apptim pulls its test logging off the device into its internal archive.
- On the result screen there will be a box for the session report. Do not click the link icon (shaped like a chain); this will upload the report to the Apptim site. Click the monitor icon to view the report locally.
- The report has various sections (explained in the Apptim Documentation). The key one for reporting is the logs section, where preserved system logs can be downloaded.
User Interactions - Original Process
These steps have been used for Apptim testing since the start of performance testing. They will be superseded by the extended process below. In the pre-test preparation, log in to two bookmakers instead of one and use Best odds. This is understood to create the most work for the common layer when retrieving prices.
- Apptim opens Janus app, arrives on home screen.
- Switch to racing screen.
- Expand a race meet.
- Open a race.
- Open the top horse details.
- Expand the breeding section.
- Scroll to the bottom of the runner history.
- Return to the race.
- Repeat steps 4 to 8 for the second and third horses.
- Add the third horse to the betslip by pressing the price button. The first time this is done, a message box about the currently logged-in bookmaker will appear and must be dismissed.
- Repeat steps 3 to 10 for two different race meets.
- Go to the betslip via the main menu button.
- Add 1.00 in the treble stake box.
- Go to the News Home via the main menu button.
- Open an article. Scroll to the bottom of the article and then go back to News Home.
- Repeat step 15 for 4 more articles, ideally from different sections.
User Interactions - Extended Process
The user actions are intended to exercise key paths in the app multiple times. Over time, extra functionality has been added that was not exercised by the original test process. Adding extra steps makes comparisons in the trend document harder, so a ‘big bang’ update will be made with multiple pieces of functionality added in one go. Two tests will be carried out on one release, one with the old process and one with the new one, to provide information on the impact of the additional steps.
- Apptim opens Janus app, arrives on home screen.
- Switch to racing screen.
- On Racing Home go back to the ‘previous day’.
- View 3 results from 3 different meets. After picking the first result, navigate to the other two using the meets pull-down on the result screen.
- Return to today’s races with the ‘today’ button.
- Use the search functionality to find and view a horse’s statistics via the magnifying glass icon on Racing Home. Search for “King” and choose one suggested result to view.
- Expand a meeting to list races.
- Open a race.
- View each of today’s races with Tips & Spotlights switched on (activated on the first race viewed).
- Open the Verdict & Predictor.
- Open the top horse details.
- Expand the breeding section.
- Scroll to the bottom of the runner history.
- Return to the race.
- Repeat steps 11 to 14 for the second and third horses.
- Add the third horse to the betslip by pressing the price button. The first time this is done, a message box about the currently logged-in bookmaker will appear and must be dismissed.
- Repeat 7 to 16 for two different race meets.
- Go to the betslip via the main menu button.
- Add 1.00 in the treble stake box.
- Go to the News Home via the main menu button.
- Open an article. Scroll to the bottom of the article and then go back to News Home.
- Repeat step 21 for 4 more articles, ideally from different sections.
Results Collection
The graphs in Apptim are good at illustrating performance but are less useful for comparing between app releases. Specific measurements need to be collated for reporting and trend analysis. These metrics are extracted from logs downloaded from Apptim. The logs are easiest to analyse when loaded into a Google Sheet with column average/sum/max functions at the bottom of the sheet. Some metrics are not available from iOS testing. Round values to two decimal places.
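If a spreadsheet is not to hand, the same column sum/average/max figures can be computed with a short script. This is a minimal Python sketch, assuming the Apptim logs are tab-separated with a header row and numeric columns named as in the metrics table below; `aggregate_column` and the sample data are illustrative, not part of the Apptim tooling:

```python
import csv
import io

def aggregate_column(tsv_text, column):
    """Return (sum, average, max) of a numeric column in an Apptim .tsv log,
    rounded to two decimal places as the report requires."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    values = [float(row[column]) for row in reader if row[column] != ""]
    total = sum(values)
    return (round(total, 2),
            round(total / len(values), 2),
            round(max(values), 2))

# Example with made-up renderinfo.tsv content (column names as in the table below).
sample = "fps\tmax_frame\n58.5\t12\n41.0\t30\n60.0\t9\n"
print(aggregate_column(sample, "fps"))  # (159.5, 53.17, 60.0)
```

For a real run, read the downloaded log file into `tsv_text` and pick the column/aggregate combination listed in the table for each report metric.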
| Metric (units) | Log File (name prefixed by datetime) | Log File Column |
|---|---|---|
| Drawn Frames | renderinfo.tsv | Frames (column sum) |
| Janked Frames (over 16ms between redraws) (Android only) | renderinfo.tsv | max_jank (column sum) |
| Max Frame Draw Time (ms) (Android only) | renderinfo.tsv | max_frame (column max) |
| Avg. Frames per Second (redraws) | renderinfo.tsv | fps (column average) |
| Max CPU Usage (%) | cpuinfo.tsv | cpu (column max) |
| Average CPU Usage (%) | cpuinfo.tsv | cpu (column average) |
| Max Native Heap Usage (kB) | meminfo.tsv | native_heap_size (column max) |
| Average Native Heap Usage (kB) | meminfo.tsv | native_heap_size (column average) |
| Proportional Set Size (max) | meminfo.tsv | pss (column max) |
| Proportional Set Size (avg) | meminfo.tsv | pss (column average) |

In the appendices are the Firebase figures for the recorded app start-up times. These are retrieved from the Firebase console: on the left menu select the performance tracking set, then choose the _app_start metric. A filter menu at the top can be used to limit the data to the Android build being tested; the iOS metrics do not have that option. The median and 95% values can be retrieved from the interactive 7-day history graphs. The Firebase start times have been unavailable for some time; the Android start time is currently found in the Apptim results summary. The iOS equivalent is unavailable without a debug build.

Currently the Apptim Marks instrumentation only works in the Android test. The marks can be found in the event.tsv log file. Double check that all time splits have corresponding START and STOP messages. Use the apptimMarks.sh script to turn the start/stop times into durations for the report table.
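The apptimMarks.sh script itself is not reproduced on this page, but the calculation it performs can be sketched. This is a hedged Python equivalent assuming event.tsv rows pair a timestamp with a "MARK START" / "MARK STOP" message (the real log layout may differ); it also flags the unmatched START/STOP pairs the text says to double-check:

```python
def mark_durations(lines):
    """Turn paired '<timestamp>\t<mark> START/STOP' rows into durations.
    The row format is an assumption about event.tsv, not a documented schema."""
    starts, durations = {}, {}
    for line in lines:
        timestamp, event = line.split("\t")
        mark, action = event.rsplit(" ", 1)
        if action == "START":
            starts[mark] = int(timestamp)
        elif action == "STOP":
            if mark not in starts:
                raise ValueError(f"STOP without START for {mark!r}")
            durations[mark] = int(timestamp) - starts.pop(mark)
    if starts:  # every START must have a matching STOP
        raise ValueError(f"START without STOP for {sorted(starts)}")
    return durations

# Hypothetical mark names, timestamps in milliseconds.
events = ["1000\tRACE_LOAD START", "1450\tRACE_LOAD STOP",
          "2000\tBETSLIP START", "2120\tBETSLIP STOP"]
print(mark_durations(events))  # {'RACE_LOAD': 450, 'BETSLIP': 120}
```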
Results Reporting
The test result pages are archived here. There is a template document that can be cloned. All reporting should be written with its target audience in mind: that audience ranges from management, who want to understand that the application has sufficient quality, to developers, who want to know which specific areas need remedy. Record the test metrics in the historical results spreadsheet for trend analysis. Reports should be shared with the team via Slack for visibility.
Cadence
Testing is carried out every sprint on the previous sprint’s build. However, if a code update is considered a performance risk or is intended to improve performance, a separate test can be carried out against a dev build.
Troubleshooting
| Issue | Solutions |
|---|---|
| USB cable is connected but the device is not seen on the computer, or vice versa. | Some USB cables lack the data connections and are intended for recharging only; test with a different cable. Check ‘Allow USB debugging’ is set in the developer options in the settings menu: practice shows that it is often unset by the phone between tests. Check the USB connection setting on the phone is “File transfer”, not “Power Only”. |
| Apptim claims it is missing dependencies after the dependency installation phase. | Usually caused by an interruption during dependency installation, notably the MacBook going into sleep mode. Rerun the dependency install. If that does not clear the issue, uninstall and reinstall Apptim. |
How to test Deep-linking Via Social Media (Twitter/Instagram/Facebook) - Using AppsFlyer
Prerequisites
- Request an AppsFlyer account from IT Support
- Working VPN pointed to the USA
- Pickswise App downloaded on a mobile device (an iOS device in this example)
- Log in to AppsFlyer and navigate to the OneLink Management screen
- Click on the New Link button
- Select Social-to-app, select a Media source name and click the Next button
- In the General settings pane click the Next button 
- In the Social-to-app settings pane click the Next button 
- In the Deep linking & redirection pane, confirm:
- When the app isn’t installed, iOS redirects to the App Store
- When the app isn’t installed, Android redirects to Google Play
- When the link is clicked on desktop, it redirects to the App Store website
- Click the Next button 
- In the Attribution parameters pane 
- Click on the Add parameters button and fill in all mandatory fields
- Click on the Create link button

Note:
Within the Params, the ID value = 311629 is a new article post taken from the WordPress backend (UAT or Prod).
The article ID will be in the URL of the post, as seen below.

The section Param value can be identified from the following document:
 DeepLinking Testing 
- On the “Your social-to-app link is ready” overlay 
- Click on the Done button and you will see the created social-to-app link below.
Please note:
- All media sources follow the same process to create a social-to-app link (Facebook/Instagram/Twitter), so on the New link pane remember to select the appropriate media source name.
- Once you have the link, you can then copy the link and add it to a social media post.
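The link created above carries the attribution values as query parameters. As a purely illustrative sketch (the OneLink domain, path and parameter names `deep_link_value`, `article_id` and `section` are assumptions, not the values configured in the Attribution parameters pane), composing a link to paste into a social post might look like:

```python
from urllib.parse import urlencode

def build_social_link(base_link, article_id, section):
    """Append attribution parameters to a OneLink URL.
    Parameter names here are hypothetical; use the ones configured in
    the Attribution parameters pane."""
    params = {"deep_link_value": "article", "article_id": article_id,
              "section": section}
    separator = "&" if "?" in base_link else "?"
    return base_link + separator + urlencode(params)

# Hypothetical OneLink template, article ID 311629 from the WordPress backend.
link = build_social_link("https://pickswise.onelink.me/abcd", 311629, "news")
print(link)
# https://pickswise.onelink.me/abcd?deep_link_value=article&article_id=311629&section=news
```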
How To Log In to Segment & How To Test Analytics Using Segment
Prerequisites
- Request a Segment account from IT Support
- Launch https://app.segment.com/login?
- Download Google Authenticator
- Log in with credentials using your work email
- Identify your device ID, for example [B06D9D2B-CD7A-4603-B527-3E16DA3186D3]
- iPhone 11 test device
- Log in to the Segment console and enter the main dashboard
- Click on the Next button
- Enter your password and click the Next button
- Multi-Factor Authentication page
- Open the Google Authenticator app on your mobile device; a 6-digit one-time password should be available for you
- Enter the 6-digit one-time password
- Click on the Sign In button
- Segment main dashboard console
- Connection overview
- You will land on the Profiles tab on first launch
- Select the Connections tab from the side menu bar
- The Connections sub-menu will be displayed
- Select the Sources tab from the side menu bar
- My Sources page: iOS and Android
- To test Android, select Pickswise App - Android Prod from the My Sources list
- To test iOS, select Pickswise App - Prod from the My Sources list. As we are testing only one platform as an example, we will select the iOS Pickswise App - Prod
- Overview Page 
- Select the Debugger button 
- You will now see all the live events that have been triggered on the iOS App
- Select one of the triggered events from the list
- Application Background page will be populated
- Select Raw Button from the Application Background section
- You will now see information for all devices, and you will need to identify your device from the information provided. Example: "device": { "id": "B95B5F79-583A-4F87-937D-53615D3F38D6", "manufacturer": "Apple", "model": "iPhone12,3", "name": "iPhone", "type": "ios" }
- Application Background Section

- Select the event to see the right-side Application Background section populate

- To identify your mobile device ID, select the Raw tab
- Copy your device ID [B06D9D2B-CD7A-4603-B527-3E16DA3186D3]
- Insert the ID into the search field
- Now only the events from your device will be shown
- iPhone ID 
- My Triggered Events Only 
- Example of what is tapped on the mobile device to trigger the events in the Segment console [iOS]
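Filtering a batch of raw events by device ID can also be done with a short script if the raw payloads are exported. This is a sketch assuming each payload carries a top-level "device" object like the raw example above; the real Segment export may nest it differently (e.g. under "context"), so treat the field layout as an assumption:

```python
import json

MY_DEVICE_ID = "B06D9D2B-CD7A-4603-B527-3E16DA3186D3"  # your ID from the Raw tab

def events_for_device(raw_events, device_id):
    """Keep only events whose device id matches. Payload layout follows the
    raw example above; adjust the lookup if your export nests "device"."""
    kept = []
    for raw in raw_events:
        event = json.loads(raw)
        if event.get("device", {}).get("id") == device_id:
            kept.append(event)
    return kept

# Two made-up raw payloads: one from our device, one from another handset.
raw = [
    '{"event": "Screen Viewed", "device": {"id": "B06D9D2B-CD7A-4603-B527-3E16DA3186D3"}}',
    '{"event": "Screen Viewed", "device": {"id": "OTHER-DEVICE"}}',
]
mine = events_for_device(raw, MY_DEVICE_ID)
print(len(mine))  # 1
```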
How to Install the Live Pickswise App on Android
- To get the app, you need to create a secondary Google account without using a VPN. To do this, tap on your profile image in Google Chrome; a pop-up overlay will appear showing the details of your currently signed-in email along with “Manage your Google Account”, “Add another account” and “Sign out”. Select “Add another account” in the middle of the overlay. This opens another sign-in page; tap on “Create account” and follow the on-screen instructions. The account will then be created and added to the Google Play Store. Note: if you are given the option to skip adding a mobile number, take it.
- Once you have created a second account, you will need to use a VPN connected to a United States IP to be able to use the Pickswise App
- We currently use the HMA (Hide My Ass) VPN as a team
- Once connected to the VPN, open the Google Play Store and tap on the hamburger menu on the top-left corner. Here, tap on the new top-right icon to switch to the new account on the Play Store. 
- After switching to the new account on the Play Store, open the hamburger menu again and go to “Account”. Here, you will find “Switch to the United States Play Store” and underneath an option to add credit or debit card. Tap on it. 
- Now tap on “Continue” to add a credit or debit card. You can add a credit card if you want or just cancel/skip the prompt straight away. In either case, you will be moved to the US Play Store. However, keep in mind that it might take 48 hours to switch the region. To confirm you have moved permanently, go to the “Account” section again and check if it shows “Switch to UK/Your Region Play Store”. If so, you have switched successfully. Note: once you move to the US Play Store, you will be locked in for one year. That is precisely why a secondary account is recommended, so that you don’t lose your active subscriptions, payment methods, family sharing benefits and more on your primary Google account. 
- Finally, your secondary account will move to the US Play Store and you can now search for and install any Android app without issue. Once you have switched the region, disconnect the VPN; from now on you don’t need a VPN to search for and install apps.