Testing: progress tracker - ansible/community GitHub Wiki

Testing Actions

This page aims to improve Ansible by:

  • Fixing existing issues identified by existing tests (listed in skip/ignore files)
  • Adding more tests
  • Improving test stability

Topics

  • GitHub test issues
  • GitHub test PRs

This document lists the progress of various ongoing cleanup actions.


Fix issues identified by existing tests

Sanity: validate-modules

List: test/sanity/validate-modules/ignore.txt

NOTE: The list of ignored errors may increase over time as additional tests are added to the validator.

How to fix

We suggest fixing issues per module:

  • Remove the module's entry from the ignore file
  • Run: ansible-test sanity --test validate-modules nameofmodule
  • Review each reported issue against the documenting-modules guidelines
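As a sketch of this workflow (the module and path here are placeholders, not a real assignment; substitute the module you are fixing):

```shell
# Hypothetical example: clearing the validate-modules ignore entries
# for the "copy" module. Paths are illustrative.

# 1. Drop the module's lines from the ignore file.
sed -i '/modules\/files\/copy\.py/d' test/sanity/validate-modules/ignore.txt

# 2. Re-run the sanity test for just that module to see the real errors.
ansible-test sanity --test validate-modules copy

# 3. Fix each reported error and re-run until the test passes.
```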

Sanity: Windows pslint

List: test/sanity/pslint/ignore.txt

How to fix

Ask in #ansible-windows first; PRs may already be in flight.

Sanity: pylint

List: test/sanity/pylint/ignore.txt

How to fix

Remove the entry from the ignore list, then run: ansible-test sanity --test pylint nameofmodule

Sanity: Broken imports

NOTE: The last of the broken imports are fixed by https://github.com/ansible/ansible/pull/35024. Once that is merged, we can remove this section.

Testing: boilerplate

Tracking the cleanup to add from __future__ boilerplate to the code here: https://github.com/ansible/community/wiki/Testing:-boilerplate,-wildcard-imports,-and-get_exception

Once that is clean, we'll be able to enable the boilerplate test for every file.

Testing requirements

Note: When we talk about tests in this context we mean tests that are run via Shippable. A test that doesn't run doesn't count.

All new modules (proposed: starting from those merged after 2.6 is branched) should have at least one test case which runs in the standard Ansible Shippable CI and exercises the main path through the whole module. This will normally be an integration test case. For modules which require specialised hardware to test against (e.g. network modules for specific hardware) this is not a requirement; instead, a "unit" test case which runs against mocked-out functionality of the module is encouraged.

Where integration test cases are run in a vendor environment, it is strongly encouraged to provide sufficient unit test coverage to allow the Ansible team to refactor the module with very low risk of breaking things.

Certain categories of modules have stronger requirements (where there's a working group, shouldn't this be their responsibility? See below):

  • modules/network/: since Ansible 2.4, all new modules and any new feature MUST have tests
  • modules/aws/: since Ansible 2.5, all new modules and any new feature MUST/SHOULD? have tests
  • supported: Core (not just modules) MUST have tests

Testing Requirements for Plugins

  • Every plugin must be used in at least one integration test case. (In my case this means lookup plugins, but can it apply to all plugins? Not having this made it almost impossible to refactor some of the AWS ones.)

Testing Requirements for Core code and Module utils.

Each exposed module_utils entry point should have at least one unit test which exercises it.

ACTION: Formally document this, get sign off and make it law

ACTION: Can we enforce this via CI?

Extending tests: Modules

How to write good tests

Basic information should be in dev_guide/testing

There is also a need for each working group to define standards for their own area.

The use of the terms "integration" and "unit" in Ansible testing doesn't always match expectations. For example, unit tests are the only way to drive Python test cases, so some required full-module test cases have to come under unit tests; and integration tests actually go most of the way up into functional testing. Clarifying what is allowed and encouraged in the different types of tests (what should be mocked, that unit tests should still not connect to external services, etc.) should be added to the documents.

List of working group pages:

  • AWS https://github.com/ansible/community/blob/master/group-aws/integration.md
  • Networking https://github.com/ansible/community/blob/master/group-network/network_test.rst

(Have checked all other development working groups and this seems to be a complete list.)

  • Discuss if the container people want a page about this (AP to whom?)
  • Discuss if the Windows people want to move their existing materials into their working group pages

List of important modules that don't have integration tests

NOTE: Some modules have tests which are disabled due to being unstable in CI, rather than due to a lack of tests. In these cases work is needed to improve test stability so they can be enabled again. Do we have a list of these disabled tests? See the section below on test stability.

Generated using:

./hacking/report.py populate
./hacking/report.py query
# Core modules excluding Windows with under 40% coverage
select m.namespace || '/' || m.module, c.coverage
from modules m
left join coverage c on m.path = c.path
where m.supported_by == 'core'
  and m.namespace != 'windows'
  and ifnull(c.coverage, 0) <= 40
order by m.namespace, m.module;
  • utilities.logic/async_status
  • cloud.amazon/aws_s3
  • cloud.amazon/cloudformation
  • cloud.amazon/ec2
  • cloud.amazon/ec2_facts
  • cloud.amazon/ec2_group
  • cloud.amazon/ec2_metadata_facts
  • cloud.amazon/ec2_snapshot
  • cloud.amazon/ec2_vol
  • cloud.amazon/ec2_vpc_net
  • cloud.amazon/ec2_vpc_net_facts
  • cloud.amazon/ec2_vpc_subnet
  • cloud.amazon/ec2_vpc_subnet_facts
  • cloud.amazon/s3
  • cloud.amazon/s3_bucket
  • system/service
  • system/user

To list all modules with coverage under 20% use:

select m.namespace || '/' || m.module, c.coverage
from modules m
left join coverage c on m.path = c.path
where m.namespace != 'windows'
  and ifnull(c.coverage, 0) <= 20
order by m.namespace, m.module;

Some simple modules that have low/no coverage that do not require external services to test:

If you wish to work on one of these, please add your name next to it (or ask in #ansible-devel if you don't have permission); this will avoid overlap.

  • files/replace|14.94253
  • files/xml|15.27778
  • packaging.language/bower|15.84158
  • packaging.language/bundler|14.28571
  • packaging.language/composer|20.0
  • packaging.language/cpanm|17.1875
  • packaging.language/easy_install|15.66265
  • packaging.language/maven_artifact|16.41221
  • packaging.language/pear|14.56311
  • packaging.os/apk|17.68707
  • packaging.os/homebrew_cask|18.26923
  • packaging.os/homebrew_tap|13.59223
  • packaging.os/macports|15.38462
  • packaging.os/openbsd_pkg|7.36842
  • packaging.os/opkg|19.35484
  • packaging.os/pacman|8.5
  • packaging.os/pkg5_publisher|15.18987
  • packaging.os/pkgin|14.28571
  • packaging.os/pkgutil|10.7438
  • packaging.os/portage|13.28671
  • packaging.os/portinstall|15.78947
  • packaging.os/pulp_repo|10.50584
  • packaging.os/redhat_subscription|17.01389
  • packaging.os/rhsm_repository|13.76147
  • packaging.os/slackpkg|17.14286
  • packaging.os/sorcery|9.09091
  • packaging.os/svr4pkg|14.73684
  • packaging.os/swdepot|14.44444
  • packaging.os/swupd|16.55172
  • packaging.os/xbps|14.03509
  • source_control/git_config
  • system/cron|12.75362
  • web_infrastructure/htpasswd

Existing test PRs in flight

Look through existing PRs

Excluding new modules

Extending tests: module_utils

  • FIXME Need a list here, may just be driven by Coverage data

Unit tests may work well here

Extending tests: cli

ansible

ansible-config

ansible-connection

With Networking Team

ansible-console

Maybe leave for the moment

ansible-doc

ansible-galaxy

ansible-inventory

ansible-playbook

ansible-pull

ansible-vault

Improve test stability

Tests to update to stop using requirements.txt

  • filters ( json_query -> jmespath )
  • expect ( expect )
  • htpasswd ( passlib )
  • user ( passlib on OSX )
  • password_hash ( passlib on OSX )

Python's SimpleHTTPServer timeout

It appears that this is occasionally timing out.

Package test instabilities

Retrying package installs to work around external instabilities

We are often seeing "unstable" results due to external package repos being unavailable. Once updated, https://github.com/willthames/ansible-lint/pull/324/ could help us identify playbooks that need updating.

- name: "Apt has retry test success"
  apt:
    pkg: foo
  register: apt_retry_workaround
  # "result is success" is the test syntax; the "result|success" filter
  # form was deprecated in Ansible 2.5
  until: apt_retry_workaround is success
  retries: 5
  delay: 10

Retries should not be used on modules being tested. The "apt" module test should not use retries, but an unrelated test that happens to use apt can.

NOTE: Retries will not cause tests to pass, but rather to be reported as unstable. +1

I'd say it should be avoided as much as possible (if we are to document this). Anything that's flaky due to network reasons/delays should probably get a retry. (Or include a retry counter in the module, but that defines a new interface/expectations for modules, which is not, IMO, a good idea.)

We have a number of different package modules, though I'm not sure which are the worst offenders. Apt and yum are definitely bad at this. I heard dnf was better, but I have no evidence of this. As far as I recall, the worst part is when key management is involved (apt_key).

SUGGESTION: Maybe we need to formally track the issues somewhere over a period of time. See the section below on test stability; those two sections should probably be merged.

http://docs.ansible.com/ansible/latest/playbooks_loops.html#do-until-loops
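For example, a retry on a setup step of an unrelated test is fine (the task names and module choice here are illustrative, not from a real test):

```yaml
# Hypothetical setup task from the git module's integration tests.
# Retrying the apt install guards against transient mirror outages
# without masking bugs in the module actually under test (git).
- name: Install git, required by the tests below
  apt:
    name: git
    state: present
  register: git_install
  until: git_install is success
  retries: 5
  delay: 10
```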

Tests that use unstable external dependencies

A number of tests pull data over HTTP(S) (and other protocols), and occasionally these fail due to unrelated issues. The integration test should set up what it needs locally. We shouldn't add more into httptester, for the reasons stated in https://github.com/ansible/ansible/issues/16864

Files pulled from ansible-ci-files.s3.amazonaws.com/ are OK.

  • apt
  • apt_repository
  • get_url
  • git
  • yum
  • yum_repository
  • zypper_repository
  • and others
  • List of tests here, and what should be used instead
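As a sketch of the "set up locally" approach (paths and task names are illustrative; output_dir is the usual integration-test working directory variable):

```yaml
# Illustrative only: stage a fixture on the target host and exercise
# get_url against a file:// URL, so the test never leaves the machine
# and cannot fail because an external server is down.
- name: Stage a local fixture file
  copy:
    content: "fixture payload"
    dest: "{{ output_dir }}/fixture.txt"

- name: Exercise get_url without an external dependency
  get_url:
    url: "file://{{ output_dir }}/fixture.txt"
    dest: "{{ output_dir }}/downloaded.txt"
```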

List of things that need unit tests (e.g. libraries under module_utils)

Tracking and fixing instability issues

Unstable tests should have a GitHub issue opened.

Existing issues tracking unstable tests: https://github.com/ansible/ansible/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+%22unstable+integration+test%22

Question: Do we want a tag for issues that report unstable tests, perhaps unstable_test?

Question: Do we want to differentiate between unstable tests which are still enabled and those which have been disabled? Disabled tests could be simply unstable or consistently failing, due to bugs in the tests and/or code under test.

Docs: dev_guide/testing

  • ~~Main pages done~~
  • Codecover detail
  • Quickstart page