# Pytest markers
This page covers the markers currently used in Robottelo and, more importantly, when to use them.
Markers in this repository are not just labels. Many of them change parametrization, collection, infrastructure choice, or reporting behavior. That is why choosing the right marker matters more here than in a simple unit-test project.
## First rule: not every marker should be written by hand

Some markers are normal authoring markers, while others are added automatically by fixtures or plugins during collection.
### Markers you will usually add yourself

| Marker | Use it when |
|---|---|
| `@pytest.mark.e2e` | the test verifies an end-to-end workflow rather than a narrow unit of behavior |
| `@pytest.mark.upgrade` | the test belongs to upgrade coverage selected with upgrade-focused runs |
| `@pytest.mark.destructive` | the test needs a fresh Satellite instance and must not share the default one |
| `@pytest.mark.run_in_one_thread` | the test must not run concurrently with similar tests because it mutates shared global state |
| `@pytest.mark.rhel_ver_match(...)` | the test should run only on selected supported RHEL content-host versions |
| `@pytest.mark.rhel_ver_list([...])` | the test should run only on an explicit list of RHEL content-host versions |
| `@pytest.mark.no_containers` | the content-host scenario must use VMs instead of container hosts |
| `@pytest.mark.skip_if_not_set(...)` | the test depends on a configured settings section such as libvirt or external auth |
| `@pytest.mark.build_sanity` | the test is part of the fast sanity subset used to confirm a build is usable |
| `@pytest.mark.first_sanity` | the installer/bootstrap sanity test that must run first in a sanity selection |
| `@pytest.mark.foremanctl` | the test requires foremanctl behavior and the plugin should switch version source accordingly |
| `@pytest.mark.on_premises_provisioning` | the provisioning test depends on on-prem provider infrastructure and should only be included when that infra is requested |
| `@pytest.mark.ipv6_provisioning` | the provisioning test is specific to IPv6 provisioning coverage |
| `@pytest.mark.stubbed` | the test is intentionally not automated yet and should normally be deselected unless stubbed tests are explicitly included |
### Markers you usually should not add directly

| Marker | Why |
|---|---|
| `content_host` | added automatically for tests that use content-host fixtures |
| `factory_instance` | added automatically for tests that use fresh Satellite/Capsule factory fixtures |
| `manifester` | added automatically when a test uses a manifest fixture |
| `ldap` | added automatically when a test uses an LDAP fixture |
| `component`, `importance`, `team` | derived from testimony-style docstring fields |
| `blocked_by`, `verifies_issues` | derived from docstring metadata and issue-handler logic |
| `deselect` | generally plugin-managed during collection rather than hand-authored |
### Standard pytest markers you will still see

- `@pytest.mark.parametrize(...)`
- `@pytest.mark.usefixtures(...)`
- `@pytest.mark.skipif(...)`

These behave as standard pytest features; Robottelo mainly adds project-specific meaning on top of the custom markers below.
## Marker guide by purpose

### Scope and test-shape markers

#### e2e
Use for full workflows that cross multiple steps or interfaces and are meant to validate an end-to-end outcome.
Use it when the value of the test is the workflow as a whole, not when it is just a long test.
#### upgrade

Use for upgrade-oriented coverage outside the newer SharedResource-based `tests/new_upgrades/` scenarios.
If the test belongs in the newer upgrade framework, prefer the scenario-specific upgrade markers described later on this page.
#### destructive

Use when the test changes the Satellite under test in a way that should not be shared. The `target_sat` fixture checks this marker and provisions a fresh instance instead of reusing the default one.
Typical reasons:
- installation or reinstallation
- major configuration changes
- operations that leave the system in a hard-to-reuse state
Do not use destructive just because cleanup is annoying.
#### run_in_one_thread
Use when parallel execution is unsafe because the test changes shared global state, long-lived configuration, or resources that collide with similar tests. Do not add it just because a test is slow.
If the real issue is fixture scope or missing cleanup, fix that instead of silencing concurrency with this marker.
#### stubbed
Use only for tests that intentionally exist as placeholders or manual coverage. By default, stubbed tests are deselected unless explicitly included.
This is a meaningful workflow state, not a generic “temporarily flaky” marker.
### Content-host parametrization markers

These markers are handled by `pytest_plugins/fixture_markers.py`.
#### rhel_ver_match

Use when the test should run on a subset of supported RHEL versions matched by a regex or the `N-x` shorthand.

Examples:

```python
@pytest.mark.rhel_ver_match(r'^(9|10)')
@pytest.mark.rhel_ver_match('N-1')
```
#### rhel_ver_list

Use when you want exact versions instead of a regex pattern.

Example:

```python
@pytest.mark.rhel_ver_list([9, 10, '10_fips'])
```
#### no_containers
Use when the content host must be a VM and the scenario cannot run against a container-backed host.
Add this only when there is a real technical reason, such as systemd, networking, kernel-level behavior, or another dependency that containers cannot represent accurately enough.
#### network
Use only for content-host tests that must be limited to specific network types. Keep usage aligned with the current fixture/plugin behavior.
If you are not certain the scenario truly depends on IPv4 or IPv6 selection, leave the test unmarked and let the default infrastructure behavior drive it.
### Satellite-maintain execution markers

These marks change how `sat_maintain` is parametrized.

| Marker | Use it when |
|---|---|
| `include_capsule` | the test should run for both Satellite and Capsule |
| `capsule_only` | the test only makes sense on Capsule |
| `include_satellite_iop` | the test should run on both default Satellite and IoP Satellite |
| `satellite_iop_only` | the test is only meaningful on IoP Satellite |
These are execution-shaping markers, not just categorization tags.
### Infra-selection markers

These are mostly for provisioning and specialized infrastructure pipelines.

| Marker | Use it when |
|---|---|
| `on_premises_provisioning` | the test needs on-prem provisioning providers |
| `ipv6_provisioning` | the test covers IPv6 provisioning specifically |
| `pit_server` / `pit_client` | the test belongs to PIT server/client scenarios |
| `client_release` | the test is part of client release coverage |
These are specialized marks. If you are writing ordinary feature coverage, you probably do not need them.
### Sanity markers

| Marker | Use it when |
|---|---|
| `build_sanity` | the test belongs in the fast build-readiness sanity subset |
| `first_sanity` | the test is the installer/bootstrap sanity test that must be forced to the front of that subset |
| `no_compose` | the sanity test should be skipped for nightly compose runs |
### Upgrade scenario markers in tests/new_upgrades/

Newer upgrade scenarios also use scenario-specific markers in `tests/new_upgrades/`, for example:

`content_upgrades`, `search_upgrades`, `capsule_upgrades`, `puppet_upgrades`, `discovery_upgrades`, `perf_tuning_upgrades`, `hostgroup_upgrades`, `usergroup_upgrades`, `errata_upgrades`, `client_upgrades`
Use these only for tests in that upgrade framework, and keep the marker list in
tests/new_upgrades/conftest.py in sync with the markers actually used by the
tests there.
### Metadata-driven markers

The metadata plugin converts docstring fields into collection markers and report properties.
Prefer docstrings over handwritten markers for:

`component`, `importance`, `team`, `blocked_by`, `verifies_issues`
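For the issue-related fields, that means writing them in the docstring rather than as marks. A sketch, using only the field names this page mentions (the issue ID is a placeholder):

```python
def test_positive_example_with_metadata():
    """Sketch of a testimony-style docstring; the metadata plugin turns
    fields like the one below into markers and report properties. The
    issue ID is a placeholder.

    :Verifies: SAT-12345
    """
```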
## FAQ

### Should I add a marker or make a new test directory?
Usually add a marker only when the repository already uses that vocabulary and the marker changes behavior or selection in a meaningful way. Directory layout and markers solve different problems.
### I want to mark a test for a Jira issue. Should I add a pytest marker?

Usually no. Put the issue in the docstring with `:BlockedBy:` or `:Verifies:` so the metadata and issue-handler plugins can do the right thing.
### The test only fails in parallel. Should I immediately add `run_in_one_thread`?
No. First ask whether the real issue is shared state, missing teardown, broad
fixture scope, or an avoidable dependency on global configuration. Use
run_in_one_thread when serialization is genuinely the correct behavior.
### When should I create a new custom marker?

Only when the existing marker vocabulary cannot express the behavior you need and the new mark has clear semantics that will stay useful. If you add one, register it immediately in the relevant plugin or `conftest.py`.
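Registration uses the standard `pytest_configure` hook. A minimal sketch (the marker name is hypothetical; the fake config class exists only to demonstrate the call shape):

```python
# Registering a custom marker in conftest.py so pytest's --strict-markers
# runs accept it. `pytest_configure` is the standard hook; the marker
# name below is hypothetical.
def pytest_configure(config):
    config.addinivalue_line(
        'markers',
        'my_new_marker: one-line description of the marker semantics',
    )


# Minimal stand-in for pytest's config object, only to show the call shape:
class _FakeConfig:
    def __init__(self):
        self.lines = []

    def addinivalue_line(self, section, line):
        self.lines.append((section, line))


cfg = _FakeConfig()
pytest_configure(cfg)
```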
## Practical rules of thumb

- Add a marker only when it changes collection, parametrization, or test intent in a meaningful way.
- Prefer the existing marker vocabulary over inventing a new label.
- If you introduce a new custom marker, register it in the relevant plugin or `conftest.py` immediately.
- If the marker's meaning is really metadata from the docstring, put it in the docstring instead of as a handwritten mark.