Robottelo Contributing Guidelines - SatelliteQE/robottelo GitHub Wiki

This guide describes the current contributor expectations for the repository and the conventions that shape how tests, fixtures, helpers, and plugins are organized.

Repository structure

Most contributions land in one of these areas:

  • tests/foreman/api/, tests/foreman/cli/, tests/foreman/ui/
  • tests/new_upgrades/ for SharedResource-based upgrade scenarios
  • pytest_fixtures/ for reusable fixtures
  • pytest_plugins/ for collection and execution behavior
  • robottelo/ for framework code, host abstractions, and helpers

The important thing is not just where the code lives, but why. Robottelo is a layered test framework, so good contributions usually preserve that separation:

  • tests express intent and assertions
  • fixtures manage setup and teardown
  • helpers and host abstractions hide repeated framework behavior
  • plugins shape collection, metadata, deselection, and infrastructure behavior

General expectations

  • Prefer API-based setup over CLI or UI setup when the test does not depend on the interface being exercised.
  • Reuse existing fixtures and helpers before adding new ones.
  • Keep non-reusable helpers close to the test that uses them.
  • Avoid time.sleep() in tests; use existing wait helpers and polling patterns.
  • Keep assertions readable and intentional.
  • Favor flat, understandable test flow over clever abstraction.
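
The no-`time.sleep()` rule in practice means bounded polling. Robottelo ships shared wait helpers for this; the stdlib sketch below (with a hypothetical `poll_until` name) only illustrates the pattern those helpers implement:

```python
import time


def poll_until(predicate, timeout=60, delay=2):
    """Poll `predicate` until it returns True or `timeout` seconds elapse.

    Prefer the repository's existing wait helpers; this sketch only shows
    the bounded-polling shape that replaces bare time.sleep() calls.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(delay)
    raise TimeoutError(f'condition not met within {timeout}s')
```

The key difference from a bare sleep: the test fails fast with a clear timeout error instead of silently waiting a fixed amount and asserting against stale state.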

Why API-first setup is preferred

The repo tests multiple interfaces, but setup does not always need to use the same interface the assertion path uses. API setup is usually:

  • faster
  • less brittle than UI setup
  • easier to keep deterministic
  • more reusable across API, CLI, and UI tests

A common pattern is:

  • arrange with API
  • act with CLI or UI
  • assert on the behavior you actually care about

Test writing conventions

Naming

  • test modules use test_<feature>.py
  • test functions usually follow:
    • test_positive_<action>_<entity>
    • test_negative_<action>_<entity>
    • test_post_<action>_<entity> for post-upgrade checks

See Test Case Naming Conventions for more.

Test structure

Robottelo follows pytest patterns, but tests are much easier to review and maintain when they also follow a clear Arrange-Act-Assert structure.

Examples:

  • Arrange with fixtures and helper creation
  • Act through the interface the test is exercising
  • Assert with focused, readable checks

For UI and CLI tests, prefer to keep the setup in API or fixture code unless the setup itself is part of what the test is validating.

Test IDs and intent

  • keep an existing :id: when the test still represents the same logical case
  • generate a new :id: when the purpose or essential flow of the test changes

Changing implementation details alone is usually not enough reason to change the test id.

Docstrings and testimony metadata

Functional tests should carry the testimony fields used by the repository. Common required fields are:

  • :Requirement:
  • :id:
  • :steps:
  • :expectedresults:
  • :CaseAutomation:
  • :CaseComponent:
  • :CaseImportance:
  • :Team:

Optional fields commonly used in the repo include:

  • :BlockedBy:
  • :Verifies:
  • :Parametrized:
  • :Setup:
  • :Teardown:

Why docstrings matter here

In many projects, docstrings are just documentation. In Robottelo, they also feed collection metadata and downstream reporting. That is why “good enough” freeform prose is not enough for many functional tests.

Validate docstrings with:

make test-docstrings

Markers

Use repository markers intentionally. Several markers drive fixture parametrization, collection, or infrastructure behavior rather than simply labeling tests.

See Pytest Markers for current guidance.

Fixtures and helpers

When to add a fixture

Use a fixture when you need reusable setup/teardown behavior, scope-aware caching, or composition with other fixtures.

Place fixtures in:

  • pytest_fixtures/core/ for framework-wide setup
  • pytest_fixtures/component/ for component-specific reusable setup

Prefer fixtures when:

  1. setup and teardown need to be paired
  2. the same setup is reused across multiple tests
  3. scope-based caching will materially reduce cost
  4. the setup depends naturally on other fixtures

Avoid forcing something into a fixture when a local helper keeps the test easier to read.
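
A component-level fixture sketch: module scope caches the setup for every test in the module, and the code after `yield` is the paired teardown. The fixture and entity names follow patterns used in pytest_fixtures/component/, but treat the exact API call as an assumption:

```python
import pytest


@pytest.fixture(scope='module')
def module_lce(module_target_sat, module_org):
    # Setup: created once per module, then cached by scope
    lce = module_target_sat.api.LifecycleEnvironment(organization=module_org).create()
    yield lce
    # Teardown: runs after the last test in the module finishes
    lce.delete()
```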

When to add a helper

Use helpers for reusable operations that do not need pytest fixture semantics.

Preferred locations:

  • robottelo/host_helpers/api_factory.py for cross-interface setup helpers
  • robottelo/host_helpers/cli_factory.py or ui_factory.py only when the helper truly belongs to that interface
  • robottelo/host_helpers/*_mixins.py for host-oriented behavior
  • robottelo/utils/ for framework utilities not tied to a host object

Practical placement guidance

Use this rule of thumb:

Use this rule of thumb:

  • If the code describes a host object → robottelo/hosts.py or the relevant host class
  • If the code performs reusable operations on a host object → robottelo/host_helpers/*_mixins.py
  • If the code is a reusable cross-interface setup helper → api_factory.py first, unless API is not appropriate
  • If the code is a generic utility not tied to a host → robottelo/utils/
  • If the code is only useful to one test module → the test module itself

Edge case: duplicate helpers

If you find two helpers doing nearly the same thing:

  • merge them when the behavior is truly the same and optional arguments keep the result clear
  • keep them separate when interface-specific behavior would make a merged helper harder to understand

Reducing duplication is good, but not if it produces a confusing “do everything” helper.
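
A hypothetical illustration of a good merge (all names invented; only the shape matters). One optional argument absorbs the difference without obscuring intent:

```python
# Before: two near-duplicate helpers
def create_repo(sat, org):
    return sat.api.Repository(organization=org).create()


def create_docker_repo(sat, org):
    return sat.api.Repository(organization=org, content_type='docker').create()


# After: one helper whose single optional argument keeps intent obvious
# (this definition deliberately replaces the "before" version above)
def create_repo(sat, org, content_type='yum'):
    return sat.api.Repository(organization=org, content_type=content_type).create()
```

If the merged version had needed three or four mode flags instead of one, keeping the helpers separate would have been the better call.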

Additional coding guidance

Use constants intentionally

If the same literal value appears across tests or helper code, prefer an existing constant or add one when it improves clarity. Hardcoding is especially painful in long-lived functional suites.

Clean up correctly

Use fixture teardown or finalizers where appropriate. Functional tests often touch real infrastructure, so cleanup is not optional bookkeeping; it is part of keeping later tests reliable.
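
One useful distinction, sketched below with illustrative entity names: `request.addfinalizer` registers cleanup as soon as the resource exists, so it still runs if a later setup step raises, whereas a plain `yield` fixture only tears down once setup completes:

```python
import pytest


@pytest.fixture
def function_host(request, target_sat):
    host = target_sat.api.Host().create()
    # Register cleanup immediately: even if a later setup line raises,
    # the host is still deleted and later tests stay reliable.
    request.addfinalizer(host.delete)
    return host
```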

Log enough to debug failures

Robottelo failures can come from infrastructure, product state, network, test logic, or external services. Good logging makes triage dramatically easier.

Useful rule of thumb:

  • log enough to reconstruct the important steps
  • avoid hiding the main action behind vague helper output
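
A stdlib sketch of that rule of thumb (the helper name is hypothetical, and Robottelo code would pull its shared logger rather than creating one): name the action and include identifying context so a failure can be reconstructed from the log alone.

```python
import logging

logger = logging.getLogger(__name__)


def sync_repo(repo):
    """Hypothetical helper: log the main action with identifying context."""
    logger.info('Syncing repo %s (id=%s)', repo['name'], repo['id'])
    result = repo  # stand-in for the real sync call
    logger.info('Sync finished for repo %s', repo['name'])
    return result
```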

Linting and validation

Local checks currently center on ruff, pytest, and the Makefile.

Common commands:

ruff check .
ruff format .
make test-robottelo
make test-docstrings

make docs builds the Sphinx docs if you touched documentation-related code.

Pull requests

  • keep PRs focused
  • reuse existing patterns in nearby tests and fixtures
  • explain new fixtures, markers, or collection behavior in the PR description
  • if you add a new custom marker, register it in the relevant plugin or conftest.py so pytest does not warn about unknown markers
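
Registering a marker looks like the following conftest.py sketch. The marker name is hypothetical; the `pytest_configure` hook and `config.addinivalue_line` call are standard pytest:

```python
# conftest.py (sketch)
def pytest_configure(config):
    # Register the marker so pytest does not emit "unknown marker" warnings
    config.addinivalue_line(
        'markers',
        'my_component: tests exercising the hypothetical my_component area',
    )
```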

What reviewers usually need from you

Good PR descriptions save reviewers time. The most helpful PRs explain:

  • what changed
  • why the current behavior was wrong or incomplete
  • whether the change affects setup, collection, fixtures, or infrastructure
  • whether there are special run instructions or environment assumptions

Labels and branch context

Repository labels matter beyond cosmetics. In particular, backport and branch labels affect how maintainers reason about cherry-picks and stream-only changes.

If your change is branch-specific, say that clearly rather than making reviewers infer it from the diff.

For review-specific guidance, see the Reviewers Guide.

FAQ

Where should a new property or method describing Satellite, Capsule, or ContentHost go?

Put descriptive properties and methods on the relevant host classes. If the code is mainly “what this host is,” that usually belongs with the host object.

Where should operations on Satellite, Capsule, or ContentHost go?

Put reusable operational behavior in the appropriate mixin under robottelo/host_helpers/*_mixins.py.

When should I prefer host-object methods over utils functions?

Prefer host-object methods when the behavior depends on existing host state, host methods, or host-specific semantics. Prefer utils when the logic is generic and not meaningfully tied to a host abstraction.

When should I not create a new fixture?

If the behavior is local, not reusable, and does not benefit from pytest scope or teardown semantics, a small local helper is often better.

I need a helper for API, CLI, and UI tests. Which interface should drive it?

Prefer API first. Move to CLI or UI only when the setup genuinely depends on that interface.
