Geeky Stuff: Build Process

K4all Build Process: Two-Stage Ignition

The k4all build process involves two distinct ignition configurations that serve different purposes:

1. Installation Image Ignition (install.bu -> install.ign)

This ignition configuration is embedded into the live ISO and controls the installation process.

Key components:

  • Configures the live installation environment
  • Contains installation scripts and utilities
  • Sets up the automated installation process
  • Handles host configuration (disk partitioning, network setup, etc.)
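
For orientation, the snippet below shows the generic Fedora CoreOS tooling this stage relies on: Butane transpiles install.bu into install.ign, and coreos-installer embeds it into the live ISO. The ISO and output file names are placeholders, not the actual k4all artifact names.

```bash
# Sketch only: transpile install.bu and embed the result into a live ISO.
# The ISO file names below are placeholders, not the real k4all artifacts.
podman run --rm -i quay.io/coreos/butane:release --pretty --strict \
  < install.bu > install.ign

coreos-installer iso ignition embed \
  -i install.ign -o k4all-install.iso fedora-coreos-live.x86_64.iso
```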

2. Target System Ignition (k8s-<role>.bu -> k8s-<role>.ign)

This is the ignition configuration for the actual installed system, generated from multiple butane files:

  1. Base configuration (k8s-base.bu)
  2. Role-specific configurations:
  • Bootstrap node (k8s-bootstrap.bu)
  • Control plane node (k8s-control.bu)
  • Worker node (k8s-worker.bu)
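
The role-specific Ignition files are produced with the same tooling; exactly how k8s-base.bu is combined with each role file is defined by build.sh and the Butane configs themselves, so the loop below is only a sketch of the per-role transpile step.

```bash
# Sketch only: transpile each role's Butane config to Ignition.
# The merge of k8s-base.bu with the role configs is handled elsewhere
# (build.sh / the Butane files); this shows the transpile step only.
for role in bootstrap control worker; do
  podman run --rm -i quay.io/coreos/butane:release --pretty --strict \
    < "k8s-${role}.bu" > "k8s-${role}.ign"
done
```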

Build Process Flow

```mermaid
sequenceDiagram
    participant Builder
    participant Butane
    Builder->>Butane: Generate install.ign
    Note over Butane: Processes install.bu
    Builder->>Butane: Generate k8s.ign
    Note over Butane: Combines k8s-base.bu<br/>with role-specific .bu
    Builder->>Output: Embed ignitions & files
    Output->>"ISO / Virt IMGs": Creates installation images
```

Installation Flow

  1. System boots with install.ign
  2. Installation scripts prepare the system
  3. The CoreOS installer runs with the role-specific k8s-<role>.ign
  4. System reboots into final configuration
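
Step 3 usually amounts to a single coreos-installer call; the target device and Ignition path below are placeholders, the real values come from the k4all installation scripts.

```bash
# Sketch only: write Fedora CoreOS to disk using the target Ignition config.
# /dev/sda and the .ign path are placeholders, not the actual k4all values.
sudo coreos-installer install /dev/sda \
  --ignition-file /run/media/iso/k8s-bootstrap.ign
```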

Key Files Involved

  • install.bu: Installation environment configuration
  • k8s-base.bu: Base system configuration
  • Role-specific .bu files:
    • k8s-bootstrap.bu
    • k8s-control.bu
    • k8s-worker.bu

Build Methods

1. Local Build

The local build process is driven by the build.sh script:

```mermaid
sequenceDiagram
    participant User
    participant BuildScript
    participant Docker/Podman
    participant Butane
    participant CoreOSInstaller
    User->>BuildScript: Execute build.sh
    BuildScript->>BuildScript: Check for Docker/Podman
    BuildScript->>BuildScript: Fetch latest Fedora CoreOS ISO
    BuildScript->>Docker/Podman: Run Butane container
    Docker/Podman->>Butane: Generate base ignition config
    BuildScript->>Docker/Podman: Generate role-specific configs
    Docker/Podman->>CoreOSInstaller: Embed ignition into ISO
    CoreOSInstaller->>BuildScript: Return customized ISO
    BuildScript->>User: Final ISO files
```
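
For intuition, the first steps of the diagram map to commands like these (a sketch, not the literal contents of build.sh):

```bash
# Rough sketch of steps build.sh automates; not its literal contents.
# Detect a container runtime for running the Butane container.
if command -v podman >/dev/null 2>&1; then
  CONTAINER_CMD=podman
elif command -v docker >/dev/null 2>&1; then
  CONTAINER_CMD=docker
else
  echo "Neither podman nor docker found" >&2
  exit 1
fi

# Fetch the latest stable Fedora CoreOS live ISO.
coreos-installer download -s stable -p metal -f iso
```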

2. GitHub Workflows

The GitHub workflow build process is triggered by:

  • Pushes to the release branch
  • Version tags (v*)
  • Manual workflow dispatch

```mermaid
sequenceDiagram
    participant GitHub
    participant Runner
    participant Container
    participant Release
    GitHub->>Runner: Trigger workflow
    Runner->>Runner: Setup environment
    Runner->>Runner: Download Fedora CoreOS
    Runner->>Container: Generate configs
    Container->>Container: Build ISOs for roles
    Container->>Container: Create QCOW2/VDI images
    Runner->>Release: Create GitHub release
    Release->>Release: Upload artifacts
```
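
The "Create QCOW2/VDI images" step boils down to image conversions along these lines (file names are illustrative only):

```bash
# Sketch of the QCOW2 -> VDI conversion step; file names are illustrative.
qemu-img convert -f qcow2 -O vdi k4all-bootstrap.qcow2 k4all-bootstrap.vdi
```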

Build Outputs

The workflow build method produces:

  1. Bootstrap node ISO/image (single-node / single-master cluster)
  2. Control plane node ISO/image (for HA clusters)
  3. Worker node ISO/image (for adding worker nodes)

The build.sh build method produces:

  1. Bootstrap node ISO (single-node / single-master cluster)
  2. Control plane node ISO (for HA clusters)
  3. Worker node ISO (for adding worker nodes)

With the workflow build, each role is produced in three formats:

  • ISO file (for bare metal installation)
  • QCOW2 image (for KVM/QEMU)
  • VDI image (for VirtualBox)
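
As a usage example, a QCOW2 image can be booted under KVM/libvirt roughly like this (VM name, resources, and network settings are assumptions, not k4all defaults):

```bash
# Sketch: import and boot a k4all QCOW2 image with libvirt.
virt-install --name k4all-bootstrap --memory 8192 --vcpus 4 \
  --import --disk path=k4all-bootstrap.qcow2 \
  --os-variant fedora-coreos-stable \
  --network network=default --noautoconsole
```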

Key Components

The build process includes several key components:

  • Base configuration (k8s-base.bu)
  • Role-specific configurations (k8s-bootstrap.bu, k8s-control.bu, k8s-worker.bu)
  • Installation scripts and utilities
  • Custom manifests and configurations

Version Handling

The build process includes version management:

  • Release tags (v*) trigger production builds
  • Alpha/Beta releases are marked as pre-releases
  • The 'latest' tag is used for development builds
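
A hypothetical sketch of that tag logic follows; the actual workflow implements it with its own steps and variable names.

```bash
# Hypothetical tag handling; variable names are assumptions.
TAG="${GITHUB_REF_NAME:-latest}"            # e.g. v1.2.0, v1.2.0-alpha, or a branch
case "$TAG" in
  v*-alpha*|v*-beta*) PRERELEASE=true  ;;   # alpha/beta tags -> pre-release
  v*)                 PRERELEASE=false ;;   # plain v* tags -> production build
  *)                  TAG=latest; PRERELEASE=false ;;  # anything else -> development build
esac
```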

Post-Build Verification

The workflow build method includes verification steps:

  • Image format conversions
  • Proper tagging and labeling
  • Release artifact organization
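
For example, a minimal check of the converted images could look like this (illustrative only, not the workflow's exact steps):

```bash
# Sketch of simple post-build checks; not the workflow's exact steps.
qemu-img info k4all-bootstrap.qcow2   # confirm the QCOW2 conversion succeeded
qemu-img info k4all-bootstrap.vdi     # confirm the VDI conversion succeeded
ls -lh *.iso *.qcow2 *.vdi            # review the artifacts before upload
```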

The resulting images are (should be 😀) ready for deployment in various environments, from single-node development clusters to multi-node production setups.

Source Container Build Workflow

This GitHub workflow (build-container.yaml) creates a minimal container that packages the source files for k4all updates. It is currently in the testing phase.

Purpose

The container serves as a lightweight package to distribute source files for updates, making it easier to version and distribute k4all components.

Workflow Triggers

The workflow runs when:

  • Code is pushed to the release branch
  • A version tag is created (e.g., v1.0.0, v1.0.0-alpha)
  • It is triggered manually through the GitHub UI

Container Structure

The container is extremely lightweight:

  • Uses scratch as base image (completely empty)
  • Only contains the source files copied to /src
  • No operating system, no runtime, just files
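
A scratch-based source container of this kind can be described in just two lines; the Containerfile and tag below are a sketch, not necessarily the files used in the repo.

```bash
# Sketch: build a scratch-based container holding only the source tree in /src.
cat > Containerfile <<'EOF'
FROM scratch
COPY . /src
EOF
podman build -t ghcr.io/gpillon/k4all:latest -f Containerfile .
```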

Version Handling

The workflow implements smart versioning:

  • Alpha/Beta releases: Uses the actual version (e.g., v1.0.0-alpha)
  • Other releases: Tagged as latest

Container Registry

  • Images are pushed to GitHub Container Registry (ghcr.io)
  • Format: ghcr.io/<owner>/k4all:<version>
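
To consume the container, the files can be copied out without running it (a scratch image has nothing to execute); the commands below are a sketch of that pattern.

```bash
# Sketch: extract /src from the source container without running it.
# A dummy command ("noop") is passed because the scratch image has no entrypoint.
podman pull ghcr.io/gpillon/k4all:latest
podman create --name k4all-src ghcr.io/gpillon/k4all:latest noop
podman cp k4all-src:/src ./k4all-src
podman rm k4all-src
```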