tool_dev - OpenNebula/one-apps GitHub Wiki
# Developer Information
## Repository Structure
```
├── appliances          # src for "service appliances", containing the appliance logic
│   ├── example         # example appliance implementation logic
│   ├── Harbor          # Harbor Docker Registry appliance
│   ├── lib             # Ruby libraries for appliance service management
│   ├── MinIO           # MinIO storage solution appliance logic
│   ├── OneKE           # OneKE service logic
│   ├── scripts
│   ├── VRouter         # VRouter logic
│   └── Wordpress       # logic for the WordPress appliance from the OpenNebula Marketplace
├── build
├── context-linux       # src for Linux context packages
├── context-windows     # src for Windows context packages
├── export              # where built images are dumped to
│   ├── alma8.qcow2
│   ├── alpine318.qcow2
└── packer              # Packer build scripts for each distribution
    ├── alma
    ├── alpine
    ├── alt
    ├── amazon
    ├── build.sh
    ├── debian
    ├── devuan
    ├── fedora
    ├── freebsd
    ├── ol
    ├── opensuse
    ├── plugins.pkr.hcl
    ├── postprocess.sh  # script used by every appliance to remove any ID acquired during the build process from the VM
    ├── rhel
    ├── rocky
    ├── service_example
    ├── service_Harbor
    ├── service_MinIO
    ├── service_OneKE
    ├── service_OneKEa -> service_OneKE/
    ├── service_VRouter
    ├── service_Wordpress
    └── ubuntu
```
## Appliance Build Process
Appliances are built with Packer templates, each tailored to the particularities of a given distribution. Within the `packer` directory, the folder for each distribution contains:

- A Packer template file
- Customization scripts (`*.sh` files)
- `cloud-init.yaml` user data (if required)
```
└── packer
    ├── alma
    │   ├── 10-upgrade-distro.sh
    │   ├── 11-update-grub.sh
    │   ├── 11-zzz.sh
    │   ├── 80-install-context.sh.8
    │   ├── 80-install-context.sh.9
    │   ├── 81-configure-ssh.sh
    │   ├── 98-collect-garbage.sh
    │   ├── alma.pkr.hcl            # Packer template
    │   ├── cloud-init.yml
    │   ├── plugins.pkr.hcl -> ../plugins.pkr.hcl
    │   └── variables.pkr.hcl
```
If there are multiple versions of the same distribution (as with Debian 10, 11, and 12), a shared Packer template is used for all of them, and the version number is passed to Packer as a parameter.
Whenever feasible, distributions that provide "cloud" images serve as the starting point, launched via cloud-init within the Packer QEMU builder. Where no cloud image is available, the default installation image is used and an installer is executed. This stage primarily installs the `one-context` packages, updates the system, and removes redundant components.

Post-processing then takes place within Packer, using `virt-sysprep` and `virt-sparsify`. Lastly, the image is compressed with `qemu-img convert -c`. While compression can optionally be performed within Packer itself, a separate final compression step often yields smaller images.
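The final compression step can be sketched as a small shell wrapper; `compress_image` is a hypothetical helper for illustration, not part of the one-apps build scripts, and the exact invocation used by the build may differ:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the final compression pass. `qemu-img convert -c`
# rewrites the qcow2 with compressed clusters, which usually shrinks the
# file further than Packer's built-in compression.
compress_image() {
    local src="$1" dst="$2"
    qemu-img convert -c -O qcow2 "$src" "$dst"
}

# Example (paths are illustrative):
# compress_image export/alma8.raw export/alma8.qcow2
```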
## Creating a New Appliance
To create a new appliance, you'll need to:

- Create your custom appliance logic within a directory under `./appliances`.
  - Use the provided set of Bash and Ruby libraries for the framework that handles interaction with `one-context` and streamlines the build process.
  - Define the logic for the install phase. This logic will be invoked when running the `make <your_appliance_name>` command.
  - Define the logic for the configure phase. This logic will be invoked when the appliance is instantiated as a VM by the OpenNebula Front-end.
- Create a new set of Packer scripts for the appliance:
  - Link your custom appliance logic in the Packer entry point.
  - Pick a base Linux distribution to extend as the appliance's base OS.
- Add a `Makefile` and `Makefile.cfg` entry linking those Packer scripts.
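The install and configure phases from the steps above can be sketched as a minimal appliance script. The `service_install` and `service_configure` entry points are the ones the framework invokes; the file name and the echo-only bodies are illustrative placeholders:

```shell
#!/usr/bin/env bash
# Minimal sketch of ./appliances/myapp/myapp.sh (names are illustrative).
# The one-apps framework sources this file and calls one function per
# lifecycle stage; real appliances install packages and render configs here.

service_install() {
    # Runs at image build time, via `make <your_appliance_name>`.
    echo "install: baking packages into the image"
}

service_configure() {
    # Runs when the VM is instantiated, after one-context applies the context.
    echo "configure: applying ONEAPP_* context variables"
}
```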
## Example Use Case: Database as a Service
An example appliance is provided at `./appliances/example`. It creates a qcow2 image that holds a Database as a Service workload using MariaDB behind the scenes. To build it, run:

```shell
make service_example
```
This will:

- Build the context packages, if they are not already built.
- Build the AlmaLinux 8 image, if it is not already built (this appliance is based on AlmaLinux 8).
- Build the `service_example.qcow2` image in the `./export` directory. The appliance build logic for this step is defined in `./appliances/example/example.sh`, within the function `service_install`.
This `example.sh` script is sourced by Packer at `./packer/service_example/example.pkr.hcl` using the following code:

```hcl
provisioner "file" {
  source      = "appliances/example/example.sh"
  destination = "/etc/one-appliance/service.d/appliance.sh"
}
```
The source code in this example is thoroughly commented; please refer to it for more details.
## Using the New Appliance
To use the new appliance, create a VM template with the following contents:

```
CONTEXT=[
  NETWORK="YES",
  ONEAPP_DB_NAME="mariadb_database",
  ONEAPP_DB_PASSWORD="mariadb_user_password",
  ONEAPP_DB_ROOT_PASSWORD="mariadb_root_password",
  ONEAPP_DB_USER="mariadb_user",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
CPU="1"
DISK=[
  IMAGE_ID="9" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="vnc" ]
MEMORY="1024"
NIC=[
  NETWORK_ID="0" ]
OS=[
  ARCH="x86_64" ]
VCPU="2"
```
Note the following attributes in the `CONTEXT` section:

```
ONEAPP_DB_NAME="mariadb_database",
ONEAPP_DB_PASSWORD="mariadb_user_password",
ONEAPP_DB_ROOT_PASSWORD="mariadb_root_password",
ONEAPP_DB_USER="mariadb_user",
```

These parameters define the information for connecting to the database.
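Inside the instantiated VM, those context values map directly onto client connection parameters. A hedged sketch follows (the placeholder credentials come from the template above; the availability of the `mysql` client in the guest is an assumption):

```shell
#!/usr/bin/env bash
# Context values as set in the example template (placeholders, not secrets).
ONEAPP_DB_NAME="mariadb_database"
ONEAPP_DB_USER="mariadb_user"
ONEAPP_DB_PASSWORD="mariadb_user_password"

# The command you would run inside the VM to reach the provisioned database:
cmd="mysql -u ${ONEAPP_DB_USER} -p${ONEAPP_DB_PASSWORD} ${ONEAPP_DB_NAME}"
echo "$cmd"
```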
When you instantiate this template, you should obtain a VM with a MariaDB server running a database that you can connect to. The logic for this process is defined in `./appliances/example/example.sh`, within the function `service_configure`.
You can also modify this VM Template and use OpenNebula's User Inputs feature to create a Wizard where the end user can provide values for the VM parameters, making it possible to instantiate the VM through both the CLI and the web UI.
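For instance, the database parameters could be exposed as User Inputs roughly like the fragment below. This is a hedged sketch: the `mandatory|type|description` field syntax follows OpenNebula's user-input convention, but the exact descriptions are illustrative, so check the OpenNebula template reference before relying on it.

```
USER_INPUTS=[
  ONEAPP_DB_NAME="M|text|Name of the database to create",
  ONEAPP_DB_USER="M|text|Database user name",
  ONEAPP_DB_PASSWORD="M|password|Password for the database user",
  ONEAPP_DB_ROOT_PASSWORD="M|password|Password for the database root user" ]
```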
## Troubleshooting
You can check whether your appliance performed the operations you expected by inspecting the logs corresponding to each of the stages used in your development.
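As a starting point for log inspection, the sketch below assumes the framework keeps its runtime state under `/etc/one-appliance` (the destination used by the Packer provisioner above) and writes stage logs under `/var/log/one-appliance/`; the latter path is an assumption, not confirmed by this document, and the helper name is illustrative:

```shell
#!/usr/bin/env bash
# Hypothetical helper; /var/log/one-appliance/ is an assumed location --
# verify it inside your VM.
show_appliance_logs() {
    # Install-stage output is printed by Packer during `make <appliance>`;
    # configure-stage output lands on the instantiated VM.
    ls /var/log/one-appliance/ 2>/dev/null \
        || echo "no appliance logs found; check under /etc/one-appliance instead"
}
```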