uyuni terraform integration - uyuni-project/uyuni GitHub Wiki

The Public Cloud inhabitant

Automatically register new VMs created by Terraform

Best practices when registering machines in management software

According to the documentation, one should use cloud-init, which avoids configuring an SSH connection to the host. If cloud-init cannot be used for some reason, a remote execution provisioner should be used. Terraform even has a section on provisioner best practices.

Uyuni terraform provider (NOT RECOMMENDED)

A Uyuni provider would let us use the output of another resource (an AWS instance, for example) to extract the IP address or DNS name, pass additional properties such as an SSH key or a bastion machine, and declaratively onboard systems to Uyuni.

For this solution we would need:

  • Access to the Uyuni XML-RPC API. This can be a security issue, depending on where terraform apply is run
  • The private DNS name or IP address of the recently created machine, so Uyuni knows where to connect
  • SSH connection information for Uyuni to connect and bootstrap the machine: either a username and password or an authentication key
  • One resource defined for each created machine. The number of machines can be dynamic
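As a sketch only, a resource for such a provider might look like the following. No such provider exists today; the uyuni_system resource type and all of its arguments are hypothetical and shown purely to illustrate how the output of an AWS resource would feed the Uyuni resource:

```hcl
# Hypothetical sketch: the "uyuni_system" resource type and its
# arguments are illustrative, not an existing provider.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"
}

resource "uyuni_system" "web" {
  # The output of the AWS resource is used to locate the new machine.
  host            = aws_instance.web.private_dns
  ssh_user        = "ec2-user"
  ssh_private_key = file("~/.ssh/id_rsa")
  activation_key  = "1-terraform" # hypothetical activation key name
}
```

Note that this forces exactly the one-resource-per-machine pattern listed in the cons below.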

Cons:

  • More codebase to support
  • Not the recommended way to onboard machines into a system management tool
  • Users are forced to define one more resource per machine
  • Possible security issue in exposing the Uyuni XML-RPC API

Cloud-init (RECOMMENDED)

We can set a cloud-init runcmd when creating the machine which downloads the bootstrap script and registers the machine. No connection is needed between the machine running Terraform and the recently created machine, nor between the machine running Terraform and the Uyuni server.

Configure bootstrap script: https://documentation.suse.com/external-tree/en-us/suma/4.0/suse-manager/client-configuration/registration-bootstrap.html

Run command: curl -s http://hub-server.tf.local/pub/bootstrap/bootstrap-default.sh | bash -s

Note: any time user_data is updated to change the provisioning, Terraform will destroy and then recreate the machines, which will come back with new IP addresses, etc.

If bootstrapping fails but the machine is created successfully, there should ideally be a way to onboard these machines. In other words: should we find a way to bootstrap an existing machine created with Terraform, or can we simply use the existing mechanism and do it by hand?

Cloud-init on AWS examples:
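As a minimal sketch on AWS, the run command from above can be passed to cloud-init through user_data (the AMI ID is a placeholder; the server hostname is the one used in this page):

```hcl
resource "aws_instance" "client" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"

  # cloud-init runcmd: on first boot, download the bootstrap script
  # from the Uyuni server and register the machine. No SSH connection
  # from the Terraform host is required.
  user_data = <<-EOF
    #cloud-config
    runcmd:
      - curl -s http://hub-server.tf.local/pub/bootstrap/bootstrap-default.sh | bash -s
  EOF
}
```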

remote-exec Provisioner (RECOMMENDED)

https://www.terraform.io/docs/provisioners/remote-exec.html

A remote call running on the recently created machine downloads the bootstrap script and registers the machine in Uyuni. For this solution we need to:

  • Open an SSH connection between the machine running Terraform and the recently created one
  • Define a bootstrap script the same way as in the cloud-init section
  • Download and run the command the same way as in the cloud-init section
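A minimal sketch of the same registration via remote-exec, assuming an AWS instance reachable over SSH (AMI ID and SSH user are placeholders):

```hcl
resource "aws_instance" "client" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"

  # SSH connection from the machine running `terraform apply`
  # to the newly created VM.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }

  # Same command as in the cloud-init section, run over SSH instead.
  provisioner "remote-exec" {
    inline = [
      "curl -s http://hub-server.tf.local/pub/bootstrap/bootstrap-default.sh | sudo bash -s",
    ]
  }
}
```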

Conclusion

Registration should be done using cloud-init and, in case that is not possible, using the remote-exec provisioner. For both solutions, we don't need to develop any new code. Documentation should be written to help customers define the auto-registration of recently created machines in the cloud.

Register existing cloud VMs

Option 1: enhance virtual host gatherer (VHG) modules 

Return IP addresses and hostnames and offer them in the onboarding page/API. The registration page should also be enhanced to support SSH bastion hosts.

Option 2 (discarded): get IP addresses and hostnames from Terraform state

Cons:

  • Schema depends on the provider and backward compatibility is not ensured
  • Terraform specific solution

Updated list of systems

Option 1

A VHM (Virtual Host Manager) can connect to cloud providers and virtual host managers to inspect which machines are available. We can develop a feature which looks for hosts registered as systems in Uyuni that are no longer visible in the VHM, to find which can potentially be removed/deleted.

Machines analyzed for possible deletion should first be linked to machines inspected in the VHM. The workflow can be:

  • Register the cloud configuration/provider in the VHM for inspection
  • Start registering machines
  • Newly registered machines are automatically linked to the corresponding machine in the VHM
  • When the deletion analysis runs, only machines with a corresponding match in the VHM should be analyzed. If the match in the VHM is no longer present, the registered machine can be proposed for deletion

This implementation can work with all existing tools (Terraform, CloudFormation, etc.) since it is tied to each provider and not to a specific tool.

Option 2

A Terraform remote-exec provisioner running a removal script. This should be discarded in favor of option 1, since it is Terraform-specific.

The Terraform (power) user

Sharing Terraform state

Option 1: implement endpoints for the http backend

Docs: https://www.terraform.io/docs/backends/types/http.html

Uyuni would need to implement all the required API methods. This allows teams to share the Terraform state file. There is no direct support for Terraform workspaces.
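From the user's side, pointing Terraform at such a backend would be a standard http backend block. The endpoint URLs below are placeholders, not an existing Uyuni API:

```hcl
terraform {
  backend "http" {
    # Placeholder URLs: Uyuni would need to implement GET/POST/DELETE
    # state endpoints plus the optional LOCK/UNLOCK methods.
    address        = "https://uyuni.example.com/terraform/state/myproject"
    lock_address   = "https://uyuni.example.com/terraform/state/myproject/lock"
    unlock_address = "https://uyuni.example.com/terraform/state/myproject/lock"
    # Credentials should be supplied via `terraform init -backend-config=...`,
    # since backend blocks cannot reference variables.
  }
}
```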

Option 2: implement an Uyuni backend

It would not be too difficult to implement, and it would be straightforward to add workspace support. There is a chance we can tie workspaces to CLM environments.

Only very basic security checks are possible. Nothing granular.

Option 3: implement the enhanced backend API

High barrier to entry (~300 methods), and the only implementation available at this point is Terraform Cloud/Enterprise. Not sure this could be an option.