Uyuni Development Environment on macOS - uyuni-project/uyuni GitHub Wiki
The general architecture for developing Uyuni on macOS consists of an Uyuni server in a virtual machine that's controlled by Terraform on the host. The rest of the tooling is set up natively.
A fully native development environment on macOS is not possible as of this writing.
Remaining known showstoppers are:
As a possible future alternative, nested virtualization is currently blocked by the following tickets:
The below setup has been tested on macOS Catalina (10.15) through Ventura (13.3).
Hardware requirements largely depend on how many machines you want to use in your development setup. For a detailed overview, see SUSE Manager Hardware Requirements.
The recommended minimal values are:
- for a server: at least 2 processor cores, at least 4GB memory
- for a client and a minion: at least 1 processor core, at least 2GB memory
The default configuration uses three machines: one server, one client and one minion.
From the setup side, Xcode command line tools and Homebrew are used for tooling dependencies, while VMware Fusion is used for virtualization. Alternative setups are also possible, but are not addressed in this guide.
- Install Xcode command line tools: open Terminal and run
$ xcode-select --install
If the service times out, you can also download the tools from the Apple Developer portal.
- Once done, run
$ sudo xcodebuild -license
to accept the license agreement neither one of us read.
- Download and install Homebrew and Homebrew Cask
- Download and install VMware Fusion
After VMware Fusion is installed, give it escalated security permissions. Open System Preferences → Privacy & Security → Accessibility, then add and enable VMware Fusion.
Download and install JDK 11.
Install Python, Ant, RPM and rpm2cpio:
$ brew install python ant rpm rpm2cpio
In your .bashrc
or equivalent, add or combine:
export PATH="/usr/local/bin:$PATH"
export LDFLAGS="-L/usr/local/opt/libarchive/lib"
export CPPFLAGS="-I/usr/local/opt/libarchive/include"
export PKG_CONFIG_PATH="/usr/local/opt/libarchive/lib/pkgconfig"
Source your rc or relaunch your terminal session to ensure the environment is correct.
Confirm you're using the Brew version of pip3; the command should be found in /usr/local/bin, not /usr/bin. If the latter is found instead, PATH is incorrectly configured.
$ which pip3
/usr/local/bin/pip3
Install yaml for Python:
$ pip3 install PyYAML
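As a quick, optional sanity check that PyYAML landed in Homebrew's Python (this assumes python3 now resolves to /usr/local/bin/python3):

```shell
# Parse a trivial YAML document; prints "server" if PyYAML is importable
python3 -c 'import yaml; print(yaml.safe_load("name: server")["name"])'
```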
In a location of your choice, clone obs-to-maven, and then add it to $PATH:
$ cd ~/Repositories
$ git clone [email protected]:uyuni-project/obs-to-maven.git
$ cd obs-to-maven
$ pip3 install setuptools
$ pip3 install .
Note: You can also run python3 setup.py install instead of pip3 install . to install the obs-to-maven setup after installing setuptools.
Then in your .bashrc or equivalent:
export PATH="<path to parent directory>/obs-to-maven:$PATH"
Edit obs-to-maven/obs_maven/core.py and update the shebang to use Homebrew's Python and the related packages; note the addition of local:
#!/usr/local/bin/python3
- Download the openSUSE Leap DVD image
- Open VMware Fusion, select File → New and then drag-n-drop the ISO onto the window
- Create the VM, then open the Processors & Memory settings pane
- for a server: at least 2 processor cores, at least 4GB memory
- for a client and a minion: at least 1 processor core, at least 2GB memory
- Open the Advanced settings pane, turn on Disable Side Channel Mitigations
- Start the VM
Handy tip: VMware Fusion captures your mouse and keyboard by default; the key combination to break out is cmd + ctrl.
If you have correctly inserted the disk image, you'll be greeted with the openSUSE boot menu.
- Choose Installation
- Follow the Green Next Button, most default settings suffice, but notably:
- Be sure to choose the correct keyboard layout for your keyboard
- For the system role, in the list of desktop environments, choose Server
- Be sure to note down and remember your user and password, you will need those later
- Once you've confirmed the installation, grab a snack because the installation may take a while
- If you're having trouble connecting to repositories and are using a VPN, retry without the VPN
If everything's correct, you'll now have a functional openSUSE virtual machine set up.
It is a good idea to take a snapshot of your VM before proceeding with further configuration, and in general, periodically when you're done with bigger changes. Use the menu item Virtual Machine → Snapshots.
Once your installation is successful, you'll be greeted by a login prompt; log in with the username and password you set during the installation.
Open YaST
$ sudo yast
To navigate in YaST, use the arrow keys and Enter to move within a section, Tab to move between sections and menus, Space to toggle checklist items, and the F-key shortcuts shown at the bottom of the screen.
Go to Security and Users → Firewall → Zones → Public (this is the default zone).
From the list of services on the left, add to the allowed list on the right:
- SSH (TCP port 22)
- DNS (UDP port 53)
- mDNS (UDP port 5353)
Choose Accept to save your changes and quit YaST.
If you want to use different ports for services, you can switch to the Ports tab and manually open ports as needed.
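The YaST steps above can also be done from the command line with firewalld, which openSUSE uses underneath; this is a sketch of the equivalent commands, assuming the default public zone:

```shell
# Allow SSH, DNS and mDNS in the public zone, then reload the firewall
sudo firewall-cmd --zone=public --permanent --add-service=ssh
sudo firewall-cmd --zone=public --permanent --add-service=dns
sudo firewall-cmd --zone=public --permanent --add-service=mdns
sudo firewall-cmd --reload
```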
The server's FQDN is tied to the SSL certificate it uses to securely communicate with clients. Below we're going to configure the VM as server.tf.local; if you want to use a different setup, change the values accordingly.
In the VM, set the system's hostname:
$ sudo sysctl kernel.hostname=server
$ echo "server" | sudo tee /etc/hostname
$ echo -e ".local\n.tf.local" | sudo tee -a /etc/mdns.allow
Install Avahi:
$ sudo zypper install avahi
Edit the Avahi configuration:
$ sudo vim /etc/avahi/avahi-daemon.conf
Uncomment and set the following fields as below:
host-name=server
domain-name=tf.local
browse-domains=local,tf.local
use-ipv4=yes
use-ipv6=no
allow-interfaces=eth0
deny-interfaces=
publish-hinfo=yes
publish-workstation=yes
publish-domain=yes
Edit the NSS configuration:
$ sudo vim /etc/nsswitch.conf
Uncomment and set the following fields as shown:
hosts: files mdns dns
networks: files mdns dns
Open YaST
$ sudo yast
Navigate to System → Network Settings. Under Global Options, disable IPv6. Save the settings and exit YaST.
This change requires a system reboot:
$ sudo reboot
After the reboot, enable Avahi and start it up:
$ sudo systemctl enable avahi-daemon
$ sudo systemctl start avahi-daemon
Run a status check and confirm the host name is configured as expected (should be server.tf.local):
$ systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-07 14:43:03 CEST; 1 day 8h ago
Main PID: 5143 (avahi-daemon)
Status: "Server startup complete. Host name is server.tf.local. Local service cookie is 4268603158."
Tasks: 1
CGroup: /system.slice/avahi-daemon.service
└─5143 avahi-daemon: running [server.tf.local]
Confirm that everything works as expected by pinging the VM by its name from the host:
$ ping server.tf.local
If the daemon status is correct as shown above but you cannot ping the machine, first check whether you can ping it by IP. If that doesn't work, make sure the VM network is set to Bridged Adapter and the firewall isn't blocking you. If you can ping the VM by IP but not by hostname, try flushing the DNS cache on the host. If you have trouble connecting after waking the host from sleep, restarting the Avahi daemon may help.
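As a sketch of those troubleshooting steps: the pair of commands below flushes the DNS cache on the macOS host (the exact invocation can vary between macOS versions), and the last one restarts Avahi inside the VM:

```shell
# On the macOS host: flush the DNS cache (may vary by macOS version)
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder

# In the VM: restart the Avahi daemon, e.g. after waking the host from sleep
sudo systemctl restart avahi-daemon
```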
If everything works as expected, take a snapshot of your VM.
SSH will allow Terraform to control your virtual machine to reconfigure it as needed.
Your installation should come with an SSH server already installed and running. To confirm this, run
$ systemctl status sshd
The status check should return that the SSH daemon is active. If it's inactive or no such daemon is found, see this guide.
The default server setup should already have all the correct configuration values set. To confirm this, try SSHing into the machine as root from the host:
$ ssh [email protected]
If everything works as expected, take a snapshot of your VM state, and move to the next section.
If you can ping the machine but not SSH into it, edit the SSH daemon config in the VM with your editor of choice, for example
$ sudo vim /etc/ssh/sshd_config
Find, uncomment, and set the following configuration options as below:
PermitRootLogin yes
PasswordAuthentication yes
After saving the file, restart the daemon.
$ sudo systemctl restart sshd
Salt will allow the host machine to further configure the virtual machine over the SSH connection we set up previously.
In the VM:
$ sudo zypper install salt-minion
If you've successfully worked through the previous steps, your virtual machines are ready for regular use and don't need to run in windowed mode anymore. The following functions in your .bashrc or similar can help manage VMs.
# List running VMs
vm-list () {
  vmrun list
}

# Start a VM headless, takes a list of names, for example `$ vm-start server`
vm-start () {
  for item in "$@"
  do
    vmrun start ~/Virtual\ Machines.localized/"$item".vmwarevm nogui
  done
}

# Suspend a VM, saving its current state, takes a list of names, for example `$ vm-stop-save server`
vm-stop-save () {
  for item in "$@"
  do
    vmrun suspend ~/Virtual\ Machines.localized/"$item".vmwarevm
  done
}

# Shut down a VM without saving state, takes a list of names, for example `$ vm-stop-shutdown server`
vm-stop-shutdown () {
  for item in "$@"
  do
    vmrun stop ~/Virtual\ Machines.localized/"$item".vmwarevm
  done
}
Using the above, you can start three virtual machines in headless mode in one line:
$ vm-start server client minion
Note that there will be a delay between the machine starting and the operating system being fully booted and able to handle your requests.
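Since snapshots come up repeatedly in this guide, a small helper in the same style can wrap vmrun's snapshot command; the function name and the snapshot-name argument are our own convention, not part of vmrun:

```shell
# Take a named snapshot of a VM, for example `$ vm-snapshot server pre-terraform`
vm-snapshot () {
  vmrun snapshot ~/Virtual\ Machines.localized/"$1".vmwarevm "$2"
}
```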
Sumaform is a collection of Terraform configurations and scripts tailored for Uyuni and SUSE Manager.
On the host machine, add to your .bashrc
or similar:
export GODEBUG=netdns=cgo
This turns on the cgo resolver which will allow Terraform to correctly resolve your VM's name.
Source your rc or relaunch your terminal session to ensure the environment is correct.
Install tfenv from Homebrew, then install Terraform 1.0.10:
$ brew install tfenv
$ tfenv install 1.0.10
$ tfenv use 1.0.10
In a location of your choice, clone Sumaform:
$ git clone [email protected]:uyuni-project/sumaform.git
Next, we're going to configure Sumaform to use the SSH connection we used earlier. For more examples and detailed explanations, see the SSH backend guide.
$ cd sumaform
$ cp main.tf.ssh.example main.tf
$ cd modules
$ ln -sfn ../backend_modules/ssh backend
Edit main.tf in an editor of your choice and fill in your SCC credentials in cc_username and cc_password in the base module.
If you only plan to use a fixed set of machines, you can also fill in images in the base module to save space, for example images = ["opensuse153o"].
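Putting the above together, a minimal base module might look like the following; the credential values are placeholders and the image list is just an example:

```hcl
module "base" {
  source = "./modules/base"

  cc_username = "<YOUR_SCC_USERNAME>"
  cc_password = "<YOUR_SCC_PASSWORD>"

  # Optional: restrict images to save disk space
  images = ["opensuse153o"]
}
```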
Configure the server module as below:
module "server" {
  source = "./modules/server"
  base_configuration = module.base.configuration
  name = "server"
  image = "opensuse152o"
  product_version = "uyuni-master"
  provider_settings = {
    host = "server.tf.local"
    port = "22"
    user = "root"
    password = "<YOUR_PASSWORD>"
  }
}
Update, add and/or remove client and minion definitions as needed, depending on how many VMs you configured.
Module definitions for clients and minions need to be updated with image configuration, provider settings, etc. in the same manner as shown above for the server.
It is crucial that the source property corresponds to the role you expect the machine to play:
- for a server:
source = "./modules/server"
- for a client:
source = "./modules/client"
- for a minion:
source = "./modules/minion"
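As an illustration, a minion block mirrors the server one; the server_configuration wiring and the placeholder values below follow the same scheme as the server example and should be adjusted to your setup:

```hcl
module "minion" {
  source = "./modules/minion"
  base_configuration = module.base.configuration
  name = "minion"
  image = "opensuse152o"
  # Point the minion at the server defined above
  server_configuration = module.server.configuration
  provider_settings = {
    host = "minion.tf.local"
    port = "22"
    user = "root"
    password = "<YOUR_PASSWORD>"
  }
}
```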
Once the configuration is set up, run Terraform in the repository root:
$ terraform init
For any errors, the Sumaform repository has a thorough troubleshooting guide.
If init was successful, ensure all the required virtual machines are running, then apply the changes. Before doing so, make sure you have made a recent snapshot of your VMs.
For debugging purposes it may be handy to store the output along with any errors in a file as well.
$ terraform apply |& tee /tmp/terraform_apply.log
After successfully terraforming, your system should now be ready for development.
Opening server.tf.local in your browser, you should see the product login page.
See the Java development guide for instructions on how to test and deploy changes.