# Proxmox

I use Proxmox VE as my primary bare-metal hypervisor.
## Installation

Proxmox can be installed by downloading the ISO image from their downloads page, writing it to removable media (such as a flash drive), and booting into the installer, which walks you through the rest of the process.
After the basic install is complete, the web UI will be available at `https://<ip_address>:8006`. Before use, open the web UI and select the node itself in the left-hand panel. Then, on the node's page, select `Repositories` under the `Updates` heading and press the `Add` button on the page that opens. Select `No-Subscription` in the dropdown and confirm by pressing `Add`. This enables the No-Subscription Proxmox repository for home (free) use. The package list can then be updated and package updates installed. The system should now be ready for basic use.
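The same repository change can also be made from a root shell, which is handy when scripting fresh installs. This is a minimal sketch assuming Proxmox VE 8 on Debian bookworm; adjust the file and suite names for other releases:

```sh
# Comment out the enterprise repository (it requires a subscription).
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Enable the no-subscription repository.
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

# Refresh the package list and install available updates.
apt update && apt dist-upgrade -y
```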
## Mellanox ConnectX-3 NICs

Drivers should work out of the box for Mellanox ConnectX-3 NICs; however, there are a few things to be aware of. For starters, the instructions on the Telegraf page should be followed to install the Mellanox firmware tools and monitor NIC temperatures, if desired.
In addition, if using multiple VLANs with the Mellanox NIC, the "VLAN aware" option should be enabled on the bridge (not the physical) network adapter. This will require a system reboot. After that, `/etc/network/interfaces` should be updated so that the bridge interface lists only the VLANs actually needed. This is because the Mellanox ConnectX-3 can load a maximum of 128 VLANs at a time, so the default configuration (which allows all VLAN IDs) will cause it to fail to load any above VLAN 127.
The original file will look something like:
```
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```
And the updated file should look something like this:
```
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 20,21,30,40,50,60,90,200,210,254,600,664-668
```
These instructions come from the guide here and will also require a reboot.
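After the reboot, it is worth confirming that the trimmed VLAN list actually took effect. A quick check with standard iproute2 tools, assuming the `vmbr1`/`enp1s0` names from the example above:

```sh
# List the VLANs currently loaded on the bridge port.
bridge vlan show dev enp1s0

# Confirm VLAN filtering is enabled on the bridge itself.
ip -d link show vmbr1 | grep vlan_filtering
```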
## External Metric Server

Proxmox includes native support for external metric servers; InfluxDB 2 will be used for this purpose. It can be enabled by selecting `Datacenter` in the left-hand column (not the node specifically) and then selecting `Metric Server` in the next column over. From here, select `Add` and choose `InfluxDB` in the dropdown. The `Name` can be whatever you would like, the `Server` should be the IP address of the InfluxDB server, the `Port` should be the port number InfluxDB is configured with (typically 8086), and the `Protocol` should be `HTTP`. At this point, make sure `Enabled` is selected and fill in the `Organization` and `Bucket` from InfluxDB. Finally, create an API token in InfluxDB with write permission on the selected bucket and copy it into the `Token` field. This should complete the setup.
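For reference, the settings entered in the UI are written to `/etc/pve/status.cfg`. The sketch below shows roughly what that file ends up looking like; all values are placeholders, and the exact key names may vary between Proxmox versions, so treat this as illustrative:

```
influxdb: homelab-influx
        server 192.168.1.50
        port 8086
        protocol http
        organization homelab
        bucket proxmox
        token <write-token-from-influxdb>
```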
## Email Notifications

Email notifications can be set up through the web UI using a Gmail account. Go to `Datacenter > Notifications` and add a notification target, selecting `SMTP` from the dropdown. The `Name` can be whatever you'd like (it identifies the endpoint), but the `Server` should be `smtp.gmail.com`, `Encryption` should be `TLS`, and `Port` should be `465` (the default for TLS). Set the `Username` and `From Address` to the email address of the Gmail account sending the email, and the `Password` to an app password generated for that Gmail account and this specific Proxmox datacenter. Finally, select the `Recipient(s)` that should receive the notifications (their email addresses can be changed under `Datacenter > Permissions > Users`, if needed) and any `Additional Recipient(s)`, and set the `Author` (tick the `Advanced` box to show this setting) to something descriptive, such as `Proxmox - <node>`, since this will show as the author of the email to alert recipients.
Once the SMTP settings are saved, use the `Test` button at the top to send a test email and make sure that the settings are working correctly. Assuming they are, modify the `default-matcher` under the `Notification Matchers` heading on the same `Notifications` page. Update the `Comment`, if desired, and select the new SMTP notification target under the `Targets to notify` tab. This will cause all notifications to be sent via the SMTP notification target that was just set up. Alternatively, a new notification matcher can be created to route only a subset of notifications.
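Like the metric server settings, the notification target and matcher end up in a plain-text file, `/etc/pve/notifications.cfg` (the app password is stored separately under `/etc/pve/priv/notifications.cfg`). A rough sketch with placeholder values; the exact key names may differ between Proxmox versions, so treat this as illustrative rather than authoritative:

```
smtp: gmail-smtp
        server smtp.gmail.com
        mode tls
        port 465
        username homelab@gmail.com
        from-address homelab@gmail.com
        author Proxmox - pve1
        mailto admin@example.com

matcher: default-matcher
        target gmail-smtp
        comment Route all notifications via Gmail
```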
See the Proxmox documentation for more details.
## Backups

Both manual and automatic backups can be created for VMs and CTs on Proxmox. These backups include not only the configuration of the guest(s) but also the full state. While snapshots are handy for reverting to a previous guest state in case something breaks inside the guest, backups allow guests to be restored easily in the event that the Proxmox host itself is corrupted, for example by a hardware failure or a failed update.
Backups require a storage location where they will be saved. While local storage can be used, that defeats the purpose of backups to some extent and is generally not a viable long-term solution. Instead, a remote file server should be used. There are numerous supported protocols, with the easiest being SMB/CIFS and NFS. Shares can be easily set up (including permissions) on a remote NAS and added to Proxmox.
For example, I created a new dataset in TrueNAS, created an ACL group with read+write access to it, created an ACL user in that group for the specific Proxmox instance I wanted to back up (to make revoking credentials easier), and then created an SMB share in TrueNAS for the dataset using these credentials.
It should be noted that, while SMB or NFS shares are easy to set up and don't require any additional servers besides a NAS, they do involve storing the full state for each backup. This can balloon quickly if you have a decent number of Proxmox guests and would like to keep several backup versions of each. Instead, a Proxmox Backup Server can be set up. This can still use a NAS as its backing storage, but by sitting between Proxmox VE and the backup storage it can deduplicate shared blocks in order to save space. This can make frequent (such as nightly) backups with a decent retention period much more reasonable space-wise, although it does come with some drawbacks of its own.
The primary downside to the Proxmox Backup Server is the increased complexity and the need for somewhere to run the backup server. While it can be virtualized, this creates a dependency on Proxmox, meaning that, in the event of a host failure, a Backup Server will need to be freshly installed somewhere before the other guests can be restored. On the other hand, dedicated hardware can be used, but that comes with acquisition and running costs.
Ultimately, any number of storage solutions may be reasonable depending on the specific situation and set of acceptable tradeoffs.
Once a storage solution has been decided on, it must be added to the Proxmox VE datacenter. Click on `Datacenter` in the far left pane (where nodes and guests are also listed) and then click the `Storage` option. On this page, click the `Add` button to open the dropdown and select the appropriate storage type. Enter the necessary details (IP address, credentials, etc.) and select `VZDump backup file` for `Content`. When done, click `Add` and confirm that it completed successfully.
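The same storage can be added from a shell with `pvesm` instead of the UI. A sketch assuming the TrueNAS SMB share described earlier, with placeholder names and addresses:

```sh
# Add the CIFS/SMB share, restricted to backup content.
pvesm add cifs truenas-backups \
    --server 192.168.1.20 \
    --share proxmox-backups \
    --username proxmox-backup \
    --password '<share-password>' \
    --content backup

# Verify the new storage is active.
pvesm status --storage truenas-backups
```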
At this point, the `Backup` page can be selected under `Datacenter`. Click on `Add` to open the configuration menu. From here, select the guests that should be backed up, the `Mode` (`Snapshot` is usually fine, although `Stop` may be used for guests that should maintain a stronger consistency guarantee), and set a `Schedule`. Also, make sure that `Storage` is set to the backup storage previously added. The `Retention` tab can also be opened in this same dialog to choose how many backups to keep. Click `Create` when done to save the backup job. It should now appear on the backup tab and will run on the set schedule, although it can also be run on demand (a good idea after initial creation) by selecting it and clicking the `Run now` button.
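Jobs created this way are stored in `/etc/pve/jobs.cfg`. The sketch below is a rough illustration of what such an entry might look like; the job ID, guest IDs, and exact keys are placeholders and may differ by Proxmox version:

```
vzdump: backup-1a2b3c4d
        schedule sat 02:00
        enabled 1
        vmid 100,101
        mode snapshot
        storage truenas-backups
        prune-backups keep-last=3
```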
In addition to running the entire automated backup job (for all included guests), a specific guest can be backed up on demand by selecting the `Backup` tab on that guest's page. Here, existing backups of the guest can be seen and restored directly, if desired (see below). There is also a `Backup now` button that will back up just that specific guest. This is useful after a large change (major version update, large migration, etc.) in order to ensure that the recent work is included in a backup.
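The shell equivalent of `Backup now` is `vzdump`. A minimal sketch, assuming guest ID 101 and the backup storage name used above:

```sh
# One-off backup of guest 101 to the remote backup storage.
vzdump 101 --storage truenas-backups --mode snapshot --compress zstd
```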
## Restoring Backups

Backups can be restored by first ensuring that the same `Storage` location the backups were made to is added to the Proxmox datacenter to which they should be restored. The same process described above for adding the `Storage` destination can be followed. With this done, the storage location should show up (listed by name) in the left-hand panel under the `Datacenter` heading (next to the `local` and `local-lvm` storage locations, for example). Select it and then go to the `Backups` tab, which should list all backups in the remote backup location. From here, simply select the desired backup and click the `Restore` button.
Backups can also be restored directly from the guest's page if it is still accessible. Do this by going to the `Backup` tab on the guest's page, selecting the desired backup, and clicking `Restore`. (This view is filtered by guest ID, so backups of guests on other nodes that share the same ID may show up as well.)
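Restores can also be driven from a shell: `qmrestore` for VMs and `pct restore` for containers. A sketch with a hypothetical archive name and new guest ID:

```sh
# List the backup archives available on the storage.
pvesm list truenas-backups --content backup

# Restore a VM backup as new guest ID 201.
qmrestore truenas-backups:backup/vzdump-qemu-101-2024_01_01-02_00_00.vma.zst 201

# Containers use pct restore instead, e.g.:
# pct restore 202 truenas-backups:backup/vzdump-lxc-102-2024_01_01-02_00_00.tar.zst
```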