Work in Progress - dcasota/vesxi-on-azure-scripts GitHub Wiki
Status: 26.07.2022
- Does this ESXi-setup-in-an-Azure-VM work? No.
- The provisioning has become more user-friendly. With the new script version, the only remaining prerequisite is the creation of an Azure image of Photon OS (see the sketch below).
- Older prerequisites have been automated: The ESXi ISO is dynamically created inside a helper Windows Server VM with the ESXi-Customizer script installed, and is later stored inside a Photon OS VM so that it is bootable at startup using the Ventoy injection mechanism.
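The following is a minimal sketch of that remaining prerequisite using plain Az PowerShell cmdlets; all resource names, URLs and file paths are placeholders, a Gen2 (UEFI) image is assumed, and create-AzImage-PhotonOS.ps1 may implement this step differently.

```powershell
# Sketch: turn a downloaded Photon OS Azure .vhd into an Azure image (placeholder names/paths).
$rg  = "photonos-image-rg"
$loc = "westeurope"
New-AzResourceGroup -Name $rg -Location $loc

# Upload the .vhd into an existing storage account container (placeholder URL).
Add-AzVhd -ResourceGroupName $rg `
  -Destination "https://photonosstorage.blob.core.windows.net/vhds/photon-azure.vhd" `
  -LocalFilePath "C:\Downloads\photon-azure.vhd"

# Register the uploaded blob as an image the VM creation script can consume (Gen2 assumed).
$imageConfig = New-AzImageConfig -Location $loc -HyperVGeneration "V2"
$imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsType Linux -OsState Generalized `
  -BlobUri "https://photonosstorage.blob.core.windows.net/vhds/photon-azure.vhd"
New-AzImage -ResourceGroupName $rg -ImageName "photonos-azure-image" -Image $imageConfig
```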
Status: 04.11.2021
- Does this ESXi-setup-in-an-Azure-VM work? No.
- Why did you pick this up again after almost 18 months? There were several interesting updates: ESXi 7.0.3 has been released with somewhat more functionality for Tanzu, and the Azure Standard_E4s_v3 virtual machine offering has changed and now includes Mellanox ConnectX-4 adapters. Hence, it was time to refresh this lab project.
- What did you change in the setup? Instead of trying to run an older ESXi version (6.x), which offers VMkernel compatibility for Linux drivers, the goal has always been to learn from a cross-nested setup with the latest bits. In addition, from Photon OS 2.0 up to Photon OS 4.0 revision 1, an Azure Gen2 image runs flawlessly, which matters for UEFI compatibility. Another improvement was the switch to Ventoy, a tiny tool that boots ISO images easily on MBR/UEFI systems and offers injection entry points for serial console redirection parameters and for file changes (a sketch follows below).
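As an illustration of the Ventoy injection entry point, here is a hedged sketch that writes a ventoy.json using Ventoy's injection plugin; the ISO name, archive name and drive letter are placeholders, and the actual setup may inject different content (for example a boot config carrying serial console redirection parameters).

```powershell
# Sketch only: ventoy.json using Ventoy's injection plugin. Placeholder names throughout.
$ventoyJson = @"
{
    "injection": [
        {
            "image": "/ESXi-custom.iso",
            "archive": "/ventoy/esxi-inject.tar.gz"
        }
    ]
}
"@
# The file belongs in the 'ventoy' directory of the first Ventoy partition
# (assumed to be mounted as drive E: here).
New-Item -ItemType Directory -Path "E:\ventoy" -Force | Out-Null
Set-Content -Path "E:\ventoy\ventoy.json" -Value $ventoyJson
```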
Status: 25.02.2020
- Does this ESXi-setup-in-an-Azure-VM work? No.
- Are you working on this? Yes. It's some sort of a homelab project.
- What hardware are you using? The hardware currently used is an Azure Standard_E4s_v3 offering. A Standard_E4s_v3 is a Hyper-V-based virtual machine with 4 vCPUs, 32 GB RAM, accelerated networking and premium disk support (a sketch of creating such a VM follows below).
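For illustration, a minimal sketch of provisioning this VM size with accelerated networking via Az PowerShell; resource names are placeholders, the surrounding OS and disk settings are omitted, and the repository scripts may differ.

```powershell
# Sketch: NIC with accelerated networking plus a Standard_E4s_v3 VM config (placeholder names).
# Resource group and virtual network are assumed to exist already.
$rg  = "esxi-lab-rg"
$loc = "westeurope"
$vnet   = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "esxi-lab-vnet"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "default"

# -EnableAcceleratedNetworking is what exposes the Mellanox virtual function to the guest.
$nic = New-AzNetworkInterface -ResourceGroupName $rg -Location $loc -Name "esxi-lab-nic" `
  -SubnetId $subnet.Id -EnableAcceleratedNetworking

$vmConfig = New-AzVMConfig -VMName "esxi-lab-vm" -VMSize "Standard_E4s_v3"
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
# ...Set-AzVMOperatingSystem / Set-AzVMSourceImage / disk settings omitted...
New-AzVM -ResourceGroupName $rg -Location $loc -VM $vmConfig
```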
- Which BIOS is used? Microsoft's Hyper-V uses an American Megatrends Inc. BIOS for VMs. In Azure it is not possible to access the BIOS or the virtual motherboard the way you can on a VMware vSphere VM; there is no Esc or Del key. With regard to boot, disk controllers and other BIOS features, VMs running on-premises have some capabilities that are not supported in Azure. An Azure VM, for example, cannot attach an ISO like a VMware vSphere VM can. Most of the limitations are listed at https://docs.microsoft.com/en-us/azure/virtual-machines/windows/generation-2.
- What virtual hardware components are used? ESXi requires a VMhost with at least two CPU cores, an x86 processor, 4 GB RAM, and supported storage and Ethernet controllers. There is no Azure Marketplace image for ESXi, but a provisioned Azure VMware Photon OS VM with accelerated networking enabled works like a charm and exposes the Mellanox ConnectX-3 virtual function network adapter as a virtual device of the VM (see the check sketched below). The lack of network interoperability is the main reason the ESXi setup does not work yet.
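To verify from the outside that the virtual function is exposed inside the guest, here is a hedged sketch using the Azure run-command channel; the VM and resource group names are placeholders.

```powershell
# Sketch: run lspci and networkctl inside the Photon OS VM via Azure run-command.
# Placeholder resource names; requires the Az PowerShell module and a running VM.
@"
lspci
networkctl
"@ | Set-Content -Path ./check-nics.sh

Invoke-AzVMRunCommand -ResourceGroupName "esxi-lab-rg" -VMName "photonos-vm" `
  -CommandId "RunShellScript" -ScriptPath ./check-nics.sh
```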
- Why do you use VMware Photon OS? In my studies so far, the simplest way to begin provisioning ESXi on Azure is to create the VM with VMware Photon OS temporarily installed. VMware Photon OS is a tiny IoT cloud OS, see https://vmware.github.io/photon/. In short: from the VMware Photon OS .vhd, the Azure VM is created using it both as the OS disk and as an attached data disk. The data disk is installed with the ISO bits of ESXi. Then the disks are switched and ESXi boots from the prepared data disk (see the disk-swap sketch below). During ESXi setup, you select the second disk as the installation disk and detach the .vhd-ified ISO after the ESXi setup.
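A minimal sketch of that disk switch, assuming managed disks and the documented Azure OS disk swap; all names are placeholders, and the repository script may implement this step differently.

```powershell
# Sketch: make the VM boot from the prepared data disk that carries the ESXi installer bits.
# Placeholder names; the VM must be deallocated before the OS disk can be swapped.
$rg = "esxi-lab-rg"
Stop-AzVM -ResourceGroupName $rg -Name "photonos-vm" -Force

$vm   = Get-AzVM  -ResourceGroupName $rg -Name "photonos-vm"
$disk = Get-AzDisk -ResourceGroupName $rg -DiskName "esxi-installer-disk"
Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -Name $disk.Name
Update-AzVM -ResourceGroupName $rg -VM $vm

Start-AzVM -ResourceGroupName $rg -Name "photonos-vm"
```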
- Does the Azure VMware Photon OS setup work? Yes. The VMware Linux distro is delivered in several disk formats. It is important to know that currently (December 2019) .vhd is still the only Azure-supported interoperability disk format; see the .vhd limitations at https://docs.microsoft.com/en-us/azure/virtual-machines/windows/generation-2#features-and-capabilities. The scripts create-AzImage-PhotonOS.ps1 and create-AzVM-vESXi_usingAzImage-PhotonOS.ps1 use the Photon OS 3.0rev2 Azure .vhd release, but some earlier setups with Photon OS 2.0 and 3.0 as an Azure VM were successful too.
- How does the driver integration of the Mellanox ConnectX-3 virtual function network adapter differ between Photon OS and ESXi? Photon OS is a native 64-bit OS. In the ESXi setup shell phase, the two Mellanox NIC adapters are not listed by 'lspci'. On Photon OS, the two Mellanox adapters DO show up with 'lspci'.
- Which network adapter drivers are used on Photon OS? The driver of the network paths actively in use is called 'hv_netvsc'. 'mlx4_en' is the driver of the network paths in the 'configuring' state. See the output below:
On Photon OS:
```
root@photonos [ / ]# networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier unmanaged
2 eth0 ether routable configured
3 eth1 ether routable configured
4 eth2 ether carrier configuring
5 eth3 ether carrier configuring
6 docker0 bridge no-carrier unmanaged
6 links listed.
root@photonos [ / ]# networkctl status eth0
● 2: eth0
Link File: /usr/lib/systemd/network/99-default.link
Network File: /etc/systemd/network/99-dhcp-en.network
Type: ether
State: routable (configured)
Path: acpi-VMBUS:01
Driver: hv_netvsc
HW Address: 00:0d:3a:83:4a:81 (Microsoft Corp.)
Address: 192.168.1.6
fe80::20d:3aff:fe83:4a81
Gateway: 192.168.1.1
DNS: 168.63.129.16
CLIENTID: ff4482d23800020000ab113b7a24242f64acc0
root@photonos [ / ]# networkctl status eth1
● 3: eth1
Link File: /usr/lib/systemd/network/99-default.link
Network File: /etc/systemd/network/99-dhcp-en.network
Type: ether
State: routable (configured)
Path: acpi-VMBUS:01
Driver: hv_netvsc
HW Address: 00:0d:3a:83:5f:bf (Microsoft Corp.)
Address: 192.168.1.5
fe80::20d:3aff:fe83:5fbf
DNS: 168.63.129.16
CLIENTID: ffd584d2a200020000ab113b7a24242f64acc0
root@photonos [ / ]# networkctl status eth2
● 4: eth2
Link File: /usr/lib/systemd/network/99-default.link
Network File: /etc/systemd/network/99-dhcp-en.network
Type: ether
State: carrier (configuring)
Path: acpi-VMBUS:01-pci-9c78:00:02.0
Driver: mlx4_en
Vendor: Mellanox Technologies
Model: MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
HW Address: 00:0d:3a:83:4a:81 (Microsoft Corp.)
root@photonos [ / ]# networkctl status eth3
● 5: eth3
Link File: /usr/lib/systemd/network/99-default.link
Network File: /etc/systemd/network/99-dhcp-en.network
Type: ether
State: carrier (configuring)
Path: acpi-VMBUS:01-pci-923c:00:02.0
Driver: mlx4_en
Vendor: Mellanox Technologies
Model: MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
HW Address: 00:0d:3a:83:5f:bf (Microsoft Corp.)
root@photonos [ / ]#
```
- Which driver support for ESXi on Azure did you find? Unfortunately, there is no out-of-the-box driver support at all. 'hv_netvsc' is part of the Linux Integration Services for Hyper-V and Azure only. None of the ESXi adapter drivers supports the 'MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]'; compared with 'mlx4_en', they seem to lack some piece in the ACPI-based detection and in support for the PCI ID '15b3:61b0 Mellanox Technologies Device'. ESXi on a Hyper-V VM differs from ESXi on an Azure VM. In my findings, the community-supported net-tulip Ethernet driver (https://vibsdepot.v-front.de/wiki/index.php/Net-tulip, https://www.vembu.com/blog/installing-esxi-6-0-in-a-hyper-v-virtual-machine/) does not work in an Azure VM setup.