(SEC480) Intro to Hypervisors (Week 1)

Intro to Hypervisors

Assignment Prompt

Do some self-guided research into the components and considerations of a hypervisor implementation. Start by thinking about what we discussed in the lecture and build on that by finding external sources (they don't have to be scholarly sources, but try to avoid heavily sponsored or AI generated content).

Scenario

You work for a medium-sized business (200 employees) that has found itself in need of a home-grown virtualization solution, on premises with physical hardware. You need to be able to host ~100 VMs and ~100 containers, mostly Linux, that serve external web apps, internal services, and developer platforms. Keep the following in mind during your research.

Consider the following (rough estimations for hardware are fine):
Hardware:

  • Server-grade rack-mounted solutions
  • RAM & CPU resource needs
  • Consider Intel VT & AMD-V
  • SSD vs HDD (and why)
  • RAID cards, NICs (what speeds?), etc.

Compare at least 2 different hypervisor options:

  • Licensing costs
  • Compatibility
  • Scalability
  • Features

Network and Storage:

  • Research a basic network design for a small-to-mid scale hypervisor implementation
  • Compare options for VM storage: NFS, iSCSI, direct attached, etc.
  • Will there be separate VLANs for storage?
  • Redundancy considerations



Deliverable:

To begin, let's discuss hardware. I'd suggest a total of four rack-mounted servers running Proxmox VE (a Type 1 hypervisor). The servers should be Dell PowerEdge R720s (12th generation, 2U form factor). Configured with E5-2600 v1/v2 processors and 768 GB of DDR3 RAM, each R720 should be able to handle up to roughly 80 guests concurrently with minimal difficulty. That puts the cluster's theoretical capacity at around 320 VMs, leaving room for further expansion. If expansion is not a concern, one R720 should instead be dedicated to redundancy, hosting Proxmox Backup Server and covering any VMs required for everyday functionality.
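As a quick sanity check on those numbers, here is a short Python sketch (the figures are just the planning estimates from above) that works out total guest capacity, capacity with one host out of service, and the average RAM available per guest:

```python
# Rough capacity sanity check for the proposed 4-node R720 cluster.
# All figures are planning estimates taken from the write-up above.

HOSTS = 4
GUESTS_PER_HOST = 80            # rough per-host estimate for a maxed-out R720
RAM_PER_HOST_GB = 768           # 24 DIMM slots of DDR3
REQUIRED_GUESTS = 100 + 100     # ~100 VMs + ~100 containers from the scenario

total_capacity = HOSTS * GUESTS_PER_HOST
n_minus_one = (HOSTS - 1) * GUESTS_PER_HOST   # one host down or dedicated to backups

print(f"Total capacity:           {total_capacity} guests")
print(f"Required workload:        {REQUIRED_GUESTS} guests")
print(f"Headroom (all hosts up):  {total_capacity - REQUIRED_GUESTS} guests")
print(f"Capacity with 1 host out: {n_minus_one} guests "
      f"({'OK' if n_minus_one >= REQUIRED_GUESTS else 'not enough'})")
print(f"Avg RAM per guest at full load: {RAM_PER_HOST_GB / GUESTS_PER_HOST:.1f} GB")
```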


I'd suggest Intel VT-x over AMD-V simply because the Dell servers specified above use Intel Xeon processors, so VT-x is the virtualization extension they support. For storage I'd suggest SSDs: their faster read/write speeds and lower latency hold up much better under unpredictable, mixed workloads from many VMs and services. With HDDs, enabling or starting multiple VMs at once can saturate the disks, so the likelihood of the platform stalling or timing out is high.
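Because Proxmox uses KVM, hardware virtualization also has to be enabled in the BIOS on each host. One easy way to verify it from Linux is to look for the vmx (Intel VT-x) or svm (AMD-V) flag in /proc/cpuinfo; a minimal standard-library sketch:

```python
# Check whether the CPU exposes hardware virtualization to the OS.
# "vmx" = Intel VT-x, "svm" = AMD-V; if neither is present, the feature is
# missing or disabled in BIOS/UEFI and KVM guests will not run.

def hw_virt_flag(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "vmx (Intel VT-x)"
                if "svm" in flags:
                    return "svm (AMD-V)"
                return None
    return None

if __name__ == "__main__":
    flag = hw_virt_flag()
    print(flag or "No hardware virtualization flag found - check BIOS/UEFI settings")
```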

Hardware

| Server | Processor | RAM / Memory | RAID | Cost |
| --- | --- | --- | --- | --- |
| Dell R720 | Intel® Xeon® E5-2600 or E5-2600 v2 product family | DDR3 DIMMs (24 slots) | PERC series (S110, H310, H710, H710P) | ~$465 ([eBay](https://www.ebay.com/b/Poweredge-R720/11211/bn_7023333030)) |

For the RAID controller I'd use the PERC H710P, since it can protect the write cache during a power loss, which preserves data integrity [dell-perc-h710]. For VM storage I would go with NFS, as it's what I use personally, but here is a comparison table of the options.

| Option | Notes |
| --- | --- |
| NFS | Simple and easy to automate |
| iSCSI | Block storage accessed over Ethernet; works well for clusters |
| DAS | Local storage only (not feasible for this network) |
| Ceph | Harder to set up, but offers high redundancy and availability |

Redundant storage: 2 TB SSD drives in RAID 1, shared between two separate hosts for failover and made available to the cluster through a Proxmox NFS share.
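As a basic operational check for the NFS-backed store, something like the sketch below could confirm on each node that the share is mounted and writable before VMs are started from it (the mount path is a made-up example, not a real default):

```python
# Minimal health check for an NFS-backed VM store on a Proxmox node.
# The mount point below is a hypothetical example path - adjust it to
# wherever the share is actually mounted.

import os
import tempfile

NFS_MOUNT = "/mnt/pve/vmstore-nfs"   # hypothetical mount point

def check_nfs_store(path):
    if not os.path.ismount(path):
        print(f"{path} is not mounted")
        return False
    try:
        # Write and remove a tiny temp file to confirm the export is writable.
        with tempfile.NamedTemporaryFile(dir=path, prefix=".nfs-healthcheck-"):
            pass
    except OSError as exc:
        print(f"{path} is mounted but not writable: {exc}")
        return False
    print(f"{path} is mounted and writable")
    return True

if __name__ == "__main__":
    check_nfs_store(NFS_MOUNT)
```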

Proxmox vs. ESXi Comparison

| Category | Proxmox | ESXi |
| --- | --- | --- |
| User friendliness | User-friendly web interface with strong community support and forums (UltaHost) | User-friendly with extensive professional documentation and enterprise support |
| Setup ease | Steeper learning curve, requires more manual configuration (UltaHost) | Streamlined setup process, easier for beginners |
| Performance | Superior IOPS performance; beat ESXi in 56 of 57 tests with 50% performance gains (Blockbridge) | Optimized resource allocation, efficient even under high workloads |
| Security | Strong security, but requires regular updates to maintain protection (Proxmox Wiki) | Strong security with regular automatic updates |
| Cost | Free (Community Edition) (O’Reilly) | Standard: $1,394/CPU/year; Enterprise Plus: $4,780/CPU/year (V2 Cloud) |
| Built-in firewall | iptables-based, IPv4/IPv6, cluster-wide rules | Stateless firewall, IPv4/IPv6, activity logging |
| Storage support | Local, LVM, NFS, iSCSI, Ceph, GlusterFS, ZFS (O’Reilly) | VAAI, vSAN, VASA, iSCSI, NFS, multipathing |
| Scalability | Up to 32 nodes, 50,000 VMs per cluster (Veeam) | Up to 64 nodes, 8,000 VMs per cluster (Veeam) |
| Best for | Budget-conscious deployments, native Linux/container support | Enterprise environments requiring vendor support |
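To put the cost row in context for this specific build, here is a quick back-of-the-envelope estimate using the per-CPU prices cited above and assuming all four R720s are populated with two sockets each:

```python
# Back-of-the-envelope ESXi licensing estimate for the proposed cluster,
# using the per-CPU/year figures cited above (V2 Cloud). Proxmox VE itself
# is free; its optional enterprise support subscription is not included here.

HOSTS = 4
SOCKETS_PER_HOST = 2          # assumption: each R720 configured dual-socket

ESXI_STANDARD = 1394          # $/CPU/year
ESXI_ENT_PLUS = 4780          # $/CPU/year

sockets = HOSTS * SOCKETS_PER_HOST
print(f"Licensed CPUs (sockets): {sockets}")
print(f"ESXi Standard:           ${sockets * ESXI_STANDARD:,}/year")
print(f"ESXi Enterprise Plus:    ${sockets * ESXI_ENT_PLUS:,}/year")
print("Proxmox VE:              $0/year (community edition)")
```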

Mid-sized Network Design

To begin with the network setup, I'd suggest two mid-sized core switches for redundancy, running at 10GbE with Layer 3 capabilities to handle inter-VLAN routing. These would connect to the four Dell R720 servers and a few access switches for the employee network, with an edge firewall at the perimeter for internet access. I'd recommend VLAN segmentation between networks:

  • VLAN 10 - Proxmox GUI / iDRAC management access
  • VLAN 20 - external web apps
  • VLAN 30 - file servers
  • VLAN 40 - network storage, isolated to reduce network congestion and the chance of PII leaving the network
  • VLAN 50 - VMs in development
  • VLAN 100 - regular employee workstations and office devices

The purpose of VLAN segmentation is to shrink the attack surface available to attackers (Infosec Institute). Each R720 should have dual Intel X710-DA2 10GbE ports bonded via LACP for VM traffic (VLANs 10, 20, 30, 50, 100) and another dual 10GbE pair bonded for dedicated storage traffic (VLAN 40, isolated with no inter-VLAN routing). The onboard 1GbE ports handle management on VLAN 10. Each server connects to both core switches for redundancy and increased bandwidth. All VLANs would be defined at the switch level (likely Cisco).
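To make the segmentation plan easier to review, the sketch below records the proposed VLANs as plain data and runs two simple sanity checks (unique VLAN IDs, and the storage VLAN kept non-routed); the short names and the routed flags are just my own shorthand for the design above:

```python
# The proposed VLAN plan expressed as data, plus two basic sanity checks.
# "routed" marks whether the core switches route the VLAN; VLAN 40 stays isolated.

VLANS = [
    {"id": 10,  "name": "mgmt",        "purpose": "Proxmox GUI / iDRAC access",    "routed": True},
    {"id": 20,  "name": "web",         "purpose": "External web apps",             "routed": True},
    {"id": 30,  "name": "files",       "purpose": "File servers",                  "routed": True},
    {"id": 40,  "name": "storage",     "purpose": "Dedicated storage traffic",     "routed": False},
    {"id": 50,  "name": "dev",         "purpose": "VMs in development",            "routed": True},
    {"id": 100, "name": "workstation", "purpose": "Employee workstations/devices", "routed": True},
]

def validate(vlans):
    ids = [v["id"] for v in vlans]
    assert len(ids) == len(set(ids)), "duplicate VLAN IDs in plan"
    storage = next(v for v in vlans if v["name"] == "storage")
    assert not storage["routed"], "storage VLAN must remain isolated (no inter-VLAN routing)"
    for v in vlans:
        tag = "" if v["routed"] else "  [isolated]"
        print(f'VLAN {v["id"]:>3} ({v["name"]}): {v["purpose"]}{tag}')

if __name__ == "__main__":
    validate(VLANS)
```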



Sources

  • Ahmed, Wasim. “Mastering Proxmox - Third Edition.” O’Reilly Online Learning, Packt Publishing, 16 Nov. 2017, learning.oreilly.com/library/view/mastering-proxmox/9781788397605/bc0ef7a1-efb1-4d14-8b94-eaa93643b248.xhtml.
  • Cashwell, Billy. “Proxmox vs Esxi: Which Is Better for You?” Veeam Software Official Blog, 29 Oct. 2024, www.veeam.com/blog/proxmox-vs-esxi.html#:~:text=To%20summarize%2C%20ESXi%20provides%20a,and%20configuration%20require%20more%20expertise.
  • RIA. “High Availability and Redundancy Best Practices in CCIE Enterprise Network.” Medium, medium.com/@ria07473/high-availability-and-redundancy-best-practices-in-ccie-enterprise-network-c6041ae06f16. Accessed 14 Jan. 2026.
  • Jason. “VMware vSphere Pricing in 2024: Licensing and Overhead Costs.” V2 Cloud, 22 Nov. 2024, v2cloud.com/blog/vmware-vsphere-licensing-and-costs.
  • “Poweredge R720.” PowerEdge R720, Dell, i.dell.com/sites/content/shared-content/data-sheets/en/Documents/Dell-PowerEdge-R720-Spec-Sheet.pdf. Accessed 14 Jan. 2026.
  • “Proxmox vs Esxi: Which Is Better?” UltaHost Blog, UltaHost, 2 Aug. 2024, ultahost.com/blog/proxmox-vs-esxi/#:~:text=Proxmox%20is%20cost%2Deffective%20and,invest%20in%20a%20commercial%20solution.
  • “Proxmox vs. Vmware Esxi: A Performance Comparison Using NVME/TCP.” Proxmox vs. VMware ESXi: A Performance Comparison Using NVMe/TCP | Blockbridge Knowledgebase, kb.blockbridge.com/technote/proxmox-vs-vmware-nvmetcp/. Accessed 14 Jan. 2026.
  • “Security Reporting.” Security Reporting - Proxmox VE, pve.proxmox.com/wiki/Security_Reporting#:~:text=Infrastructure%20Issues,and%20contact%20us%20via%20email. Accessed 14 Jan. 2026.
  • Sheldon, Robert, et al. “What Is Disk Mirroring (Raid 1)?: Definition from TechTarget.” Search Storage, TechTarget, 2 Feb. 2024, www.techtarget.com/searchstorage/definition/disk-mirroring.
  • “VLAN Network Segmentation and Security- Chapter Five [Updated 2021].” Infosec, www.infosecinstitute.com/resources/management-compliance-auditing/vlan-network-chapter-5/. Accessed 14 Jan. 2026.