OSVDC: Cluster Specifications

Revised: April 7, 2016

Article 3 of the Open Source Virtual Data Center Series

Platforms

Storage Host Specifications

Hardware Requirements

  • Low cost
  • Open source
  • Quiet
  • Intel multi-core 64 bit processor
  • Dual core processor
  • Threading processor
  • Processor supporting ECC
  • 32 GB ECC RAM
  • 16 GB USB flash drive
  • Storage controller supporting JBOD
  • Storage controller supporting eight SATA/SAS devices
  • 8 TB usable storage
  • Supercapacitor SSD (SLOG)
  • Separation of management and storage traffic
  • 1 Gb redundant management interfaces
  • 10 Gb storage network
  • 10 Gb redundant storage interfaces
  • Storage network TOE

Additional Specifications

  • Host-to-host network attached storage
  • Array of mirrors (similar to RAID 10)
  • NFS version 4
  • SLOG device
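The "array of mirrors" layout trades half of raw capacity for redundancy and rebuild speed. A minimal sketch of that arithmetic, assuming illustrative disk counts and sizes (the article specifies only a controller supporting eight SATA/SAS devices and 8 TB usable storage; the 2 TB disk size below is a hypothetical example):

```python
def mirror_pool_usable_tb(disks: int, disk_tb: float, mirror_width: int = 2) -> float:
    """Usable capacity of a pool of N-way mirror vdevs (similar to RAID 10),
    ignoring filesystem overhead and reserved space."""
    if disks % mirror_width:
        raise ValueError("disk count must be a multiple of the mirror width")
    vdevs = disks // mirror_width
    # Each mirror vdev contributes the capacity of a single disk.
    return vdevs * disk_tb

# Eight 2 TB disks as four 2-way mirrors meet the 8 TB usable requirement
# (before overhead; real-world usable space will be somewhat lower).
print(mirror_pool_usable_tb(8, 2))
```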

Note

Due to the cost of a 10 Gb switch, the requirement for a redundant storage network per host has been dropped. For a production environment, I would advise spending the $5,000+ for an enterprise 10 Gb switch supporting VLANs, LACP, etc.


A 10 Gb switch is now available from Ubiquiti at a very reasonable price. It is an upgrade for me at a later date and an alternative to using host-to-host network attached storage.


Compute Host Specifications

Hardware Requirements

  • Low cost
  • Open source
  • Quiet
  • Intel multi-core 64 bit processor
  • Quad core processor
  • Threading processor
  • Intel VT-x processor
  • Processor supporting ECC
  • Single socket
  • 32 GB ECC RAM
  • 32 GB USB flash drive
  • Separation of management, storage, and virtual machine traffic
  • 1 Gb redundant management interfaces
  • 10 Gb storage network
  • 10 Gb redundant storage interfaces
  • Storage network TOE
  • 1 Gb redundant virtual machine interfaces

Note:

I am estimating 3 vCPUs for each CPU thread and 1.5 GB of virtual RAM for each 1 GB of physical RAM. If your workloads require more processor cores and/or RAM, you will need to scale out by increasing the number of Compute hosts, or scale up by purchasing hardware that permits dual processor sockets and/or more installed RAM.
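The overcommit estimates above can be written out as a short sketch. The host figures follow the Compute host spec (quad-core processor with threading, 32 GB ECC RAM); the `host_capacity` helper is illustrative, not part of any tool:

```python
CPU_OVERCOMMIT = 3.0   # vCPUs per hardware thread (article's estimate)
RAM_OVERCOMMIT = 1.5   # GB of virtual RAM per GB of physical RAM

def host_capacity(cores: int, threads_per_core: int, ram_gb: int) -> dict:
    """Estimated guest capacity of one Compute host under the
    article's overcommit ratios."""
    threads = cores * threads_per_core
    return {
        "vcpu": int(threads * CPU_OVERCOMMIT),
        "vram_gb": ram_gb * RAM_OVERCOMMIT,
    }

# Quad-core hyper-threaded Compute host with 32 GB RAM:
print(host_capacity(4, 2, 32))  # {'vcpu': 24, 'vram_gb': 48.0}
```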


Additional Specifications

  • Host-to-host network attached storage


Management Host Specifications

Requirements

Initial testing will use the self-hosted (virtual machine) oVirt Engine Appliance.

  • Low cost
  • Open source
  • Ease of use
  • Dual core processor
  • 3 GB RAM
  • 10 GB disk space
  • 1 Gb NIC


Network Services Specifications

Hardware Requirements

Switch & Router

  • Cost
  • Managed
  • Open source
  • Port count
  • Quiet
  • Ease of use
  • 802.1q: VLAN
  • 802.3ad: LACP
  • Switch: layer 3
  • 6 storage ports
  • 10 management ports
  • 4 VM ports

Port Requirements:

Management ports = (2 + 1 IPMI from Storage host) + (4 + 2 IPMI from Compute hosts) + 1 from administrator workstation = 10

VM ports = 4 with 2 per Compute host

If not using 10 Gb host-to-host direct attached storage, then 6 storage ports are required: 2 for the Storage host and 4 for the Compute hosts, bringing the total to 20 ports.
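The port math above, written out as a sketch. Host counts follow the article: one Storage host, two Compute hosts, one administrator workstation.

```python
STORAGE_HOSTS = 1
COMPUTE_HOSTS = 2

# 2 management + 1 IPMI per Storage host, 2 + 1 per Compute host,
# plus one port for the administrator workstation.
mgmt_ports = (2 + 1) * STORAGE_HOSTS + (2 + 1) * COMPUTE_HOSTS + 1

# 2 VM ports per Compute host.
vm_ports = 2 * COMPUTE_HOSTS

# Only needed when not using host-to-host attached storage.
storage_ports = 2 * STORAGE_HOSTS + 2 * COMPUTE_HOSTS

print(mgmt_ports, vm_ports, storage_ports,
      mgmt_ports + vm_ports + storage_ports)  # 10 4 6 20
```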


Additional Specifications

Networks

Description     Abbr   VLAN  Network        Mask  Gateway          Notes
Management      mgmt   1     192.168.1.0    /24   192.168.1.254
Infrastructure  infra  101   192.168.101.0  /24   192.168.101.254  core infra svcs
Storage 1       stor1  111   192.168.111.0  /30   N/A              h2h nas
Storage 2       stor2  112   192.168.112.0  /30   N/A              h2h nas
Security        sec    121   192.168.121.0  /24   192.168.121.254  syslog, snort, etc.
Intranet        intra  131   192.168.131.0  /24   192.168.131.254  non-public svcs
DMZ 1           dmz1   201   192.168.201.0  /24   192.168.201.254  perimeter
DMZ 2           dmz2   211   192.168.211.0  /24   192.168.211.254  public svcs
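The /30 sizing of the storage VLANs can be sanity-checked with the standard library's `ipaddress` module: a /30 leaves exactly two usable addresses, which is all a point-to-point host-to-host NAS link needs.

```python
import ipaddress

# stor1 as specified above: 192.168.111.0/30
stor1 = ipaddress.ip_network("192.168.111.0/30")

# .hosts() yields the usable addresses (network and broadcast excluded).
usable = [str(h) for h in stor1.hosts()]
print(usable)  # ['192.168.111.1', '192.168.111.2']
```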

Switch Port Allocation

Description       Ports    Notes
Default Route     1        to router eth1
Infrastructure    3 - 10   including IPMI
Virtual Machines  13 - 16  all VM traffic

IP Addresses

Host   Desc          Network  IP Addr         Intfs  Notes
nasv   Storage host  infra    192.168.101.10  2      default route
nasv   IPMI          infra    192.168.101.1   2
nasv                 stor1    192.168.111.1   1      h2h nas
nasv                 stor2    192.168.112.1   1      h2h nas
node1  Compute host  infra    192.168.101.11  2
node1  IPMI          infra    192.168.101.2   2
node1                stor1    192.168.111.2   1      h2h nas
node2  Compute host  infra    192.168.101.12  2
node2  IPMI          infra    192.168.101.3   2
node2                stor2    192.168.112.2   1      h2h nas
eng1   VM Mgmt host  infra    192.168.101.21  1      vm
ifw    Router        infra    192.168.69.1    1      eth1
csw    Switch        infra    192.168.69.2    1      eth1
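A small sketch that checks a subset of the host assignments against their networks, again using only the standard library's `ipaddress` module (the dictionaries below are illustrative, hand-copied from the plan, not generated by any tool):

```python
import ipaddress

networks = {
    "infra": ipaddress.ip_network("192.168.101.0/24"),
    "stor1": ipaddress.ip_network("192.168.111.0/30"),
    "stor2": ipaddress.ip_network("192.168.112.0/30"),
}

# (host, network name, address) triples from the plan above.
assignments = [
    ("nasv",  "infra", "192.168.101.10"),
    ("nasv",  "stor1", "192.168.111.1"),
    ("node1", "stor1", "192.168.111.2"),
    ("node2", "stor2", "192.168.112.2"),
    ("eng1",  "infra", "192.168.101.21"),
]

for host, net, addr in assignments:
    assert ipaddress.ip_address(addr) in networks[net], (host, addr)
print("all checked addresses fit their networks")
```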


Next

The next article in the series is Cluster Hardware.