
OSVDC: Virtualization Cluster

Revised: April 4, 2016

Article 2 of the Open Source Virtual Data Center Series

Physical Architecture

A Virtual Data Center's smallest building block is a virtualization cluster. A cluster is composed of connected hosts. A basic virtualization cluster has one or more shared Storage hosts, two or more Compute hosts, hardware- and software-based network services (hosts), and one or more Management hosts.

For production workloads, redundancy of most if not all components may be necessary. For example, redundant storage implies two or more Storage hosts, two or more interfaces from each Compute host to the Storage hosts, redundant power supplies, redundant storage controllers, and so on, all of which increases the final solution's cost. How much redundancy is needed is typically determined by business requirements. As we move forward, I will note opportunities for increasing high availability through redundancy, but the architecture below is sufficient for a lab or non-production workloads.

High-level Architecture

What follows is a brief discussion of each cluster host and platform selection.

Storage Host

There are two primary characteristics of storage in an OSVDC. First, it must meet the input/output operations per second (IOPS) requirements of the supported products. For example, high-transaction databases or virtual desktops have higher IOPS requirements than a typical web server. Second, it must be accessible by all Compute hosts within a cluster and by the Management host to ensure high availability.
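
A quick way to sanity-check the first characteristic is to measure random-read IOPS against the shared storage before committing workloads to it. The sketch below is one way to do that with fio; the /mnt/freenas mount point and the test parameters are assumptions for illustration, not part of this architecture.

  #!/usr/bin/env python3
  """Rough random-read IOPS check for a shared storage mount.
  A sketch: assumes fio is installed and that /mnt/freenas is an
  NFS export from the Storage host mounted on a Compute host."""
  import json
  import subprocess

  def random_read_iops(directory="/mnt/freenas", seconds=30):
      # 4k random reads approximate database/VDI-style I/O patterns.
      cmd = ["fio", "--name=randread", "--rw=randread", "--bs=4k",
             "--size=256m", "--runtime=%d" % seconds, "--time_based",
             "--ioengine=libaio", "--direct=1",
             "--directory=%s" % directory, "--output-format=json"]
      result = subprocess.run(cmd, capture_output=True, text=True, check=True)
      return json.loads(result.stdout)["jobs"][0]["read"]["iops"]

  if __name__ == "__main__":
      print("Random read IOPS: %.0f" % random_read_iops())

Compare the measured figure against the IOPS profile of the workloads you intend to host.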

Requirements

  • Cost
  • Open source
  • Support iSCSI & NFS
  • Automated SMART testing (see the sketch after this list)
  • Graphic User Interface (GUI)
  • Storage monitoring & reporting
  • Notifications
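
To make the automated SMART testing requirement concrete: FreeNAS schedules SMART self-tests and health checks from its GUI, but the underlying mechanism is the same smartmontools you would otherwise script by hand. A minimal sketch, assuming smartctl is installed and using /dev/sda as a placeholder device name:

  #!/usr/bin/env python3
  """Sketch of the automated SMART testing requirement using
  smartmontools; FreeNAS schedules the same checks from its GUI.
  /dev/sda is a placeholder device name."""
  import subprocess

  def smart_health_ok(device="/dev/sda"):
      # 'smartctl -H' prints the drive's overall health assessment.
      result = subprocess.run(["smartctl", "-H", device],
                              capture_output=True, text=True)
      return "PASSED" in result.stdout

  def start_short_self_test(device="/dev/sda"):
      # 'smartctl -t short' starts the drive's built-in short self-test.
      subprocess.run(["smartctl", "-t", "short", device], check=True)

  if __name__ == "__main__":
      start_short_self_test()
      print("Health OK:", smart_health_ok())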

Selection

  • FreeNAS

I selected FreeNAS; however, for production workloads, Gluster is a serious contender. Gluster is specifically designed to be a distributed storage solution. With the architecture above and a single Storage host, though, Gluster was not a strong contender: it adds complexity, and its performance with only one Gluster host was in question. Also, the cost of two or three Gluster hosts far exceeds my budget, so it was removed from consideration.

Compute Host

A Compute host executes processes for the management or support of virtual machines. Its primary components are processors and memory. Processors need to support a set of instructions specifically designed for virtualization; Intel and AMD call their implementations VT-x and AMD-V, respectively. There must also be a sufficient number of CPU cores and enough memory for the host operating system, management, and hosted virtual machines. Virtual machines access the Compute host's resources through a hypervisor, which abstracts physical devices and resources for consumption via native or installed device drivers within the virtual machine.
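
Before purchasing or repurposing hardware, it is worth verifying that these extensions are present and enabled. A minimal check on a Linux host reads the CPU flags from procfs; the vmx flag indicates Intel VT-x and svm indicates AMD-V (the feature can still be disabled in the BIOS/UEFI, so check there if KVM complains at module load).

  #!/usr/bin/env python3
  """Check a candidate Compute host for hardware virtualization
  support (Linux only; reads standard procfs locations)."""

  def cpu_flags():
      with open("/proc/cpuinfo") as cpuinfo:
          for line in cpuinfo:
              if line.startswith("flags"):
                  return set(line.split(":", 1)[1].split())
      return set()

  def virtualization_support():
      flags = cpu_flags()
      if "vmx" in flags:
          return "Intel VT-x"
      if "svm" in flags:
          return "AMD-V"
      return None

  if __name__ == "__main__":
      support = virtualization_support()
      print(support or "No support found; check BIOS/UEFI settings")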

Requirements

  • Cost
  • Open source
  • Central management

Selection

  • oVirt on CentOS 7

After several months of research, I ended up with OpenStack and oVirt as the final contenders. Most Compute options, including OpenStack, have multi-tenancy as one of their core design goals. I selected oVirt over OpenStack because oVirt is significantly less complex.

Management Host

For a single cluster containing two Compute hosts, you could make a case that central management is not needed. However, most commercial and open source virtualization solutions require central management for services such as virtual machine templates, high availability, and power management. As such, central management is listed as a requirement.

Requirements

  • Cost
  • Open source
  • Ease of use

Selection

  • oVirt Engine

There are many options for managing the KVM hypervisor, but with oVirt Compute hosts, oVirt Engine is the logical choice.

Define: The name oVirt is derived from Open Virtualization. Open Virtualization Manager is synonymous with oVirt Engine. This guide uses the oVirt Engine Appliance, which may be referenced as oVirt Appliance, oVirt Engine, or appliance. In the next major release, I will endeavor to pick one term and stick to it.
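
As a small illustration of what central management buys you, the oVirt Engine exposes a REST API under /ovirt-engine/api that scripts and tools can consume. The sketch below lists virtual machines using the requests library; the engine FQDN and password are placeholders, and certificate verification is disabled only because a lab appliance typically uses a self-signed certificate.

  #!/usr/bin/env python3
  """Minimal query against the oVirt Engine REST API (sketch;
  engine.example.org and the password are placeholders)."""
  import requests

  ENGINE = "https://engine.example.org/ovirt-engine/api"

  response = requests.get(
      ENGINE + "/vms",
      auth=("admin@internal", "password"),  # default administrative user
      headers={"Accept": "application/xml"},
      verify=False,  # lab only; point 'verify' at the engine CA in production
  )
  response.raise_for_status()
  print(response.text)  # XML document describing the cluster's VMs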

Networking Services

Networking in a virtual or non-virtual environment is fundamentally the same unless we are developing overlay and underlay networks for multi-tenancy. The most significant change is where network services are provided: we are no longer solely reliant on networking device manufacturers. Software Defined Networking (SDN) now provides many of the network services in a Virtual Data Center and encompasses solutions such as Open vSwitch, Neutron, Linux bonding and bridging, and VMware NSX. We will use some of the aforementioned services, but multi-tenancy is not included as a requirement.
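
As a taste of the Linux bridging piece, the sketch below builds a software bridge and attaches a physical NIC to it, which is essentially what oVirt does when it creates its management network bridge. It wraps the iproute2 'ip' command; br0 and eth1 are placeholder names, and it must run as root.

  #!/usr/bin/env python3
  """Create a Linux bridge and enslave a NIC to it (sketch;
  br0 and eth1 are placeholders, run as root)."""
  import subprocess

  def ip(*args):
      # Thin wrapper around the iproute2 'ip' command.
      subprocess.run(["ip", *args], check=True)

  ip("link", "add", "name", "br0", "type", "bridge")  # software bridge
  ip("link", "set", "eth1", "master", "br0")          # attach the NIC
  ip("link", "set", "br0", "up")
  ip("link", "set", "eth1", "up")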

The theme of this series is "Open Source," but if you already have a managed small business switch or router, save the money and use it. This assumes the device's performance, especially the switch's, is acceptable for your use case.

Switch

Requirements

  • Cost
  • Open source
  • Ease of use
  • Port count

Selection

  • Ubiquiti Networks' EdgeSwitch

Ubiquiti Networks' switch models have 5, 8, 16, 24, and 48 ports and range in price from $85 to $800. Its network OS is not open source, but it is based on a fork of Vyatta, which was an open source community network OS.

Router

Requirements

  • Cost
  • Open source
  • Ease of use
  • Port count

Selection

  • Ubiquiti Networks' EdgeRouter

EdgeRouter models have 3, 4, 5, and 8 ports, range from $95 to $325 USD, and run a network OS that is also based on Vyatta.

Next

The next article in the series is Cluster Specifications.