# Pleiades
Pleiades was the master node of the old Pleiades cluster; after that cluster was succeeded by Hyades in 2013, it was repurposed as the web server of the new cluster. Pleiades is a Dell PowerEdge 2950 server, equipped with two (2x) quad-core Intel Clovertown Xeon E5345 processors at 2.33 GHz, and 32 GB of memory.
The RAID controller on Pleiades is a PERC (PowerEdge Expandable RAID controller) 5:
```
# lspci | grep RAID
02:0e.0 RAID bus controller: Dell PowerEdge Expandable RAID controller 5
```
After we start OMSA (OpenManage Server Administrator) services (see also Dell OpenManage):
```
# /opt/dell/srvadmin/sbin/srvadmin-services.sh start
```
we can query the storage subsystem:
```
# omreport storage controller
Controller PERC 5/i Integrated (Embedded)

# omreport storage pdisk controller=0
```
which lists four (4x) 2TB nearline Seagate SAS drives on the PERC 5/i RAID controller.
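If omreport returns nothing, it is worth confirming the OMSA services actually came up; srvadmin-services.sh also accepts a status argument. Individual drives can then be queried by ID (the 0:0:0 ID below is a placeholder; use an ID from the pdisk listing):

```
# confirm the OMSA services are running
/opt/dell/srvadmin/sbin/srvadmin-services.sh status

# details for a single drive; replace 0:0:0 with an ID from the listing above
omreport storage pdisk controller=0 pdisk=0:0:0
```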
A RAID-10 virtual disk (volume) is created from the 4 physical disks:
```
# omreport storage vdisk controller=0
List of Virtual Disks on Controller PERC 5/i Integrated (Embedded)

Controller PERC 5/i Integrated (Embedded)
ID                        : 0
Status                    : Ok
Name                      : VD0
State                     : Ready
Hot Spare Policy violated : Not Assigned
Encrypted                 : Not Applicable
Layout                    : RAID-10
Size                      : 3,725.00 GB (3999688294400 bytes)
Device Name               : /dev/sda
Bus Protocol              : SAS
Media                     : HDD
Read Policy               : No Read Ahead
Write Policy              : Write Through
Cache Policy              : Not Applicable
Stripe Element Size       : 64 KB
Disk Cache Policy         : Disabled
```
The size is what we'd expect for RAID-10: half of the 4 x 2 TB raw capacity, i.e. 3999688294400 bytes ≈ 3,725 GiB (omreport's "GB" are binary gigabytes).
There are 2 on-board Broadcom GbE interfaces (eth0 & eth1):
```
# lspci | grep Ethernet
05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)
09:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)
```
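To confirm which kernel driver backs each on-board port (the BCM5708 is normally driven by bnx2), ethtool can report the bound driver:

```
# driver and firmware details for the on-board NICs
ethtool -i eth0
ethtool -i eth1
```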
We bought a Mellanox ConnectX-2 VPI dual-port adapter (at $695.41 in February 2013) for Pleiades:
```
# lspci
0c:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0)
```
There are 2 QSFP ports on the Mellanox ConnectX-2 VPI adapter. One benefit of the ConnectX-2 VPI adapter is that each port can be individually configured as either InfiniBand or 10 GbE! On Pleiades, the first port is configured as IB, connecting to the QDR InfiniBand fabric; and the second as 10 GbE, connecting to the 10 GbE Dell 8132F switch via a QSFP-to-SFP+ cable adapter (at $29.98). Here are the steps to configure the port setting of the Mellanox ConnectX-2 VPI adapter:
1. Determine the PCI ID of the adapter. The lspci output above tells us the PCI ID of the Mellanox ConnectX-2 VPI adapter is 0000:0c:00.0. When the mlx4_core module is loaded, you can also find a directory 0000:0c:00.0 in /sys/bus/pci/drivers/mlx4_core/.
2. Append the following line to /etc/rdma/mlx4.conf:
```
0000:0c:00.0 ib eth
```
The format is:
```
<pci_device_of_card> <port1_type> <port2_type>
```
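The file is read at boot (on RHEL-family systems, by the rdma service), but the port types can also be inspected and set at runtime through sysfs. A minimal sketch, assuming the stock mlx4 sysfs layout:

```
# current port types; each file reads back "ib" or "eth"
cat /sys/bus/pci/devices/0000:0c:00.0/mlx4_port1
cat /sys/bus/pci/devices/0000:0c:00.0/mlx4_port2

# switch the second port to Ethernet on the fly
echo eth > /sys/bus/pci/devices/0000:0c:00.0/mlx4_port2
```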
The IB port uses the kernel module mlx4_ib (Mellanox ConnectX HCA InfiniBand driver); and the 10 GbE port uses mlx4_en (Mellanox ConnectX HCA Ethernet driver). Both mlx4_ib and mlx4_en depend on mlx4_core (Mellanox ConnectX HCA low-level driver).
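A quick way to confirm the whole driver stack is in place (a sketch; module loading is usually automatic):

```
# mlx4_core should list mlx4_ib and mlx4_en among its users
lsmod | grep mlx4

# load the per-port drivers by hand if they were not pulled in automatically
modprobe mlx4_ib
modprobe mlx4_en
```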
The consistent network device name for the 10 GbE port is eth2, set in /etc/udev/rules.d/70-persistent-net.rules:
```
# PCI device 0x15b3:0x673c (mlx4_core)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:02:c9:29:65:5f", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"
```
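The rule can be exercised without a reboot; a sketch, assuming udevadm is available:

```
# re-read the udev rules, then check that the Mellanox port shows up as eth2
udevadm control --reload-rules
ip link show eth2
```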
The IP addresses of the network interfaces are (see also Hyades Networks):
| Interface | IP Address | Network | Subnet | Netmask |
|-----------|------------|---------|--------|---------|
| eth0 | 10.6.8.2 | Private GbE | 10.6.0.0 | 255.255.0.0 |
| eth0:1 | 10.9.8.22 | IPMI | 10.9.0.0 | 255.255.0.0 |
| eth1 | 172.16.6.101 | Huawei Private | 172.16.6.0 | 255.255.255.0 |
| eth2 | 128.114.126.230 | Public 10 GbE | 128.114.126.224 | 255.255.255.224 |
| eth2:1 | 128.114.126.231 | Public 10 GbE | 128.114.126.224 | 255.255.255.224 |
| ib0 | 10.8.8.2 | IPoIB | 10.8.0.0 | 255.255.0.0 |
| ipmi0 | 10.9.8.2 | IPMI | 10.9.0.0 | 255.255.0.0 |
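For reference, the static addressing above maps onto RHEL-style ifcfg files. A minimal sketch for the public 10 GbE port and its alias (assuming the usual /etc/sysconfig/network-scripts layout; not copied from Pleiades itself):

```
# /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=static
IPADDR=128.114.126.230
NETMASK=255.255.255.224

# /etc/sysconfig/network-scripts/ifcfg-eth2:1
DEVICE=eth2:1
ONBOOT=yes
IPADDR=128.114.126.231
NETMASK=255.255.255.224
```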