# XFS
XFS is a high-performance, 64-bit journaling file system. XFS excels at parallel input/output (I/O) operations, and it is the default file system of Red Hat Enterprise Linux 7, replacing ext4[1]. We'll create an XFS file system on the RAID-6 volume of UCSC's Data Transfer Node (dtn).
The RAID controller of dtn is a PERC H700 Integrated, which is actually a rebranded LSI MegaRAID SAS 2108:
```
# lspci | grep RAID
02:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 05)
```

We'll install Dell OpenManage Server Administrator (OMSA) to manage the RAID controller.
Install and start IPMI:
```
# yum -y install OpenIPMI ipmitool
# chkconfig ipmi on
# service ipmi start
```
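Once the service is up, a quick sanity check that the kernel can talk to the baseboard management controller (the exact fields in the output vary by platform):

```
# ipmitool mc info    # query the management controller over the local IPMI interface
```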
Set up the Dell Linux Repository[2]:
```
# wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash
```
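If the bootstrap script succeeds, the Dell repository should show up in yum's repo list (the exact repo id differs between OMSA releases):

```
# yum repolist | grep -i dell    # confirm the Dell hardware repo was registered
```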
Install OpenManage Server Administrator:
```
# yum install srvadmin-base srvadmin-omcommon srvadmin-storageservices
```
We can use the script /opt/dell/srvadmin/sbin/srvadmin-services.sh to start, stop, restart, or check the status of all the services installed by OMSA, e.g.:
```
# /opt/dell/srvadmin/sbin/srvadmin-services.sh status
```
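For example, after a reboot the whole OMSA stack can be brought up with the same script:

```
# /opt/dell/srvadmin/sbin/srvadmin-services.sh start
```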
We can display the properties of the virtual disks (volumes) on the RAID controller:
```
# omreport storage vdisk controller=0
List of Virtual Disks on Controller PERC H700 Integrated (Slot 4)

Controller PERC H700 Integrated (Slot 4)
ID                                : 0
Status                            : Ok
Name                              : Virtual Disk 0
State                             : Ready
Hot Spare Policy violated         : Not Assigned
Encrypted                        : No
Layout                            : RAID-1
Size                              : 136.13 GB (146163105792 bytes)
T10 Protection Information Status : No
Associated Fluid Cache State      : Not Applicable
Device Name                       : /dev/sda
Bus Protocol                      : SAS
Media                             : HDD
Read Policy                       : Adaptive Read Ahead
Write Policy                      : Write Back
Cache Policy                      : Not Applicable
Stripe Element Size               : 64 KB
Disk Cache Policy                 : Disabled

ID                                : 1
Status                            : Ok
Name                              : DATA
State                             : Ready
Hot Spare Policy violated         : Not Assigned
Encrypted                        : No
Layout                            : RAID-6
Size                              : 9,310.00 GB (9996536381440 bytes)
T10 Protection Information Status : No
Associated Fluid Cache State      : Not Applicable
Device Name                       : /dev/sdb
Bus Protocol                      : SAS
Media                             : HDD
Read Policy                       : Adaptive Read Ahead
Write Policy                      : Write Back
Cache Policy                      : Not Applicable
Stripe Element Size               : 64 KB
Disk Cache Policy                 : Disabled
```
Note that the Stripe Element Size of the RAID-6 virtual disk (volume) is 64 KB; we'll use this value when we specify the stripe geometry for the XFS file system.
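The other ingredient of the stripe geometry is the number of data disks in the stripe. Presumably the 9,310 GB RAID-6 set is built from 12 roughly 1 TB drives, which, minus the 2 parity disks per stripe, leaves 10 data disks; the member disks behind the DATA volume can be listed with omreport to confirm the count:

```
# omreport storage pdisk controller=0 vdisk=1    # list the physical disks backing virtual disk 1 (DATA)
```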
The RAID-6 volume is /dev/sdb on dtn:
```
# cat /proc/partitions
major minor  #blocks  name
   8       16 9762242560 sdb
```
We'll use GNU Parted to partition it.
Install GNU Parted:
```
# yum install parted
```
Partition the RAID-6 volume:
```
# parted /dev/sdb
(parted) mklabel gpt
(parted) unit s
(parted) print free
Model: DELL PERC H700 (scsi)
Disk /dev/sdb: 19524485120s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End           Size          File system  Name  Flags
        34s    19524485086s  19524485053s  Free Space

(parted) mkpart primary 0 100%
Warning: You requested a partition from 0s to 19524485119s.
The closest location we can manage is 34s to 19524485086s.
Is this still acceptable to you?
Yes/No? Yes
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Cancel
(parted) mkpart primary 0% 100%
(parted) print free
Model: DELL PERC H700 (scsi)
Disk /dev/sdb: 19524485120s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start         End           Size          File system  Name     Flags
        34s           2047s         2014s         Free Space
 1      2048s         19524483071s  19524481024s               primary
        19524483072s  19524485086s  2015s         Free Space

(parted) quit
```
Note that when we ran `mkpart primary 0 100%`, we got the warning "The resulting partition is not properly aligned"; `mkpart primary 0% 100%`, however, created a properly aligned partition. This method is much easier than manually computing the start and end sectors of the partition[3].
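For an unattended setup, the same aligned layout can be produced non-interactively, and recent versions of parted can check the alignment for us; a minimal sketch:

```
# parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%   # script mode: GPT label plus an aligned partition
# parted /dev/sdb align-check optimal 1                   # report whether partition 1 is optimally aligned
```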
Install XFS tools:
```
# yum install xfsdump xfsprogs
```
Create an XFS file system:
```
# mkfs.xfs -d su=64k,sw=10 /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=32, agsize=76267504 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2440560128, imaxpct=5
         =                       sunit=16     swidth=160 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```

Note that to specify stripe geometry for XFS, we use the following `mkfs.xfs` sub-options[4]:
- `su=value`: Specifies a stripe unit or RAID chunk size. The value must be specified in bytes, with an optional k, m, or g suffix.
- `sw=value`: Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe.
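For our volume, su matches the controller's 64 KB Stripe Element Size and sw=10 is the number of data disks in the RAID-6 set. The sunit/swidth figures that mkfs.xfs printed above (and that will later appear in /proc/mounts) are this same geometry expressed in other units, which we can double-check with a little shell arithmetic:

```
# su_bytes=$((64*1024))            # stripe unit: 64 KiB in bytes
# echo $((su_bytes / 4096))        # sunit in 4 KiB filesystem blocks -> 16  (mkfs.xfs output)
# echo $((su_bytes / 4096 * 10))   # swidth in 4 KiB blocks           -> 160
# echo $((su_bytes / 512))         # sunit in 512 B sectors           -> 128 (/proc/mounts)
# echo $((su_bytes / 512 * 10))    # swidth in 512 B sectors          -> 1280
```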
Assign a label to the XFS file system:
```
# xfs_admin -L data /dev/sdb1
writing all SBs
new label = "data"
```
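The label can be read back to confirm it took; as a usage note, the file system can then be referenced as LABEL=data (e.g. in /etc/fstab) rather than by its /dev name, which is robust against device renumbering:

```
# xfs_admin -l /dev/sdb1    # print the current label
label = "data"
```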
Test mounting the XFS file system:
```
# mkdir /data
# mount /dev/sdb1 /data
# chmod 1777 /data
# cat /proc/mounts
/dev/sdb1 /data xfs rw,relatime,attr2,delaylog,logbsize=64k,sunit=128,swidth=1280,noquota 0 0
```
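The stripe geometry should have carried over into the live file system; xfs_info reports the same values as mkfs.xfs did:

```
# xfs_info /data | grep sunit    # expect sunit=16 swidth=160 blks, matching the mkfs.xfs output
```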
Update /etc/fstab:
```
/dev/sdb1    /data    xfs    inode64,nobarrier    1 2
```

Red Hat recommends the mount options inode64 and nobarrier[6].
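nobarrier is only appropriate here because the H700's write-back cache is battery-backed (note the Write Back policy in the omreport output above); on a controller without a healthy battery, write barriers should stay enabled. The battery state can be queried through OMSA:

```
# omreport storage battery controller=0    # confirm the RAID controller battery is present and Ok
```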
Remount the XFS file system:
```
# umount /data
# mount /data
# cat /proc/mounts
/dev/sdb1 /data xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=64k,sunit=128,swidth=1280,noquota 0 0
```
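Finally, a quick check that the full volume is mounted with its expected capacity:

```
# df -h /data    # verify the ~9 TB size and available space
```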