
title: Creating a software RAID array
link: https://onemoretech.wordpress.com/2013/11/30/creating-a-software-raid-array/
author: phil2nc
description:
post_id: 6630
created: 2013/11/30 18:13:22
created_gmt: 2013/11/30 23:13:22
comment_status: closed
post_name: creating-a-software-raid-array
status: publish
post_type: post

Creating a software RAID array

Some notes on setting up a RAID 5 array on Linux. RAID 5 requires a minimum of 3 physical disks. In this example there are 3 unformatted, unpartitioned 1TB disks with device names /dev/sdb, /dev/sdc and /dev/sdd. The boot drive is /dev/sda and will not be part of the RAID array. Once the RAID array is created it will be partitioned with LVM2 and formatted ext4 for mounting on /data (we use LVM to allow maximum flexibility in both growing and shrinking the space available for data, and ext4 as currently the most stable file system for our purposes).

1. Create Linux software RAID partitions.
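Before running gdisk it can help to confirm which devices are the empty data disks, so the boot drive isn't touched by mistake. A quick check (device names here simply follow the example above):

# list block devices with their sizes, types and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT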

gdisk /dev/sdb

At the gdisk prompts, enter the following (in this order):

n (create a new, empty partition)
1 (partition number)
[Enter] (first sector, accept the default)
[Enter] (last sector, default is the rest of the disk)
fd00 (Linux RAID partition type code)
w (write the partition table and exit)

2. Copy this partition scheme to all other disks.

# back up the GPT from /dev/sdb, load it onto the other disks, then randomize their GUIDs
sgdisk --backup=table /dev/sdb
sgdisk --load-backup=table /dev/sdc
sgdisk --load-backup=table /dev/sdd
sgdisk -G /dev/sdc
sgdisk -G /dev/sdd
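One way to confirm the scheme landed on every member is to print each disk's partition table and look for the fd00 (Linux RAID) partition:

# print the partition table of each member disk
for d in /dev/sdb /dev/sdc /dev/sdd; do sgdisk -p "$d"; done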

3. Reboot.

4. Create the RAID device using the mdadm tool.

mdadm --create /dev/md0 --level=5 \
--raid-devices=3 \
/dev/sdb1 /dev/sdc1 /dev/sdd1
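Creating a RAID 5 array starts an initial parity sync that can take several hours on 1TB disks. Its progress can be watched with the standard md status interfaces:

# watch the initial resync and inspect the array's state
cat /proc/mdstat
mdadm --detail /dev/md0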

5. Make /etc/mdadm.conf so the array is reassembled automatically at boot.
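One common way to populate the file, assuming /dev/md0 is the only array on the machine, is to append mdadm's own scan output:

# capture the ARRAY definition reported by mdadm
mdadm --detail --scan >> /etc/mdadm.conf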

6. Assign the LVM physical volume.

pvcreate /dev/md0

7. Create volume group.

vgcreate vg1 /dev/md0

8. Make logical volume.

lvcreate -n data -l 100%FREE /dev/vg1
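At this point the LVM stack can be sanity checked with the standard LVM2 reporting commands:

# confirm the physical volume, volume group and logical volume all look right
pvs
vgs
lvs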

9. Format ext4.

mkfs.ext4 /dev/vg1/data

10. Mount the disk array.

mkdir /data
# mount the logical volume; the ext4 filesystem lives on the LV, not on /dev/md0 directly
mount /dev/vg1/data /data

11. Update /etc/fstab.

/dev/vg1/data /data ext4 defaults 0 0
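An alternative, for anyone who prefers not to depend on device names, is to mount by filesystem UUID. blkid reports the value; the fstab entry would then look like the sketch below (the UUID shown is only a placeholder):

# look up the filesystem UUID of the new logical volume
blkid /dev/vg1/data
# example fstab entry using that UUID (substitute the real value)
UUID=<uuid-from-blkid>  /data  ext4  defaults  0 0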

12. Reboot and run df -h once the system is back up to verify the disk is mounted and reporting the expected size (2/3 of the raw total, since parity consumes one disk's worth of space in a 3-disk RAID 5 array; in my case of 3x 1TB disks this turned out to be 1.8 TB after formatting). Copy some data to validate read/write operations (a quick check is sketched after the df output below).

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root   46G  5.4G   38G  13% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G   80K  3.9G   1% /dev/shm
tmpfs                    3.9G  792K  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                    3.9G   36K  3.9G   1% /tmp
/dev/sda1                477M  101M  347M  23% /boot
/dev/mapper/fedora-home  7.6G   61M  7.1G   1% /home
/dev/mapper/vg1-data     1.8T  125G  1.6T   8% /data
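As a simple read/write validation, a hypothetical test file can be written to the array, flushed to disk, read back and then removed (file name and size are arbitrary):

# write a test file, sync it to disk, read it back with a checksum, then clean up
dd if=/dev/urandom of=/data/testfile bs=1M count=256 conv=fsync
md5sum /data/testfile
rm /data/testfile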

Copyright 2004-2019 Phil Lembo