raid via mdadm
https://techjourney.net/create-migrate-existing-working-redhat-centos-system-to-raid-1-raid-10/
https://www.howtoforge.com/set-up-raid1-on-a-running-lvm-system-debian-etch
http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/
https://raid.wiki.kernel.org/index.php/RAID_setup
https://lists.debian.org/debian-user/2015/04/msg00761.html
https://gist.github.com/plepe/52ecc9f18efb32c68d18
- Show current situation:
mdadm --detail /dev/md? | grep dev
cat /proc/mdstat
mdadm --detail --scan
mdadm --examine --scan
mdadm --examine /dev/sd??
- Create an array:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
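For comparison, a hedged example of creating a four-disk RAID10 array; the array name and device names below are only illustrative:
mdadm --create --verbose /dev/md10 --level=10 --raid-devices=4 /dev/sd[b-e]1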
- Create the config file:
mdadm --detail --scan >> /etc/mdadm.conf
- Remove drive:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
- Add drive:
mdadm --add /dev/md0 /dev/sdb1
mdadm --grow --raid-devices=4 /dev/md1 --add /dev/sdb1 /dev/sdc1
- Reduce the number of RAID devices to 1 to prevent a resync in case the drives being added contain stale or corrupted RAID metadata:
mdadm --grow --raid-devices=1 /dev/md1 --force
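To confirm the array is now down to a single member before adding the new drives, the detail output can be checked (the grep pattern matches standard mdadm --detail fields):
mdadm --detail /dev/md1 | grep -E 'Raid Devices|State'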
- Check the current situation:
mdadm --examine --scan
cat /proc/mdstat
- Remove any RAID metadata from the new disks, just in case RAID was set up on the “new” disks before:
mdadm --stop --scan
mdadm --zero-superblock /dev/sdb{1..9}
- Change the partition table of the disk to MBR with the following commands. Skip this step if your disk does not have a GPT partition table:
parted /dev/sdb
(parted) mktable msdos
(parted) quit
Answer Y(es) when prompted with “Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?”
- Send the partition table update to the kernel:
partprobe /dev/sdb
- Partition the new drive to match the existing RAID drive:
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
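To verify that the copy worked, the two partition tables can be listed side by side (device names as above):
fdisk -l /dev/sda /dev/sdb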
- Set the partition type to Linux RAID Autodetect:
fdisk /dev/sdb
Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Repeat the process for the rest of the partitions on the disk, then make the change permanent:
Command (m for help): w
- Then send the update to the kernel again:
partprobe /dev/sdb
- Install GRUB onto the drives from the GRUB shell:
grub> root (hd0,0)
grub> setup (hd0)
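To make the second disk bootable as well, a commonly used sequence in the GRUB legacy shell remaps it as hd0 first; the disk name below is an example:
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit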
- Add the drives to begin synchronization:
mdadm --grow --raid-devices=3 /dev/md1 --add /dev/sdb1 /dev/sdc1
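The rebuild can be followed while it runs:
watch "cat /proc/mdstat"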
- Launch the Rescue DVD.
- Zero the software RAID superblocks:
mdadm --zero-superblock /dev/sdb{1..9}
- Change the partition type from fd (Linux RAID autodetect) to 83 (Linux) for the /dev/sda1 partition and to 8e (Linux LVM) for the /dev/sda{2..4} partitions:
fdisk /dev/sda
Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): 83
Command (m for help): w
- Activate the logical volumes:
vgchange -a y
- Run fsck on the partitions to check the file system integrity:
fsck -fy /dev/sda1
fsck -fy /dev/mapper/vg1-*
- Mount /dev/mapper/vg1-root at /mnt/root.
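A minimal sketch of that mount, assuming /mnt/root still needs to be created in the rescue environment:
mkdir -p /mnt/root
mount /dev/mapper/vg1-root /mnt/root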
- Change the /boot mount entry in the fstab file from the /dev/md1 device to the /dev/sda1 partition:
sed -i -e 's!/dev/md1!/dev/sda1!g' /mnt/root/etc/fstab
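A quick way to confirm the substitution took effect (the grep pattern is just an example):
grep -E 'md1|sda1' /mnt/root/etc/fstab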
- Bring the LVM volume group back up:
vgchange -a y vg1
- Mount /dev/mapper/vg1-root at /mnt/root (as above).
- Check the current situation with the physical volumes, volume groups, logical volumes, and the file systems' sizes/usage:
pvs
vgs
lvs
df -TH
- Check the integrity of the file system on the LVM we are going to resize:
fsck -fy /dev/mapper/vg1-root
- Unmount the LVMs:
umount -l /dev/mapper/vg1*
- Shrink the filesystem on the LVM to slightly below the target LV size:
resize2fs /dev/mapper/vg1-root 900G
- Reduce the LVM to the target size:
lvreduce -L 950G /dev/mapper/vg1-root
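Because the filesystem was shrunk below the new LV size, it can now be re-checked and grown back to fill the LV exactly; a short sketch:
fsck -fy /dev/mapper/vg1-root
resize2fs /dev/mapper/vg1-root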
- Check the physical volume usage across the physical partitions to make sure the data to be moved will fit on the destination partition:
pvs -o+pv_used
- Move the physical volume's data to the destination partition:
pvmove /dev/sda2 /dev/sda3
- Remove the source partition from the volume group, then delete the partition:
vgreduce vg1 /dev/sda2
parted /dev/sda
(parted) rm 2
- Optionally, clone the partitions to a new drive:
dd if=/dev/sda1 of=/dev/sdb1 conv=notrunc   # for the boot partition
To move the LVMs from sda2 to sdb2, just add sdb2 to the VG pool, move the PV data to sdb2, and remove sda2 (see the sketch below).
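A sketch of that PV move, assuming the vg1 volume group and the sda2/sdb2 partitions used above:
pvcreate /dev/sdb2
vgextend vg1 /dev/sdb2
pvmove /dev/sda2 /dev/sdb2
vgreduce vg1 /dev/sda2
pvremove /dev/sda2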
mdadm --examine --scan
mdadm --examine --scan >> /etc/mdadm.conf
mdadm --assemble --scan
mdadm --examine --scan
mdadm --detail
cat /proc/mdstat
watch "mdadm --detail /dev/md? | grep dev"
watch "cat /proc/mdstat"
UUIDs:
blkid /dev/md1
blkid /dev/sda1
pvdisplay | grep UUID
vgdisplay | grep UUID
lvdisplay | grep UUID
mdadm --examine --scan
ll /dev/disk/by-uuid/
mdadm --stop --scan
mdadm --create --verbose /dev/md1 --metadata=0.9 --level=1 --raid-devices=4 missing /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --zero-superblock /dev/sdb{1..9}
dracut --mdadmconf --add-drivers "raid1 raid10" --filesystems "ext4 ext3 swap tmpfs devpts sysfs proc" --force /boot/initramfs-$(uname -r).img $(uname -r)
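To sanity-check that the rebuilt initramfs actually picked up the mdadm configuration and RAID drivers, it can be inspected with lsinitrd (available on dracut-based systems):
lsinitrd /boot/initramfs-$(uname -r).img | grep -iE 'mdadm|raid'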
mdadm --manage /dev/md1 -a /dev/sdb1
OR
mdadm --add /dev/md1 /dev/sdb1
- Remove a physical volume and refresh the volume group:
pvremove /dev/sdX#
vgchange --refresh vg1
- Record the arrays found on the disks in /etc/mdadm.conf:
mdadm --examine --scan
mdadm --examine --scan >> /etc/mdadm.conf
- Assemble the arrays from the recorded configuration and check their status:
mdadm --assemble --scan
cat /proc/mdstat
- Reboot.
Scenario for the following walkthrough: boot from sda, sdb is a clone of sda, and the RAID device is created on sdc.
df -h
fdisk -l
Make sure to begin with a clean slate (the scan should return nothing):
mdadm --examine --scan > /etc/mdadm.conf   # should return nothing/empty
mdadm --assemble --scan
mdadm --detail --scan
cat /proc/mdstat
Shut down all RAID arrays:
mdadm --stop --scan
Configure the partition table to MBR (MSDOS):
parted /dev/sdc
(parted) mktable msdos
(parted) quit
partprobe /dev/sdc
Partition the target HD (sdc) according to the partition layout of the source HD (sdb):
sfdisk -d /dev/sdb | sfdisk --force /dev/sdc
Using the fdisk utility, set the partition type to Linux RAID Autodetect:
fdisk /dev/sdc
Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
Make sure to remove any existing RAID metadata from the new disk:
mdadm --stop --scan
mdadm --zero-superblock /dev/sdc{1..9}
Send the update to the kernel:
partprobe /dev/sdc
Load the required RAID modules into the kernel:
modprobe raid1
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10]
unused devices: <none>
mdadm --create --verbose /dev/md1 --metadata=0.9 --level=1 --raid-devices=2 missing /dev/sdc1
mdadm --create --verbose /dev/md2 --metadata=0.9 --level=1 --raid-devices=2 missing /dev/sdc2
mdadm --create --verbose /dev/md3 --metadata=0.9 --level=1 --raid-devices=2 missing /dev/sdc3
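At this point each array should be running degraded, with one slot reported as missing; this can be confirmed per array (field names are standard mdadm --detail output):
mdadm --detail /dev/md1 | grep -E 'State|Active Devices|Working Devices'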
cat /proc/mdstat
[raid1]
md1 : active raid1 sdd1[3]
4189184 blocks 512K chunks 2 near-copies [4/3] [_UUU]
md2 : active raid1 sdd2[3]
1939915776 blocks 512K chunks 2 near-copies [4/3] [_UUU]
bitmap: 15/15 pages [60KB], 65536KB chunk
md3 : active raid1 sdd3[3]
511936 blocks [4/3] [_UUU]
unused devices: <none>
mdadm --examine --scan >> /etc/mdadm.conf
mkfs.ext3 /dev/md1
pvcreate /dev/md2
pvcreate /dev/md3
vgextend vg1 /dev/md2
vgextend vg1 /dev/md3
pvdisplay
vi /etc/fstab
/dev/md1 /boot ext3 defaults 0 0
vi /etc/mtab
/dev/md1 /boot ext3 rw 0 0
vi /boot/grub/menu.lst
[...]
default 0
fallback 1
[...]
Remove rd_NO_DM and rd_NO_MD from the kernel line.
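A hedged sketch of how the edited menu.lst might end up looking, with a fallback stanza that boots from the RAID disk; the kernel version, root LV, and GRUB disk ordering below are placeholders:
default 0
fallback 1

title CentOS (boot from first disk)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-xxx ro root=/dev/mapper/vg1-root rhgb quiet
        initrd /initramfs-2.6.32-xxx.img

title CentOS (boot from RAID disk)
        root (hd2,0)
        kernel /vmlinuz-2.6.32-xxx ro root=/dev/mapper/vg1-root rhgb quiet
        initrd /initramfs-2.6.32-xxx.img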
mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old
dracut --mdadmconf --add-drivers "raid1 raid10" --filesystems "ext4 ext3 swap tmpfs devpts sysfs proc" --force /boot/initramfs-$(uname -r).img $(uname -r)
pvmove /dev/sdb2 /dev/md2
pvmove /dev/sdb3 /dev/md3
vgreduce vg1 /dev/sdb2
vgreduce vg1 /dev/sdb3
pvremove /dev/sdb2
pvremove /dev/sdb3
pvdisplay
fdisk /dev/sdb
Command (m for help): <- t
Partition number (1-5): <- 5
Hex code (type L to list codes): <- fd
Changed system type of partition 5 to fd (Linux raid autodetect)
Command (m for help): <- w
mdadm --add /dev/md1 /dev/sdb1
mdadm --add /dev/md2 /dev/sdb2
mdadm --add /dev/md3 /dev/sdb3
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdc1[2] sdb1[1]
4988032 blocks [2/1] [_U]
[==>..........] recovery = 52.5% (2623232/4988032) finish=0.5min speed=74705K/sec
md2 : active raid1 sdc2[2] sdb2[1]
4988032 blocks [2/1] [_U]
[==>..........] recovery = 52.5% (2623232/4988032) finish=0.5min speed=74705K/sec
md3 : active raid1 sdc3[2] sdb3[1]
4988032 blocks [2/1] [_U]
[==>..........] recovery = 52.5% (2623232/4988032) finish=0.5min speed=74705K/sec
unused devices: <none>
mkdir -p /mnt/md1
mount /dev/md1 /mnt/md1
cd /boot
cp -dpRx . /mnt/md1
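To sanity-check the copy, the two trees can be compared (a lost+found entry showing up on only one side is harmless):
diff -r /boot /mnt/md1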