I have a server at Hetzner on which I enabled RAID10 by setting SWRAID to 1 and SWRAIDLEVEL to 10. But I'm confused when I look at /proc/mdstat, as it lists both RAID1 and RAID10 arrays. If you would please have a look and help me understand, it would be greatly appreciated.
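For reference, I believe the relevant lines in the installimage config looked roughly like this (quoting from memory, so the exact syntax may be slightly off):
Code:
SWRAID 1
SWRAIDLEVEL 10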
Code:
# cat /proc/mdstat
Personalities : [raid1] [raid10]
md3 : active raid10 sdd4[0] sda4[3] sdc4[2] sdb4[1]
7533800448 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 1/57 pages [4KB], 65536KB chunk
md2 : active raid10 sdd3[0] sda3[3] sdc3[2] sdb3[1]
262012928 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 1/2 pages [4KB], 65536KB chunk
md1 : active raid1 sdd2[0] sda2[3] sdc2[2] sdb2[1]
523712 blocks super 1.2 [4/4] [UUUU]
md0 : active raid1 sdd1[0] sda1[3] sdc1[2] sdb1[1]
8380416 blocks super 1.2 [4/4] [UUUU]
Mount points
Code:
# cat /etc/fstab
proc /proc proc defaults 0 0
/dev/md/0 none swap sw 0 0
/dev/md/1 /boot ext3 defaults 0 0
/dev/md/2 / ext4 defaults 0 0
/dev/md/3 /home ext4 defaults 0 0
RAID10 detail sample
Code:
# mdadm --detail /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Tue Jan 31 19:45:23 2017
Raid Level : raid10
Array Size : 7533800448 (7184.79 GiB 7714.61 GB)
Used Dev Size : 3766900224 (3592.40 GiB 3857.31 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Feb 2 19:40:38 2017
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : rescue:3
UUID : f5d2b235:8f514355:f069c3cd:48360d55
Events : 20324
Number Major Minor RaidDevice State
0 8 52 0 active sync set-A /dev/sdd4
1 8 20 1 active sync set-B /dev/sdb4
2 8 36 2 active sync set-A /dev/sdc4
3 8 4 3 active sync set-B /dev/sda4
RAID1 detail sample
Code:
# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Jan 31 19:45:19 2017
Raid Level : raid1
Array Size : 523712 (511.52 MiB 536.28 MB)
Used Dev Size : 523712 (511.52 MiB 536.28 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Feb 1 22:24:49 2017
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Name : rescue:1
UUID : a11746d6:7dda85b6:b17da174:74d6f7c2
Events : 25
Number Major Minor RaidDevice State
0 8 50 0 active sync /dev/sdd2
1 8 18 1 active sync /dev/sdb2
2 8 34 2 active sync /dev/sdc2
3 8 2 3 active sync /dev/sda2
My interpretation of the information above is that the swap space and /boot are actually just RAID1, while / and /home are RAID10. I feel bad for not having a stronger grasp of what I'm seeing, even though I have never played around with mdadm before. I have only barely learned how to replace a drive if one goes bad, but I want to understand it fully. Maybe my interpretation is right, but then I don't get why Hetzner would call the setup fully RAID10.
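To double-check that reading, I figure something like this should pull the level out of mdadm --detail for each array (just a sketch I pieced together from the man page):
Code:
# for md in /dev/md0 /dev/md1 /dev/md2 /dev/md3; do
    echo "== $md =="
    mdadm --detail "$md" | grep -E 'Raid Level|Array Size'
  done
Given the /proc/mdstat output above, it should report raid1 for md0 and md1, and raid10 for md2 and md3.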
Thanks. And hi, my first post. 8)
EDIT: Could this be the right interpretation?
Code:
        SDA   SDB   SDC   SDD
md0:     M     M     M     M     RAID1  = all four partitions mirrored
md1:     M     M     M     M     RAID1  = all four partitions mirrored
md2:    S+M   S+M   S+M   S+M    RAID10 = mirrored, striped partitions
md3:    S+M   S+M   S+M   S+M    RAID10 = mirrored, striped partitions
This would make /boot and swap extra sturdy, since every block there is mirrored on all four drives instead of just two.
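If that picture is right, I suppose something like lsblk would show the same partition-to-array mapping repeated on every drive (just a sketch; the exact columns available may vary):
Code:
# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sda /dev/sdb /dev/sdc /dev/sdd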
I think to create a "real" pure RAID1+0 volume, it would go something like this:
Code:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/{sda,sdb}
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/{sdc,sdd}
# mdadm --create /dev/md2 --run --level=0 --raid-devices=2 /dev/md{0,1}
One could add as many RAID1 arrays into the stripe as wanted, depending on the number of drives available, or mix it up with bigger mirror groups of e.g. three drives if you had six or nine drives, and so on. Am I getting this somewhat right? I'm very new to mdadm, but usually a quick study.
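For example, with six drives I imagine the nested version would look something like this (purely a sketch of my understanding, device and array names made up):
Code:
# mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sd{a,b}
# mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sd{c,d}
# mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sd{e,f}
# mdadm --create /dev/md13 --run --level=0 --raid-devices=3 /dev/md1{0,1,2}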