LinuxQuestions.org

Is this really raid10? (https://www.linuxquestions.org/questions/linux-newbie-8/is-this-really-raid10-4175598819/)

thirdbird 02-02-2017 01:45 PM

Is this really raid10?
 
I have a server at Hetzner on which I activated RAID10 by setting SWRAID to 1 and SWRAIDLEVEL to 10. But I'm confused when I look at /proc/mdstat, as it indicates both RAID1 and RAID10. If you would please have a look and help me understand, it would be greatly appreciated.
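
For reference, the RAID-related part of my Hetzner install config (installimage, if I remember the name right) looked roughly like this; I'm reconstructing it from memory, so treat the exact lines as approximate:

Code:

## installimage settings (approximate, from memory)
DRIVE1 /dev/sda
DRIVE2 /dev/sdb
DRIVE3 /dev/sdc
DRIVE4 /dev/sdd
SWRAID 1
SWRAIDLEVEL 10

And this is what /proc/mdstat shows: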

Code:

# cat /proc/mdstat
Personalities : [raid1] [raid10]
md3 : active raid10 sdd4[0] sda4[3] sdc4[2] sdb4[1]
      7533800448 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 1/57 pages [4KB], 65536KB chunk

md2 : active raid10 sdd3[0] sda3[3] sdc3[2] sdb3[1]
      262012928 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 1/2 pages [4KB], 65536KB chunk

md1 : active raid1 sdd2[0] sda2[3] sdc2[2] sdb2[1]
      523712 blocks super 1.2 [4/4] [UUUU]

md0 : active raid1 sdd1[0] sda1[3] sdc1[2] sdb1[1]
      8380416 blocks super 1.2 [4/4] [UUUU]

Mount points
Code:

# cat /etc/fstab
proc /proc proc defaults 0 0
/dev/md/0 none swap sw 0 0
/dev/md/1 /boot ext3 defaults 0 0
/dev/md/2 / ext4 defaults 0 0
/dev/md/3 /home ext4 defaults 0 0

RAID10 detail sample
Code:

# mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Tue Jan 31 19:45:23 2017
    Raid Level : raid10
    Array Size : 7533800448 (7184.79 GiB 7714.61 GB)
  Used Dev Size : 3766900224 (3592.40 GiB 3857.31 GB)
  Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Feb  2 19:40:38 2017
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

        Layout : near=2
    Chunk Size : 512K

          Name : rescue:3
          UUID : f5d2b235:8f514355:f069c3cd:48360d55
        Events : 20324

    Number  Major  Minor  RaidDevice State
      0      8      52        0      active sync set-A  /dev/sdd4
      1      8      20        1      active sync set-B  /dev/sdb4
      2      8      36        2      active sync set-A  /dev/sdc4
      3      8        4        3      active sync set-B  /dev/sda4

RAID1 detail sample
Code:

# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jan 31 19:45:19 2017
    Raid Level : raid1
    Array Size : 523712 (511.52 MiB 536.28 MB)
  Used Dev Size : 523712 (511.52 MiB 536.28 MB)
  Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Feb  1 22:24:49 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

          Name : rescue:1
          UUID : a11746d6:7dda85b6:b17da174:74d6f7c2
        Events : 25

    Number  Major  Minor  RaidDevice State
      0      8      50        0      active sync  /dev/sdd2
      1      8      18        1      active sync  /dev/sdb2
      2      8      34        2      active sync  /dev/sdc2
      3      8        2        3      active sync  /dev/sda2

My interpretation of the information above is that swap space and /boot are actually just RAID1, while / and /home are RAID10. I feel bad for not having a stronger grasp of what I'm seeing, even if I have never played around with mdadm before. I have only barely learned how to replace a drive if one goes bad, but I want to understand it fully. Maybe I am, but I don't get why Hetzner would call it fully RAID10 then.
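
For what it's worth, this is the rough replacement sequence I've picked up so far. It's only a sketch, assuming for example that /dev/sdb had died and been swapped for a blank disk of the same size, and that the disks use GPT:

Code:

# Mark the dead disk's partitions faulty and pull them out of each array
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3
mdadm /dev/md3 --fail /dev/sdb4 --remove /dev/sdb4

# Copy the partition layout from a healthy disk onto the replacement
sgdisk --backup=sda-table /dev/sda
sgdisk --load-backup=sda-table /dev/sdb
sgdisk -G /dev/sdb        # give the new disk fresh GUIDs

# Add the new partitions back and let the arrays resync
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
mdadm /dev/md2 --add /dev/sdb3
mdadm /dev/md3 --add /dev/sdb4
cat /proc/mdstat          # watch the rebuild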

Thanks. And hi, my first post. 8)

EDIT: Could this be the right interpretation?

Code:

SDA        SDB        SDC        SDD
  M      M      M      M        md0 = RAID1 = All Mirrored partitions.
  M      M      M      M        md1 = RAID1 = All Mirrored partitions.
============================== 
| S      S  M  S      S |        md2 = RAID10 = Mirrored Striped partitions.
==============================
| S      S  M  S      S |        md3 = RAID10 = Mirrored Striped partitions.
==============================

This would make /boot and swap extra sturdy, as they have twice the redundancy.

I think to create a "real" pure RAID1+0 volume, it would go something like this:
Code:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/{sda,sdb}
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/{sdc,sdd}
# mdadm --create /dev/md2 --run --level=0 --raid-devices=2 /dev/md{0,1}

One could add as many RAID1 arrays into it as wanted, depending on the number of drives available, or mix it up with bigger mirror groups of e.g. 3 drives if you had 6 or 9 drives, for a different space/redundancy trade-off. Am I getting this somewhat right? I'm very new to mdadm, but usually a quick study.
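
For comparison, I'm guessing the native one-step way to get the md3 layout above would be something like this (just my reading of the mdadm man page, not what Hetzner actually ran):

Code:

# Native mdadm RAID10: "near" layout with 2 copies, 512K chunks, over the four 4th partitions
mdadm --create /dev/md3 --level=10 --layout=n2 --chunk=512 \
      --raid-devices=4 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4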

suicidaleggroll 02-02-2017 02:42 PM

Quote:

Originally Posted by thirdbird (Post 5664042)
My interpretation of the information above is that swap space and /boot are actually just RAID1, while / and /home are RAID10.

Yes, that looks correct.

Quote:

Originally Posted by thirdbird (Post 5664042)
Maybe I am, but I don't get why Hetzner would call it fully RAID10 then.

Well, it doesn't really matter; the advantages of RAID10 over RAID1 are additional storage space and slightly better write performance, neither of which matters for /boot and swap (if you're digging hard into swap on an HDD, the system is going to be unusably slow regardless).
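
To put rough numbers on the space difference, using the ~3592 GiB per-partition size from the md3 detail in your post:

Code:

# Four ~3592 GiB member partitions, md raid10 with 2 near-copies
echo "RAID1 capacity : $(( 4 * 3592 / 4 )) GiB"   # every drive holds a full copy
echo "RAID10 capacity: $(( 4 * 3592 / 2 )) GiB"   # two mirrored pairs, striped -> matches the 7184 GiB in mdadm --detail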

suicidaleggroll 02-02-2017 02:47 PM

Quote:

Originally Posted by thirdbird (Post 5664042)
EDIT: Could this be the right interpretation?

Code:

SDA        SDB        SDC        SDD
  M      M      M      M        md0 = RAID1 = All Mirrored partitions.
  M      M      M      M        md1 = RAID1 = All Mirrored partitions.
============================== 
| S      S  M  S      S |        md2 = RAID10 = Mirrored Striped partitions.
==============================
| S      S  M  S      S |        md3 = RAID10 = Mirrored Striped partitions.
==============================


Sort of, but what you've drawn is RAID01; RAID10 is reversed.
RAID10 is preferable because it's more fault-tolerant. Take 4 drives split into two pairs, then randomly remove 2 of the drives. RAID10 can only survive losing 2 drives if they're from opposite pairs; RAID01 can only survive if they're from the same pair. There's a 67% chance that the two drives you remove will be from opposite pairs and a 33% chance they're from the same pair. That means you want a stripe of mirrors (RAID10) rather than a mirror of stripes (RAID01), since RAID10 is twice as likely as RAID01 to survive a second drive failure.
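
If you want to spell those odds out, here's a throwaway enumeration (hypothetical drives a-d with mirror pairs (a,b) and (c,d), nothing to do with your actual disks):

Code:

# All 6 ways to lose two of the four drives, checked against a stripe of mirrors
for failed in "a b" "a c" "a d" "b c" "b d" "c d"; do
    case "$failed" in
        "a b"|"c d") echo "lose $failed -> one mirror pair completely gone: RAID10 dies" ;;
        *)           echo "lose $failed -> each mirror keeps a survivor: RAID10 lives" ;;
    esac
done
# 4 of the 6 combinations survive (67%), the 2 same-pair ones (33%) are fatal.
# For RAID01 the logic flips: only the 2 same-stripe combinations survive.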

Quote:

Originally Posted by thirdbird (Post 5664042)
This would make /boot and swap extra sturdy, as they have twice the redundancy.

Who needs /boot and swap when you've already lost / and /home? ;)

thirdbird 02-02-2017 02:53 PM

Quote:

Originally Posted by suicidaleggroll (Post 5664076)
Sort of, but what you've drawn is RAID01; RAID10 is reversed.


Who needs /boot and swap when you've already lost / and /home? ;)

It was the closest I could come to explaining to myself the setup they have created :)

I realize what you're saying about it being a 0+1 drawing. But with 4 drives like that, 1+0 looks exactly the same, doesn't it? With e.g. 6 drives I could have drawn 3 striped groups, which would have represented 1+0 more explicitly, while 0+1 would still have been 2 groups with 3 drives in each. Does that make sense?
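
Written out as nested mdadm commands for a made-up 6-drive box (sda through sdf, purely illustrative), the two shapes I mean would be roughly:

Code:

# 1+0: three 2-way mirrors, striped together
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sde /dev/sdf
mdadm --create /dev/md20 --run --level=0 --raid-devices=3 /dev/md10 /dev/md11 /dev/md12

# 0+1 (the alternative shape, not stacked on the above): two 3-drive stripes, mirrored
mdadm --create /dev/md10 --level=0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
mdadm --create /dev/md11 --level=0 --raid-devices=3 /dev/sdd /dev/sde /dev/sdf
mdadm --create /dev/md20 --run --level=1 --raid-devices=2 /dev/md10 /dev/md11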

EDIT: Wait a bit.. I think I know what you're saying now... How's this:
Code:

SDA        SDB        SDC        SDD
  M      M      M      M        md0 = RAID1 = All Mirrored partitions.
  M      M      M      M        md1 = RAID1 = All Mirrored partitions.
============================== 
| M      M  S  M      M |        md2 = RAID1+0 = Striped Mirrored partitions.
==============================
| M      M  S  M      M |        md3 = RAID1+0 = Striped Mirrored partitions.
==============================


suicidaleggroll 02-02-2017 02:57 PM

Quote:

Originally Posted by thirdbird (Post 5664079)
EDIT: Wait a bit.. I think I know what you're saying now... How's this:
Code:

SDA        SDB        SDC        SDD
  M      M      M      M        md0 = RAID1 = All Mirrored partitions.
  M      M      M      M        md1 = RAID1 = All Mirrored partitions.
============================== 
| M      M  S  M      M |        md2 = RAID1+0 = Striped Mirrored partitions.
==============================
| M      M  S  M      M |        md3 = RAID1+0 = Striped Mirrored partitions.
==============================


Yes, that's correct.

thirdbird 02-02-2017 03:38 PM

Thank you for helping me see it more clearly.

I thought about why they did it like that a bit more, and it makes sense (to me at least) to protect the booting part of the system against degradation a little more than root and home, so the system is at least bootable. Like you said before, it probably doesn't matter much, but if it has to make sense, I'm guessing that's it.

Jjanel 02-04-2017 04:08 AM

Hi & welcome! If you like, you can mark this as 'Solved' under Thread Tools at the top
(so it won't show up on [my 'invention' of an] ANT list ;) )

