02-02-2017, 01:45 PM   #1
thirdbird
LQ Newbie
 
Registered: Feb 2017
Distribution: Debian
Posts: 20

Rep: Reputation: Disabled
Is this really raid10?


I have a server at Hetzner that I activated RAID10 on by setting SWRAID to 1 and SWRAIDLEVEL to 10. But I'm confused when I look at /proc/mdstat, as it indicates both RAID1 and RAID10. If you would please have a look and help me understand, it would be greatly appreciated.

Code:
# cat /proc/mdstat
Personalities : [raid1] [raid10]
md3 : active raid10 sdd4[0] sda4[3] sdc4[2] sdb4[1]
      7533800448 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 1/57 pages [4KB], 65536KB chunk

md2 : active raid10 sdd3[0] sda3[3] sdc3[2] sdb3[1]
      262012928 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 1/2 pages [4KB], 65536KB chunk

md1 : active raid1 sdd2[0] sda2[3] sdc2[2] sdb2[1]
      523712 blocks super 1.2 [4/4] [UUUU]

md0 : active raid1 sdd1[0] sda1[3] sdc1[2] sdb1[1]
      8380416 blocks super 1.2 [4/4] [UUUU]
Mount points
Code:
# cat /etc/fstab
proc /proc proc defaults 0 0
/dev/md/0 none swap sw 0 0
/dev/md/1 /boot ext3 defaults 0 0
/dev/md/2 / ext4 defaults 0 0
/dev/md/3 /home ext4 defaults 0 0
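To cross-check which array backs each mount point, and each array's RAID type in one view, something like lsblk should do it (a sketch; standard util-linux columns):
Code:
# TYPE shows raid1/raid10 for each md device; MOUNTPOINT ties it to the fstab entries
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT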
RAID10 detail sample
Code:
# mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Tue Jan 31 19:45:23 2017
     Raid Level : raid10
     Array Size : 7533800448 (7184.79 GiB 7714.61 GB)
  Used Dev Size : 3766900224 (3592.40 GiB 3857.31 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Feb  2 19:40:38 2017
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : rescue:3
           UUID : f5d2b235:8f514355:f069c3cd:48360d55
         Events : 20324

    Number   Major   Minor   RaidDevice State
       0       8       52        0      active sync set-A   /dev/sdd4
       1       8       20        1      active sync set-B   /dev/sdb4
       2       8       36        2      active sync set-A   /dev/sdc4
       3       8        4        3      active sync set-B   /dev/sda4
RAID1 detail sample
Code:
# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jan 31 19:45:19 2017
     Raid Level : raid1
     Array Size : 523712 (511.52 MiB 536.28 MB)
  Used Dev Size : 523712 (511.52 MiB 536.28 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Feb  1 22:24:49 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:1
           UUID : a11746d6:7dda85b6:b17da174:74d6f7c2
         Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       50        0      active sync   /dev/sdd2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8        2        3      active sync   /dev/sda2
My interpretation of the information above is that swap and /boot are actually just RAID1, while / and /home are RAID10. I feel bad for not having a stronger grasp of what I'm seeing, even though I've never played around with mdadm before. I've only barely learned how to replace a drive if one goes bad, but I want to understand it fully. Maybe I'm reading it right, but then I don't get why Hetzner would call it fully RAID10.
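For reference, a small loop like this should confirm each array's level at a glance; a sketch, assuming the md0..md3 names from the output above (run as root):
Code:
# Print just the RAID level of each array (sketch; md0..md3 as shown above)
for md in /dev/md0 /dev/md1 /dev/md2 /dev/md3; do
    echo -n "$md: "
    mdadm --detail "$md" | awk '/Raid Level/ {print $4}'
done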

Thanks. And hi, my first post. 8)

EDIT: Could this be the right interpretation?

Code:
 SDA	 SDB	 SDC	 SDD
  M       M       M       M 	md0 = RAID1 = All Mirrored partitions.
  M       M       M       M 	md1 = RAID1 = All Mirrored partitions.
==============================  
| S       S   M   S       S | 	md2 = RAID10 = Mirrored Striped partitions.
==============================
| S       S   M   S       S | 	md3 = RAID10 = Mirrored Striped partitions.
==============================
This would make /boot and swap extra sturdy, as they have twice the redundancy.

I think to create a "real" pure RAID1+0 volume, it would go something like this:
Code:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/{sda,sdb}
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/{sdc,sdd}
# mdadm --create /dev/md2 --run --level=0 --raid-devices=2 /dev/md{0,1}

One could add as many RAID1 arrays into it as wanted, depending on the number of drives available. Or mix it up with bigger groups of e.g. 3 if you had 6 or 9 drives, etc., to save space. Am I getting this somewhat right? I'm very new to mdadm, but usually a quick study.
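For what it's worth, the md2 and md3 arrays above are not nested RAID1+RAID0 at all: mdadm has a native raid10 level that builds the stripe-of-mirrors shape in a single array, which matches the "2 near-copies" shown in /proc/mdstat. A sketch of the equivalent create command, with the partition names as placeholders:
Code:
# Native mdadm raid10, "near" layout with 2 copies -- a single array instead
# of nested RAID1+RAID0 (partition names are placeholders):
mdadm --create /dev/md2 --level=10 --layout=n2 --raid-devices=4 /dev/sd{a,b,c,d}3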

Last edited by thirdbird; 02-02-2017 at 02:42 PM.
 
02-02-2017, 02:42 PM   #2
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2143
Quote:
Originally Posted by thirdbird View Post
My interpretation of the information above, is that swap space and /boot is actually just RAID1. While / and /home is RAID10.
Yes, that looks correct.

Quote:
Originally Posted by thirdbird View Post
Maybe I am, but I don't get why hetzner would call it fully RAID10 then.
Well, it doesn't really matter. The advantages of RAID 10 over RAID 1 are additional storage space and slightly better write performance, neither of which matters for /boot or swap (if you're digging hard into swap on an HDD, the system is going to be unusably slow regardless).
 
02-02-2017, 02:47 PM   #3
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2143
Quote:
Originally Posted by thirdbird View Post
EDIT: Could this the right interpretation?

Code:
 SDA	 SDB	 SDC	 SDD
  M       M       M       M 	md0 = RAID1 = All Mirrored partitions.
  M       M       M       M 	md1 = RAID1 = All Mirrored partitions.
==============================  
| S       S   M   S       S | 	md2 = RAID10 = Mirrored Striped partitions.
==============================
| S       S   M   S       S | 	md3 = RAID10 = Mirrored Striped partitions.
==============================
Sort of, but what you've drawn is RAID01; RAID10 is the reverse.
RAID10 is preferable because it's more fault-tolerant. Take 4 drives split into two pairs, then randomly remove 2 of them. RAID10 survives a double failure only if the two failed drives are from opposite pairs; RAID01 survives only if they're from the same pair. There's a 67% chance the two drives you remove are from opposite pairs and a 33% chance they're from the same pair. That means you want a stripe of mirrors (RAID10) rather than a mirror of stripes (RAID01): RAID10 is twice as likely as RAID01 to survive a second drive failure.
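To make the arithmetic concrete, here is a tiny enumeration of the six ways to lose 2 of 4 drives, with a,b forming one group and c,d the other; just a sketch of the counting, not anything mdadm itself does:
Code:
# RAID10: (a,b) and (c,d) are mirror pairs -> dies if a whole pair is lost.
# RAID01: (a,b) and (c,d) are stripe sets  -> survives only if both failures
#         land in the same (already broken) stripe set.
for f in ab ac ad bc bd cd; do
    case $f in
        ab|cd) echo "$f: same group      -> RAID10 dies,     RAID01 survives" ;;
        *)     echo "$f: opposite groups -> RAID10 survives, RAID01 dies" ;;
    esac
done
# RAID10 survives 4 of 6 combinations (67%), RAID01 only 2 of 6 (33%)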

Quote:
Originally Posted by thirdbird View Post
This would make /boot and swap extra sturdy, as they have twice the redundancy.
Who needs /boot and swap when you've already lost / and /home?

Last edited by suicidaleggroll; 02-02-2017 at 02:55 PM.
 
02-02-2017, 02:53 PM   #4
thirdbird
LQ Newbie
 
Registered: Feb 2017
Distribution: Debian
Posts: 20

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by suicidaleggroll View Post
Sort of, but what you've drawn is RAID01, RAID10 is reversed.


Who needs /boot and swap when you've already lost / and /home?
It was the closest I could come to explaining to myself the setup they created.

I realize what you're saying about it being a 0+1 drawing. But with 4 drives like that, 1+0 looks exactly the same, doesn't it? With e.g. 6 drives I could have drawn 3 mirrored pairs striped together, which would have represented 1+0 more explicitly, while 0+1 would still have been 2 striped groups with 3 drives in each. Does that make sense?

EDIT: Wait a bit... I think I know what you're saying now. How's this:
Code:
 SDA	 SDB	 SDC	 SDD
  M       M       M       M 	md0 = RAID1 = All Mirrored partitions.
  M       M       M       M 	md1 = RAID1 = All Mirrored partitions.
==============================  
| M       M   S   M       M | 	md2 = RAID1+0 = Striped Mirrored partitions.
==============================
| M       M   S   M       M | 	md3 = RAID1+0 = Striped Mirrored partitions.
==============================

Last edited by thirdbird; 02-02-2017 at 02:56 PM.
 
02-02-2017, 02:57 PM   #5
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2143
Quote:
Originally Posted by thirdbird View Post
EDIT: Wait a bit.. I think I know what you're saying now... How's this:
Code:
 SDA	 SDB	 SDC	 SDD
  M       M       M       M 	md0 = RAID1 = All Mirrored partitions.
  M       M       M       M 	md1 = RAID1 = All Mirrored partitions.
==============================  
| M       M   S   M       M | 	md2 = RAID1+0 = Striped Mirrored partitions.
==============================
| M       M   S   M       M | 	md3 = RAID1+0 = Striped Mirrored partitions.
==============================
Yes, that's correct.
 
1 member found this post helpful.
02-02-2017, 03:38 PM   #6
thirdbird
LQ Newbie
 
Registered: Feb 2017
Distribution: Debian
Posts: 20

Original Poster
Rep: Reputation: Disabled
Thank you for helping me see it more clearly.

I thought a bit more about why they did it like that, and it makes sense - to me at least - to protect the booting part of the system against degradation a little more than root and home, so the system is at least bootable. Like you said before, it probably doesn't matter much, but if it has to make sense, I'm guessing that's it.
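If that is the intent, it also implies the bootloader should live on every drive, so any surviving disk can bring the machine up from the mirrored /boot. A sketch, assuming a BIOS/MBR setup (not verified against Hetzner's installer):
Code:
# Install GRUB to the MBR of every member drive so the box can still boot
# after losing any one disk (sketch; assumes BIOS/MBR boot):
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    grub-install "$d"
done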
 
02-04-2017, 04:08 AM   #7
Jjanel
Member
 
Registered: Jun 2016
Distribution: any&all, in VBox; Ol'UnixCLI; NO GUI resources
Posts: 999
Blog Entries: 12

Rep: Reputation: 364
Hi & welcome! If you feel so, you can mark this as 'Solved' under Thread Tools at the top
(so it won't show up on [my 'invention' of an] ANT list).
 
1 member found this post helpful.
  

