LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - Software
Old 07-12-2014, 05:17 PM   #1
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Rep: Reputation: 54
Odd situation with raid10 array and an odd number of drives: it built anyway, and now it can't be grown


I bought eight 2TB drives to build a raid 10, but one was DOA, so I went ahead and built the array anyway, thinking it would just build degraded until I got the replacement drive back. Instead, it did something really weird: it ended up using the odd drive on its own, but not in degraded mode.

If I try to add the other drive now, it says the array is too complex to grow. It must have done something more than plain raid10. This is what the array looks like now:

Code:
[root@isengard volumes]# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sat Jun 28 14:47:10 2014
     Raid Level : raid10
     Array Size : 6836839936 (6520.12 GiB 7000.92 GB)
  Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
   Raid Devices : 7
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Sat Jul 12 18:10:24 2014
          State : clean 
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

         Layout : near=2
     Chunk Size : 512K

           Name : isengard.loc:2  (local to host isengard.loc)
           UUID : 12ea5e09:f9c62e88:0b46fb96:54199143
         Events : 23

    Number   Major   Minor   RaidDevice State
       0       8      224        0      active sync   /dev/sdo
       1       8      240        1      active sync   /dev/sdp
       2      65        0        2      active sync   /dev/sdq
       3      65       16        3      active sync   /dev/sdr
       4      65       32        4      active sync   /dev/sds
       5      65       48        5      active sync   /dev/sdt
       6      65       64        6      active sync   /dev/sdu

       7      65       80        -      spare   /dev/sdv
[root@isengard volumes]#
The 7TB is what gets me: it should be either 6TB if it's ignoring the extra drive, or 8TB if it's running degraded. It's as if it split the single drive in two and did a raid 1 with itself or something.

Is there a way to reverse what it did? At this point there is no data on the array, so recreating it from scratch is an option, but I'm wondering what I would do in this situation if there were data, or if I add two drives in the future and one of them fails.

If I try to grow it, I get this message:
Code:
[root@isengard volumes]# mdadm --grow -n 8 /dev/md2
mdadm: RAID10 layout too complex for Grow operation
[root@isengard volumes]#
 
Old 07-13-2014, 06:35 AM   #2
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
Hmm, from what I'm reading, raid 10 can't be expanded. Is this really true? I'd expect it to be no different from expanding a raid 0, just needing two drives at a time.

I guess what I could do is make a bunch of raid 1 arrays and then do a raid 0 across them. Or, if I'm going to get into nested raids, maybe I can do something funky like a raid 0 of raid 5s. I wonder which one would have the best performance. I've got 8 drives to play with.

raid 50:

2TB x3 raid 5 = 4TB per group, x3 groups = 12TB using 9 drives (would have to buy another)

raid 50 (alt):

2TB x4 raid 5 = 6TB per group, x2 groups = 12TB using 8 drives (no need to buy another drive, but probably slightly less performance)

or stick to raid 10:

2TB x2 raid 1 = 2TB per pair, x4 pairs = 8TB using 8 drives

Hmm almost tempting to buy another drive now. :P
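For what it's worth, the capacity math for those three options can be sanity-checked with plain shell arithmetic (sizes in TB; group sizes as listed above):

```shell
drive_tb=2

# raid 50: three 3-drive raid5 groups (one drive per group lost to parity)
echo $(( (3 - 1) * drive_tb * 3 ))   # 12 TB, 9 drives

# raid 50 (alt): two 4-drive raid5 groups
echo $(( (4 - 1) * drive_tb * 2 ))   # 12 TB, 8 drives

# raid 10: four 2-drive raid1 pairs (half the raw space lost to mirroring)
echo $(( 1 * drive_tb * 4 ))         # 8 TB, 8 drives
```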

Last edited by Red Squirrel; 07-13-2014 at 06:38 AM.
 
Old 07-13-2014, 08:50 AM   #3
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,124

Rep: Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120
Seeing as how you've removed yourself from the "zero reply" auto-promotion ...

I have never liked the concept of mdadm and dm/md. zfs seemed like an awesome idea when I first saw it, and for a Linux-native filesystem, btrfs carries the flag. Put the logic at the filesystem layer, and treat the actual disks as unreliable commodity items. Which they are.
Have you thought of using btrfs? Yes, I know everyone says it's beta, but some still say the same of ext4 too.
 
Old 07-13-2014, 05:20 PM   #4
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
I will stick to mdadm for now; the rest of the server uses it, so I'd rather keep all the luns consistent on one system. In the future I may look at other solutions once they become more mature. Also, a lot of these other solutions like zfs don't really support live grow without some big workaround. mdadm normally does support it, but in this situation it doesn't, which is what I'm wondering about. Worst case scenario, I'll just make a bunch of raid 1s and do a raid 0 on top; then if I need to expand, I just add another 2-disk raid 1 and grow the raid 0.
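A sketch of that worst-case plan with mdadm (device names /dev/sdw through /dev/sdB are hypothetical placeholders, and whether mdadm can grow a raid 0 in place depends on the mdadm/kernel version, so treat this as illustrative, not tested):

```shell
# Build the mirror pairs
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdw /dev/sdx
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sdy /dev/sdz

# Stripe a raid 0 across the mirrors
mdadm --create /dev/md20 --level=0 --raid-devices=2 /dev/md10 /dev/md11

# Later, to expand: create another mirror pair and grow the stripe
# (newer mdadm does this via a temporary raid4 conversion)
mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sdA /dev/sdB
mdadm --grow /dev/md20 --raid-devices=3 --add /dev/md12
```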

Actually, is btrfs still in development? Last I read, it was basically abandoned. If it's actually still being developed, that's good to hear; it's definitely something I may consider in the future.

Last edited by Red Squirrel; 07-13-2014 at 05:22 PM.
 
Old 07-13-2014, 08:10 PM   #5
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,124

Rep: Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120
That's fair enough - can't help with the mdadm side. As for btrfs, have a look at the news section here.
I use it in RAID5 for my photos - RAID10 was offered years ago, but RAID5/6 support is relatively recent.
 
Old 07-14-2014, 09:25 PM   #6
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
I've been reading up on btrfs... It has actually piqued my interest to the point where I may give it a go on my production system. Any issues having both mdadm and btrfs on the same system? I would keep my existing raid arrays but create a new btrfs volume with my new drives, as a raid 10.

It seems to me mdadm is excellent for raid 5/6 but very flaky with raid 10. My crapped-out raid 10 array is being very problematic; it won't even let me destroy it, and keeps saying it's busy. My only recourse at this point is physically pulling all the drives, but I have a feeling that will leave all sorts of stale entries (drive letters, md names, etc.), so I'm kind of at a loss.

I've been reading more, and it seems an mdadm raid 10 or 0 can't be extended, which is a big bummer, since that was one of the main reasons I was using mdadm. It sounds like a btrfs volume CAN be extended, and overall its features sound nicer. I think I may give it a go as long as it can't negatively impact my existing arrays.
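For an array that refuses to die with "device busy", the usual teardown sequence looks something like the following (a hedged sketch: these commands need root, the device names are the ones from earlier in the thread, and what is actually holding the array open varies by system):

```shell
# See what is holding the array open (a mount, LVM, another md, etc.)
cat /proc/mdstat
lsof /dev/md2

# Deactivate whatever sits on top of it, then stop the array
umount /dev/md2          # only if it was mounted
mdadm --stop /dev/md2

# Clear the member superblocks so the disks can be reused cleanly
mdadm --zero-superblock /dev/sdo /dev/sdp /dev/sdq /dev/sdr \
                        /dev/sds /dev/sdt /dev/sdu /dev/sdv
```

Zeroing the superblocks also avoids the stale md names and auto-assembly surprises that pulling the drives physically tends to leave behind.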
 
Old 07-14-2014, 10:03 PM   #7
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,124

Rep: Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120Reputation: 4120
Should be fine - btrfs is "just" another filesystem. Mind you, I reckon you'd get even more confused than now with both active on the system ...
You'll need btrfs support in the kernel (most likely as a module) and btrfs-progs. It pays to stay pretty current on kernels too.
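Checking both of those is quick; something like this should tell you (the output obviously depends on the system):

```shell
# Is btrfs compiled in, or loadable as a module?
grep -w btrfs /proc/filesystems || modprobe btrfs

# Are the userspace tools installed?
btrfs --version          # shipped by btrfs-progs; mkfs.btrfs should exist too
```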
 
Old 07-14-2014, 10:13 PM   #8
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
This is a fairly new server (maybe a year old), so hopefully the support is already in my kernel, but I'll give it a go.


I should probably read up on how to update a kernel one of these days; it's one of those things I never really took the time to learn or dared to try on a prod machine, but that's what VMs are for. :P
 
Old 08-08-2014, 01:22 PM   #9
Icefalcon
LQ Newbie
 
Registered: May 2002
Location: Toronto
Distribution: RedHat
Posts: 2

Rep: Reputation: 0
I realize this is a bit late, but there doesn't seem to be an answer to your actual question here. I'm not an expert, but I suspect I know what happened.

How did you build the array in the first place? From what I understand, when you specify raid10 to mdadm, it creates a "complex" raid10 by default rather than a nested raid10. To create an 8-disk complex raid10 with only 7 drives on hand, you need to tell it there are 8 devices in total when you create the array, so that it starts out in degraded mode.
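Concretely (assuming this reading is right), the absent member is declared with the keyword `missing` at creation time, something like:

```shell
# Create an 8-device complex raid10 in degraded mode; 'missing'
# reserves the eighth slot for the drive that is out on RMA
mdadm --create /dev/md2 --level=10 --raid-devices=8 \
    /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu missing

# When the replacement arrives, add it and the array rebuilds to clean
mdadm --add /dev/md2 /dev/sdv
```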

If you created it as a 7-disk array, then because it is a "complex" raid10, it can't be expanded to 8 disks later. From the detail you posted, it looks like you created a 7-disk array and added another disk afterwards, which ended up being added as a hot spare.

The raid10 you were expecting was probably "nested" raid10, where a raid0 is striped over multiple raid1 mirror pairs (or the reverse, raid1 over raid0 stripes). Those types of raid10 require an even number of disks, grouped in pairs.

With a complex raid10, mdadm stripes pairs of data chunks across all of the disks in the array, so you aren't required to have an even number of disks. I believe that's why it's showing the size you're seeing: 7 drives x 2TB, with 2 copies of everything, gives 7TB.
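That explanation matches the numbers in the first post: with near=2 every chunk exists twice, so the expected capacity is (devices x per-device size) / 2 regardless of whether the device count is even. Checking against the Used Dev Size from the --detail output (values in KiB):

```shell
devices=7
dev_kib=1953382912   # Used Dev Size reported by mdadm --detail

# Two copies of each chunk, striped over 7 disks
echo "expected KiB: $(( devices * dev_kib / 2 ))"
# 6836840192, which is the reported Array Size of 6836839936
# to within stripe-rounding
```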
 
Old 08-08-2014, 02:15 PM   #10
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
Yeah, I figured that when I specified 7 drives it would build an 8-drive raid 10 in degraded mode (I had to RMA the other drive); I didn't know about complex raid 10. Because of that I wanted to scrap it and start over, but it would not let me stop it. It's actually still running now, but with zero drives, as I had to physically pull them out in order to free the drives and remake the array.

Code:
[root@isengard ~]# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sat Jun 28 14:47:10 2014
     Raid Level : raid10
     Array Size : 6836839936 (6520.12 GiB 7000.92 GB)
  Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
   Raid Devices : 7
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Fri Aug  1 05:00:08 2014
          State : clean, FAILED 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

    Number   Major   Minor   RaidDevice State
       0       8      224        0      active sync
       1       8      240        1      active sync
       2      65        0        2      active sync
       3      65       16        3      active sync
       4      65       32        4      active sync
       5       0        0        5      removed
       6       0        0        6      removed
I ended up having to name the new array md3 because of that, which screws up my naming convention.
 
  


