Old 04-28-2016, 01:04 PM   #1
fanoflq
Member
 
Registered: Nov 2015
Posts: 397

Rep: Reputation: Disabled
mdadm creating RAID device


To create a hot spare when creating the RAID array:

Code:
$ sudo mdadm --create /dev/md0 -l 5 -n3 -x 1 /dev/sda6 /dev/sda7 /dev/sda8 /dev/sda9
The -x 1 switch tells mdadm to use one spare device.
The -n (--raid-devices) option specifies the number of active devices in the array.

The command line above specifies n=3 but lists four devices, /dev/sda[6-9], which means one of them is a hot spare.

Which device did mdadm choose as the hot spare, and how can we find out after /dev/md0 has been created?


----- From the mdadm man page -----
Quote:
mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1

Each operation applies to all devices listed until the next operation.
Quote:
mdadm /dev/md4 --fail detached --remove detached

Any devices which are components of /dev/md4 will be marked as faulty and then removed from the array.
and

Quote:
mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 detached --remove detached /dev/sdb1
Each operation applies to all devices listed until the next operation.
The word "detached" is past tense.
So does it imply the device(s) MUST be physically removed before running a command that uses the word "detached"?

Last edited by fanoflq; 04-28-2016 at 01:09 PM.
 
Old 04-28-2016, 03:19 PM   #2
Wells
Member
 
Registered: Nov 2004
Location: Florida, USA
Distribution: Debian, Redhat
Posts: 417

Rep: Reputation: 53
I think the information in /proc/mdstat will tell you which drives were used for each device and which drive is the spare...

Code:
$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md8 : active raid5 sdb8[0] sdd8[3] sdc8[2] sda8[1]
      5845787136 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md7 : active raid1 sdb7[0] sdd7[3] sdc7[2] sda7[1]
      524276 blocks super 1.0 [4/4] [UUUU]

md6 : active raid1 sdb6[0] sdd6[3] sdc6[2] sda6[1]
      1048564 blocks super 1.0 [4/4] [UUUU]

md5 : active raid1 sdb5[0] sdd5[3] sdc5[2] sda5[1]
      1572852 blocks super 1.0 [4/4] [UUUU]

md4 : active raid1 sdb4[0] sdd4[3] sdc4[2] sda4[1]
      1572852 blocks super 1.0 [4/4] [UUUU]

unused devices: <none>
Somewhere in the description of an md device there should be an indication of which drive is the spare; unfortunately I do not have an example of this on hand.
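For reference, a hypothetical /proc/mdstat entry with a spare would look roughly like the sketch below (the device names, role numbers, and sizes here are made up, not taken from a real system); the spare partition is the one carrying the "(S)" suffix:

Code:
md0 : active raid5 sda9[3](S) sda8[2] sda7[1] sda6[0]
      ... blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]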
 
Old 04-28-2016, 04:21 PM   #3
fanoflq
Member
 
Registered: Nov 2015
Posts: 397

Original Poster
Rep: Reputation: Disabled
Thank you.

I do not have RAID.
Here is a nice description of mdstat:

https://raid.wiki.kernel.org/index.php/Mdstat

I assume your /proc/mdstat is a real-world example.

You have 4 disks, sd[a-d], and partitions 4 to 8 of each disk are used for multiple RAID devices.

Let me see if I understand your /proc/mdstat.

Quote:
md8 : active raid5 sdb8[0] sdd8[3] sdc8[2] sda8[1]
5845787136 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

This means you have RAID level 5 (using a rotating parity stripe). You need a minimum of 3 partitions to operate at level 5.
[4/4] = [n/m]. Since m = 4, and you have 4 partitions, sd[a-d], there is no hot spare here. Correct?
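As a rough sanity check on that reading (plain shell arithmetic; the per-partition size is inferred here, not quoted above): a 4-device RAID 5 with no spare stores three partitions' worth of data plus distributed parity, and the block count above divides evenly by 3.

Code:
$ echo $(( 5845787136 / 3 ))    # usable blocks / 3 data-bearing partitions
1948595712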

Quote:
md5 : active raid1 sdb5[0] sdd5[3] sdc5[2] sda5[1]
1572852 blocks super 1.0 [4/4] [UUUU]
Here you are using RAID level 1, i.e. disk mirroring.
[4/4] means [n/m] where m = number of working devices.
Since you have 4 working devices, that means you have 2 devices mirroring the other two. Correct?

What utility can you use to determine which device is mirroring which?

Why do you have so many RAID devices, md4 to md8?
 
Old 04-28-2016, 04:42 PM   #4
Wells
Member
 
Registered: Nov 2004
Location: Florida, USA
Distribution: Debian, Redhat
Posts: 417

Rep: Reputation: 53
This was off of a Seagate NAS device. Usually when I am creating software RAID devices with mdadm I will create one, perhaps two RAID arrays, but apparently Seagate thinks differently.

Regardless, if you are using mdadm to create software RAID devices (which is what Seagate has done), then /proc/mdstat is created by default to monitor and show you how those arrays are constructed.

Perhaps I have misunderstood your original question?
 
Old 04-28-2016, 04:51 PM   #5
fanoflq
Member
 
Registered: Nov 2015
Posts: 397

Original Poster
Rep: Reputation: Disabled
Quote:
Perhaps I have misunderstood your original question?
No. You understood my question. Thank you.

Perhaps others can shed light on the reasons why the Seagate NAS device RAID is configured that way.

I hope someone can shed some light on this question:
"What utility can you use to determine which device is mirroring which in a RAID level 1 array?"
 
Old 04-28-2016, 05:04 PM   #6
Wells
Member
 
Registered: Nov 2004
Location: Florida, USA
Distribution: Debian, Redhat
Posts: 417

Rep: Reputation: 53
Quote:
Originally Posted by fanoflq View Post
I hope someone can shed some light on this question:
"What utility can you use to determine which device is mirroring which in a RAID level 1 array?"
Well, in a two drive RAID level 1 mirror, it is kind of obvious which drive is mirroring which, since they mirror each other.

However, if you are talking about a RAID 1 mirror with more than two drives, you get into a bit of a weird area, which I am going to have to test out.

You might have more luck, though, if you pull up the details of the RAID device using the following command:

Code:
# mdadm --detail /dev/md8
/dev/md8:
        Version : 1.0
  Creation Time : Thu Dec 31 19:01:12 2009
     Raid Level : raid5
     Array Size : 5845787136 (5574.98 GiB 5986.09 GB)
  Used Dev Size : 1948595712 (1858.33 GiB 1995.36 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Mar 31 16:44:53 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : vg:8
           UUID : 9e8fe445:cea1f807:206b2823:21beb646
         Events : 258

    Number   Major   Minor   RaidDevice State
       0       8       24        0      active sync   /dev/sdb8
       1       8        8        1      active sync   /dev/sda8
       2       8       40        2      active sync   /dev/sdc8
       3       8       56        3      active sync   /dev/sdd8
Note the state of each device at the end there. I think that if you have a spare drive in the setup, it will indicate it there. Not sure what the result will be with a RAID 1 array with more than two drives in it, however... that is going to require some testing on my part, which I am going to do after a meeting I have to go to.
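A minimal sketch of one way to prepare throwaway loop devices for that kind of test (the file names and sizes are assumptions: roughly 100 MB image files under /tmp, attached as /dev/loop0 through /dev/loop4; run as root):

Code:
# Create five ~100 MB backing files and attach each one to a loop device.
for i in 0 1 2 3 4; do
    dd if=/dev/zero of=/tmp/md-test-$i.img bs=1M count=100
    losetup /dev/loop$i /tmp/md-test-$i.img
done

# Tear-down once the experiment is over:
#   mdadm --stop /dev/md0
#   for i in 0 1 2 3 4; do losetup -d /dev/loop$i; done
#   rm -f /tmp/md-test-*.img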
 
Old 04-28-2016, 05:19 PM   #7
Wells
Member
 
Registered: Nov 2004
Location: Florida, USA
Distribution: Debian, Redhat
Posts: 417

Rep: Reputation: 53
OK... now for some REAL information!

Just created a test RAID device with some files on loopback... the interesting thing here is that it is a four-drive mirror array, with a fifth drive created as a spare:
Code:
[root@puppy tmp]# mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 --spare-devices=1 /dev/loop4
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@puppy tmp]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Apr 28 18:15:23 2016
     Raid Level : raid1
     Array Size : 102272 (99.89 MiB 104.73 MB)
  Used Dev Size : 102272 (99.89 MiB 104.73 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Thu Apr 28 18:15:30 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

           Name : puppy.phys.ufl.edu:0  (local to host puppy.phys.ufl.edu)
           UUID : 68371095:44120f20:2f881e7f:732d894e
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync   /dev/loop0
       1       7        1        1      active sync   /dev/loop1
       2       7        2        2      active sync   /dev/loop2
       3       7        3        3      active sync   /dev/loop3

       4       7        4        -      spare   /dev/loop4
Note the end, where the fifth drive is listed as a spare drive.

Each drive is 100 MB in size, and the array is also 100 MB in size, so what it is doing is mirroring everything across all four "drives". This is the epitome of "I don't want my data to be lost due to a sudden three-drive failure!"

Now, if you wanted to create an array that was four drives, mirrored, but 200 MB in size, that is really a RAID 10 array...
 
Old 04-28-2016, 05:22 PM   #8
Wells
Member
 
Registered: Nov 2004
Location: Florida, USA
Distribution: Debian, Redhat
Posts: 417

Rep: Reputation: 53
And here is what that RAID 10 looks like:
Code:
[root@puppy tmp]# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 --spare-devices=1 /dev/loop4
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@puppy tmp]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Apr 28 18:20:48 2016
     Raid Level : raid10
     Array Size : 202752 (198.03 MiB 207.62 MB)
  Used Dev Size : 101376 (99.02 MiB 103.81 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Thu Apr 28 18:20:49 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : near=2
     Chunk Size : 512K

           Name : puppy.phys.ufl.edu:0  (local to host puppy.phys.ufl.edu)
           UUID : ef849d8d:9501d890:6f6d19d0:bd85601d
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync set-A   /dev/loop0
       1       7        1        1      active sync set-B   /dev/loop1
       2       7        2        2      active sync set-A   /dev/loop2
       3       7        3        3      active sync set-B   /dev/loop3

       4       7        4        -      spare   /dev/loop4
Note in the last section there you have set-A and set-B. Each set is a mirror, and then those are striped together to form a 200 MB array.
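A quick check of the sizes reported in the --detail output above (plain shell arithmetic; with the near=2 layout every chunk exists in two copies):

Code:
$ echo $(( 101376 * 4 / 2 ))    # Used Dev Size x Raid Devices / copies
202752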
 
Old 04-28-2016, 06:11 PM   #9
fanoflq
Member
 
Registered: Nov 2015
Posts: 397

Original Poster
Rep: Reputation: Disabled
Thank you, that is helpful.

One more question:
Now and then I come across this or a similar command line where bash is used to run the command, like so:

Quote:
1)
$ bash -c "mdadm --detail --scan" >> /etc/mdadm.conf

instead of running it like this:
Quote:
2)
$ mdadm --detail --scan >> /etc/mdadm.conf
I suspect we cannot run 2) because mdadm is not able to interpret the >> symbol.
Thus we have to use 1).

Correct?

Addendum:

I found out both 1) and 2) would work.
Is there a reason why option 1) is required or preferred over option 2)?

Last edited by fanoflq; 04-28-2016 at 06:21 PM.
 
Old 04-29-2016, 11:31 AM   #10
Wells
Member
 
Registered: Nov 2004
Location: Florida, USA
Distribution: Debian, Redhat
Posts: 417

Rep: Reputation: 53
Quote:
Originally Posted by fanoflq View Post
I found out both 1) and 2) would work.
Is there a reason why option 1) is required or preferred over option 2)?
The only reason I can think of for doing it the first way is that by specifying the bash command interpreter, you ensure that the command encapsulated in quotes will be handled by the bash interpreter and not by some other interpreter that might do things a little differently (csh, tcsh, zsh, etc.).
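One additional case where the bash -c form matters in practice (this is an assumption going beyond the commands quoted above, which do not use sudo): when the target file is only writable by root, the placement of the redirection determines which user performs the append.

Code:
# Assumed scenario: /etc/mdadm.conf is root-owned and you are an unprivileged user.

# Here the redirection is performed by your own (unprivileged) shell before
# sudo ever runs, so it fails with a permission error even though mdadm
# itself would run as root:
sudo mdadm --detail --scan >> /etc/mdadm.conf

# Wrapping the whole thing in bash -c makes the root-owned shell perform
# the redirection, so the append succeeds:
sudo bash -c "mdadm --detail --scan >> /etc/mdadm.conf"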
 
  

