Old 05-31-2009, 10:59 PM   #1
jc_cpu
Member
 
Registered: May 2006
Distribution: Fedora 14
Posts: 71

Rep: Reputation: 15
Software RAID 6 Drives Marked As Spares And The md Drive is inactive


I added five 1TB drives to my server and I'm trying to set them up as a Linux software RAID 6. I partitioned the drives and set the partition type to "fd" before I ran mdadm to create the array. When I run "cat /proc/mdstat", it shows the new array as inactive and the drives appear as spares. Does anyone know how to remove this RAID device and start from scratch?

My RAID disks:

------------------------------------------------------------

$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdc1[1]
200704 blocks [2/2] [UU]

md_d2 : inactive sdg[4](S) sdf[3](S) sdb[0](S) sde[2](S)
3907049984 blocks

md1 : active raid1 sda2[0] sdc2[1]
488183104 blocks [2/2] [UU]

------------------------------------------------------------

My OS is on "md0 and "md1". I want to remove "md_d2" (not even sure how it got this name because I didn't specify this) and create a new "md2" that is a RAID 6 of the new 5 1TB drives.

I've read the man page and searched the web, but I haven't found anything that addresses this problem. Any help would be greatly appreciated. Thanks in advance.
 
Old 06-01-2009, 12:57 AM   #2
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,261

Rep: Reputation: 2028
To disassemble it, releasing all resources:

mdadm -S /dev/md_d2


Check with:

cat /proc/mdstat

Then create a RAID 6 with 4 active drives and 1 spare:

mdadm -C /dev/md2 -a yes -l 6 -n 4 -x 1 /dev/sd{b,d,e,f,g}1

Your 'md_d2' only shows 4 drives, not 5 ... amend the create command as required.
http://en.wikipedia.org/wiki/Redunda...ependent_disks
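
Once it's created, verify with something like this (a sketch; adjust the device name as required):

# watch the build/resync progress
cat /proc/mdstat

# full details of the new array (level, state, member devices)
mdadm --detail /dev/md2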
 
Old 06-01-2009, 11:37 AM   #3
jc_cpu
Member
 
Registered: May 2006
Distribution: Fedora 14
Posts: 71

Original Poster
Rep: Reputation: 15
Thanks for the reply. I'll give this a try later and let you know how it goes. I would like to have all 5 drives active in the RAID 6. My understanding is that RAID 6 with five 1TB drives gives 3TB of usable space. Is this possible, or do I have to use RAID 5 instead? I would rather use RAID 6 because it can tolerate two drives going bad. Thanks again for all of your help.
 
Old 06-01-2009, 08:55 PM   #4
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,261

Rep: Reputation: 2028
If you read that link you'll see the answer: the minimum number of drives is 4 and the space efficiency is n-2, so yes, 5 x 1TB disks gives 3 x 1TB of usable space.
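
If you want all 5 drives active (no hot spare), the create command would be along these lines (a sketch; substitute your actual partitions):

# RAID 6 with 5 active members: usable space = (5 - 2) x 1TB = 3TB
mdadm -C /dev/md2 -a yes -l 6 -n 5 /dev/sd{b,d,e,f,g}1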
 
Old 06-01-2009, 11:33 PM   #5
jc_cpu
Member
 
Registered: May 2006
Distribution: Fedora 14
Posts: 71

Original Poster
Rep: Reputation: 15
Thanks again for the reply. I did read that page and saw that a RAID 6 can have 4 or more disks.

Overall the instructions worked, but for some reason, every time I tried to make the array with 5 disks (or 4 with 1 spare), it would fail /dev/sdf. Do you know why this happened? I can't seem to find anything on Google about this problem either. Does Linux software RAID 6 require an even number of disks, or did I maybe hit some sort of weird bug? Any info on this would be great.

As a test, I was able to create a RAID 6 with four of the disks and put an ext3 file system directly on the fifth disk.

I guess I could use that fifth disk as a backup of critical files on the RAID, just in case the array goes completely south.
 
Old 06-02-2009, 12:31 AM   #6
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,261

Rep: Reputation: 2028
Exactly what error did you get for /dev/sdf?
Have you run fdisk -l to check that it's the right type (fd) and that it's got a partition on it? (Recommended)
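
Something along these lines should show what the kernel and mdadm make of that disk (a sketch; smartctl comes from the smartmontools package):

# partition table and partition type
fdisk -l /dev/sdf

# any existing md superblock on the partition
mdadm --examine /dev/sdf1

# kernel messages mentioning the disk
dmesg | grep sdf

# drive health report (needs smartmontools installed)
smartctl -a /dev/sdf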
 
Old 06-02-2009, 11:46 AM   #7
jc_cpu
Member
 
Registered: May 2006
Distribution: Fedora 14
Posts: 71

Original Poster
Rep: Reputation: 15
I really appreciate all of the help that you are providing.

Yes, when I used "fdisk" the first time, I created a partition and set its type to "fd". I then verified it with "fdisk -l", which listed the partition as Linux raid autodetect.

After I executed the "mdadm" create command and ran "cat /proc/mdstat", it showed "sdf[number](F)" and didn't list the drive as being used ("_" instead of "U"). At the moment I think this is a Linux software RAID issue, because I was able to add a single partition to the disk with "fdisk", format it with "mkfs.ext3 /dev/sdf1", and mount the partition into the file system, so the drive itself definitely works.
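
For reference, this is roughly the test sequence I used (a sketch; /mnt/test stands in for whatever mount point I actually used):

fdisk /dev/sdf             # create a single partition, type fd
mkfs.ext3 /dev/sdf1        # put an ext3 file system on it
mkdir -p /mnt/test
mount /dev/sdf1 /mnt/test  # mount it to confirm the drive is usable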

I'm going to give this another try tonight.
 
Old 06-02-2009, 08:45 PM   #8
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,261

Rep: Reputation: 2028
1. fdisk the disk and create a single partition, type fd (rough commands sketched below).
2. Exit with write/save, then either reboot or run partprobe to update the kernel's view of the partition table.
3. cat /proc/mdstat to check.
4. Create the RAID as /dev/md2.
5. mkfs.ext3 /dev/md2
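
Roughly, assuming the five new disks are sdb, sdd, sde, sdf and sdg (a sketch; adjust device names to suit):

# 1-2: single type-fd partition on each disk, then re-read the partition tables
fdisk /dev/sdf        # repeat for each disk: n (new), t (type) fd, w (write)
partprobe

# 3: check nothing has auto-assembled
cat /proc/mdstat

# 4: create the RAID 6 as /dev/md2 (5 active drives shown here)
mdadm -C /dev/md2 -a yes -l 6 -n 5 /dev/sd{b,d,e,f,g}1

# 5: put a file system on it
mkfs.ext3 /dev/md2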
 
Old 06-02-2009, 09:59 PM   #9
jc_cpu
Member
 
Registered: May 2006
Distribution: Fedora 14
Posts: 71

Original Poster
Rep: Reputation: 15
when I enter "partprobe", I get this with "cat /proc/mdstat":

$ sudo partprobe
$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdc1[1]
200704 blocks [2/2] [UU]

md_d2 : active raid6 sdg1[3] sde1[2] sdd1[1] sdb1[0]
1953519872 blocks level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
[>....................] resync = 3.2% (31963256/976759936) finish=129.9min speed=121216K/sec

md1 : active raid1 sda2[0] sdc2[1]
488183104 blocks [2/2] [UU]


What is "md_d2" and why is it automatically started after entering this command? My mdadm command to create the array was for /dev/md2. This was also before I ran the command to disassemble it due to the failed disk.
 
Old 06-02-2009, 11:39 PM   #10
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,261

Rep: Reputation: 2028
I don't have a RAIDed Linux box in front of me, but I think the RAID definitions are stored in /etc/mdadm.conf.
You probably need to edit that so it doesn't try to rebuild the array at boot time.
Then reboot, disassemble, check the disks and start again.
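
Once the array looks the way you want, something like this should record it so it assembles cleanly at boot (a sketch; the exact ARRAY line depends on your mdadm version and UUID):

# append the current array definition to the config
mdadm --detail --scan >> /etc/mdadm.conf

# the appended line will look roughly like:
# ARRAY /dev/md2 level=raid6 num-devices=5 UUID=<array uuid>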
 
Old 06-03-2009, 09:37 PM   #11
jc_cpu
Member
 
Registered: May 2006
Distribution: Fedora 14
Posts: 71

Original Poster
Rep: Reputation: 15
What is "md_d2""? I still get this and my RAID creation commands didn't specify this. I had a RAID6 all formatted and running (took forever to do this), "mdadm.conf" was correct and saved (verified before reboot), and the fstab was correct for mounting. I rebooted and it listed "md2" and "md_d2", and it also took 2 of the drives from "md2" and added them as spares to "md_d2". I stopped /dev/md2, but left the other one running. The list below came from running "cat /proc/mdstat". I looked in "/dev" and I see "md2" and "md_d2". Is it safe to delete "/dev/md_d2" because I'm starting to think that mdadm is using it just because it is listed here? Please advise and thanks in advance for your reply.

------------------------------------------

$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdc1[1]
200704 blocks [2/2] [UU]

md_d2 : inactive sdb[0](S) sde[2](S)
1953524992 blocks

md1 : active raid1 sda2[0] sdc2[1]
488183104 blocks [2/2] [UU]

unused devices: <none>
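
In case it's useful to anyone else reading, these are the sort of checks I'm planning to run before touching the stray array (a sketch):

# see what superblock (if any) mdadm finds on the raw disk vs. the partition
mdadm --examine /dev/sdb
mdadm --examine /dev/sdb1

# stop the stray array so the member disks are released
mdadm -S /dev/md_d2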
 
Old 06-04-2009, 11:06 AM   #12
jc_cpu
Member
 
Registered: May 2006
Distribution: Fedora 14
Posts: 71

Original Poster
Rep: Reputation: 15
I was on the Fedora IRC channel and was able to fix the problem last night. I'm still not sure what "/dev/md_d2" is or how it got there, but I fixed it by doing the following (rough commands sketched after the list):

1. Zeroed out the superblocks for the drives in my RAID.

2. Rebooted.

3. Used mdadm to create my RAID.

4. Formatted the RAID as ext3.

5. Updated "mdadm.conf".

6. Updated fstab.

7. Rebooted. RAID "/dev/md2" was there after reboot and "md_d2" was not running.
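
A sketch of what that looked like as commands (device names are from my box; the /data mount point in the fstab example is made up):

# 1. wipe the old md superblocks from every member partition
mdadm --zero-superblock /dev/sd{b,d,e,f,g}1

# 2. (reboot here)

# 3. create the RAID 6 with all five drives active
mdadm -C /dev/md2 -a yes -l 6 -n 5 /dev/sd{b,d,e,f,g}1

# 4. format it
mkfs.ext3 /dev/md2

# 5. record the array in /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf

# 6. add a line to /etc/fstab, e.g.:
# /dev/md2  /data  ext3  defaults  1 2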

Do you have any idea what md_d2 is and how it was created?
 
  

