LinuxQuestions.org
Linux - Newbie: This Linux forum is for members who are new to Linux.
Just starting out and have a question? If it is not in the man pages or the how-to's this is the place!

Old 09-05-2010, 02:17 AM   #1
MrMakealotofsmoke
Member
 
Registered: Apr 2010
Posts: 30

Rep: Reputation: 1
mdadm drops drives


OK, why is Linux giving me so much grief? Since I've had it, it's been nothing but trouble (see my GRUB thread).

So apart from GRUB deciding to install itself on other drives, and Linux not starting services on boot even though they're set to start on boot, I'm now having issues with mdadm.

It seems Linux likes to shuffle my drive letters around sometimes. I got a new network card and put it in a PCIe x1 slot; I also have two PCIe x1 SATA cards. Changing the cards changes the detection order at boot, which changes the drive letters. So /dev/md1 drops a drive because that drive now has another letter. 40 hours later it's rebuilt. Now I restart the server, and the other RAID array (/dev/md5) is completely gone while /dev/md1 is missing a drive again. Reboot once more: the md5 array is back, but md1 is still missing its drive. OK, I'll re-add it... another 40-hour rebuild, go!

So why doesn't Linux keep its drive letters stable, and why doesn't mdadm detect the superblock instead of having to rebuild every single time?
 
Old 09-06-2010, 01:10 AM   #2
xeleema
Member
 
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, rhel3/5, Solaris 8-10(sparc), HP-UX 11.x (pa-risc)
Posts: 988
Blog Entries: 4

Rep: Reputation: 254
Greetingz!

1) When you play "Musical PCI Slots" with your SATA controllers, the order in which the disks are detected (and given cute names like "sda", "sdb", etc) is changed.

2) If you set up a RAID array using "mdadm", and you *plan* on playing "Musical PCI Slots", then you need to identify the arrays you built by UUID, rather than by device name.

3) GRUB (not "grub") will install on whatever drive you, or your Linux Distribution's Installation Wizard, decided was the first drive.

4) The "md tools" suite does drive rebuilds at the lowest possible priority in order to minimize the impact it will have on the server's performance. If you're interested in changing that, read this.

5) Please use a browser with built-in spellchecking. My spelling is far from perfect, but Firefox can be a big help here.

6) Please visit and read the "Great Justice" links in my signature.

7) Words in all Caps are akin to screaming. Don't scream. It's impolite.

8) Welcome to LinuxQuestions. We love a challenge, just keep in mind the site is not called WhyWontLinux!@#$%DoWhatIwant.com (mainly because that's an invalid DNS name).

Cheers!
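A footnote on point 4, since links go stale: the md rebuild-priority knobs live under /proc/sys/dev/raid/. The sketch below is only an example of the tweak; the 50000 figure is arbitrary, the values are in KB/s per device, they need root to change, and they reset on reboot unless persisted via sysctl.conf.

```
# Read the current floor and ceiling for md resync/rebuild speed (KB/s per device)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Temporarily raise the floor so a rebuild finishes sooner,
# at the cost of foreground I/O performance (example value)
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```

Raising speed_limit_min trades interactive I/O for rebuild time; the default floor is deliberately low for exactly the reason point 4 describes.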
 
Old 09-06-2010, 11:56 PM   #3
MrMakealotofsmoke
Member
 
Registered: Apr 2010
Posts: 30

Original Poster
Rep: Reputation: 1
Heh, sorry about that. I was pretty annoyed that I had just waited 40 hours for an mdadm rebuild, then I restarted and it decided to do it again. I have it set to low priority, but when you need to copy 400 GB of files, that makes it take quite a while :/

Can I edit the mdadm config to use UUIDs without remaking the arrays?
 
Old 09-07-2010, 03:39 AM   #4
MrMakealotofsmoke
Member
 
Registered: Apr 2010
Posts: 30

Original Poster
Rep: Reputation: 1
Also, now I have another problem.

I have /dev/md5, which is a RAID0 array. For some reason it has dropped out of mdadm, so I'm trying to re-assemble it with the command:

mdadm --assemble /dev/md5 /dev/sdg1 /dev/sdi1

but it keeps telling me:
mdadm: /dev/md5 assembled from 1 drive - not enough to start the array.

Yet both drives are present in Linux, and they're the same drives that were in the array before?
 
Old 09-07-2010, 03:50 AM   #5
xeleema
Member
 
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, rhel3/5, Solaris 8-10(sparc), HP-UX 11.x (pa-risc)
Posts: 988
Blog Entries: 4

Rep: Reputation: 254
Quote:
Originally Posted by MrMakealotofsmoke View Post
Heh, Sorry about that. I was pretty annoyed that i have just waited 40hours for a mdadm rebuild then i restart and it decided to do it again.
Understandable, that would have me throwing things and beating on the ol' Fire Safe in the back in no time.
Quote:
Originally Posted by MrMakealotofsmoke View Post
Can i edit the mdadm config to use UUID without remaking the arrays?
You should be able to. Pretty much everyone who faces the "device names changed on me" problem (like the folks who build RAID arrays out of USB sticks) has figured out the UUID approach. Check this article, under the "Starting the Array, Again" section.
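For the record, switching an existing setup over to UUIDs doesn't require remaking the arrays: mdadm can emit UUID-keyed ARRAY lines for whatever is currently running. A sketch, assuming the config lives at /etc/mdadm.conf (Debian-family systems use /etc/mdadm/mdadm.conf):

```
# Enumerate the running arrays and append UUID-keyed ARRAY lines to the config.
# The arrays themselves are untouched; only the config file changes.
mdadm --detail --scan >> /etc/mdadm.conf
```

Review the file afterwards and delete any older, device-name-based lines so they can't conflict.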

However, most people don't have an mdadm.conf at all; I know I typically don't. Most arrays are built out of partitions tagged with type "fd" (Linux raid autodetect).
To find out whether your setup is like this, find and review the contents of your mdadm.conf, and maybe run "fdisk -l" against your disks.

Here's the output from my VMware playground system;

Code:
[root@system ~]# fdisk -l /dev/sd[a-e]

Disk /dev/sda: 82.3 GB, 82348277760 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          16      128488+  fd  Linux raid autodetect
/dev/sda2              17        9964    79907310   fd  Linux raid autodetect

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       19457   156288321   fd  Linux raid autodetect

Disk /dev/sdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       19457   156288321   fd  Linux raid autodetect

Disk /dev/sdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       19457   156288321   fd  Linux raid autodetect

Disk /dev/sde: 81.9 GB, 81964302336 bytes
255 heads, 63 sectors/track, 9964 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *           1          16      128488+  fd  Linux raid autodetect
/dev/sde2              17        9964    79907310   fd  Linux raid autodetect
[root@system ~]#
Keep in mind there's nothing wrong with this type of setup; it's just the quick-and-dirty way most of us build RAID arrays.

Last edited by xeleema; 09-07-2010 at 04:02 AM.
 
Old 09-07-2010, 04:02 AM   #6
xeleema
Member
 
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, rhel3/5, Solaris 8-10(sparc), HP-UX 11.x (pa-risc)
Posts: 988
Blog Entries: 4

Rep: Reputation: 254
Quote:
Originally Posted by MrMakealotofsmoke View Post
I have /dev/md5 which is a RAID0 array...
mdadm --assemble /dev/md5 /dev/sdg1 /dev/sdi1

but it keeps telling me:
mdadm: /dev/md5 assembled from 1 drive - not enough to start the array.
RAID0? Wow, well I hope nothing important was on there.

Quick Solution:

Just re-create the array, stick a new filesystem on there.
(Be sure to use UUIDs in your mdadm.conf.)
Restore from a backup (if you have one).

Long Solution:
First, make sure the partitions (sdg1 & sdi1) show up as RAID devices:

Example:
Code:
[root@system ~]# mdadm --examine --verbose /dev/sda1 |grep .
/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : faa4d937:06449d92:6f6154ec:66cd97cc
  Creation Time : Mon Jun 28 23:00:55 2010
     Raid Level : raid1
  Used Dev Size : 128384 (125.40 MiB 131.47 MB)
     Array Size : 128384 (125.40 MiB 131.47 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Update Time : Sun Sep  5 04:22:05 2010
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 18f27a3f - correct
         Events : 28
      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1
   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       65        1      active sync   /dev/sde1
Maybe do a quick scan, too:
Code:
[root@system downloads]# mdadm --detail --scan
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=a31258de:28daa54b:d7874d07:1eabf8f0
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=faa4d937:06449d92:6f6154ec:66cd97cc
ARRAY /dev/md160 level=raid5 num-devices=3 metadata=0.90 UUID=49311ee7:5241d007:a4f0829a:6c3a12c7
If it still doesn't assemble, make sure you actually have a device node for /dev/sdi (run "ls -l /dev/sdi*" and see what pops up).
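As a side note, if you only want the UUID= fields out of that scan (say, to paste into an mdadm.conf by hand), a little awk over the output does it. In this sketch the scan output is inlined from the example above purely for illustration; in practice you would pipe `mdadm --detail --scan` straight into the awk command.

```shell
# Sample `mdadm --detail --scan` output, inlined for illustration
scan='ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=a31258de:28daa54b:d7874d07:1eabf8f0
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=faa4d937:06449d92:6f6154ec:66cd97cc'

# Print the md device name and its UUID= field from each ARRAY line
printf '%s\n' "$scan" | awk '{for (i = 1; i <= NF; i++) if ($i ~ /^UUID=/) print $2, $i}'
```

This prints one "/dev/mdN UUID=..." pair per array, which is exactly the pairing an ARRAY line needs.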
 
Old 09-07-2010, 06:19 AM   #7
MrMakealotofsmoke
Member
 
Registered: Apr 2010
Posts: 30

Original Poster
Rep: Reputation: 1
OK, after about an hour and a half I fixed it. I just needed to use --force with --assemble and, ta-da, it's back. That was close :S haha.

I also remade my mdadm.conf, so hopefully that will hold up when the drives decide to play musical names again.
 
1 member found this post helpful.
Old 09-07-2010, 07:42 AM   #8
xeleema
Member
 
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, rhel3/5, Solaris 8-10(sparc), HP-UX 11.x (pa-risc)
Posts: 988
Blog Entries: 4

Rep: Reputation: 254
Awesome! Glad it's working again!
Quote:
Originally Posted by MrMakealotofsmoke View Post
I also remade my mdadm.conf so hopefully that will be ok for when the drives decide to play musical names
Can you post your mdadm.conf as a reference for anyone else who comes across this thread?

One more favor: if all the issues have been taken care of, please mark this thread as [SOLVED].

Thanks for the reply!
 
Old 09-08-2010, 03:33 AM   #9
MrMakealotofsmoke
Member
 
Registered: Apr 2010
Posts: 30

Original Poster
Rep: Reputation: 1
I'll change some SATA drives around and see if it still picks up the arrays. If so, I will.
 
Old 09-12-2010, 02:38 AM   #10
MrMakealotofsmoke
Member
 
Registered: Apr 2010
Posts: 30

Original Poster
Rep: Reputation: 1
Still doesn't work too well.

Rebooted the server today, and the drives changed around again. md0 is always fine, but I suspect that's because it's using the motherboard ports, which never change.

md5 always drops the same drive; md5 (the RAID0 array) is now completely broken and I can't re-assemble it properly. --force doesn't work :/
 
Old 09-12-2010, 03:09 AM   #11
xeleema
Member
 
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, rhel3/5, Solaris 8-10(sparc), HP-UX 11.x (pa-risc)
Posts: 988
Blog Entries: 4

Rep: Reputation: 254
That's strange.
I'm sure the UUIDs of the drives haven't changed (even if the controller order has).
Can you post the output of the following command?

Code:
grep -v "#" /etc/mdadm.conf | grep .
(Note: you might have to change the path to mdadm.conf)
 
Old 09-12-2010, 03:43 AM   #12
MrMakealotofsmoke
Member
 
Registered: Apr 2010
Posts: 30

Original Poster
Rep: Reputation: 1
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=00.90 UUID=1857eb01:cc613ac3:d03374e0:37ba532a
ARRAY /dev/md1 level=raid5 num-devices=3 metadata=00.90 UUID=44fc3622:cf99fe69:5d53fc14:f3cbdc1c devices=/dev/sde1,/dev/sdi1,/dev/sdj1,/dev/sdh1
ARRAY /dev/md5 level=raid0 num-devices=2 UUID=ef31fd02:278dbfb1:5d53fc14:f3cbdc1c devices=/dev/sdg1,/dev/sdk1

Note: I added the devices= lines today to try to make it work better, but because the letters sometimes change, it doesn't really help, lol.
 
Old 09-12-2010, 05:37 AM   #13
xeleema
Member
 
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, rhel3/5, Solaris 8-10(sparc), HP-UX 11.x (pa-risc)
Posts: 988
Blog Entries: 4

Rep: Reputation: 254
Interesting.
If you have a recent backup of your data (or don't care too much if the entire thing goes belly-up), I would suggest you try an mdadm.conf that contains the following:

Example mdadm.conf:
Code:
# Scan all partitions (/proc/partitions) for mdadm superblocks
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=00.90 UUID=1857eb01:cc613ac3:d03374e0:37ba532a
ARRAY /dev/md1 level=raid5 num-devices=3 metadata=00.90 UUID=44fc3622:cf99fe69:5d53fc14:f3cbdc1c
ARRAY /dev/md5 level=raid0 num-devices=2 UUID=ef31fd02:278dbfb1:5d53fc14:f3cbdc1c
NOTE: There is only one "DEVICE" line, and no per-array "devices=" entries. That's deliberate.

If your Linux distribution supports udev, you could do what xushi did back in '07 and create the appropriate rule file under /etc/udev/rules.d. You could basically make sure that /dev/sda stays /dev/sda, no matter which controller or SATA port it's connected to.
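For the udev route, the rule file would look something like the sketch below. The file name and serial number here are made up for illustration, and the exact ID_* property names differ between udev versions, so check what `udevadm info --query=all --name=/dev/sdg` (or `udevinfo` on older systems) actually exports before relying on one.

```
# /etc/udev/rules.d/60-persistent-raid-disks.rules   (hypothetical file name)
# Give the disk with this serial number a stable symlink (e.g. /dev/raid-disk-a,
# /dev/raid-disk-a1 for partition 1), no matter which port detects it first.
KERNEL=="sd*", ENV{ID_SERIAL_SHORT}=="WD-WCAV12345678", SYMLINK+="raid-disk-a%n"
```

The symlinks can then be used in mdadm.conf or on the command line in place of the unstable sdX names.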
 
Old 09-12-2010, 05:52 AM   #14
MrMakealotofsmoke
Member
 
Registered: Apr 2010
Posts: 30

Original Poster
Rep: Reputation: 1
Well, in my /etc/mdadm/mdadm.conf I do have the DEVICE line defined.

My main concern at the moment is getting this RAID0 array working again. When I do --re-add, it adds the drive as a spare rather than as the active drive. How can I fix this? As soon as I get it working again, I think I'll copy all the data off it and rebuild it :S lol

EDIT:

mdadm --assemble --scan
mdadm: failed to add /dev/sdg1 to /dev/md5: Device or resource busy
mdadm: /dev/md5 assembled from 1 drive and 1 spare - not enough to start the array.


mdadm --assemble /dev/md5 /dev/sdg1 /dev/sgh1
mdadm: cannot open device /dev/sgh1: No such file or directory
mdadm: /dev/sgh1 has no superblock - assembly aborted

Quote:
mdadm -E /dev/sdg1
/dev/sdg1:
Magic : a92b4efc
Version : 00.90.00
UUID : ef31fd02:278dbfb1:5d53fc14:f3cbdc1c (local to host svrfile)
Creation Time : Sun May 30 19:10:04 2010
Raid Level : raid0
Used Dev Size : 0
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5

Update Time : Tue Sep 7 21:05:33 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : a9942ee9 - correct
Events : 65

Chunk Size : 64K

Number Major Minor RaidDevice State
this 2 8 177 -1 spare

0 0 8 97 0 active sync /dev/sdg1
1 1 8 129 1 active sync /dev/sdi1
Quote:
mdadm -E /dev/sdh1
/dev/sdh1:
Magic : a92b4efc
Version : 00.90.00
UUID : ef31fd02:278dbfb1:5d53fc14:f3cbdc1c (local to host svrfile)
Creation Time : Sun May 30 19:10:04 2010
Raid Level : raid0
Used Dev Size : 0
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5

Update Time : Sun Sep 12 12:09:28 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : a99a4802 - correct
Events : 67

Chunk Size : 64K

Number Major Minor RaidDevice State
this 0 8 97 0 active sync /dev/sdg1

0 0 8 97 0 active sync /dev/sdg1
1 1 8 129 1 active sync /dev/sdi1

Last edited by MrMakealotofsmoke; 09-12-2010 at 06:47 AM.
 
Old 09-13-2010, 02:01 AM   #15
xeleema
Member
 
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, rhel3/5, Solaris 8-10(sparc), HP-UX 11.x (pa-risc)
Posts: 988
Blog Entries: 4

Rep: Reputation: 254
Quote:
Main concern atm is getting this raid0 array working again. When i do --re-add it adds the drive as a spare and not the active drive. How can i fix this? As soon as i get it working again i think ill copy all the data off it and rebuild it :S lol
This happened after you removed every per-array "devices=" entry and kept only the "DEVICE partitions" line?

You cannot --re-add a drive to a RAID-0 (stripe) array, as that RAID type has no redundancy. All drives have to be specified when the array is --create'd.

Confirm you do not have the drive already used in another array;
cat /proc/mdstat

Do the following on each drive for your RAID-0 array;
fdisk -l /dev/sd[g,k]1 | grep -A1 Device

Each drive should report its ID/type as "fd" / "Linux raid autodetect". If not, correct it with the "fdisk" command.
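If a partition does need its type changed, it can also be done non-interactively with sfdisk instead of walking through fdisk's menu. This is only a sketch: the device name is an example, and the option spelling depends on your util-linux version (older sfdisk releases use --change-id; newer ones call it --part-type).

```
# Set partition 1 on /dev/sdg to type "fd" (Linux raid autodetect)
sfdisk --change-id /dev/sdg 1 fd
# Then re-read the partition table (or reboot) so the kernel sees the new type
```

Double-check with "fdisk -l" afterwards before touching the array.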

As we don't care about data we cannot recover, we "clean up" the drives;
mdadm --stop /dev/md5
mdadm --misc --zero-superblock /dev/sd[g,k]1


Then we create the RAID-0 array;
mdadm --create /dev/md5 --level=0 --raid-devices=2 --spare-devices=0 --chunk=128 /dev/sd[g,k]1
NOTE: Substitute the "--chunk=128" value with however large you want your stripes; 128 KB here is just an example.

Snag the UUID of the array so we can store it in the mdadm.conf file:
mdadm --detail /dev/md5 | grep UUID

WARNING: Your device names for the one RAID-0 array seem to have changed from "sdg1 & sdi1" to "sdg1 & sdk1".
Make sure you use the right device names.

P.S: What Linux distribution are you running?
EDIT: Never mind, I found out from a post of yours in a previous thread.

Last edited by xeleema; 09-13-2010 at 02:12 AM.
 
  

