Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
Ok, why is Linux so shit? Since I've had it, it's been nothing but trouble (read my grub thread).
So apart from grub deciding to install on other drives, and Linux not starting services on startup even though they are set to start on startup, I'm having issues with mdadm.
It seems Linux likes to change my drive letters around sometimes. I got a new network card and put it into a PCIe x1 slot. I also have two PCIe x1 SATA cards. That changes how it boots, so it changes the drive letters. So my /dev/md1 drops a drive, as it's now got another letter. 40 hours later it's rebuilt. Now to restart the server... oh, the other RAID device (/dev/md5) is completely gone and /dev/md1 is missing a drive again. Reboot the server: the md5 array is back, but md1 is still missing the drive. Ok, I'll re-add it... 40-hour rebuild, go!
So why doesn't Linux keep its drive letters, and why doesn't mdadm detect the superblock so it doesn't have to rebuild every F***KING TIME!
1) When you play "Musical PCI Slots" with your SATA controllers, the order in which the disks are detected (and given cute names like "sda", "sdb", etc.) changes.
2) If you set up a RAID array using "mdadm", and you *plan* on playing "Musical PCI Slots", then you need to identify the arrays you built by UUID rather than by device name (see the sketch after this list).
3) GRUB (not "grub") will install on whatever drive you, or your Linux Distribution's Installation Wizard, decided was the first drive.
4) The "md tools" suite does drive rebuilds at the lowest possible priority in order to minimize the impact it will have on the server's performance. If you're interested in changing that, read this.
5) Please use a browser with built-in spellchecking. My spelling is far from perfect, but Firefox can be a big help here.
6) Please visit and read the "Great Justice" links in my signature.
7) Words in all Caps are akin to screaming. Don't scream. It's impolite.
8) Welcome to LinuxQuestions. We love a challenge, just keep in mind the site is not called WhyWontLinux!@#$%DoWhatIwant.com (mainly because that's an invalid DNS name).
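A minimal sketch of points 2 and 4, assuming a Debian-style /etc/mdadm/mdadm.conf path (the UUID shown for md1 is a placeholder):
Code:
# 2) Find the array UUID on any member partition...
mdadm --examine /dev/sdb1 | grep UUID
# ...and pin the array to it in /etc/mdadm/mdadm.conf:
#    ARRAY /dev/md1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

# 4) Raise the rebuild speed floor (KB/s per disk) if you would
#    rather finish resyncs faster at the cost of responsiveness:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max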
Heh, sorry about that. I was pretty annoyed that I had just waited 40 hours for an mdadm rebuild, then I restarted and it decided to do it again. I have it set to low priority, but when you need to copy 400 GB of files it makes it take quite a while :/.
Can I edit the mdadm config to use UUIDs without remaking the arrays?
Quote:
Originally Posted by MrMakealotofsmoke
Heh, sorry about that. I was pretty annoyed that I had just waited 40 hours for an mdadm rebuild, then I restarted and it decided to do it again.
Understandable, that would have me throwing things and beating on the ol' Fire Safe in the back in no time.
Quote:
Originally Posted by MrMakealotofsmoke
Can I edit the mdadm config to use UUIDs without remaking the arrays?
You should be able to. Pretty much anyone that faces the "device names changed on me" problem (like the guys that build RAID arrays out of USB sticks) has figured out the whole UUID thing. Check this article here, under the "Starting the Array, Again" section.
However, most people don't have an mdadm.conf configuration. I know I typically don't. Most arrays are built out of partitions tagged with "fd" (Linux RAID autodetect).
To find out if your setup is like this, find and review the contents of your mdadm.conf, and maybe run "fdisk -l" against your disks.
Here's the output from my VMware playground system;
Code:
[root@system ~]# fdisk -l /dev/sd[a-e]
Disk /dev/sda: 82.3 GB, 82348277760 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 16 128488+ fd Linux raid autodetect
/dev/sda2 17 9964 79907310 fd Linux raid autodetect
Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 19457 156288321 fd Linux raid autodetect
Disk /dev/sdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 19457 156288321 fd Linux raid autodetect
Disk /dev/sdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 19457 156288321 fd Linux raid autodetect
Disk /dev/sde: 81.9 GB, 81964302336 bytes
255 heads, 63 sectors/track, 9964 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 * 1 16 128488+ fd Linux raid autodetect
/dev/sde2 17 9964 79907310 fd Linux raid autodetect
[root@system ~]#
Keep in mind there's nothing wrong with this type of setup, it's just the quick-n-dirty way most of us build RAIDs.
I have /dev/md5 which is a RAID0 array...
mdadm --assemble /dev/md5 /dev/sdg1 /dev/sdi1
but it keeps telling me:
mdadm: /dev/md5 assembled from 1 drive - not enough to start the array.
RAID0? Wow, well I hope nothing important was on there.
Quick Solution:
Just re-create the array, stick a new filesystem on there (rough sketch below).
(Be sure to use UUIDs in your mdadm.conf.)
Restore from a backup (if you have one).
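A minimal sketch of the filesystem step, assuming ext4 and a placeholder mount point (the careful re-create procedure is spelled out in the Long Solution below):
Code:
# after /dev/md5 has been re-created (see Long Solution):
mkfs.ext4 /dev/md5            # filesystem choice is an assumption
mount /dev/md5 /mnt/data      # /mnt/data is a placeholder
# then restore your data from backup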
Long Solution:
First make sure the partitions (sdg1 & sdi1) show up as RAID devices;
Example:
Code:
[root@system ~]# mdadm --examine --verbose /dev/sda1 |grep .
/dev/sda1:
Magic : a92b4efc
Version : 0.90.00
UUID : faa4d937:06449d92:6f6154ec:66cd97cc
Creation Time : Mon Jun 28 23:00:55 2010
Raid Level : raid1
Used Dev Size : 128384 (125.40 MiB 131.47 MB)
Array Size : 128384 (125.40 MiB 131.47 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Update Time : Sun Sep 5 04:22:05 2010
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : 18f27a3f - correct
Events : 28
Number Major Minor RaidDevice State
this 0 8 1 0 active sync /dev/sda1
0 0 8 1 0 active sync /dev/sda1
1 1 8 65 1 active sync /dev/sde1
Interesting.
If you have a recent backup of your data (or don't care too much if the entire thing goes belly-up), I would suggest you try an mdadm.conf that contains the following;
NOTE: There is only one "DEVICES=" line. That's deliberate.
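The suggested file contents didn't survive in this thread, but from the follow-up posts it was along these lines (a sketch; mdadm's actual keyword is "DEVICE", though the thread writes it as "DEVICES="; the md1 UUID is a placeholder, and the md5 UUID comes from the --examine output later in the thread):
Code:
# /etc/mdadm/mdadm.conf
# exactly one DEVICE line, matching any partition on any controller
DEVICE partitions

# pin each array to its UUID, never to sdX names
ARRAY /dev/md1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md5 UUID=ef31fd02:278dbfb1:5d53fc14:f3cbdc1c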
If your Linux distribution supports udev, you could do what xushi did back in '07 and create the appropriate rule file under /etc/udev/rules.d. You could basically make sure that /dev/sda stays /dev/sda, no matter what controller or SATA port it was connected to.
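A sketch of that idea (the serial number and rule filename are made up; current udev won't rename kernel names like "sda" outright, so a persistent symlink is the usual equivalent):
Code:
# /etc/udev/rules.d/60-raid-disks.rules
# match the disk by hardware serial and give it a stable alias that
# survives controller/port shuffles, e.g. /dev/raid-disk0
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="WDC_WD1600AAJS_WD-WCAS00000001", SYMLINK+="raid-disk0"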
Well, in my /etc/mdadm/mdadm.conf I have the DEVICES defined.
Main concern atm is getting this RAID0 array working again. When I do --re-add it adds the drive as a spare and not the active drive. How can I fix this? As soon as I get it working again I think I'll copy all the data off it and rebuild it :S lol
EDIT:
mdadm --assemble --scan
mdadm: failed to add /dev/sdg1 to /dev/md5: Device or resource busy
mdadm: /dev/md5 assembled from 1 drive and 1 spare - not enough to start the array.
mdadm --assemble /dev/md5 /dev/sdg1 /dev/sgh1
mdadm: cannot open device /dev/sgh1: No such file or directory
mdadm: /dev/sgh1 has no superblock - assembly aborted
Quote:
mdadm -E /dev/sdg1
/dev/sdg1:
Magic : a92b4efc
Version : 00.90.00
UUID : ef31fd02:278dbfb1:5d53fc14:f3cbdc1c (local to host svrfile)
Creation Time : Sun May 30 19:10:04 2010
Raid Level : raid0
Used Dev Size : 0
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5
Update Time : Tue Sep 7 21:05:33 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : a9942ee9 - correct
Events : 65
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 177 -1 spare
0 0 8 97 0 active sync /dev/sdg1
1 1 8 129 1 active sync /dev/sdi1
Quote:
mdadm -E /dev/sdh1
/dev/sdh1:
Magic : a92b4efc
Version : 00.90.00
UUID : ef31fd02:278dbfb1:5d53fc14:f3cbdc1c (local to host svrfile)
Creation Time : Sun May 30 19:10:04 2010
Raid Level : raid0
Used Dev Size : 0
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5
Update Time : Sun Sep 12 12:09:28 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : a99a4802 - correct
Events : 67
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 97 0 active sync /dev/sdg1
0 0 8 97 0 active sync /dev/sdg1
1 1 8 129 1 active sync /dev/sdi1
Quote:
Originally Posted by MrMakealotofsmoke
Main concern atm is getting this RAID0 array working again. When I do --re-add it adds the drive as a spare and not the active drive. How can I fix this? As soon as I get it working again I think I'll copy all the data off it and rebuild it :S lol
This happened after you removed every "devices=" line except "DEVICES=partitions"?
You cannot --re-add a drive to a RAID-0 (Stripe) array, as that RAID type doesn't have redundancy. All drives have to be specified when the array is --create'd.
Confirm you do not have the drive already used in another array;
cat /proc/mdstat
Do the following on each drive for your RAID-0 array;
fdisk -l /dev/sd[gk] | grep -A1 Device
Each drive should report its ID/type as "fd" / "Linux raid autodetect". If not, correct with the "fdisk" command (keystrokes sketched below).
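fdisk is interactive, so the fix is a short keystroke sequence rather than a one-liner (partition number 1 is an example):
Code:
fdisk /dev/sdg
#  t    change a partition's type
#  1    select partition 1
#  fd   Linux raid autodetect
#  w    write the table and exit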
As we don't care about data we cannot recover, we "clean up" the drives;
mdadm --stop /dev/md5
mdadm --misc --zero-superblock /dev/sd[gk]1
Then we create the RAID-0 array;
mdadm --create /dev/md5 --level=0 --raid-devices=2 --spare-devices=0 --chunk=128 /dev/sd[gk]1
NOTE: Substitute the "--chunk=128" value with however large you want your stripes. Example contains the max (I think).
Snag the UUID of the array so we can store it in the mdadm.conf file;
mdadm --detail /dev/md5 | grep UUID
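Equivalently, mdadm can emit ready-made ARRAY lines for the config file (Debian-style path assumed):
Code:
# append an ARRAY line for every running array, then edit out any
# you don't want pinned
mdadm --detail --scan >> /etc/mdadm/mdadm.conf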
WARNING: Your device names for the one RAID-0 array seem to have changed from "sdg1 & sdi1" to "sdg1 & sdk1".
Make sure you use the right device names.
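One way to double-check which physical disk currently owns a given letter (paths are illustrative):
Code:
# map stable hardware IDs to today's sdX names
ls -l /dev/disk/by-id/ | grep -v part
# or read the RAID UUID straight off a member partition
mdadm --examine /dev/sdg1 | grep UUID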
P.S: What Linux distribution are you running? EDIT: Never mind, I found out from a post of yours in a previous thread.