Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-to's, this is the place!
So there's no way to get the data back? Cause the drive is a temp drive I used to move data from one drive to another, so there's about 1 TB of stuff on it :/
RAID-0 is like speeding on a Harley;
It's fast and fun till you crash.
There's no redundancy in a RAID-0 (stripe) array. That's why other RAID levels are available.
EDIT: Unless someone out on the Internet knows how to reestablish the array...
That's the thing, I don't see why it can't be re-assembled. I have done nothing to it; it just decided to change drive letters?
EDIT: MORE
Quote:
mdadm --assemble /dev/md5 /dev/sdg1 /dev/sdh1 --force --verbose
mdadm: looking for devices for /dev/md5
mdadm: /dev/sdg1 is identified as a member of /dev/md5, slot -1.
mdadm: /dev/sdh1 is identified as a member of /dev/md5, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md5
mdadm: added /dev/sdg1 to /dev/md5 as -1
mdadm: added /dev/sdh1 to /dev/md5 as 0
mdadm: /dev/md5 assembled from 1 drive and 1 spare - not enough to start the array.
Last edited by MrMakealotofsmoke; 09-13-2010 at 10:24 AM.
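Before giving up on an array in the state shown above, it is worth dumping each member's md superblock. A hedged sketch using the device names from this thread (sdg1/sdh1); whether a forced assemble can work depends largely on whether the "Events" counters on the members agree:

```shell
# Dump the md superblock on each member and compare the "Events"
# counters and the role each disk thinks it holds:
mdadm --examine /dev/sdg1
mdadm --examine /dev/sdh1

# If both members report consistent metadata, a forced assemble is the
# usual next step (this is what the thread already attempted):
mdadm --assemble --force --verbose /dev/md5 /dev/sdg1 /dev/sdh1
```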
Nothing except move the PCI cards around and break the array, right? That's all?
No, it decides to change letters around all the time. I think it boots the PCIe cards in a different order each time, which is silly, as when they are detected in the BIOS they are always the same. I have not touched the PCIe cards for a month or so.
mdadm --assemble /dev/md5
mdadm: failed to add /dev/sdg1 to /dev/md5: Device or resource busy
mdadm: /dev/md5 assembled from 1 drive and 1 spare - not enough to start the array.
I'm starting to think mdadm isn't as robust as everyone says :/
Last edited by MrMakealotofsmoke; 09-13-2010 at 10:55 PM.
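A side note on the "Device or resource busy" error above: it often means a stale, half-assembled /dev/md5 is still holding one of the member disks. A hedged sketch of the usual remedy, with device names as in the thread:

```shell
# See what the md driver currently has claimed:
cat /proc/mdstat

# Stop the stale array so the member disks are released, then retry:
mdadm --stop /dev/md5
mdadm --assemble --verbose /dev/md5 /dev/sdg1 /dev/sdh1
```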
Quote:
Originally Posted by MrMakealotofsmoke
Im starting to think mdadm isnt as robust as everyone says :/
mdadm is quite robust, supports all kinds of RAID configurations, and monitors them well.
It's saved my bacon probably close to a hundred times...when properly configured.
RAID-0, on the other hand, is a straight "it works" or "it doesn't".
That's never been robust. That's why it's not used in Production environments without either *very* good backups or built-in redundancy (like RAID-0+1, aka RAID-10).
Go read this.
If I were you, I would chalk this up to a (very painful) lesson in resource planning, then rebuild the RAID-0 as a RAID-1. Then I would go back and slam the 4 x 1TB disks you have into a RAID-10, and do the same to the 4 x 1.5TB disks. RAID-10 over RAID-5 any day, for both redundancy and performance.
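If you do go the RAID-10 route, here is a minimal sketch of creating it with mdadm. The device names (sdb1..sde1) are placeholders, not taken from this thread:

```shell
# Create a 4-disk RAID-10 out of the 1.5 TB drives:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Record the array by UUID so assembly no longer depends on which
# letters the controllers hand out. (The config path may be
# /etc/mdadm/mdadm.conf on Debian-based systems.)
mdadm --detail --scan >> /etc/mdadm.conf
```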
P.S: It's been nine days. Experience that has been hard-earned managing a few thousand servers for the past couple of years tells me your data is far from recoverable, barring a $2,000 data-recovery job that could be done by a lil shop in Austin.
Last edited by xeleema; 09-13-2010 at 11:32 PM.
Reason: Multiple edits so my post makes sense. It's been a long 36hr day.
Yes, I can see how it is robust, but if I have to rebuild my RAID-5 array a few times a week, when a drive dies or it has decided to become degraded from a changed drive letter, I don't think I'll be able to restore it.
I still don't understand what happened to make the second RAID drive become a spare after a simple reboot.
The only reason I had RAID-0 was for a temp drive to move data around. I have the two 500 GB drives to make the 1 TB and I needed 800 GB of space. Seemed the easiest way; turns out I was wrong :/
Might have to use motherboard RAID-0 as I know it doesn't change what drive it uses.
Quote:
Originally Posted by MrMakealotofsmoke
but if i have to rebuild my raid5 array a few times a week
Okay, that's just strange. If I didn't know any better, I'd swear something was knocking your device names out of whack.
You have an IDE drive plugged in somewhere? Are you leaving a USB stick (or other USB storage device, like an ext. disk or CD-ROM) plugged in after a reboot?
EDIT: I'm outta time for today, gotta grab 12 hrs of sleep.
That's a lot of sleep
IDE drive is my OS drive, hdi (yes, I don't know why it's hdi not hda). Only other thing I have is sometimes a USB keyboard. The server runs headless.
Morning!
The longer the rebuilds take, the less intensive it is for the drives. It's when you jack up the settings so it finishes in 2 hours that you put "strain" on the drives (Which shouldn't be too big of a problem, given the huge MTBF rating for most disks).
As for the 12 hours of sleep; yep, it's necessary to cut over into a normal sleep schedule on what would normally constitute my weekends. I had been awake since 17:00 two days ago, went to sleep today at 00:00, and just woke up about 3 minutes ago.
And didn't we already go over how to tweak the kernel settings so the rebuilds wouldn't take two days? (I know it doesn't help too much, having 1TB & 1.5TB drives.)
Well, it's 4 x 1.5 TB drives. Build speed is 40000 or so and I've never seen it higher. Rebuild min speed is at 50000 and max is at 100000, lol.
I've tried many things; I think it's just slow 'cause they are Green drives.
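For scale, at that 40000 KB/s figure a single full pass over one 1.5 TB member works out to roughly ten hours:

```shell
# Back-of-envelope only: 1.5 TB taken as 1,500,000,000 KB at 40,000 KB/s
seconds=$((1500000000 / 40000))
echo "$seconds seconds (~$((seconds / 3600)) hours)"
# prints: 37500 seconds (~10 hours)
```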
So, how can I prevent having to rebuild when a drive letter changes? I plan on removing the crappy 2-port SATA PCIe cards, which are being annoying, and getting a Supermicro AOC-USAS-L8i. I fear I will lose both RAID-5 arrays when I change 1 or 2 SATA plugs around.
Last edited by MrMakealotofsmoke; 09-14-2010 at 05:19 PM.
Quote:
Originally Posted by MrMakealotofsmoke
well its 4x1.5tb drives. Build speed is 40000 or so and ive never seen it higher. rebuild min speed is at 50000 and max is at 100000 lol.
You can change the build speeds by changing /proc/sys/dev/raid/speed_limit_min and _max, as this article suggests.
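For reference, a sketch of those two tunables (values are in KB/s per device; both need root):

```shell
# Raise the md rebuild throttles via /proc:
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max

# Equivalent sysctl names, usable in /etc/sysctl.conf for persistence:
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000
```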
Quote:
Originally Posted by MrMakealotofsmoke
Ive tried many things, i think its just slow cause they are green drives.
Western Digital "Green" drives? From what I can tell, they have a variable-speed spindle. It might be possible to "force" the drives to always spin at top speed...
EDIT: read the hdparm man page, there should be something in there for hard-setting the drive's performance.
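A hedged hdparm sketch along those lines (sdX is a placeholder; note that some Green-drive behaviour, like the idle head-parking timer, is firmware-controlled and may not respond to APM settings):

```shell
hdparm -B /dev/sdX       # query the current APM level
hdparm -B 255 /dev/sdX   # disable APM, if the drive supports it
hdparm -S 0 /dev/sdX     # disable the standby (spin-down) timer
```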
Quote:
Originally Posted by MrMakealotofsmoke
So, how can i prevent having to rebuild when a drive letter changes?
I plan on removing the crapped 2 port sata pcie cards which are being annoying and getting a supermicro aoc-usas-l8i.
If you plan on using that card to do the RAIDing of your disks, you will definitely lose whatever data you have on each array (all arrays).
There is no way to "convert" a software-based array to a hardware-based array.
And that card can't do RAID 5, but it *might* be able to do RAID-10. Regardless, I would never put all my eggs in one basket (read: all drives on one controller.)
Quote:
Originally Posted by MrMakealotofsmoke
I fear i will loose both raid5 arrays when i change 1 or 2 sata plugs around.
Set up udev rules for your drives, and they should always keep their device name assignments.
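A minimal sketch of two approaches. The udev rule pins a symlink to a drive's serial number (the file name, serial, and symlink name below are hypothetical); the mdadm.conf line is the more direct fix, since mdadm identifies members by superblock UUID regardless of which letters they get:

```shell
# /etc/udev/rules.d/59-persistent-disks.rules (hypothetical file)
KERNEL=="sd?", ENV{ID_SERIAL_SHORT}=="WD-WCAV0000000", SYMLINK+="disk/raid5-a"

# /etc/mdadm.conf -- assemble by UUID; the xxxx fields are placeholders
ARRAY /dev/md5 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```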
Have you not been reading my posts completely, or has my advice just sucked that bad? I'm open to criticism, but I'd like to see it in a bunch of "Yes" or "No" clicks on the "Did you find this post helpful?" section of each post...
Also, here's an article from another source that explicitly states the rebuild time of a RAID-10 array is faster by default than that of a RAID-5 array.
More importantly, it lists why. Bear in mind that it was written back in 2008, when RAID-10 was still experimental.