So there's no way to get the data back? The drive is a temp drive I used to move data from one drive to another, so there's about 1TB of stuff on it :/
Quote:
It's fast and fun till you crash. There's no redundancy in a RAID-0 (stripe) array. That's why other RAID levels are available. EDIT: Unless someone out on the Internet knows how to reestablish the array...
That's the thing, I don't see why it can't be re-assembled. I have done nothing to it; it just decided to change drive letters?
EDIT: More:
Quote:
The only thing I could think of would be: Code:
mdadm --stop /dev/md5 ; mdadm --assemble /dev/md5
Code:
mdadm --stop /dev/md5 ; mdadm --assemble /dev/md5 --spare-devices=0
Quote:
mdadm --assemble /dev/md5
mdadm: failed to add /dev/sdg1 to /dev/md5: Device or resource busy
mdadm: /dev/md5 assembled from 1 drive and 1 spare - not enough to start the array.
I'm starting to think mdadm isn't as robust as everyone says :/
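Before retrying assembly, it's usually worth reading what each member's superblock actually says; that shows why mdadm thinks one device is a spare. A minimal sketch, assuming the member names from this thread (`/dev/sdg1` appears above; the second member's name is a placeholder for whatever your other disk is):

```shell
# Dump the md superblock from each member of the broken array.
# /dev/sdg1 is from the error output above; /dev/sdX1 stands in
# for the second member on your system.
mdadm --examine /dev/sdg1
mdadm --examine /dev/sdX1

# Compare the "Array UUID", "Events" counter, and the role/state
# lines between the two. A member reported as "spare" here is why
# assembly stops with "not enough to start the array".
```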
Quote:
It's saved my bacon probably close to a hundred times...when properly configured. RAID-0, on the other hand, is a straight "it works" or "it doesn't". That's never been robust. That's why it's not used in production environments without either *very* good backups or built-in redundancy (like RAID-0+1, aka RAID-10). Go read this.

If I were you, I would chalk this up to a (very painful) lesson in resource planning, then rebuild the RAID-0 as a RAID-1. Then I would go back and slam the 4 x 1TB disks you have into a RAID-10, and do the same to the 4 x 1.5TB disks. RAID-10 over RAID-5 any day - for both redundancy and performance.

P.S.: It's been nine days. Experience hard-earned managing a few thousand servers over the past couple of years tells me your data is far from recoverable, barring a $2,000 data-recovery job that could be done by a lil shop in Austin.
I'm aware of the different types of RAID.
Yes, I can see how it is robust, but if I have to rebuild my RAID-5 array a few times a week, when a drive dies and it has decided to become degraded from a changed drive letter, I don't think I'll be able to restore it. I still don't understand what happened to make the second RAID drive become a spare at a simple reboot. The only reason I had RAID-0 was for a temp drive to move data around. I have the two 500GB drives to make the 1TB and I needed 800GB of space. Seemed the easiest way; turns out I was wrong :/ Might have to use motherboard RAID-0, as I know it doesn't change what drive it uses.
Quote:
You have an IDE drive plugged in somewhere? Are you leaving a USB stick (or other USB storage device, like an ext. disk or CD-ROM) plugged in after a reboot? EDIT: I'm outta time for today, gotta grab 12 hrs of sleep.
Quote:
The IDE drive is my OS drive, hdi (yes, I don't know why it's hdi and not hda). The only other thing I have is sometimes a USB keyboard. The server runs headless.
Well, I installed Ubuntu on a spare hard drive. It still won't assemble; it still makes the 2nd hard drive a spare :(
If only --force actually made it work with errors, lol. That way I could still try to recover some of my data.
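For what it's worth, mdadm's assemble mode does accept a `--force` flag, which tells it to ignore a modest event-count mismatch between members. A cautious sketch, with the member names assumed rather than known (and with the usual caveat that forcing a degraded RAID-0 back together is a last resort):

```shell
# Stop the half-assembled array first, or the members stay "busy".
mdadm --stop /dev/md5

# Retry assembly, letting mdadm override a small event-count
# mismatch. /dev/sdg1 is from the thread; /dev/sdX1 is whatever
# the second member is on your system.
mdadm --assemble --force /dev/md5 /dev/sdg1 /dev/sdX1

# If it starts, mount read-only and copy data off immediately;
# don't write to a forced array.
mount -o ro /dev/md5 /mnt
```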
Sigh, drive letter change, rebuild array again, another 40h :/
I hope my drives don't die early from all this unneeded thrashing from rebuilding.
Morning! :)
The longer the rebuilds take, the less intensive it is for the drives. It's when you jack up the settings so it finishes in 2 hours that you put "strain" on the drives (which shouldn't be too big of a problem, given the huge MTBF rating for most disks).

As for the 12 hours of sleep: yep, it's necessary to cut over into a normal sleep schedule on what would normally constitute my weekends. I was awake since 17:00 two days ago, went to sleep today at 00:00, and just woke up about 3 minutes ago.

And didn't we already go over how to tweak the kernel settings so the rebuilds wouldn't take two days? (I know it doesn't help too much, having 1TB & 1.5TB drives.)
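The kernel settings mentioned above are presumably the standard Linux md resync throttles; a minimal sketch (the sysctl paths are the stock kernel ones, the values are just illustrative):

```shell
# Current rebuild throttle, in KB/s per device:
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the floor so the resync isn't starved by normal I/O,
# and lift the ceiling (illustrative values; as root):
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000

# Watch the effect on the running resync:
cat /proc/mdstat
```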
Well, it's 4x 1.5TB drives. Build speed is 40000 or so, and I've never seen it higher. Rebuild min speed is at 50000 and max is at 100000, lol.
I've tried many things; I think it's just slow because they are green drives. So, how can I prevent having to rebuild when a drive letter changes? I plan on removing the crappy 2-port SATA PCIe cards, which are being annoying, and getting a Supermicro AOC-USAS-L8i. I fear I will lose both RAID-5 arrays when I change 1 or 2 SATA plugs around.
Quote:
EDIT: Read the hdparm man page; there should be something in there for hard-setting the drive's performance.
Quote:
There is no way to "convert" a software-based array to a hardware-based array. And that card can't do RAID 5, but it *might* be able to do RAID-10. Regardless, I would never put all my eggs in one basket (read: all drives on one controller.) Quote:
Have you not been reading my posts completely, or has my advice just sucked that bad? I'm open to criticism, but I'd like to see it in a bunch of "Yes" or "No" clicks to the "Did you find this post helpful?" section of each post...
Also, here's an article from another source that explicitly states the rebuild time of a RAID-10 array is faster by default than that of a RAID-5 array.
More importantly, it lists why. Bear in mind that it was written back in 2008, when RAID-10 was still experimental.