Linux - Hardware
This forum is for Hardware issues. Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
For the purpose of full disclosure, I have 9 drives in 2 RAID 5 arrays (3 in /dev/md0 and 6 in /dev/md1) that are combined over LVM. md1 works fine, but I'm having trouble with md0. I got an error saying the filesystem was mounted read-only when I tried to write to a folder on the array, and when I checked the array, only 1 of the 3 drives was up. When I rebooted, 2 of the 3 came up. I re-added the first drive and kept an eye on the progress. After maybe 10%, the system locked up. Each time I rebooted, the array would come up degraded with 2 drives. After adding the third, it would lock up, usually with no error, although one time it started getting buffer I/O errors on one drive; that didn't happen again. After freaking out a bit, I realized that if I don't add the third drive, everything seems fine, albeit with degraded status.
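(For reference, the sort of commands involved here look roughly like the following; the device names are placeholders, not necessarily what this machine uses, so treat it as a sketch:)

# overall software-RAID status
cat /proc/mdstat
# details for the problem array
mdadm --detail /dev/md0
# re-add the dropped drive (placeholder name) and watch the resync
mdadm /dev/md0 --add /dev/sdc1
watch cat /proc/mdstat
# how the two arrays are combined under LVM
pvs
vgs
lvs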
So, I'm backing up some stuff now, but what should I do about this? Before suggesting anything drastic, note that the entire array is 2.7 TB with 1.6 TB of that used.
Well, I don't want to format it just yet, but I did run badblocks on the drive (with the non-destructive read-write test), and no bad blocks were found. The next step will be to scan the drive with the utility from Seagate. I think I'll check all three while I'm at it.
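(For reference, the non-destructive read-write test is badblocks' -n mode; the device name below is a placeholder, and the drive should be unmounted first:)

# non-destructive read-write test, verbose, with progress shown
badblocks -nsv /dev/sdc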
You realize the drive gets wiped (all blocks re-written) every time you re-add it to the array?
Huh. I guess I must have been thinking assemble. I haven't slept much the last few days. Well, since the drive has been wiped anyway, how do you format it like you said? I couldn't find the flag, but like I said, I'm kinda tired.
It depends on the filesystem, but generally specifying the "-c" option twice causes the read/write test and re-allocation. For example, to create an Ext3 filesystem:
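(Presumably something along these lines; the device name is just a placeholder:)

# create an Ext3 filesystem; -c given twice runs the slower read-write bad-block test
mke2fs -j -c -c /dev/sdc1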
How do you have 2.7 TB in RAID 5 on 3 drives? Just curious. Seems that would take 1.35 TB individual drives.
If you have 2 good drives, you should be able to wipe the other (as mentioned above) and rebuild from those, assuming they're still good. I'd run smartmontools against them to check their hardware SMART status.
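(For reference, the smartmontools checks would look something like this; the device name is a placeholder:)

# show SMART health, attributes, and self-test log
smartctl -a /dev/sdc
# start the long self-test in the background, then check results later with -a
smartctl -t long /dev/sdc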
You missed the first part about the 2 arrays combined over LVM. The other array has six 400 GB drives. I also think that the last time I tried smartmontools, the drives were not SMART-capable.
The only thing I don't understand is why that format would solve anything, since the filesystem is going to be destroyed anyway when the array is rebuilt. I don't think you can even run mkfs on a drive set up as Linux raid autodetect.
Well, when I tried to run the long SMART test on the drives, one of the (working) drives dropped out of the array and started giving me buffer I/O errors, but I booted from a Seagate tools CD (which only had SMART tests) and ran a long test on all three drives with no problem. I'm running fsck right now, but it'll take a damn long time. I'm thinking about trying to reduce the size of the LVM volume, but I'm not sure how to remove /dev/md0 from it, let alone how to make sure the data is all on /dev/md1 first so it isn't lost.
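(Roughly, the LVM side of that would look like the following, assuming the volume group has enough free space on the remaining physical volume to hold everything; the volume group name here is a placeholder:)

# move all allocated extents off the md0 physical volume onto the remaining PVs (e.g. md1)
pvmove /dev/md0
# once it's empty, drop it from the volume group and clear the PV label
vgreduce vg0 /dev/md0
pvremove /dev/md0
# note: if the logical volume itself is bigger than md1 can hold, the filesystem
# and LV would have to be shrunk first (resize2fs, then lvreduce), which is the risky part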
Ok, so after running fsck, I was able to add the third drive to the array. Everything was back to normal, but I wanted to make sure it would be fine after rebooting. Unfortunately, after I rebooted, it came up degraded with the third drive listed as removed. I'm adding it to the array again, but what do I need to do to get it to start correctly?
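(One thing worth comparing, in case the array definition isn't being picked up at boot; whether your distro reads /etc/mdadm.conf or /etc/mdadm/mdadm.conf varies, so treat this as a sketch:)

# what mdadm currently sees assembled
mdadm --detail --scan
# compare against the ARRAY lines in the config file used at boot
cat /etc/mdadm.conf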