
patrickkenlock 04-22-2004 08:12 AM

WARNING: Some disks in your RAID arrays seem to have failed!
 
Hi everyone

Coming into work this morning I got this message:

/etc/cron.daily/raidtools2:
WARNING: Some disks in your RAID arrays seem to have failed!
Below is the content of /proc/mdstat:

Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 sdc1[1] sdb1[0](F)
35559744 blocks [2/1] [_U]

unused devices: <none>

Hardware: Dell PowerEdge 2400, Maxtor 36GB hard drives

I have two brand-new Maxtor Atlas 73GB 10K IV Ultra320 SCSI drives which I was going to add to the system, but now it looks like I'll have to replace my existing RAID 1 setup instead.

There are backups of most of the data.

I have thought about shutting down (the machine boots from a non-RAID drive), removing the drives, inserting the new ones, restarting, and configuring partitions and RAID; then shutting down again, reinserting the remaining good drive, restarting, and copying the data over. That seems rather long-winded given the time available.
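For what it's worth, my rough mental sketch of the "new array and copy" step is something like this (raidtools2 syntax; the device names and mount points are just placeholders, not my real setup):

# /etc/raidtab entry for the new mirror (example devices):
raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              4
    device                  /dev/sdd1
    raid-disk               0
    device                  /dev/sde1
    raid-disk               1

# Then build it, make a filesystem and copy the data across:
mkraid /dev/md1            # initialise the new RAID 1 array
mke2fs -j /dev/md1         # ext3 filesystem on the new array
mount /dev/md1 /mnt/new    # temporary mount point
cp -a /data/. /mnt/new/    # copy everything over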

Suggestions on the best next move would be appreciated, as the server can only be down for a few hours :) at night or over the weekend.

Thanks
PK

ToniT 04-22-2004 10:33 AM

The '[2/1] [_U]' shows that the first disk in the RAID array has failed. You can safely remove the failed disk and the system should still be usable. You can either regenerate the mirror by giving it a partition large enough to mirror onto, or just put the new disks in as a new RAID array and move the data.
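Roughly like this with the raidtools2 commands (mdadm can do the same with --fail/--remove/--add; the device name here comes from your mdstat output):

# sdb1 is already marked (F), so it can be pulled straight out:
raidhotremove /dev/md0 /dev/sdb1   # detach the failed member
# after swapping the hardware, give md0 a replacement partition
# at least as large as the old one:
raidhotadd /dev/md0 /dev/sdb1      # resync starts automatically
cat /proc/mdstat                   # watch the rebuild progress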

The first downtime here is the moment when you take the bad disk away and put the two new disks in. The second (software) downtime is when you stop using the old disk and start using the new RAID (stop the processes using the old disk, umount it, mount the new array in the same place, start the processes again).
If you are using LVM, then the second downtime can be avoided.
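For example, the cutover could look like this (the service name and mount point are made up):

# second downtime, sketched:
/etc/init.d/samba stop     # stop whatever is using the old disk
umount /data               # detach the old filesystem
mount /dev/md1 /data       # the new array takes its place
/etc/init.d/samba start    # bring the services back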

patrickkenlock 04-22-2004 10:47 AM

RE: WARNING: Some disks in your RAID arrays seem to have failed!
 
Thanks for your help.

Could a larger disk be used in the array to replace the faulty one? As it's the first disk, would the bigger one be limited to the existing RAID size (36GB)? What I want to do is migrate to the larger size; from what I've read, the RAID capacity is governed by the smallest drive.

Thanks
PK

ToniT 04-22-2004 12:11 PM

It is true that the smallest partition in the array is the limiting factor. If you want to keep the old array and have it mirrored, one thing you can do is make a 36GB partition on one of the new disks and use the rest (73-36=37 GB) for something else (like building a new array).
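Something like this (device names assumed; fdisk is interactive, so the partition step is only outlined):

# carve the new 73GB disk into a 36GB piece plus the rest:
fdisk /dev/sdd                  # create sdd1 (~36GB, type fd = Linux
                                # raid autodetect) and sdd2 (~37GB)
raidhotadd /dev/md0 /dev/sdd1   # mirror the old array onto sdd1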

Another idea (not sure if it works, because I'm not sure whether the RAID volume can grow dynamically; never tested):
If the array can be extended dynamically, you could do it like this (there is a command sketch after the list):
1. First replace the old disk with the new one(s), giving the whole first new disk to the RAID array. This step definitely works; the array simply uses only the first 36GB of the new disk.
2. Wait for the mirror to finish resyncing (/proc/mdstat shows the status of the rebuild).
3. Take the original 36GB disk out of the array (this also works). The array is now in degraded mode again.
4. Give the second new disk to the array. Now there are two 73GB disks in the array, so the limiting size is 73GB, not 36GB. This step works too; what I'm not sure of is whether the array then understands to use the whole disk.
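In commands, the sequence would be roughly this (untested sketch; sdb1 is the failed old disk from the mdstat above, sdc1 the surviving old disk, and sdd1/sde1 hypothetical partitions spanning the new 73GB disks):

# step 1: swap the failed old disk for the first new one
raidhotremove /dev/md0 /dev/sdb1   # drop the dead member
raidhotadd /dev/md0 /dev/sdd1      # resync onto the new 73GB disk
# step 2: wait until the mirror is complete
cat /proc/mdstat
# step 3: retire the surviving 36GB disk
raidsetfaulty /dev/md0 /dev/sdc1   # an active disk must be failed
raidhotremove /dev/md0 /dev/sdc1   # before it can be removed
# step 4: add the second new 73GB disk
raidhotadd /dev/md0 /dev/sde1
# whether md0 then grows past 36GB is exactly the open question;
# newer mdadm versions have a --grow mode for this, but I have not
# tried it here.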

I recommend reading some documentation on the subject.

patrickkenlock 04-26-2004 02:19 AM

Thanks ToniT
I managed to source a 36GB disk, rebuilt the array, and put in the two 73GB drives as originally planned (as extra drives). There was a bit of downtime but it was worth it.
I will post a full description to this list as time permits.
Thanks again.

