Old 04-22-2004, 08:12 AM   #1
patrickkenlock
LQ Newbie
 
Registered: Apr 2004
Distribution: Debian, Suse
Posts: 5

Rep: Reputation: 0
WARNING: Some disks in your RAID arrays seem to have failed!


Hi everyone

Coming into work this morning I got this message:

/etc/cron.daily/raidtools2:
WARNING: Some disks in your RAID arrays seem to have failed!
Below is the content of /proc/mdstat:

Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 sdc1[1] sdb1[0](F)
35559744 blocks [2/1] [_U]

unused devices: <none>

Hardware: Dell PowerEdge 2400, Maxtor 36GB hard drives

I have two brand-new Maxtor Atlas 10K IV 73GB Ultra320 SCSI drives which I was going to add to the system, but now I'm going to have to replace my existing RAID 1 setup instead.

There are backups of most of the data.

I have thought about shutting down (the machine boots from a non-RAID drive), removing the drives, inserting the new ones, restarting, and configuring partitions and RAID; then copying the data over by shutting down again, re-inserting the remaining good drive, restarting, and copying across. This seems rather long-winded given the time available.

Suggestions on the best next move would be appreciated as the server can only be down a few hours at night or over the weekend.

Thanks
PK

Last edited by patrickkenlock; 04-22-2004 at 08:13 AM.
 
Old 04-22-2004, 10:33 AM   #2
ToniT
Senior Member
 
Registered: Oct 2003
Location: Zurich, Switzerland
Distribution: Debian/unstable
Posts: 1,357

Rep: Reputation: 47
The '[2/1] [_U]' shows that the first disk in the RAID array has failed. You can safely remove the failed disk and the system should still be usable. You can either regenerate the mirror by giving the array a partition large enough to mirror onto, or just put the new disks in as a new RAID array and move the data over.
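Something like this should do it once the replacement disk is in place (untested from here, and the device names are only examples; check dmesg and /proc/mdstat for yours):

# with raidtools2, which your cron job suggests you have:
raidhotremove /dev/md0 /dev/sdb1     # drop the failed member
raidhotadd /dev/md0 /dev/sdd1        # add the replacement; resync starts
# or the same with mdadm, if you have it installed:
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdd1
cat /proc/mdstat                     # watch the rebuild progress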

The first downtime here is the moment when you take the bad disk out and put the two new disks in. The second (software) downtime is when you stop using the old disk and start using the new array (stop the processes using the old disk, umount it, mount the new array in the same place, start the processes again).
If you are using LVM, the second downtime can be avoided.
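The second downtime boils down to something like this (the mount point, service name and md1 are just examples for illustration, not your actual setup):

/etc/init.d/apache stop            # stop whatever has files open on the old disk
rsync -aH /srv/data/ /mnt/new/     # final sync onto the new array, mounted temporarily
umount /mnt/new
umount /srv/data
mount /dev/md1 /srv/data           # the new array takes over the old mount point
/etc/init.d/apache start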
 
Old 04-22-2004, 10:47 AM   #3
patrickkenlock
LQ Newbie
 
Registered: Apr 2004
Distribution: Debian, Suse
Posts: 5

Original Poster
Rep: Reputation: 0
RE: WARNING: Some disks in your RAID arrays seem to have failed!

Thanks for your help.

Could a larger disk be used in the array to replace the faulty one? As it's the first disk, would the bigger one be limited to the existing RAID size (36GB)? What I want to do is migrate to the larger size; from what I have read, the RAID capacity is governed by the smallest drive.

Thanks
PK
 
Old 04-22-2004, 12:11 PM   #4
ToniT
Senior Member
 
Registered: Oct 2003
Location: Zurich, Switzerland
Distribution: Debian/unstable
Posts: 1,357

Rep: Reputation: 47
It is true that the smallest partition in the array is the limiting factor. If you want to keep the old array and have it mirrored, one thing you can do is make a 36GB partition on one of the new disks and use the rest (73 - 36 = 37GB) for something else (such as building a new array).
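Roughly like this (device names assumed; the partition type must be fd, Linux raid autodetect, for the kernel to pick it up at boot):

fdisk /dev/sdd                   # create a ~36GB partition, set type to fd, write
raidhotadd /dev/md0 /dev/sdd1    # hot-add it to the degraded mirror
cat /proc/mdstat                 # and wait for the resync to finish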

Another idea (not sure if it works, because I'm not sure whether a RAID volume can grow dynamically; never tested):
If the RAID partition can be extended dynamically, then you could do it like this:
1. First replace the old disk with the first new one, giving the whole new disk to the RAID array. This step definitely works; the array will simply use only the first 36GB of the disk.
2. Wait for the mirror to resync (/proc/mdstat tells the status of the mirroring process).
3. Take the original 36GB disk out of the array (this can also be done). The array is now in degraded mode again.
4. Give the second new disk to the array. Now there are two 73GB disks in the array, so the limiting size is 73GB rather than 36GB. This step works too, but what I'm not sure of is whether the array will then know to use the whole disk.

I recommend reading some documentation on the subject before trying it; a rough sketch of the last step is below.
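If it works at all, it would look something like this with a reasonably new mdadm (untested, and raidtools2 has no equivalent as far as I know):

mdadm --grow /dev/md0 --size=max   # tell the array to use the full size of its members
resize2fs /dev/md0                 # then grow the filesystem too (unmount first for ext2/ext3)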
 
Old 04-26-2004, 02:19 AM   #5
patrickkenlock
LQ Newbie
 
Registered: Apr 2004
Distribution: Debian, Suse
Posts: 5

Original Poster
Rep: Reputation: 0
Thanks ToniT
I managed to source a 36GB disk, rebuilt the array, and put in the two 73GB drives as originally planned (as extra drives). There was a bit of downtime, but it was worth it.
I will post a full description to this list as time permits.
Thanks again.

Last edited by patrickkenlock; 04-26-2004 at 02:23 AM.
 
  

