Linux - Server: This forum is for the discussion of Linux Software used in a server related context.
09-15-2009, 08:31 AM | #1
Member | Registered: Apr 2008 | Posts: 114
question about raid 1
Hi,
I have a server with two 80 GB IDE disks and three RAID 1 partitions:
/boot (md0)
/ (md1)
swap (md2)
The second IDE disk, hdb, has some hardware problems, so I'm going to replace it.
In the meantime, I want to mark hdb as faulty and remove it from the RAID.
Quote:
# mdadm /dev/md1 -f /dev/hdb2
mdadm: set /dev/hdb2 faulty in /dev/md1
Quote:
# mdadm /dev/md1 -r /dev/hdb2
mdadm: hot remove failed for /dev/hdb2: Device or resource busy
I think this is normal because the / partition is mounted on md1, so how can I remove the device from the RAID? Is it necessary to reboot the system? If so, should I edit /etc/mdadm/mdadm.conf first?
Another question:
Quote:
# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Thu Feb 17 18:29:44 2005
Raid Level : raid1
Array Size : 79101696 (75.44 GiB 81.00 GB)
Device Size : 79101696 (75.44 GiB 81.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Sep 15 10:33:58 2009
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 43% complete
UUID : fdb1286b:099c4e58:aaf5a35f:d3824e86
Events : 0.76951239
Number Major Minor RaidDevice State
2 3 2 0 spare rebuilding /dev/hda2
1 3 66 1 active sync /dev/hdb2
Why is there one spare device? I don't have any spare devices, only the two disks.
Thank you
09-15-2009, 03:29 PM | #2
LQ Newbie | Registered: Mar 2005 | Distribution: Slackware | Posts: 7
With most hardware IDE RAID controllers, for a RAID 1 failure you shut down the machine, swap the drive, and restart. When the controller sees the new drive, it will begin to rebuild the array.
Note this is for IDE, which normally doesn't support hot swap. If it were SAS, SATA, or some SCSI backplanes, you could hot-swap and leave the machine running.
Hope this helps.
09-16-2009, 12:06 AM | #3
Member | Registered: Oct 2005 | Location: Burley, WA | Distribution: Sabayon, Debian | Posts: 278
You have to remove the drive from the other arrays before you can remove it.
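In practice that means failing and then hot-removing each hdb partition from its array, roughly like this (device names are taken from earlier in this thread; double-check them against /proc/mdstat before running anything):
Quote:
# mdadm /dev/md0 -f /dev/hdb1
# mdadm /dev/md0 -r /dev/hdb1
# mdadm /dev/md2 -f /dev/hdb3
# mdadm /dev/md2 -r /dev/hdb3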
09-16-2009, 08:11 AM | #4
Member | Registered: Apr 2008 | Posts: 114 | Original Poster
Quote:
Originally Posted by leandean
You have to remove the drive from the other arrays before you can remove it.
Yes, I've removed /dev/hdb1 and /dev/hdb3 from the other arrays, but /dev/hdb2 is still busy:
Quote:
# mdadm /dev/md2 -f /dev/hdb3
mdadm: set /dev/hdb3 faulty in /dev/md2
# mdadm /dev/md2 -r /dev/hdb3
mdadm: hot removed /dev/hdb3
# mdadm /dev/md0 -f /dev/hdb1
mdadm: set /dev/hdb1 faulty in /dev/md0
# mdadm /dev/md0 -r /dev/hdb1
mdadm: hot removed /dev/hdb1
Quote:
# cat /proc/mdstat
Personalities : [linear] [raid1]
md1 : active raid1 hdb2[1] hda2[2]
79101696 blocks [2/1] [_U]
[===============>.....] recovery = 79.8% (63132672/79101696) finish=9.8min speed=26990K/sec
md2 : active raid1 hda3[0]
843584 blocks [2/1] [U_]
md0 : active raid1 hda1[0]
97664 blocks [2/1] [U_]
unused devices: <none>
Quote:
# mdadm /dev/md1 -r /dev/hdb2
mdadm: hot remove failed for /dev/hdb2: Device or resource busy
So is it necessary to reboot the machine? How can I tell md not to rebuild array /dev/md1 after the reboot?
Thank you
09-16-2009, 03:20 PM | #5
Senior Member | Registered: Sep 2009 | Location: Srbobran, Serbia | Distribution: CentOS 5.5 i386 & x86_64 | Posts: 1,118
There is a nice howto for removing a failed PRIMARY HDD from RAID 1: http://www200.pair.com/mecham/raid/raid1-page3.html
However, is it possible that you are trying to remove the wrong HDD? You have [_U] for md1, but [U_] for md0 and md2. Be very careful, check three times.
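A quick way to double-check which member each array actually considers failed or rebuilding, before pulling anything (these are just the status commands already used in this thread, filtered a bit):
Quote:
# cat /proc/mdstat
# mdadm --detail /dev/md0 | grep -E 'State|/dev/hd'
# mdadm --detail /dev/md1 | grep -E 'State|/dev/hd'
# mdadm --detail /dev/md2 | grep -E 'State|/dev/hd'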
Also, from your posts it looks like the system is (automatically?) recovering the failed hdX2 in md1 after you set the --fail flag. Notice this:
Quote:
Update Time : Tue Sep 15 10:33:58 2009
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 43% complete
even in your first post.
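That rebuild is most likely also why the hot remove of /dev/hdb2 is refused: while hda2 is still shown as "spare rebuilding", hdb2 is the only in-sync copy of md1. A rough sketch of one way around it, assuming you simply let the resync finish first (mdadm's --wait blocks until recovery completes, if your mdadm version has it):
Quote:
# mdadm --wait /dev/md1
# mdadm /dev/md1 -f /dev/hdb2
# mdadm /dev/md1 -r /dev/hdb2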
09-17-2009, 02:26 AM | #6
LQ Guru | Registered: Aug 2004 | Location: Sydney | Distribution: Rocky 9.2 | Posts: 18,414
Check /etc/mdadm.conf & /etc/fstab and comment out refs to that raid.
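For the mdadm.conf side, that just means putting a # in front of the ARRAY line for that array, roughly like this (the UUID placeholder is whatever mdadm --detail /dev/md1 reports on your system):
Quote:
#ARRAY /dev/md1 UUID=<uuid reported by mdadm --detail /dev/md1>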