Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
root@revolution1:/root> mdadm -E /dev/sda2
/dev/sda2:
Magic : a92b4efc
Version : 00.90.00
UUID : e3128c16:ee63fcad:a4135057:27c78241
Creation Time : Sat Jan 1 00:49:32 2000
Raid Level : raid0
Used Dev Size : 0
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Update Time : Sat Jan 1 01:00:48 2000
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : 337 - expected b7584bde
Events : 0.3
Chunk Size : 64K
Number Major Minor RaidDevice State
this 1 8 2 1 active sync /dev/sda2
0 0 8 17 0 active sync /dev/sdb1
1 1 8 2 1 active sync /dev/sda2
When I want to hot-remove my disk, I get the error message that the device is busy.
By the way, I have already formatted my md0 and it's not mounted:
root@revolution1:/root> mdadm --manage /dev/md0 --remove /dev/sda2
mdadm: hot remove failed for /dev/sda2: Device or resource busy
root@revolution1:/root> mount
rootfs on / type rootfs (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw)
tmpfs on /dev/shm type tmpfs (rw)
rpc_pipefs on /drbd/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
Haha sorry, I should have read the whole thing more carefully.
You have a RAID-0 array. The command:
mdadm --manage /dev/md0 --fail /dev/sda2
does not work on RAID-0 devices.
When you cat /proc/mdstat, the failed drive would have been marked like this if it worked:
md0 : active raid0 sdb1[0] sda2[1](F)
And not:
md0 : active raid0 sdb1[0] sda2[1]
The reason this doesn't work is that as soon as you fail a drive in a RAID-0 array, you have corrupted the array. This restriction was put in as a safety feature to prevent accidentally failing a member disk, which would result in a corrupted RAID-0 array (trust me, human error is a terrible thing).
That's also why your --manage --remove command doesn't work: it makes no sense to "hot remove" a drive in a RAID-0 array, because once a member drive becomes faulty you have to replace it and build a new array anyway, as all data is lost. You can only hot-swap a disk in RAID-1, 3, 4, 5, 6 and nested arrays.
I suppose you can try forcing it, but I doubt it'll work.
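To make that rule concrete, here is a small sketch of a check you could run before attempting --fail/--remove. The helper function and the set of level strings are my own illustration, not part of mdadm itself:

```shell
# Sketch (not part of mdadm): decide whether an md RAID level can survive
# losing a member, i.e. whether --fail/--remove of a member makes sense.
can_hot_remove() {
    case "$1" in
        raid1|raid4|raid5|raid6|raid10) return 0 ;;  # redundant levels
        raid0|linear)                   return 1 ;;  # no redundancy: removal corrupts
        *)                              return 1 ;;  # unknown level: play safe
    esac
}

# The level can be parsed from /proc/mdstat, e.g. for md0:
#   level=$(awk '$1 == "md0" {print $4}' /proc/mdstat)
if can_hot_remove raid0; then
    echo "hot-remove is safe"
else
    echo "hot-remove would corrupt the array"
fi
```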
[root@vikas_vicky /]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Mon Sep 29 00:41:01 2008
Raid Level : linear
Array Size : 3256896 (3.11 GiB 3.34 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Sep 29 00:41:01 2008
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Rounding : 64K
UUID : 31f20cd4:0122f710:a0624d77:41e2ee92
Events : 0.1
Number Major Minor RaidDevice State
0 3 5 0 active sync /dev/hda5
1 22 5 1 active sync /dev/hdc5
2 3 6 2 active sync /dev/hda6
[root@vikas_vicky /]#
Code:
[root@vikas_vicky /]# mdadm --manage /dev/md0 --fail /dev/hda6
mdadm: set /dev/hda6 faulty in /dev/md0
Code:
[root@vikas_vicky /]# umount /dev/md0
umount: /dev/md0: not mounted
Code:
[root@vikas_vicky /]# mdadm --manage /dev/md0 --remove /dev/hda6
mdadm: hot remove failed for /dev/hda6: Device or resource busy
Code:
[root@vikas_vicky /]# mount
/dev/hda1 on / type ext3 (rw,acl,usrquota,grpquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
[root@vikas_vicky /]#
PLEASE HELP !!
Regards,
VIKAS
I think your problem is one of these:
1) You're trying to remove the failed disk too quickly and mdadm has not released it yet (this has happened to me a lot). In this case, just add it back, mark it as failed again, wait 5-6 seconds, then remove it.
2) Depending on your redundancy level, you can't remove the failed disk without corrupting your data. When you mark the device as failed, post the /proc/mdstat output here so we can take a look at it.
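For reference, failed members show up in /proc/mdstat with an "(F)" marker, which you can pull out with grep. The sample below feeds grep a here-document copy of an mdstat line so it runs anywhere; on a real system you would point grep at /proc/mdstat instead:

```shell
# List members marked failed "(F)" in /proc/mdstat-style output.
# Sample input is a here-document; on a real system use:
#   grep -o '[a-z0-9]*\[[0-9]*\](F)' /proc/mdstat
failed=$(grep -o '[a-z0-9]*\[[0-9]*\](F)' <<'EOF'
md0 : active raid1 sdb1[0] sda2[1](F)
EOF
)
echo "$failed"   # prints: sda2[1](F)
```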
I just noticed you have a "linear" RAID device. You can't remove partitions from a linear array, for the same reason as with RAID-0. A linear array effectively concatenates the member disks into one larger disk, so you can't simply throw away part of that "new" disk. You would lose data, and mdadm wants to make sure you don't lose data, hence you can't remove/fail a member.
Like RAID-0, linear mode doesn't offer any redundancy or failure protection either.
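To see the "merged disk" point concretely: a linear array's size is roughly the sum of its members' sizes (each rounded down to the 64K Rounding value, minus superblock overhead, both ignored here). Assuming the three members in the --detail output above were equal in size, the numbers line up with the Array Size shown; the per-member size below is a back-calculation for illustration, not something taken from the original output:

```shell
# Hypothetical member sizes in KiB (back-calculated: 3256896 / 3).
hda5=1085632
hdc5=1085632
hda6=1085632

# A linear array concatenates its members, so the array size is their sum.
# Removing any one member would punch a hole in that larger "disk".
total=$((hda5 + hdc5 + hda6))
echo "Array Size : $total"   # prints: Array Size : 3256896
```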
Ali
Thank you so much for the useful information alirezan1, thanks a TON !!
I am new to RAIDS you see, But surely this community and its users are awesome.