Can't rebuild a Linux RAID array after replacing disk, "mdadm: Cannot open /dev/sdb1: Device or resource busy"
I have replaced a failed drive in my server with a new one and wish to add it back to the array, but I am getting an error (shown below). What can I do to resolve this? Here is the current state of the arrays:
Code:
[root@la ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0]
511936 blocks super 1.0 [2/1] [U_]
md1 : active raid1 sda2[0]
976117568 blocks super 1.1 [2/1] [U_]
bitmap: 6/8 pages [24KB], 65536KB chunk
unused devices: <none>
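For a fuller picture of the degraded arrays than /proc/mdstat gives, mdadm can be queried directly; the commands below are a general sketch and their output will vary with the setup.
Code:
# Show each array's state, including which slot is failed/removed
mdadm --detail /dev/md0
mdadm --detail /dev/md1

# Inspect the RAID superblock on the surviving member
mdadm --examine /dev/sda1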
Copying the partition table from the old disk to the new one:
Code:
[root@la ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb --force
Checking that no-one is using this disk right now ...
OK
Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sdb1 * 0+ 63- 64- 512000 fd Linux raid autodetect
/dev/sdb2 63+ 121601- 121538- 976248832 fd Linux raid autodetect
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 1026047 1024000 fd Linux raid autodetect
/dev/sdb2 1026048 1953523711 1952497664 fd Linux raid autodetect
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table
Re-reading the partition table ...
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
Adding drive to array:
Code:
[root@la ~]# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: Cannot open /dev/sdb1: Device or resource busy
What can I do to resolve this issue? Why is Linux RAID so useless when it's supposed to protect data?
Just a stab in the dark here, but does /dev/sdb1 exist after the sfdisk command? I've had occasional rare instances where I've had to reboot for the kernel to "see" the new partition table on a disk.
I agree that mdadm is a bit balky; when it works, it's great, but it's left me tearing my hair out before. This is why I tend to prefer investing in a good hardware RAID controller...
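If a reboot is inconvenient, the kernel can usually be asked to re-read the partition table directly. Roughly (assuming nothing else is using the disk):
Code:
# Check whether the kernel currently sees sdb1 at all
grep sdb /proc/partitions

# Ask the kernel to re-read sdb's partition table (either tool works)
partprobe /dev/sdb
blockdev --rereadpt /dev/sdb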
I've tried rebooting after partitioning but no luck. Thanks for the suggestion though.
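Since /dev/sdb1 clearly exists, "Device or resource busy" often means the md layer has already grabbed the partition -- for instance, a stale RAID superblock on a reused disk can get auto-assembled into a stray array at boot. Something along these lines should show and clear that; the md127 name is just the usual auto-assembly name, yours may differ:
Code:
# See what, if anything, is holding the partition (md devices show up here)
ls /sys/block/sdb/sdb1/holders/

# Look for a stray auto-assembled array and for an old superblock
cat /proc/mdstat
mdadm --examine /dev/sdb1

# If a stray array (e.g. md127) owns sdb1, stop it and wipe the old
# superblock -- this only erases RAID metadata on sdb1, not md0/md1
mdadm --stop /dev/md127
mdadm --zero-superblock /dev/sdb1

# Then retry the add
mdadm --manage /dev/md0 --add /dev/sdb1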
I've never terribly liked BIOS fakeraid set-ups, but it really depends on the BIOS. Some have worked out OK for me, some haven't. At work we generally prefer Areca RAID controllers - they're a bit pricey but well worth it IMO. The command line management tools are quite good.
Another stab in the dark -- it looks like your two disks use different sized sector units. I'm not an mdadm expert, but it would seem to me that this could cause problems in any RAID setup, software or hardware, since the controller (or software) has to stripe data across the disks in chunks. Do you have another disk of the identical make/model as sda (and presumably the failed disk)? I've had the best luck putting in an absolutely identical disk. Even if you can get a non-identical disk working, you may well wind up losing performance in the array.
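For what it's worth, comparing the two disks' sector sizes is quick; something like this (the values come from the kernel, so output will differ per system):
Code:
# The SSZ column is the logical sector size; it should match across a mirror
blockdev --report /dev/sda /dev/sdb

# Logical and physical block sizes as the kernel reports them
cat /sys/block/sd{a,b}/queue/logical_block_size
cat /sys/block/sd{a,b}/queue/physical_block_size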