Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I'm currently trying to grow my software RAID 1 array from 2TB to 3TB and I'm running into issue after issue configuring it properly. The main one is that fdisk, which seems to be the tool of choice for this kind of thing, doesn't actually like 3TB/"advanced format" drives, giving me an error similar to the one described in this thread: http://www.linuxquestions.org/questi...parted-908055/
I tried to do an end run around the issue by using GParted to create the partition (single, primary) properly and then using fdisk to set the partition type (fd). While that seems to work just fine, mdadm now complains that it can't re-add the drive to the array and start syncing, because the drive would otherwise be regarded as a spare rather than a regular member. The exact error message is as follows:
mdadm: /dev/sdb1 reports being an active member for /dev/md1, but a --re-add fails.
mdadm: not performing --add as that would convert /dev/sdb1 in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sdb1" first.
How can I clear this error and finally get the ball rolling on my expanded RAID 1 array?
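For reference, the state the drive is in can be inspected before forcing anything. A sketch of the relevant commands (device names are the ones from this thread; the `run` wrapper just prints each command instead of executing it, so nothing on disk is touched):

```shell
#!/bin/sh
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

PART=/dev/sdb1   # the partition that refuses to --re-add (from this thread)
MD=/dev/md1      # the array it used to belong to

# Show the md superblock mdadm still sees on the partition; a stale
# array UUID/name here is what produces the "active member" message.
run mdadm --examine "$PART"

# Compare against what the running array actually expects.
run mdadm --detail "$MD"
```

Running the real commands (without the wrapper) is read-only, so it is safe to do before deciding whether to zero anything.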
fdisk can't handle GPT. What might have happened is that the "protective MBR" wrapper, which shields the real GPT partitions from tools that only understand MBR-style partition tables, got a new partition type assigned. Wasn't it possible to set the correct partition type in GParted already?
Not that I am aware of. As far as I can tell, GParted doesn't/can't handle RAID partitions, other than displaying them as unknown format. The tool of choice I've seen for this kind of thing has always been fdisk, across all of the forums I've read so far.
First it’s necessary to create an uninitialized partition and apply the change. Then it’s possible to select “Manage flags...” from the “Partition” menu and to set the RAID flag. This will set the partition type to FD00 which reflects Linux RAID.
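If you prefer the command line, the same result can be had non-interactively with sgdisk (from the gdisk package), assuming it is installed. A dry-run sketch only (the `run` wrapper prints rather than executes, and /dev/sdb is just the example disk from this thread):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print, don't execute

DISK=/dev/sdb   # example disk; double-check the device before running for real

# -o        : write a fresh, empty GPT (clears the old table)
# -n 1:0:0  : create partition 1 spanning the whole disk (default start/end)
# -t 1:fd00 : set partition 1's type to fd00, i.e. Linux RAID
run sgdisk -o "$DISK"
run sgdisk -n 1:0:0 -t 1:fd00 "$DISK"
```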
Thanks for the information, though I think I still might be missing something, since the second error still appears after I deleted and then recreated the partition using GParted on /dev/sdb1. I think it might have something to do with the fact that the drive still has a label on it named <system name>:1, despite it not being part of a RAID array yet and having had the partition table wiped. For some reason GParted can't remove that label.
It's just ignored, which is odd, as I specifically created a new GPT partition table and partition each time I did it.
EDIT: Finally figured it out. It seems that reformatting and making a new partition table doesn't actually remove the md superblock from the drive. Zeroing the superblock and then doing a partition wipe and rewrite seems to have fixed the issue, though I'm unsure whether the extra partition-table wipe and rewrite was actually necessary. Better safe than sorry.
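That matches how md metadata works: depending on the metadata version, the superblock can sit near the end of the partition, so rewriting the partition table alone leaves it intact. A dry-run sketch of the cleanup (the `run` wrapper prints instead of executing; /dev/sdb1 is the example from this thread, and wipefs is the util-linux signature-wiping tool):

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print, don't execute

PART=/dev/sdb1   # example partition from this thread

# Remove the md superblock so mdadm no longer treats this as an old member.
run mdadm --zero-superblock "$PART"

# Belt and braces: wipe all known signatures (filesystem, RAID, etc.)
# from the partition. Only needed if mdadm still complains afterwards.
run wipefs -a "$PART"
```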
EDIT 2: Just finished tweaking the second drive in the raid array so I figured that now would be a good time for me to lay out the final working steps for future reference:
Note: This is coming from an array that's already synced and working on the larger drives. The only problem is that the current partitions are too small for mdadm --grow to work properly.
1) Mark one drive in the array as faulty: mdadm --manage /dev/md1 -f /dev/sdd1
2) Remove said drive from the array: mdadm --manage /dev/md1 -r /dev/sdd1
3) Zero Superblock on that drive: mdadm --zero-superblock /dev/sdd1
4) Create a new GPT partition table and partition using gdisk:
gdisk /dev/sdd
option o (creates a new GPT partition table)
option n (creates a new partition)
2x enter (accepts the default partition parameters, since I just want a single partition on the drive)
fd00 (sets partition type as Linux Raid)
w (writes changes to the drive and exits)
5) Re-add the modified drive to the array: mdadm --manage /dev/md1 --add /dev/sdd1
6) Check on the progress of the resync (usually takes a few hours with 3TB drives): cat /proc/mdstat
When it's finally resynced, you're done. Enjoy your new and improved array.
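For anyone who wants to script this later, the six steps above can be sketched as follows. This is a dry run: the `run` wrapper only prints the commands, and /dev/md1 and /dev/sdd are the devices from this thread, so substitute your own and remove the wrapper once you've verified them. The sgdisk lines are a non-interactive stand-in for the interactive gdisk o/n/fd00/w sequence:

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print each command instead of running it

MD=/dev/md1      # the array
DISK=/dev/sdd    # the drive being reworked
PART=${DISK}1    # its single RAID partition

run mdadm --manage "$MD" --fail "$PART"     # 1) mark the drive faulty
run mdadm --manage "$MD" --remove "$PART"   # 2) remove it from the array
run mdadm --zero-superblock "$PART"         # 3) clear the old superblock

# 4) fresh GPT with one whole-disk Linux RAID (fd00) partition
run sgdisk -o "$DISK"
run sgdisk -n 1:0:0 -t 1:fd00 "$DISK"

run mdadm --manage "$MD" --add "$PART"      # 5) add it back; resync starts
run cat /proc/mdstat                        # 6) watch the resync progress
```

Repeat for the second drive once the first resync completes, then `mdadm --grow` can use the larger partitions.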
Last edited by valunthar; 08-11-2012 at 09:09 AM.
Reason: writing out the steps I figured out for future reference