Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
A minimal RAID-1 is typically two drives + spares. A minimal RAID-10 is typically four drives + spares. What do you have to work with, and what drives will you be using to build the new array?
I have 2 SATA-II disks that are used in the RAID-1 and 2 unused disks (same model). The chassis doesn't have room for more than 4 disks, so that's the limit.
OK, I'll assume your existing RAID-1 is /dev/md0, composed of /dev/sda and /dev/sdb. The unused drives I'll call /dev/sdc and /dev/sdd. The process is:
- Create a new RAID-1, /dev/md1, with the two unused disks.
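As a hedged sketch of that first step (device names are assumptions carried over from this thread and may not match your system):

```shell
# Create the second RAID-1 array (/dev/md1) from the two spare disks.
# /dev/sdc and /dev/sdd are assumed names, taken from the post above.
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Watch the initial resync progress.
cat /proc/mdstat
```

The new array is usable while the initial resync runs, though performance suffers until it finishes.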
Hi there! That's a very concise recipe you've supplied, macemoneta. Thanks!
However, I have a situation in which I don't think I can use your approach, or perhaps it may indeed be able to be used, but because I don't properly understand what's going on, I'm left wondering how I'd be able to apply it to my case.
What if the existing RAID-1 device, composed of /dev/sda2 and /dev/sdb2, is where /root, /usr, /var, /proc, /swap, /lib... and all the system files are residing? How can one create the RAID-10 device (which would be RAID-0 of the RAID-1 arrays) without killing your system in the process?
I understand that you must first split the original RAID-1, /dev/md0, by failing and removing one of the drives, but wouldn't you still bring your system down in the process of striping /dev/md0 and /dev/md1? The striping destroys your data, and even though you have a healthy copy of all the files and data, you wouldn't then be able to boot into it to resume the operation.
Again, I probably have not understood what you're suggesting, so I'd really appreciate you shedding some more light on the topic.
Code:
sudo mkfs.ext3 /dev/md2
mke2fs 1.41.3 (12-Oct-2008)
mkfs.ext3: Device size reported to be zero. Invalid partition specified, or partition table wasn't reread after running fdisk, due to a modified partition being busy and in use. You may need to reboot to re-read your partition table.
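A zero reported size usually means the kernel never actually assembled the array, or assembled it with no usable members. Before resorting to a reboot, it's worth checking what the kernel thinks of /dev/md2; a diagnostic sketch:

```shell
cat /proc/mdstat                      # is md2 listed, and is it active or inactive?
sudo mdadm --detail /dev/md2          # state, size, and member devices of the array
sudo blockdev --getsize64 /dev/md2    # the size in bytes that mkfs will see
```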
I thought I had the 2nd (new) mirror set up, as it started and "sync'd", and it has just been sitting spinning, not mounted, for a couple of days until I got back to this tonight.
I have /dev/sdc 'failed' and 'removed' to build the stripe.
sda is the Ubuntu v8.10(Dotsch/UX) OS drive and is not involved.
md0 is normally mounted and has been running w/o problems for weeks. md0 is normally comprised of /dev/sdb and /dev/sdc.
I have md0 and md1 in /etc/mdadm/mdadm.conf:
Code:
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR skip@
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=252431f7:7393664c:180fba5d:ff5e4642
devices=/dev/sdb,/dev/sdc
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=00.90 UUID=54693081:6615cf45:180fba5d:ff5e4642
devices=/dev/sdd,/dev/sde
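A side note on this file: the `metadata=00.90` spelling in the ARRAY lines is what produces the "metadata format 00.90 unknown, ignored" warnings seen later in this thread; mdadm expects `0.90`. Rather than hand-editing, the ARRAY lines can be regenerated from the live arrays:

```shell
# Keep a backup, then append freshly generated ARRAY lines;
# afterwards remove the old hand-typed ARRAY entries from the file.
sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```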
The fdisk -l output differences between the two mirrors bothers me greatly but I don't know enough about what it's telling me to know what to do.
Code:
Disk /dev/sdb: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x206e1c51
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x206e1c51
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00020ff9
Device Boot Start End Blocks Id System
/dev/sdd1 1 4863 39062016 fd Linux raid autodetect
Disk /dev/sde: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00020ff9
Device Boot Start End Blocks Id System
/dev/sde1 1 4863 39062016 fd Linux raid autodetect
Why do the md0 drives say "doesn't contain a valid partition table" but the md1 drives list the single partition (sd?1) and say it's raid auto? I suspect I formatted the individual drives differently when I built md0 vs formatting md1 drives. Remember the 1st time around I'd built md1 as devices sdd1 and sde1 instead of sdd and sde.
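One way to confirm where the RAID superblocks actually live (whole disk vs. partition) is to examine each candidate device directly; a sketch using the device names from this thread:

```shell
# Look for an md superblock on the raw disk and on its first partition;
# whichever one reports metadata is the device the array was built from.
sudo mdadm --examine /dev/sdd
sudo mdadm --examine /dev/sdd1
```

An array built from raw disks (like md0 here) leaves the disks without a partition table, which is exactly why fdisk complains about sdb and sdc.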
Note that mdadm's detail scan doesn't see a /dev/md2, but I have done these:
sudo mdadm -vv --examine --scan /dev/md2
mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
mdadm: No md superblock detected on /dev/md2.
If nothing else, can someone tell me how to make both md2 and md1 go completely away so I can start over with formatting sdd and sde?
PS: Actually I'm tempted to try and tar off md0 and really start from ground 0 with all 4 drives just to make sure I get it down and get them all exactly the same... but that may be overkill.
In case somebody someday reads this thread and happens into some of my same questions, here's what I found out so far (and you'll notice any commands shown have been Ubuntu-ized):
Quote:
Originally Posted by skipdashu
sda is the Ubuntu v8.10(Dotsch/UX) OS drive and is not involved.
md0 is normally mounted and has been running w/o problems for weeks. md0 is normally comprised of /dev/sdb and /dev/sdc.
The fdisk -l output differences between the two mirrors bothers me greatly but I don't know enough about what it's telling me to know what to do.
Code:
[fdisk -l output quoted from the previous post: sdb and sdc report "doesn't contain a valid partition table"; sdd and sde each show a single partition of type fd, Linux raid autodetect]
Why do the md0 drives say "doesn't contain a valid partition table" but the md1 drives list the single partition (sd?1) and say it's raid auto? I suspect I formatted the individual drives differently when I built md0 vs formatting md1 drives. Remember the 1st time around I'd built md1 as devices sdd1 and sde1 instead of sdd and sde.
The difference was in how the drives were formatted. The md0 devices (sdb and sdc) were NOT individually formatted; md0 was created from two drives that had all partitions deleted. The mdadm create was done and THEN the array was formatted with mkfs.ext3 /dev/md0, similar to what is described in the above thread.
The md1 devices were each formatted prior to building the array with fdisk as detailed here:
Quote:
$ fdisk /dev/hdX
Type 'n' to create a new partition. Write down the cylinders you used, because the partitions on the other disks have to fill the same exact amount of space (not required for a full-drive mirror). Set its type to 'fd' by typing 't'. Finally, save the changes by typing 'w'.
...remember once again that you have to follow this procedure twice, once for each hard drive.
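The interactive fdisk steps above can also be scripted, which guarantees both disks end up identical; a sketch using sfdisk (device names are assumptions from this thread):

```shell
# Create one whole-disk partition of type 'fd' (Linux raid autodetect)
# on each spare disk. ',,fd' means: default start, use all space, type fd.
for d in /dev/sdd /dev/sde; do
    echo ',,fd' | sudo sfdisk "$d"
done

sudo fdisk -l /dev/sdd /dev/sde    # verify both tables came out identical
```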
It would seem to me that formatting the partitions first would be the way to go if you were mirroring partitions instead of whole drives. It looks like I'll be doing this on another machine shortly, in order to create one mirrored partition to be mounted as a generic 'data' drive and another partition to be mounted as BackupPC's pool/archive storage, both on a mirror of two 500GB drives.
Quote:
Originally Posted by skipdashu
If nothing else can someone tell me how to make both md2 and md1 go completely away so I can start over w/ formatting sdd and sde?
While I haven't tried to build md2 yet (waiting for something to finish), I did rebuild md1. First I made sure md1 wasn't mounted. Then you can simply stop the array with:
Code:
sudo mdadm -S /dev/md1
It did complain about md1 being part of another array (md2), but it lets you go ahead anyway.
Since I had the GUI up, I went ahead and used GParted (System -> Partition Editor in Ubuntu) and deleted sdd1 and sde1, leaving essentially bare-metal drives.
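Worth noting: deleting partitions only clears the partition table. An old 0.90 md superblock lives near the end of the device and can survive that, so a later scan may keep resurrecting a "ghost" array. Wiping the superblock from each former member avoids this (a sketch; destructive to the array metadata, device names assumed from this thread):

```shell
sudo mdadm --stop /dev/md1                        # the array must be stopped first
sudo mdadm --zero-superblock /dev/sdd1 /dev/sde1  # wipe md metadata from ex-members
```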
OK, now with sdd and sde void of any partitions, I redid the mdadm create as described above (I didn't redo the mknod). The array md1 started and began syncing. While it was still syncing, I decided to see if the suggestion that you can format during that process would work, so I went ahead and ran mkfs.ext3 on /dev/md1.
And just to prove that it was the formatting method that caused the previous differences in the fdisk -l output, I'll redo that (cutting out the sda stuff, since it's the non-raid boot/OS drive):
Code:
sudo fdisk -l
Disk /dev/sdb: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x206e1c51
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x206e1c51
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/sde doesn't contain a valid partition table
Ahhh, the symmetry is a thing of beauty to us AR types ;-)
I'll add anything of possible value here once I get md2 built.
My situation is fairly similar, except I already have md0 & md1 across the first two existing drives. They consist of sdb1 (swap) and sdb2 (/), and the same on sdc1 and sdc2. This is currently RAID 1.
To follow the guide published earlier, can anyone tell me how to apply this procedure to drives 3 & 4 to make the new RAID 10 configuration?