When installing a second RAID array, drives shift places
I have an ASUS P6T motherboard with Marvell RAID built in, running Fedora Core 5.
The current setup is two 1TB drives in RAID 1 (mirrored). This has worked fine for years, but I am now nearly out of space, so I bought two new 1TB drives and set up a second mirrored array using the Marvell utility.
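For reference, on this kind of BIOS "fakeraid" setup the Marvell sets are usually assembled by dmraid, which is what creates the /dev/dm-N devices seen below. A quick sketch for inspecting the sets, assuming dmraid is installed (as it was on Fedora Core of this era):
Code:
# List the raw disks dmraid recognizes as RAID set members
dmraid -r

# Show the discovered RAID sets and whether they are active
dmraid -s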
Before adding the new disks:
Code:
# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1                1          13      104391   83  Linux
/dev/sda2               14      121600   976647577+  8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1                1          13      104391   83  Linux
/dev/sdb2               14      121600   976647577+  8e  Linux LVM

Disk /dev/dm-0: 1000.2 GB, 1000202178560 bytes
255 heads, 63 sectors/track, 121600 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

     Device Boot      Start         End      Blocks   Id  System
/dev/dm-0p1              1          13      104391   83  Linux
/dev/dm-0p2             14      121600   976647577+  8e  Linux LVM

Disk /dev/dm-1: 106 MB, 106896384 bytes
255 heads, 63 sectors/track, 12 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 1000.0 GB, 1000087119360 bytes
255 heads, 63 sectors/track, 121587 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 997.9 GB, 997942362112 bytes
255 heads, 63 sectors/track, 121326 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-3 doesn't contain a valid partition table

Disk /dev/dm-4: 2080 MB, 2080374784 bytes
255 heads, 63 sectors/track, 252 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-4 doesn't contain a valid partition table

# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     944031184 828668812  66634720  93% /
/dev/dm-1               101086     15457     80410  17% /boot
tmpfs                  2012740         0   2012740   0% /dev/shm
After adding the disks:
Code:
# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1                1      121601   976760001   83  Linux

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1                1      121601   976760001   83  Linux

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1                1          13      104391   83  Linux
/dev/sdc2               14      121600   976647577+  8e  Linux LVM

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1                1          13      104391   83  Linux
/dev/sdd2               14      121600   976647577+  8e  Linux LVM

Disk /dev/dm-0: 1000.2 GB, 1000202178560 bytes
255 heads, 63 sectors/track, 121600 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

     Device Boot      Start         End      Blocks   Id  System
/dev/dm-0p1              1      121601   976760001   83  Linux

Disk /dev/dm-1: 997.9 GB, 997942362112 bytes
255 heads, 63 sectors/track, 121326 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 2080 MB, 2080374784 bytes
255 heads, 63 sectors/track, 252 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     944031184 828668252  66635280  93% /
/dev/sdc1               101086     15457     80410  17% /boot
tmpfs                  2012740         0   2012740   0% /dev/shm
As you can see, the original disks (sda and sdb) have moved to sdc and sdd, and the "new" disks are now sda and sdb. /boot has also moved from /dev/dm-1 to /dev/sdc1. I imagine this has to do with which motherboard ports the drives are plugged into.
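For what it's worth, /dev/sd* names are handed out in whatever order the kernel probes the disks, so adding drives (or moving cables) can reshuffle them. To confirm which physical disk landed on which name, udev provides symlinks keyed to the hardware rather than the probe order; a quick sketch, assuming a udev-based system such as FC5:
Code:
# Symlinks named after each drive's model and serial number,
# stable regardless of the order the disks are detected in
ls -l /dev/disk/by-id/

# Filesystem UUIDs and labels for every partition
blkid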
The best solution to this problem is to start using labels or UUIDs for all your partitions, and then mount them by referencing those labels, as sketched below.
Alternatively, connect your drives to the motherboard in a different order to restore the original naming.
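For example, with /boot on ext3 you can give the filesystem a label with e2label (or read its UUID with blkid) and reference that in /etc/fstab. A sketch using the device names from the output above; the UUID is a placeholder to be replaced with whatever blkid actually reports:
Code:
# Label the /boot filesystem, or read its existing UUID
e2label /dev/sdc1 /boot
blkid /dev/sdc1

# /etc/fstab entry by label (or, commented out, by UUID):
LABEL=/boot              /boot  ext3  defaults  1 2
# UUID=<uuid-from-blkid>  /boot  ext3  defaults  1 2
The root filesystem is already mounted via /dev/mapper/VolGroup00-LogVol00, which device-mapper keeps stable, so /boot is the only mount here that depends on a raw /dev/sd* name.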