Inconsistent device names (sda, sdb) on RAIDed system
I have a RAID1 system, on which the devices are laid out as:
hdd1: /dev/sda
hdd2: /dev/sdb
Everything, including the root file system and the data files, is on this array.
I use mdadm. No problems here.
However, I have an external USB disk attached for daily backups. When I plug this USB disk into the running machine it becomes /dev/sdc. No problem here either.
However, when I boot the system again, the situation becomes:
usb: /dev/sda
hdd1: /dev/sdb
hdd2: /dev/sdc
The result is that the system tries to assemble the array from /dev/sda and /dev/sdb; sda (now the USB disk) is considered defective or new, and mdadm starts to rebuild the array.
This is of course an unwanted situation, because when it happens it is usually due to an unexpected reboot.
I know I can refer to devices by UUID in fstab, or write udev rules, but whatever I do, those names only become usable after the RAID is assembled.
How should I solve this? It is more or less an academic question, as I will decommission this server soon, but sooner or later I will have to set up something similar.
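For the backup mount itself, a udev rule can at least give the USB disk a stable name regardless of which sdX letter it receives. This is only a sketch: the serial number below is a placeholder, and you would look up the real value with udevadm info (note this solves the naming of the backup disk, not the RAID assembly order):
Code:
# /etc/udev/rules.d/99-backup-disk.rules
# Match the USB backup disk by its serial number (placeholder value shown)
# and create a stable symlink /dev/backupdisk pointing at it.
SUBSYSTEM=="block", ENV{ID_SERIAL}=="My_USB_Disk_123456", SYMLINK+="backupdisk"
You can then mount /dev/backupdisk (or a partition symlink on it) from your backup script instead of guessing the sdX name.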
You are better off using partitions with mdadm, so that they are labeled as RAID devices. The partitions can be almost as large as the device. It also makes it easier to replace a failed disk.
Make sure you do not have device names in /etc/mdadm.conf. It should build your arrays by the UUID in the RAID superblock.
Code:
MAILADDR root
AUTO -all
DEVICE partitions
ARRAY /dev/md1 UUID=3d29f5be:d6e4d69e:ccbae606:acfd4ebe
I use the "AUTO -all" to prevent mdadm from trying to build any arrays except the ones listed.
Original Poster:
I do use partitions, and I have mdadm.conf the way you have it.
The problem seems to be that mdadm cannot distinguish between sda being an external hard disk and sda being a new disk which was inserted to replace a failed disk.
Now that you have stated this about partitions and mdadm.conf, I think it is even stranger that mdadm tries to build this array.
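One thing that might help here, sketched with placeholder names: instead of "DEVICE partitions", restrict the DEVICE line in mdadm.conf to the stable /dev/disk/by-id paths of the two real RAID disks. Then mdadm never even scans the USB disk, whatever sdX letter it gets at boot. The by-id names below are placeholders; check yours with "ls -l /dev/disk/by-id/" (the UUID is the one from the earlier post):
Code:
MAILADDR root
AUTO -all
# Only scan partitions on the two known RAID members.
# The ata-... names are placeholders for your actual disks.
DEVICE /dev/disk/by-id/ata-DISK1_SERIAL-part* /dev/disk/by-id/ata-DISK2_SERIAL-part*
ARRAY /dev/md1 UUID=3d29f5be:d6e4d69e:ccbae606:acfd4ebe
Since the root filesystem is on the array, remember that the initramfs carries its own copy of mdadm.conf, so regenerate it after editing (on Debian: update-initramfs -u).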