UDEV: Persistent disk-name
After I reinstalled the system, my disks are no longer in the same order, and that is a big problem for my RAID arrays. I've been reading up on udev, but I can't find a way to make the disk the kernel calls "sdd" show up as "sdb", for instance.
Can someone please help me?
Running Gentoo and udev v197.
Writing udev rules to force a particular drive ordering in the /dev/sd* namespace is moderately difficult and can run afoul of race conditions where udev is adjusting names while the kernel is still discovering drives and creating names in that same namespace. Can you build your RAID array using the names from /dev/disk/by-id/ ? Those names are formed from the drive model and serial number, and will not change.
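For instance, an assemble using the by-id names might look like this (the ata-... identifiers below are made-up examples, and the command is untested on your setup):

```
# List the stable, hardware-derived names (drive model + serial):
ls -l /dev/disk/by-id/

# Assemble the array using those names instead of /dev/sd*:
mdadm --assemble /dev/md1 \
    /dev/disk/by-id/ata-WDC_WD30EZRX_WD-WCAZA1111111-part1 \
    /dev/disk/by-id/ata-WDC_WD30EZRX_WD-WCAZA2222222-part1 \
    /dev/disk/by-id/ata-WDC_WD30EZRX_WD-WCAZA3333333-part1 \
    /dev/disk/by-id/ata-WDC_WD30EZRX_WD-WCAZA4444444-part1
```

Because those symlinks are keyed to the drive itself, it doesn't matter which /dev/sd* letter the kernel happened to hand out on that boot.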
cables back and forth. I thought it could be a bit easier with a udev rule, since udev populates /dev/sd* via the persistent-block-device rules. If I look with ls -lR in /dev/disk I see the disks sorted by label and UUID, so I thought I could change the naming in the udev rules, since rules for this already exist?
The problem I have is that I don't really understand them. The documentation is poor, and I've read posts from people with the same problem, like someone leaving a USB stick plugged in at boot and having it enumerated first.
There must be some way to populate the links through udev, or not?
The problem is that the kernel assigns a /dev/sd* name before udev starts processing the device. When you try to use a udev rule to swap two names, udev must first rename the first device to a temporary name, then rename the second device to its intended name, then rename the temporary. Things get dicey if this is going on while the kernel is still discovering new devices and assigning names in that same namespace. If you want to avoid using the admittedly unwieldy /dev/disk/by-id/* names, you can write a udev rule that either creates a symlink or changes the name to something other than the "sd[a-z]" names the kernel is using, perhaps "sdx[a-z]". For example (untested):
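A sketch of such a rule (the file name, serial number, and "sdx" prefix are all made-up examples; untested):

```
# /etc/udev/rules.d/59-persistent-disk.rules (hypothetical file name)
# Match one specific drive by its serial number and add a stable
# symlink, leaving the kernel's own sd* name alone:
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="WD-WCAZA1234567", SYMLINK+="sdxa"
```

With a rule per drive you get a fixed set of /dev/sdx* links you can feed to mdadm, without ever fighting the kernel over the sd[a-z] namespace.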
When writing the udev rules, you can recognize the drive by any of its unique attributes, such as "ID_SERIAL_SHORT". To get a listing of all of a device's properties, run
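Something like this (the device node is just an example; substitute your own):

```
# Show all udev properties recorded for a device, including
# ID_SERIAL_SHORT, ID_MODEL, and so on:
udevadm info --query=property --name=/dev/sdd
```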
Ok, I didn't realize it was so complicated. I managed to assemble my two RAID arrays with mdadm, but the disks are not in order.
md1 : active raid5 sde1 sdb1 sdd1 sdc1
8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
md0 : active raid6 sdh1 sdj1 sdg1 sdf1 sdi1 sdk1
7813527552 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
It seems like they are working fine. I use dm-crypt on them, but I don't know what will happen if I forget my USB stick in the
machine, since the USB drive seems to take precedence over SATA. If I plug it in after boot all is good, but that is
why I wanted to map a disk by its UUID to a fixed /dev/sdX so it would always be the same. I guess I will have to set it up through
UUIDs the next time I create a RAID. The order in which mdadm lists the disks worries me a bit, especially if the kernel decides to
enumerate them differently because of some hardware like the USB port or my eSATA card...
But does this mean that the names the kernel finds and enumerates are fixed, and I can only make symlinks to other names?
I wonder how it makes that decision when it traverses the disks?
Thank you for your reply!
I thought that mdadm assembled RAIDs based upon the partition information on the disks? I know I've moved my RAID to another machine entirely and it just works without any modification. Surely mdadm won't try to add any partitions to the RAID that aren't marked as part of a RAID volume?
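As far as I know, mdadm identifies members by the RAID superblock written on each device, not by the /dev/sd* name, so you can check what slot each member claims with something like (device name is an example):

```
# Print the slot ("Device Role") stored in a member's superblock:
mdadm --examine /dev/sdb1 | grep 'Device Role'
```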
No, it's just that the order of the disks is 2,4,1,0,3,5. I hope it's still ok, and it seems so, but I would rather have them in order 0-5.
Since the only thing I did was replace the system disk, I didn't think it would rename my disks when I plugged them in after the
install. I even tried the old kernel, but it was the same. I'm not a kernel expert, so I don't know how it decides what is what, but
it's kind of strange that it puts the USB stick in the middle of it all if I forget to take it out. That is why I feel a
bit uneasy with the current layout.
Maybe it's nitpicking, but I want to know why it suddenly decides that a disk should be renamed to something else after a system
reinstall.
I can well understand you being a bit uneasy, especially when it's not clear why the system did it. I wouldn't worry too much, but I can see why you'd like to know.
It's just a matter of the order in which the drivers, all running in parallel, happen to detect the drives. The result is not necessarily repeatable, even with the same kernel booting on the same hardware.
I thought it went through the PCI bus in order and then fetched the long PCI names (something like a Sun/SPARC), so it would be
the same every time if the disk was on the same "number", but if it just picks whichever disk is detected first, this might very well
happen. That doesn't mean I like it, though. I would prefer things to be consistent and have some repeatability :)
I was very happy and surprised at how well the assemble command in mdadm worked, though!
I remembered now that I disconnected the "old" disk that I had placed on the last port on the motherboard (sdj), so the other
array should have shifted one step down. Now I'm not sure if I can plug a new disk in there in the future? Can I run mdadm assemble
Sorry for all these questions; well, it's @LinuxQuestions, so maybe this IS the right place to ask them :)