udev changes /dev/sd* on almost every reboot; can they be made static?
I understand udev names devices as they become known during boot, so sda might be sdb, c, or d on the next reboot. I've had fstab set up to use UUIDs for a long time, so I always have a bootable system. I replaced the power supply 2 months ago and since then the /dev/sdX names are no longer persistent, though it could also be due to systemd updates. What I'm after is that my SSD root drive is always /dev/sda, my HDD /home drive is always /dev/sdb, and the two other internal HDDs follow suit.
Based on this and this I've come up with this udev rule in /etc/udev/rules.d/10-local.rules Code:
ATTR{ID_MODEL}=="SanDisk_SDSSDHII120G", ATTR{ID_SERIAL_SHORT}=="153984400672", KERNEL=="sd?", NAME="sda" Not sure whether /usr/lib/udev/rules.d/60-persistent-storage.rules will override this. Reviewing the Arch Wiki, it seems I'm on the right path, but I'm hoping I can get some clarification and guidance here. Can anyone more knowledgeable in writing udev rules for static /dev/sdX names help? Thank you. EDIT: edited title |
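For what it's worth, two gotchas with that rule as written (I haven't tested this on your box): ID_MODEL and ID_SERIAL_SHORT are udev-assigned properties, matched with ENV{}, not sysfs attributes (ATTR{}), and on current systemd/udev the NAME= key is honored only for network interfaces; for block devices it is ignored with a warning. A sketch that should at least give you a stable handle (the "myssd" name is made up):

```
# /etc/udev/rules.d/10-local.rules -- sketch, untested
# Match udev properties with ENV{}, not ATTR{}, and add a
# symlink instead of trying to rename the kernel node.
KERNEL=="sd?", ENV{ID_MODEL}=="SanDisk_SDSSDHII120G", ENV{ID_SERIAL_SHORT}=="153984400672", SYMLINK+="myssd"
```

That creates /dev/myssd pointing at whichever sdX the SSD enumerated as, without fighting 60-persistent-storage.rules.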
I have not run into this problem, but I HAVE implemented a solution for other reasons. Here are TWO ideas that may help.
#1: If you use software RAID, devices are recognized by the RAID identifier. Clearly, this only helps if you have multiple devices and can organize them into RAID devices. #2: If you LABEL the volumes, and then mount them by LABEL, the UUID can do anything it wants and it will have no effect. I suggest you look into applying a LABEL to your devices. |
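To sketch idea #2 concretely (the label name and mount point here are made up, and this assumes ext4, where e2label sets the label):

```
# One time, as root: label the filesystem (ext2/3/4)
#   e2label /dev/sdd1 backup4tb
# Then refer to the label in fstab (or with mount -L); the label
# follows the filesystem no matter which sdX it comes up as:
LABEL=backup4tb  /mnt/backup  ext4  noauto  0  2
```

After that, mount /mnt/backup (or mount -L backup4tb /mnt/backup) grabs the right disk regardless of enumeration order.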
@ wpeckham, Thank you; I'm not sure I understand your suggestion. Nothing RAID, just a very basic setup:
Code:
lsblk Code:
blkid |
I'm not sure I understand this: do the UUIDs of the disks change?
The "/dev/sd*" designation is handed out to the first drive to claim the letter, and is irrelevant for anything but hotplug devices, no? |
@ 273, no, the UUIDs don't change; I just blanked them out ("*****") in the post. "/dev/sd*" changes, and though I understand why, as you mention (which is why I switched fstab to UUIDs a long time ago instead of sd*), the sd* names never changed on every reboot until I replaced the power supply (about the same time systemd upgraded to 240). I was reading up on how to make internal SATA drives always claim the same sd* on every boot (like they used to), so that the disk physically connected to the mobo at SATA1, which is my SSD and "/" drive, will always claim /dev/sda1; the disk connected at SATA2, which is my "/home" drive, will always claim /dev/sdb1; and the other two, connected at SATA3 and SATA4, respectively claim /dev/sdc.. and /dev/sdd.. with every boot. The Arch Wiki states that a udev rule can be used to do this. I've read
Code:
man udev EDIT: So the Wiki's udev rule for /dev/sd* only applies to hotplug devices? It gives those examples but seems to imply it can be done for internal drives as well. |
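On the hotplug question: the stock 60-persistent-storage.rules already maintains persistent names for every disk, internal or hotplug, under /dev/disk/ (by-id, by-label, by-path, by-uuid), so a custom rule usually isn't needed just to get a stable handle. A quick sketch to see what you already have (prints nothing if the directories don't exist):

```shell
#!/bin/sh
# Show the persistent aliases udev maintains and which sdX
# each one currently resolves to.
for link in /dev/disk/by-id/* /dev/disk/by-path/*; do
    [ -e "$link" ] || continue   # skip unmatched globs / empty dirs
    printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
done
```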
My advice would be to just use UUIDs. This isn't new; I saw it 10 years ago, and so on. It's simply a result of how the motherboard and power supply are configured.
|
Code:
man 7 udev |
I really don't understand why you are trying to do this.
|
Mostly it has to do with sdd1 and sdd2. /dev/sdd I only mount manually on occasion and then rsync some files there, so I have a couple of rsync scripts I run, and I don't want to run them to the wrong place if, after a PC reboot, sdd is no longer sdd. So to keep from making that mistake, I want the system to always claim the same /dev/sd* per disk on every reboot, regardless of discovery order. I thought to change the power connectors, but realized the reboot reordering wasn't consistent either. In other words, if sda always came up as sdd on the first cold boot or reboot, then the power-cable approach might work, but sometimes it's sdc, sometimes sdb. Usually after 3 or 4 reboots, or a second cold boot, everything falls into SATA1 = sda, SATA2 = sdb, SATA3 = sdc, SATA4 = sdd, which is what I'm after. SATA5 and 6 = sr0 and sr1, but I'm not concerned if those two switch order. Because Arch is a rolling distro, the PC gets rebooted often enough; the extra reboots just for device names are what I want to eliminate.
|
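On the rsync worry specifically: rather than pinning sd* names, a guard in the script can refuse to copy unless the target is really mounted, so a reshuffled disk can never make rsync write into the empty mount directory on the root filesystem. A sketch; /mnt/backup and the source path are hypothetical stand-ins:

```shell
#!/bin/sh
# Only rsync if the target directory is an actual mount point,
# checked against /proc/mounts (field 2 is the mount point).
# In a real script you would likely exit non-zero on refusal.
TARGET=/mnt/backup
if grep -q " $TARGET " /proc/mounts; then
    rsync -a /home/user/docs/ "$TARGET/docs/"
    echo "synced to $TARGET"
else
    echo "refusing to rsync: $TARGET is not mounted"
fi
```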
rsync is all about files, and filesystems. It doesn't care about the real device, and neither should you: use -L on the mount command.
KISS. Mind you, I'd be worried about that power supply. I've had situations where different distros on the same machine report different disk order, but they never changed within each distro. In my case it was because of a mix of (E)IDE and SATA drives. |
@ syg00 thank you. It's a brand new P/S, but that doesn't rule out a possible defect. I'm going to check the Eco Mode switch, as I think it's on but it is not accessible from outside the desktop case.
Understood on rsync, but I completely missed the mount -L option :doh: KISS is subjective in Linux; one of Linux's greatest features is that one can build one's own PC almost any way desired, as Linux freedom practically spans the entire spectrum of individual human:computer endeavor, including simplicity, complexity, triumph and error. Perhaps the udev rule in this form was an unintended consequence of the writers of udev? I dunno. Back to the original question: if I understand the Wiki and the udev man pages correctly, then the udev rule should work. I haven't tried it and probably won't, based on the support in this forum; there are simpler and better approaches :) |
WFV: the device names udev gives the devices should be irrelevant; you should not be using them. You use the UUID or LABEL to mount, and then address by the mount. If you are addressing the mounted data using rsync and the device name, you are already wrong.
(By wrong here I do not mean that you cannot make it work, but that you should not. You are building something that is easy to break, that can break without detection, and that has potentially catastrophic consequences.) WHY would you want to address the devices and partitions by the device name instead of the mount path? |
wpeckham: agreed regarding udev. The fourth drive I don't mount in fstab, just manually on occasion. So for years I ran
Code:
# mount /dev/sddX /path/to/mount/point I checked the P/S and the eco switch was off. The eco switch is about energy saving while running, so I have no idea whether it impacts startup, but I checked it anyway since, as syg00 pointed out, it could still be a problem with the new P/S. I reattached the first three drives so that the SSD "/" drive is first on the parallel connectors and the "/home" drive is second; theoretically, being parallel connections, it shouldn't matter. I didn't test anything else about the P/S. Rebooted normal (sda,b,c,d = SATA1,2,3,4), no special rule. Maybe the Arch Wiki should also warn against using udev to statically assign internal drive names? It implies that a udev rule can be used to avert such changes: Quote:
I'm going to mark this as solved because of Code:
# mount -L |
As this seems to have begun when the P/S was replaced, there is a non-obvious implication that the SATA cables may be involved too, as in having R&R'd them while working the new and old power cables in and out. So, it may be worth time spent on SATA cable swapping and/or replacement.
If all the SATA power leads are not the same length, maybe putting the shortest one on the intended sda and the longest one on the intended sdd would make a difference. The same might apply if your SATA control cables are not all identical. Does smartctl report any issues with your sdd or sdc? You do have the intended sda on SATA port 0/1 and the intended sdd on SATA port 3/4, right? Is a motherboard BIOS update available? |
@ mrmazda: all of the drives pass SMART, although sdb showed S.M.A.R.T. ERROR in the BIOS POST before replacing the P/S (but passed smartctl tests); after replacing the P/S, the BIOS POST shows "S.M.A.R.T. cable & status OK" on all 4 drives, and smartctl is OK as well.
sdb (2TB HDD) and sdc (4TB HDD) are < 6 months old. The power cables are new with the P/S (modular, all the same length), but yes, worth looking into the SATA cables; they are different lengths, and I purchased most of them about the same time as the PC (almost 7 years ago, some a year or two later). The sda cable is longer than the others; very good point (not sure the shorter cable will reach without moving the drive location, but I can do that too). Yes, the intended drives are physically connected to the mobo: sda on SATA port 1, sdb on port 2, sdc on port 3, sdd on port 4 (DVDs on ports 5 & 6). No BIOS update is available; it's an Asus M5A88-M with the latest BIOS, from Dec 2013. It reports as May 2013, however it's flashed with the ROM from the Asus website of Dec 2013: Code:
BIOS Info: #0 EDIT: Redid the SATA cables so shortest to longest are SATA1,2,3,4 (1=/, 2=/home, 3=4TB HDD, 4=unmounted 2TB), but it made no difference; first boot rearranged... not a SATA cable thing. |