(Semi-crosspost from the Gentoo forums; I rewrote this text, though.)
Here's everything I did:
(TL;DR: added two udev rules matching my MAC addresses and renaming eth* to net/lan, and ran "udevadm trigger net".)
Nothing I did was related to the disk subsystem whatsoever, except, most likely, "udevadm trigger net", which appears to walk a vast number of device nodes, including all disks and partitions, according to "udevadm trigger --dry-run --verbose net".
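For reference, the two rules looked roughly like this (the file name and MAC addresses below are placeholders, not my real ones, but the structure is the same):

    # /etc/udev/rules.d/70-net-rename.rules -- example only, MACs made up
    SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:55", NAME="net"
    SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:66", NAME="lan"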
The system has an uptime of 214 days, and the problem started within 5 minutes (that's how often I log the disk status) of me running udevadm, so it's hardly a coincidence.
There are three SATA disks involved, by the way. They belong to an md RAID5 array, but the array is stopped and therefore not mounted, and iostat shows *no* disk activity while they are spun up.
As soon as I try to spin them down (hdparm -y /dev/sdX), they start spinning again within 0.5-4 seconds at most... with the exception that, for a totally unknown reason, I somehow managed to get sdc to stay in standby for a few hours now. I have no clue why it's that particular disk out of the three; they are different models, but equally sized, with identical partitioning (100% to the RAID array), etc.
(13:36) exscape ~ # hdparm -y /dev/sdb
issuing standby command
(13:36) exscape ~ # hdparm -C /dev/sdb
drive state is: active/idle ### should be "standby"
-- In another window, started before any hdparm commands were sent --
(13:36) exscape ~ # udevadm monitor
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent
KERNEL[1262522217.633318] change /block/sdb (block) <-- when shutting it down
UDEV [1262522219.023900] change /block/sdb (block) <-- when shutting it down - or is this the startup? 1.4 seconds after the kernel event.
KERNEL[1262522226.228927] change /block/sdb (block) <-- when using hdparm -C to check status 7 seconds later
UDEV [1262522226.335246] change /block/sdb (block) <-- when using hdparm -C to check status 7 seconds later
The only way I've found to power them down without the instant power-up is "udevadm control --stop-exec-queue", which, as far as I understand, basically pauses udev until you resume it... that doesn't sound appealing, so I restarted it soon thereafter, which spun them right up again.
(Actually, I tried that again; sdb and sdd spun up, but sdc is still in standby.)
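In case it helps anyone diagnose this: one thing I figure could narrow down which rule reacts to the "change" event is udev's test mode, which (according to its man page) only simulates rule processing and doesn't execute RUN programs, so it should be safe to poke at:

    (13:40) exscape ~ # udevadm test --action=change /block/sdb

(I haven't fully dug through that output yet; the exact timestamp and output will obviously differ on other setups.)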
sdb is a WD Green 1TB (WD10EADS)
sdc is a Hitachi 7K1000(.B?) 1TB
sdd is a Hitachi 7K1000(.B?) 1TB
Anyone have a solution in mind - except for rebooting and hoping for the best?