External HD probably shutting down after prolonged periods of inactivity
Hello everyone,
I have a Western Digital HD connected through an external SATA connection to a Slackware 11 server.
After prolonged periods of inactivity I find that I can't access the disk any longer.
There is a permanent entry for the disk in fstab, and there is still an entry for it in mtab, too...
However, the /dev/sd* device node is gone, so I can't access the disk.
Now, if I'm next to the server, I know I can umount, unplug and re-plug the disk, then mount it again and be on my way...
BUT what if I'm not close to the server?
Is there a remote way to make udev (or something else) re-sense the existence of the disk, so that the disk gets a /dev/ node again?
While working on a totally different problem, I once had to write a script that prevented the drive from going to sleep (or more precisely, from parking its heads).
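A rough sketch of the idea (assuming smartmontools is installed; the device path and the -d option below are just placeholders for whatever your setup needs):
Code:
#!/bin/sh
# Poll the drive's SMART data every few seconds; the read itself is enough
# to keep the drive from parking its heads or spinning down.
# /dev/sda and "-d sat" are placeholders; adjust both for your drive.
while true; do
    smartctl -a -d sat /dev/sda > /dev/null 2>&1
    sleep 3
done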
The point is, calling smartctl -a wakes up the drive. Obviously you don't need to pass any extra arguments like the ones above; just call smartctl -a /dev/whatever. And clearly you should adjust that 3 seconds to a longer, more reasonable interval.
Edit: By the way, this solution assumes that sleeping is the problem. You may also want to check dmesg and other logs to see whether there is some other failure. I've never tried it on external HDs, but the USB autosuspend feature (if enabled) also causes some USB devices (like mice) to shut down.
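If the enclosure ever did sit on USB rather than eSATA, the autosuspend state can be inspected through sysfs on kernels that expose it (the exact attribute names vary between kernel versions; the device path below is only an example):
Code:
# "auto" means the kernel may autosuspend the device, "on" keeps it powered
grep . /sys/bus/usb/devices/*/power/control

# keep one particular device (example path) permanently powered
echo on > /sys/bus/usb/devices/1-1/power/control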
First be sure the SATA drive supports the options that 'smoker' has suggested.
Quote:
Excerpt from 'man hdparm':
-s Enable/disable the power-on in standby feature, if supported by the drive. VERY DANGEROUS.
Do not use unless you are absolutely certain that both the system BIOS (or firmware) and the
operating system kernel (Linux >= 2.6.22) support probing for drives that use this feature.
When enabled, the drive is powered-up in the standby mode to allow the controller to sequence
the spin-up of devices, reducing the instantaneous current draw burden when many drives share
a power supply. Primarily for use in large RAID setups. This feature is usually disabled and
the drive is powered-up in the active mode (see -C above). Note that a drive may also allow
enabling this feature by a jumper. Some SATA drives support the control of this feature by
pin 11 of the SATA power connector. In these cases, this command may be unsupported or may
have no effect.
-S Put the drive into idle (low-power) mode, and also set the standby (spindown) timeout for the
drive. This timeout value is used by the drive to determine how long to wait (with no disk
activity) before turning off the spindle motor to save power. Under such circumstances, the
drive may take as long as 30 seconds to respond to a subsequent disk access, though most
drives are much quicker. The encoding of the timeout value is somewhat peculiar. A value of
zero means "timeouts are disabled": the device will not automatically enter standby mode.
Values from 1 to 240 specify multiples of 5 seconds, yielding timeouts from 5 seconds to 20
minutes. Values from 241 to 251 specify from 1 to 11 units of 30 minutes, yielding timeouts
from 30 minutes to 5.5 hours. A value of 252 signifies a timeout of 21 minutes. A value of
253 sets a vendor-defined timeout period between 8 and 12 hours, and the value 254 is
reserved. 255 is interpreted as 21 minutes plus 15 seconds. Note that some older drives may
have very different interpretations of these values.
Since your drive is SATA, be sure to get the manufacturer's specs for the drive and decide whether to use 'sdparm' or 'hdparm', as some options may not be supported.
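For instance, using the timeout encoding described above (assuming the external drive appears as /dev/sdb, and that it honours these commands at all):
Code:
# value 0 disables the spindown timer entirely
hdparm -S 0 /dev/sdb

# or pick a long timeout instead: 241 = 1 unit of 30 minutes
hdparm -S 241 /dev/sdb

# check the drive's current power mode (active/idle/standby)
hdparm -C /dev/sdb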
Yes, but you could try (repeatedly) calling it while the drive still works. If sleeping is the problem, the device node won't go away. If it still goes away, then sleeping is not the cause. The message above suggests bad blocks, but it could also be the result of attempting I/O on a nonexistent device (is that log from before or after the drive died?). I think you could try doing a bad block check just to make sure. Also, the output of smartctl -a may give hints about the drive's health.
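Something like this, assuming the drive is at /dev/sdb (the badblocks scan below is read-only, but it can take hours on a large disk):
Code:
# non-destructive, read-only surface scan with progress and verbose output
badblocks -sv /dev/sdb

# full SMART report: overall health, attributes, and the drive's error log
smartctl -a /dev/sdb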
Eventually (I don't know if it happens simultaneously) the /dev node is also lost...
I'll have to unplug and re-plug the disk and try smartctl -a.
Restarting udev didn't make the disk visible...
But I'm confident there is nothing wrong with the drive (I'll run a check nevertheless)... I sort of believe it might be some power-down function of the USB, as you stated...