Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
I am not sure this problem has anything to do with Linux. I think it might, because it started right around when I installed and began using (Gentoo) Linux; I installed it on the same hard disk as Windows XP, with GRUB in the MBR.
When I look at the hard disk's SMART data, I see the "Reallocation Event Count" increasing by a small amount every day. It started at 4 when I installed Linux, increases by 1, 2 or 3 every day, and right now is at 22.
Reallocation Event Count means that there were attempts to reallocate sectors.
When I look at "Reallocated Sector Count", which would indicate the actual number of sectors that have indeed been remapped, it says 0 sectors, and the drive is reported as perfectly healthy.
The value "Uncorrectable Sector Count" (bad sectors which could not be remapped) also states 0 sectors.
So if I understand this correctly, there were 22 attempts to remap a sector, and these attempts failed, because there are no actually remapped sectors.
I now think that these attempts are bogus, because otherwise there would have to be sectors in the Uncorrectable Sector Count value, right?
So what is going on here? Does this maybe have anything to do with Linux?
There is a list of attributes and their meanings here: http://en.wikipedia.org/wiki/S.M.A.R....T._attributes
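For anyone following along, the raw values come from `smartctl -A` (part of smartmontools; `/dev/sda` below is just an example device name). Here is a small Python sketch that pulls the relevant attributes out of that output. The sample table is made up for illustration, not taken from the poster's drive:

```python
# Parse the attribute table printed by `smartctl -A /dev/sda`.
# SAMPLE is a fabricated excerpt of that output, for illustration only.
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       22
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       0
"""

def parse_attributes(text):
    """Return {attribute_name: raw_value} for each SMART attribute row."""
    attrs = {}
    for line in text.splitlines():
        parts = line.split()
        # A data row starts with the numeric attribute ID;
        # the raw value is the last column.
        if parts and parts[0].isdigit():
            attrs[parts[1]] = int(parts[-1])
    return attrs

attrs = parse_attributes(SAMPLE)
print(attrs["Reallocated_Event_Count"])  # 22
print(attrs["Current_Pending_Sector"])   # 0
```

On a real system you would capture the output of the `smartctl` command (e.g. via `subprocess`) instead of a hard-coded string, and note that some drives print extra text in the raw-value column.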
I would say the drive is fine. I think any errors it might have found have been corrected. I see 'Reallocated Sector Count' as more credible than 'Reallocation Event Count', which is ambiguous and isn't reported by all drives (mine doesn't have it).
Either way, I run a long test around every 1000 power-on hours.
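As a sketch of that schedule: the drive's running hour count is SMART attribute 9 (Power_On_Hours), and a long self-test can be started with `smartctl -t long /dev/sda` (device name again an example). The helper below is purely hypothetical, with both hour values supplied as plain inputs:

```python
# Sketch: decide whether another long self-test is due, based on the
# Power_On_Hours attribute and the hour count recorded at the last test.
# In practice the current value would come from `smartctl -A`.
TEST_INTERVAL_HOURS = 1000

def long_test_due(power_on_hours, last_test_hours):
    """True if at least TEST_INTERVAL_HOURS have elapsed since the last test."""
    return power_on_hours - last_test_hours >= TEST_INTERVAL_HOURS

print(long_test_due(5230, 4100))  # True: 1130 hours have elapsed
print(long_test_due(5230, 4500))  # False: only 730 hours
```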
It's indeed weird.
Some websites and indicators suggest that this is a critical value that could precede disk failure.
Other websites and indicators do not take it that seriously.
I am also just curious what might trigger it, especially since it seems to have started when I installed Linux.
Might be worth it to run the OEM hard drive tests. If the diag test causes it to increment, then you may have to find out what is causing it.
As to whether Linux could directly cause it: maybe. It's hard to say yes or no. A modern hard drive is usually a system unto itself where these reports are concerned. A bad driver, bad code, or faulty hardware might be causing it.
Besides the Reallocated Sector Count, you also want to look at Current Pending Sector (attribute #197). That is a count of sectors that have been found bad and will be reallocated the next time they are written. Until they are reallocated, they remain visible to the OS as bad sectors. I believe that is where you will find your 22 (or more, by now) sectors. A steadily incrementing Reallocation Event Count together with increases in either Reallocated Sector Count or Current Pending Sector is a very bad sign.
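To make that rule of thumb concrete, here is a rough sketch of it in Python. The function and its return strings are my own illustration, not any smartmontools API:

```python
# Rough sketch of the rule of thumb above: pending or reallocated
# sectors indicate real bad sectors; the event count alone is ambiguous.
def assess(reallocated_sectors, pending_sectors, reallocation_events):
    # Any remapped or pending sectors mean the drive has real bad sectors.
    if pending_sectors > 0 or reallocated_sectors > 0:
        return "bad"
    # Events logged, but no sector actually remapped or pending:
    # the situation described in this thread.
    if reallocation_events > 0:
        return "ambiguous"
    return "healthy"

print(assess(0, 0, 22))  # ambiguous -- the poster's situation
```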
Quote:
Might be worth it to run the OEM hard drive tests. If the diag test causes it to increment, then you may have to find out what is causing it.
I have run the extended test from the GNOME Disk Utility. I also ran badblocks with a destructive read/write test on the largest partition. This all went fine.
There also seems to be a self-test in the BIOS setup; I haven't tried that one yet. Will do soon.
Quote:
As to whether Linux could directly cause it: maybe. It's hard to say yes or no. A modern hard drive is usually a system unto itself where these reports are concerned. A bad driver, bad code, or faulty hardware might be causing it.
Besides the Reallocated Sector Count, you also want to look at Current Pending Sector (attribute #197). That is a count of sectors that have been found bad and will be reallocated the next time they are written. Until they are reallocated, they remain visible to the OS as bad sectors. I believe that is where you will find your 22 (or more, by now) sectors. A steadily incrementing Reallocation Event Count together with increases in either Reallocated Sector Count or Current Pending Sector is a very bad sign.
The Current Pending Sector is also reporting a value of 0.
The only value that is 'nagging' is the Reallocation Event Count. The other values stay steadily at 0 and thus look healthy.
Strange. Never encountered that before. Is the drive subject to vibration, or something else that might cause a sector to appear bad temporarily, but later be found OK? Unfortunately, there's no way to know just what events might cause the firmware to increment the Reallocated Event Count.
The hard sound of the internal laptop speakers, which sit right above the drive bay, causes vibrations.
Maybe this is also the cause of the strange reallocation attempts, which in the end do not seem to be real errors, just interruptions.
I mean, the slowdown of the HDD even triggered buffered I/O errors, as found in /var/log/messages.
Still, it remains weird: the other drives in my laptop, which were in there for 6 whole years, never had a high Reallocation Event Count.
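For what it's worth, those messages can be counted with something like `grep -c 'Buffered I/O error' /var/log/messages`. A Python equivalent, where the sample log lines are invented for illustration:

```python
# Count "Buffered I/O error" lines as they appear in /var/log/messages.
# SAMPLE_LOG stands in for the real file; the lines are invented.
SAMPLE_LOG = """\
Nov  2 18:40:01 laptop kernel: Buffered I/O error on device sda3, logical block 112233
Nov  2 18:40:05 laptop kernel: usb 1-1: new high-speed USB device
Nov  2 18:41:17 laptop kernel: Buffered I/O error on device sda3, logical block 112234
"""

errors = [line for line in SAMPLE_LOG.splitlines()
          if "Buffered I/O error" in line]
print(len(errors))  # 2
```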
Last edited by diejengent; 11-02-2012 at 06:53 PM.
Here is a video demonstrating how the response times of a whole shelf of rack-mounted disk drives are degraded just by yelling at them: http://www.youtube.com/watch?v=tDacjrSCeq4
I've personally seen a disk drive degraded almost to zero throughput by a vibrating tape drive in the same tower case.
That indeed is a remarkable video.
Thanks for sharing your insights!