LinuxQuestions.org > Linux - Server
The data from the command "df" is not updated (https://www.linuxquestions.org/questions/linux-server-73/the-data-from-the-command-df-is-not-updated-631203/)

homeini 03-28-2008 04:32 AM

The data from the command "df" is not updated
 
Hello,

I've got a server with a cron job that alerts me when the used disk space goes above 80%. The cron job uses the "df" command to check this.

A few days ago it alerted me because 82% of the disk was in use. So I went to the server and deleted an 11G log file that was the cause of this. But the next day it alerted me again, and the next day again...

So I went back to the server, ran "df", and saw that it was still reporting the wrong data.

Someone told me to reboot the machine, and after doing so "df" worked fine and the server stopped alerting me.

How often does "df" update its data? Do I always have to reboot the machine to update it?

Thanks and greetings!!

colucix 03-28-2008 05:04 AM

No, I don't think it's a problem with df. Since it was a log file, it was most likely still in use by some application when you deleted it. The application keeps the file open through its inode, so even though you removed the entry from the directory listing, the data blocks are still allocated and are counted in the disk usage until the application closes the file.

Therefore a reboot is not necessary. You can simply restart the application that was using the file, most likely the syslogd service.
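A small demonstration of the effect colucix describes, and of a safer alternative: truncating a busy log in place instead of deleting it. The writer keeps its file descriptor and the blocks are freed immediately, so df updates without any restart. (The path /tmp/demo.log is just a throwaway name for the demo; "lsof +L1" is a handy way to list deleted-but-still-open files on a real system.)

```shell
# To see which processes are still holding deleted files open,
# "lsof +L1" lists open files whose link count has dropped to zero:
#   lsof +L1

# Demo: create a 10 MB "log", then truncate it in place rather than rm it.
log=/tmp/demo.log
dd if=/dev/zero of="$log" bs=1M count=10 2>/dev/null

du -k "$log"      # ~10240 KB allocated
: > "$log"        # truncate in place, do not rm
du -k "$log"      # 0 KB, space returned to the filesystem at once

rm -f "$log"
```

With truncation there is no window where df and the directory listing disagree, because no unlinked-but-open inode is ever created.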

homeini 03-28-2008 05:23 AM

Thanks!!

I restarted the application and it works!

ledow 03-28-2008 08:47 AM

Additionally, if you have 11GB log files, you should really be rotating them properly, or they will just become unmanageable like this.

Setting up log rotation so that, for example, you get nine individual 1GB log files that cycle each day would mean that once your server has been running for ten days, its disk requirements stay pretty much the same all the time. And removing log files from underneath a program can cause problems (although it would have to be a particularly poorly written program, I have to admit).
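A minimal sketch of the rotation scheme described above, as a hypothetical logrotate config (the path /var/log/myapp.log and the file /etc/logrotate.d/myapp are just illustrative names):

```
# /etc/logrotate.d/myapp -- hypothetical application log
/var/log/myapp.log {
    daily            # cycle once a day
    rotate 9         # keep nine old copies, delete the tenth
    compress         # gzip rotated copies to save space
    missingok        # no error if the log is absent
    copytruncate     # copy then truncate in place, so the writing
                     # program's file handle stays valid
}
```

The copytruncate directive sidesteps the "removing files from underneath a program" problem: the original file is never unlinked, so even a naive writer keeps logging to a valid, now-empty file.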

What if that program had gone mad and written hundreds of gigs instead of a few? The machine would have ground to a halt and nobody would have been able to use it until you cleared it. Also, if the program that generated them was running as root, there's the possibility that you wouldn't have been ABLE to log in, even as root or in single-user mode, to fix it, because it would have written every byte it could to the disk and filled it up completely, including the default 5% root-only emergency space.

And the next question is: how often do you (or anyone else) actually look at those logs? What's the point of spooling logs to disk if they grow to 11GB and then just get deleted? Either turn off logging, tone it down a bit, or put in some form of log rotation/management.

