
pruss 02-12-2009 01:24 PM

cp /dev/null and ls issues
I ran into a problem: I'm forced to regularly empty log files that are still being written to by some processes. I thought it would be a good idea to use

cp /dev/null logfile
, but it did not work in all cases. At first it was only some nohup.out file that would not be emptied; it just got filled with a lot of garbage (at least the size didn't change).
Later I had the same issue with some of the other log files (where it had appeared to work without problems at first):
ls -lh showed that the file was not emptied and was still growing (though it showed a lower value for "total"), while df and du -sh showed that space had been freed. The new entries of those log files are visible with tail, while less and head just show a lot of garbage and seem to hang (?).
I did not find anything on this issue when googling it. I wonder if I'm missing something completely here..

The server is running CentOS 4.7.

I'd be thankful for any information on this (or where I possibly can find any).

repo 02-12-2009 01:27 PM


cat /dev/null > logfile

> logfile
BTW, why don't you use logrotate?
What log files are you talking about?

pruss 02-12-2009 01:40 PM

Well, I also tried using

cat /dev/null > file

echo "" > file
; neither seemed to have a different effect. The same goes for

> logfile
now.

Log rotation is done every midnight; they set up something themselves, and I'm not supposed to change anything about that, I think. (They don't even seem to have logrotate installed.) The log files are written to by a webapp I'm testing (people are using WebLogic here; I don't know much about it yet). I'm not able to restart those processes right now either, since that would mess up the test. :/

Btw, if the files really were as big as ls reports, the disk would already have filled up and everything would have ground to a halt.

repo 02-12-2009 03:19 PM

you are doing this as root, right?

servat78 02-12-2009 07:08 PM

You need to make sure that you have permission to change that log file. Note that web scripts usually run as user apache or nobody, unless the web server uses some other distinct username or is allowed to sudo, depending on the hosted domain.


pruss 02-13-2009 04:19 AM

Thank you very much for the answers so far.

I log in with my user and use sudo -s; whoami gives me "root". And I was able to empty some of these log files without any problems in the beginning (ls also showed that the file size had decreased). It does look like the space is freed, otherwise I would have run out of disc space during the test, which didn't happen. It just seemed to be that nohup.out file that didn't get emptied but had garbage copied onto it (a permission problem with that one?).

After copying /dev/null onto the log files, ls still reported a file size of several GB, while the 'total' line of ls, as well as df and du, showed that disc space had been freed. So it partly looked like emptying the log files was successful at least. But what happened to ls in this case? Is copying /dev/null onto a file generally something one shouldn't do? (It was, hopefully, only a temporary solution in this case.)
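Edit: the ls/du mismatch looks like what you get with a sparse file. Here's a minimal sketch I tried (the path and sizes are made up for illustration): truncating the file and then having a write land at a far-advanced offset leaves a "hole", so ls reports the apparent size while du reports the much smaller allocated space.

```shell
# Sketch (assumed path /tmp/sparse-demo.log): a writer that keeps its old
# file offset after truncation leaves a hole behind, so ls shows the
# apparent size while du shows the allocated blocks.
f=/tmp/sparse-demo.log
printf 'old log data\n' > "$f"
: > "$f"                                # truncate, like cp /dev/null logfile
# simulate the writer's next write at its old, far-advanced offset:
dd if=/dev/zero of="$f" bs=1 count=1 seek=$((10*1024*1024)) conv=notrunc 2>/dev/null
ls -lh "$f"   # apparent size: the full ~10M offset
du -h "$f"    # allocated: a few KB at most
```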

repo 02-13-2009 05:48 AM

what log files are you talking about?
which processes are filling the logs?
Perhaps you can disable logging in config files, or use logrotate to rotate them, and then delete the backup files.
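If they ever do install logrotate, a minimal stanza with copytruncate handles exactly this case: it copies the log aside and truncates it in place, so a process that keeps the file open just keeps writing to the same file. The path and rotation counts below are placeholders, adjust to taste. (Note the man page warns copytruncate can lose a few lines written between the copy and the truncate.)

```
/path/to/webapp/*.log {
    daily
    rotate 7
    compress
    copytruncate    # copy then truncate in place; writers keep their open fd
    missingok
    notifempty
}
```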

pruss 02-13-2009 07:28 AM

This time it is a Java webapp running on a CentOS machine. There are 4 Java processes logging during the test run (using log4j). For this case, people here are going to provide a machine with more disc space for the next try. However, I have run into this problem before (repeatedly) on other projects, where it was an httpd log growing over a longer test run. Another time it was a process (some simulator) started with nohup. So when I get such problems, I usually copy /dev/null onto the big file, if the others are OK with it.
(I'm only using these servers temporarily, and people usually refuse to lower the log level or let me use logrotate - it's not even installed this time.)

What I'm wondering is whether I might mess something up on the server when I use cp /dev/null on some file (or whether I already have messed up something badly). I regarded it as an OK makeshift solution when things like those described above happen. But this strange behaviour, ending up with a huge "garbage file" or getting seemingly wrong information (?) back from ls, made me unsure. Why do the files not get emptied, and the times they do, why do I in several cases get an unchanged file size back from ls?

Here's a clipping of the output of ls and du some time after emptying the log:


blid01# ls -lh trace/
total 54M
-rw-r-----  1 blid origo 1.9G Feb 13 13:40 BlidServer-trace.log

blid01# du -sh trace/
54M        trace/
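Edit: the tail-works-but-head-hangs behaviour would fit a sparse file too. A small sketch (again with a made-up /tmp path): entries appended after truncation land after the hole, so tail sees real lines, while head and less have to wade through megabytes of NUL bytes first.

```shell
# Sketch (assumed path): appended data lands after the hole left by truncation.
f=/tmp/sparse-tail-demo.log
: > "$f"
dd if=/dev/zero of="$f" bs=1M count=0 seek=10 2>/dev/null   # leave a 10M hole
printf 'first entry after truncation\n' >> "$f"             # append mode writes at EOF
printf 'second entry\n' >> "$f"
tail -n 1 "$f"                       # prints: second entry
head -c 16 "$f" | od -c | head -n 1  # all NUL bytes: the "garbage" head/less choke on
```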
