This time it is a Java webapp running on a CentOS machine, with four Java processes logging during the test run (via log4j). For this case, people here will provide a machine with more disk space for the next try. However, I have run into this problem repeatedly on other projects: once it was an httpd log growing over a long test run, another time a process (some simulator) started with nohup. So when I hit such problems, I usually copy /dev/null onto the big file, if the others are ok with it.
(I'm only using these servers temporarily, and people usually refuse to lower the log level or let me use logrotate; this time it isn't even installed.)
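To be concrete, what I run is something like the following; as far as I know the last two are equivalent in-place truncations, but that's my assumption:

blid01# cp /dev/null trace/BlidServer-trace.log
blid01# : > trace/BlidServer-trace.log
blid01# truncate -s 0 trace/BlidServer-trace.log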
What I'm wondering is whether I might mess something up on the server by using cp /dev/null on a file (or whether I already have). I regarded it as an acceptable makeshift solution when things like the above happen. But this strange behaviour, ending up with a huge "garbage file" or seemingly wrong information returned by ls, has made me unsure. Why do the files not get emptied, and when they do, why does ls in several cases still report an unchanged file size?
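For reference, here is a minimal sketch (on a scratch file, not the actual server) that produces the same ls/du discrepancy via a sparse file; whether this is really what happens with my logs is exactly what I'm asking:

$ dd if=/dev/zero of=test.log bs=1M count=100    # grow a test file to 100M
$ cp /dev/null test.log                          # empty it the way I do with the logs
$ dd if=/dev/zero of=test.log bs=1M count=1 seek=99 conv=notrunc   # simulate a writer resuming at its old offset
$ ls -lh test.log    # apparent size: ~100M
$ du -sh test.log    # actual blocks used: ~1M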
Here's a clipping of the output of ls and du some time after emptying the log:
blid01# ls -lh trace/
-rw-r----- 1 blid origo 1.9G Feb 13 13:40 BlidServer-trace.log
blid01# du -sh trace/