LinuxQuestions.org

LinuxQuestions.org (http://www.linuxquestions.org/questions/index.php)
-   Programming (http://www.linuxquestions.org/questions/forumdisplay.php?f=9)
-   -   [Bash] log file roll over issues (http://www.linuxquestions.org/questions/showthread.php?t=656039)

noir911 07-15-2008 10:55 PM

[Bash] log file roll over issues
 
My script runs constantly in the background looking for matches to some
tcpdump filters; when it finds them, it writes them to a file: /tmp/tcpdump.log.
Once this file (/tmp/tcpdump.log) gets 1024 bytes long, the script should move it
to a new file (/tmp/tcpdump.log-date), create a new tcpdump.log file, and
start writing to that file.

I cannot seem to get the last bit working: rolling over the existing file and
starting to write to a new file. Here's my script. Any help would be much
appreciated. Thanks.

Code:

#!/bin/bash
tcpdump not ssh 2>&1 > /tmp/tcpdump.log

# print the file size
LS="`ls -al /tmp/tcpdump.log | awk '{print $5}'`"

if [ $LS -gt "1024" ]; then
  /bin/mv /tmp/tcpdump.log /tmp/tcpdump.log.`date +%d-%b-%Y`
  touch /tmp/tcpdump.log
fi


Mr. C. 07-16-2008 02:51 AM

See the -C option in tcpdump. It will automatically rollover files for you.

1024 bytes is *awfully* small for a dump file - that's less than 1 full packet!

noir911 07-16-2008 05:16 AM

Quote:

Originally Posted by Mr. C. (Post 3216135)
See the -C option in tcpdump. It will automatically rollover files for you.

1024 bytes is *awfully* small for a dump file - that's less than 1 full packet!

Thanks for your help. My version of tcpdump (OpenBSD) doesn't have the -C flag. Also, I am more interested in doing this in a shell script so I can use the script later with some other program.

I know 1024 bytes is too small but I can always increase that value.

Thanks for your help again. Any further help on the code itself would be much appreciated.

jcookeman 07-16-2008 08:12 AM

In this scenario you will most likely have to test logrotate with the copytruncate directive. But since you're on OpenBSD, you should look at PF (the OpenBSD packet filter) for what you are trying to do -- it's much more flexible and robust, and will probably save you time in the end.

Mr. C. 07-16-2008 09:08 AM

How are you stopping tcpdump from outputting more than 1024 bytes? That isn't shown in your script. How do you know when it has output 1024 bytes? And where is the loop to look again if it is < 1024 bytes?

While renaming a log file in use isn't problematic as long as the old and new names are on the same file system, at some point you need to signal tcpdump to stop dumping into the old file and start dumping into the new one. That isn't shown in your script.

See man stat for obtaining size information about a file. But again, this is silly, because tcpdump can also output a maximum number of bytes. I'm sure OpenBSD has a port of a more feature-rich tcpdump; I installed mine from pkgsrc on NetBSD.
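A sketch of that stat-based size check (the helper name file_size is mine, not from the thread; GNU stat takes -c %s for the size while BSD/OpenBSD stat takes -f %z, so this tries the GNU form first and falls back to the BSD one):

```shell
# Hypothetical helper sketching the "man stat" suggestion above.
# GNU stat:        stat -c %s FILE
# BSD/OpenBSD stat: stat -f %z FILE
file_size() {
    stat -c %s "$1" 2>/dev/null || stat -f %z "$1"
}

# Usage in the rotation check, e.g.:
#   [ "$(file_size /tmp/tcpdump.log)" -gt 1024 ] && rotate_now
```

Unlike parsing ls output, this returns the size alone, with no awk field-splitting to go wrong.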

jcookeman 07-16-2008 02:24 PM

As I suspected, the copytruncate directive is needed in this case. If you remove it from the logrotate config, it will not work. This is on a Linux box, but it should be the same for you on OpenBSD.

Code:

#!/usr/bin/env bash
FILE=/tmp/tcpdump.log
LGRTCNF=${HOME}/tcpdump_logrotate.conf
tcpdump > $FILE 2>&1 &

cat > $LGRTCNF <<EOF
$FILE {
    copytruncate
    rotate 5
}
EOF

while true
do
    [[ "`ls -al $FILE | awk '{print$5}'`" -gt "1024" ]] && \
    logrotate -f $LGRTCNF
    sleep 5
done


Mr. C. 07-16-2008 02:36 PM

My mistake - I thought you were trying to rotate the binary data files (e.g. -w file) rather than just the STDOUT.

Two processes modifying an open file at the same time have unpredictable results. The copytruncate option is iffy at best.

Good luck.

jcookeman 07-16-2008 02:45 PM

Quote:

Originally Posted by Mr. C. (Post 3216762)
My mistake - I thought you were trying to rotate the binary data files (eg. -w file) rather than just the STDOUT.

Two processes modifying an open file at the same time have unpredictable results. The copytruncate option is iffy at best.

Good luck.

copytruncate is the only option in this instance because the file is open. In general it works pretty well, but that's why I suggested he use PF if he is not able to install a more feature-rich tcpdump.

noir911 07-23-2008 09:51 PM

Quote:

Originally Posted by jcookeman (Post 3216743)
As I suspected, the copytruncate directive is needed in this case. If you remove it from the logrotate config, it will not work. This is on a Linux box, but it should be the same for you on OpenBSD.

Thanks for your script. This has been real helpful. I modified the script a bit and ran and I can see some weird things happening:

- as soon as tcpdump.log reaches 1024 bytes, the script copies the content of tcpdump.log to 5 other files (tcpdump.log.1, tcpdump.log.2, etc.). I know this is because of the rotate 5 option in logrotate. But all I want is to copytruncate once to tcpdump.log.1, and when tcpdump.log reaches 1024 bytes again, to copytruncate again (once) to tcpdump.log.2.

- after the copytruncate is done, the tcpdump.log file drops to 0 bytes and then grows past 1024 bytes again, over and over; that is, ls keeps reporting the file size as 0 bytes and then suddenly >= 1024 bytes.

Here is my new script. Would really appreciate further help. Thanks.

Mr. C. 07-23-2008 10:08 PM

Welcome to the world of multiple processes writing to the same file at the same time. Unix 101.

noir911 07-23-2008 11:44 PM

Quote:

Originally Posted by Mr. C. (Post 3224398)
Welcome to the world of multiple processes writing to the same file at the same time. Unix 101.

Is there anything I can do to stop writing to the file for 5 seconds while logrotate does the copytruncate?

Mr. C. 07-24-2008 01:48 AM

You are misunderstanding how file I/O works.

When a file is opened by a process, it keeps a file pointer marking where its *next* write will go. In your script, the shell has $FILE open for the redirection. After the shell has written its 1024th byte, its file pointer is positioned at offset 1024. Since you truncated the file, the next write by the shell creates a sparse file: offsets 0-1023 are empty, and tcpdump's redirected output continues from offset 1024 onward. (This is a generalization of the concept; I've omitted some details for the sake of simplicity.) Let's confirm with some file sizes:

Code:

-rw-r--r--  1 root      root 113K Jul 23 23:14 tcpdump.log
-rw-r--r--  1 root      root  32K Jul 23 23:09 tcpdump.log.1
-rw-r--r--  1 root      root  48K Jul 23 23:10 tcpdump.log.2
-rw-r--r--  1 root      root  64K Jul 23 23:11 tcpdump.log.3
-rw-r--r--  1 root      root  80K Jul 23 23:11 tcpdump.log.4
-rw-r--r--  1 root      root  96K Jul 23 23:12 tcpdump.log.5
-rw-r--r--  1 root      root 112K Jul 23 23:13 tcpdump.log.6

Ok, that seems to agree with the theory. Now, let's check if these are sparse files:

od -b tcpdump.log

It will reveal that indeed all the initial bytes are zeros:

Code:

0000000  000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000
*
0240000  061 070 065 067 040 ...

Yup, that is also as predicted.
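The effect is easy to reproduce without tcpdump: hold a redirection open on one file descriptor, truncate the file behind its back, and the next write lands past the hole. A minimal sketch (the file name /tmp/sparse_demo is just for illustration):

```shell
# Reproduce the copytruncate hole: the writer's file offset survives
# the truncation, so the next write leaves NUL bytes at the front.
exec 3>/tmp/sparse_demo     # open for writing on fd 3 (offset 0)
printf '0123456789' >&3     # write 10 bytes; fd 3's offset is now 10
: > /tmp/sparse_demo        # "copytruncate": file shrinks to 0 bytes
printf 'X' >&3              # this write lands at offset 10
exec 3>&-                   # close fd 3
wc -c < /tmp/sparse_demo    # reports 11: bytes 0-9 are now NULs
```

Exactly the pattern in the listing above: each rotated copy carries the previous offsets as leading zeros.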

The astute observer will also notice that the tcpdump.log.# file sizes do not exactly equal 1024, but jump in size according to the I/O buffer size and when it is flushed. On my system, the file size delta between consecutive tcpdump.log.# files is 16K (some runs have 32K or even 64K deltas, depending upon timing, and whether or not -l is used with tcpdump).

As I have been trying to tell you, you cannot force another process to change or reset its internal file pointer. I'll state it again - two random processes CANNOT write to the same file at the same time as the results are undefined. I hope you now understand this basic lesson.

So you need a different approach.

One approach is to write your own read-n-rotate program, that reads and writes its STDIN, and rotates its output file every N bytes. This can easily be done in C, perl, or your language of choice. This way - you control the open and close of the output files.
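Since the thread is about bash, the read-and-rotate filter can be sketched in the shell itself (the function name, threshold, and paths below are my own illustration; a C or perl version would be faster):

```shell
#!/usr/bin/env bash
# rotate_filter BASE MAX: hypothetical read-and-rotate helper.
# Reads STDIN line by line into BASE; once BASE holds at least MAX
# bytes, renames it to BASE.1, BASE.2, ... and starts a fresh file.
# Because this one process owns the output file, nothing ever has to
# truncate it behind the writer's back.
rotate_filter() {
    local base=$1 max=$2
    local n=0 size=0 line
    : > "$base"
    while IFS= read -r line; do
        printf '%s\n' "$line" >> "$base"
        size=$(( size + ${#line} + 1 ))   # +1 for the newline
        if (( size >= max )); then
            n=$(( n + 1 ))
            mv "$base" "$base.$n"         # rotate: hand off the full file
            : > "$base"                   # start a fresh, empty file
            size=0
        fi
    done
}

# Usage (tcpdump -l makes its stdout line-buffered):
#   tcpdump -l not ssh 2>&1 | rotate_filter /tmp/tcpdump.log 1024
```

The rotation happens between writes, inside the only process holding the file open, so no sparse files appear.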

Another approach is to use tcpdump's -w, -C, and -W options to create N files, each of a specified maximum size. tcpdump will automatically close one file and open the next for output. Using -W lets you create a ring of files (e.g. 1, 2, 3, 1, 2, 3, ...). When tcpdump has finished writing a file, it advances to the next. You are then safe to read the finished file with tcpdump -r, and you can delete it too. How do you know when to read the next file? Answer: after tcpdump has created file N+1, mod the maximum number of files you specify with -W. This is how you manage a ring buffer with one writer and one reader (called the producer/consumer problem in computer science).
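A rough sketch of that invocation, assuming a tcpdump build that supports -C and -W (which the OP's OpenBSD tcpdump apparently does not; the exact numbering of the output names also varies between tcpdump versions):

```shell
# Capture to a ring of 3 binary dump files of at most ~1 MB each
# (with -C the size unit is millions of bytes).  tcpdump closes each
# file when it is full and moves on to the next slot in the ring.
tcpdump -w /tmp/dump -C 1 -W 3 not ssh

# A completed file can then safely be read back, e.g.:
#   tcpdump -r /tmp/dump1
```

This needs the binary -w format; it does not help if you specifically want the decoded STDOUT text.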

Good luck!


All times are GMT -5. The time now is 04:14 AM.