[Bash] log file roll over issues
My script will run constantly in the background looking for matches against some
tcpdump filters; when it finds them, it writes them to a file: /tmp/tcpdump.log. Once this file (/tmp/tcpdump.log) reaches 1024 bytes, the script should move it to a new file (/tmp/tcpdump.log-date), create a new tcpdump.log file, and start writing to that file. I cannot seem to get the last bit working: rolling over the existing file and starting to write to a new one. Here's my script. Any help would be much appreciated. Thanks. Code:
#!/bin/bash |
See the -C option in tcpdump. It will automatically roll over files for you.
1024 bytes is *awfully* small for a dump file - that's less than 1 full packet! |
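The -C/-W rollover mentioned above can be sketched as a single invocation. The interface name and paths here are illustrative, and per tcpdump(8) the -C argument is in units of roughly 1,000,000 bytes:

```shell
# Hypothetical invocation: let tcpdump rotate its own -w capture files.
#   -C 1 : start a new capture file once the current one exceeds ~1 MB
#   -W 5 : keep a ring of at most 5 files, then cycle back to the first
# Stored in a variable so the sketch can be shown without root privileges.
TCPDUMP_CMD='tcpdump -i eth0 -C 1 -W 5 -w /tmp/tcpdump.pcap'
echo "$TCPDUMP_CMD"
```

With this approach tcpdump itself closes one file and opens the next, so no external rotation script is needed at all.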
Quote:
I know 1024 bytes is too small but I can always increase that value. Thanks for your help again. Any further help on the code itself would be much appreciated. |
In this scenario you will most likely have to test logrotate with the copytruncate directive. But, since you're on OpenBSD, you should use PF for what you are trying to do -- it is much more flexible and robust and will probably save you time in the end.
|
How are you stopping tcpdump from outputting more than 1024 bytes? That isn't shown in your script. How do you know when it has output 1024 bytes? And where is the loop to look again if it is < 1024 bytes?
While renaming a log file in use isn't problematic as long as both names are in the same file system, at some point you need to signal tcpdump to stop dumping into the old file and start dumping into the new one. That isn't shown in your script. See man stat to obtain size information about a file, but again, this is silly, because tcpdump can also output a maximum number of bytes. I'm sure OpenBSD has a port of a more feature-rich tcpdump. I installed mine from pkgsrc on NetBSD. |
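The size-check-then-rename step the post describes can be sketched as below. This is an illustration, not the poster's (elided) script: LOGFILE, MAXBYTES, and the function names are made up, and GNU stat's -c %s is assumed (OpenBSD's stat spells it -f %z). Note the caveat from the post still applies: after the mv, tcpdump keeps writing to the renamed file until it is told otherwise.

```shell
#!/usr/bin/env bash
# Illustrative names only -- not from the original (elided) script.
LOGFILE=/tmp/tcpdump.log
MAXBYTES=1024

# Size of a file in bytes; GNU stat syntax (OpenBSD: stat -f %z).
file_size() {
    stat -c %s "$1" 2>/dev/null || echo 0
}

# Rotate once the log exceeds the limit. The writer (tcpdump) still
# holds the old file open after the mv -- that is the whole problem
# discussed in this thread.
rotate_if_needed() {
    if [ "$(file_size "$LOGFILE")" -ge "$MAXBYTES" ]; then
        mv "$LOGFILE" "$LOGFILE-$(date +%Y%m%d%H%M%S)"
        : > "$LOGFILE"    # recreate an empty log file
    fi
}
```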
As I suspected, the copytruncate directive is needed in this case. If you remove it from the logrotate config, it will not work. This is on a Linux box, but it should be the same for you on OpenBSD.
Code:
#!/usr/bin/env bash |
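A minimal logrotate entry along these lines might look like the following sketch; the path, size, and rotation count are illustrative, not taken from the thread's elided config:

```text
# Hypothetical /etc/logrotate.d/tcpdump entry
/tmp/tcpdump.log {
    size 1k          # rotate once the file exceeds 1024 bytes
    rotate 5         # keep at most 5 rotated copies
    copytruncate     # copy the log, then truncate it in place
    missingok        # do not complain if the log is absent
}
```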
My mistake - I thought you were trying to rotate the binary data files (e.g. -w file) rather than just STDOUT.
Two processes modifying an open file at the same time have unpredictable results. The copytruncate option is iffy at best. Good luck. |
Quote: (jcookeman)
As I suspected, the copytruncate directive is needed in this case. If you remove it from the logrotate config, it will not work. This is on a Linux box, but it should be the same for you on OpenBSD.
Thanks for your script. This has been really helpful. I modified the script a bit, ran it, and I can see some weird things happening:
- As soon as tcpdump.log reaches 1024 bytes, the script copies the content of tcpdump.log to 5 other files (tcpdump.log.1, tcpdump.log.2, etc.). I know this is because of the rotate 5 option in logrotate. But all I want is to copytruncate only once, to tcpdump.log.1, and when tcpdump.log reaches 1024 bytes again, copytruncate once more, to tcpdump.log.2.
- After the copytruncate is done, the tcpdump.log file goes to 0 bytes, then back to 1024 bytes, and keeps increasing.
- This goes on and on; that is, ls keeps reporting the file size as 0 bytes and then suddenly >= 1024 bytes.
Here is my new script. Would really appreciate further help. Thanks. |
Welcome to the world of multiple processes writing to the same file at the same time. Unix 101.
|
You are misunderstanding how file I/O works.
When a file is opened by a process, it has a file pointer of where to perform its *next* write. In your script, the shell has $FILE open for the redirection. After the shell has written byte 1024, its file pointer is set to write starting at byte 1025. Since you truncated the file, the next write by the shell will create a sparse file, where bytes 0-1024 are empty, and tcpdump's redirected output will continue at bytes 1025+. (this is a generalization of the concept; I've omitted some details for the sake of simplicity). Let's confirm with some file sizes: Code:
-rw-r--r-- 1 root root 113K Jul 23 23:14 tcpdump.log
Running od -b tcpdump.log will reveal that indeed all the initial bytes are zeros: Code:
0000000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000
The astute observer will also notice that the tcpdump.log.# file sizes do not exactly equal 1024, but jump in size according to the I/O buffer size and when it is flushed. On my system, the file-size delta between consecutive tcpdump.log.# files is 16K (some runs have 32K or even 64K deltas, depending upon timing, and whether or not -l is used with tcpdump).

As I have been trying to tell you, you cannot force another process to change or reset its internal file pointer. I'll state it again: two random processes CANNOT write to the same file at the same time, as the results are undefined. I hope you now understand this basic lesson. So you need a different approach.

One approach is to write your own read-n-rotate program, which reads and writes its STDIN, and rotates its output file every N bytes. This can easily be done in C, perl, or your language of choice. This way, you control the open and close of the output files.

Another approach is to use tcpdump's -w, -C, and -W options to create N files, each of a specified maximum size. tcpdump will automatically close one file and open the next file for output. Using -W allows you to create a ring of files (e.g. 1, 2, 3, 1, 2, 3, ...). When tcpdump has finished writing a file, it will advance to the next. You are then safe to read the file with tcpdump -r, and you can delete the file too. How do you know when to read the next file? Answer: after tcpdump has created file N+1 mod the maximum number of files you specified with -W. This is how you manage a ring buffer with one writer and one reader (called the producer/consumer problem in computer science). Good luck! |
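The read-n-rotate idea from the post above can be sketched in a few lines of shell. The function and file names here are made up for illustration; a real version in C or perl would count bytes exactly and handle binary data, which line-oriented read -r does not:

```shell
#!/usr/bin/env bash
# Sketch of a read-n-rotate filter: copy STDIN to an output file,
# moving the file aside roughly every MAX bytes (counted per line).
# Because this process owns the output file, rotation is safe --
# unlike truncating a file that another process has open.
rotate_stream() {
    local base=$1 max=$2 n=0 written=0 line
    while IFS= read -r line; do
        printf '%s\n' "$line" >> "$base"    # reopened per write, so mv is safe
        written=$(( written + ${#line} + 1 ))
        if [ "$written" -ge "$max" ]; then
            n=$(( n + 1 ))
            mv "$base" "$base.$n"
            written=0
        fi
    done
}
# Hypothetical usage: tcpdump -l ... | rotate_stream /tmp/tcpdump.log 1024
```

The key design point is that the rotating process is the one holding the output file descriptor, so there is never a second writer to race against.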