LinuxQuestions.org
Old 07-15-2008, 10:55 PM   #1
noir911
Member
 
Registered: Apr 2004
Location: Baltimore, MD
Posts: 681

Rep: Reputation: Disabled
[Bash] log file roll over issues


My script runs constantly in the background watching for some tcpdump
filters; when it finds matches, it writes them to a file, /tmp/tcpdump.log.
Once this file (/tmp/tcpdump.log) reaches 1024 bytes, the script should move it
to a new file (/tmp/tcpdump.log-date), create a fresh tcpdump.log, and
start writing to that file.

I cannot get the last bit working: rolling over the existing file and
starting to write to a new one. Here's my script. Any help would be much
appreciated. Thanks.

Code:
#!/bin/bash
# Capture everything except ssh traffic; send stdout and stderr to the log.
tcpdump not ssh > /tmp/tcpdump.log 2>&1

# print the file size
LS=$(ls -l /tmp/tcpdump.log | awk '{print $5}')

if [ "$LS" -gt 1024 ]; then
  /bin/mv /tmp/tcpdump.log "/tmp/tcpdump.log.$(date +%d-%b-%Y)"
  touch /tmp/tcpdump.log
fi
 
Old 07-16-2008, 02:51 AM   #2
Mr. C.
Senior Member
 
Registered: Jun 2008
Posts: 2,529

Rep: Reputation: 59
See the -C option in tcpdump. It will automatically roll over files for you.

1024 bytes is *awfully* small for a dump file - that's less than one full packet!
 
Old 07-16-2008, 05:16 AM   #3
noir911
Member
 
Registered: Apr 2004
Location: Baltimore, MD
Posts: 681

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by Mr. C. View Post
See the -C option in tcpdump. It will automatically rollover files for you.

1024 bytes is *awfully* small for a dump file - thats less than 1 full packet!
Thanks for your help. My version of tcpdump (OpenBSD's) doesn't have the -C flag. Also, I am more interested in doing this in the shell script so I can use the script later with some other program.

I know 1024 bytes is too small, but I can always increase that value.

Thanks for your help again. Any further help on the code itself would be much appreciated.
 
Old 07-16-2008, 08:12 AM   #4
jcookeman
Member
 
Registered: Jul 2003
Location: London, UK
Distribution: FreeBSD, OpenSuse, Ubuntu, RHEL
Posts: 417

Rep: Reputation: 33
In this scenario you will most likely have to test logrotate with the copytruncate directive. But since you're on OpenBSD, you should use PF for what you are trying to do -- it's much more flexible and robust, and will probably save you time in the end.
 
Old 07-16-2008, 09:08 AM   #5
Mr. C.
Senior Member
 
Registered: Jun 2008
Posts: 2,529

Rep: Reputation: 59
How are you stopping tcpdump from outputting more than 1024 bytes? That isn't shown in your script. How do you know when it has output 1024 bytes? And where is the loop to check again while it is < 1024 bytes?

While renaming a log file in use isn't problematic as long as the old and new names are on the same file system, at some point you need to signal tcpdump to stop dumping into the old file and start dumping into the new one. That isn't shown in your script either.

See man stat for obtaining size information on a file. But again, this is silly, because tcpdump itself can also cap its output at a maximum number of bytes. I'm sure OpenBSD has a port of a more feature-rich tcpdump; I installed mine from pkgsrc on NetBSD.
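In case it helps, here is a hedged sketch of a size check that avoids parsing ls (the stat format flag differs between GNU and BSD userlands, so this tries both and falls back to wc; the file path is just an example):

```shell
# Get a file's size in bytes without parsing ls output.
# GNU stat uses -c %s, BSD/OpenBSD stat uses -f %z; wc -c is a portable fallback.
f=/tmp/tcpdump.log
printf 'x%.0s' 1 2 3 4 5 > "$f"    # create a 5-byte test file
size=$(stat -c %s "$f" 2>/dev/null || stat -f %z "$f" 2>/dev/null || wc -c < "$f")
echo "$size"                        # 5
```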
 
Old 07-16-2008, 02:24 PM   #6
jcookeman
Member
 
Registered: Jul 2003
Location: London, UK
Distribution: FreeBSD, OpenSuse, Ubuntu, RHEL
Posts: 417

Rep: Reputation: 33
As I suspected, the copytruncate directive is needed in this case; if you remove it from the logrotate config, the rotation will not work. I tested this on a Linux box, but it should behave the same for you on OpenBSD.

Code:
#!/usr/bin/env bash
FILE=/tmp/tcpdump.log
LGRTCNF=${HOME}/tcpdump_logrotate.conf

# Run tcpdump in the background, capturing stdout and stderr.
tcpdump > "$FILE" 2>&1 &

# Write a minimal logrotate config that truncates the live file in place.
cat > "$LGRTCNF" <<EOF
$FILE {
     copytruncate
     rotate 5
}
EOF

# Poll the file size and force a rotation whenever it exceeds 1024 bytes.
while true
do
     [[ $(ls -l "$FILE" | awk '{print $5}') -gt 1024 ]] && \
     logrotate -f "$LGRTCNF"
     sleep 5
done
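A possible variant (an untested sketch): give logrotate a size directive so it decides for itself when to rotate, and drop the -f from the loop:

```
/tmp/tcpdump.log {
    size 1k
    copytruncate
    rotate 5
    missingok
}
```

Then a plain `logrotate $LGRTCNF` run on a timer only rotates when the file has actually passed 1k.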
 
Old 07-16-2008, 02:36 PM   #7
Mr. C.
Senior Member
 
Registered: Jun 2008
Posts: 2,529

Rep: Reputation: 59
My mistake - I thought you were trying to rotate the binary dump files (e.g. from -w file) rather than just the STDOUT.

Two processes modifying an open file at the same time produce unpredictable results. The copytruncate option is iffy at best.

Good luck.
 
Old 07-16-2008, 02:45 PM   #8
jcookeman
Member
 
Registered: Jul 2003
Location: London, UK
Distribution: FreeBSD, OpenSuse, Ubuntu, RHEL
Posts: 417

Rep: Reputation: 33
Quote:
Originally Posted by Mr. C. View Post
My mistake - I thought you were trying to rotate the binary dump files (e.g. from -w file) rather than just the STDOUT.

Two processes modifying an open file at the same time produce unpredictable results. The copytruncate option is iffy at best.

Good luck.
copytruncate is the only option in this instance because the file is held open. In general it works pretty well, but that's why I suggested he use PF if he is not able to install a more feature-rich tcpdump.
 
Old 07-23-2008, 09:51 PM   #9
noir911
Member
 
Registered: Apr 2004
Location: Baltimore, MD
Posts: 681

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by jcookeman View Post
As I suspected, the copytruncate directive is needed in this case. If you remove it from the logrotate config, it will not work. This is on a Linux box, but it should be the same for you on OpenBSD.

Thanks for your script. It has been really helpful. I modified the script a bit, ran it, and I can see some odd things happening:

- as soon as tcpdump.log reaches 1024 bytes, the script copies the content of tcpdump.log to 5 other files (tcpdump.log.1, tcpdump.log.2, etc.). I know this is because of the rotate 5 option in logrotate. But I want it to copytruncate only once, to tcpdump.log.1, and when tcpdump.log reaches 1024 bytes again, copytruncate once more, to tcpdump.log.2.

- after the copytruncate is done, the tcpdump.log file drops to 0 bytes, then jumps back past 1024 bytes and keeps growing - this repeats over and over; that is, ls keeps reporting the file size as 0 bytes and then suddenly >= 1024 bytes.

Here is my new script. Would really appreciate further help. Thanks.

Last edited by noir911; 07-24-2008 at 06:14 PM. Reason: issue resolved now with two scripts
 
Old 07-23-2008, 10:08 PM   #10
Mr. C.
Senior Member
 
Registered: Jun 2008
Posts: 2,529

Rep: Reputation: 59
Welcome to the world of multiple processes writing to the same file at the same time. Unix 101.
 
Old 07-23-2008, 11:44 PM   #11
noir911
Member
 
Registered: Apr 2004
Location: Baltimore, MD
Posts: 681

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by Mr. C. View Post
Welcome to the world of multiple processes writing to the same file at the same time. Unix 101.
Is there anything I can do to stop writing to the file for 5 seconds while logrotate does the copytruncate?
 
Old 07-24-2008, 01:48 AM   #12
Mr. C.
Senior Member
 
Registered: Jun 2008
Posts: 2,529

Rep: Reputation: 59
You are misunderstanding how file I/O works.

When a file is opened by a process, the process has a file pointer marking where its *next* write will go. In your script, the shell holds $FILE open for the redirection. After the shell has written byte 1024, its file pointer is set to write starting at byte 1025. Since you truncated the file, the next write by the shell creates a sparse file: bytes 0-1024 are empty, and tcpdump's redirected output continues at byte 1025 onward. (This is a generalization of the concept; I've omitted some details for the sake of simplicity.) Let's confirm with some file sizes:

Code:
-rw-r--r--  1 root      root 113K Jul 23 23:14 tcpdump.log
-rw-r--r--  1 root      root  32K Jul 23 23:09 tcpdump.log.1
-rw-r--r--  1 root      root  48K Jul 23 23:10 tcpdump.log.2
-rw-r--r--  1 root      root  64K Jul 23 23:11 tcpdump.log.3
-rw-r--r--  1 root      root  80K Jul 23 23:11 tcpdump.log.4
-rw-r--r--  1 root      root  96K Jul 23 23:12 tcpdump.log.5
-rw-r--r--  1 root      root 112K Jul 23 23:13 tcpdump.log.6
OK, that agrees with the theory. Now, let's check whether these are sparse files:

od -b tcpdump.log

It reveals that indeed all the initial bytes are zeros:

Code:
0000000  000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000
*
0240000  061 070 065 067 040 ...
Yup, that is also as predicted.

The astute observer will also notice that the tcpdump.log.# file sizes are not exactly 1024 bytes apart; they jump according to the I/O buffer size and when it is flushed. On my system, the size delta between consecutive tcpdump.log.# files is 16K (some runs show 32K or even 64K deltas, depending on timing and on whether -l is used with tcpdump).
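The file-pointer effect is easy to reproduce without tcpdump at all; here is a sketch using a throwaway /tmp path and the shell's own redirection:

```shell
# Reproduce the copytruncate hole: a writer's file offset survives truncation.
f=/tmp/sparse-demo.log
exec 3> "$f"          # "writer" opens the file; its offset is 0
printf 'AAAA' >&3     # offset is now 4
: > "$f"              # another process truncates the file (what copytruncate does)
printf 'BBBB' >&3     # the writer resumes at offset 4, leaving a 4-byte hole
wc -c < "$f"          # 8: four NUL bytes, then "BBBB"
exec 3>&-
```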

As I have been trying to tell you, you cannot force another process to change or reset its internal file pointer. I'll state it again: two unrelated processes CANNOT write to the same file at the same time, because the results are undefined. I hope you now understand this basic lesson.

So you need a different approach.

One approach is to write your own read-n-rotate program that reads its STDIN, writes it out, and rotates its output file every N bytes. This can easily be done in C, perl, or your language of choice. That way, you control the open and close of the output files.
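A minimal sketch of such a filter in the shell (the function name and the tiny demo limit are illustrative; a C or perl version would handle binary data and buffering better):

```shell
# read-n-rotate: copy stdin to an output file, rotating every $1 bytes.
# Because this filter owns the open/close of each output file, there is
# no second writer racing with a log rotator.
rotate_filter() {
    max=$1 out=$2 n=1 count=0
    while IFS= read -r line; do
        printf '%s\n' "$line" >> "$out"
        count=$((count + ${#line} + 1))
        if [ "$count" -ge "$max" ]; then
            mv "$out" "$out.$n"     # finished file becomes .1, .2, ...
            n=$((n + 1))
            count=0
        fi
    done
}

# Demo with a 10-byte limit instead of live tcpdump output:
rm -f /tmp/rot-demo.log*
printf 'one\ntwo\nthree\nfour\n' | rotate_filter 10 /tmp/rot-demo.log
```

Hooked up to the real capture it would look something like `tcpdump -l not ssh 2>&1 | rotate_filter 1048576 /tmp/tcpdump.log` (-l asks tcpdump to line-buffer its stdout).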

Another approach is to use tcpdump's -w, -C, and -W options to create N files, each with a specified maximum size. tcpdump will automatically close one file and open the next file for output. Using -W lets you create a ring of files (e.g. 1, 2, 3, 1, 2, 3, ...); when tcpdump has finished writing a file, it advances to the next. You are then safe to read the finished file with tcpdump -r, and you can delete it too. How do you know when to read the next file? Answer: after tcpdump has created file (N+1) mod the maximum number of files you specified with -W. This is how you manage a ring buffer with one writer and one reader (the producer/consumer problem in computer science).
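The consumer-side bookkeeping is just modular arithmetic; a sketch (the capture command in the comment is illustrative, and exact -C/-W numbering varies by tcpdump version):

```shell
# Hypothetical producer: a ring of 3 dump files of ~1 MB each
# (-C counts in millions of bytes):
#   tcpdump -w /tmp/dump -C 1 -W 3
# Consumer: once the writer has moved on to slot (i + 1) mod W,
# slot i is complete and safe to read with tcpdump -r (or to delete).
W=3                      # ring size, matching -W 3
i=2                      # the slot the writer just finished
next=$(( (i + 1) % W ))
echo "$next"             # 0: the writer wraps back to the first slot
```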

Good luck!
 
  


Tags
logrotate, tcpdump

