Irregular tcpdump file sizes
I've recently configured the network at work so that all of our VoIP phones are routed through a (semi-)managed switch, and the switch port that carries our traffic to and from the internet is mirrored to a spare network interface on our Linux file server. I then use tcpdump to capture the RTP traffic so that, if we need to review a phone call, we can listen to it with Wireshark's new RTP playback feature.
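For context, the mirror interface (phonecap in the command below) doesn't need an IP address of its own; it only has to exist and be up, since tcpdump puts it into promiscuous mode itself. Roughly:

# Rough sketch of preparing the capture interface at boot (interface name
# taken from the tcpdump command below); the exact boot script may differ.
sudo ip link set phonecap up
# Optional -- tcpdump enables promiscuous mode on its own, but it can also
# be set explicitly:
sudo ip link set phonecap promisc on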
The tcpdump command runs at startup:
sudo tcpdump -n -s 0 -T rtp -vvv -W 240 -i phonecap -w /home/[User]/VoipCalls/phonecap%Y-%b-%e-%H-%M-%S.pcap -G 3600
This splits the dump files every hour (to make it easier to find specific calls) and limits the rotation to 240 files, so the oldest captures get overwritten after about ten days.
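For reference, a variant of the same command that restricts the capture to a UDP port range (10000-20000 here is only an example; the real range depends on the PBX) and uses %d instead of %e, since %e pads single-digit days with a space that ends up in the filename:

# Example only -- the UDP port range is a placeholder for the PBX's real
# RTP range, and %d avoids the space-padded day that %e produces.
sudo tcpdump -n -s 0 -T rtp -vvv -W 240 -i phonecap \
    -w /home/[User]/VoipCalls/phonecap%Y-%b-%d-%H-%M-%S.pcap \
    -G 3600 'udp portrange 10000-20000'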
The problem is that the file size varies wildly. Most captures with no calls are about 4 KB, and those with calls run to roughly 1 MB per minute of call, so an hour-long file should theoretically top out around 60 MB. Some files, however, are 13-16 GB for no apparent reason.
I've compared different captures, and some of the ~30 MB files have MORE packets and a larger average packet size than the 13 GB+ files, so where is the extra file size coming from? I'll try removing the -vvv option, but the verbose output shouldn't be written into the dump file, should it?
Is there any 'extra' data besides the packets themselves that gets written into the dump files, and that I could disable?
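To try to narrow down where the extra bytes are coming from, my plan is to compare a normal file against one of the oversized ones with capinfos (part of the Wireshark suite) and check the totals against the pcap file overhead; the filenames below are placeholders:

# Compare packet count (-c), total packet data (-d), file size (-s),
# capture duration (-u) and average packet size (-z) for a normal capture
# versus an oversized one (placeholder filenames):
capinfos -c -d -s -u -z /home/[User]/VoipCalls/phonecap-normal.pcap \
                        /home/[User]/VoipCalls/phonecap-huge.pcap

# As far as I understand the classic pcap format, a file contains nothing but
# a 24-byte file header plus, per packet, a 16-byte record header and the
# captured bytes. So:
#   Data size + 16 * Number of packets + 24
# should come out very close to File size; if it doesn't, something other
# than ordinary packet records is being written.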
I would upload an example dump file, but unfortunately I can't due to the sensitive nature of the captures and the fact that they belong to my company, not me.