SARG buffer overflow detected
Hello everyone,
this is another buffer overflow from SARG. I've been searching the web and found plenty of similar reports, but none quite matches mine, so here is what's happening to me.
So, I'm trying to create a monthly report covering the whole month, from the first day to the last. For that, I fetch all the log files from my remote servers and process them all in a single sarg run.
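In case it matters, the collection step looks roughly like this (hostnames, paths, and the date range are made-up examples, and I'm assuming a sarg version that accepts -l more than once; otherwise concatenate the logs into one file first):

for h in proxy1 proxy2; do
    rsync -a "$h:/var/log/squid/access.log*" /srv/squidlogs/
done
# sarg decompresses .gz logs itself before pre-sorting them
sarg -o /var/www/sarg/monthly -d 01/10/2011-31/10/2011 \
     -l /srv/squidlogs/access.log-proxy1.gz \
     -l /srv/squidlogs/access.log-proxy2.gz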
Anyway, Sarg starts off fine: it decompresses the logs, pre-sorts them, and even begins generating and moving HTML files to the output folder where the report should be stored. But before it gets to generating the indexes, I get this:
*** buffer overflow detected ***: /usr/bin/sarg terminated
======= Backtrace: =========
/lib64/libc.so.6(__fortify_fail+0x37)[0x334f2ff617]
/lib64/libc.so.6[0x334f2fd500]
/lib64/libc.so.6[0x334f2fc959]
/lib64/libc.so.6(_IO_default_xsputn+0xc9)[0x334f273899]
/lib64/libc.so.6(_IO_vfprintf+0xcf9)[0x334f2449e9]
/lib64/libc.so.6(__vsprintf_chk+0x9d)[0x334f2fc9fd]
/lib64/libc.so.6(__sprintf_chk+0x7f)[0x334f2fc93f]
/usr/bin/sarg[0x4039c8]
/usr/bin/sarg[0x41000c]
/usr/bin/sarg[0x40d7a7]
/usr/bin/sarg[0x40964f]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x334f21ecdd]
/usr/bin/sarg[0x4029a9]
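If I'm reading the backtrace right, the abort comes from __sprintf_chk(), which is what glibc's _FORTIFY_SOURCE turns sprintf() into when the compiler knows the destination buffer's size: the moment the formatted output would overrun the buffer, __fortify_fail() kills the process. So some sprintf() inside sarg is being handed a string longer than the buffer it writes into. A minimal reproduction of that class of crash (the buffer size, string, and file path are made up; this is not sarg's actual code):

cat > /tmp/ovf.c <<'EOF'
#include <stdio.h>
int main(void)
{
    char buf[16];  /* fixed-size destination, size known to the compiler */
    /* With _FORTIFY_SOURCE, gcc emits __sprintf_chk() here; the output
       is longer than 16 bytes, so glibc aborts at runtime */
    sprintf(buf, "%s", "a_string_much_longer_than_sixteen_bytes");
    return 0;
}
EOF
gcc -O2 -D_FORTIFY_SOURCE=2 /tmp/ovf.c -o /tmp/ovf && /tmp/ovf
# -> *** buffer overflow detected ***: /tmp/ovf terminated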
So, after the crash, my output dir contains a folder for every IP address that accessed squid, named "AA_BB_CC_DD". Each of them holds HTML pages listing the pages that IP accessed, but there is no AA_BB_CC_DD.html, which should be the index of the folder. Also, the index.html of the folder where I keep the monthly reports is not regenerated. And in the temp dir there are two files per IP storing some statistics, named AA_BB_CC_DD.day and AA_BB_CC_DD.txt.
Is there a way I can resume SARG from where it stopped? Or a way to see what exactly the problem is? I've hit other buffer overflow errors before: some were caused by TABs in the usertab file, another by a line in the squid access log. But all of those crashed sarg much earlier in the run. This time is different. Any ideas?!
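In case it helps anyone suggest something, here's how I plan to dig further (a sketch, assuming a RHEL-style system like the /lib64 paths suggest; the debuginfo package name and availability may differ on other distros):

ulimit -c unlimited           # let the abort leave a core dump
debuginfo-install sarg        # debug symbols, if the repo provides them
sarg ...                      # re-run with the usual options until it crashes
gdb /usr/bin/sarg core        # load the dump (the file may be core.<pid>)
(gdb) bt                      # the frame just above __sprintf_chk is the culprit

With symbols, the backtrace should name the exact source line, and from there it should be possible to tell which log entry or file name is blowing the buffer.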