I am hoping someone can help!
I have a client with a very busy, traffic-heavy site, for whom we keep the logs. Previously the logs were configured to rotate and compress each week; somehow when we switched to a new drive this functionality got obliterated. The current raw log file is enormous.
I tried doing the log rotation with another client, and it worked fine. The existing large access_log file was zipped up into two .gz files and the access_log file was reset to 0.
However, with this busy client I was not able to get the same results. This is the syntax I'm using:
/sbin/killall -HUP httpd
I wanted the logs to be compressed once they reach 1,000,000 bytes (about 1 MB) in size, and for the raw log file to be reset to empty at that point. When I ran this command, nothing happened. I was expecting the current raw log, a whopping 200,000,000 bytes (about 200 MB), to be zipped up into smaller files.
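For reference, this is the kind of rule I understand should produce that behavior. It's only a sketch, assuming the rotation is managed by logrotate and that the log path below is an example rather than the actual path on our drive:

/var/log/httpd/access_log {
    size 1M          # rotate once the raw log reaches ~1,000,000 bytes
    rotate 8         # keep eight old rotated copies (example count)
    compress         # gzip the rotated files
    missingok        # don't error if the log is absent
    notifempty       # skip rotation when the log is empty
    postrotate
        /sbin/killall -HUP httpd    # tell Apache to reopen its log files
    endscript
}

My understanding is that with a rule like this in place (e.g. under /etc/logrotate.d/), running logrotate with the -f flag would force an immediate rotation of the oversized log, but I may be missing a step.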
What am I missing to get this to execute properly?