This is not a question; I'm just posting this in case anybody else finds a use for it.
I've been frustrated with the fact that logrotate works separately from the logging process, which means when log files are moved, the move doesn't take effect until the logging process closes and re-opens its output file. Often this means having to reboot the machine, or at a minimum restart the logging process, neither of which is desirable in certain situations. logrotate often isn't available on small embedded systems either, where log rotation can be a critical issue due to size limitations on the filesystem.
So, I adapted some code that Dark_Helment wrote for another purpose a few years back (
link) so that it can handle in-line log rotation.
For processes that dump to stdout, rather than redirecting that output to a file with '>', you can pipe it to this program to handle the logging and rotation.
For processes that dump to specific filenames, you can replace its target filename with a named pipe (mkfifo), and then run this program to pull from the pipe and handle the logging and rotation.
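The named-pipe setup looks something like the sketch below. To keep the commands runnable as-is, `cat` stands in for logwrapper on the reader side; in real use you'd replace it with something like `logwrapper logfile.out 1048576 5 64 < "$tmp/app.log" &` (the application name and paths here are made up):
Code:

```shell
# Scratch directory for the demo
tmp=$(mktemp -d)
# Create a named pipe where the application expects its log file to be
mkfifo "$tmp/app.log"
# Reader side, in the background: in real use this would be logwrapper
# pulling from the pipe; cat stands in for it here
cat < "$tmp/app.log" > "$tmp/logfile.out" &
# Writer side: the application just opens the pipe like a normal file
echo "application log line" > "$tmp/app.log"
wait
cat "$tmp/logfile.out"
```

Both ends block on open until the other side connects, so the order of the two commands doesn't matter much as long as the reader is backgrounded.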
The program takes 4 command line arguments: 1) the name of the log file to write to, 2) the size limit of the log file in bytes, 3) the number of rotated backup files to keep (in addition to the current log), and 4) the buffer size to use for each read/write. Argument 4 is optional; if omitted, it defaults to 16 bytes. The program flushes after every write, so making the buffer too small increases CPU overhead, while making it too large delays output. For processes with very sparse output, the buffer can be set as small as 1 byte to flush after every character; for processes with very rapid output, it should be increased to a few kB to reduce CPU overhead.
An example calling sequence is:
Code:
logging_program arguments | logwrapper logfile.out 1024 5 1
This redirects the stdout of logging_program into log files, rotating at 1024 bytes (1 kB) and flushing the output after every character. With the backup count set to 5, six files exist in total: logfile.out is always the current log, and logfile.out.0 through logfile.out.4 are the backups, newest to oldest. When logwrapper first starts, it cycles any existing log files as necessary so they're not overwritten.
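The cycling that logwrapper does at startup (and again at every rotation) is equivalent to this shell sketch, shown here for the example above with 5 backups (filenames assumed):
Code:

```shell
# Shift existing backups up one slot, oldest first, so nothing is overwritten:
# logfile.out.3 -> logfile.out.4, .2 -> .3, .1 -> .2, .0 -> .1
for i in 3 2 1 0; do
    if [ -e "logfile.out.$i" ]; then
        mv "logfile.out.$i" "logfile.out.$((i+1))"
    fi
done
# Finally, the current log becomes the newest backup
if [ -e logfile.out ]; then
    mv logfile.out logfile.out.0
fi
```

Going oldest-first is what makes the cascade safe: each rename lands in a slot that has already been vacated (or was never occupied).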
If you want to run the program in nohup, you can use:
Code:
nohup sh -c "logging_program arguments 2>&1 | logwrapper logfile.out 1073741824 3 4096" > /dev/null 2>&1 &
This redirects both stdout and stderr from logging_program to logwrapper and runs the whole pipeline under nohup, detached from the shell. (Note the portable `> /dev/null 2>&1` form rather than bash's `&>`, since the command may not be run from bash.) logwrapper will rotate at 1 GB, keeping the current log plus 3 rotated backups, with a 4 kB read/write buffer to reduce overhead for rapid logging.
It can be compiled with your basic gcc, no special flags necessary:
Code:
gcc -o logwrapper logwrapper.c
If you find this useful, run into problems, or have any suggestions for improvement, please share. I am by no means an expert C programmer, so if you can think of a better way of doing things, you're probably right.
Code:
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int i;
	// Control variables
	char *filename;
	char *filename_ext_src, *filename_ext_dest;
	int fnameptr_src, fnameptr_dest;
	unsigned long size;
	unsigned int number;
	// I/O variables
	FILE *outputFile;
	unsigned long byteCounter;
	size_t bytesRead;
	unsigned long buf_size;
	// Dummy variable for stat
	struct stat buf;

	// Print the usage
	if(argc < 4)
	{
		printf("Usage: logwrapper [filename] [size] [number] [buffer_size]\n");
		return 1;
	}

	// Get the command line arguments
	// strtoul instead of atoi so sizes larger than INT_MAX (e.g. 1 GB+) parse correctly
	filename = argv[1];
	size = strtoul(argv[2], NULL, 10);
	number = (unsigned int)strtoul(argv[3], NULL, 10);

	// Number of bytes to read from stdin before dumping them to the output file
	// Too small and it will load down the CPU during fast output
	// Too big and the code will introduce a buffering delay
	if(argc >= 5)
	{
		buf_size = strtoul(argv[4], NULL, 10);
	}
	else
	{
		buf_size = 16;
	}
	if(buf_size == 0) buf_size = 1; // guard against a zero-length read buffer
	char buffer[buf_size];

	// Set up some temporary holders for our extended filenames
	filename_ext_src = malloc(strlen(filename)+1+4); // +4 handles extensions up to .999
	filename_ext_dest = malloc(strlen(filename)+1+4);
	if(filename_ext_src == NULL || filename_ext_dest == NULL)
	{
		fprintf(stderr, "ERROR: Out of memory.\n");
		return 1;
	}

	// Loop backwards from number-1 to 0, moving existing backups up one slot (.3 to .4, .2 to .3, etc.)
	// The cast keeps the loop well-defined if number is 0
	for(i=(int)number-1;i>=0;i--)
	{
		// Copy the filename to the src and dest variables
		fnameptr_src = sprintf(filename_ext_src,"%s",filename);
		fnameptr_dest = sprintf(filename_ext_dest,"%s",filename);
		// Tack on the extension (the bare filename is the source when i == 0)
		if(i>0) fnameptr_src += sprintf(&filename_ext_src[fnameptr_src],".%d",i-1);
		fnameptr_dest += sprintf(&filename_ext_dest[fnameptr_dest],".%d",i);
		// If the src file exists, move it to the dest
		if(stat(filename_ext_src,&buf) == 0) rename(filename_ext_src,filename_ext_dest);
	}

	// Initialize our output file
	byteCounter = 0;
	outputFile = fopen(filename,"w");
	if(outputFile == NULL)
	{
		fprintf(stderr, "ERROR: Unable to open \"%s\" for writing.\n",filename);
		return 1;
	}

	// Loop forever!!! muahahaha
	while(1)
	{
		// fread does a blocking read of up to buf_size bytes from stdin;
		// 0 bytes read means stdin has been closed and we should shut down
		bytesRead = fread(buffer, sizeof(char), buf_size, stdin);
		if(bytesRead == 0) break;
		// Write to our output file and flush so the log is always current
		fwrite(buffer, sizeof(char), bytesRead, outputFile);
		fflush(outputFile);
		// Increment our counter and check the size threshold
		byteCounter += bytesRead;
		if(byteCounter >= size)
		{
			// Time to shift our log files:
			// close the current file and reset the counter...
			fclose(outputFile);
			byteCounter = 0;
			// ...shift the backups like we did at startup (.3 to .4, .2 to .3, etc.)...
			for(i=(int)number-1;i>=0;i--)
			{
				fnameptr_src = sprintf(filename_ext_src,"%s",filename);
				fnameptr_dest = sprintf(filename_ext_dest,"%s",filename);
				if(i>0) fnameptr_src += sprintf(&filename_ext_src[fnameptr_src],".%d",i-1);
				fnameptr_dest += sprintf(&filename_ext_dest[fnameptr_dest],".%d",i);
				if(stat(filename_ext_src,&buf) == 0) rename(filename_ext_src,filename_ext_dest);
			}
			// ...and open a fresh output file
			outputFile = fopen(filename,"w");
			if(outputFile == NULL)
			{
				fprintf(stderr, "ERROR: Unable to open \"%s\" for writing.\n",filename);
				return 1;
			}
		}
	}

	// Clean up on EOF (we break out of the loop above rather than returning,
	// so these actually execute)
	fclose(outputFile);
	free(filename_ext_src);
	free(filename_ext_dest);
	return 0;
}