Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
I am just wondering why so many processes write to log files.
One of the biggest bottlenecks in any computer system is the hard drive. As programmers we try to optimise away the CPU's wait states by working in memory as much as possible, minimising reads from and writes to the hard drive, yet applications just seem to write to files like there is no tomorrow.
It's not even funny how often my computer lags while showing 96% idle CPU and 4 GB of free RAM, just because my applications are waiting to read from or write to my hard drive.
As far as I can see, these logs are just wasting one of the most precious resources on your computer.
I don't know the complete mechanics behind it, but I still think there should be better coordination between the hard drive and memory.
Just to clarify, I wrote this on my iPhone, so there may be some mistakes.
The main reason for writing to log files is to leave useful information to the developers/users about the application status, especially in case of errors.
Cutting out logging functionality may lead to some performance improvement (remember that, while a process is waiting for its I/O operations to finish, other processes can be put in execution), but it would leave you completely in the dark in case of a serious error in your application, e.g. an error which completely freezes the GUI or the console.
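To make the trade-off concrete, here is a minimal sketch using Python's standard logging module (the thread is language-agnostic; the log path below is a temporary stand-in, not a real application's location). A couple of log lines cost a little I/O, but after a crash they are often the only record left:

```python
import logging
import os
import tempfile

# Hypothetical log location for illustration; a real app would log under /var/log or similar.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")

logging.basicConfig(
    filename=log_path,
    level=logging.INFO,  # skip DEBUG noise to keep disk writes down
    format="%(asctime)s %(levelname)s %(message)s",
    force=True,          # reset any prior logging configuration
)

logging.info("service started")
try:
    1 / 0  # stand-in for a real bug
except ZeroDivisionError:
    # Without this line, a post-mortem would have nothing to go on.
    logging.exception("request handler crashed")

with open(log_path) as f:
    contents = f.read()
```

Note that `logging.exception` records the full traceback, so even if the process dies immediately afterwards, the log tells you what went wrong and where.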
Like 414N said, what you call "wasted resources" may be important to others. I don't know if your question should be seen in the context of your specific setup or if you're trying to influence things generally; generally speaking, the kernel is quite good at automagically optimizing reads and writes. If you need clients to get more out of your application, you could set hardware/software requirements and offer performance improvements ranging from the type of hardware (RAM, RAID or SSD), spreading writes over different controllers, the type of file system (ext4 vs ext2 or XFS), file system journaling overhead, disabling disk buffers, tune2fs (or the equivalent for other file systems), any related sysctls, an RT kernel, I/O nicing and process prioritisation, to syslog tuning and slimming. Any suggestions you offer should be backed by before-and-after SAR statistics to make sense.
as programmers we try to optimise away the CPU's wait states by working in memory as much as possible
So IT'S YOUR FAULT that applications require so much RAM these days!
That said, it's not normal for everyday hard-disk writes to cause noticeable amounts of lag, especially not if the writes are from logging. Seriously, how heavy do you think the disk activity caused by logging is? I would estimate it is significantly less than the disk activity caused by downloading Linux distributions over BitTorrent, and you can do that with no problem, can't you?
BitTorrent would be a major influence in that too...
To be honest, I haven't really noticed any lag in Linux down to I/O; I don't get any lag there at all.
I'm looking more at Windows: applications constantly crash when my RAM and CPU are free/idling but my I/O is through the roof!
I just thought that with hundreds of processes writing logs to hundreds of log files, something could be done better.
I'm not saying you should get rid of log files; I was saying they could be implemented better. I personally like the sound of an additional log stream, which would mean logging could be directed to different devices, e.g. a flash drive.
You're not really avoiding the data bus, but at least you're splitting the log data off from your main drive, so logging and other intensive tasks can both run at full speed.
Hell, you could even just keep your logs in main memory, but I think in the end that would cause a bigger disruption than having them on your HDD.
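For what it's worth, most logging frameworks already offer something like this "log stream" idea. A sketch using Python's `logging.handlers.MemoryHandler` (the file path is a temporary stand-in for a flash-drive mount): records accumulate in RAM and are only written to the target device when the buffer fills or something at ERROR level arrives, so routine chatter never touches the drive.

```python
import logging
import logging.handlers
import os
import tempfile

# Stand-in path; imagine this sits on a flash drive mounted separately from the main disk.
log_path = os.path.join(tempfile.mkdtemp(), "buffered.log")

target = logging.FileHandler(log_path)
# Hold up to 100 records in memory; flush to the file only on ERROR or when full.
memory = logging.handlers.MemoryHandler(
    capacity=100, flushLevel=logging.ERROR, target=target
)

log = logging.getLogger("demo")
log.propagate = False
log.setLevel(logging.DEBUG)
log.addHandler(memory)

log.info("routine event")               # stays in RAM
size_before = os.path.getsize(log_path)  # file exists but is still empty

log.error("something broke")             # flushLevel reached: buffer drains to disk
memory.flush()
size_after = os.path.getsize(log_path)
```

The design choice here is the same trade-off the thread is circling: buffering in RAM removes the per-message disk write, at the cost of losing the buffered records if the machine dies before a flush.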
You seem to be forgetting about caches. Do you really think that, every time a single process tries to write to a log file (or any other kind of file), the OS performs the write immediately?
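A quick way to see this for yourself (Python shown, but the layering applies in any language): a `write()` call usually lands in a user-space buffer first, then in the kernel's page cache, and only reaches the platter on a flush/fsync or when the OS decides to write back on its own.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cache_demo.txt")

f = open(path, "w")           # default text mode: user-space buffering (typically 8 KiB)
f.write("one log line\n")     # sits in the process's own buffer, not even in the kernel yet

on_disk_early = os.path.getsize(path)  # still 0: nothing has been handed to the OS

f.flush()                     # hand the data to the kernel (page cache)
os.fsync(f.fileno())          # ask the kernel to commit it to the device itself
f.close()

on_disk_late = os.path.getsize(path)
```

So a hundred processes each appending a line to a log do not trigger a hundred seeks; the kernel batches those writes and schedules them together.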
Regarding logs on flash/RAM drives: on Linux you could mount /var/log on a device other than a hard disk partition, but this has to be looked at very cautiously...
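As an illustration only (the size and options here are made up, not a recommendation), an /etc/fstab entry for a RAM-backed /var/log might look like the fragment below. The caution is warranted precisely because a tmpfs mount evaporates on every reboot or crash, taking the logs you most needed with it:

```
# /etc/fstab — tmpfs-backed /var/log: fast and RAM-resident, but lost on reboot
tmpfs   /var/log   tmpfs   defaults,noatime,size=64m   0   0
```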