Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux, and any language is fair game.
Suppose a file is being written to, and at the same time it gets moved by something like:
Code:
mv myfile myfile2
The idea is that the move happens after the writing has started and before it has ended. I know that moving (with "mv") within the same filesystem only rewrites a directory entry, leaving the inode itself untouched (and thus is fast), but I wonder what exactly would happen. Of course I could use a lot of workarounds, locks, and so on; it's just this particular case I'm interested in. I'm sorry, I have no experience with C, so I don't know how open() or write() behave here.
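For what it's worth, this is easy to try in the shell itself (a sketch with made-up filenames, assuming a local filesystem, where mv is a rename and not a copy). The writer's file descriptor follows the inode, not the name, so the output simply continues into the renamed file:

```shell
# Sketch: rename myfile while a background writer still has it open.
# The writer's fd follows the inode, so every line lands in myfile2.
( for i in 1 2 3 4 5; do echo "line $i"; sleep 1; done ) > myfile &
sleep 2
mv myfile myfile2      # the rename happens mid-write
wait                   # let the writer finish
wc -l myfile2          # all 5 lines, under the new name; myfile is gone
```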
I'll post this here; no need to start another thread for the next question - which is related to this discussion.
Let's say I continuously redirect the output of some command to a file -- say, "tail -f":
Code:
tail -f mainfile > outfile
From time to time, I need to take the outfile out of the way, and replace it with a newly created file. And, by all means, the tail process (or any other) should not be interrupted.
I tried redirecting to a symbolic link, then forcing the re-creation of that link, like this:
Code:
touch outfile1
ln -s outfile1 outfile-link
tail -f mainfile > outfile-link
-- and after some time --
touch outfile2
ln -sf outfile2 outfile-link
But, (you guessed), tail keeps writing to outfile1.
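For what it's worth, that is expected: the shell resolves the symlink once, at the moment it open()s the redirection target; from then on the file descriptor refers to the real file, and re-pointing the link is invisible to the already-running process. A Linux-specific way to watch this happen via /proc (a sketch reusing the filenames above):

```shell
touch outfile1 outfile2
ln -s outfile1 outfile-link
exec 3> outfile-link            # open the link for writing on fd 3
readlink /proc/$$/fd/3          # resolved target: .../outfile1
ln -sf outfile2 outfile-link    # re-point the link...
readlink /proc/$$/fd/3          # ...fd 3 still says .../outfile1
exec 3>&-                       # close fd 3
```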
Maybe instantly truncating outfile1 after copying it to outfile2 would do, though copying takes time and there would be no way of telling whether a new line was appended to outfile1 between those two actions.
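That copy-then-truncate idea would look roughly like this (it is the same trick logrotate's copytruncate option uses; outfile1/outfile2 are the names from above). As noted, anything appended between the two commands is lost, so it only fits when a tiny window of loss is acceptable:

```shell
printf 'old line\n' > outfile1   # demo setup: some previously collected output
cp outfile1 outfile2             # snapshot the current contents
: > outfile1                     # truncate in place; a writer's fd stays valid
# (a line appended between the cp and the truncation would be lost)
```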
If anyone has any idea I'll be most grateful - I hope someone knows more about what happens underneath the filesystem than I do; cheers.
You can't do that when your output goes directly to a file. The solution is to use an intermediary program.
Excellent idea! Then I'll just pipe the output to a while loop which reads it line by line and writes it to a file. Let's say:
Code:
middle() {
    while IFS= read -r MYLINE
    do
        # -- test some condition --
        echo "$MYLINE" >> outfile1   # (or outfile2)
    done
}

tail -f mainfile | middle
(or: tail -f mainfile > >(middle)
I'm not sure which one will work).
So far in my tests the read+echo loops are fast (bash uses its built-ins) so there shouldn't be a major performance penalty or delay (provided that the test inside the loop is a simple lock check and not more time-consuming, like du or grep).
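For the record, the "simple lock check" can be as cheap as one test per line -- a hypothetical sketch where a flag file (switch.flag, a made-up name) selects the target file:

```shell
middle() {
    while IFS= read -r line; do
        # cheap per-line "lock" check: a flag file selects the target file
        if [ -e switch.flag ]; then
            printf '%s\n' "$line" >> outfile2
        else
            printf '%s\n' "$line" >> outfile1
        fi
    done
}

# demo: two lines before the flag exists, one line after
rm -f switch.flag outfile1 outfile2
printf 'a\nb\n' | middle
touch switch.flag
printf 'c\n' | middle
```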
Well, I'm not altering the log files, but your advice is always welcome.
I'm rather continuously watching / backing them up.
What worries me a little: if I watch them with tail -f, they are checked for new lines once per second (the default). If one or more lines are added within that ONE second and a log-rotating mechanism interferes and moves the file (with tail following the file name), those lines will be lost. So I should probably use something like:
Code:
tail --follow=name --sleep-interval=0 mainfile
OK, this should check the file continuously, but I don't see any excessive load so far.
Just a thought - maybe I sound paranoid. Thanks.
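Side note on the same rotation concern: GNU tail also has -F, a shorthand for --follow=name --retry, which reopens the name after a rotate-by-rename. A minimal sketch (mainfile and captured are placeholder names):

```shell
printf 'hello\n' > mainfile
# -F = --follow=name --retry: tail reopens mainfile if it is moved/recreated
timeout 1 tail -F -n +1 mainfile > captured 2>/dev/null || true
```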