[SOLVED] Wondering about Resolution of a File Descriptor
In an embedded Linux product, I have to make sure file descriptors (maybe an incorrect term by the way, feel free to correct my verbiage) are resolved.
Example and resolution, followed by current problem
Example and Resolution
I create a data file, and when the device is attached to a server, the data file must be complete. An early problem was that the file was not guaranteed to be complete, and sometimes the server, which was using Samba, would see a partial file. I had to find a way to make sure the file descriptor was fully closed before anyone could attach to the server. My solution, currently working with no problems, is to write that file from within my application process under a temporary name, and then fork() and exec() to rename the file. The parent process waits for the child's termination signal, and as a result we've never had problems with partial files being visible through the Samba attachment.
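The same write-temporary-then-rename pattern can be sketched in shell; the paths below are hypothetical, and the sync call is an addition for on-disk durability, not something the original C program is stated to do:

```shell
# Hypothetical paths, for illustration only.
DATA_DIR=/tmp/demo-data
mkdir -p "$DATA_DIR"

# Write the full payload under a temporary name first...
printf 'field data\n' > "$DATA_DIR/report.tmp"
# ...flush dirty buffers to stable storage...
sync
# ...then rename. rename() within one filesystem is atomic, so a reader
# (e.g. a Samba client) sees either the old file or the complete new one,
# never a partial file.
mv -f "$DATA_DIR/report.tmp" "$DATA_DIR/report.dat"
```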
Current Problem
I have an upgrade process which runs before my application, and when it runs it usually replaces the application executable. One of the points we make to operators of the device is that if they yank the battery they're going to lose field data, so they're supposed to shut the device down cleanly, or grab their field data before they pull any batteries. I've now noticed that after an upgrade, the executable files are some of the files which can be left with unresolved descriptors. If someone yanks the battery at that point, the bad thing is that the system won't boot fully, because the application executable is in place but zero-sized.
I am half answering my question as I write this, but looking for thoughts on this. The script is not a C program; hence I'm not invoking fork(), exec(), and waitpid(). This is more or less what the script does.
Code:
tar -xvf <tarfile>;
mv -f <executable> /product-executable-directory;
exit 0;
Very simplified there; my main point is that I got into the habit of terminating my script lines with a semi-colon, and I'm not sure whether that matters. I'm only sure it matters when you write everything on one line. Next, I've never checked for a signal from within a script, and my assumption is that either the script, or the commands it invokes, would be resolved by the parent controlling it. For instance, the mv command would be closed by the completion of my upgrade script. My upgrade script runs from /etc/rcS.d as one of my startup links, but so do the other startup scripts. The fact that my executable files can get corrupted if power is lost before a system sync, and that a zombie appears when I upgrade but at no other time, seems like a strong indicator that something is amiss here.
Anybody have any general thoughts on how I'd make sure the "mv" operation has been fully resolved so if we suffer power interruption after the upgrade is complete that we don't end up with the situation I'm describing?
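One common way to address exactly this (pending writes lost on power interruption) is to call sync before declaring the upgrade complete. A minimal sketch, with hypothetical file names standing in for the real upgrade artifacts:

```shell
# Hypothetical paths standing in for the real upgrade artifacts.
DEST=/tmp/demo-product
mkdir -p "$DEST"
printf '#!/bin/sh\n' > /tmp/new-executable

mv -f /tmp/new-executable "$DEST/app"
# Ask the kernel to flush all dirty buffers to disk before the
# upgrade is considered done; without this, pulling power right
# after mv returns can still leave a zero-length file on disk.
sync
```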
(..) I got into the habit of terminating my script lines with the semi-colon, not sure if that matters versus not.
No, it doesn't matter; as you mentioned already, it only matters with one-liners, because the semi-colon is what separates commands there.
Quote:
Originally Posted by rtmistler
Next, I've never queried for a signal from within a script and my assumption is that either the script, or the commands it invokes would be resolved by the parent controlling it. For instance the mv command would be closed by the completion of my upgrade script.
IIRC the parent will wait for the child to exit, but you could test that by running an rc script which ends with "sleep 60s; exit".
Quote:
Originally Posted by rtmistler
Anybody have any general thoughts on how I'd make sure the "mv" operation has been fully resolved so if we suffer power interruption after the upgrade is complete that we don't end up with the situation I'm describing?
If your tarball only contains the executable (no paths), why don't you just remove the 'mv' and run tar directly?:
Code:
tar -C /product-executable-directory -xf <tarfile>; wait
Or maybe add a trap to ensure tar runs as last process in the script?:
This would run the tar command, wait for that process to complete, and then on exit run a check (not the best performance; it's just an example) to see whether the file is still open on some fd. (Let's hope one of our resident BASH gurus sees this thread to correct me.)
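The trap variant isn't shown above, so here is a hedged reconstruction of what it might look like; the paths, the stand-in tarball, and the lsof check are all illustrative:

```shell
#!/bin/sh
# Hedged reconstruction of the trap idea; all names are illustrative.
DEST=/tmp/demo-dest
mkdir -p "$DEST"

# Create a tiny tarball to stand in for <tarfile>.
printf 'payload\n' > /tmp/app
tar -C /tmp -cf /tmp/demo.tar app

checkfd() {
    # On exit, check (crudely, via lsof) whether the extracted
    # file is still held open on some file descriptor.
    lsof "$DEST/app" 2>/dev/null || true
}
trap checkfd EXIT

# tar is the last command, so the shell waits for it to complete
# before exiting, and only then does the EXIT trap run the check.
tar -C "$DEST" -xf /tmp/demo.tar
```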
Quote:
Originally Posted by unSpawn
If your tarball only contains the executable (no paths), why don't you just remove the 'mv' and run tar directly?
I do like this idea and will consider it. Two concerns: first, it may just move the zombie problem from the mv over to the tar; second, even though I'm not currently performing an md5sum to validate the files, I should be, because if the new files are bad and I untar them right over the existing, good, last-version files, that would be bad. My example was also simplified; there are a number of files being replaced by the upgrade process.
Quote:
Originally Posted by unSpawn
Or maybe add a trap to ensure tar runs as last process in the script?:
Firstly, it turned out not to be my upgrade script. I did a bit of searching, and running
Code:
ps -aux
showed me exactly which process was the zombie; therefore it was a non-script issue.
Secondly, a case of RTM, "Read The Manual", which I should know given those are my initials. I was unaware of the "wait" command for scripts, and further that you can identify jobs and such. This is a whole section in the bash manual: http://tldp.org/LDP/abs/html/x9612.html#EX39. Thanks for the thoughts.
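For anyone else who hadn't met it, the "wait" builtin blocks until background jobs have exited; only then is it safe to assume their output exists (on-disk durability still needs a sync). A minimal sketch:

```shell
#!/bin/sh
# Start two background jobs...
sleep 1 &
( printf 'done\n' > /tmp/demo-wait.txt ) &
# ...then block until every background child has exited.
wait
# Both jobs are guaranteed finished here, so the file is complete.
cat /tmp/demo-wait.txt
```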