Old 12-19-2007, 03:12 AM   #1
maenho
Member
 
Registered: Nov 2003
Location: Belgium
Posts: 81

Rep: Reputation: 15
fork() and writing to files


Dear all,

I'm working on a C++ application on Slackware 10.0. Because of a non-reentrant library I'm using in this project, I switched from pthreads to forked processes. However, I'm not really sure how to handle file output. Each child has a list of jobs; for each job it does some work and dumps the results (as two lines of text) into the same text file, so each child opens, writes and closes that same file many times. I use the fstream library for this purpose. When I was using pthreads, each thread tried to obtain a defined mutex lock and, if successful, wrote its results to the file.
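
For reference, a minimal sketch of that setup (the file name, job count and number of children are made-up assumptions): each forked child opens the shared results file in append mode, writes its two lines, and closes it again for every job.

Code:
#include <fstream>
#include <sstream>
#include <string>
#include <unistd.h>
#include <sys/wait.h>

// stand-in for the real per-job work: produce the two result lines
static std::string do_job(int job)
{
    std::ostringstream s;
    s << "job " << job << ": result line 1\n"
      << "job " << job << ": result line 2\n";
    return s.str();
}

int main()
{
    for (int c = 0; c < 4; ++c) {
        if (fork() == 0) {                            // child process
            for (int job = 0; job < 10; ++job) {
                std::ofstream out("results.txt", std::ios::app);
                out << do_job(job);                   // two lines per job
                out.close();                          // reopened for the next job
            }
            _exit(0);
        }
    }
    while (wait(NULL) > 0) {}                         // parent waits for all children
    return 0;
}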

Should I implement a similar locking mechanism (flock(), I believe it's called) when using forked processes? At the moment I'm using no locking at all and everything seems to work just fine. Am I just being lucky, or is this normal? If this is normal, would it be possible for each child to simply keep the file open the whole time and write to it whenever a job is finished? This would avoid the repeated opening and closing of the file and would therefore be more efficient. However, it would require some sort of queuing mechanism from the kernel, as it would be really bad if the output of one child got mangled up with the output of another child.

Could anybody shed some light on these matters?

thank you and friendly regards,

Steven
 
Old 12-19-2007, 06:41 AM   #2
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: Mint, Armbian, NetBSD, Puppy, Raspbian
Posts: 3,515

Rep: Reputation: 239
I believe you are safe as long as you are appending to the files.

The end-of-file position is the same for each fork'd file descriptor, and each append updates it (I think atomically).

Random access will be unsafe though.
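
For what it's worth, a small sketch of that append-only case (file name and messages are invented): with O_APPEND the kernel positions every write() at the current end of the file, so a two-line result written in a single write() call lands in one piece.

Code:
#include <fcntl.h>
#include <unistd.h>
#include <sys/wait.h>
#include <cstring>

int main()
{
    int fd = open("results.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return 1;

    if (fork() == 0) {                                // child
        const char *msg = "child line 1\nchild line 2\n";
        write(fd, msg, std::strlen(msg));             // appended at EOF in one call
        _exit(0);
    }

    const char *msg = "parent line 1\nparent line 2\n";
    write(fd, msg, std::strlen(msg));                 // also appended at EOF

    wait(NULL);
    close(fd);
    return 0;
}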
 
Old 12-19-2007, 05:36 PM   #3
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
I'd recommend doing an explicit lock just to be safe.
If you've only got a small number of processes and they write a small amount of data, you can get away without one, but it's not recommended. It's also possible (I think) you'll get interleaved results.
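
A rough sketch of what such an explicit lock could look like with flock() (the path and helper name are invented); each child serialises its write behind an exclusive lock:

Code:
#include <string>
#include <sys/file.h>   // flock()
#include <fcntl.h>      // open()
#include <unistd.h>     // write(), close()

// Append two result lines under an exclusive advisory lock.
void append_pair(const char *path, const std::string &two_lines)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return;
    flock(fd, LOCK_EX);                       // block until we own the lock
    write(fd, two_lines.data(), two_lines.size());
    flock(fd, LOCK_UN);                       // let the next child in
    close(fd);                                // closing also drops the lock
}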
 
Old 12-23-2007, 07:23 PM   #4
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Rep: Reputation: Disabled
Quote:
Originally Posted by bigearsbilly
As the EOF pointer is the same for each fork'd file descriptor, and each
is updated (I think atomically) with an append.
If you open the file in append mode (which also requires reading mode), all output will be written to the end regardless of how many processes are writing to the file. Without the append flag, though, that's not guaranteed.

Do the line pairs have to stay together in the file? If so, you should either use a file lock or switch the output buffer mode to "fully-buffered" with setvbuf and flush the buffer after outputting each pair of lines. Write operations are implicitly locking (in other words, they lock out writing by other processes), so if output is fully-buffered then all output is written at once when you flush the buffer. It will probably be line-buffered by default, which means it will write every time a newline is output, allowing for the remote possibility of line pairs not showing up together.
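
As an illustration of the setvbuf idea (buffer size and file name are arbitrary assumptions): with a fully-buffered stream, both lines of a pair sit in the buffer until one explicit flush pushes them to the file together.

Code:
#include <cstdio>

int main()
{
    static char buf[8192];
    FILE *out = std::fopen("results.txt", "a");
    if (!out)
        return 1;
    std::setvbuf(out, buf, _IOFBF, sizeof buf);   // fully-buffered, not line-buffered
    std::fputs("first line of the pair\n", out);
    std::fputs("second line of the pair\n", out);
    std::fflush(out);                             // both lines reach the file in one flush
    std::fclose(out);
    return 0;
}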

Keep in mind that file locks aren't mandatory unless the file system is mounted with the mand option (for mandatory locks) and the file has certain flags set (setgid without group-execute). You can't count on that, so your processes all need to honor a mutex-style locking system. I'd use fcntl to remain Unix-portable for file locks if you're going to use them.
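
A sketch of the fcntl() variant (the helper name is invented and the lock covers the whole file); every cooperating process has to take the same lock, since it is only advisory:

Code:
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// Take an exclusive advisory lock on the whole file, append the text,
// then release the lock again.
void locked_append(int fd, const char *text)
{
    struct flock lk;
    std::memset(&lk, 0, sizeof lk);
    lk.l_type   = F_WRLCK;                    // exclusive write lock
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;
    lk.l_len    = 0;                          // 0 means "to end of file"

    fcntl(fd, F_SETLKW, &lk);                 // wait until the lock is granted
    write(fd, text, std::strlen(text));
    lk.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &lk);                  // release
}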
ta0kira

Last edited by ta0kira; 12-23-2007 at 07:30 PM.
 
  

