I have multiple processes (writers) running on a Linux machine (the processes are written in Perl, C++, PHP5 and Java) which generate messages.
I have one process (the reader), which can be written in any language, that takes each message and transfers it to a remote server.
The aim is that the writers write messages which the reader must read on a near-realtime basis (at most a minute of delay). What approach should I choose for this kind of job?
I am thinking of a file-based mechanism where each writer appends to the same file and the reader reads the file every minute, keeping track of the file size and the maximum amount of data to read in a one-minute slot. But this has problems like:
1. How do I achieve multiple processes appending to the same file without data loss and without a performance impact on the writers? (A sketch of one way to do this follows below.)
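If you do stick with a single shared file, one common way to handle point 1 is to open the file with O_APPEND and emit each complete message as a single write() call: POSIX guarantees that O_APPEND positions every write at the current end of file atomically, so concurrent writers won't clobber each other, and keeping one message per write() avoids interleaved lines in practice on local filesystems. A minimal C++ sketch of a writer; the log path and message format are just placeholders:

Code:
// writer.cpp - append one complete message per write() call to a shared log.
// The log path and message format are just placeholders.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <ctime>
#include <string>

int main() {
    // O_APPEND: the kernel positions every write() at the current end of the
    // file, so several writer processes can share it without clobbering each other.
    int fd = open("/var/tmp/messages.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { std::perror("open"); return 1; }

    // One message == one line == one write() call, so lines stay whole.
    std::string line = std::to_string(std::time(nullptr)) + " pid=" +
                       std::to_string(getpid()) + " hello from a writer\n";
    if (write(fd, line.data(), line.size()) != static_cast<ssize_t>(line.size()))
        std::perror("write");        // short write or error: the message may be lost

    close(fd);
    return 0;
}

The Perl, PHP and Java writers can do the same thing, as long as each of them opens the file in append mode and writes a whole line at a time.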
My first instinct would be to use multiple files, and have the reader poll each of them in turn, looking for a timestamp to keep the operations in order. That gets away from corrupting a single file with multiple writers.
Yes, I gave that a thought. The approach I came up with for multiple files is this:
Each process writes to a file named after its PID, ensuring mutual exclusion amongst processes. But reading the files is a little too complex in this case, because:
1. The reader needs to maintain a pointer (offset) into every open file.
2. The reader must continuously poll for any newly created file.
3. The reader must maintain multiple recovery logs in case of a system crash; the recovery log will contain the reader's position in each file.
Apart from that, how will data that has already been read be erased from the files? (A toy reader covering points 1-3 is sketched below.)
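Just to make points 1-3 concrete, here is a toy C++17 reader that rescans a spool directory on every pass, remembers a byte offset per file, and rewrites a small recovery file with those offsets. The directory and file names (/var/tmp/spool, /var/tmp/offsets.log) are made up for the example, and it assumes the spool directory already exists:

Code:
// reader.cpp - toy reader for per-PID spool files: keeps a byte offset per
// file, forwards complete lines, and persists the offsets as a recovery log.
// All paths here are hypothetical; the spool directory must already exist.
#include <chrono>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <map>
#include <string>
#include <thread>

namespace fs = std::filesystem;

static const fs::path kSpoolDir  = "/var/tmp/spool";       // one file per writer PID
static const fs::path kOffsetLog = "/var/tmp/offsets.log"; // recovery log (point 3)

// Load "path offset" pairs saved by a previous run, so a restart resumes
// where the last run stopped.
std::map<std::string, long long> load_offsets() {
    std::map<std::string, long long> off;
    std::ifstream in(kOffsetLog);
    std::string path;
    long long pos;
    while (in >> path >> pos) off[path] = pos;
    return off;
}

void save_offsets(const std::map<std::string, long long>& off) {
    std::ofstream out(kOffsetLog, std::ios::trunc);
    for (const auto& [path, pos] : off) out << path << ' ' << pos << '\n';
}

int main() {
    auto offsets = load_offsets();
    for (;;) {
        // Point 2: rescanning the directory picks up files from new writers.
        for (const auto& entry : fs::directory_iterator(kSpoolDir)) {
            if (!entry.is_regular_file()) continue;
            const std::string path = entry.path().string();

            std::ifstream in(path);
            in.seekg(offsets[path]);               // point 1: resume at the saved offset
            std::string line;
            while (std::getline(in, line)) {
                if (in.eof()) break;               // trailing partial line: retry next pass
                std::cout << "forwarding: " << line << '\n';  // send to the remote server here
                offsets[path] = in.tellg();
            }
        }
        save_offsets(offsets);                     // point 3: persist progress for crash recovery
        std::this_thread::sleep_for(std::chrono::seconds(60));
    }
}

Erasing data that has already been read is the awkward part of this design: truncating a file that a writer still holds open is racy, so in practice you end up rotating files once a writer exits or just leaning on the reader's offsets, which is one more argument for the pipe approach suggested next.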
I'd look at a named pipe (FIFO) instead: that way you could have the data sit in the pipe until it's read by the reader, at which point it's gone. No need for any sort of internal pointers, and if the reader goes offline, the pipe persists until it comes back up.
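For what it's worth, the pipe version really is small. Below is a C++ sketch of both ends of a named pipe (FIFO); the FIFO path is made up for the example. A single write() of at most PIPE_BUF bytes to a pipe is atomic, so short lines from different writer processes won't interleave:

Code:
// fifo_demo.cpp - writer and reader ends of a named pipe (FIFO).
// Run as "./fifo_demo write" or "./fifo_demo read"; the FIFO path is made up.
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>
#include <fstream>
#include <iostream>
#include <string>

static const char* kFifo = "/var/tmp/messages.fifo";

int main(int argc, char** argv) {
    // Create the FIFO once; EEXIST just means another process already made it.
    if (mkfifo(kFifo, 0666) < 0 && errno != EEXIST) {
        std::perror("mkfifo");
        return 1;
    }

    if (argc > 1 && std::string(argv[1]) == "write") {
        // Writer side: open() blocks until a reader has the FIFO open, then a
        // single write() of at most PIPE_BUF bytes is atomic, so small lines
        // from different writer processes cannot interleave.
        int fd = open(kFifo, O_WRONLY);
        if (fd < 0) { std::perror("open"); return 1; }
        std::string line = "hello through the fifo\n";
        write(fd, line.data(), line.size());
        close(fd);
    } else {
        // Reader side: data vanishes from the pipe as it is read, so there are
        // no offsets to remember and nothing to erase afterwards.
        std::ifstream in(kFifo);
        std::string line;
        while (std::getline(in, line))
            std::cout << "forwarding: " << line << '\n';   // ship to the remote server here
    }
    return 0;
}

The Perl, PHP and Java writers need nothing special either: once the FIFO exists it looks like an ordinary file path, so opening it for writing with each language's normal file API is enough. One caveat worth noting: opening the FIFO for writing blocks until a reader has it open, and the kernel pipe buffer is small (typically 64 KB), so messages don't queue up on disk the way they do with files.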
The writers can be in Perl, PHP and Java too ... is it easy to implement pipes from those languages as well?