Hi there,
I've written a small Perl script that connects to a TCP/IP socket, reads in chunks of data, and writes them to a FIFO. This script is invoked around 120 times, so I have 120 Perl scripts reading from 120 different TCP/IP sockets, all pushing their data into a single FIFO.
I've controlled access to the FIFO by locking it before writing to it. Like this for a single line:
Code:
flock($Index, LOCK_EX);
syswrite $Index, $Line;
flock($Index, LOCK_UN);
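One thing worth checking here: syswrite can fail, or write fewer bytes than you asked for (for example, when interrupted by a signal), and the snippet above never looks at its return value, so bytes can vanish without -w ever saying a word. A sketch of a locked write that loops until everything is out (write_locked and the die messages are my own, not from your script):

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Write $data to $fh under an exclusive lock, retrying on short writes.
sub write_locked {
    my ($fh, $data) = @_;
    flock($fh, LOCK_EX) or die "flock: $!";
    my $off = 0;
    while ($off < length $data) {
        my $n = syswrite($fh, $data, length($data) - $off, $off);
        die "syswrite: $!" unless defined $n;   # an unchecked failure here loses data silently
        $off += $n;                             # short write: resume where we left off
    }
    flock($fh, LOCK_UN) or die "flock: $!";
    return $off;
}
```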
and like this if I have a multi-line chunk of data:
Code:
flock($Index, LOCK_EX);
syswrite $Index, join "", @Array;
flock($Index, LOCK_UN);
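A related trap: the kernel only guarantees that a single write to a pipe or FIFO is atomic up to PIPE_BUF bytes (POSIX guarantees at least 512; Linux uses 4096). Once join "", @Array grows past that, one syswrite can be split and interleaved with another writer's output unless every one of the 120 writers really does take the flock on the same FIFO first. A quick way to check whether a chunk is over the limit (chunk_exceeds_pipe_buf is a made-up helper name, just for illustration):

```perl
use strict;
use warnings;
use POSIX qw(fpathconf _PC_PIPE_BUF);

# True if $chunk is too big for the kernel's atomic-write guarantee on $fh.
sub chunk_exceeds_pipe_buf {
    my ($fh, $chunk) = @_;
    my $pipe_buf = fpathconf(fileno($fh), _PC_PIPE_BUF);
    $pipe_buf = 512 if !defined $pipe_buf || $pipe_buf <= 0;   # fall back to the POSIX minimum
    return length($chunk) > $pipe_buf;
}
```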
At the other end of the FIFO, I have a process that reads from it and passes the data through a TCP/IP socket to whoever connects on a certain port.
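For what it's worth, a stripped-down sketch of that relay loop looks like this ($in and $out are placeholder handles, not names from the real program). Two things to watch on this side: if the TCP client reads slowly, syswrite blocks, the FIFO fills up, and all 120 writers stall behind the lock; and if the client disconnects, an unhandled SIGPIPE kills the relay with no Perl-level error at all, which would look exactly like chunks going missing with nothing reported.

```perl
use strict;
use warnings;

$SIG{PIPE} = sub { die "client went away: SIGPIPE\n" };   # don't die silently

# Copy everything from $in (the FIFO) to $out (the TCP client),
# retrying short writes so nothing is dropped.
sub relay {
    my ($in, $out) = @_;
    while (1) {
        my $n = sysread($in, my $buf, 65536);
        die "sysread: $!" unless defined $n;
        last if $n == 0;                        # every writer closed the FIFO
        my $off = 0;
        while ($off < $n) {
            my $w = syswrite($out, $buf, $n - $off, $off);
            die "syswrite: $!" unless defined $w;   # a slow or dead client shows up here
            $off += $w;
        }
    }
}
```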
So, basically I have a series of Perl scripts that combines 120 TCP/IP feeds into one feed.
Here is my problem:
Data seems to be getting corrupted, and Perl (with -w) isn't picking anything up. As far as the scripts are concerned, everything is working fine. However, chunks of data are going missing, especially during peak loads. I know this because the system connected at the other end complains about the data it's receiving.
My questions are:
- Are there any traps in trying to pipe all this data (and it can be a lot) into one FIFO?
- Should I consider using something other than a FIFO, and if so, what?
- Is there an open-source equivalent to my data combiner out there?
- Are there any other traps that could cause data loss without reporting memory, CPU, or data errors?
I need serious help here.
Thanks for reading.