The following code snippet produces weird behavior on my FC4 machine.
The amount of data that the FIFO can hold appears to depend on the number of bytes written per write() call (varying from ~62K to 65535 bytes).
This program works fine on a SuSE 9 distro (it always fills the FIFO until the space available is less than the number of bytes I want to write).
I am guessing this is because of changes in the Linux kernel. Any suggestions on how I can fix the problem are greatly appreciated.
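The original snippet was not quoted later in the thread, so here is a hypothetical reconstruction of the kind of test being described: a non-blocking writer that pumps argv[1]-byte chunks into a FIFO until the kernel returns EAGAIN, then reports how much it queued. The path /tmp/myfifo and all names are placeholders, not the poster's actual code.

Code:
/*
 * Hypothetical reconstruction -- not the poster's actual code.
 * A reader must already have the FIFO open, or open() fails with ENXIO.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char data[1024];
    size_t size;
    long total = 0;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <bytes-per-write>\n", argv[0]);
        return 1;
    }
    size = (size_t)atol(argv[1]);
    if (size == 0 || size > sizeof(data)) {
        fprintf(stderr, "size must be 1..%zu\n", sizeof(data));
        return 1;
    }

    fd = open("/tmp/myfifo", O_WRONLY | O_NONBLOCK);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    memset(data, 'x', sizeof(data));

    for (;;) {
        ssize_t j = write(fd, data, size);
        if (j == -1) {
            if (errno == EAGAIN)    /* FIFO is full: stop and report */
                break;
            perror("write");
            return 1;
        }
        total += j;
    }
    printf("FIFO held %ld bytes with %zu-byte writes\n", total, size);
    close(fd);
    return 0;
}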
See what happens with fpathconf(fd, _PC_PIPE_BUF), or PIPE_BUF from <limits.h>.
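For example, a minimal sketch that prints both values (assuming a FIFO at the placeholder path /tmp/myfifo with a reader already attached):

Code:
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/myfifo", O_WRONLY | O_NONBLOCK);
    if (fd == -1) {
        perror("open");             /* ENXIO if no reader is attached */
        return 1;
    }
    printf("PIPE_BUF from <limits.h>: %d\n", PIPE_BUF);
    printf("fpathconf(_PC_PIPE_BUF):  %ld\n", fpathconf(fd, _PC_PIPE_BUF));
    close(fd);
    return 0;
}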
Can you please show the FIFO reader's source?
It may actually depend on the number of bytes read...
There are many weird things in this code anyway:
1- The size parameter should not be > 1024 (sizeof(data)).
2- You must test against -1 to correctly handle errors from write(). Change the "if (j > 0)" to "if (j >= 0)".
Quick note: testing with < 0 isn't strictly correct either. write() returns a signed integer (ssize_t), but the third parameter is unsigned (size_t), so if the implementation supports counts above SSIZE_MAX, return values between SSIZE_MAX and SIZE_MAX - 2 would appear "negative". It's better to compare against -1 explicitly, and to get used to that early; see the sketch below.
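Putting those two points together, a minimal sketch of the suggested check (checked_write is a hypothetical helper name, not from the original code):

Code:
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Compare write()'s return against -1 explicitly instead of
 * relying on "< 0" or "> 0". */
ssize_t checked_write(int fd, const void *buf, size_t count)
{
    ssize_t j = write(fd, buf, count);
    if (j == -1) {
        if (errno != EAGAIN)    /* EAGAIN just means "FIFO full" here */
            perror("write");
        return -1;
    }
    return j;                   /* may be a short write: 0 <= j <= count */
}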
One of the guys here figured out the math behind this.
It appears that the write() system call continues using PIPE_BUF as the boundary.
Consider argv[1] == 100.
With PIPE_BUF = 4096, each page holds 40 whole writes of 100 bytes (4000 bytes), so write() leaves 96 bytes unused every time the PIPE_BUF boundary is reached. Across the buffer's 16 pages, that is 96 * 16 = 1536 bytes left unused in the FIFO.
64000 + 1536 = 65536.
The math works for other argv values as well (I tested 335 and 336 too); the accounting is sketched below.
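The accounting can be tabulated with a few lines of C (assuming PIPE_BUF = 4096 and a 16-page, 65536-byte pipe buffer, as discussed above):

Code:
#include <stdio.h>

int main(void)
{
    const long page  = 4096;    /* PIPE_BUF on this kernel */
    const long pages = 16;      /* 16 * 4096 = 65536-byte pipe buffer */
    const long sizes[] = { 100, 335, 336, 512 };

    for (int i = 0; i < 4; i++) {
        long n        = sizes[i];
        long per_page = (page / n) * n;   /* whole writes fit per page */
        long wasted   = page - per_page;  /* tail bytes left unused    */
        printf("n=%4ld: capacity %5ld bytes, %3ld wasted per page\n",
               n, per_page * pages, wasted);
    }
    return 0;
}

For n = 100 this prints a capacity of 64000 bytes with 96 wasted per page, matching the observation; for n = 512 the capacity is the full 65536.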
The solution is obvious: set PIPE_BUF to 64k and recompile the kernel. But this increases buffer usage for all other calls (perhaps even for read()).
But the behavior is definitely WEIRD.
I expected a FIFO to work as one contiguous block, not as a bunch of smaller FIFOs (each sized to PIPE_BUF) stacked on top of each other.
Perhaps something the Kernel Developers should look into fixing?
There's nothing weird here. PIPE_BUF is the number of bytes that are guaranteed to be written atomically, and the limit in <linux/pipe_fs_i.h> is the maximum number of bytes that may be pending in the pipe. You can see that with 512-byte writes the capacity is maximized; it would be the same with 4096. Any write of <= PIPE_BUF bytes is guaranteed to be atomic, so the system call won't split a small write across pages just to fill the internal buffer up to the 64k limit; doing so would break the FIFO semantics.
It's not a "bug", so recompiling may not even be possible. This number happens to be the same as getpagesize(), and raising it would make it easier to DoS your machine by tying up kernel memory.
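A quick way to confirm that the two values coincide on a given box (on x86 Linux both are typically 4096, but this is a per-system check, not a guarantee):

Code:
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("PIPE_BUF      = %d\n", PIPE_BUF);       /* 4096 on Linux */
    printf("getpagesize() = %d\n", getpagesize());  /* typically also 4096 on x86 */
    return 0;
}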
Um.. I have to disagree.
I think the FIFO semantics ARE being broken..
If the PIPE_SIZE is 65536 bytes, and there are only 64000 bytes in the FIFO, I expect to be able to write() 100 bytes 15 more times before I get EAGAIN.
On the 2.4 (and older) kernels, where PIPE_SIZE and PIPE_BUF are identical, I can write 100 bytes 40 times before EAGAIN.
The FIFO semantics are not being broken, because the kernel sticks to the API. You're making an assumption about how the internal buffer should be used, but applications are only expected to handle EAGAIN and wait for the FIFO reader to do its job. The Linux kernel may well handle these buffers internally in page-sized chunks for efficiency, and that doesn't break the expected use, because the data can still be read back in the usual linear way.
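For completeness, a sketch of what "handling EAGAIN" looks like on the writer side (write_all is a hypothetical helper name; it blocks in poll() until the reader drains the FIFO, rather than assuming a particular amount of free space):

Code:
#include <errno.h>
#include <poll.h>
#include <unistd.h>

ssize_t write_all(int fd, const char *buf, size_t len)
{
    size_t done = 0;

    while (done < len) {
        ssize_t j = write(fd, buf + done, len - done);
        if (j == -1) {
            if (errno == EAGAIN) {
                struct pollfd p = { .fd = fd, .events = POLLOUT };
                if (poll(&p, 1, -1) == -1)  /* wait until writable again */
                    return -1;
                continue;
            }
            return -1;          /* real error */
        }
        done += (size_t)j;
    }
    return (ssize_t)done;
}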