Programming This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.


Old 04-05-2006, 06:57 AM   #1
LQ Newbie
Registered: Apr 2006
Posts: 3

Rep: Reputation: 0
Weird FIFO problem on Linux 2.6

Hello LQ:

The following code snippet produces weird behavior on my FC4 machine.
The amount of data that the FIFO can hold appears to be dependent upon the # of bytes written per write() call. (varying from ~62K to 65535 bytes)

This prog. works fine on a SuSE9 distro (always fills up the FIFO until the space available is less than the # of bytes I want to write).

I am guessing this is because of changes in the Linux kernel. Any suggestions on how I can fix the problem are greatly appreciated.


//Start test.c

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int i, size, count, fd, total;
    ssize_t j;
    unsigned char data[1024];

    size = atoi(argv[1]);

    if (mkfifo("/tmp/Test", 0755) != 0)
        printf("Error creating FIFO: *%s*\n", strerror(errno));

    fd = open("/tmp/Test", O_RDWR | O_NONBLOCK);
    count = 0;
    total = 0;

    for (i = 0; i < 65535; i++) {
        j = write(fd, &data[0], size);
        if (j > 0) {
            count++;
            total = total + j;
        } else {
            printf("Count = %d Wrote %zd bytes\n", count, j);
            printf("FIFO contains %d bytes\n", total);
            printf("write() returned error: \t *%s*\n", strerror(errno));
            break;
        }
    }
    return 0;
}

//End test.c
Old 04-05-2006, 08:37 AM   #2
Registered: Feb 2005
Location: Ontario, Canada
Distribution: Gentoo, Slackware
Posts: 345

Rep: Reputation: 30
The following code snippet produces weird behavior on my FC4 machine.
It would be helpful to know what you mean by "weird behavior".
Old 04-05-2006, 08:58 AM   #3
Registered: Jun 2005
Posts: 542

Rep: Reputation: 34
See what happens with fpathconf(fd, _PC_PIPE_BUF) or PIPE_BUF from <limits.h>.

Can you please show the fifo reader's source ?
It may actually depend on the number of bytes read...

There are many weird things in this code anyway:
1- The size parameter should not be > 1024 (sizeof(data))
2- You must test against -1 to correctly handle errors from write(); change the "if (j > 0)" to "if (j >= 0)" so a zero-byte write isn't mistaken for an error.
Quick note: testing "j < 0" instead isn't strictly correct. write() returns a signed integer (ssize_t), but its third parameter is unsigned (size_t), so on an implementation that accepts requests larger than SSIZE_MAX, even a successful huge write could return a value that looks "negative". It's better to get used to comparing against -1 early.
Old 04-05-2006, 11:56 AM   #4
LQ Newbie
Registered: Apr 2006
Posts: 3

Original Poster
Rep: Reputation: 0
Weird FIFO Behavior..

I do not have a reader program for this yet.

As you can see, argv[1] determines the number of bytes written per write() call.

Here are the weird observations:
test 78
Count = 832, Bytes in FIFO = 64896 before EAGAIN

test 100
Count = 640, Bytes in FIFO = 64000 before EAGAIN

test 128
Count = 512, Bytes in FIFO = 65536 before EAGAIN

test 334
Count = 192, Bytes in FIFO = 64128 before EAGAIN

The Kernel parameters are:
PIPE_SIZE = 65536 (from pipe_fs_i.h)
PIPE_BUF = 4096 (from limits.h)

One of the guys here figured out the math behind this.
It appears that the write() system call continues using PIPE_BUF as the boundary.

Consider argv[1] == 100.
Each 4096-byte PIPE_BUF chunk holds 40 whole writes of 100 bytes (4000 bytes), so write() left 96 bytes unused every time the PIPE_BUF boundary was reached. Across the 16 chunks, 96 * 16 = 1536 bytes were left unused in the FIFO.
64000 + 1536 = 65536
The math works for other argv[1] values as well (I tested 335 and 336, too).

The solution is obvious: set PIPE_BUF to 64k, and recompile. But this increases buffer usage for all other calls (perhaps even read())

But the behavior is definitely WEIRD.
I expected the FIFO to work as 1 contiguous block, not a bunch of smaller FIFOs (sized to PIPE_BUF) stacked on top of each other.

Perhaps something the Kernel Developers should look into fixing?

Old 04-05-2006, 06:27 PM   #5
Registered: Jun 2005
Posts: 542

Rep: Reputation: 34
There's nothing weird. PIPE_BUF is the number of bytes that are guaranteed to be written atomically, and the limit in <linux/pipe_fs_i.h> is the maximum number of bytes that may be pending in the FIFO. You see that with 512 the capacity is maximized; it'd be the same with 4096. Any write of <= PIPE_BUF bytes is guaranteed to be atomic, so the system call doesn't split a write across chunk boundaries just to reach the 64k limit; otherwise it would break the FIFO semantics.

It's not a "bug", so recompiling may not even be possible. This number happens to be the same as getpagesize(). Raising it would make it easier to DoS your machine by using up kernel memory.

Last edited by primo; 04-05-2006 at 06:29 PM.
Old 04-05-2006, 11:15 PM   #6
LQ Newbie
Registered: Apr 2006
Posts: 3

Original Poster
Rep: Reputation: 0
Um.. I have to disagree.
I think the FIFO semantics ARE being broken..
If PIPE_SIZE is 65536 bytes and there are only 64000 bytes in the FIFO, I expect write() to accept 100 bytes 15 more times before I get EAGAIN.

On the 2.4 kernels (and older), where PIPE_SIZE and PIPE_BUF are both identical, I can write 100 bytes 40 times before EAGAIN.
Old 04-05-2006, 11:29 PM   #7
Registered: Jun 2005
Posts: 542

Rep: Reputation: 34
The FIFO semantics are not being broken, because they stick to the API. You're making an assumption about how the internal buffer should be used, but applications are only expected to handle EAGAIN while waiting for the FIFO reader to do its job. The Linux kernel may handle these buffers internally in page-sized chunks for efficiency reasons, and that doesn't break the expected use, because the data can still be read in the usual linear way.
Old 06-26-2010, 12:54 PM   #8
Registered: Sep 2008
Distribution: Ubuntu 8.04 LTS Server
Posts: 89

Rep: Reputation: 16
man 7 pipe on my Ubuntu server 8.04 LTS says:

O_NONBLOCK enabled, n <= PIPE_BUF
If there is room to write n bytes to the pipe, then write(2) succeeds immediately, writing all n bytes; otherwise write(2) fails, with errno set to EAGAIN.
This seems to support primo's position, because the 100 bytes were supposed to be written to the pipe as long as there was room in the pipe for them.
Old 06-27-2010, 03:40 AM   #9
LQ Veteran
Registered: Sep 2003
Posts: 10,532
Blog Entries: 7

Rep: Reputation: 2389
@Aashima Singh: Stop spamming. Reported.

