TCP/IP sockets: aren't transmitted buffers guaranteed to arrive entirely and intact?
Linux - Networking. This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
Hi,
My networking experience is quite extensive, and I thought I knew the answer to this, but my program has me doubting myself.
I am writing an application-level protocol that transmits one or more C-like structures between client and server using TCP/IP sockets. I assumed that when I transmit a buffer of a specific size, it arrives at its destination unfragmented and intact; this is the responsibility of the socket layer. Regardless of any IP fragmentation that happens along the way, the socket layer on the server or client should reconstruct the original buffer in its entirety before invoking the application's receive callback.
Am I wrong?
On rare occasions, I am getting what look like fragmented buffers coming in.
I want to confirm my assumption above with an experienced networking guru.
It is a PACKET which is guaranteed (in TCP/IP) to arrive in its entirety, in sequence, and error-free. If you send an array (such as a buffer), you have to read very carefully about how the library calls treat the data: do they copy the data into their own buffers, do they keep referring to the pointer you passed to send parts of the data, and so on. There is also the issue of ENDIANNESS: if you send a structure from a little-endian machine and it is received by a big-endian machine, don't expect anything except 'char' fields to come out looking right.
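The standard fix for the byte-order issue is to convert each multi-byte field with the POSIX htonl/htons/ntohl/ntohs routines rather than sending raw struct bytes. A minimal sketch; the struct and function names here are illustrative, not from the original post:

```c
#include <stdint.h>
#include <arpa/inet.h>  /* htonl, htons, ntohl, ntohs */

/* Hypothetical two-field record. Each multi-byte field is converted
 * to network byte order before sending and converted back after
 * receiving, instead of transmitting the raw in-memory struct. */
struct record {
    uint32_t id;
    uint16_t flags;
};

void record_to_wire (const struct record *in, struct record *out)
{
    out->id    = htonl(in->id);
    out->flags = htons(in->flags);
}

void record_from_wire (const struct record *in, struct record *out)
{
    out->id    = ntohl(in->id);
    out->flags = ntohs(in->flags);
}
```

A round trip through both functions restores the original values on any machine. Note that struct padding between fields is a separate portability problem that byte-order conversion does not address.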
TCP ensures all data transmitted is received (unless the connection is broken). However, you may need to do more than one call to read(2) to get the data. If one read is short, read again until you have the data. Here is the general idea:
Code:
#include <unistd.h>

/* Read exactly `expected` bytes from the socket, looping on short
 * reads. Returns `expected` on success, or the failing read()'s
 * return value (0 on EOF, -1 on error). */
int readSocket (int fd, char *buf, int expected)
{
    int bytesIn = 0;
    int rc;

    while ( bytesIn < expected ) {
        rc = read ( fd, buf + bytesIn, expected - bytesIn );
        if ( rc <= 0 ) {
            /* Error or EOF: the socket might be closed. */
            bytesIn = rc;
            break;
        }
        bytesIn += rc;
    }
    return bytesIn;
}
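The same short-count issue applies on the sending side: write(2) may also transfer fewer bytes than requested. A matching loop, sketched in the same style:

```c
#include <unistd.h>

/* Write exactly `expected` bytes to the socket, looping on short
 * writes. Returns `expected` on success, or the failing write()'s
 * return value (0 or -1) on error. */
int writeSocket (int fd, const char *buf, int expected)
{
    int bytesOut = 0;
    int rc;

    while ( bytesOut < expected ) {
        rc = write ( fd, buf + bytesOut, expected - bytesOut );
        if ( rc <= 0 ) {
            /* Error: the socket might be closed. */
            bytesOut = rc;
            break;
        }
        bytesOut += rc;
    }
    return bytesOut;
}
```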
Thanks, guys. I am using C#, with a class from CodeProject that simplifies socket calls. I had a sneaking feeling that the problem would be found in there. One problem is that it uses a static 8 KB receive buffer. I will move to direct socket calls instead, get rid of that limitation, change my protocol header to add a length field, and read until all the data is received.
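Since TCP is a byte stream with no message boundaries, a length prefix is the usual framing for this. A sketch of the receive side in C (for consistency with the earlier example; the names readFull and recvMsg are illustrative, not from any library):

```c
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>   /* ntohl */

/* Read exactly n bytes, looping on short reads.
 * Returns 0 on success, -1 on error or EOF. */
static int readFull (int fd, void *buf, size_t n)
{
    size_t got = 0;
    while ( got < n ) {
        ssize_t rc = read(fd, (char *)buf + got, n - got);
        if ( rc <= 0 )
            return -1;
        got += (size_t)rc;
    }
    return 0;
}

/* Receive one length-prefixed message: a 4-byte big-endian length
 * header, then that many payload bytes (capped at `max`).
 * Returns the payload length, or -1 on error. */
int recvMsg (int fd, char *payload, uint32_t max)
{
    uint32_t len;

    if ( readFull(fd, &len, sizeof len) != 0 )
        return -1;
    len = ntohl(len);    /* header travels in network byte order */
    if ( len > max )
        return -1;       /* refuse oversized messages */
    if ( readFull(fd, payload, len) != 0 )
        return -1;
    return (int)len;
}
```

The sender writes the htonl-converted length followed by the payload; the receiver then knows exactly how many bytes make up one message regardless of how TCP splits them across reads.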
I am a hard-core C++ programmer, new to C#, but I'm loving it so far. Most of my apps run on Linux without an issue! I also got it compiling under Mono.
Thanks.
Last edited by guru_stpetebeach; 09-08-2008 at 10:02 AM.