[SOLVED] How to detect a closed TCP client connection when the client is only receiving data
Hello everyone,
I have researched this problem in this forum and elsewhere, but I haven't really found an answer to it.
I am writing a TCP server in C. The server listens for incoming client connections, accepts them, and then creates a thread to handle each client. The clients are expected to only receive data from my server and never send any data.
So if I use a select() call with a recv(), I believe the recv() will just block forever, since there will never be any data coming from the client. If I use a non-blocking recv(), then this will just return 0, which tells me nothing, because the client is not expected to send any data.
I am not sure if I have misunderstood some socket concepts, but I need a way to detect when the client has disconnected so that I can close the socket and stop sending data to it. As I understand it, plain ACKs and the like are not seen by recv(), and only data sent by the client would cause recv() to return a non-zero value, so I am not sure how to know when the client has disconnected.
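For context, the structure of my server is roughly like this (a stripped-down sketch rather than my real code; the port number and message are made up, and error checking is left out):
Code:
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Per-client thread: the server only ever sends; it never expects data back. */
static void *client_thread(void *arg)
{
    int fd = *(int *)arg;
    free(arg);

    const char msg[] = "server data\n";
    for (;;) {
        if (send(fd, msg, sizeof msg - 1, 0) < 0)
            break;                  /* but when do I know the client is gone? */
        sleep(1);
    }
    close(fd);
    return NULL;
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);    /* made-up port */

    bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
    listen(listen_fd, 16);

    for (;;) {
        int *fd = malloc(sizeof *fd);
        *fd = accept(listen_fd, NULL, NULL);

        pthread_t tid;
        pthread_create(&tid, NULL, client_thread, fd);
        pthread_detach(tid);
    }
}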
Last edited by programlight; 03-09-2011 at 09:54 AM. Reason: More clarification
I found out that recv() returns 0 when the client disconnects and -1 when no data is available, so I can use recv() and check for a return value of 0 to know that the client has disconnected. Any other return value (in practice only -1 is possible in my case, because the client never actually sends data) leads my server to assume that the client is still connected, so it continues to send the client data.
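Roughly speaking, the check in my client-handling thread looks like this (just a sketch of the idea described above, not my actual code):
Code:
#include <unistd.h>
#include <sys/socket.h>

/* Returns 1 while the client is treated as still connected,
 * 0 once it is treated as disconnected.
 * client_fd is a non-blocking socket for one connected client. */
static int client_still_connected(int client_fd)
{
    char buf[128];
    ssize_t n = recv(client_fd, buf, sizeof buf, 0);

    if (n == 0) {
        /* orderly close by the client: stop sending and clean up */
        close(client_fd);
        return 0;
    }
    /* my assumption: any -1 just means "no data right now",
     * so the client is treated as still connected */
    return 1;
}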
I can no longer tell whether my assumption about recv() returning -1 is correct. In my program, when the client disconnects and then reconnects, the connection is somehow lost as soon as more data is sent to the client, and recv() no longer returns -1. I do get a "send() failed: Resource temporarily unavailable" error. I am not sure if my description is clear to everyone reading, but any insights would be greatly appreciated.
Thanks Mara and chrism01; however, I'm still not clear on the concept. Sorry if I wasn't clear in my post.
I understand that I can use select() with read() or recv(), and that if read/recv returns zero, the connection was closed by the client. If read/recv returns -1, there was an error reading data from the client.
In my case, I am not expecting my client to send any data, only to receive the data that I send. I also don't want my read/recv to block while the client is connected, so that I can keep sending it data. So, if I configure my socket to be non-blocking, then as long as my client is connected, read/recv() will return -1 (because the client is still connected but never sends the server any data).
I have written my server code with that assumption, but when something goes wrong on the client side and the connection is terminated improperly, my server doesn't recognize it (because, under my assumption, when read/recv returns -1 the server still thinks the client is connected).
So here's my question: how do I set a client socket in my server code to be non-blocking and still be able to differentiate between a client that is still connected and a client that has terminated the connection improperly?
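For reference, I'm making the accepted client socket non-blocking with something like this (assuming fcntl() is the appropriate way; sketch only):
Code:
#include <fcntl.h>

/* Put an already-accepted client socket into non-blocking mode.
 * Returns 0 on success, -1 on failure. */
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}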
I wrote ten suggestions for non-blocking TCP I/O here. This will probably answer your questions, and possibly some you should have asked, had you known about them.
for read/recv:
if recv returns > 0, then it is not EOF
if recv returns == 0, then it is EOF
if recv returns < 0 and errno == EWOULDBLOCK, then it is not EOF (this happens only in non-blocking mode)
if recv returns < 0 and errno != EWOULDBLOCK, then it can be treated as EOF
for write:
if write returns > 0, then it is a success (total or partial)
if write returns < 0 and errno == EWOULDBLOCK, then the connection is not lost (the send buffer is just full at the moment)
if write returns < 0 and errno != EWOULDBLOCK, then it can be treated as a lost connection
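Putting those rules together, a rough sketch of how the checks might look (untested; the function names are made up, and MSG_NOSIGNAL is only there to avoid SIGPIPE on Linux when the peer has gone away):
Code:
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Poll a non-blocking client socket once.
 * Returns 1 if the client should be treated as still connected,
 * 0 if it should be treated as disconnected. */
static int check_client(int client_fd)
{
    char buf[256];
    ssize_t n = recv(client_fd, buf, sizeof buf, 0);

    if (n > 0)
        return 1;                   /* unexpected data, but not EOF */
    if (n == 0)
        return 0;                   /* orderly close by the client: EOF */
    if (errno == EWOULDBLOCK || errno == EAGAIN)
        return 1;                   /* no data right now, still connected */
    return 0;                       /* real error: treat as EOF */
}

/* Send data on a non-blocking client socket.
 * Returns 1 if the connection is still usable, 0 if it should be
 * treated as lost. */
static int send_to_client(int client_fd, const void *data, size_t len)
{
    ssize_t n = send(client_fd, data, len, MSG_NOSIGNAL);

    if (n >= 0)
        return 1;                   /* total or partial write succeeded */
    if (errno == EWOULDBLOCK || errno == EAGAIN)
        return 1;                   /* send buffer full, not a lost connection */
    return 0;                       /* any other error: treat as a lost connection */
}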