Quote:
Originally Posted by shreekanth
I checked the /proc/sys/net/ipv4/tcp_syncookies file and it had a non-zero value (it was actually 1). I changed it to 0 using the sysctl command. Then I ran the server and 3 client instances. I am still seeing that the connect call doesn't fail for the client of the third instance.
My own tests with a current Linux kernel match your results.
I would like to point out, however, that
this is a good result. The backlog is a problematic concept for a system administrator; it would have been better if it had never existed at all. An application should not care how many partially open incoming requests there might be; it is something the kernel should take care of. Furthermore, most applications just use a fixed value (instead of something the administrator could set in a configuration file), making practical server load management more difficult than it would otherwise be.
At worst, you might complain that things no longer work like they used to, but that's just silly. Having the kernel ignore the backlog, and allow a large number of pending incoming connections, is a good thing. (We can discuss why, if you're unsure.)
However, I do find what actually happens quite interesting.
It seems that the Linux kernel not only accepts "extra" incoming connections, but also buffers the initial data sent by the client to the server. Based on this, I believe the socket buffers are used as a cache. I think the kernel uses the backlog parameter to allocate larger buffers initially, so it is a good idea to make it a user-configurable value in applications.
When the client expects the server to send data first, some of the clients seem to wait forever if TCP syncookies are disabled. (I have a suspicion why this occurs... but in any case, such a client-server design is rare.) When TCP syncookies are enabled, everything works fine, of course. I tested with a thousand parallel clients, with the server having a backlog of 1 (and an effective backlog of about 4, based on the number of clients in the connected state at the same time).
Everything worked beautifully. I simulated a slow, single-threaded server, and most of the clients timed out on their own end (189 seconds into the connect() call) -- as they should. I find this behaviour excellent, better than I expected.
I believe the Linux kernel has been designed to ignore the backlog parameter, at least in the traditional sense. Nowadays it seems to be more of a hint, at least as long as TCP syncookie support is enabled -- and I believe you should have it enabled. Still, the connection backlog should be user-configurable in applications.