Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
Hi,
Given the available bandwidth on my network interface and other TCP parameters, how can I estimate a queue length sufficient to avoid overflow?
Thanks,
-V
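One common back-of-the-envelope answer is to size the queue to at least one bandwidth-delay product (BDP), so a full round-trip's worth of in-flight data can be absorbed. The sketch below is illustrative only; the link speed, RTT, and MTU are assumed example values, not numbers from this thread.

```python
import math

def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes 'in flight' on the path."""
    return (link_bps / 8.0) * rtt_s

def queue_packets(link_bps: float, rtt_s: float, mtu: int = 1500) -> int:
    """Number of MTU-sized packets needed to hold one BDP of data."""
    return math.ceil(bdp_bytes(link_bps, rtt_s) / mtu)

# Example (assumed values): 100 Mbit/s link, 20 ms round-trip time
print(bdp_bytes(100e6, 0.020))      # 250000.0 bytes
print(queue_packets(100e6, 0.020))  # 167 packets
```

Real deployments often size buffers between one and a few BDPs; going far beyond that risks bufferbloat rather than preventing loss.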
OK, since no one has replied to this, I'll start the ball rolling, even though this may just display my ignorance. I sort of understand what you are asking, but could you clarify your concern a bit?

It's my understanding that queue overflow only happens under certain circumstances, for example when receiving from an uncontrolled source such as the Internet. For transmission, if local processes dumping data onto the queue cause it to fill, they will simply block until space is available. If a receive queue overflows, the incoming packets are discarded, but the protocol will act to have them retransmitted. Local networks may be able to respond quickly enough at the lower data-link or hardware level to avoid activating the higher network-level or end-to-end protocols.

Is this a general curiosity, or is there a more specific project behind the question?
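The receive-queue capacity discussed here is per-socket on Linux, and a process can inspect or request it through the standard `SO_RCVBUF` socket option. A minimal sketch (the 1 MiB request is an arbitrary example value):

```python
import socket

# Create a UDP socket and look at its default receive-buffer size.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
default = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# Ask for a larger buffer; the kernel decides what is actually granted.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request 1 MiB
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

print("default:", default, "granted:", granted)
# On Linux, the kernel doubles the requested value (to account for
# bookkeeping overhead) and clamps it at the net.core.rmem_max sysctl.
s.close()
```

If the granted value comes back smaller than expected, the `net.core.rmem_max` limit is usually the reason.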
I shouldn't have referred to a protocol with a congestion-control mechanism...
Let's assume UDP instead.
My questions really are:
- Are there multiple queues in the kernel, or a single one?
- How is the capacity of the kernel's input queue chosen?
- Is the capacity set in some file?
With the second clarification: I am working on a Linux kernel-based router project.
Unfortunately, the details of kernel operation are not within my area of expertise. You would probably do best either to pose the question as "what does the kernel do with TCP/UDP queueing?" to attract the attention of those in the know, or to ask on a more specialized forum for people working on the kernel.