[SOLVED] Latency increases when we increase the packet size.
Linux - Newbie: This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
Latency increases when we increase the packet size.
I am facing a very weird issue. We have a 10 Mbps leased line connecting our servers in China and Chicago. When I do a normal ping to my Chicago server from China, or vice versa, the RTT is 161 ms.
PING 10.30.49.52 (10.30.49.52) 56(84) bytes of data.
64 bytes from 10.30.49.52: icmp_seq=1 ttl=64 time=161 ms
64 bytes from 10.30.49.52: icmp_seq=2 ttl=64 time=161 ms
64 bytes from 10.30.49.52: icmp_seq=3 ttl=64 time=161 ms
But when I increase the packet size, the latency increases from 161 ms to 165 ms, as you can see below:
ping 10.30.49.52 -s 500
PING 10.30.49.52 (10.30.49.52) 500(528) bytes of data.
508 bytes from 10.30.49.52: icmp_seq=1 ttl=64 time=165 ms
508 bytes from 10.30.49.52: icmp_seq=2 ttl=64 time=165 ms
508 bytes from 10.30.49.52: icmp_seq=3 ttl=64 time=165 ms
I have asked the provider to increase the MTU to 1500, and still there is no improvement. The total frame size will be somewhere around 546 bytes (20 bytes of IP header + 8 bytes of ICMP header + 500 bytes of data + 18 bytes of minimum Ethernet framing), which is much less than the 1500-byte MTU that we have set on our network devices (routers, switches and firewalls). I am completely stuck as to why the latency increases when we raise the packet size to 500 bytes. As we keep increasing the packet size, the latency keeps increasing.
Is there a way on Linux servers to discover MTU along the path (PMTU) ?
Of course latency increases with packet size. It has to.
Every router and switch along the path has to receive the entire packet/frame before it can forward it. The latency introduced at each hop thus equals the frame size in bits divided by the speed of the inbound link in bits per second. Larger frames = increased latency.
Thanks for looking into this. The latency should increase only when the frame exceeds the MTU configured on the network device. That is because fragmentation will then occur, and reassembling those packets takes some time. But what if the frame size is much less than the configured MTU? In that case the complete packet will be sent, with no fragmentation, so no additional delay should be added. MTU stands for Maximum Transmission Unit, the largest frame I can pass over the data link.
So if the MTU is 1500 bytes and the total frame size is, say, 1000 bytes, then there should be no extra latency, as the complete frame can pass through the network device in one piece.
Is there a command to find the Path MTU on Linux ?
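To answer the Path MTU question directly, two common approaches on Linux (using the host address from the pings above):

```shell
# tracepath reports the PMTU along the route and needs no root privileges.
tracepath 10.30.49.52

# Alternatively, probe with ping: -M do sets the Don't Fragment bit and
# -s sets the ICMP payload size. 1472 bytes of payload + 28 bytes of
# IP/ICMP headers = 1500 bytes; anything larger should fail with
# "Message too long" if the path MTU is 1500.
ping -c 3 -M do -s 1472 10.30.49.52
```

These commands require network access to the target host, so the exact output depends on your path.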
Thanks for looking into this. The latency should increase only when the frame exceeds the MTU configured on the network device.
No, it shouldn't. Latency will increase with packet size in a linear fashion until the packet reaches the MTU limit, and then there will be a noticeable jump as the packet has to be fragmented, which introduces an extra header. An MTU of 1500 bytes does NOT mean that every packet is padded to that exact size.
This is why it pays off to limit the size of UDP datagrams (which, as you know, are never segmented at layer 4 the way TCP streams are) in VoIP applications. It reduces latency at the expense of line utilization, as the relative overhead of the IP and UDP headers increases.
There is a ping option that tells you the max MTU; I forget the exact flag, but it lets you probe how far you can go. You can also use Wireshark and look for fragmented packets or something like that.
Unfortunately, ping is a poor test over that distance for measuring speed (and yes, fragmentation can still occur). You say leased line, but that isn't a truly dedicated line; you may have a few hundred devices in between, any one of which could change how it handles ICMP.
Looking at the hardware at your end, or changing how it handles settings, may help. Many wire-speed devices can be configured for how packets are held: some store the complete packet before forwarding, while others can be switched to start sending at the first bits.
Look also at your system settings for the NIC and interior devices. Look very closely at any VPN.
Thanks for looking into this. We have devices from multiple vendors, like Cisco switches, Juniper and HP, and all have an MTU set to 1500. I double-checked these settings. The MTU on my Red Hat server's NIC is also set to 1500; I can confirm that.
So, from here, I have one question: is it always the case that increasing the packet size will increase the RTT for ping packets? (Assuming that the packet is much smaller than the MTU defined on the network devices.)
Since packets are transmitted as a sequence of bits, and since a larger packet represents a longer stream of bits which will take longer to transmit, AND since routers (and just about all switches) operate in store-and-forward mode, the answer is a resounding "yes". If not, it would mean that big data packets could be transmitted just as fast as smaller data packets, which obviously makes no sense.
As for how much round trip time will increase if the packet size is increased by X bits, that depends on the number of hops between endpoints and the bandwidth between routers.
576 bytes was the old standard dialup packet size. Smaller packets also queue better: you don't have to wait for big packets to clear before yours can be sent, which matters on networks with many users contending for the same resources. If you exceed the MTU of the entire end-to-end journey, your packets will fragment and be slower, since you'll have more headers.