Question (probably dumb) about setting up jumbo packets...
Linux - Networking: This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,802
What would cause Linux to balk at setting up jumbo packets?
I know there's a problem that I'd run into if I were running a sufficiently old kernel (version < 2.6.17) but that's not a factor with the systems I'm working with.
I've tried using the old-style command
Code:
ifconfig ethN mtu 9000
and received an error message:
Code:
SIOCSIFMTU: Invalid argument
When that happened, I backed off the MTU until the error went away and then did a binary search for the maximum size. On one system, I got the MTU up to 4080. (I should note, however, that this was using "ifconfig". Attempting to set the same MTU with "ip link set dev eth1 mtu 4080" returns an error, while reissuing that command with an MTU of 1500 succeeds.) On another system I was able to set it to 7139. Yet another system staunchly refuses to accept any value greater than 1500. (A kernel compile-time setting, perhaps?) That last one is the oldest of the three I've been working with but is still new enough that I'm puzzled by its rejecting MTUs bigger than 1500. It's the next system on my list to be upgraded, so perhaps that will have a positive effect.
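That binary search can be sketched in shell. `probe_mtu` below is a hypothetical stand-in for whatever command you're using to set the MTU; here it simply simulates a driver that tops out at 4080 bytes, so replace its body with a real call (which needs root) on an actual system:

```shell
#!/bin/sh
# Binary-search the largest MTU a probe command will accept.
# probe_mtu is a stand-in: on a real system replace its body with
# something like
#   ip link set dev eth1 mtu "$1"    (requires root)
# Here it simulates a driver whose ceiling is 4080 bytes.
probe_mtu() {
    [ "$1" -le 4080 ]
}

find_max_mtu() {
    lo=1500 hi=9000 best=1500
    while [ "$lo" -le "$hi" ]; do
        mid=$(( (lo + hi) / 2 ))
        if probe_mtu "$mid"; then
            best=$mid            # accepted: try larger
            lo=$((mid + 1))
        else
            hi=$((mid - 1))      # rejected: try smaller
        fi
    done
    echo "$best"
}

find_max_mtu    # prints 4080 with the simulated driver above
```

With a real `ip link set` in `probe_mtu`, this converges on the driver's actual limit in a dozen or so attempts instead of guessing by hand.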
What should I be looking at to try and figure out why the typically suggested value of "9000" is not being accepted?
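One place to start (assuming a reasonably recent kernel and iproute2) is asking the kernel what the driver itself advertises; kernels from around 4.10 on expose the driver's allowed range as `minmtu`/`maxmtu` in the detailed link output. The tiny parser below just pulls that number out; the interface name is an example:

```shell
# Pull the "maxmtu" value out of `ip -d link show` output.
# Kernels 4.10+ report the driver's permitted MTU range there.
parse_maxmtu() {
    sed -n 's/.*maxmtu \([0-9][0-9]*\).*/\1/p'
}

# Typical usage (eth1 is an example interface name):
#   ip -d link show dev eth1 | parse_maxmtu
# Other things worth checking:
#   ethtool -i eth1       # which driver/firmware is behind the NIC
#   dmesg | grep -i eth1  # driver load messages sometimes mention
#                         # frame-size limits
```

If `maxmtu` comes back as 1500, the driver itself is the wall and no amount of `ifconfig`/`ip` fiddling will get past it.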
Here you run into this wall: the standards specify 1500 as the MTU. Error correction (where it exists) has 1500 in mind as the maximum. You may get this to work in its own limited environment, but I don't think you will get it to work outside of one. Checksums could become larger than the header fields that hold them. Getting that working would take a root-and-branch review of networking, and it would need to be backward compatible.
Original Poster
Quote:
Originally Posted by business_kid
Here, you run into this wall: Standards are written that specify 1500 as the MTU.
Well, the "M" in MTU does stand for "maximum". (Aside: is there a minimum? I still remember the discussions about why TCP was an inefficient means of handling traffic from terminal servers and other kinds of small data transfers. LAT was the way to go, some claimed.) I'm sure there are cases where some negotiation takes place whose results could wind up limiting you to the 1500-byte payload regardless of what is set as the MTU. Because of that, I wouldn't expect jumbo packets to be useful outside one's LAN, and I've never heard of anyone using them that way anyway. I've encountered them at work, but those cases were all inside the corporate network, where it's far easier to predict what path your data is taking and what the capabilities of the equipment are.
I'm mainly curious about what could be limiting the setting to 1500, especially on that one system. It's using the same gigabit Ethernet adapter as some other systems, so it shouldn't be that. Part of the reason I was looking at jumbo packets was that I was getting ready to set up a system to receive backups across the wire, the hope being that jumbo packets would improve backup performance (much the same way that bumping up the block size improves backup performance). I recall reading that my consumer-grade gigabit switches claim to support jumbo packets, but I don't remember just how jumbo those packets can be: 4000 bytes? 7000? 9000? Similarly, I have to dig deep to find out what each network adapter supports. [sigh] I miss the days when equipment shipped with detailed technical information.
Oddly, before posting a question here I'd run across pages stating that Windows and Solaris accept 9000 just fine, yet Linux, running on the same hardware, balks at anything larger than 4000/7000/whatever. Still scratching my head about that. A difference in the sophistication/complexity of the device drivers?
Presuming you want this to be useful outside a limited environment, remember that you want, I presume, your service to be accessible. So mobile phones running Android, with advanced error correction, send over 3G to base stations, which send and receive over microwave links to the general network. PCs send over wifi to modem/routers, which talk over a variety of methods (MMDS, ADSL, frequency-division multiplexing, etc.). Satellite systems no doubt use yet another protocol.
In there is error correction, which sends redundant data that can be used to correct faulty signals. All of these systems need to know the maximum packet size. Changing that lot is not for the faint of heart.
I would run a test on your home network, and see how you do.
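A quick way to run that test is `ping` with fragmentation disallowed. For a 9000-byte MTU the largest un-fragmentable ICMP payload is 8972 bytes, because the 20-byte IPv4 header and 8-byte ICMP header have to fit inside the MTU too. A small sketch (the hostname is a placeholder):

```shell
# Largest un-fragmentable ICMP payload for a given MTU:
# MTU minus the 20-byte IPv4 header minus the 8-byte ICMP header.
icmp_payload() {
    echo $(( $1 - 20 - 8 ))
}

icmp_payload 9000    # prints 8972
icmp_payload 1500    # prints 1472

# Then, on a real network (hostname is a placeholder), verify the
# path end-to-end with the "don't fragment" flag set:
#   ping -c 1 -M do -s "$(icmp_payload 9000)" backup-server
# If any hop can't carry the frame, ping reports an error such as
# "Message too long" instead of getting a reply.
```

This exercises the whole path, switches included, rather than just the local NIC's willingness to accept a big MTU.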
Original Poster
Quote:
Originally Posted by business_kid
It's all software.
Presuming you want this to be useful outside a limited environment, remember that you want, I presume, your service to be accessible.
Wow. You jumped to a way-out conclusion there. I'm looking at being able to do efficient backups within a LAN, not some cloud service. (I'd expect that to be tough to pull off without spending an absolute fortune on high-end internet connectivity.)
Quote:
I would run a test on your home network, and see how you do.
Which is what I've been doing, and where I've run into all these problems. I agree that it's all software (except for the consumer-grade switches I have to work with, which all claim to be jumbo-packet friendly). So far, it really seems that Linux's networking software has some more growing up to do compared to commercial operating systems, which seem to deal with jumbo packets/frames with far, far less hassle. In my limited experiment, I have three Linux systems with gigabit adapters and have seen wildly different levels of "jumbo", including one that isn't jumbo-capable at all. Not a good state of affairs, IMHO.
Age has a lot to do with Linux 'having some growing up to do.'
Linux = GNU + Linus's kernel, which was all written 32-bit as a clone of Unix, which (in the 68K incarnation, anyhow) was 16-bit in the 1970s and early '80s. Only recently did some Red Hat fellow go over the maths routines in glibc and improve performance significantly; he did say that 'all the low-hanging fruit had been plucked', so I gather he found it.
Maybe you ought to do the same for networking? :-P
Different NIC drivers handle jumbo frames differently - and different NIC hardware supports different maximum frame sizes. That is, it's all highly hardware AND driver dependent.
For example, although the Intel NIC I've got in one server supports 9000-byte jumbo frames, the driver (e1000e) will only let me set the mtu to a maximum of 8996. The driver always assumes a 4-byte VLAN header will be present, even when VLANs aren't being used. The Windows driver, on the same NIC, lets me set the MTU to 9000 without hesitation.
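That 8996 figure falls out of simple header accounting. Assuming (as this NIC's behaviour suggests) a hardware buffer sized for a 9018-byte frame, subtracting the Ethernet header, the always-reserved VLAN tag, and the frame checksum leaves exactly 8996 bytes of payload:

```shell
# e1000e-style accounting; the 9018-byte buffer size is an
# assumption about this particular NIC's hardware limit.
frame_buf=9018   # largest frame the hardware buffer can hold
eth_hdr=14       # dest MAC (6) + source MAC (6) + EtherType (2)
vlan_tag=4       # 802.1Q tag, reserved even when VLANs are unused
fcs=4            # trailing frame check sequence

max_mtu=$((frame_buf - eth_hdr - vlan_tag - fcs))
echo "$max_mtu"  # prints 8996 -- four bytes shy of the usual 9000
```

Drop the VLAN reservation and the same buffer would carry a full 9000-byte MTU, which is presumably what the Windows driver does.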
My laptop, which also has an Intel NIC and uses the same e1000e driver as the server, doesn't support Jumbo frames at all - the laptop uses a cheaper variant of the NIC which doesn't have the hardware support for Jumbo frames.
Another server uses a Broadcom NIC with the bnx2x driver and has no problem setting the MTU to 9000. Yet another laptop of mine also has a NIC that uses the bnx2x driver, but it's a cheaper variant of the Broadcom hardware that doesn't support an MTU > 1500.
Be aware that performance gains from Jumbo frames are dependent on all of your network infrastructure, end-to-end. For example, one time I turned on jumbo frames and encountered a MASSIVE performance drop, even though both NICs and the switch supported an MTU of 9000. Eventually I found out that although the network switch 'supported' Jumbo frames, frames > 1500 bytes were not handled by the wire-speed switching hardware, and instead were switched by a software fallback mechanism, running on the switch's 100 MHz embedded CPU. The switch was from a very well-known network hardware manufacturer!
Also, technically any MTU bigger than 1500 is a 'Jumbo frame' so you might find some of your hardware isn't capable of a 9000-byte MTU at all, despite claiming support for Jumbo frames.
Original Poster
Quote:
Originally Posted by business_kid
Age has a lot to do with linux 'having some growing up to do.'
Linux = Gnu + Linus' kernel, which was all written 32 bit and a clone of unix which (in the 68K incarnation anyhow) was 16 bit in the 1970s and early 80s. Only recently some red hat guy went over maths routines in glibc and improved performance significantly; he did say that 'all the low hanging fruit had been plucked' so I gather he found it.
Maybe you ought to do the same for networking? :-P.
Actually, I've done enough numerical work in the past (signal processing, simulations, etc.) that I'd probably feel more at home looking at the math routines. The networking stuff is not my area of expertise, hence my aggravation over the current state of affairs. I guess those that are more into the networking end are comfortable with its warts. (If I had any involvement in that area at all, I suppose it would be documentation. I'm told I do write decent documentation.) For the moment, I guess I'll refocus my projects toward the storage-related tasks and bringing all the server OSs to the same version. Not that I expect OS version harmonization to make trying jumbo packets any easier -- but ya never know.
Can't wait until somebody decides to start tackling converting the system time functions to 64 bit. That'll be fun, eh? I'm not sure there's any "low hanging fruit" available in that code tree.
Original Poster
Quote:
Different NIC drivers handle jumbo frames differently - and different NIC hardware supports different maximum frame sizes. That is, it's all highly hardware AND driver dependent.
So I've come to learn.
Quote:
Be aware that performance gains from Jumbo frames are dependent on all of your network infrastructure, end-to-end. For example, one time I turned on jumbo frames and encountered a MASSIVE performance drop, even though both NICs and the switch supported an MTU of 9000. Eventually I found out that although the network switch 'supported' Jumbo frames, frames > 1500 bytes were not handled by the wire-speed switching hardware, and instead were switched by a software fallback mechanism, running on the switch's 100 MHz embedded CPU. The switch was from a very well-known network hardware manufacturer!
That's the sort of limitation I was attempting to learn whether I'd be encountering. For example, my switches claim to support jumbo frames but I've learned that there are so many interpretations of the term that "jumbo frames" can be almost meaningless.
Quote:
Also, technically any MTU bigger than 1500 is a 'Jumbo frame' so you might find some of your hardware isn't capable of a 9000-byte MTU at all, despite claiming support for Jumbo frames.
Like I said above, the term is almost meaningless. Would it kill a manufacturer to describe their unit's capabilities as "supports jumbo frames (maximum size of NNNN bytes)" so the consumer/network engineer would know what to expect? Yeah, you're going to lose sales to a competitor who supports 9000-byte MTUs when you only support 4000 bytes, but that's when the sales department ought to be going back to engineering and telling them they need to enhance the product to increase sales, not hiding the limitation from the buyer. But that's just MHO.
Later...
Last edited by rnturn; 01-04-2015 at 01:13 PM.
Reason: botched quote tag and grammatical error