Hello Isaac, does it work already?
I have exactly the same problem.
If I try the individual interfaces without the bond, jumbo frames work fine.
Originally Posted by IsaacHunt
Hi everyone, this is my first post so be gentle.
I am running three IBM X-series servers and an HP DL580 G2 in a RAC cluster, using a NetApp filer for storage.
This is a test rig to determine optimum performance. Details follow; I will include some info irrelevant to the problem
in case anyone is interested in the project itself:
Kernel is RH AS 2.4.9-e.38
Oracle version is 18.104.22.168
3x IBM X-series servers with 4GB RAM and 4x Xeon CPUs (summit kernel)
1x HP DL580 with 4GB RAM and 8 CPUs (smp kernel)
Each server has the following Ethernet connections:
1x 100Mb FDX for the local network
1x 1000Mb fibre for the interconnect between servers, connected via a Cisco 3500
2x 1000Mb copper, bonded together as bond0, connected via a Cisco 3750. This is the connection to the NetApp filer.
The NetApp filer consists of 5TB of storage split between two filer heads. Each filer head is connected to the Cisco 3750 via four trunked 1000Mb copper Ethernet cards.
Now for the problem:
I want to use jumbo frames on the bonded interface, but if I run
ifconfig bond0 mtu 9000 (or anything over 1500) I get
SIOCSIFMTU: Invalid argument
I can change the MTU on each individual interface fine, just not on the bonding driver.
I checked drivers/net/bonding.c and it appears to take default Ethernet settings at startup, but I don't see any way of defining the MTU.
Has anyone had any success with this? Is this fixed in 3.0? Any ideas?
BTW, if you want info on the project then let me know and I will send you details of the problems I have encountered (including a stinky one on the HP server).
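One workaround sometimes suggested when the bonding driver rejects an MTU change at runtime is to set the MTU on the slave interfaces and the bond via the Red Hat ifcfg files, so the setting is applied as the interfaces come up rather than after enslaving. This is only a sketch: it assumes eth2/eth3 are the bonded slaves (yours may differ), the IP address is a placeholder, and the MTU= directive may still be refused by the bonding driver in older 2.4 kernels, just like the ifconfig call.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
# Sketch only -- placeholder address; MTU may still be rejected by old bonding drivers
DEVICE=bond0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
MTU=9000

# /etc/sysconfig/network-scripts/ifcfg-eth2   (repeat for eth3)
# Assumes eth2 is one of the bonded slaves on your box
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
MTU=9000
```

Then restart networking (service network restart) and check ifconfig bond0 to see whether the MTU stuck. If the driver still refuses it, the fix probably has to come from a newer bonding driver that implements a change_mtu handler.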