LinuxQuestions.org


Frustin 08-14-2006 05:49 AM

megabit to gigabit nic conversion, bad speeds
 
We have the cards installed, converting from megabit to gigabit. All the switches are set up properly and the NICs are set to autonegotiate in the OS. entstat -d shows the NICs running at 1000Base-T. We are using EtherChannel, and the filesystems are JFS.
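For reference, this is roughly how I'm checking what each adapter actually negotiated (ent0 and ent1 are just placeholders for our EtherChannel member adapters):

entstat -d ent0 | grep -i "media speed"      (Media Speed Running should show 1000 Mbps, full duplex)
entstat -d ent1 | grep -i "media speed"      (check every member of the EtherChannel)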

To test it, the apps team tried an FTP from one server to another (the other server was set up in the same way, with GbE).

The maximum speed we got was terrible, something like 5Mps. When we tried it again, setting the FTP destination path to /dev/null, it sped up to the proper speed.
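In case it helps, the test looked roughly like this (the hostname and file paths are just placeholders):

ftp otherserver
ftp> bin
ftp> put /data/bigfile /target_fs/bigfile      (slow - written through the remote filesystem)
ftp> put /data/bigfile /dev/null               (fast - the remote side just discards the data)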

This showed me there are overheads, big ones. Is it because of the SCSI disks being used (it's a p595 platform), or possibly because it's JFS and not JFS2? Can I have some ideas please?

thanks

stany001 09-15-2006 02:47 AM

The idea is not to use autonegotiation on AIX!
Performance is very bad with autoneg from AIX 3.2.5 up to 5.3 ML4!
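A rough sketch of forcing the speed instead, on adapters that offer a fixed gigabit setting (ent0 is a placeholder; the exact values differ per adapter type):

lsattr -R -l ent0 -a media_speed                    (list the speeds this adapter will accept)
chdev -l ent0 -a media_speed=1000_Full_Duplex -P    (only if that value is in the list; -P updates the ODM, takes effect after a reboot)

On many gigabit adapters only Auto_Negotiation is offered for 1000 Mbps, as noted below.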

Frustin 09-15-2006 04:46 AM

There is no option for gigabit in the adapter settings, only up to 100Base-T, so the only option for me was autoneg.

niella 09-22-2006 10:08 AM

For optimised gigabit throughput you need to enable jumbo frames on the card, set the MTU to 9000, and raise the TCP send and receive spaces to higher values (262k and 131k), after you've enabled rfc1323.
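As a rough sketch of what I mean (ent0/en0 are placeholders for your adapter and its interface; if you're using an EtherChannel, apply them to the EtherChannel device and its interface instead):

chdev -l ent0 -a jumbo_frames=yes -P    (enable jumbo frames on the adapter; takes effect after a reboot)
chdev -l en0 -a mtu=9000                (raise the interface MTU to 9000)
no -o rfc1323=1                         (enable RFC 1323 window scaling)
no -o tcp_sendspace=262144              (larger TCP send buffer, ~262k)
no -o tcp_recvspace=131072              (larger TCP receive buffer, ~131k)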

Also, if the switch is not dedicated to gigabit (or its settings are not optimised for gigabit traffic, e.g. the MTU must also be 9000), you will see a slowdown.

Regards,
Niel

Frustin 09-25-2006 02:22 AM

Did you have this problem too then, niella? Where did you get the information for this sort of setup?

niella 09-27-2006 03:41 AM

I've done comparative throughput tests for many network scenarios, including ATM, since I often have to tweak TSM throughput for customers. I'd investigate further if my 1Gb NIC were performing at less than, say, 60MB/s, factoring in SCSI overhead and other variables such as file size.

BTW, I've also encountered older adapters where you have to select "auto" since they don't provide a "1000 full duplex" option. Works just as well if you keep the golden rule of setting both sides of the connection to the same duplex.

Regards
Niel

Frustin 09-27-2006 03:57 AM

"Works just as well if you keep the golden rule of setting both sides of the connection to the same duplexity."

Do you mean the switch should be set to auto as well as the NIC? I think the switch is set to 1000 full.

niella 09-27-2006 04:34 AM

To see if that is a problem, you could run "netstat -v | grep -i media" and check whether the duplex has dropped to "half duplex" in the "Media Speed Running" field for that particular adapter...

Regards,
Niel

Frustin 10-02-2006 03:43 AM

Oh right. Well, I checked all of that when I first set it up.

The other thing I tried was copying at the block level, and this worked: I FTPed a very large file to the remote machine, straight into /dev/null. That clearly proves the gigabit part is working, but it doesn't explain why I can't copy to a filesystem at that speed.
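To split the disk side from the network side, I might also time a plain sequential write into the target filesystem locally, something like this (the path is a placeholder; it writes roughly 1GB of zeros):

time dd if=/dev/zero of=/target_fs/ddtest bs=1024k count=1024
rm /target_fs/ddtest

If that local write is also slow, the bottleneck would be the disks/JFS rather than the network.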

niella 10-02-2006 04:09 AM

What (network) transfer speed did you get?

Are the disks perhaps RAIDed? Is there other activity on them while FTPing? (This can be checked with filemon.)

e.g.
filemon -O all -o filemon.out      (start tracing file, LV and PV activity)
sleep 60                           (let the FTP run while the trace collects data)
trcstop                            (stop the trace and write the report to filemon.out)

I can only speculate with the little info available on your problem; hope this helps (some commands for gathering this info are sketched below):
1. How many disks is your data spread across?
2. How much other activity is there on the disks relevant to your FTP?
3. Is the disk I/O workload random or sequential?
4. How have you placed the data on the LVs? (inner/outer/middle)

The output of "iostat 1 30" and "vmstat 1 30" during peak loads or during the FTP may shed some more light...
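A rough set of commands for answering the questions above (datalv is a placeholder for the logical volume holding the target filesystem):

lspv                 (list the physical volumes/disks on the system)
lslv datalv          (shows the intra-disk allocation policy - inner/middle/outer)
lslv -l datalv       (shows which disks the LV is actually spread across)
iostat 1 30          (per-disk throughput and % tm_act while the FTP runs)
vmstat 1 30          (CPU, paging and I/O wait while the FTP runs)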

I can also highly recommend http://publib.boulder.ibm.com/infoce...d/prftungd.pdf

Regards,
Niel

