AIX: This forum is for the discussion of IBM AIX.
eServer and other IBM-related questions are also on topic.
We have the cards installed and currently running in gigabit mode. All the switches are set up properly, and the NICs are set to auto-negotiate in the OS. entstat -d shows the NICs running at 1000Base-T. We are using EtherChannel, and the filesystems are JFS.
To test it, the apps team tried to FTP from one server to another (the other server is set up the same way, with GbE).
The maximum speed we got was terrible, something like 5 MB/s. When we tried it again, this time setting the FTP destination to /dev/null, it sped up to the proper speed.
This shows me there are big overheads somewhere. Is it because of the SCSI disks being used (it's a p595 platform), or possibly because it's JFS and not JFS2? Can I have some ideas, please?
I've done comparative throughput tests for many network scenarios, including ATM, since I often have to tweak TSM throughput for customers. I'd investigate further if my 1 Gb NIC were performing at less than, say, 60 MB/s, factoring in SCSI overhead and other variables such as file size.
BTW, I've also encountered older adapters where you have to select "auto" because they don't provide a "1000 full duplex" option. That works just as well if you keep to the golden rule of setting both sides of the connection to the same duplex.
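For context, that 60 MB/s figure is roughly half of the theoretical link capacity. A quick back-of-envelope check (my own illustration, not from the thread):

```shell
# A 1 Gb/s link carries at most 1000/8 = 125 MB/s of raw data;
# protocol, SCSI and filesystem overheads eat into that, so ~60 MB/s
# is a reasonable lower bound for a healthy gigabit link.
link_mbit=1000
max_mbyte=$((link_mbit / 8))
echo "theoretical max: ${max_mbyte} MB/s"   # prints "theoretical max: 125 MB/s"
```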
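A sketch of how I'd double-check the negotiated speed on AIX (the adapter name ent0 is an assumption, and the chdev line is commented out since it changes device attributes):

```shell
# Guarded so it also runs harmlessly on a non-AIX box.
if command -v entstat >/dev/null 2>&1; then
  # Show what the adapter actually negotiated:
  entstat -d ent0 | grep -i "media speed"
  # On older adapters, force auto-negotiation (with -P, applied at next boot):
  # chdev -l ent0 -a media_speed=Auto_Negotiation -P
else
  echo "entstat not found - not an AIX host"
fi
```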
Oh right. Well, I checked all of that when I first set it up.
The other thing I tried was copying at the block level, and that worked: I FTPed a very large file to the remote machine, straight into /dev/null. That proves the gigabit part is working, but it doesn't explain why I can't copy to a filesystem at that speed.
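One way to take the network out of the picture entirely is to time a pure local write through the filesystem (a sketch; file name and size are just illustrative):

```shell
# Write 64 MB through the filesystem and time it; if this is also slow,
# the bottleneck is the disks/JFS rather than the gigabit link.
time dd if=/dev/zero of=/tmp/ddtest.out bs=1024k count=64
# Clean up afterwards:
# rm /tmp/ddtest.out
```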
Are the disks perhaps RAIDed? Is there other activity on them while you're FTPing? (This can be checked with filemon.)
filemon -O all -o filemon.out
... time goes by ...
trcstop
(filemon collects via the trace facility; the report is written when you run trcstop.)
I can only speculate with the little info available on your problem; hope this helps:
1. How many disks are your data spread across?
2. How much other activity on the disks relevant to your FTP?
3. Random / sequential disk IO workload?
4. How have you placed data on the LVs? (inner/outer/middle)
The output of "iostat 1 30" and "vmstat 1 30" during peak loads, or during the FTP itself, may shed some more light...