Cross-platform NFS performance
I am having some performance issues using NFS across platforms. Let me explain my environment:
solnfs - a Solaris 8 box that serves as my NFS server. It has five 15K RPM disks and two SCSI-160 controllers, and uses software RAID 5 (DiskSuite) spread across the two controllers. This box has a Gigabit Ethernet adapter.
rhnfs - a Red Hat AS 2.1 box that serves as another NFS server. It has five 15K RPM disks and two Fibre Channel controllers, and uses hardware RAID 5 load-balanced between the two Fibre Channel cards. This box also has a Gigabit Ethernet adapter.
solclient - a Solaris 8 client with a Gigabit NIC, capable of pumping data fast enough to saturate it. It has access to both the solnfs and rhnfs filesystems via NFS.
rhclient - a Red Hat WS 3.0 client with a Gigabit NIC, also capable of pumping data fast enough to saturate it. It likewise has access to both the solnfs and rhnfs filesystems via NFS.
When I set up a job to move data over NFS from solclient to solnfs, I can send data at a rate of X. When solclient sends the exact same data to rhnfs, it does so at a rate of X - 40%.
When I set up a job to move data over NFS from rhclient to rhnfs, I can send data at a rate of Y. When rhclient sends the exact same data to solnfs, it does so at a rate of Y - 40%.
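For reference, the measurements above come from timed bulk writes over the NFS mounts. This is only a sketch of that kind of test; the destination path, file size, and the use of dd are assumptions (the actual jobs may differ), and DEST defaults to /tmp here so the snippet runs standalone:

```shell
# Hypothetical timed write test. In practice DEST would be the NFS
# mount point (e.g. the solnfs or rhnfs export); /tmp is a stand-in.
DEST=${DEST:-/tmp}
SIZE_MB=64

# Write a fixed amount of data with large sequential blocks and time it.
start=$(date +%s)
dd if=/dev/zero of="$DEST/nfs_testfile" bs=1M count=$SIZE_MB 2>/dev/null
end=$(date +%s)

elapsed=$(( end - start ))
[ "$elapsed" -eq 0 ] && elapsed=1   # avoid divide-by-zero on fast runs
echo "wrote ${SIZE_MB} MB in ${elapsed}s: $(( SIZE_MB / elapsed )) MB/s"
rm -f "$DEST/nfs_testfile"
```

Running the same script against each server's mount from each client would reproduce the X vs. X - 40% comparison described above.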
At first, I thought networking was the culprit. However, I have confirmed via FTP tests and a custom app I set up that all the machines can operate at close to 1 Gbps.
I then suspected the RAID setups, but each NFS server is fastest when a client running the same OS is sending the data, which points away from the disks.
Any insight on where to look now?