nfs write == super slow; read == super fast - problem?
Brand new boxes running RH9 (downloaded a couple of weeks ago, so it should be current) with integrated Intel GigE on the PRO/1000 (e1000) driver, connected through a Netgear 24-port unmanaged GigE switch (GigE on all ports). Each box has a single-platter 40GB Hitachi drive (8MB buffer, 7200 RPM), 1GB of PC3200 RAM in dual channel, and a 2.6 GHz Intel P4 with the FSB running around 811 MHz.
scp'ing a ~400MB file from one box to another takes about 15 seconds.
Reading the same file over NFS takes about the same time.
Writing it over NFS takes about 12 minutes.
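For reference, those numbers came from simple wall-clock timings along these lines (the hostname and paths here are placeholders, not my actual setup):

```shell
# raw network + disk, no NFS involved (hypothetical host "otherbox")
time scp testfile otherbox:/tmp/

# NFS read: copy from the NFS mount to local disk
time cp /mnt/home/testfile /tmp/

# NFS write: copy from local disk to the NFS mount
time cp /tmp/testfile /mnt/home/
```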
dd if=/dev/zero of=/mnt/home/testfile bs=16k count=256
takes just over 10 seconds on the NFS mount (using the time command to judge).
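Note that dd is only pushing bs * count = 4MB, so ten-plus seconds works out to well under half a megabyte per second. Quick arithmetic:

```shell
# total bytes written by the dd above: bs (16k) * count (256)
echo $((16 * 1024 * 256))                 # 4194304 bytes = 4MB

# rough throughput over ~10 seconds, in KB/s
echo $((16 * 1024 * 256 / 10 / 1024))     # 409 KB/s - dismal for GigE
```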
I've tried changing the read & write block sizes (rsize/wsize) from 8192 all the way up to 32768, with both NFS v2 and v3 - no change.
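For the record, the remounts I've been trying look roughly like this (server name and paths are placeholders):

```shell
# remount by hand with a new block size, NFS v3, 32K rsize/wsize
umount /mnt/home
mount -t nfs -o rw,hard,intr,rsize=32768,wsize=32768,nfsvers=3 \
    server:/home /mnt/home

# or the equivalent fstab-style line:
# server:/home  /mnt/home  nfs  rw,hard,intr,rsize=32768,wsize=32768,nfsvers=3  0 0
```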
I've double-checked the MTU settings (currently 1500 on the card, confirmed via tracepath).
I've tried changing size of the socket input queue to 256K.
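Concretely, that was along the lines of the NFS-HOWTO advice (these are the values I tried; the init script name may differ on other distros):

```shell
# bump the default/max socket receive buffers to 256K,
# then restart nfsd so its sockets pick up the new defaults
echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max
service nfs restart
```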
*To note: every machine on my network points at 192.168.0.1 as its DNS server. 192.168.0.1 is not serving DNS at the moment (new office - haven't set up a DNS server yet). Could this be causing problems? All machines in question have entries in each other's hosts files.
I've disabled iptables. (didn't think that would make a difference)
Why is my nfs write going so slow?
/etc/exports on the server side is using the options: bg,noac,no_root_squash,no_subtree_check,sync
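Spelled out, the export looks roughly like this (the export path and client subnet here are placeholders; worth noting that sync makes the server commit every write to disk before replying to the client, so flipping it to async, just as a test, would show whether that's where the time is going):

```shell
# /etc/exports on the server (path and client spec are hypothetical):
#   /home  192.168.0.0/24(bg,noac,no_root_squash,no_subtree_check,sync)

# after editing, push the change live:
exportfs -ra
```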
I have five identical machines, each NFS-mounting the others (using different directories).
Last edited by BrianK; 11-25-2003 at 12:10 AM.