NFSv3 over TCP
Hello.
I have an NFS server separated by a firewall from the NFS client. All TCP connections are allowed from the client to the server, but not UDP. Is there any way I can force NFS to use only TCP? I presume mountd over UDP is my problem, and getting the UDP ports opened as well would take quite a long time. The NFS server is Solaris 9 running NFSv3, while the client is Solaris 10 running NFSv4, but I set the maximum NFS version to 3 in /etc/default/nfs.

On a side note, I've mounted the share manually using WebNFS and it worked:

Code:
mount -F nfs nfs://nfsserver:2049/nfs_share /share

But I can't manage to make it work with autofs. When I try to access the share, it always says "Permission denied". My /etc/auto_master has:

Code:
/- /etc/auto_direct

And my /etc/auto_direct has:

Code:
/share -ro nfs://nfsserver:2049/nfs_share

Any ideas on this? |
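If the WebNFS URL in the autofs map is the sticking point, one thing worth trying (a sketch, not a verified fix) is the classic server:path entry form with mount options that pin the version and transport, since Solaris mount_nfs accepts vers= and proto=:

```
# hypothetical /etc/auto_direct entry -- forces NFSv3 over TCP
# using the classic server:path form instead of an nfs:// URL
/share  -ro,vers=3,proto=tcp  nfsserver:/nfs_share
```

Note that the classic form still contacts rpcbind and mountd on the server, so those services would also need to be reachable over TCP through the firewall.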
I would try replacing

Code:
NFSD_PROTOCOL=ALL

with

Code:
NFSD_PROTOCOL=tcp
|
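Worth adding that the change only takes effect once the NFS server daemons are restarted; on Solaris 9 that would be something like the following (assuming the stock init scripts, run as root):

```
# restart the NFS server daemons so the new NFSD_PROTOCOL is picked up
/etc/init.d/nfs.server stop
/etc/init.d/nfs.server start
```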
Solaris 10 and UDP/TCP NFS Mounts
Chiming in here..
I have a Solaris 10 thumper (X4550) which is our NFS server (NAS). Our RHEL clients were reconfigured to mount NFS over TCP -- the filesystems were unmounted, then remounted. Surprisingly, nfsstat -m still shows them mounted via UDP. What's up with that? I don't want to restrict the server to TCP only, as it may cause some other systems to fail. How can I change this, and more importantly, why does the reconfigured mount go back to UDP after it was unmounted, remounted, and requested as proto=tcp? |
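For reference, a sketch of what the client-side entry would need to look like (hostnames and paths here are made up, not from the actual setup):

```
# hypothetical /etc/fstab line on a RHEL client -- proto=tcp pins the transport
thumper:/export/data  /mnt/data  nfs  rw,proto=tcp,vers=3  0 0
```

After remounting, `nfsstat -m` should report proto=tcp for the mount; if it still shows UDP, the option was not actually applied.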
No idea, but it looks like a RHEL issue to me. |
I've configured our NFS mounts to be all TCP-based. Under RHEL, you can't just unmount and remount; you have to restart the netfs service (a bug), otherwise nfsstat -m still thinks it's UDP.
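In other words, the sequence that finally made the option stick looked roughly like this (mount point hypothetical, run as root):

```
# a plain remount is not enough on RHEL; restart netfs so the
# new proto=tcp option is actually applied
umount /mnt/data
service netfs restart
nfsstat -m          # check that proto=tcp is now reported
```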
This pretty much stopped the retrans problems. However, running iozone shows that overall performance is pretty crappy in comparison. A very large wsize/rsize (1048576) is negotiated between the RHEL clients and the Sun X4550 thumper; I may need to reduce that. I think another problem is that our 3Com 4200G switches are daisy-chained via a single gigabit switch port -- the backend network becomes congested at times, sort of like pushing a ton of traffic through a small straw. I'm looking into rectifying this: the back connectors MOD1 and MOD2 are meant for SFP transceiver connections (10 Gbps) and should be connected that way. Hopefully that resolves the performance problem. Any other tips or thoughts welcomed. Thanks. |
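If the 1 MB transfer size turns out to be part of the problem, capping it is a client-side mount-option change; a sketch (hostname, path, and values illustrative, not tuned for this setup):

```
# hypothetical fstab line capping read/write transfer size to 32 KB
thumper:/export/data  /mnt/data  nfs  rw,proto=tcp,rsize=32768,wsize=32768  0 0
```

The negotiated sizes actually in effect can be checked afterwards with `nfsstat -m`, which reports rsize and wsize per mount.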