Linux - Networking
This forum is for any issue related to networks or networking.
Routing, network cards, OSI, etc. Anything is fair game.
Sometimes an NFS connection does not terminate gracefully. The NFS server will still show an ESTABLISHED connection.
For example, client 192.168.1.7 connects to server 192.168.1.5 to browse some shared files. A power outage occurs and client 192.168.1.7 goes offline. Using netstat, 192.168.1.5 still shows an ESTABLISHED connection.
Or the network connection is interrupted for some reason, and when it is reestablished several minutes later, netstat shows both the new connection and the stale one as ESTABLISHED.
There are many articles online about using tcpkill and fuser. The catch is that tcpkill succeeds only against active connections: when the remote system is no longer actually connected, tcpkill just hangs and does nothing.
Similarly, fuser -k 2049/tcp does nothing as well.
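For reference, the commands being discussed look roughly like this (the server address, interface name, and client address are placeholders taken from the example above):

```shell
# On the NFS server, list TCP connections to the NFS port (2049).
# A client that lost power without closing its socket still shows
# up here as ESTABLISHED.
ss -tn state established '( sport = :2049 )'

# tcpkill (from the dsniff package) forges RST packets, but it can
# only do so while it sees live traffic on the connection -- against
# an already-dead peer it just sits and waits:
#   tcpkill -i eth0 host 192.168.1.7 and port 2049

# fuser -k signals processes that hold the socket; a stale NFS
# connection is held by the kernel, not a userspace process, so
# this accomplishes nothing:
#   fuser -k 2049/tcp
```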
I think you want to change the default timeouts, which can leave a dead connection in place for a long time. Once the normal TCP timeouts expire, the connection should get cleared.
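For TCP sockets that have keepalive enabled, the relevant knobs are kernel sysctls. The values shown in the comments are the usual Linux defaults; note the caveat that a socket without SO_KEEPALIVE set can stay ESTABLISHED indefinitely if the peer simply vanishes. A sketch:

```shell
# Show the current keepalive settings. Typical Linux defaults are
# 7200s idle before the first probe, 75s between probes, and 9
# probes -- i.e. a dead peer is noticed only after roughly two hours.
cat /proc/sys/net/ipv4/tcp_keepalive_time
cat /proc/sys/net/ipv4/tcp_keepalive_intvl
cat /proc/sys/net/ipv4/tcp_keepalive_probes

# Tighten them (as root) so dead peers are reaped within minutes:
#   sysctl -w net.ipv4.tcp_keepalive_time=300
#   sysctl -w net.ipv4.tcp_keepalive_intvl=30
#   sysctl -w net.ipv4.tcp_keepalive_probes=4
```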
If you are willing to accept the lower security and the increased complexity in getting file access, you can always specify UDP instead of TCP. TCP is faster, but UDP doesn't have persistent connections.
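One way to do that, assuming NFSv3 (NFSv4 is TCP-only) and placeholder host and export names:

```shell
# UDP is connectionless, so there is no socket to go stale when the
# client disappears. Requires vers=3; loss recovery is weaker over
# UDP, which is part of the trade-off mentioned above.
mount -t nfs -o vers=3,proto=udp 192.168.1.5:/export/share /mnt/share
```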
BTW, those persistent connections always happen with any TCP connection that isn't shut down. Even sshd connections are persistent, which is one reason they implemented the "keep alive" option - as it polls to see if the remote client is actually there, and then gracefully closes the connection if it doesn't get a reply.
Quote:
Even sshd connections are persistent, which is one reason they implemented the "keep alive" option - as it polls to see if the remote client is actually there, and then gracefully closes the connection if it doesn't get a reply.
Is there any reason why NFS hasn't implemented something similar?
Honestly I'm on both sides of the fence here. I've had an NFS connection established, then the server goes down for some reason without cleanly disconnecting first. When that happens, the client just sits there and waits. This can be a pain when you just want a clean disconnect, but it can also be nice...boot the server back up and everything that was waiting just picks back up where it left off like nothing happened.
Even if NFS did implement an auto-disconnect, I'm not sure if I'd even want it. In many instances it's just too convenient to have your processes sit there and wait patiently until the server is back up, rather than having them violently crash because the files/dirs they were reading/writing to have suddenly disappeared, then you have to go through the hassle of restarting them once the server is back up.
Last edited by suicidaleggroll; 02-19-2015 at 09:50 AM.
Quote:
Is there any reason why NFS hasn't implemented something similar?
Yes. Most people don't like the data corruption that can occur when a forced disconnect happens.
Quote:
Honestly I'm on both sides of the fence here. I've had an NFS connection established, then the server goes down for some reason without cleanly disconnecting first. When that happens, the client just sits there and waits. This can be a pain when you just want a clean disconnect, but it can also be nice...boot the server back up and everything that was waiting just picks back up where it left off like nothing happened.
That is what the NFS mount options "hard" and "soft" are for. A hard mount is a constant retry, and hangs the client - and is used to avoid data corruption. A "soft" mount allows the client to interrupt an operation, but at the cost of possible data corruption.
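As an illustration, hypothetical /etc/fstab entries for the two policies (host and paths are placeholders):

```shell
# hard: retry forever; processes block until the server returns
192.168.1.5:/export  /mnt/data  nfs  hard,intr                0 0

# soft: give up after retrans attempts (here 3 retries with a 5s
# timeout -- timeo is in tenths of a second); I/O returns an error
# instead of hanging, but an interrupted write can leave a
# partially-written file
192.168.1.5:/export  /mnt/data  nfs  soft,timeo=50,retrans=3  0 0
```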
Quote:
Even if NFS did implement an auto-disconnect, I'm not sure if I'd even want it. In many instances it's just too convenient to have your processes sit there and wait patiently until the server is back up, rather than having them violently crash because the files/dirs they were reading/writing to have suddenly disappeared, then you have to go through the hassle of restarting them once the server is back up.
Including the possibility of having corrupted files from I/O transactions that haven't completed.
Quote:
A "soft" mount allows the client to interrupt an operation, but at the cost of possible data corruption.
How does that work exactly? In my use case I am not running a dedicated 24/7 NFS server; basically I am using peer-to-peer sharing. I use a long-tested script to connect to other systems on my home network and another script to disconnect. Yet sometimes weird things happen when the scripts are automated and a clean termination does not occur. It seems soft mounts might help.
As I am using more of a peer connection than a dedicated server, the potential for data loss seems minimal in my use case. I don't need any kind of automatic disconnect. I need a way to allow manual forced disconnections.
I can test with the soft parameter, but my question is about how to force termination. That is, suppose I connect to my HTPC to transfer files, walk away to do something else, and then return, forgetting I had enabled NFS sharing on the HTPC. I shut down the HTPC, but the client is still connected. Then when I attempt to shut down the client, it hangs. That is where I need the ability to force the disconnect and avoid the hang.
Or does using the soft parameter avoid the whole force termination problem?
Quote:
That is, suppose I connect to my HTPC to transfer files, walk away to do something else, and then return, forgetting I had enabled NFS sharing on the HTPC. I shut down the HTPC, but the client is still connected. Then when I attempt to shutdown the client the client hangs. That is where I need the ability to force the shutdown and avoid the hang.
Use "umount -fl" to force the client to unmount the share.
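Spelled out, with the mount point as a placeholder:

```shell
# -f forces the unmount even though the server is unreachable;
# -l (lazy) detaches the filesystem from the tree immediately and
# lets the kernel clean up remaining references in the background,
# so the client's shutdown no longer hangs on the dead mount.
umount -f -l /mnt/share
```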