
Thomas Korimort 08-23-2021 06:07 AM

Problem with mounting nfs shares after sudden poweroutage
 
Hi!

I am experiencing this for the second time now, and it feels like a strange issue. I have an AMD Ryzen desktop running Debian Bullseye (previously Buster) and a JBOD disk tower with four fully occupied slots. On my desktop I mount the four disks as NFS shares, which worked nicely until the most recent sudden power outage. After that, the drives had to be checked and the inodes repaired. One disk in use was lost and had to be restored by copying the backup onto the repaired disk; I also changed the file permissions and ownership after the copy procedure. Since then, the NFS shares configured in /etc/fstab no longer get mounted correctly at system startup.

My /etc/fstab has these four entries for the NFS shares:

Code:

10.10.10.2:/mnt/WD01        /mnt/WD01        nfs        rw,auto,nofail        0        0
10.10.10.2:/mnt/WD02        /mnt/WD02        nfs        rw,auto,nofail        0        0
10.10.10.2:/mnt/WD03        /mnt/WD03        nfs        rw,auto,nofail        0        0
10.10.10.2:/mnt/WD04        /mnt/WD04        nfs        rw,auto,nofail        0        0

and the actual mounts in /proc/mounts look like this:

Code:

10.10.10.2:/mnt/WD04 /mnt/WD04 nfs4 rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.1,local_lock=none,addr=10.10.10.2 0 0
10.10.10.2:/mnt/WD03 /mnt/WD03 nfs4 rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.1,local_lock=none,addr=10.10.10.2 0 0
10.10.10.2:/mnt/WD02 /mnt/WD02 nfs4 rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.1,local_lock=none,addr=10.10.10.2 0 0
10.10.10.2:/mnt/WD03 /mnt/WD01 nfs4 rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.1,local_lock=none,addr=10.10.10.2 0 0

One can see that the server's /mnt/WD03 export gets mounted at the local /mnt/WD01, and there is no mount entry for the server's /mnt/WD01 at all. What is wrong here? Rebooting does not change anything: this strange constellation is carried over from reboot to reboot, and I don't know where to find the file that is jumbled up.
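To rule out stale state on the client, one thing I can try is unmounting the two affected shares and remounting everything from fstab by hand; since Debian turns fstab entries into systemd mount units, I include a daemon-reload. This is just a sketch of the check:

Code:

umount /mnt/WD01 /mnt/WD03    # detach the two suspect mounts
systemctl daemon-reload       # regenerate the mount units from /etc/fstab
mount -a                      # remount everything listed in fstab
grep WD /proc/mounts          # check which export landed on which mountpoint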

The mount procedure now suddenly waits for 1.5 minutes when mounting WD01 and WD03. The drives themselves are okay on my NFS server 10.10.10.2, and exportfs also lists them as expected:

Code:

/mnt/WD01            10.10.10.1
/mnt/WD02            10.10.10.1
/mnt/WD03            10.10.10.1
/mnt/WD04            10.10.10.1
/mnt/WD01            192.168.0.0/24
/mnt/WD01            192.168.1.0/24
/mnt/WD01            10.10.10.0/24

WD01 is exported into multiple IP address ranges for my different router and network switch configurations.
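For completeness, the corresponding /etc/exports on my server looks roughly like the following; the export options in brackets are illustrative here, only the host/subnet list shown above is exact:

Code:

/mnt/WD01  10.10.10.1(rw,sync,no_subtree_check) 192.168.0.0/24(rw,sync,no_subtree_check) 192.168.1.0/24(rw,sync,no_subtree_check) 10.10.10.0/24(rw,sync,no_subtree_check)
/mnt/WD02  10.10.10.1(rw,sync,no_subtree_check)
/mnt/WD03  10.10.10.1(rw,sync,no_subtree_check)
/mnt/WD04  10.10.10.1(rw,sync,no_subtree_check)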

Exactly the same problem happened to me with my old Debian Buster installation.

computersavvy 08-23-2021 05:26 PM

How did you fix the issue with the old Debian installation?

It seems that maybe the NFS server is exporting WD03 in such a way that the client sees it wrong and mounts it at WD01.
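You could compare what the server advertises with what the client actually mounted, for example with the standard tools from nfs-common/nfs-utils:

Code:

showmount -e 10.10.10.2    # export list as the server advertises it
nfsstat -m                 # mounted NFS file systems as the client sees them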

Also, I wonder about the issue of mounting the same device on different systems at the same time, especially in different subnets. The risk of multiple users altering the same file at the same time does exist and may cause corruption.

Lastly, exporting WD01 to both 10.10.10.1 and 10.10.10.0/24 is redundant and possibly conflicting. The first only allows one machine to access it, while the other allows the entire subnet, including that one machine, to access it. There may be a conflict introduced that does not appear obvious.
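If nothing depends on the single-host entry, the subnet entry alone already covers that machine, so the WD01 line in /etc/exports could be collapsed to something like this (options illustrative):

Code:

/mnt/WD01  10.10.10.0/24(rw,sync,no_subtree_check)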

You can possibly avoid the extended delay by adding options in fstab for each of those entries, such as nofail and _netdev.
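A single fstab entry would then look roughly like this (only nofail and _netdev are the point here, the rest follows your existing entries):

Code:

10.10.10.2:/mnt/WD01        /mnt/WD01        nfs        rw,auto,nofail,_netdev        0        0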

Thomas Korimort 08-24-2021 12:10 PM

Quote:

Originally Posted by computersavvy (Post 6277963)
How did you fix the issue with the old Debian installation?

I did not. I could access WD01 and WD02 from my desktop even though the mounts were jumbled; WD03 and WD04 I use only occasionally.

Quote:

It seems that maybe the NFS server is exporting WD03 in such a way that the client sees it wrong and mounts it at WD01.
It was working nicely before with all the different export cases. On my old Debian Buster installation I also suspected a problem with the exports. But the jumbling of the mounts happened only after the sudden power outage and the subsequent disk recovery of WD01. When WD01 was operated on, its complete file system was destroyed, and I had to restore it from my backup on WD02, which thankfully was not destroyed and could be repaired from the ext4 journal.

Quote:

Also, I wonder about the issue of mounting the same device on different systems at the same time, especially in different subnets. The risk of multiple users altering the same file at the same time does exist and may cause corruption.
I think that risk should be managed by a resource lock, which is a quite common method in operating systems to guarantee exclusive access to a resource. But you are right: in the sudden power outage the consistency of the file system was destroyed beyond recovery, at least on WD01. So the I/O data transfer to my JBOD array is risky in that sense.
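Whether locking actually works across my NFS mounts I could test with flock(1) from util-linux, e.g. (the path is illustrative):

Code:

flock -x /mnt/WD01/locktest -c 'sleep 30'    # hold an exclusive advisory lock for 30 s
# the same command run from a second client should block until the lock is released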

Quote:

Lastly, exporting WD01 to both 10.10.10.1 and 10.10.10.0/24 is redundant and possibly conflicting. The first only allows one machine to access it, while the other allows the entire subnet, including that one machine, to access it. There may be a conflict introduced that does not appear obvious.
Maybe. Though in my case the 10.10.10.1 export is just a widening of the 10.10.10.0/24 export. Let us put it straight: I got no error message...

Quote:

You can possibly avoid the extended delay by adding options in fstab for each of those entries, such as nofail and _netdev.
I did, with nofail. The _netdev option was only necessary on Ubuntu and under certain conditions. Usually, "rw,auto,nofail 0 0" works reliably.

I have filed a kernel bug report and a Debian bug report (Bug#992866), since this has been bothering me for too long already. I think something is wrong with the kernel mount data structures: they are storing information from reboot to reboot, but they don't check properly for consistency.
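If some state really does survive reboots, one place I can still look on Debian is the systemd mount units that get generated from fstab at boot (the unit names follow the mount points):

Code:

systemctl list-units --type=mount | grep mnt-WD    # currently active mount units
systemctl cat mnt-WD01.mount                       # show the generated unit file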

