I am running autofs-5.0.7-56.el7.x86_64 on CentOS 7.3 (MATE desktop). I have a VMware guest of the same OS running on the CentOS 7 host. The guest VM has the following line in /etc/auto.master:
Code:
/nfs /etc/auto.nfs -browse
and /etc/auto.nfs contains:
Code:
archive -rw taylor20:/archive
backup -rw taylor20:/backup
data -rw taylor20:/data
quitelarge -rw taylor20:/quitelarge
data09.1 -rw taylor09:/media/data09.1a
data09.2 -rw taylor09:/media/data09.2a
data14.1 -rw taylor14:/media/data14.1a
data14.2 -rw taylor14:/media/data14.2a
data18.1 -rw taylor18:/media/data18.1a
data18.2 -rw taylor18:/media/data18.2a
taylor20 is the host machine and exports four file systems. The other three machines listed are archive servers on my network, each exporting two file systems. These servers are powered on ONLY when I need to access them to save or retrieve data, which is why I am using autofs rather than hard mounts in /etc/fstab. Here is the issue...
If I open the /nfs directory with Caja (Nautilus) on the VM, it opens almost instantly. When I open one of the file systems exported by the host, it opens in a few seconds. But if I then go back up to the /nfs directory, it may take as long as 30 seconds before the Caja window refreshes. After that I can switch between the auto-mounted host file systems and the /nfs directory very quickly. I suspect the delay comes from autofs interrogating the "dead" mounts for the servers that are offline.
My question is... can I configure autofs to be less diligent about looking for something which is not there? Or do I just have to live with the delays? This is still a lot better than what I had before, using scripts to mount and umount the server file systems. I would often forget to run the umount script before powering down a server, which caused all sorts of hang-ups as mount and NFS tried unsuccessfully to access the missing file systems.
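For what it's worth, here is the sort of change I have been wondering about, in case it helps frame the question (untested on my setup; the option names are real autofs/NFS options, but the values are guesses): capping MOUNT_WAIT in /etc/sysconfig/autofs so automount gives up on an unreachable server sooner, and adding soft,retry=0 to the map entries so mount(8) fails fast instead of retrying for its default two minutes.
Code:
# /etc/sysconfig/autofs -- cap how long automount waits for mount(8)
# (default -1 means wait indefinitely for the mount attempt to finish)
MOUNT_WAIT=5

# /etc/auto.nfs -- example entry: fail fast when the server is powered off
# soft    = return an error instead of retrying RPCs forever
# retry=0 = don't let mount(8) keep retrying for its default 2 minutes
data09.1 -rw,soft,retry=0 taylor09:/media/data09.1a
I don't know whether these are the right knobs, or whether soft mounts are safe enough for my use (I've read they can risk data corruption on writes), so corrections welcome.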
TIA,
Ken