Linux - Newbie: This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-to's, this is the place!
08-10-2017, 09:43 PM | #1
Member | Registered: Jul 2009 | Location: WI, USA | Distribution: Debian 8, Ubuntu 16.04, CentOS 7 | Posts: 143
NFS does not mount without nolock option from one NFS server, but mounts correctly from the other
Hello,
I am trying to set up a simple NFS setup in a small three computer system.
client1: Ubuntu 16.04
server1: Debian 8.5
server2: Ubuntu 16.04
When I try to mount the NFS exports from server1 and server2 on client1, I can mount from server1 without the "nolock" option, but cannot do so for the exports from server2.
Code:
root@client1:/nfs# cat /etc/fstab
server1:/export/data /nfs/data nfs rw,async,hard,intr 0 0
server2:/export/home /nfs/home nfs rw,nolock,async,hard,intr 0 0
server2:/export/repo /nfs/repo nfs rw,async,hard,intr 0 0
root@client1:/nfs# mount -av
/ : ignored
/boot/efi : already mounted
/scratch : already mounted
none : ignored
mount.nfs: timeout set for Thu Aug 10 20:57:17 2017
mount.nfs: trying text-based options 'hard,intr,vers=4,addr=server1.ip,clientaddr=client1.ip'
/nfs/data : successfully mounted
mount.nfs: timeout set for Thu Aug 10 20:57:17 2017
mount.nfs: trying text-based options 'nolock,hard,intr,vers=4,addr=server2.ip,clientaddr=client1.ip'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'nolock,hard,intr,addr=server2.ip'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying server2.ip prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying server2.ip prog 100005 vers 3 prot UDP port 53584
/nfs/home : successfully mounted
mount.nfs: timeout set for Thu Aug 10 20:57:17 2017
mount.nfs: trying text-based options 'hard,intr,vers=4,addr=server2.ip,clientaddr=client1.ip'
mount.nfs: mount(2): No such file or directory
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
root@client1:/nfs# systemctl status rpc-statd
● rpc-statd.service - NFS status monitor for NFSv2/3 locking.
Loaded: loaded (/lib/systemd/system/rpc-statd.service; disabled; vendor preset: enabled)
Active: active (running) since Thu 2017-08-10 19:49:21 CDT; 1h 36min ago
Process: 3034 ExecStart=/sbin/rpc.statd --no-notify $STATDARGS (code=exited, status=0/SUCCESS)
Main PID: 3036 (rpc.statd)
CGroup: /system.slice/rpc-statd.service
└─3036 /sbin/rpc.statd --no-notify
root@client1:/nfs# cat /etc/hosts.deny
ALL: ALL
root@client1:/nfs# cat /etc/hosts.allow
sshd:client1, server1, server2
rpcbind:client1, server1, server2
The error mentions the rpc.statd service, but it seems to be running!
Both servers have a similar setup:
Server1:
Code:
somesh@server1:~$ cat /etc/exports
/export/data client1(rw,sync,no_subtree_check,no_root_squash)
somesh@server1:~$ cat /etc/hosts.deny
# EMPTY
somesh@server1:~$ cat /etc/hosts.allow
portmap: 192.168.1.
portmap: client1
Server2:
Code:
root@server2:/export# cat /etc/exports
/export/home client1(rw,sync,no_subtree_check,no_root_squash)
/export/repo client1(rw,sync,no_subtree_check,no_root_squash)
root@server2:/nfs# cat /etc/hosts.deny
#Empty
root@server2:/export# cat /etc/hosts.allow
portmap: 192.168.1.
portmap: client1
rpcbind: client1
ALL:127.0.0.1
ALL: client1
Any pointers on what I may be missing, or where I should be looking to diagnose the issue, would be greatly appreciated.
TIA
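Since the failing mount falls back to NFSv3 locking, it may also be worth checking from client1 whether each server's portmapper actually registers the lock-related RPC services. A sketch using `rpcinfo` from the rpcbind package (server names as used above):

```shell
# Query each server's portmapper for the NFSv3 locking services:
# 'status' is rpc.statd and 'nlockmgr' is the kernel lock manager.
# A server that omits them, or filters their ports, forces 'nolock'.
for srv in server1 server2; do
    echo "== $srv =="
    rpcinfo -p "$srv" | grep -E 'status|nlockmgr'
done
```

If server1 lists both services but server2 does not (or the query itself is blocked), that points at the server or firewall rather than the client.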
08-11-2017, 01:21 AM | #2
LQ Guru | Registered: Jan 2005 | Location: USA and Italy | Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint | Posts: 5,524
This looks suspicious:
Code:
portmap: 192.168.1.
The last octet seems to be missing. You might also explicitly enable rpc.statd.service
Code:
$ service rpc.statd.service enable
1 member found this post helpful.
08-11-2017, 08:24 AM | #3
Member | Registered: Jul 2009 | Location: WI, USA | Distribution: Debian 8, Ubuntu 16.04, CentOS 7 | Posts: 143 | Original Poster
Thanks AwesomeMachine.
Quote:
Originally Posted by AwesomeMachine
This looks suspicious:
Code:
portmap: 192.168.1.
The last octet seems to be missing.
I thought a missing octet indicates all possible values in that position. At least, that's how it works for the ssh daemon on my other systems, where I have only the first two octets in hosts.allow.
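That matches hosts_access(5): a pattern ending in a dot is an address-prefix wildcard, while a pattern beginning with a dot is a hostname-suffix wildcard. A minimal illustration (daemon names are examples):

```
# /etc/hosts.allow
# Trailing dot = address prefix: matches 192.168.1.0 through 192.168.1.255
sshd: 192.168.1.
# Leading dot = hostname suffix: matches any host under example.com
sshd: .example.com
```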
Quote:
Originally Posted by AwesomeMachine
You might also explicitly enable rpc.statd.service
Code:
$ service rpc.statd.service enable
Code:
$ sudo service rpc.statd.service enable
rpc.statd.service: unrecognized service
So instead I tried,
Quote:
systemctl enable/start/restart rpcbind.service
But nothing changes.
Also, as I mentioned in my original log, rpc-statd seems to be running:
Code:
$ systemctl status rpc-statd
● rpc-statd.service - NFS status monitor for NFSv2/3 locking.
Loaded: loaded (/lib/systemd/system/rpc-statd.service; disabled; vendor preset: enabled)
Active: active (running) since Thu 2017-08-10 19:49:21 CDT; 12h ago
Main PID: 3036 (rpc.statd)
CGroup: /system.slice/rpc-statd.service
└─3036 /sbin/rpc.statd --no-notify
Aug 10 20:21:33 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Aug 10 20:21:33 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Aug 10 20:21:33 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Aug 10 20:54:35 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Aug 10 20:55:17 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Aug 10 21:21:06 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Aug 10 21:46:46 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Aug 10 21:50:41 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Aug 11 08:13:01 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
Aug 11 08:15:05 client1 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
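The status output also shows "disabled; vendor preset: enabled", i.e. the unit is running but not enabled at boot. A sketch of the systemd commands for making it persistent on this setup (note the unit name is rpc-statd, with a hyphen):

```shell
# Enable and start the NFS status monitor under systemd.
# The earlier 'service rpc.statd.service enable' fails because
# 'enable' is a systemctl verb, not a sysvinit service action,
# and the unit is named rpc-statd.service.
sudo systemctl enable rpc-statd.service
sudo systemctl restart rpc-statd.service
systemctl is-enabled rpc-statd.service
```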
08-11-2017, 08:39 AM | #4
Senior Member | Registered: Apr 2004 | Location: Baton Rouge, Louisiana, USA | Distribution: Debian Stable | Posts: 2,546
"mount.nfs: mount(2): No such file or directory"
This makes me suspect that one of the required directories is missing. I'm guessing that the shared directories do indeed exist on the servers, but that one of the directory mount points is missing on the client.
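A quick way to test that theory on the client, using the mount points from the fstab quoted above (a sketch; the paths come from the OP's config):

```shell
# Report which of the client's NFS mount points exist locally.
# A missing directory on either end can surface as
# "mount(2): No such file or directory".
for d in /nfs/data /nfs/home /nfs/repo; do
    [ -d "$d" ] && echo "$d: present" || echo "$d: MISSING"
done
```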
08-11-2017, 08:43 AM | #5
Member | Registered: Jul 2009 | Location: WI, USA | Distribution: Debian 8, Ubuntu 16.04, CentOS 7 | Posts: 143 | Original Poster
Quote:
Originally Posted by IsaacKuo
"mount.nfs: mount(2): No such file or directory"
This makes me suspect that one of the required directories is missing. I'm guessing that the shared directories do indeed exist on the servers, but that one of the directory mount points is missing on the client.
Thanks IsaacKuo. But the directories do exist in the right places. In fact, if I add the 'nolock' option, they mount.
08-11-2017, 09:05 AM | #6
Senior Member | Registered: Apr 2004 | Location: Baton Rouge, Louisiana, USA | Distribution: Debian Stable | Posts: 2,546
What are the file systems on the server side? I have to do some weird stuff to get nfs shares in tmpfs to work properly. There may be issues with other file system types also.
08-11-2017, 09:25 AM | #7
Member | Registered: Jul 2009 | Location: WI, USA | Distribution: Debian 8, Ubuntu 16.04, CentOS 7 | Posts: 143 | Original Poster
Quote:
Originally Posted by IsaacKuo
What are the file systems on the server side? I have to do some weird stuff to get nfs shares in tmpfs to work properly. There may be issues with other file system types also.
It's ext4 on all machines. Relevant mount points are starred:
Code:
root@server1:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
*/dev/sda2 ext4 110G 37G 68G 36% / <--- export
udev devtmpfs 10M 0 10M 0% /dev
tmpfs tmpfs 6.3G 666M 5.7G 11% /run
tmpfs tmpfs 16G 108M 16G 1% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda4 ext4 9.1G 4.5G 4.2G 52% /var
/dev/sda3 ext4 19G 45M 18G 1% /tmp
/dev/sda1 vfat 487M 132K 486M 1% /boot/efi
/dev/sda5 ext4 314G 113G 186G 38% /scratch
/dev/sdb3 ext4 716G 647G 33G 96% /storage
/dev/sdb1 ext4 110G 61G 44G 59% /home
/dev/sdb2 ext4 92G 60M 87G 1% /share
tmpfs tmpfs 3.2G 20K 3.2G 1% /run/user/1001
------------------------------------------------------------
root@server2:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs tmpfs 1.6G 9.3M 1.6G 1% /run
/dev/sda1 ext4 138G 4.4G 127G 4% /
tmpfs tmpfs 7.8G 192K 7.8G 1% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
*/dev/sdb1 ext4 917G 76M 871G 1% /storage <--- export
/dev/sda6 ext4 306G 67M 290G 1% /opt
tmpfs tmpfs 1.6G 32K 1.6G 1% /run/user/108
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/1001
--------------------------------------------------------------
root@client1:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs tmpfs 1.6G 9.6M 1.6G 1% /run
/dev/nvme0n1p2 ext4 235G 26G 198G 12% /
tmpfs tmpfs 7.8G 108K 7.8G 1% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/nvme0n1p1 vfat 511M 3.4M 508M 1% /boot/efi
/dev/nvme0n1p4 ext4 227G 60M 216G 1% /scratch
server1:/export/data nfs4 110G 37G 68G 36% /nfs/data <--- mounted w/o nolock from server1
server2:/export/home nfs 917G 75M 871G 1% /nfs/home <--- mounted w/ nolock from server2
tmpfs tmpfs 1.6G 20K 1.6G 1% /run/user/108
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/1001
One thing I do see is that the mount from server2 is nfs (as opposed to nfs4 from server1). But that may not be the reason, as I just added a second client (client2) to the network with the same configuration as client1, and I can mount on client2 without the nolock option:
Code:
root@client2:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs tmpfs 1.6G 9.3M 1.6G 1% /run
/dev/nvme0n1p2 ext4 235G 6.5G 217G 3% /
tmpfs tmpfs 7.8G 192K 7.8G 1% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/nvme0n1p1 vfat 511M 3.4M 508M 1% /boot/efi
/dev/nvme0n1p4 ext4 227G 60M 216G 1% /scratch
*server1:/export/data nfs4 110G 37G 68G 36% /nfs/data <--- mounted w/o nolock from server1
tmpfs tmpfs 1.6G 24K 1.6G 1% /run/user/108
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/1001
*server2:/export/home nfs 917G 75M 871G 1% /nfs/home <--- mounted w/ nolock from server2
*server2:/export/repo nfs 917G 75M 871G 1% /nfs/repo <--- mounted w/o nolock from server2
I'll scrutinize the client1 and client2 settings for any difference that might give a further clue.
Meanwhile, if anyone has any other hints/ideas, please share.
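To see exactly which protocol version each mount negotiated (fstype nfs here means v3, nfs4 means v4), the live mount table can be filtered. A sketch reading /proc/mounts:

```shell
# Print device, fstype, and options for every NFS mount;
# the options field includes the negotiated vers= value.
awk '$3 ~ /^nfs4?$/ { print $1, $3, $4 }' /proc/mounts
```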
08-11-2017, 10:42 AM | #8
Member | Registered: Jul 2009 | Location: WI, USA | Distribution: Debian 8, Ubuntu 16.04, CentOS 7 | Posts: 143 | Original Poster
It seems the issue stems from hosts.deny/allow configuration as AwesomeMachine had mentioned earlier.
client1 is firewalled to be accessible only via vpn. So I had this in hosts.deny
and in hosts.allow
where all vpn connections get an ip xxx.yyy.aaa.bbb
client2 did not have this vpn firewall, and if I remove the vpn restriction on client1, the nfs mounts work!
Now the question is how can I best use NFS with vpn restrictions?
I found that this combination works:
Code:
$ cat /etc/hosts.deny
ALL: ALL
$ cat /etc/hosts.allow
sshd:xxx.yyy.
rpcbind:ALL
But this seems a little insecure to me. I was hoping to restrict NFS access to only vpn-restricted ips, i.e., xxx.yyy.aaa.bbb.
So I tried following combinations:
Code:
$ cat /etc/hosts.deny
ALL EXCEPT in.rpcbind: ALL
ALL EXCEPT in.portmap: ALL
$ cat /etc/hosts.allow
sshd:xxx.yyy.
rpcbind:ip.of.client.1
rpcbind:ip.of.server.1
portmap:ip.of.client.1
portmap:ip.of.server.1
But I still get the same error.
Any pointers on what I may be missing in the hosts.deny/hosts.allow files would be appreciated.
Thanks,
08-11-2017, 10:59 AM | #9
Member | Registered: Jul 2009 | Location: WI, USA | Distribution: Debian 8, Ubuntu 16.04, CentOS 7 | Posts: 143 | Original Poster
Found it!
I think I figured it out from here:
https://help.ubuntu.com/community/SettingUpNFSHowTo
Code:
$ cat /etc/hosts.deny
ALL: ALL
$ cat /etc/hosts.allow
sshd:xxx.yyy.
rpcbind: xxx.yyy.0.0/255.255.0.0
rpcbind: 127.0.0.1
The key, I believe, was adding "rpcbind: 127.0.0.1" to hosts.allow. This seems to be working now!
Thanks for the hints and help.
1 member found this post helpful.