Linux - Networking: this forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
I seem to be running into a firewall problem.
This is on SUSE if it matters, SLES 11.4. I've seen it happen on 11.3 too, but I never paid it much attention; when things got problematic I rebooted everything and that seemed to fix it.
Say I have one Linux server NFS-serving a /data folder.
I set the IP addresses of the systems that are allowed to be clients. For example, if server A (IP address 192.168.1.1) has the NFS server set up and exporting /data, I specify that the following are allowed to NFS-mount:
192.168.1.2
192.168.1.3
192.168.1.4
and so on.
And let's say in /etc/hosts I specify that hostname B is 192.168.1.2, C is .3, D is .4, and so on.
Normally I have only one or two IP addresses specified and things work fine for server A: B and C can NFS-mount /data from A, no problem.
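For context, a hypothetical /etc/exports on server A for that setup might look like this (the path matches the post, but the option list is illustrative, not taken from the original configuration):

```
/data   192.168.1.2(rw,sync,no_subtree_check) 192.168.1.3(rw,sync,no_subtree_check) 192.168.1.4(rw,sync,no_subtree_check)
```

After editing the file, `exportfs -ra` (as root) re-reads it without a full NFS server restart, and `exportfs -v` shows what is actually being exported and with which options.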
And in the nfs-server module within yast on server A, it gives me the option to open the port in the firewall, and that's checked; however, I don't know the specific details of what the "NFS server" entry actually does in the firewall.
Today I specified a few more clients that can mount /data from server A, and it was maybe the 5th server where the mount failed. I tried /etc/init.d/nfs-server restart on server A with no luck. But if I turn off the firewall on server A, the mount immediately works; and once it's mounted, if I then re-enable the firewall on server A without changing anything, everything still works and that 5th server can still see everything under /data.
Run the appropriate iptables commands to show you which rules are "seeing" traffic. iptables -vnL might work, but you may need to look at the man page and run iptables with other chain names besides the defaults.
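A sketch of that diagnosis, printing the inspection commands to run (as root) on server A. SuSEfirewall2 installs its own chains, so it's worth listing every chain with counters, not just the default INPUT/OUTPUT/FORWARD:

```shell
# Print the iptables listing commands so you can review them before
# running as root; -vnL shows packet/byte counters per rule, and
# --line-numbers makes individual rules easy to refer to.
for table in filter nat; do
  echo "iptables -t $table -vnL --line-numbers"
done
```

Run the printed commands as root and watch the pkts/bytes counters while the client retries the mount; the rule whose counters increase is the one "seeing" (or dropping) the NFS traffic.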
After reading up on NFS: there are a few things happening that use random dynamic ports, and I believe I need to pin those to static, non-changing values that I then open in the firewall. It has to do with mountd, lockd, and statd.
And it seems to be bad programming on SUSE's part: you can check "open port in firewall" within the yast nfs-server module, but I think that only works if you have one export and only one client host specified; if you do more, NFS starts using additional ports that are not open in the firewall.
## Path: Network/File systems/NFS server
## Description: use fixed port number for mountd
## Type: integer
## Default: ""
## ServiceRestart: nfsserver
#
# Only set this if you want to start mountd on a fixed
# port instead of the port assigned by rpc. Only for use
# to export nfs-filesystems through firewalls.
#
MOUNTD_PORT=""
So after reading some NFS crap on the web, the advice is to go look in /etc/sysconfig/nfs and set some stuff.
Behold: MOUNTD_PORT="" set to nothing.
Hey SUSE, it would be nice if you had this setting available in the yast NFS-server and Firewall modules, and mentioned it in your admin PDF documents.
You do realize you encourage the use of NFS and the firewall because they're right there in yast, and when you don't account for NFS in the firewall it results in FAILURE?
This is why you guys suck and continue to take a back seat to Microsoft Windows.
What other ports do I need to force static to simply use NFS with the firewall enabled?
So it seems I figured a lot of it out.
First, there is the firewall and its attempt to make things easy for you, as in the attached pic.
This is not the case.
I'm starting to think it's best not to use any of the menu drop-downs... even though this is the enterprise version, and you'd think that because you're paying for it, it would work!!
From the Advanced tab in the lower right, a window opens that lets you specify the specific ports for TCP, UDP, RPC, and IP.
Here is what I specified manually to get the NFS server/client to work, so far reliably.
As I found out earlier, a lot of what you need to do is in the text file /etc/sysconfig/nfs, which you have to edit manually; at least it's laid out straightforwardly.
The following are blank by default; you have to add your own values, and those values are then what you specify in your firewall settings. I use 10001 to 10004 here as an example; I actually used something above that, but less than 65535. I forget below what value the system ports are that you are not supposed to use, but I thought there was also some other port range that was restricted but admin-configurable. If someone has a better port range to recommend, please let me know, and why.
Note: SM_NOTIFY_OPTIONS needs the -p inside the quotes to work properly, and the comments above it in the file say so.
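For reference, here is a sketch of what those /etc/sysconfig/nfs entries end up looking like. The variable names are from the SLES 11.x file; the 10001-10004 values are just the example range used in this post, so pick free ports of your own:

```shell
# Sketch of the fixed-port settings in /etc/sysconfig/nfs on SLES 11.x.
# Ports 10001-10004 are placeholders -- use free unprivileged ports.
MOUNTD_PORT="10001"            # rpc.mountd
STATD_PORT="10002"             # rpc.statd
LOCKD_TCPPORT="10003"          # lockd over TCP
LOCKD_UDPPORT="10003"          # lockd over UDP
SM_NOTIFY_OPTIONS="-p 10004"   # note: the -p goes inside the quotes
```

As for the port range question: ports below 1024 are privileged (only root can bind them), and IANA reserves 49152-65535 as the dynamic/ephemeral range, so picking something in the 10000-49151 registered range like this is a reasonable choice. Restart the NFS server after changing the file.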
Again, this is for SUSE Linux Enterprise Server, 11.4 to be exact.
I'm also not sure whether I need UDP opened in addition to TCP for everything, aside from lockd, obviously.
But as soon as I set these up in the nfs file and opened those ports in the firewall, I could NFS-mount on the client without delay and without a "could not mount" error.
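For the firewall side, the equivalent of typing the ports into the Advanced dialog by hand is (as far as I can tell) these variables in /etc/sysconfig/SuSEfirewall2; the port list mirrors the example range above:

```shell
# Sketch of /etc/sysconfig/SuSEfirewall2 for the external zone.
# "nfs" resolves to 2049 via /etc/services; the numeric ports are the
# fixed values set in /etc/sysconfig/nfs in the example above.
FW_SERVICES_EXT_TCP="nfs 10001 10002 10003 10004"
FW_SERVICES_EXT_UDP="nfs 10001 10002 10003 10004"
```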
One other thing is RQUOTAD, which was not listed in /etc/sysconfig/nfs. However, I'm not using quotas anywhere, so I don't care about it at this point.
/etc/services
This is the file that maps all the service names to port numbers for tcp, udp, and sctp.
So when the menu item in the firewall window says it's opening the SSH port, for example, in this file you'll find:
ssh 22/tcp # The Secure Shell (SSH) Protocol [RFC4251]
ssh 22/udp # The Secure Shell (SSH) Protocol [RFC4251]
ssh 22/sctp # SSH [Randall_Stewart] [RFC4960]
You'll also find (at least I found):
# ------------------------------------
sunrpc 111/tcp rpcbind # SUN Remote Procedure Call [Chuck_McManis]
sunrpc 111/udp rpcbind # SUN Remote Procedure Call [Chuck_McManis]
quotad 762/tcp
quotad 762/udp
notify 773/udp {ron7000 - i don't know if this pertains to sm_notify}
nfsd-keepalive 1110/udp # Client status info [Beth_Crespo]
nfs 2049/tcp # Network File System - Sun Microsystems [Brent_Callaghan]
nfs 2049/udp # Network File System - Sun Microsystems [Brent_Callaghan]
nfs 2049/sctp # Network File System [RFC5665]
mountd 20048/tcp # NFS mount protocol [Nicolas_Williams]
mountd 20048/udp # NFS mount protocol [Nicolas_Williams]
# ------------------------------------
Here's where the firewall stuff happens... from what you see in the fancy menu (which in the end doesn't fulfill what's needed) to the ports that actually get opened.
Under /etc/sysconfig/SuSEfirewall2.d/services you find files such as:
- nfs-kernel-server
- nfs-client
- ntp
- sshd, cups {and so on, I have 15 plus a TEMPLATE file}
And it's these file names that show up in the drop-down menu in the yast firewall GUI for "services to allow". This is where I assumed that's all I needed to do: to use NFS, just select "NFS Server Service" in the drop-down. Hell, when you set up the NFS server in yast it gives you a checkbox, "open port in firewall"; you have that checked, it automatically adds "NFS server service" to your firewall's allowed services, and you think you're all good... not so fast.
So in the file ntp, for example (and every file is laid out the same):
TCP=""
UDP="ntp"
RPC=""
IP=""
BROADCAST=""
ntp is defined in /etc/services as port number 123; is that all there is to it?
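For plain TCP/UDP entries, essentially yes: the service name is resolved to a port through /etc/services. A minimal stand-in for that lookup (my own sketch, not SuSE's actual firewall code), run here against a small sample of the file rather than the real one:

```shell
# Resolve a service name + protocol to its port number, the same way a
# firewall script can turn "ntp" into 123. Reads /etc/services-format
# lines on stdin: "name  port/proto  [aliases]".
lookup() {
  awk -v svc="$1" -v proto="$2" \
    '$1 == svc && $2 ~ ("/" proto "$") { split($2, a, "/"); print a[1] }'
}

printf 'ntp 123/udp\nnfs 2049/tcp\nmountd 20048/tcp\n' | lookup mountd tcp  # prints 20048
```

On a real system, `getent services ntp` does the same lookup against the actual /etc/services database.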
And my nfs-kernel-server file has:
TCP=""
UDP=""
RPC="portmap status nlockmgr mountd nfs nfs_acl"
IP=""
BROADCAST=""
Now, those are under RPC, and I can't find "nfs_acl" or "nlockmgr" in the /etc/services file.
So with nfs being port 2049, how does the SUSE firewall GUI in yast actually open that port?
Did they forget it?
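One way around the whole problem (an assumption on my part, based on the TEMPLATE file sitting in that directory): SuSEfirewall2 lets you drop your own service file into /etc/sysconfig/SuSEfirewall2.d/services, and it then shows up in the yast drop-down like the shipped ones. Something like a hypothetical file named nfs-fixed-ports containing:

```shell
# Hypothetical custom SuSEfirewall2 service file; the layout follows the
# shipped files (TEMPLATE documents the format). "nfs" resolves to 2049
# via /etc/services; the numeric ports are the fixed values from the
# 10001-10004 example range used earlier in this thread.
TCP="nfs 10001 10002 10003 10004"
UDP="nfs 10001 10002 10003 10004"
RPC=""
IP=""
BROADCAST=""
```

Opening the fixed ports by number this way sidesteps the RPC-name entries entirely.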
Regarding RPC port numbers, I see that happens under /etc/rpc, where I find:
# ------------------------------------
portmapper 100000 portmap sunrpc rpcbind
rstatd 100001 rstat rup perfmeter rstat_svc
nfs 100003 nfsprog
mountd 100005 mount showmount
rquotad 100011 rquotaprog quota rquota
nlockmgr 100021
status 100024
rpcnfs 100116 na.rpcnfs
# ------------------------------------
From here I've got to look up how RPC works and what those numbers are all about. Are they not port numbers, since they are above the 16-bit {65535} limit?
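They're not port numbers: TCP/UDP ports are 16-bit values (0-65535), while RPC program numbers like 100003 are 32-bit identifiers. The portmapper (listening on port 111, the sunrpc entry above) translates a program number into whatever port that service registered at runtime, which is exactly why mountd, lockd, and statd land on random ports unless pinned in /etc/sysconfig/nfs. A tiny sanity check on the arithmetic:

```shell
# Ports are 16-bit (max 65535); RPC program numbers are 32-bit, so a
# value like 100003 (the "nfs" program in /etc/rpc) cannot be a port.
prog=100003
if [ "$prog" -gt 65535 ]; then
  echo "$prog is an RPC program number, not a port"
fi
```

On a live system, `rpcinfo -p <host>` asks the portmapper for the current program-number-to-port mapping.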