Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
I am running Fedora 7, and have come across a problem with an NFS mount:
I cannot find a way to tell the mount command (issued on my firewall machine) to use a particular interface's IP address for its NFS mount request. The result is that I have to put an extra address declaration in every NFS server on my network: one to cover most of the machines in this shop (192.168.1.0/24), and a second to handle the special case of the firewall host, which has two addresses.
Incidentally, the address this firewall host is using happens to be the one for eth0, not eth1, even though eth1 is the interface the UDP packets requesting the NFS mount actually leave through.
Any (non-vulgar) suggestions will be gratefully accepted.
Yes; this is a firewall system. eth0 is my public IP address, and eth1 has an IP address of 192.168.x.1. My routing table has the public (eth0) address as its default route.
All private addresses are translated (SNAT) on the firewall into the (one) public IP address that has been assigned to me.
I don't see why the other interface would be used because the hostname should resolve to the 192.168.1.0/24 network. I don't think that the NFS servers inside your network should know anything about the outside.
When you mentioned your routing table, do you mean the routing table on the firewall? The default gateway for the other hosts should be your firewall's eth1 IP address.
The first part of the message indicates that the firewall is mounting shares on other hosts in your LAN. I take it that this computer has other roles, and is also being used as a gateway.
Quote:
The result is that I have to put an extra address declaration in every NFS server on my network
By address declaration, are you referring to /etc/hosts, or to /etc/exports?
Using "192.168.1.0/24" should be sufficient to only allow exporting shares to hosts on the lan. On one of these hosts, what does "host <firewall-hostname>" return. I think it should be the address of eth1 and only the address of eth1.
Is the firewall host also running a DNS server? Sometimes a DNS server runs inside a LAN to resolve local hostnames, while a second DNS server runs outside the LAN, in the DMZ, to resolve addresses outside the network. The inside DNS knows nothing about outside addresses; conversely, the outside DNS knows nothing about LAN hosts. (However, the outside DNS does have the address of the mysql server in its /etc/hosts file for its own use, but there is no entry for it in the directory.) Maybe a misconfigured DNS server could be causing a problem, but this is just a guess.
I'm trying to understand why eth0 would even be considered. Even if the eth0 IP address has a registered domain name, it should resolve to a local address first, without having to query a DNS server that would have that info. Could the order of the entries in the "hosts: ..." line of your /etc/nsswitch.conf file or /etc/host.conf file be wrong?
Quote:
Originally Posted by /etc/host.conf
order hosts, bind
Quote:
Originally Posted by /etc/nsswitch.conf
hosts: files mdns4_minimal [NOTFOUND=return] dns
networks: files dns
Check your entries for hosts, networks, rpc and automount. If you use NIS, do you have conflicting information between what /etc/hosts says and what NIS returns? Imagine if some hosts have an entry in /etc/hosts while others don't, and NIS returns different info. I doubt that is the case, because you would have more serious problems. I'm just trying to brainstorm here. (I hope I'm not showing signs of AADD here!)
I guess I needed to be clearer about who the actors in this problem are.
The firewall machine has quite a few uses in addition to packet filtering; it is in use for mail transport, public and private DNS, HTTP and HTTPS, and as a local NTP source, to name a few. In some of those roles, I have wanted to NFS-mount parts of the filesystems on other machines behind the firewall. If I attempt to perform this mount by issuing a command to a shell on the firewall, it fails. If I issue a mount command on any of the other machines, naming the firewall as the remote target, it succeeds (if the right entry is in /etc/exports).
If I attempt a mount
Quote:
firewall -> local_machine
it fails. The local machine's log then contains a notation that an NFS request has been denied from {firewall_public_address}.
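Concretely, the two directions look something like this ("local_machine" and the paths here are placeholder names, not my actual setup):
Quote:
# Issued on the firewall, mounting a LAN machine's export: this fails.
mount -t nfs local_machine:/export/data /mnt/data
# Issued on any other LAN machine, with the firewall as the target: this succeeds.
mount -t nfs firewall:/export/data /mnt/data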
I do not have NIS configured or running. The default route I mentioned in my first post is the default route on the firewall.
The "extra declaration" that I find necessary is on every machine behind the firewall, to allow the firewall NFS mounts to succeed. It has gone into /etc/exports, and is an addition to the 192.168.x.0/24 that I, too, would have expected to be sufficient.
My /etc/host.conf file has one line: order hosts, bind. The file /etc/hosts contains only 3 lines (plus comments):
Quote:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.207.1 firewall.digitalelephant.org firewall
In short, I haven't a clue as to why the NFS mount is using the wrong address, either.
It seems to me that when the firewall refers to itself, it is using an outside DNS instead of the local files, and the address returned is the one for eth0 instead of eth1. That is why I was interested in /etc/host.conf and nsswitch.conf.
Part of the problem may be that you are using the same host for outside and inside DNS; the outside DNS should be in the DMZ. Suppose one of the hosts is called "dumbo". Do you get an inside address or an Internet address for dumbo if you ping dumbo from the firewall, or run "host dumbo"? I think that each host may be resolving to the Internet address instead of a local address (on the firewall), causing the NFS mount to fail.
I thought of that, too; that is why I posted the text returned from the dig command (executed on the firewall). That transcript clearly shows that the firewall machine (which is issuing the flawed NFS request) is able to resolve its own name to an IP address on eth1, rather than to the public IP address on eth0.
Similarly, the other systems on the local net resolve the firewall's name to its internal address (192.168.x.1). I just verified that having the firewall ping itself also resolves the name to the local address.
It is true that I have configured DNS to have a split personality. The way I have done this is to run two separate named daemons. The local daemon is authoritative for the local domain, listens on 192.168.x.1 and on 127.0.0.1, and only hands out 192.168.x.y addresses. It consults the root name servers for names it does not know (via the hints in named.ca). The public named daemon listens only on the public IP address, and is authoritative for the public view of my domain. The only address it hands out is my public IP address, and that only for host names in its zone table.
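A sketch of the two configurations (the listen addresses and zone file names here are illustrative, not my actual files):
Quote:
# named.conf for the local daemon: authoritative for the local view,
# hands out 192.168.x.y addresses only, with root hints for everything else.
options { listen-on { 127.0.0.1; 192.168.1.1; }; };
zone "." { type hint; file "named.ca"; };
zone "digitalelephant.org" { type master; file "internal.zone"; };

# named.conf for the public daemon: listens only on the public address,
# authoritative for the public view, hands out only the public IP.
options { listen-on { 203.0.113.10; }; };
zone "digitalelephant.org" { type master; file "public.zone"; };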
So I think we are right where I began: I haven't a clue as to why NFS is behaving this way.
SOLVED: As with lots of problems, the trouble wasn't with the subsystem that was exhibiting the symptoms. Instead, I had an overeager rule in my iptables ruleset, which did SNAT on all packets in the POSTROUTING stage, even if the actual destination was the local net. Restricting this rule so that it applied only to packets outbound into the greater Internet solved the problem (and a couple of other nasty ones, too). [sheepish grin.]
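For anyone hitting this later, the difference between the two rules is roughly the following (203.0.113.10 is a placeholder for the real public IP):
Quote:
# Before: SNAT everything in POSTROUTING, even packets bound for the LAN,
# so NFS requests to inside hosts arrived with the public source address.
iptables -t nat -A POSTROUTING -j SNAT --to-source 203.0.113.10
# After: translate only packets actually leaving via the public interface.
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.10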