Linux - Security
This forum is for all security related questions. Questions, tips, system compromises, firewalls, etc. are all included here.
Hey guys, I'm hoping someone can clarify some things for me; I couldn't seem to find a decent answer elsewhere.
I had iptables set up with just two rules --
Code:
INPUT -s 192.168.1.0/24 -j ACCEPT
INPUT -j REJECT
My assumption is that would essentially allow all traffic on the local LAN, and nothing else.
Yet with that, if I create an NFS export and restart the NFS service, it totally hangs on NFS Quotas (which aren't configured).
I'm doing this on CentOS 6. I have been able to clear the error altogether by adding this line in between the two above --
Code:
INPUT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
I guess I thought that line shouldn't have mattered if I'm set to accept anything coming from that subnet anyway. Can anyone teach me something new about why that is, or how iptables works, so I can be more efficient in the future?
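For reference, those two rules written out as full commands would look something like this (a sketch, assuming the default filter table on CentOS 6 and a root shell):

```shell
# Sketch of the two-rule set above as full iptables commands (the rules
# quoted earlier are the same thing in /etc/sysconfig/iptables style,
# minus the "iptables -A" prefix).
iptables -A INPUT -s 192.168.1.0/24 -j ACCEPT   # accept anything sourced from the LAN
iptables -A INPUT -j REJECT                     # reject everything else
```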
Last edited by Mavman; 11-06-2011 at 01:10 AM.
Reason: Clarifying code
I'm sorry, I did that from memory, and you are correct, it was actually a REJECT statement. (I edited the original post to reflect that.)
I don't think it's going to be an issue with NFS itself, considering the entire exports file contains just the one line -
Code:
/directory 192.168.1.*(rw,sync)
Without iptables on, that works like a champ; if I turn iptables on with just those two lines, it hangs.
Seriously, I challenge anyone to try this. I would really like to know.
So it works if you have the NEW,RELATED,ESTABLISHED rule?
I haven't had much to do with simplified sets of rules such as this; I've always used full sets of rules that have generally had a REL/EST rule in there anyway. But I THINK it shouldn't matter if you are matching anything from that subnet regardless of its state.
What is the actual problem -- that nfs-server won't restart with the iptables rules in place, or that the client can't connect due to the iptables rules?
Try
Code:
watch iptables -nvL
then try to connect from the client while watching the packet/byte counters, to see which rule it's hitting, if any.
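Zeroing the counters first makes new hits easier to spot; a quick sketch (both commands assume root):

```shell
iptables -Z INPUT                            # zero the packet/byte counters on INPUT
watch 'iptables -nvL INPUT --line-numbers'   # re-list the rules every 2 seconds
# Now attempt the mount (or the service restart) and see which rule's
# counters increment.
```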
The problem is the service doesn't even start. If you use that first method and then do an NFS restart, it hangs on 'Starting NFS quotas:', and after a few minutes it fails.
Then it goes to 'Starting NFS daemon:', and after a very long time that fails as well. If you turn off iptables, it works fine.
I don't do many simple iptables configs either, but this was for a special project, so I would like it clarified somehow so I better understand how it works. It just doesn't seem right to me to HAVE to have that other rule in order for this to work.
Edit: Here's the output from after the NFS Daemon fails -
Code:
Starting NFS daemon: rpc.nfsd: writing fd to kernel failed: errno 110 (Connection timed out)
rpc.nfsd: unable to set any sockets for nfsd
But I still don't understand why that rule gives it the ability to create sockets?
Last edited by Mavman; 11-06-2011 at 01:59 AM.
Reason: More info
I don't understand the default RHEL/CentOS iptables setup (RH-FIREWALL or whatever it is) these days, as I use my own, which is somewhat different.
Quote:
In order for NFS to work with a default installation of Red Hat Enterprise Linux with a firewall enabled, IPTables with the default TCP port 2049 must be configured. Without proper IPTables configuration, NFS does not function properly.
The NFS initialization script and rpc.nfsd process now allow binding to any specified port during system start up. However, this can be error prone if the port is unavailable or conflicts with another daemon.
The 'iptables -nvL' just lists those two rules under INPUT; all other chains are empty.
The first rule should cover that TCP 2049 deal; I tried creating an explicit rule for it anyway, with the same result. I was also able to verify that rpcbind is indeed running before I try to restart the service. As this is CentOS 6, it's based off RHEL 6 and uses NFSv4, if that helps at all. I didn't see anything new in the CentOS manual.
Thanks anyway fukawi, I sincerely appreciate your help regardless. If anyone has any other ideas, I'm completely open; I'd really like to know why that's the case.
Side note: I thought about it a bit more, and since the sockets are made locally, it would appear it can also work if you add --
Code:
-I INPUT 2 -s 127.0.0.0/8 -j ACCEPT
That just allows the localhost through the firewall, and that also appears to resolve it. I'm still pretty confused as to why that'd be necessary, however, because the original fix doesn't allow it through that subnet either -- so there are two different methods that fix the same issue. Can someone clear it up for me, please?
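One way to confirm that the RPC traffic is actually traveling over loopback (with a 127.0.0.1 source that the 192.168.1.0/24 rule can never match) would be to watch the lo interface during the restart; a sketch, assuming the conventional rpcbind (111) and NFS (2049) ports:

```shell
# Watch RPC/NFS traffic on the loopback interface while restarting NFS.
# Requires root; the port numbers are the conventional ones and are an
# assumption -- quotad and friends may use other rpcbind-assigned ports.
tcpdump -i lo -n 'port 111 or port 2049' &
service nfs restart
kill %1   # stop the capture afterwards
```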
Last edited by Mavman; 11-06-2011 at 10:22 AM.
Reason: New info
But that's the thing: it's allowing everything with a source in that subnet, so all those ports would be included anyway regardless. Also, just a quick reminder: I do know that I can fix it by using the state rule or by adding the localhost rule. I just wanted a better explanation of why it works in that fashion, because I would have assumed the source rule I listed to be enough.
At this point, it shouldn't hurt to add the specifics about TCP/UDP and the ports involved to the firewall. You might HAVE to be specific in this regard, especially since it seems you can't get this running successfully otherwise.
You can check the FW logs to see why you can't get NFS working. You can also run a tcpdump to check this (either on the FW or wherever the NFS service is being run, if this is a remote server... or both).
You might also be able to check whether there's an issue with NFS itself by looking at the log files. Or, disable the firewall and see if NFS works then (do this to get a good idea of whether it's a FW issue or an NFS issue).
I can try to test this on one of my test machines.
Again, I can get it to run successfully, just fine, with either the '-m state --state NEW,ESTABLISHED,RELATED -j ACCEPT' rule or the '-s 127.0.0.0/8 -j ACCEPT' rule, which is a lot easier than making a bunch of port rules. I just wanted clarification on why it works in this manner, for the purpose of being able to write more efficient firewall rules...
Also, I mentioned that NFS was dumping because of a socket error, in which it failed to create a local socket. Yes, it involves binding the port to the IP, but if the whole IP range is allowed then it shouldn't matter; additionally, it's not even a full-blown connection between two nodes -- just the socket itself cracks up.
There is no issue with NFS; if iptables is disabled, then it works fine. Also, you don't have to go out of your way to test on your machines, but I thank you. I was just wondering if anyone had any more information as to why the loopback & state rules cleared up issues with merely creating local sockets.
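For what it's worth, the fixes discussed in this thread can be combined into one conventional minimal ruleset; a sketch in /etc/sysconfig/iptables style (matching loopback by interface with -i lo, rather than by the 127.0.0.0/8 source, is just the more common idiom):

```shell
# Fragment combining the loopback, state, and subnet rules from this thread.
-A INPUT -i lo -j ACCEPT                                  # local (loopback) traffic
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies to our own connections
-A INPUT -s 192.168.1.0/24 -j ACCEPT                      # anything new from the LAN
-A INPUT -j REJECT                                        # everything else
```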
I am wondering whether this is to do with packet fragmentation. According to the NFS-HOWTO, NFS will not work unless packet fragments can traverse the firewall.
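If fragmentation were the culprit, legacy iptables has an -f/--fragment match for second and further fragments (which carry no TCP/UDP header, so port-based rules can never match them); a hedged sketch for a port-restricted ruleset:

```shell
# Accept non-first fragments from the LAN. Only relevant if the ruleset
# matches on ports, since trailing fragments carry no port information.
iptables -A INPUT -f -s 192.168.1.0/24 -j ACCEPT
```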