Linux - Security: This forum is for all security-related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.
I am running iptables with 20,000+ rules. That is of course quite a high number, and I am seeing performance issues (packet drops) which I suspect are caused by it.
To reduce the number of rules, I am thinking of using the "multiport" module, which could save quite a number of iptables entries.
I am wondering, however, whether this will actually bring an improvement: I do not know if "multiport" is merely a usability enhancement, or if it actually reduces the number of checks performed when a packet is received.
If using multiport lets me have 15,000 rules instead of 20,000, will I actually see a performance gain?
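To make the consolidation concrete, here is a minimal sketch. The address (192.0.2.10) and port list are made-up example values, and the script only prints the iptables commands rather than applying them, so it can be run and inspected without root:

```shell
#!/bin/sh
# Print (not apply) two equivalent rule sets, so this runs without root.
# 192.0.2.10 and the ports are illustrative values only.

emit_single_port_rules() {
    # Before: one rule per port -- five separate entries to traverse.
    for p in 20 21 22 25 80; do
        echo "iptables -A INPUT -s 192.0.2.10 -p tcp --dport $p -j ACCEPT"
    done
}

emit_multiport_rule() {
    # After: a single rule using the multiport match for the same ports.
    echo "iptables -A INPUT -s 192.0.2.10 -p tcp -m multiport --dports 20,21,22,25,80 -j ACCEPT"
}

emit_single_port_rules
emit_multiport_rule
```

Five entries collapse into one, so a non-matching packet is compared against one rule instead of five before moving on.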
As I understand things (and I stand to be corrected), when a packet is processed by iptables it is checked against each rule in sequence until one matches.
So I would imagine that minimising the number of rules each packet is checked against would help performance.
I would suggest:
-> using the multiport and iprange modules where applicable, to reduce the overall rule count;
-> reducing jumps backwards (-j RETURN) and forwards (-j *) between chains, if you do much of that;
-> putting your ESTABLISHED/RELATED rules as close to the top as possible;
-> ordering your rules so that the most frequently matched ones (web/email/etc.) are near the top.
But this is coming from somebody with 160 lines worth of iptables script, so I probably wouldn't pay much attention to me :P
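The ordering ideas above can be sketched in iptables-restore format (illustrative only; the ports and the DROP policy are example values, not a recommendation for this particular setup):

```
*filter
:INPUT DROP [0:0]
# 1. Established/related traffic first: the bulk of packets match here
#    and never traverse the rest of the chain.
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# 2. The most frequently hit services next (web/mail in this example).
-A INPUT -p tcp -m multiport --dports 80,443,25 -j ACCEPT
# 3. Rarer matches and catch-alls last.
-A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
COMMIT
```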
I do not have IP ranges (and cannot use them), and I only go "forth" through chains (never back and forth). I do not use "state" rules either. So far, the only minimisation option available to me is multiport.
If multiport reduces the number of checks, and there is no significant tradeoff in CPU load, I'm happy.
Of course, if you have a single line for each IP address, each with a different destination port such as 20, 21, 22, 25 or 80, then using multiport would make sense. Other than that, without any relevant details there is no sense in telling you what to do or expect. With regard to performance and solutions, check Netfilter Performance Testing (PDF), chapter 4.3, "Performance dependency on the number of rules", and see which solutions exist. If you dismiss them, you should do so based on empirical evidence.
I am running iptables with 20,000+ rules... If using multiport lets me have 15,000 rules instead of 20,000, will I actually see a performance gain?
For me (and bear in mind that I am not in a position to test your particular rule constellation, on your hardware, with your load), that sounds like far too many. However, I should point out that the number of rules is not a strict determinant of performance.
For a simple explanatory case: imagine a situation in which you do lots of IP address filtering on a certain type of packet.
If, for example, you can ensure that the majority of packets do not pass through this (over-)long filtering sequence, it may have little impact on the overall throughput of the box (assuming that throughput is your main performance criterion).
On the other hand, you may have a situation in which all of the packets do go through an extended filtering sequence, and that might hurt throughput even though the total number of rules is smaller than in the previous case.
So, I would recommend several things:
Think carefully about whether you really need to do all this filtering... what would go wrong if you didn't?
Can you design the ruleset so that most packets never hit the long chain? (E.g. look carefully at packet stats, let the most common packets branch off into short chains, and order the comparisons so that only the packets that really need the long chain traverse it.)
There may be a case for splitting the workload.
Can you confirm that the performance shortfall is actually connected to the dropped packets?
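One way to picture the "most packets never hit the long chain" idea, again in iptables-restore format (the chain names, ports and policy here are made up for illustration): the top of INPUT sorts traffic after one or two checks, so only the remainder traverses the long filtering sequence.

```
*filter
:INPUT DROP [0:0]
:WEB - [0:0]
:MAIL - [0:0]
:LONG_FILTER - [0:0]
# Common traffic peels off into short chains almost immediately...
-A INPUT -p tcp -m multiport --dports 80,443 -j WEB
-A INPUT -p tcp --dport 25 -j MAIL
# ...and only what remains is checked against the long rule list.
-A INPUT -j LONG_FILTER
COMMIT
```

Per-rule packet counters (`iptables -L INPUT -v -n`) are a good way to check whether the dispatch rules really are absorbing most of the traffic.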
Given that I've never played with ipset, I'd put that off until last. I'm sure that it is a very worthwhile option; I'm just very conservative about that kind of 'leap in the dark' option (an option where I can't really be sure about the downsides until I have committed a lot of time to it). I'm sure that is just me being conservative, and someone will turn up and tell you how easy it was and how it solves all problems without introducing any new issues. Won't they?
I believe in the 'short, stubby chains are best' approach to iptables performance (and that some people just add rules, one after another, without ever thinking about efficiency), so I'd tend to try to push in that direction, but, like fukawi1, you are way out of what would be my comfort zone, here.
Another thing to note is that some iptables modules are not exactly lightweight; going at, e.g., conntrack or some of the fancier and more obscure matching modules like a starving man at an all-you-can-eat buffet can cause performance limitations even before you reach very high rule counts. (Something I should have thought of earlier: make sure the box this runs on is not memory-starved. If it is, and at periods of peak load the complex rules and modules cause it to start swapping vigorously, performance will suffer from the swapping activity, and that is almost certain to cause problems.)
And if you really, really need all that filtering, and the majority of the packets have to pass through it, ipset may well be your best option.
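For a rough sense of what that looks like: the commands below are a hypothetical sketch, not a tested recipe. The set name and addresses are invented, and applying this for real requires the ipset tool and root privileges. The point is that thousands of per-address rules become one set plus one rule, and set membership is a hash lookup rather than a linear scan.

```
# Create a set of IPv4 addresses and populate it (example addresses).
ipset create blocked_hosts hash:ip
ipset add blocked_hosts 198.51.100.7
ipset add blocked_hosts 198.51.100.23
# One rule now does the work of N per-address rules; match cost stays
# roughly constant as the set grows.
iptables -A INPUT -m set --match-set blocked_hosts src -j DROP
```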
I now feel I was not precise enough. I have 20,000+ rules in total, but they are not all in one sequence.
Instead, I have a set of about 4,000 rules, each pointing to a different chain of about 4 rules on average. So a packet is checked against roughly 4,000 + 4 rules at most.
But my main question seems to have been answered: using multiport to consolidate rules with the same source and destination IP but different ports could bring an improvement, and at the very least will not make things worse.
Last edited by Guigoune31; 11-23-2011 at 08:22 AM.
Instead, I have a set of about 4,000 rules, each pointing to a different chain of about 4 rules on average. So a packet is checked against roughly 4,000 + 4 rules at most.
Sorry, I jumped to conclusions. The only way I could imagine anyone coming up with anything like that number of rules was to imagine you had one big, long list of bad addresses that you were trying to filter out. I should have asked more questions before replying...