I am trying to decide between hosts.deny, ipset/iptables, and the route command for blocking botnets and the like.
I came to find out that SSH and possibly Apache no longer use the tcpwrappers mechanism that hosts.deny relies on to block hosts. It looks like this approach is rarely used anymore, that it requires starting the services the old way under xinetd, and that it may not even work. What is really strange is that the issue linked below is not very old, and hosts.deny was working for that person:
https://github.com/Ultimate-Hosts-Bl...ist/issues/588
Should hosts.deny be working or not on a modern Linux server?
This would be my preferred approach, if it still works.
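For reference, the kind of entry I am talking about is just the classic tcpwrappers syntax, something like this (addresses made up):

    # /etc/hosts.deny -- only honored by daemons linked against libwrap
    sshd: 203.0.113.45
    # a trailing dot matches the whole 198.51.100.* range
    ALL: 198.51.100.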
Since it is getting complicated anyway, I will ask a further question:
Should hosts.deny work for the VM guests on a virtualization server like Proxmox if the host has the hosts.deny file?
As for using iptables, my wild idea was to get the hosts0.deny file from here:
https://github.com/Ultimate-Hosts-Bl...osts.Blacklist
and use "cut" to get just the IP addresses which worked ok. But when using ipset to feed the addresses into a table I get:
"ipset v7.10: hash is full cannot add more elements".
I do see some of the addresses in that list connecting to my servers, so using it would help.
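For reference, this is roughly what I am doing, and my guess is that the fix is to create the set with a larger maxelem, since the default looks like it is only 65536 (the set and file names are just what I picked):

    # blacklist.txt = just the IP addresses, one per line, produced with cut
    ipset create blacklist hash:ip maxelem 1000000
    # slow way: add them one at a time
    while read -r ip; do ipset add blacklist "$ip"; done < blacklist.txt
    # faster way: turn the list into "add" commands and load it in one shot
    sed 's/^/add blacklist /' blacklist.txt | ipset -exist restore
    # then drop anything matching the set
    iptables -I INPUT -m set --match-set blacklist src -j DROP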
I also read that using the "route" command to block addresses incurs less of a performance penalty than iptables, but I do not see a way to add addresses in bulk the way ipset does. The blacklist file is 7.8 MB after being processed by cut, so loading it one address at a time from a script would take a while.
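For comparison, the route version I was picturing is something like this (a rough sketch; looping over the whole 7.8 MB file like this is exactly the slow part I am worried about):

    # null-route a single address
    ip route add blackhole 203.0.113.45
    # bulk load from the same blacklist.txt, one route per address
    while read -r ip; do ip route add blackhole "$ip" 2>/dev/null; done < blacklist.txt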
I want to rule out putting a hardware appliance/firewall or another computer in front of this one. I also want to rule out application-level blocking, because I know it will not scale to this many addresses.
Philosophically, are we getting to the point where one computer cannot "comprehend" the entire internet at once? And yet we are going to add even more addresses with IPv6?