LinuxQuestions.org


Noway2 02-15-2011 11:37 AM

Identify the host to block
 
For the last four days, I have been getting HIDS alerts like these:
Code:

152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C%24.co HTTP/1.1" 405 231
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C%24.com HTTP/1.1" 405 232
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C%24.ex HTTP/1.1" 405 231
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C%24.exe HTTP/1.1" 405 232
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C%24.cm HTTP/1.1" 405 231
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C%24.cmd HTTP/1.1" 405 232
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C%24.ba HTTP/1.1" 405 231
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C%24.bat HTTP/1.1" 405 232
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C HTTP/1.1" 405 227
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C%24 HTTP/1.1" 405 228
152.2.x.x - - [15/Feb/2011:04:28:59 -0500] "PROPFIND /C HTTP/1.1" 405 227

These have been starting at about 3:30-4:00 am, and they manage to fire off about 22 connections before the active response clamps down on the connection. They resume about 25-30 minutes later, which is longer than the response timeout, and get stopped again. After the 10-minute timeout expires they resume a third time, and the process repeats a fourth time, after which they stop for the day.

The pattern suggests that the parasite responsible may be a step above the traditional script kiddie, possibly even possessing a rudimentary brain stem. Consequently, I would like to block this host off if possible. Normally, I would simply put an entry in iptables to block the offending IP or network range, but the address I see, 152.2.x.x, is one of our proxy servers, not the end host that is responsible. I would rather avoid cutting off all traffic via the proxy and would like to find a way to identify the host if possible.
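
To illustrate what I mean, the sort of rule I would normally add looks something like this (the addresses here are only placeholders, not the real source):
Code:

# block a single offending host (placeholder address)
iptables -I INPUT -s 203.0.113.45 -j DROP
# or an entire offending network range
iptables -I INPUT -s 203.0.113.0/24 -j DROP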

As a test, I set up tcpdump to capture traffic and then accessed port 80 from another host. I was hoping to capture a MAC address or some other piece of information I could use to block the offender. I only see the MAC address of my own interface and the upstream device, and I get the same MAC when I access the page from another source; this is expected, since MAC addresses don't survive past the nearest switch or router.
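
For reference, the capture I ran was roughly this (the interface name is just an example):
Code:

# print link-level (MAC) headers for port-80 traffic; -nn skips name/port resolution
tcpdump -i eth0 -e -nn port 80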

Clearly these attempts are futile, as they are trying to use WebDAV to get at a 'C' drive and scanning for Windows executable files, neither of which exists on a Linux system. Consequently, I know there is little to no real danger from these efforts, but I would like to cut off their attempts nonetheless.

Does anyone have some suggestions on how I could obtain enough information to configure an iptables rule to block this host when their traffic passes through a proxy that I don't control?

Hangdog42 02-15-2011 12:04 PM

Do you need to drop these at the firewall, or would something like mod_security be useful? The reason I'm asking is that it might be a better idea to drop these based on the content of the request rather than on the IP address they are coming from. That way, if they manage to move to another IP address, you've still got them blocked.
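
As a rough, untuned sketch of what I mean, something along these lines in the mod_security configuration would deny the PROPFIND probes based on the request method alone:
Code:

# sketch only: deny WebDAV PROPFIND requests, which this server never serves
SecRule REQUEST_METHOD "^PROPFIND$" "phase:1,deny,status:403,log,msg:'WebDAV probe blocked'"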

unixfool 02-15-2011 12:24 PM

I think that mod_security would be the better solution, for the reasons Hangdog42 already mentioned. The thing about mod_security is that it needs to be tuned. You'll need to pay close attention to your logs, as I've seen it block traffic that I didn't anticipate it blocking. In those cases, I'd turn off the offending signature. You should also be able to either fine-tune a sig or create your own based on your networking needs. It will be a living project (it will not be a PnP solution).
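
For reference (the rule ID below is purely a placeholder), turning off a signature that misfires is usually just a one-liner in the configuration:
Code:

# disable a rule that is generating false positives (placeholder ID)
SecRuleRemoveById 900001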

Or, you could go the firewall route. Even though the traffic you're seeing is going through a proxy, you can still determine the end hosts. It would mean either getting access to the proxy logs (if you don't have access already) or having the admin for that system provide you logs matching the traffic you observed on your webserver. Doing this should show a pattern. If the attackers are a pool of IPs being rotated (distributed attacks, as opposed to DDoS, are becoming more prevalent as a way to lessen detection), observing that traffic for a few days should reveal the IPs involved, which can then be firewalled. You may even be able to throttle such traffic when it happens by setting thresholds on the firewall...this might be a better solution than outright blocking the IPs.
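
To illustrate the throttling idea (the numbers are arbitrary and would need tuning against your real traffic), iptables' hashlimit module can cap the rate of new port-80 connections per source IP:
Code:

# sketch: allow up to 20 new HTTP connections per minute per source IP, drop the rest
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
  -m hashlimit --hashlimit 20/min --hashlimit-burst 20 \
  --hashlimit-mode srcip --hashlimit-name http-throttle -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j DROP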

Or, you can leave this traffic alone, since it is benign. Such traffic happens so often that it should be considered the norm, and as long as you're not adversely affected by the scans/attempts, you are OK.

As you learn more about the attacks, you can better understand how to deal with them, either by blocking them via iptables or by using application firewalls such as mod_security. So, it is actually good that you made the attempt to capture traffic. You also already know your network's architecture somewhat (enough to know that there's an internal proxy that the traffic is going through).

Noway2 02-15-2011 03:18 PM

A thank you to both of you. I have added mod_security to Apache. From what I have read so far, it does look like a worthwhile program. As you said, it is not a PnP solution and will take time and effort to learn and tune, but ultimately it will allow much tighter control over the security of Apache. I was able to test the install by running wget with the test rule, wget -O - -U "webtrends security analyzer" http://<my url>, and received the desired forbidden response, which indicates that it is working and finding the rule set. In this particular instance, the PROPFIND requests were already getting a 405 (Method Not Allowed) error, so it will be interesting to see whether they trigger any rule response from mod_security.

I have also changed the active response timeout to an hour. I am also thinking about adding rate limiting to port 80 on this interface. In my opinion, this requires walking a fine line between affecting desired traffic and stopping the rapid-fire connection establishment. Given the current environment, I can safely stop the rapid hits, and this will give the HIDS a chance to trigger after fewer attempts. It will be enlightening to see whether they return after this adaptive response or whether it will be enough to get them to go away.
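
What I have in mind is something along the lines of the iptables recent module (the numbers are placeholders I would still need to tune):
Code:

# sketch: drop a source that opens more than 20 new port-80 connections within 60 seconds
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent \
  --name HTTP --update --seconds 60 --hitcount 20 -j DROP
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP --set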

I can probably get the proxy logs and get them blocked at the IP level, but I agree that hardening at the application level is often the better way to go. Of course, I like to see alert emails from the automated system telling me that it is responding; I just don't like to get 6-7 of them triggered by the same idiot every morning :)

Hangdog42 02-15-2011 03:54 PM

I've always found it stopped a lot of nonsense, but as unixfool pointed out, it can have unintended consequences, so you do have to pay attention. If you're only serving your own stuff, it usually isn't too hard, but if you've got other people's sites on there, you may want to let them know you're tightening security and to let you know if they run into trouble.

Quote:

Originally Posted by Noway2
It will be enlightening to see if they return after this adaptive response or if it will be enough to get them to go away.

Of course the cynical view is that by doing all this hardening, all we're doing is driving the evolution of more capable idiots. I guess just so long as they go evolve somewhere else, I'm happy......

Noway2 02-16-2011 05:27 AM

As a follow-up to this, mod_security gave me a potentially key piece of information that I had been lacking. It reported this: "User-Agent: Microsoft-WebDAV-MiniRedir/5.1.2600"

A follow-up search on that set of keywords pointed me to this link, an article about how their Drupal sites were getting pounded with requests from this user agent, which is indicative of a crawler. It is unfortunate, but in keeping with my experience that Microsoft products rarely comply with standards; in this case, the crawler does not understand most response codes. On top of that, even Microsoft admits that the crawler hits hard. It does, however, apparently understand 404 (Not Found), but not 403 (Forbidden) or 405 (Method Not Allowed). They claimed that when they modified their site to return a 404, it went away. I also learned from mod_security that it is targeting numeric IP addresses rather than named URLs, which causes mod_security to issue it a forbidden response.

Seeing as it is coming from a proxy, the server doesn't see the client's IP. However, I should be able to either change the mod_security response to 404, which I think is in many ways preferable to forbidden (a forbidden response admits that something is there), or set a rewrite condition on the OPTIONS or PROPFIND methods to generate a 404 response. With a little luck, it is a crawler and it will go away on that response code.
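
The rewrite variant I have in mind looks roughly like this (an untested sketch; the mod_security alternative would simply be changing the rule's deny status to 404):
Code:

# sketch: answer WebDAV probe methods with 404 instead of 403/405
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^(OPTIONS|PROPFIND)$
RewriteRule .* - [R=404,L]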

Semi off topic for this thread: the site is for our department at a fairly large university, which accounts for some of the IT challenges like getting the proxy logs. [rant]I mentioned that my experience with Microsoft products is that they are very poor in terms of standards compliance (oftentimes earning them the titles of Mickey-soft and Microshaft). They use several Active Directory domain controllers, and those stupid things generate HUNDREDS of snort alerts caused by improperly framed TCP traffic. It seems like I get one rule bypassed for a DC, and a few days later they start a new bad behavior. Examples include missing timestamps, large numbers of out-of-sequence packets, timing and TTL violations, etc. This morning, I see that one DC which had been fine appears to have "taken a dump" starting at 11:23 last night and has been in strange land ever since. MS Server - expensive piece of trash.[/rant]

Noway2 02-17-2011 04:40 AM

This morning I noticed that the application tried to connect again. This time it received a 404 error rather than a 403 or 405. After the one connection attempt it went away. Hopefully it doesn't try this every morning, but it is certainly better than having it tie up Apache with nearly 100 connections in 2 minutes looking for something that doesn't exist.

unixfool 02-17-2011 01:33 PM

Quote:

Originally Posted by Noway2 (Post 4261349)
This morning I noticed that the application tried to connect again. This time it received a 404 error rather than a 403 or 405. After the one connection attempt it went away. Hopefully it doesn't try this every morning, but it is certainly better than having it tie up Apache with nearly 100 connections in 2 minutes looking for something that doesn't exist.

Very nice! More than likely, whoever is responsible for those crawling attempts is looking for a specific server response. I'm guessing a 404 error wasn't what they were looking for, so they went away to something more favorable to the attacks they're performing.

