Old 10-16-2012, 09:49 AM   #16
unSpawn
Moderator
 
Quote:
Originally Posted by MortenOnDebian
When you talk about limited logging, you will for sure be glad to hear that ntop doesn't display data about hosts and ports from before it was last started. So, as the server has been forced to shut down, I cannot make ntop display any information about the source/destination of traffic during the attack.
For the 24th, OK, but not even for October 2nd?

Anyway, here's a script to generate a logging-only rule set. It doesn't do anything but reject invalid packets and log at what the script estimates to be the number of packets per second the host can take, so we get an idea of what is a storm and what is not. Round off the numbers if you like, check the rule set for flaws, adapt if necessary and test before production use:
Code:
#!/bin/bash
# Emit an iptables-restore style rule set that only logs:
# - rejects (and logs) packets in INVALID state,
# - sends NEW packets through three rate-limited logging chains so the
#   log prefixes show roughly how many new packets per second arrive.
_ruleset() { echo "
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:PPS${HALF} [0:0]
:PPS${FULL} [0:0]
:PPS${TWICE} [0:0]
-A INPUT -m conntrack --ctstate INVALID -m limit --limit 1/second -j LOG --log-prefix \"in_INV_REJ \"
-A INPUT -m conntrack --ctstate INVALID -j REJECT --reject-with icmp-admin-prohibited
-A INPUT -m conntrack --ctstate NEW -m limit --limit ${HALF}/second -j PPS${HALF}
-A INPUT -m conntrack --ctstate NEW -m limit --limit ${FULL}/second -j PPS${FULL}
-A INPUT -m conntrack --ctstate NEW -m limit --limit ${TWICE}/second -j PPS${TWICE}
-A PPS${HALF}  -m limit --limit 1/second -j LOG --log-prefix \"in_${HALF}pps \"
-A PPS${FULL}  -m limit --limit 1/second -j LOG --log-prefix \"in_${FULL}pps \"
-A PPS${TWICE} -m limit --limit 1/second -j LOG --log-prefix \"in_${TWICE}pps \"
COMMIT
"; }

# Require a host name to probe and the hping2 binary.
[ $# -eq 0 ] && { echo "Need FQDN like \"www.postdanmark.dk\", exiting."; exit 1; } || FQDN="$1"
which hping2 >/dev/null 2>&1 || { echo "Missing \"hping2\", exiting."; exit 1; }
export TMPDIR=/dev/shm; TMPFILE=`mktemp -p /dev/shm hping.XXXXXXXXXX` && {
 # Fire 100 SYNs at port 80 as fast as possible and time the run.
 ( time hping2 -i u1 -S -p 80 -c 100 $FQDN 2>&1 ) > "${TMPFILE}" 2>&1
 # RECVSEND[0] = packets received, RECVSEND[1] = packets sent.
 RECVSEND=($(awk '/received/ {print $4, $1}' "${TMPFILE}"))
 # Elapsed seconds from the "real 0mN.NNNs" line (assumes a run under one minute).
 SECS=$(awk '/real/ {print $2}' "${TMPFILE}"|awk -F'0m' '{print $2}'|tr -d 's')
 # HALF = measured packets per second, FULL = twice that, TWICE = four times that.
 HALF=$(echo "scale=0;(${RECVSEND[0]}/${SECS})"|bc -l)
 FULL=$(echo "scale=0;${HALF}*2"|bc -l); TWICE=$(echo "scale=0;${FULL}*2"|bc -l);
 PPM=$(echo "scale=0;${FULL}*60"|bc -l)
 #[ ${#HALF} -eq 0 ] && cat "${TMPFILE}" || echo "S: ${RECVSEND[1]} R: ${RECVSEND[0]} T: ${SECS} approx ${FULL} pps ${PPM} ppm"
 [ ${#HALF} -eq 0 ] || _ruleset
 rm -f "${TMPFILE}"; }
exit 0
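If it helps, a minimal sketch of how the output might be used; the script file name and test FQDN are placeholders, and since the output is an iptables-restore rule set, loading it replaces the current filter table, so review it first:
Code:
# Placeholder file name; run as root.
chmod +x gen-logrules.sh
./gen-logrules.sh www.example.org > /tmp/logging.rules
less /tmp/logging.rules                   # sanity-check thresholds and rules
iptables-restore < /tmp/logging.rules     # replaces the current filter table
# Then watch the prefixes (in_INV_REJ, in_<N>pps) in the kernel log:
tail -f /var/log/kern.log | grep 'in_'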
 
Old 01-29-2013, 02:31 PM   #17
MortenOnDebian
LQ Newbie
 
Original Poster
Sorry for not responding. I did not want to just implement your script without fully understanding every part of it - after all, it's a live production environment. However, while I was reading up on iptables and your rules, the attacks stopped. I have not noticed an attack since mid-October. The root cause is still unknown, but I appreciate your help.
 
1 member found this post helpful.
Old 01-29-2013, 04:45 PM   #18
unSpawn
Moderator
 
Quote:
Originally Posted by MortenOnDebian
Sorry for not responding.
Not responding is one thing, but this thread unearthed quite a lot of problems with that server. Please tell us that reading up on iptables and implementing that script wasn't the only thing you did over the past four months.
 
1 member found this post helpful.
Old 01-30-2013, 02:27 AM   #19
MortenOnDebian
LQ Newbie
 
Original Poster
I guess you refer to the fact that it was quite hard to acquire logging information, as well as the fact that several unused extensions for Apache etc. are still running. I have tried to trim some of them down, but honestly I have not done much more.
 
Old 01-30-2013, 07:05 AM   #20
unSpawn
Moderator
 
Quote:
Originally Posted by MortenOnDebian
I guess you refer to the fact that it was quite hard to acquire logging information, as well as the fact that several unused extensions for Apache etc. are still running. I have tried to trim some of them down, but honestly I have not done much more.
You listed your main problem as your machine "getting attacked or attacking others" and, as a consequence of that, your network connection clogging up completely.

You also indicated:
- not being able to get a proper network usage overview,
- not having any idea of what web sites actually run,
- not possessing or not being able to find system logging over the relevant period,
- not running a firewall (at that time),
- not having any access restrictions configured, and
- not having done any hardening of the server (and SELinux was disabled).

During the course of this thread it became clear analysis was hampered by:
- insufficient log retention, and
- lack of detail in the application of choice (ntop).


I. Given the severity of the problem and the gaps we exposed, what are the formal, technical reasons for not changing or implementing the above?
II. With the above list of things to address, what changes do you propose to make?
 
Old 02-03-2013, 07:30 AM   #21
MortenOnDebian
LQ Newbie
 
Original Poster
I guess there really are no good excuses for not implementing more security as described.

I have installed vnstat and am looking at iftop as well. Both should give me a better overview of the current network situation as well as a log going forward.

I am aware that a well-configured firewall should always be up and running, and that I should be stricter about which ports/programs it allows through. I'm currently looking at which programs should be running and which shouldn't.

Hardening and SELinux are a different matter. I have heard a lot of people complain about SELinux causing more trouble than good, resulting in many hours spent debugging until they finally realise why. Of course these might be people who don't know what they are talking about, but that is one reason why I have not installed it or something similar (any suggestions?).
 
Old 02-04-2013, 07:44 AM   #22
unSpawn
Moderator
 
No, there isn't. If you (think you) have other priorities, then by all means let somebody else take care of it, because the fact that the symptoms stopped does not mean your problem disappeared automagically. Tools like vnstat, iftop or equivalents give you traffic overview statistics, but you can't drill down further than stream level (they don't point to users or applications) and you can't set threshold alerts (though you could have a cron job parse vnstat's output). More importantly, these tools don't prevent or mitigate anything (for bandwidth shaping, use iproute and iptables).
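To illustrate the threshold idea: vnstat itself won't alert you, but a crude cron script can sample the same interface counters it reads (/proc/net/dev) and mail a warning when a limit is crossed. A rough sketch, with the interface name, limit and mail recipient as placeholders to adjust:
Code:
#!/bin/bash
# Sample received bytes on one interface twice, 60 seconds apart,
# and mail a warning if the inbound rate exceeds a limit.
# IFACE, LIMIT_KBPS and MAILTO are placeholders; assumes a working mail(1).
IFACE=eth0
LIMIT_KBPS=5000        # alert above roughly 5 MB/s sustained inbound
MAILTO=root

# First field after "IFACE:" in /proc/net/dev is received bytes.
rx_bytes() { sed -n "s/^ *${IFACE}: *//p" /proc/net/dev | awk '{print $1}'; }

RX1=$(rx_bytes); sleep 60; RX2=$(rx_bytes)
RATE=$(( (RX2 - RX1) / 1024 / 60 ))   # KB per second over the interval

if [ "$RATE" -gt "$LIMIT_KBPS" ]; then
  echo "Inbound rate on ${IFACE} is ${RATE} KB/s (limit ${LIMIT_KBPS} KB/s)" \
    | mail -s "Traffic threshold crossed on $(hostname)" "$MAILTO"
fi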

It's good that you're trying to find out which applications should be running and which shouldn't. But what are you using to find out? And if you do find such applications, how are you going to prevent them from running?
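For example (just a sketch of the workflow; the service and package names below are placeholders, not anything from your box):
Code:
# What is listening, and which process owns the socket?
netstat -tulpn                    # or: ss -tulpn
lsof -Pni :8009                   # inspect an unexpected listener
dpkg -S /usr/sbin/someservice     # which Debian package installed it?

# Stop it now and keep it from starting at boot (sysvinit-style Debian):
/etc/init.d/someservice stop
update-rc.d someservice disable      # or: update-rc.d -f someservice remove
apt-get remove --purge somepackage   # if nothing on the box needs it at all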

And wrt SELinux: it isn't something you'll be implementing at this stage without gaining basic user knowledge first and doing rigorous testing on a non-production machine (virtualization?). But even without SELinux there is enough you could do: make a list of security and performance risks, add a solution or fix to each, gauge which changes would have the biggest positive effect and then prioritize the work.
 
Old 02-05-2013, 07:11 AM   #23
MortenOnDebian
LQ Newbie
 
Original Poster
I do understand your point about logging tools not being enough. However, I believe they are nice to have, as they give me more insight into how everything is linked together. And I know they should not stand alone.

As for running programs and processes, I'm currently just watching the output of the ps command through a cron job. I guess a utility like ps-watcher could ease this and make it more informative, but I have not looked into that yet.
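A minimal sketch of that kind of cron check (the baseline path is just a placeholder, not my actual setup):
Code:
#!/bin/bash
# Cron job: compare the current list of process names against a saved
# baseline and log anything new. The baseline path is a placeholder.
BASELINE=/var/local/ps.baseline
CURRENT=$(mktemp)

ps -eo comm= | sort -u > "$CURRENT"
[ -f "$BASELINE" ] || cp "$CURRENT" "$BASELINE"   # first run: record baseline

# Names present now but not in the baseline:
comm -13 "$BASELINE" "$CURRENT" | while read -r NAME; do
  logger -t ps-check "new process name seen: $NAME"
done
rm -f "$CURRENT"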

Sitting down with pen and paper and writing down the security and performance risks might be a good idea. I also hope to learn more about any performance problems through the extra logging tools. I have also considered installing File::Monitor and letting it watch and report changes to directories, but again, it does not prevent anything.
 
Old 02-05-2013, 07:48 AM   #24
unSpawn
Moderator
 
Quote:
Originally Posted by MortenOnDebian
I do understand your point about logging tools not being enough. However, I believe they are nice to have, as they give me more insight into how everything is linked together. And I know they should not stand alone.
I guess the first question you should ask yourself is "what is the problem, how will I know about it and what will I do to mitigate it?" Then check your standard toolkit. Netfilter has modules for accounting, and if you search SourceForge, BerliOS, Savannah or the-site-formerly-known-as-Freshmeat you may find tools that make things easier; for example, ipband only starts logging when a threshold is crossed and can send alerts.
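One low-tech example using plain iptables rule counters (not the dedicated accounting modules; the chain name and ports below are arbitrary): rules without a target still count every packet they match, and the counters can be read or zeroed at any time.
Code:
# Count traffic per service without logging or blocking anything.
iptables -N ACCT 2>/dev/null            # arbitrary chain name
iptables -I INPUT -j ACCT
iptables -A ACCT -p tcp --dport 22      # no -j target: match, count, continue
iptables -A ACCT -p tcp --dport 80
iptables -A ACCT -p tcp --dport 443

# Read the per-rule packet/byte counters, or zero them for the next interval:
iptables -L ACCT -v -n -x
iptables -Z ACCT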


Quote:
Originally Posted by MortenOnDebian
As for running programs and processes, I'm currently just watching the output of the ps command through a cron job.
Watching processes is good, but note that it won't tell you everything you want to know. In the end it will be a combination of things: process names or paths ("/home/user/.mutt/apache -DSSL"), names of files kept open ("/usr/bin/perl /tmp/.favicon/.ico"), or combinations ("wget some.ser.ver/user/xhide"); ports and connections ("irc.somechatserver.net"); listing newly created files (inotify?); logged anomalies; general and per-process resource usage; and more.
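A handful of stock commands cover most of those angles; the PID below is just a placeholder:
Code:
# Where was the process started from, and does its binary still exist on disk?
ls -l /proc/12345/exe /proc/12345/cwd
tr '\0' ' ' < /proc/12345/cmdline; echo

# Files and sockets it keeps open, and where it connects to:
lsof -p 12345
lsof -Pni | grep 12345            # or: netstat -tupn | grep 12345

# Resource usage and recently changed files in the usual drop spots:
top -b -n 1 | head -n 20
find /tmp /var/tmp /var/www -type f -mtime -1 -ls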


Quote:
Originally Posted by MortenOnDebian
Sitting down with pen and paper and writing down the security and performance risks might be a good idea.
Feel free to post your list once you're ready.


Quote:
Originally Posted by MortenOnDebian
I also hope to learn more about any performance problems through the extra logging tools. I have also considered installing File::Monitor and letting it watch and report changes to directories, but again, it does not prevent anything.
No, it wouldn't prevent anything. But if you have general system resource usage statistics in place (atop, dstat, collectl, that kind of thing) plus network resource monitoring, and if you find the discipline to look at the reports regularly and act on them, then at least you have a chance of nipping things in the bud.
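For instance, atop can write raw samples to a file for later replay; the path and ten-minute interval below are just an example (the Debian package also sets up its own daily logging):
Code:
# Record a sample every 10 minutes for a day (144 samples), then replay it.
atop -w /var/log/atop/atop_$(date +%Y%m%d) 600 144 &
# Later, step through the recorded day interactively:
atop -r /var/log/atop/atop_$(date +%Y%m%d)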
 
  



Tags
hacking, log, network monitoring, ntop

