[SOLVED] Server suddenly uploads huge amounts of data
Speaking of limited logging, you won't be glad to hear that ntop doesn't retain data about hosts and ports from before it was last started. Since the server was forced to shut down, I cannot make ntop display any information about the source or destination of the traffic during the attack.
For the 24th, OK, but not even for October 2nd?
Anyway, here's a script to generate a logging-only rule set. It doesn't do anything but drop invalid packets and log at what the script estimates is the number of packets per second the host can handle, so we get an idea of what is a storm and what is not. Round off the numbers if you like, check the rule set for flaws, adapt if necessary, and test before production use:
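The script itself was not preserved in this thread. As a rough sketch under my own assumptions (the LOGDROP chain name and the default rate are mine, not from the original), a generator that only *prints* the rules for review might look like this:

```shell
#!/bin/sh
# Hypothetical sketch, NOT the script originally posted in this thread.
# It only PRINTS iptables commands so they can be reviewed before being
# applied; nothing is changed on the host until you pipe the output to sh.

PPS=${PPS:-500}   # assumed packets/second the host can take; round off

gen_rules() {
    cat <<EOF
# Drop packets in an INVALID connection state, logging a rate-limited sample.
iptables -N LOGDROP
iptables -A LOGDROP -m limit --limit ${PPS}/second -j LOG --log-prefix "INVALID: "
iptables -A LOGDROP -j DROP
iptables -A INPUT -m state --state INVALID -j LOGDROP
# Log (but do not block) the rest so we can tell a storm from normal traffic.
iptables -A INPUT -m limit --limit ${PPS}/second -j LOG --log-prefix "ACCT: "
EOF
}

gen_rules
```

Review the printed rules first; only once you are satisfied would you run them as root.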
Sorry for not responding. I did not want to just implement your script without fully understanding every part of it; after all, it's a live production environment. However, while I was reading up on iptables and your rules, the attacks stopped. I have not noticed an attack since mid-October. The cause is still unknown, but I appreciate your help.
Not responding is one thing, but this thread unearthed quite a lot of problems with that server. Please tell us that reading up on iptables and implementing that script wasn't the only thing you did the past four months.
I guess you are referring to the fact that it was quite hard to acquire logging information, as well as the fact that Apache is running with several unused extensions. I have tried to trim down some of them, but honestly I have not done much more.
You listed your main problem as your machine "getting attacked or attacking others" and as a consequence of that your network connection clogging up completely.
You also indicated:
- not being able to get a proper network usage overview,
- not having any idea of what web sites actually run,
- not possessing or not being able to find system logging over the relevant period,
- not running a firewall (at that time),
- not having any access restrictions configured, and
- not having any hardening of the server done (+ SELinux was disabled).
During the course of this thread it became clear analysis was hampered by:
- insufficient log retention, and
- lack of detail in application of choice (Ntop).
I. Given the severity of the problem and the hiatus we exposed what are the formal, technical reasons for not changing or implementing the above?
II. With the above list of things to address, what changes do you propose to make?
I guess there really are no good excuses for not implementing more security as described.
I have installed vnstat and am looking at iftop as well. Both should give me a better overview of the current network situation, as well as a log going forward.
I am aware that a well-configured firewall should always be up and running, and that I should be stricter about which ports and programs it allows through. I'm currently looking at which programs should be running and which shouldn't.
Hardening and SELinux are a different matter. I have heard a lot of people complain about SELinux causing more trouble than good, resulting in many hours spent debugging before finally realising why. Of course these might be people who don't know what they are talking about, but that is one reason why I have not installed it or something similar (any suggestions?).
No, there isn't. If you (think you) have other priorities then by all means let somebody else take care of it, because the fact that the symptoms stopped does not mean your problem disappeared automagically. Tools like vnstat, iftop or equivalents give you traffic overview statistics, but you can't drill down further than stream level (they don't point to users or applications) and you can't set threshold alerts (though you could have a cron job parse vnstat output). More importantly, these tools don't prevent or mitigate anything (for bandwidth shaping, use iproute and iptables).
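The cron-job idea could be sketched like this. The threshold and the vnstat field position are my own assumptions, and the `--oneline` field layout varies between vnstat versions, so verify it before relying on this:

```shell
#!/bin/sh
# Sketch of a threshold alert on outbound traffic (assumed numbers).
THRESHOLD_MIB=${THRESHOLD_MIB:-1024}

# In production, something like
#   tx=$(vnstat --oneline | awk -F';' '{print $10}')
# might supply today's tx total in MiB; the field number is an assumption,
# so check `man vnstat` for your version. Here the value is passed in as
# an argument so the comparison logic itself can be tested without vnstat.
check_tx() {
    mib=$1
    if [ "$mib" -gt "$THRESHOLD_MIB" ]; then
        echo "ALERT: ${mib} MiB sent today (threshold ${THRESHOLD_MIB} MiB)"
    else
        echo "OK: ${mib} MiB sent today"
    fi
}
```

Run it hourly from cron and mail yourself any ALERT lines; it still prevents nothing, but it would have flagged the kind of upload spike this thread started with.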
It's good you're trying to find out which applications should be running and which shouldn't. But what are you using to find out? And if you find such applications how are you going to prevent them from running?
And with regard to SELinux: it isn't something you'll be implementing at this stage without gaining basic user knowledge first and doing rigorous testing on a non-production machine (virtualization?). But even without SELinux there's enough you could do: make a list of security and performance risks, add a solution or fix to each, gauge which changes would have the biggest positive effect, and then prioritize the work.
I do understand your point about logging tools not being enough. However, I believe they are good to have, as they give me more knowledge about how everything is linked together. And I know they should not stand alone.
As for running programs and processes, I'm currently just watching the output of the ps command through a cron job. I guess a utility like ps-watcher could ease this process and make it more informative, but I have not looked into that yet.
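A minimal version of that cron-driven ps watch could diff sorted snapshots against each other; the helper names and schedule below are my own placeholders, not anything from the thread:

```shell
#!/bin/sh
# Sketch of a process-snapshot differ (function names are illustrative).

take_snapshot() {
    # Sorted, de-duplicated command names, suitable for comm(1).
    ps -eo comm= | sort -u > "$1"
}

snapshot_diff() {
    # Print commands present in the new snapshot but absent from the old
    # one, i.e. processes that appeared between two cron runs.
    comm -13 "$1" "$2"
}
```

A crontab entry like `*/5 * * * * /usr/local/bin/ps-watch.sh` could take a snapshot, diff it against the previous one, and mail any non-empty output.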
Sitting down with pen and paper to write down security and performance risks might be a good idea. I also hope to learn more about any performance problems through the extra logging tools. I have also considered installing File::Monitor to watch and report changes to directories, but again, it does not prevent anything.
I guess the first question to ask yourself is "what is the problem, how will I know about it and what will I do to mitigate it?" Then check your standard toolkit. Netfilter has modules for accounting, and if you search SourceForge, BerliOS, Savannah.nongnu or the-site-formerly-known-as-Freshmeat you may find tools that make things easier; for example, ipband only starts logging when a threshold is crossed and can send alerts.
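As an illustration of the Netfilter accounting idea (the ACCT chain name and the chosen ports are my own examples): a rule with no `-j` target does nothing but increment its packet and byte counters, which a cron job can read back.

```shell
#!/bin/sh
# Sketch: pure-accounting rules. Printed only, so they can be reviewed
# before being applied as root.
gen_acct_rules() {
    cat <<'EOF'
iptables -N ACCT
iptables -A INPUT  -j ACCT
iptables -A OUTPUT -j ACCT
# Per-service counters inside the chain; no target, so they only count.
iptables -A ACCT -p tcp --dport 80
iptables -A ACCT -p tcp --dport 25
EOF
}

gen_acct_rules
```

The counters can then be read with `iptables -L ACCT -v -x -n` and reset with `iptables -Z ACCT`, giving per-port byte totals without any extra daemon.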
Quote:
Originally Posted by MortenOnDebian
About running programs and processes I'm currently just watching the output of the ps command through a cron job.
Watching processes is good, but note it won't tell you everything you want to know, so in the end it'll be a combination of: process names or paths ("/home/user/.mutt/apache -DSSL"), names of files kept open ("/usr/bin/perl /tmp/.favicon/.ico"), or combinations ("wget some.ser.ver/user/xhide"), ports and connections ("irc.somechatserver.net"), listing newly created files (inotify?), logged anomalies, general and per-process resource usage, and more.
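A small filter along those lines, keying off the example paths above (the regexes and the focus on /tmp and hidden directories are my own illustration):

```shell
#!/bin/sh
# Sketch: flag processes whose binary runs from a throwaway or hidden
# location, as in the "/tmp/.favicon/.ico" style examples above.
suspicious_cmds() {
    # Expects "pid path" lines, e.g. from: ps -eo pid=,args=
    # Match binaries under /tmp or /var/tmp, or any hidden path component.
    awk '$2 ~ /^\/(tmp|var\/tmp)\// || $2 ~ /\/\./ { print }'
}
```

Fed from `ps -eo pid=,args= | suspicious_cmds` in a cron job, it would surface exactly the kind of "perl running out of /tmp/.favicon" oddity quoted above.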
Quote:
Originally Posted by MortenOnDebian
Sitting down with a pen and a piece of paper writing down security and performance risks might be a good idea.
Feel free to post your list once you're ready.
Quote:
Originally Posted by MortenOnDebian
I also hope to learn more about any performance problems through the extra logging tools. I have also considered installing File::Monitor to watch and report changes to directories, but again, it does not prevent anything.
No, it wouldn't prevent anything. But if you have general system resource usage statistics in place (atop, dstat, collectl and the like) and network resource monitoring, and if you find the discipline to look at the reports regularly and act on them, then at least you have a chance of nipping things in the bud.