LinuxQuestions.org
Linux - Security This forum is for all security related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.

Old 04-15-2012, 09:35 PM   #1
clintonm9
LQ Newbie
 
Registered: Jun 2003
Posts: 20

Rep: Reputation: 0
Internal Breach / Server Misuse


I have a CentOS 5.8 server that follows most of the NSA Secure Configuration (Hardening) guide for Red Hat Enterprise Linux 5. After getting this all set up I felt pretty good about my security, plus I use Nessus for external scanning and patch monitoring.

So the issue now is that one of our users came in, installed uniscan, and started scanning other web servers (not managed by us) for vulnerabilities/DoS. I did not notice this until I got an email from another web hosting company saying we had been attempting to “hack” their server. I couldn’t believe it; after all my hard work I hadn’t thought about this simple issue. Thinking about it a little more, I realized that nothing would stop a user from writing a Perl or PHP script to eat up all the CPU and crash the server, and after restarting the server it would not be easy to track down which script caused the high load.

So my question is, how do you handle these things? Is there some software you recommend? How can I be alerted of software like uniscan running on the server?

Thanks
 
Old 04-16-2012, 05:13 AM   #2
Noway2
Senior Member
 
Registered: Jul 2007
Distribution: Ubuntu 10.10, Slackware 64-current
Posts: 2,124

Rep: Reputation: 776
Quote:
So the issue now is that one of our users came in and installed uniscan and started scanning other web servers (not managed by us) for vulnerabilities/DoS
The best way to deal with this would be directly with the user. On most Linux systems, users can run copies of executables from their home folders, and this does not require root access. Scanning tools are known for not requiring root access and typically turn up in compromised web stacks where someone was able to upload files to the /tmp directory. In your case, you had malicious activity from a trusted user.

The second thing you can try is to implement user quotas: http://www.linuxtopia.org/online_boo...ing_Users.html Here is another write up / FAQ with some good information: http://www.tek-tips.com/faqs.cfm?fid=1493
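Disk quotas cap storage; for the runaway-CPU side of the problem, PAM's /etc/security/limits.conf can also cap processes and CPU time per user. A quick throwaway-shell sketch of the same caps, with the values purely illustrative (note that limits.conf's cpu unit is minutes, while ulimit -t uses seconds):

```shell
# Roughly what lines like these in /etc/security/limits.conf enforce
# (hypothetical group name, illustrative values):
#   @users  hard  cpu    1      # CPU minutes per process
#   @users  hard  nproc  100    # max processes per user
# Demonstrated here with ulimit in a throwaway shell:
bash -c 'ulimit -t 60; ulimit -u 100; echo "cpu=$(ulimit -t) nproc=$(ulimit -u)"'
```

A non-root user can lower these limits but not raise them again, which is why they are useful against a user-launched fork bomb or CPU hog.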

As far as some additional monitoring tools: http://www.linuxscrew.com/2012/03/22...itoring-tools/
 
Old 04-16-2012, 11:19 AM   #3
unSpawn
Moderator
 
Registered: May 2001
Posts: 27,118
Blog Entries: 54

Rep: Reputation: 2787
Additionally, and I realize none of this actually keeps users from Doing Stuff: SAR(-like) tools like Atop can easily log which user is running what resource hogs; the audit service can log syscalls like execs (it needs tuning, though); iptables can log and rate-limit egress traffic; grsecurity includes Trusted Path Execution (TPE), meaning users can only execute binaries from trusted directories; there are path-based MACs like TOMOYO; and if you tweak some signatures you can also use the Snort IDS on outbound traffic.
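For the audit-service option, exec logging can be sketched with rules like the following in /etc/audit/audit.rules. This is a hedged example: the auid>=500 cutoff assumes CentOS 5's convention that regular users start at UID 500, and the key name is arbitrary.

```
# Log every execve() by a real (non-system) user; auid 4294967295 means "unset"
-a always,exit -F arch=b64 -S execve -F auid>=500 -F auid!=4294967295 -k user_exec
-a always,exit -F arch=b32 -S execve -F auid>=500 -F auid!=4294967295 -k user_exec
```

Afterwards, `ausearch -k user_exec` pulls out just those records. As noted, this generates a lot of data and needs tuning.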
 
Old 04-16-2012, 04:12 PM   #4
anomie
Senior Member
 
Registered: Nov 2004
Location: Texas
Distribution: RHEL, Scientific Linux, Debian, Fedora, Lubuntu, FreeBSD
Posts: 3,930
Blog Entries: 5

Rep: Reputation: Disabled
Quote:
Originally Posted by clintonm9
So my question is, how do you handle these things? Is there some software you recommend? How can I be alerted of software like uniscan running on the server?
Where possible/practical, it is also a good idea to restrict outbound traffic.

For instance, I maintained an application server for some devs I didn't know well. I allowed outbound traffic only to a few hosts (like TCP 443 to the RHN, UDP/TCP 53 for DNS queries, UDP 123 for time, etc.).
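The same whitelist idea expressed in Linux iptables terms might look like the fragment below. This is an illustrative sketch, not a drop-in policy: the allowed ports mirror the list above, and a real ruleset would also restrict destinations (e.g. only the RHN host on 443), which is omitted here for brevity.

```
# Illustrative /etc/sysconfig/iptables fragment: default-deny outbound,
# allowing only loopback, established traffic, DNS, NTP, and HTTPS
*filter
:OUTPUT DROP [0:0]
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -p udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp --dport 53 -j ACCEPT
-A OUTPUT -p udp --dport 123 -j ACCEPT
-A OUTPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```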

---

Alternatively, the blacklist approach would be to explicitly block certain ports. Taken directly from a pf.conf(5) script on my FreeBSD bastion host:
Code:
bad_x_ports="{ 22 23 25 110 137 139 143 445 3389 }"
...
# Help prevent turning this host into a spambot or attack bot
block out quick on $ext_if proto { tcp udp } from any to !<orgips> \
  port $bad_x_ports
The blacklist approach is less comprehensive, of course. (And both approaches require that you really understand your server's and users' needs.)

Last edited by anomie; 04-16-2012 at 04:15 PM.
 
Old 04-16-2012, 11:24 PM   #5
clintonm9
LQ Newbie
 
Registered: Jun 2003
Posts: 20

Original Poster
Rep: Reputation: 0
Noway2, thanks for the info. I do have the shell logging pretty much locked down, where they cannot disable logging of all bash commands to the central syslog server. I do like the ‘maximum processes a single user can create’ limit; I think I will implement that, but it still will not solve the issue I had the other day. The ‘@students hard cpu 2’ limit seems like a nice feature, but it also seems to have major flaws (disconnecting the user from SSH whenever a certain CPU threshold is met). It would be nice to just throttle them and be notified.

We currently use Munin for monitoring resources, but while it is nice to see that the CPU is at 100%, that doesn’t help me once the system has crashed/rebooted to figure out which process(es) were driving the CPU to 100%. Maybe one of those tools can do that, but I don’t think Munin can.

unSpawn, I am kind of new to the sar command, but it seems very limited in its capabilities. Polling every 10 minutes or so doesn’t seem as nice as graphing software like Munin. I do use the audit software with CentOS (pretty much following the NSA’s 2.6.2.4 “Configure auditd Rules for Comprehensive Auditing”). This is great for monitoring a system, but it becomes a lot of information quickly (especially if you have a lot of servers with a lot of activity). If I added a way to show all commands executed by users, I would not have time in the day to review all the logs.

From a trusted-binary standpoint, I don’t think this would help against Perl or PHP scripts. I do like the idea, but I feel it still would not have prevented this user (or another user) from running uniscan with Perl.

anomie, I was going to mention blocking outbound traffic in my original post, but I feel that with most attacks going over port 80 it would not help much. (The main reason I don’t want to block outbound ports is that I am afraid I will break a lot of scripts by forgetting to enable certain ports.) The uniscan activity was port 80 URL stuff, so blocking ports would not have helped me in this instance.

I feel like what I am missing is two things. The first is a process that either logs all high-resource processes, or logs all processes every minute or so and keeps just the last few hours (on disk). The second is that if a process is taking lots of CPU/memory, it could trigger an alert with certain filters. Maybe one of the monitoring services Noway2 sent me has this built in, but I did not see it mentioned in the article. Another thought I just had is that monit might be able to do it as well.
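The first piece can be done with a low-tech cron-driven snapshot script. A minimal sketch, in which the log path, the snapshot depth, and the three-hour retention are all assumptions to adapt:

```shell
#!/bin/sh
# Sketch: snapshot the top CPU consumers (run from cron every minute) and
# keep a rolling ~3-hour window of timestamped logs.
LOG_DIR=/tmp/ps-snapshots
mkdir -p "$LOG_DIR"
# Record the 20 hungriest processes, sorted by CPU usage
ps -eo pid,user,pcpu,pmem,args --sort=-pcpu | head -20 \
    > "$LOG_DIR/ps.$(date +%Y%m%d%H%M)"
# Prune snapshots older than about three hours
find "$LOG_DIR" -name 'ps.*' -mmin +180 -delete
```

A crontab entry like `* * * * * /usr/local/bin/ps-snapshot.sh` (hypothetical path) would run it each minute; after a crash or reboot you can grep the last snapshots for the hog.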

Thanks for all your guys feedback!
 
Old 04-16-2012, 11:32 PM   #6
clintonm9
LQ Newbie
 
Registered: Jun 2003
Posts: 20

Original Poster
Rep: Reputation: 0
Another crazy thought I had: what stops someone from going to this server and writing a script that is accessible to Apache (chmod 777)?

The script would be executed by Apache (through a web browser), and it would create a new script (which would then be owned by apache). The old script would exec the new script, which would delete the original script and then start uniscan. How would I track what this user did? If they used SFTP to upload everything, it could all be done outside of bash. Man, I am just outsmarting myself. Guess you have to think like a hacker to catch one!

I guess I would have an audit trail of the file being deleted by apache, but that might be hard to track down. I would also have to enable auditing for all new files created on the system so I know who created the script that was deleted by apache.

Sorry thought I would share my thought process!
 
Old 04-17-2012, 04:30 AM   #7
Noway2
Senior Member
 
Registered: Jul 2007
Distribution: Ubuntu 10.10, Slackware 64-current
Posts: 2,124

Rep: Reputation: 776
Regarding your crazy thought, while perhaps not 100%, this might be of value: http://en.wikipedia.org/wiki/Sticky_bit

Quote:
When the sticky bit is set, only the item's owner, the directory's owner, or the superuser can rename or delete files ... Typically this is set on the /tmp directory to prevent ordinary users from deleting or moving other users' files.
Apache would not be able to delete a file owned/uploaded by the user, and the user would not be able to delete Apache’s files. This could be set on the directory, perhaps at the file-system level during mount, with /tmp, /var, or both on separate partitions. It may be enough to slow them down, provide an unexpected twist for your malicious user, and allow you to gain some evidence.
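A minimal sketch of the behavior described in the quote (the directory path is just an example):

```shell
# Create a world-writable directory with the sticky bit set, like /tmp
mkdir -p /tmp/sticky-demo
chmod 1777 /tmp/sticky-demo
ls -ld /tmp/sticky-demo   # mode displays as drwxrwxrwt (trailing 't' = sticky)
# With the sticky bit set, only a file's owner, the directory's owner, or
# root can delete or rename files inside, despite the world-writable mode.
```

So even if the apache user can write new files into the directory, it cannot remove the user's uploaded script, which helps preserve evidence.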
 
  

