LinuxQuestions.org
Linux - Security This forum is for all security-related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.

Old 07-07-2011, 08:31 AM   #1
orgcandman
Member
 
Registered: May 2002
Location: new hampshire
Distribution: Fedora, RHEL
Posts: 600

Rep: Reputation: 110
more than one uid 0 - centOS/Plesk 10.x (OT)


One thing you might also do, if you haven't already, is turn on the auditing daemon and audit all calls in the exec() family. While it won't catch the really good exploit writers who know what they're doing, it can be incredibly useful for determining the initial execution vector. Of course, keep this log going offsite if possible.
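For reference, a minimal sketch of what such audit rules might look like with auditd (the key name exec_log is an arbitrary choice, not anything standard):

```shell
# Log every execve() on both the 64-bit and 32-bit ABIs, tagged with a
# key ("exec_log") so the records are easy to search for later.
auditctl -a always,exit -F arch=b64 -S execve -k exec_log
auditctl -a always,exit -F arch=b32 -S execve -k exec_log

# Later, pull the matching records out of the audit log in readable form:
ausearch -k exec_log --interpret
```

These require root and a running auditd; to make them persistent, the same rules go in /etc/audit/audit.rules without the auditctl prefix.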
 
Old 07-08-2011, 11:58 AM   #2
micxz
Senior Member
 
Registered: Sep 2002
Location: CA
Distribution: openSuSE, Cent OS, Slackware
Posts: 1,131

Rep: Reputation: 75
Quote:
Originally Posted by orgcandman View Post
One thing that you might also do, if you haven't done it already, is turn on the auditing daemon, and audit all calls to exec() family of calls. While it doesn't catch the really good exploit writers who know what they're doing, it can be incredibly useful for determining initial execution vector. Of course, keep this log going offsite, if possible.
Thank You. I will look into this. Can you post some more info or links?
 
Old 07-08-2011, 12:24 PM   #3
orgcandman
Member
 
Registered: May 2002
Location: new hampshire
Distribution: Fedora, RHEL
Posts: 600

Original Poster
Rep: Reputation: 110
Quote:
Originally Posted by micxz View Post
Thank You. I will look into this. Can you post some more info or links?
Sure thing.

First, there are two packages that can be used: psacct (or acct for Debian/Ubuntu users), or auditd. Google will present many good links for them. psacct/acct is probably the easiest, as you simply run:

For RHEL5+ (including CentOS5+)
Code:
yum install psacct
For RHEL4 and earlier
Code:
up2date psacct
For Debian/Ubuntu
Code:
apt-get install acct
Once the monitor is in place and loaded, you can run lastcomm, ac, and sa to get a good grasp of who is doing what on the system.
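To illustrate, a quick sketch of those tools in use (requires root; service paths and names vary by distribution, these are the RHEL-era ones):

```shell
# Start the accounting service (RHEL/CentOS; Debian's acct package
# starts accounting automatically on install)
/etc/init.d/psacct start

# Commands executed by a given user, most recent first
lastcomm root

# Total connect time per user
ac -p

# Summarized accounting, broken down per user and command
sa -u
```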

auditd uses the syslog infrastructure, so it can be more flexible, but requires more tuning.

Again, Google will really do a better job than I can.
 
Old 07-08-2011, 04:37 PM   #4
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
As the investigation isn't finished yet I find it, with all due respect, slightly premature to talk about auditing in the post-incident hardening sense, so I have moved your post and comments to a separate thread. If the comment was meant to aid the current investigation, please realize it is not SOP to install software on an already compromised system, as doing so could hamper (further) investigation. If this doesn't ring a bell, please read some basic incident response and forensics documents. TIA.

Last edited by unSpawn; 07-08-2011 at 06:53 PM. Reason: //More *is* more
 
Old 07-08-2011, 04:39 PM   #5
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by orgcandman View Post
While it doesn't catch the really good exploit writers who know what they're doing,
In what way do "their" execve's differ from ours?


Quote:
Originally Posted by orgcandman View Post
it can be incredibly useful for determining initial execution vector.
Just being curious: do you have practical experience with auditing that way?

Last edited by unSpawn; 07-08-2011 at 06:53 PM.
 
Old 07-08-2011, 06:50 PM   #6
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by orgcandman View Post
there are two packages that can be used: psacct (or acct for debian/ubuntu users), OR auditd.
The OR is unnecessary. BSD process accounting and Auditd rules are dealt with differently and can be deployed together without adverse effects.


Quote:
Originally Posted by orgcandman View Post
auditd uses the syslog infrastructure, so it can be more flexible, but requires more tuning. Again, google will really do a better job than I can.
No need to Google as there are members here who use (more than) default, CAPP, LSPP or NISPOM Audit rule sets.


Quote:
Originally Posted by orgcandman View Post
Once the monitor is in-place and loaded, you can then run lastcomm, ac, sa, and get a good grasp for who is doing what on the system.
The key problem here is "good grasp".

The main problem with people wanting to monitor user movement and commands on a server is tied to the purpose: knowing what regulations (if any) prohibit or mandate. (Also see the implications of, say, privacy laws.) For instance, the requirements of PCI-DSS sections 10.2.1 through 10.2.7 may call for different mechanisms than, say, a forensic workstation or a generic network file server.

The second problem is that GNU/Linux does not have an on/off switch that enables centralized, all-encompassing, easy-to-correlate, human-readable logging: it's modular, so pick some.

The third problem is that people often have no idea* what goes on process-wise between userland and the kernel, which methods can be (more easily) subverted, which method matches what purpose, and what additional measures must be taken to ensure successful logging at all times.

While this may sound like common sense to most, do realize that auditing and logging are no panacea for proper system hardening, and that logging means generating reports (Logwatch*, SEC?), actually reading those, and acting on anomalies.

GNU/Linux has several logging methods:
Kernel-based: what the kernel itself logs, or frameworks like Auditd*, SELinux, GRSecurity, TOMOYO, AppArmor or FUSE*. This type of logging:
- is involuntary (meaning that, unless subverted previously, no unprivileged user can evade logging),
- has a kernel-centric point of view (as opposed to, say, userland utilities attached to one user's login; also: dependencies),
- is resistant to tampering (at least Auditd ('man auditctl: "-e";'), SELinux ("strict" policy) and GRSecurity (sysctl) include ways to make the configuration immutable, meaning even root must reboot the machine to subvert or apply changes),
- has accurate time stamping, though only by virtue of logging via syslog.
This type of logging depends on syslog, which means protection must cover the libraries it depends on, NTP and system time, syslog itself, its configuration, the framework's configuration, and available syslog partition space. If remote syslog is used (which is a good suggestion) then protection must also cover the network traffic, the remote syslog host and optionally the messaging itself (Rsyslog: RELP?). Next to evasion and disk space, other concerns may be malformed logging and syslog DoSsing to, say, "hide" log entries.
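As an illustration of the remote-syslog point, a sketch of what forwarding can look like in rsyslog's legacy configuration syntax (loghost.example.com and the ports are placeholders, and the RELP variant assumes the omrelp module is installed):

```shell
# /etc/rsyslog.conf fragment: ship all messages to a remote collector.
# A single @ forwards over UDP, @@ over TCP (more reliable).
*.*  @@loghost.example.com:514

# With the RELP output module loaded, delivery is acknowledged end-to-end:
# $ModLoad omrelp
# *.*  :omrelp:loghost.example.com:2514
```

TCP or RELP is preferable to UDP here precisely because a "hide the evidence" attacker benefits from silently dropped datagrams.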
Userland utilities: what subsystems and service daemons log. Examples are SAR, BSD process accounting, PAM and login-related logging (last, lastlog, lastb, wtmp, utmp), service or subsystem logging (that which fail2ban or equivalents act on), Sudo, and shell-based logging (rootsh, sudosh).
A common misconception is that these give you accurate and detailed insight into a user's movement on the system, which is not the case. For instance, while 'sa' may list commands, it does not list command arguments; and while 'lastcomm' lets you select $LOGNAME and displays time stamps, it does not display command arguments either, nor is its time stamp accurate to the second unless you supply the "--debug" flag. Another example: more recent versions of Bash let you set HISTTIMEFORMAT, but (apart from users changing the history length or file) this time stamp is in no way related to how syslog sets time stamps. Finally Rootsh, which may not be easily evaded if wrapped around a user's shell, allows logging to file and to syslog, but only in the latter case are accurate time stamps provided. As far as tampering is concerned (ranging from LD_PRELOADs to modified binaries), it is recommended to set the immutable bit (though sparingly, as this will hamper system maintenance) and to use kernel-based auditing plus userland file-integrity tools (Samhain*, Aide, hell, even Tripwire or Monit) to be notified of changes.
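On the HISTTIMEFORMAT point: Bash records each entry's time stamp in ~/.bash_history as a bare "#<epoch-seconds>" comment line, written by the shell itself and entirely unrelated to syslog's clock. A quick sketch of how such an epoch renders (GNU date assumed; the epoch value is an arbitrary example):

```shell
# A history time stamp as Bash writes it to ~/.bash_history:
epoch=1310169600

# Render it the way HISTTIMEFORMAT='%F %T ' would (UTC here for a fixed result):
date -u -d "@${epoch}" '+%F %T'    # prints: 2011-07-09 00:00:00
```

Since the shell both writes and trusts these stamps, a user who controls their own history file controls the "evidence" in it.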
Kludges: examples are the 'script' utility and any custom kludges (like "email-root-on-user-login") often suggested by users new to Linux. I strongly suggest avoiding those unless no viable alternative exists.

I could probably say more, but that's it for now.

Last edited by unSpawn; 07-08-2011 at 06:55 PM.
 
2 members found this post helpful.
Old 07-12-2011, 03:07 PM   #7
orgcandman
Member
 
Registered: May 2002
Location: new hampshire
Distribution: Fedora, RHEL
Posts: 600

Original Poster
Rep: Reputation: 110
Quote:
Originally Posted by unSpawn View Post
In what way do "their" execve's differ from ours?
They don't execve(). Most commands that would be used have specific system calls tied to them, so one can write a small bootstrapped shellcode loader that executes those calls directly. In fact, most exploit frameworks already support this (CORE does, iirc). Auditing execve() just catches the easy stuff (i.e., I downloaded this milw0rm exploit, fired it off, and it execve()'s a /bin/sh).
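To partially make up for that gap, auditd rules can watch syscalls beyond execve() as well, at the cost of much noisier logs. A sketch (key names are arbitrary; which syscalls are worth the noise depends entirely on the box's role):

```shell
# Shellcode that never execs still has to make syscalls to do anything
# useful, so also watch, e.g., outbound connects and privilege changes:
auditctl -a always,exit -F arch=b64 -S connect -k net_conn
auditctl -a always,exit -F arch=b64 -S setuid,setreuid,setresuid -k priv_change

# Review with: ausearch -k net_conn --interpret
```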



Quote:
Just being curious: do you have practical experience with auditing that way?
I've implemented a basic rig which uses auditd to log specific system calls, who issued them, etc., and shovels them off to an aggregator. The aggregator then logs everything into a Postgres database, which can be queried to determine when systems started behaving differently. However, I'm not the auditor in that case. My experience is limited to development (i.e., my sandbox is features, mostly). I do have some practical experience conducting penetration tests and vulnerability assessments (neither of which qualifies me as an auditor). The OP was asking for suggestions (I thought).
 
Old 07-12-2011, 03:11 PM   #8
orgcandman
Member
 
Registered: May 2002
Location: new hampshire
Distribution: Fedora, RHEL
Posts: 600

Original Poster
Rep: Reputation: 110
Quote:
Originally Posted by unSpawn View Post
As the investigation isn't finished yet I find it, with all due respect, slightly premature to talk about auditing in the post-incident hardening sense and therefore I moved your post and comments to a separate thread. If the comment was meant to aid the current investigation then please realize it is not a SOP to install software on an already compromised system as it could hamper (further) investigation. If this doesn't ring a bell please read some basic incident response and forensics documents, TIA.
Sorry, didn't mean to thread-jack. I had seen other responses that included suggestions for setting up the new system (or thought I had). I was hoping to add something to the pre-deployment, post-install checklist of things to do.
 
Old 07-12-2011, 03:34 PM   #9
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
With cases like these I'd like to confine anything that isn't incident response to a new thread (which the OP should create) about system hardening. This should ensure the paths between what must be done post-incident (investigation) and what must be done post-install (hardening) don't cross. Other than that, it's good to see you've got practical experience beyond what common users seem to have (as I judge it), and I'd like to ensure you know your additions are valued and most welcome.
 
Old 07-12-2011, 03:42 PM   #10
orgcandman
Member
 
Registered: May 2002
Location: new hampshire
Distribution: Fedora, RHEL
Posts: 600

Original Poster
Rep: Reputation: 110
Quote:
Originally Posted by unSpawn View Post
The OR is unnecessary. BSD process accounting and Auditd rules are dealt with differently and can be deployed together without adverse effects.
I wasn't that familiar with psacct, so I incorrectly assumed they did the same thing. Clearly, I was wrong.

Quote:
Originally Posted by unSpawn View Post
The key problem here is "good grasp".

The main problem with people wanting to monitor user movement and commands within a server is tied to the purpose: knowing what regulations (if any) prohibit or mandate. (Also see implications of say Privacy Laws.) For instance the requirements for PCI-DSS chapter 10.2.1 tru 10.2.7 may require different mechanisms compared to say a forensic workstation or a generic network file server. The second problem is GNU/Linux does not have an on/off switch to enable centralized, all-encompassing, easy to correlate, human readable logging: it's modular so pick some. The third problem is people often have no idea* what goes on process-wise between userland and the kernel, which methods can be (more easily) subverted, which method matches what purpose and what additional measures must be taken to ensure successful logging at all times. While this may sound like common sense to most do realize auditing and logging are no panacea for proper system hardening and that logging means generating reports (Logwatch*, SEC?), actually reading those and acting on anomalies..
I completely agree, and as we both know, there's no simple, single all-encompassing silver bullet that "solves" the security problem (if there were, part of my job wouldn't exist!). My end goal is to try to educate. However, I tend to shy away from offering very detailed advice (although in this case I did). There are many reasons for this, not least of which is that I'm primarily involved with the test/development side of platforms+security.

Quote:
GNU/Linux has several logging methods which are:
Kernel based: that what the kernel itself logs or frameworks like Auditd*, SELinux, GRSecurity, TOMOYO, AppArmor or FUSE*, which is:
- involuntary (meaning that, unless subverted previously, no unprivileged user can evade logging),
- has a kernel-centric point of view (as opposed to say userland utilities attached to one users login, also: dependencies),
- is resistant to tampering (at least Auditd ('man auditctl: "-e";'), SELinux ("strict" policy) and GRSecurity (sysctl) include ways to make configuration immutable, meaning even root must reboot the machine to subvert or apply changes),
- has accurate time stamping but only due to logging via Syslog.
This type of logging depends on syslog which means protection must include depending libraries, NTP and system time, syslog, its configuration, framework configuration and available syslog partition space. If remote syslog is used (which is a good suggestion) then protection must include network traffic, the remote syslog host and optionally messaging itself (Rsyslog: RELP?). Next to evasion and disk space others concern may be malformed logging and syslog DoSsing to say "hide" log entries.
The syslog infrastructure isn't the only thing vulnerable here. For instance, two daemons logging to the same file, where one can tamper with the logfile, is also a valid attack vector. While we shouldn't expect to see control characters written to the log files anymore, it's probably not out of the question for someone to figure out how to insert a valid pattern match for j.random perl-script syslog parser (or j.random grep string), which can be (depending on circumstances) a valid path to doctoring log files. I guess that could fall under malformed logging, but most put it under a separate class of just "tampering."
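A toy illustration of that pattern-match concern (the log line, parser string, and file path are all hypothetical): a field an attacker controls ends up verbatim inside a log record, and a naive search over the file then matches the forged content as if it were a real event:

```shell
# An attacker supplies a "username" that itself looks like a log record:
user='bob sshd[999]: Accepted password for root'

# The daemon dutifully logs the failed attempt, attacker text included:
line="Jul  8 12:00:00 host sshd[123]: Failed password for invalid user $user"
echo "$line" > /tmp/demo.log

# A naive parser now "sees" a successful root login that never happened:
grep -c 'Accepted password for root' /tmp/demo.log    # prints: 1
```

Structured audit records (auditd's field=value format) are harder, though not impossible, to spoof this way than free-form syslog text.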

Quote:
Userland utilities: what subsystem and service daemons log. Examples are SAR, BSD process accounting, PAM and login-related logging (last, lastlog, lastb, wtmp, utmp), service or subsystem logging (that which fail2ban or equiv. act on), Sudo and shell-based logging (rootsh, sudosh)
Common misconception is that all will give you accurate and detailed insight in a users movement on the system which is not the case. For instance while 'sa' may list commands it does not list command arguments and while 'lastcomm' allows you to select $LOGNAME and displays time stamps it does not display command arguments nor is its time stamp accurate to the second unless you supply the "--debug" flag. Another example: more recent version of Bash allow you can set HISTTIMEFORMAT but (apart from users changing history length or file) this time stamp is in no way related to how syslog sets time stamps. Finally Rootsh, which may not easily be evaded if wrapped around a user shell, allows for logging to file and to syslog but only in the latter case accurate time stamps are provided. As far as tampering is concerned (ranging from LD_PRELOADs to modified binaries) it is recommended to set the immutable bit (though sparingly as this will hamper system maintenance) and use kernel-based auditing and userland auditing tools (Samhain*, Aide, hell even tripwire or Monit) to be notified of changes.
Again, I am in full agreement here. Having as many layers as possible is absolutely imperative.

Quote:
kludges: examples are the 'script' utility and any custom kludges (like "email-root-on-user-login") often suggested by users new to Linux. I strongly suggest to avoid those unless no viable alternative exists.
A lot of those can easily backfire, as I'm sure you're well aware. 'script' doesn't protect you from control codes embedded in the dump files, and having root's mailbox hit its quota is a Bad Thing. I think the point of the discussion is really to make sure people are aware of all the options they have w.r.t. setting up a system so that future incident response has more information to help them figure out what happened.
 