LinuxQuestions.org > Forums > Linux Forums > Linux - Security
Linux - Security: This forum is for all security related questions. Questions, tips, system compromises, firewalls, etc. are all included here.
Old 10-27-2013, 01:48 PM   #1
OtagoHarbour
Member
 
Registered: Oct 2011
Posts: 332

Rep: Reputation: 3
How do you whitelist files for chkrootkit


I ran chkrootkit and got
Code:
Searching for suspicious files and dirs, it may take a while... The following suspicious files and directories were found:  
/usr/lib/jvm/.java-1.6.0-openjdk-amd64.jinfo
I still get it after editing /etc/chkrootkit.conf and adding

Code:
IGNORE="/usr/lib/jvm/.java-1.6.0-openjdk-amd64.jinfo"
Thx,
OH.
 
Old 11-02-2013, 02:27 PM   #2
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
As far as I'm aware, Chkrootkit doesn't ship a "/etc/chkrootkit.conf", so this must be functionality that either you or your distribution has added. Apart from that, Chkrootkit hasn't been updated in ages.
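Since upstream chkrootkit has no built-in whitelist, a common workaround is to filter its output externally. A minimal sketch (the whitelisted path is just the one from your report; adjust to taste):

```shell
#!/bin/sh
# Minimal sketch: drop known false positives from chkrootkit's report.
# Keep the whitelist as fixed strings, one path per line.
WHITELIST=$(mktemp)
printf '%s\n' '/usr/lib/jvm/.java-1.6.0-openjdk-amd64.jinfo' > "$WHITELIST"

# Real use would be:  chkrootkit 2>/dev/null | grep -vFf "$WHITELIST"
# Demonstrated here on canned output:
printf '%s\n' '/usr/lib/jvm/.java-1.6.0-openjdk-amd64.jinfo' '/tmp/suspect' \
    | grep -vFf "$WHITELIST"

rm -f "$WHITELIST"
```

Note this only hides lines from the report; anything not matching the fixed strings still comes through.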
 
1 members found this post helpful.
Old 11-03-2013, 09:24 AM   #3
salasi
Senior Member
 
Registered: Jul 2007
Location: Directly above centre of the earth, UK
Distribution: SuSE, plus some hopping
Posts: 4,070

Rep: Reputation: 897
...and it doesn't come with a 'man page' either. What it does have is a 'README' file in, e.g., /usr/share/doc/packages/chkrootkit, and that has, for example

Quote:
09/30/2009 - Version 0.49
as the last recorded change (other distros will probably differ slightly in their latest version, but a date around then is probably the last time there was much progress).

Just as an example, something like 'rkhunter' has, on the same distro, a last update date of 28 Jan 2013, so there is quite a difference.
 
1 members found this post helpful.
Old 11-03-2013, 09:37 AM   #4
OtagoHarbour
Member
 
Registered: Oct 2011
Posts: 332

Original Poster
Rep: Reputation: 3
Quote:
Originally Posted by salasi View Post
...and it doesn't come with a 'man page' either. What it does have is a 'README' file in, eg, /usr/share/doc/packages/chkrootkit, and that has, for example



as the last recorded change (other distros will probably differ slightly in their latest version, but a date around then is probably the last time there was much progress).

Just as an example, something like 'rkhunter' has, on the same distro, a last update date of 28 Jan 2013, so there is quite a difference.
I don't get any alerts with rkhunter. Maybe whitelisting isn't such a good idea anyway. If a particular file has a reputation for false positives, an attacker may replace it with malware, knowing that the file name is likely whitelisted.

Thanks,
OH.
 
Old 11-03-2013, 10:00 AM   #5
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Maybe it's time to step back and first review the security posture of your machine as a whole?
Like in what services do you run?
What risks do you (think you) run?
How do you protect the machine, users and data?

Something for a new thread?..
 
Old 11-04-2013, 04:50 PM   #6
OtagoHarbour
Member
 
Registered: Oct 2011
Posts: 332

Original Poster
Rep: Reputation: 3
Quote:
Originally Posted by unSpawn View Post
Maybe it's time to step back and first review the security posture of your machine as a whole?
Like in what services do you run?
What risks do you (think you) run?
How do you protect the machine, users and data?

Something for a new thread?..
Hi unSpawn,

I run a web site using a Linux-based DMZ where the web server runs Ubuntu 11.04 and the app server runs Debian 7.0. I aim to limit my open ports to 80 and 443. Both servers are protected by maldet, iptables, rkhunter and chkrootkit. On the app server I also have Snort, Wireshark and Samhain. The sorts of things I wish to prevent are spyware, rootkits, DoS attacks and hackers reverse engineering my executables.

Thanks,
OH.
 
Old 11-05-2013, 05:37 PM   #7
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by OtagoHarbour View Post
I run a web site using a Linux-based DMZ where the web server runs Ubuntu 11.04 and the app server runs Debian 7.0. I aim to limit my open ports to 80 and 443.
Sounds OK, though it's not so much which OS you run (as long as it's up to date and properly hardened, of course) but what runs on top of it, like a CMS, photo gallery, shopping cart, etc., including themes, plugins and add-ons, that'll be targeted first...


Quote:
Originally Posted by OtagoHarbour View Post
Both servers are protected by maldet, iptables, rkhunter and chkrootkit. On the app server I also have Snort, WireShark and Samhain.
Wireshark should not be on a server, simply because a server should not run a GUI. (Though rawshark, tshark or dumpcap could.) With respect to auditing and post-op verification, LMD and Samhain are good choices (I said before Chkrootkit was last updated in 2009), but I'd put Snort on the router providing the DMZ if possible (better placement), and I'd minimally add fail2ban and Logwatch for early warnings. Again, all of this assumes both machines, their user accounts and services are properly hardened.
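For reference, a headless capture with tshark (Wireshark's command-line companion) might look like this; the interface name, host and output path are only examples:

```shell
# Hypothetical example: capture on eth0, ignoring a host already judged
# harmless, writing packets to a file for later offline inspection.
tshark -i eth0 -f 'not host 198.51.100.10' -w /tmp/dmz-capture.pcap
```

The -f capture filter uses the same BPF syntax as tcpdump, so existing Wireshark capture filters carry over directly.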


Quote:
Originally Posted by OtagoHarbour View Post
The sort of things I wish to prevent are spyware, root kits, DOS attacks and hackers reverse engineering my executables.
While Linux as a platform is certainly capable of running unwanted processes, there essentially is no "spyware for Linux" (there is malware, though), and the threat of rootkits has been replaced by anything that can run in the web stack. Defending against DoS and DDoS attacks is not so much a case of end-point protection as it is a case of good filtering from your upstream provider. Finally, "reverse engineering executables" is kind of ambiguous, as it can mean both attacking Intellectual Property and substituting trojaned versions of applications. Apart from (D)DoS, all of these require gaining a foothold, be it through an SSH account that's "protected" by only a weak password, a remote site editor running a tainted OS (cred grabbing) or, say, a vulnerable piece of web site software. Those require network connections, which can be logged, reported and alerted upon (Logwatch, fail2ban). Often files need to be dropped on the file system, and that too is an event that can be watched for (audit service, Samhain using inotify). Finally, accessing a resource and executing a process can be watched for too (audit service, TOMOYO, GRSecurity Trusted Path Execution).

Long story short: a gill of preventive measures is worth more than a pint of cure.
 
1 members found this post helpful.
Old 11-10-2013, 07:37 AM   #8
OtagoHarbour
Member
 
Registered: Oct 2011
Posts: 332

Original Poster
Rep: Reputation: 3
Quote:
Originally Posted by unSpawn View Post
Sounds OK, though it's not so much which OS you run (as long as it's up to date and properly hardened, of course) but what runs on top of it, like a CMS, photo gallery, shopping cart, etc., including themes, plugins and add-ons, that'll be targeted first...
Thank you for the great suggestions. The web server is an old computer running Ubuntu 11.04, which is no longer supported. The hardware cannot handle newer versions of Ubuntu. I'm not sure if that is due to lightdm replacing gdm in newer versions. I could try installing Debian 7.0 instead but may have the same problem, since Debian is also adopting lightdm as I understand it. I don't use any plugins on my web site, except for InAFlash (for file uploads), which no longer seems to be supported and uses Flash Player, which seems to be on the way out. I am thinking of replacing it with standard PHP calls. Everything else is standard PHP, HTML, JavaScript and CSS.

Quote:
Wireshark should not be on a server, simply because a server should not run a GUI. (Though rawshark, tshark or dumpcap could.)
I didn't realize that was a problem. I mainly use Wireshark on the app server to monitor packets. I inspect the packets it captures, block IP addresses I don't like with iptables, and set a filter on the IP addresses I consider harmless so they are ignored. What would be the best option for that?

Quote:
I'd put Snort on the router providing the DMZ if possible (better placement)
I had some issues with that, which I discussed here. I haven't managed to resolve them yet.


Quote:
and I'd minimally add fail2ban and Logwatch for early warnings.
I installed logwatch from the tarball and edited the .conf file so that it emails me the results with low detail. So far, it has not emailed me anything, although it appears to be running as a daemon.

Code:
$ ps -aux | grep logwatch
OtagoHarbour     1224  0.0  0.0   4184   860 pts/2    S+   20:17   0:00 grep --color=auto logwatch
I also set up fail2ban from the tarball. It runs with some error messages.

Code:
$ fail2ban-client start
WARNING 'action' not defined in 'php-url-fopen'. Using default value
WARNING 'action' not defined in 'lighttpd-fastcgi'. Using default value
2013-11-09 20:15:05,305 fail2ban.server : INFO   Starting Fail2ban v0.8.4
2013-11-09 20:15:05,306 fail2ban.server : INFO   Starting in daemon mode

ERROR  Could not start server. Maybe an old socket file is still present. Try to remove /var/run/fail2ban/fail2ban.sock. If you used fail2ban-client to start the server, adding the -x option will do it
$ fail2ban-client start -x
ERROR  Unable to contact server. Is it running?
$ ps -aux | grep fail2ban
OtagoHarbour    32183  0.0  0.0   4184   856 pts/2    S+   20:17   0:00 grep --color=auto fail2ban
So it looks like it is running as a daemon but could not start the server. I did a search for fail2ban.sock, before trying to run the program and it does not exist anywhere on my system.

Quote:
Again, all of this assumes both machines, their user accounts and services are properly hardened.
How do I ensure that is the case?

Quote:
the threat of rootkits has been replaced by anything that can run in the web stack.
Do you mean add-ons on the web site that I host?

Quote:
Defending against DoS and DDoS attacks is not as much a case of end point protection as it is a case of good filtering from your upstream provider.
Would that be my ISP?

Quote:
Finally "reverse engineering executables" is kind of ambiguous as it can mean both attacking Intellectual Property or substituting trojaned versions of applications.
I was mainly thinking of the former but the latter could also be a concern.

Quote:
Apart from (D)DoS, all of these require gaining a foothold, be it through an SSH account that's "protected" by only a weak password, a remote site editor running a tainted OS (cred grabbing) or, say, a vulnerable piece of web site software. Those require network connections, which can be logged, reported and alerted upon (Logwatch, fail2ban).
I have ssh between the two servers but I have not consciously set it up for the outside world.

Thank you so much for your very helpful information,
OH
 
Old 11-10-2013, 10:05 AM   #9
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by OtagoHarbour View Post
The web server is an old computer running Ubuntu 11.04, which is no longer supported. The hardware cannot handle newer versions of Ubuntu. I'm not sure if that is due to lightdm replacing gdm in newer versions.
A web server shouldn't run a Desktop Environment in the first place. If you want to stay with Ubuntu, please check out the LTS release and do not install Xorg, a Desktop Environment, or any graphical tools that could draw in Xorg / a DE as a dependency.


Quote:
Originally Posted by OtagoHarbour View Post
I don't use any plugins on my web site, except for InAFlash (for file uploads), which no longer seems to be supported
Simply put, any software exposed to the world that is no longer maintained or supported is a potential risk.


Quote:
Originally Posted by OtagoHarbour View Post
I didn't realize that was a problem. I mainly use WireShark on the app server to monitor packets. I inspect the packets it captures, block ip addresses I don't like with iptables and set a filter on the ip addresses I consider harmless so they are ignored. What would be the best option for that?
Automate it ;-p


Quote:
Originally Posted by OtagoHarbour View Post
I had some issues with that, which I discussed here. I haven't managed to resolve them yet.
First install Ubuntu LTS or any other distribution of choice. If you can't get it to work afterwards let me know.


Quote:
Originally Posted by OtagoHarbour View Post
I installed logwatch from the tarball and edited the .conf file so that it emails me the results with low detail. So far, it has not emailed me anything, although it appears to be running as a daemon.
As far as I know Logwatch usually runs via a daily cron job.
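For a tarball install you typically have to create that daily cron hook yourself. A sketch, assuming the script landed in /usr/local/sbin (the path and mail address are placeholders):

```shell
#!/bin/sh
# /etc/cron.daily/00logwatch (hypothetical) -- run Logwatch once a day
# and mail a low-detail report to the admin.
/usr/local/sbin/logwatch.pl --output mail --mailto admin@example.com --detail low
```

Distribution packages usually install an equivalent hook for you, which is why a tarball install can silently never run.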


Quote:
Originally Posted by OtagoHarbour View Post
Code:
$ ps -aux | grep logwatch
OtagoHarbour     1224  0.0  0.0   4184   860 pts/2    S+   20:17   0:00 grep --color=auto logwatch
What you're seeing is the 'grep' process itself, and that's one of the reasons why people shouldn't use grep for this (see those %{deities}-awful 'doSomething|grep something|grep -v grep' constructs) in the first place but use 'pgrep' instead.
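For example (a throwaway background process stands in for a real daemon here):

```shell
# pgrep matches against process names itself, so it never shows up
# in its own results -- no 'grep -v grep' gymnastics needed.
sleep 30 &
pgrep -l sleep     # lists PID and name of the sleep process
kill $!            # clean up the throwaway process
```

With no match, pgrep prints nothing and exits non-zero, which also makes it convenient in scripts.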


Quote:
Originally Posted by OtagoHarbour View Post
Code:
ERROR  Could not start server. Maybe an old socket file is still present.
Try removing the /var/run/fail2ban/fail2ban.sock file.
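Spelled out (note that -x goes before the command, not after it, which is why `fail2ban-client start -x` failed above):

```shell
# Clear the stale socket, then start fresh; -x asks the server itself
# to remove a leftover socket file. Option order matters: -x before 'start'.
rm -f /var/run/fail2ban/fail2ban.sock
fail2ban-client -x start
```

Both commands need root, and the socket path may differ on non-default installs.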


Quote:
Originally Posted by OtagoHarbour View Post
How do I ensure that is the case?
First of all, if you've installed these servers you should know (I tend to keep an admin log) whether you hardened them. If you can't remember, then see the Ubuntu Wiki, the "Securing Debian" manual, the OWASP and CISecurity.org web sites, and run GNU/Tiger for an initial audit.


Quote:
Originally Posted by OtagoHarbour View Post
Do you mean add-ons on the web site that I host?
Anything ranging from outdated versions of net-facing daemons to interpreters to CMSes and web logs to add-ons and plugins, yes.


Quote:
Originally Posted by OtagoHarbour View Post
Would that be my ISP?
It would except that you should first check if their AUP permits you to run a server. No use in talking to them otherwise.


Quote:
Originally Posted by OtagoHarbour View Post
I was mainly thinking of the former but the latter could also be a concern.
Replacing system binaries with trojaned versions still occurs but for that to happen the perp needs a way in. That's why it's good to invest time and effort preventing it.


Quote:
Originally Posted by OtagoHarbour View Post
I have ssh between the two servers but I have not consciously set it up for the outside world.
No matter what you do just keep the SSH best practices in mind, OK?
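A few of the usual sshd_config settings, for reference (the account name is hypothetical; adapt before applying):

```
# Excerpts from /etc/ssh/sshd_config -- a common hardening baseline:
PermitRootLogin no              # never allow direct root logins
PasswordAuthentication no       # key-based authentication only
AllowUsers deployuser           # hypothetical: restrict to named accounts
```

After editing, reload the SSH daemon and keep an existing session open until you've confirmed you can still log in.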
 
Old 11-14-2013, 07:59 AM   #10
OtagoHarbour
Member
 
Registered: Oct 2011
Posts: 332

Original Poster
Rep: Reputation: 3
Sorry about my slow reply. I have been working on the issues that you mentioned. There is one question I have about something I am working on just now.

Quote:
Originally Posted by unSpawn View Post
Quote:
I didn't realize that was a problem. I mainly use WireShark on the app server to monitor packets. I inspect the packets it captures, block ip addresses I don't like with iptables and set a filter on the ip addresses I consider harmless so they are ignored. What would be the best option for that?
Automate it ;-p
What I have been doing is a Google search for each IP address that Wireshark records. If it is associated with an innocuous site, I add it to the capture filter so that Wireshark will ignore it. If I see a discussion where someone says the IP address has been associated with an attempt to hack their system, I block it with iptables. I could try to formulate an algorithm to automate this but did not want to reinvent the wheel. Can you suggest a good starting point?

Thanks,
OH.
 
Old 11-15-2013, 01:55 AM   #11
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
First of all, you'll realize this will be an atrociously slow process, that the intent of posts can only be analyzed by a human who understands the context in which comments are made, and that this process should be made unnecessary by preventive measures and early warnings, right? IIRC there's an RBL-like extension for iptables called "packetbl"; you might look into that. (EDIT: as in the method, because of the nfnetlink_queue / libnetfilter_queue stuff it uses.)
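For context, packetbl hooks into netfilter's userspace queue; the kernel side amounts to a single rule (a sketch only; it requires the nfnetlink_queue machinery and a userspace daemon actually listening on the queue, otherwise queued packets go nowhere):

```shell
# Divert new inbound TCP connection attempts to userspace queue 0,
# where a daemon such as packetbl can accept or drop them against a DNSBL.
iptables -A INPUT -p tcp --syn -m state --state NEW -j NFQUEUE --queue-num 0
```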

Last edited by unSpawn; 11-15-2013 at 02:27 PM. Reason: //More *is* more
 
1 members found this post helpful.
Old 11-21-2013, 10:09 AM   #13
OtagoHarbour
Member
 
Registered: Oct 2011
Posts: 332

Original Poster
Rep: Reputation: 3
Quote:
Originally Posted by unSpawn View Post
First of all you'll realize this will be an atrociously slow process, that the intent of posts can only be analyzed by a human user understanding the context in which comments are made and that this process should be made unnecessary by preventive measures and early warnings, right? IIRC there's a RBL-like extension for iptables called "packetbl", might look into that. (EDIT: as in method because of the nfnetlink_queue / libnetfilter_queue stuff it uses.)
I did find this paper, which uses libnetfilter_queue to queue selected packets to user space. Overall, they use anomaly detection to assign a score to packets based on vector-based outlier detection. Would that be the way to go? Perhaps this could be used to reduce the number of URLs that need to be manually inspected?

Thanks,
OH.
 
Old 11-21-2013, 04:52 PM   #14
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by OtagoHarbour View Post
Would that be the way to go? Perhaps this could be used to reduce the number of URLs that need to be manually inspected?
In short, I suggest you deploy mod_security and Snort with the Emerging Threats rule set. If you do that, then in addition to what I wrote about before you already have fail2ban picking up issues from the logs (make it use ipset to block access) and Logwatch alerting you about remaining issues you should deal with manually and on a day-to-day basis. Run that for a week and evaluate the next week whether this covers your needs. If it doesn't, let us know.
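On the ipset side, the wiring is roughly as follows (the set name and address are made up for illustration; both commands need root):

```shell
# Create a hash set of banned IPs and hook it into the INPUT chain once;
# fail2ban (or you) then only add and remove entries in the set, which is
# far cheaper than one iptables rule per address.
ipset create banned hash:ip
iptables -I INPUT -m set --match-set banned src -j DROP
ipset add banned 203.0.113.7      # example address from the documentation range
```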
 
Old 11-23-2013, 05:26 PM   #15
OtagoHarbour
Member
 
Registered: Oct 2011
Posts: 332

Original Poster
Rep: Reputation: 3
Quote:
Originally Posted by unSpawn View Post
In short I suggest you deploy mod_security and Snort with the Emerging Threats rule set. If you do that in addition to what I wrote about before you already have fail2ban pick up issues from the logs (make it use ipset to block access) and Logwatch to alert you about remaining issues you should deal with manually and on a day to day basis. Run that for a week and evaluate the next week if this covers your needs. If it doesn't let us know.
I added the Emerging Threats rule set to Snort, on the app server, today and I have already started getting the following alert.

Code:
[**] [1:2012648:3] ET POLICY Dropbox Client Broadcasting [**]
[Classification: Potential Corporate Privacy Violation] [Priority: 1] 
11/23-14:34:23.497703 192.168.1.3:17500 -> 255.255.255.255:17500
UDP TTL:64 TOS:0x0 ID:15761 IpLen:20 DgmLen:131
Len: 103
Thanks,
OH.
 
  

