LinuxQuestions.org
Linux - Security This forum is for all security related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.

Old 01-19-2011, 07:43 AM   #31
Hangdog42
LQ Veteran
 
Registered: Feb 2003
Location: Maryland
Distribution: Slackware
Posts: 7,803
Blog Entries: 1

Rep: Reputation: 419

Quote:
Originally Posted by ZS-
Are you saying hash against the base files? The thing with this is that I know at least two phpBB installations are "modified", so the hashes will be different, thus not really giving an accurate picture.
I think at the very least you're going to need to have some serious discussions with the phpBB owners to get a thorough accounting of what they've done. If they are modifying the base phpBB, they may be introducing security issues without knowing it. In the longer term, something like Aide or Samhain would probably be a decent idea, but until you're sure the machine is clean, they aren't going to be particularly useful. And depending upon how often the sites change, there could be a good deal of noise to sift through to find a signal.
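Where the installs are modified, a recursive diff against a pristine copy of the same release can be more informative than a straight hash comparison: legitimate edits show up as small hunks, while wholly new files stand out immediately. A minimal sketch using throwaway directories (in practice the "pristine" side would be the matching upstream phpBB tarball, unpacked; file names here are invented):

```shell
# Scratch directories stand in for the real trees
PRISTINE=$(mktemp -d)   # unpacked upstream release would go here
LIVE=$(mktemp -d)       # the customer's live tree

printf 'original code\n' > "$PRISTINE/functions.php"
cp "$PRISTINE/functions.php" "$LIVE/functions.php"   # unmodified file
printf 'injected code\n' > "$LIVE/backdoor.php"      # planted extra file

# -r recurse, -q only report which files differ; diff exits 1 when trees differ
CHANGED=$(diff -rq "$PRISTINE" "$LIVE" || true)
echo "$CHANGED"
```

Files reported as "Only in" the live tree are the first ones worth eyeballing.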

Quote:
Originally Posted by ZS-
I have "grep -r"'d the whole /home partition and cannot find sources/functions.php anywhere... However, I realise the code could be obscured or encrypted (would someone go to that level of hassle?)
It might not be that much of a hassle. I once saw a Joomla theme where the author had used rot13 to obscure his copyright notice, and the theme had the PHP code to unrot it. That would be enough to keep straight string searches from registering a hit.
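Since obfuscation defeats a straight string search, another angle is to grep for the decoding primitives themselves: obfuscated PHP almost always calls something like str_rot13, base64_decode or gzinflate before handing the result to eval. A sketch against a scratch directory (in practice you would point it at /home or the web roots; the planted sample file is invented):

```shell
SCAN_DIR=$(mktemp -d)   # stand-in for /home
printf '<?php eval(str_rot13($payload));\n' > "$SCAN_DIR/theme.php"   # planted sample
printf '<?php echo "hello";\n'              > "$SCAN_DIR/clean.php"

# -r recurse, -l print matching file names only, -E extended regex
HITS=$(grep -rlE 'str_rot13|base64_decode|gzinflate|gzuncompress' "$SCAN_DIR")
echo "$HITS"
```

There are legitimate uses for all of these, so treat hits as leads, not verdicts.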
 
Old 01-19-2011, 07:46 AM   #32
ZS-
LQ Newbie
 
Registered: Jan 2011
Posts: 21

Original Poster
Rep: Reputation: 7
Quote:
Originally Posted by tva View Post
Not that this would solve anything, but I'd apply iptables rules to drop any new outgoing connections, add logging for outgoing connection attempts, and only allow related and established connections, if that is possible in that scenario.

I didn't look that closely at the earlier posts, but did you grep your logs for the targeted IP? That would help identify whether it's triggered per request to some script.

edit:
Just in case, have you tried rkhunter? (http://sourceforge.net/projects/rkhunter) I bet it won't find Apache-related stuff, but it might find out whether someone planted a backdoor on the system earlier..
My iptables setup is pretty well locked down now, both in and out, and logs all attempts to breach security (connections to ports that are not open).

A grep for the targeted IP returns nothing... in any of the logs.

I have run rkhunter and also its sister tool, Lynis, but neither found anything.
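For reference, tva's iptables suggestion comes down to something like the following firewall fragment (a sketch only, assuming eth0 is the public interface and the classic state match; note the final rule also blocks legitimate new outbound traffic such as DNS lookups and package updates unless you allow those first):

```shell
# Let replies to inbound connections out; log and drop anything the box initiates
iptables -A OUTPUT -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -o eth0 -m state --state NEW -j LOG --log-prefix "OUT-NEW: "
iptables -A OUTPUT -o eth0 -m state --state NEW -j DROP
```

The LOG line is the useful part here: when the attack fires again, the kernel log will name the destination and port even though the packets never leave.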

Last edited by unSpawn; 01-19-2011 at 11:08 AM. Reason: //Wrong URI
 
Old 01-19-2011, 10:09 AM   #33
Nominal Animal
Senior Member
 
Registered: Dec 2010
Location: Finland
Distribution: Xubuntu, CentOS, LFS
Posts: 1,723
Blog Entries: 3

Rep: Reputation: 947
Have you run my modified script (in post #25) to see which pages were loaded at or around the time of the attack? e.g.
Code:
log-interval -p 'last thursday' 'last friday' /var/log/httpd/access.log | sort | uniq -c | sort -bg
The attack used UDP packets. If PHP was used to launch the attack, the code most likely uses the socket_create function. One thing, then, is to look for that:
Code:
find /var/www/html -type f -print0 | xargs -0 grep -le socket_create
There are legitimate uses for that, of course, and there is no reason to concentrate on PHP, but this is one more thing to look into.
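socket_create is not the only way PHP code can emit UDP, though; fsockopen, pfsockopen and stream_socket_client all accept a udp:// target as well, so a wider net is worth casting. Demonstrated here against a scratch directory (point it at the real web root in practice; the planted file is invented):

```shell
SCAN_DIR=$(mktemp -d)   # stand-in for /var/www/html
printf '<?php $s = fsockopen("udp://10.0.0.1", 53);\n' > "$SCAN_DIR/flood.php"
printf '<?php echo "hello";\n'                          > "$SCAN_DIR/clean.php"

# Match any of the PHP socket-opening functions
HITS=$(grep -rlE 'socket_create|p?fsockopen|stream_socket_(client|server)' "$SCAN_DIR")
echo "$HITS"
```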
Nominal Animal

Last edited by Nominal Animal; 03-21-2011 at 06:16 AM.
 
Old 01-19-2011, 04:37 PM   #34
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,409
Blog Entries: 55

Rep: Reputation: 3582
Quote:
Originally Posted by ZS- View Post
I have manually scanned them, I didn't think of using Logwatch
HTTP recon tends to be noisy, so Logwatch is a perfect early warning system for highlighting anything with 4xx and 5xx return codes.


Quote:
Originally Posted by ZS- View Post
Are you saying hash against the base files? The thing with this is that I know at least two phpBB installations are "modified", so the hashes will be different, thus not really giving an accurate picture.
Depends on the modifications, but indeed, if the majority of files don't match then you can discard that approach.


Quote:
Originally Posted by ZS- View Post
I have "grep -r"'d the whole /home partition and cannot find sources/functions.php anywhere... However, I realise the code could be obscured or encrypted (would someone go to that level of hassle?)
Your provider grepped one of your Apache logs and found a "/2009/04/bye-waves//sources/functions.php?CONFIG[main_path]=" path which returned a 200, so the "functions.php" file, as well as the "/2009/04/bye-waves/sources/" path, must have existed at some point in time. The log file is or was for a specific vhost, so that should have been easy to pinpoint right from the start, barring customer-initiated file deletion, restoration, et cetera.


Quote:
Originally Posted by ZS- View Post
You have given me a lot to think about there; I will be looking further into the mod_rewrite code above. Incidentally, I have written some mod_rewrite code to only allow uploading of image files into image directories, which I thought was a sensible step (especially where image directories have write rights).
Indeed it is. Note that deploying mod_security is not equal to the kind of protection you get from running current software versions, and it won't detect everything. Not one hundred percent comforting, but with a combination of user access controls, vulnerability-free software, network access restrictions, log reporting and mod_security you at least stand a better chance.
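As an aside, mod_rewrite is one way to police uploads, but denying execution inside the image directories gives a second layer even if a script does slip in. A hypothetical vhost fragment (directory path invented; php_admin_flag requires mod_php, and the Order/Deny syntax is Apache 2.2-era):

```apache
<Directory /var/www/vhosts/example.com/images>
    # No PHP engine in here, even for files ending in .php
    php_admin_flag engine off
    RemoveHandler .php .phtml .pl .cgi
    # Refuse to serve script files outright
    <FilesMatch "\.(php|phtml|pl|cgi)$">
        Order allow,deny
        Deny from all
    </FilesMatch>
</Directory>
```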


Quote:
Originally Posted by ZS- View Post
I am systematically checking all versions of phpBB, Wordpress, etc. on the sites and upgrading (or speaking to the customer and making sure they upgrade) in case there is any very old software on the server.
Good, good.
 
Old 01-19-2011, 04:58 PM   #35
ZS-
LQ Newbie
 
Registered: Jan 2011
Posts: 21

Original Poster
Rep: Reputation: 7
Quote:
Originally Posted by unSpawn View Post
Your provider grepped one of your Apache logs and found a "/2009/04/bye-waves//sources/functions.php?CONFIG[main_path]=" path which returned a 200, so the "functions.php" file, as well as the "/2009/04/bye-waves/sources/" path, must have existed at some point in time. The log file is or was for a specific vhost, so that should have been easy to pinpoint right from the start, barring customer-initiated file deletion, restoration, et cetera.
My provider has no access to my server; the logs they provided are from the remote side... They only act when someone complains (or they see the attack on their switch) and will not touch the server itself; instead they block it at the switch level, then reboot the server and (via PXE boot, I guess) send it into a recovery console or to the reimage stage...
 
Old 01-19-2011, 06:58 PM   #36
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,409
Blog Entries: 55

Rep: Reputation: 3582
Quote:
Originally Posted by ZS- View Post
My provider has no access to my server; the logs they provided are from the remote side...
For some indeterminable reason I was operating under the impression the posted log excerpts were from the victim's and not the attacker's side. Bummer.
 
Old 01-19-2011, 09:47 PM   #37
Hangdog42
LQ Veteran
 
Registered: Feb 2003
Location: Maryland
Distribution: Slackware
Posts: 7,803
Blog Entries: 1

Rep: Reputation: 419
Quote:
Originally Posted by unSpawn View Post
For some indeterminable reason I was operating under the impression the posted log excerpts were from the victim's and not the attacker's side. Bummer.
I was under the same impression. So does that mean that the URLs in the posted log files are coming out of the suspect server? If that is true, then we really have no idea what the compromise is.
 
Old 01-20-2011, 02:07 AM   #38
ZS-
LQ Newbie
 
Registered: Jan 2011
Posts: 21

Original Poster
Rep: Reputation: 7
Quote:
Originally Posted by Hangdog42 View Post
I was under the same impression. So does that mean that the URLs in the posted log files are coming out of the suspect server? If that is true, then we really have no idea what the compromise is.
Yes, this is correct: the URLs posted are as seen on the remote server, which means the script/whatever on my server is trying to connect to those URLs...

I have found one file that had some includes of remote files... they didn't appear in keeping with the relevant site, so I have removed them, and the site still works.

Will update as I get more info
 
Old 01-20-2011, 02:16 AM   #39
Nominal Animal
Senior Member
 
Registered: Dec 2010
Location: Finland
Distribution: Xubuntu, CentOS, LFS
Posts: 1,723
Blog Entries: 3

Rep: Reputation: 947
unSpawn, Hangdog42: There shouldn't be any confusion. Any logs with xx.xx.xx.242 were provided by the ISP and recorded on the victim(s) of the attack; xx.xx.xx.242 is ZS-'s server, one of the originators of the attacks.

I for one am trying to help ZS- go through the Apache logs on xx.xx.xx.242, because it is most likely the attack was launched from a web page (a PHP or Perl script), and not from e.g. the command line. If so, and xx.xx.xx.242 was not otherwise compromised, the Apache logs have entries that identify the attack page(s). This is true even if the page just launched the actual attack script.

Therefore, the files accessed via Apache well before, during, and immediately after the attack might include the attack page. The scripts I listed in post #22 and post #25 make it easy to concentrate on the attack period. I even listed an example command using them to get a list of files accessed in that interval, with access counts, sorted by access count.

As to the sources/functions.php string: because it was part of one of the URLs targeted in the attack, I suggested ZS- look for it on xx.xx.xx.242. Yes, it was a long shot; the target URLs would most likely have been fed to the script as part of the query (a POST data block, most likely, to avoid the URLs showing up in the logs). That string is rare, and practically all valid uses would be in 'include' statements. It should only have taken a couple of minutes to check all of the occurrences. I hoped ZS- would use this as a starting point, and do the same for other easy-to-detect strings he could find in the victim logs supplied by his ISP.

Only a careful investigation of all leads is likely to show exactly what the compromise is. Doing it in an organized manner also allows you to eliminate candidates. For example, if ZS- went through every single file accessed on his server when the attacks occurred, but found no page that could possibly have made the attack, he would have to conclude the attacker has compromised his server. Even if there were several thousand different files accessed in that interval, you could still check them all; you just need to do it in an organized, efficient way. (For example, use my script to get the file names, then file to sort them by file type, and then go through each file of the same type, using some kind of script or command to eyeball them sequentially.)
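The "sort by file type" step from the parenthetical can be sketched like this; file classifies by content rather than extension, which also catches scripts hiding behind an image suffix (scratch directory and file names invented for the demo):

```shell
SCAN_DIR=$(mktemp -d)   # stand-in for the set of files accessed during the attack
printf '<?php echo 1;\n' > "$SCAN_DIR/a.php"
printf 'GIF89a'          > "$SCAN_DIR/logo.gif"

# Classify by content and group identical types together for sequential review
TYPES=$(file "$SCAN_DIR"/* | sort -t: -k2)
echo "$TYPES"
```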

Security requires a focused mind. Concentrate!
Nominal Animal

Last edited by Nominal Animal; 03-21-2011 at 06:14 AM.
 
Old 01-20-2011, 02:42 AM   #40
120
Member
 
Registered: Oct 2010
Posts: 46

Rep: Reputation: 9
Quote:
Originally Posted by Hangdog42 View Post
I was under the same impression. So does that mean that the URLs in the posted log files are coming out of the suspect server? If that is true, then we really have no idea what the compromise is.
You've already covered what it can be - don't be so blue :-) Looking at it through the eyes of a n00b like me, I can see Nominal Animal has probably nailed the 'Columbo' part of this problem.

Forget all the fancy scripts and lah-dee-dah. What you have here is some form of executable making socket calls to a given port. Creatively greppin' through the web directory for socket-opening lines (or obfuscation-unpacking lines, for that matter) is probably going to turn this up.

Other thoughts (may be flawed)
People modify web apps for their own ends - it's somewhat in the spirit of Open Source - so hash checks are of limited use.

Before you can determine 'how' a trick is done, you need to see the trick. To that end finding how this file/executable is called is secondary to actually finding the executable itself (if that makes sense).

Closing off the port outbound may mitigate the attack, but won't stop it. It may render the box useless to the attacker, but it remains a dormant breach needing resolution.

Only the really stupid script kiddies tend to name files 'compromised_code.php'. They will often replace little-used files in well-known packages, such as 'language' files and the like. These live in normal directories and won't stand out.

Don't forget about Perl and CGI. Just because many folk use PHP doesn't mean it's the only vector: Perl can be 'God' on a system and can pretty much do anything within bounds. Unlike PHP, it is not usually locked down.

If this were my problem I'd do a couple of things. First, I would upload a simple 'browse' type script so I could deduce how much access a PHP (or Perl) user gets on that server. I'd want to know that one user could not read or write places they should not. Second, I would knock up a little script that takes a snapshot of the output of a single-shot 'top', lsof and netstat every couple of minutes and mails the results to me. Then when the attack is fired there is something useful to look at that may just nail down the 'what'.

Once you know the 'what', you can get to the 'why'.
 
Old 01-20-2011, 04:04 AM   #41
Nominal Animal
Senior Member
 
Registered: Dec 2010
Location: Finland
Distribution: Xubuntu, CentOS, LFS
Posts: 1,723
Blog Entries: 3

Rep: Reputation: 947
Spot on, 120. Also, this

Quote:
Originally Posted by 120 View Post
Don't forget about Perl and CGI. Just because many folk use PHP doesn't mean it's the only vector: Perl can be 'God' on a system and can pretty much do anything within bounds.
reminded me: ZS-, if you haven't already done so, check all files in your web directories which have execution rights:
Code:
find /var/www -type f -perm /0111 -ls
This should list all existing CGI programs, including any Perl scripts, unless they're launched from a PHP page (using e.g. popen("/usr/bin/perl", "w") and piping the script to the interpreter).

Which reminds me: ZS-, you are missing popen from your disable_functions. Also check if it's used anywhere:
Code:
find /var/www -type f -print0 | xargs -0 grep -e '[^A-Za-z]popen[^A-Za-z]'
A PHP page might have used it via eval (by constructing the function call in a variable), so check those too:
Code:
find /var/www -type f -print0 | xargs -0 grep -e '[^A-Za-z]eval[^A-Za-z]'
Nominal Animal

Last edited by Nominal Animal; 03-21-2011 at 06:13 AM.
 
Old 01-20-2011, 04:33 AM   #42
ZS-
LQ Newbie
 
Registered: Jan 2011
Posts: 21

Original Poster
Rep: Reputation: 7
I ran the code provided by Nominal and checked through the output; all of the files appear to be okay, though a couple had the includes from a remote server (as above).

Quote:
Originally Posted by Nominal Animal View Post
reminded me: ZS-, if you haven't already done so, check all files in your web directories which have execution rights:
Code:
find /var/www -type f -perm /0111 -ls
This should list all existing CGI programs, including any Perl scripts, unless they're launched from a PHP page (using e.g. popen("/usr/bin/perl", "w") and piping the script to the interpreter).
Hmmm, there are a few directories that have execute permissions on all files under them, and the files are only images... I am updating permissions and checking the files over now!

Quote:
Originally Posted by Nominal Animal View Post
Which reminds me: ZS-, you are missing popen from your disable_functions. Also check if it's used anywhere:
Code:
find /var/www -type f -print0 | xargs -0 grep -e '[^A-Za-z]popen[^A-Za-z]'
Running this command gives me..

./site1/web/wp-content/plugins/si-contact-form/si-contact-form/ctf_geekMail-1.0.php: $fh = @popen($this->_mailPath . ' -oi -f ' . $this->_cleanEmail($this->_headers['From']) . ' -t', 'w');
./site1/web/wp-includes/class-phpmailer.php: if(!@$mail = popen($sendmail, 'w')) {
./site2/web/wp-includes/class-phpmailer.php: if(!@$mail = popen($sendmail, 'w')) {
./site3/web/includes/db/sqlite.php: $this->db_connect_id = ($this->persistency) ? @sqlite_popen($this->server, 0666, $error) : @sqlite_open($this->server, 0666, $error);
./site3/web/temp/update2/includes/db/sqlite.php: $this->db_connect_id = ($this->persistency) ? @sqlite_popen($this->server, 0666, $error) : @sqlite_open($this->server, 0666, $error);

And various other sqlite.php files identical to the last result, but on other sites...

popen added to disable_functions now.

Quote:
Originally Posted by Nominal Animal View Post
A PHP page might have used it via eval (by constructing the function call in a variable), so check those too:
Code:
find /var/www -type f -print0 | xargs -0 grep -e '[^A-Za-z]eval[^A-Za-z]'
Nominal Animal
And this command shows loads of results, mainly from the WordPress sites - so is there a legitimate reason to call this function?

Thanks again guys

Last edited by ZS-; 01-20-2011 at 04:41 AM.
 
Old 01-20-2011, 06:09 AM   #43
Nominal Animal
Senior Member
 
Registered: Dec 2010
Location: Finland
Distribution: Xubuntu, CentOS, LFS
Posts: 1,723
Blog Entries: 3

Rep: Reputation: 947
Quote:
Originally Posted by ZS- View Post
./site1/web/wp-content/plugins/si-contact-form/si-contact-form/ctf_geekMail-1.0.php: $fh = @popen($this->_mailPath . ' -oi -f ' . $this->_cleanEmail($this->_headers['From']) . ' -t', 'w');
./site1/web/wp-includes/class-phpmailer.php: if(!@$mail = popen($sendmail, 'w')) {
These do look like they're sending mail via the sendmail binary. I don't understand why they would use an external program, since PHP has a mail function for exactly this purpose. It's not too difficult to change those to use the function, but it does get quite tedious after the fourth or fifth place you have to do it. Unless, of course, the files are the same, and you can just copy over the changes.

You do know you can use diff to compare two files or two directories? I use
Code:
diff -Nabur thing1 thing2 | less
which shows the differing lines starting with - for thing1 and + for thing2, plus a few lines before and after that are common in both. Or, use
Code:
diff -bar --side-by-side thing1 thing2 | less
to see the differences side-by-side.

Quote:
Originally Posted by ZS- View Post
./site3/web/includes/db/sqlite.php: $this->db_connect_id = ($this->persistency) ? @sqlite_popen($this->server, 0666, $error) : @sqlite_open($this->server, 0666, $error);
Yeah, those are just "persistent" sqlite_open calls, not popen calls. This should omit those:
Code:
find /var/www -type f -print0 | xargs -0 grep -e '[^_a-z]popen[^_a-z]'
How many hits did you get? (You can just add | wc -l at the end to give just the count.)

Quote:
Originally Posted by ZS- View Post
popen added to disable_functions now.
Like I said, that is probably a good idea, even if it disables some of the e-mail functionality for now.

Quote:
Originally Posted by ZS- View Post
And this command shows loads of results, mainly from wordpress sites - so is there a legitimate reason to call this function?
Kind of. eval runs the given string as PHP code. It is very difficult to use safely, since an attacker only needs to inject their commands into that string (and many programmers don't do a good job of filtering nasty stuff out, because it is quite a complex thing to do right).
What you can do is check if these files are original code or not.

Have you checked how many individual pages were accessed on your server when the attack on, for example, Jan 15 occurred?
Code:
log-interval '15/Jan/2011:04:00:00' '15/Jan/2011:05:00:00' /var/log/httpd/* | sort | uniq | wc -l
to get the list, with the number of accesses, during that interval, use
Code:
log-interval '15/Jan/2011:04:00:00' '15/Jan/2011:05:00:00' /var/log/httpd/* | sort | uniq -c | sort -bg
Personally, I'd check each of those files very, very carefully, starting from the most accessed one (the last one in the list).
Then I'd do the same for the other times you know (from the logs your ISP showed you) an attack has occurred.
It would be a good idea to check if the lists (for separate attack times) have any files in common.
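For anyone following along without ZS-'s log-interval helper, the same windowing can be done with plain awk over an Apache combined-format log; a lexical comparison of the timestamp is safe here because the day and month do not change inside the window. A sketch with an invented three-line sample log:

```shell
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
1.2.3.4 - - [15/Jan/2011:03:59:59 -0500] "GET /index.php HTTP/1.1" 200 100
1.2.3.4 - - [15/Jan/2011:04:10:00 -0500] "GET /attack.php HTTP/1.1" 200 100
1.2.3.4 - - [15/Jan/2011:05:00:01 -0500] "GET /index.php HTTP/1.1" 200 100
EOF

# $4 is "[dd/Mon/yyyy:HH:MM:SS"; strip the bracket, keep requests in the window,
# then count accesses per request path
HITS=$(awk -v from='15/Jan/2011:04:00:00' -v to='15/Jan/2011:05:00:00' \
  '{ t = substr($4, 2); if (t >= from && t <= to) print $7 }' "$LOG" | sort | uniq -c)
echo "$HITS"
```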
Nominal Animal

Last edited by Nominal Animal; 03-21-2011 at 06:12 AM.
 
Old 01-20-2011, 07:00 AM   #44
120
Member
 
Registered: Oct 2010
Posts: 46

Rep: Reputation: 9
I don't want to throw any confusion into the mix here - but this little script may help. It takes a snapshot of top/lsof/netstat as often as you care to cron it, emailing the results to you. It's probably not the best-looking code in the world, but it's clear, concise, and you can see what it does and how to change it.

Code:
#!/bin/bash
EPOCHTIME=`date +%s`
TIMESTAMP=`date`
LOGFILE="/tmp/snapshot.$EPOCHTIME"
EMAILFROM="your_sender@domain.com"
EMAILTO="your_recipient@domain.com"
SUBJECT="SNAPSHOT: $TIMESTAMP"
top -b -c -n 1 >> $LOGFILE
lsof -i -nP >> $LOGFILE
netstat -aunt >> $LOGFILE
(
cat <<!
From: $EMAILFROM
Subject: $SUBJECT
To: $EMAILTO
!
[ "$CC" ] && echo "Cc: $CC"
cat $LOGFILE
) | /usr/lib/sendmail -t
rm $LOGFILE
To have it run every 4 minutes:
Code:
*/4 * * * * nice -n 20 /path/to/script.sh >/dev/null 2>&1
As far as checking who has access to what in a web context, here are a couple of scripts - one in PHP, the other in Perl (don't forget to chmod +x the Perl one). Calling these from a web browser will show you just how much is exposed via your PHP and Perl. Make sure you remove them afterwards if you use them. Hopefully they will be useful to someone.

PHP VERSION:
Code:
<?php
echo "<pre>\n";
if (ini_get('safe_mode'))
{
    echo "[safe_mode enabled]\n\n";
}
else
{
    echo "[safe_mode disabled]\n\n";
}
if (isset($_GET['dir']))
{
    ls($_GET['dir']);
}
elseif (isset($_GET['file']))
{
    cat($_GET['file']);
}
else
{
    ls('/');
}
echo "</pre>\n";
function ls($dir)
{
    $handle = dir($dir);

    while ($filename = $handle->read())
    {
        $size = filesize("$dir$filename");

        if (is_dir("$dir$filename"))
        {
            if (is_readable("$dir$filename"))
            {
                $line = str_pad($size, 15);
                $line .= "<a href=\"{$_SERVER['PHP_SELF']}?dir=$dir$filename/\">$filename/</a>";
            }
            else
            {
                $line = str_pad($size, 15);
                $line .= "$filename/";
            }
        }
        else
        {
            if (is_readable("$dir$filename"))
            {
                $line = str_pad($size, 15);
                $line .= "<a href=\"{$_SERVER['PHP_SELF']}?file=$dir$filename\">$filename</a>";
            }
            else
            {
                $line = str_pad($size, 15);
                $line .= $filename;
            }
        }

        echo "$line\n";
    }

    $handle->close();
}

function cat($file)
{
    ob_start();
    readfile($file);
    $contents = ob_get_contents();
    ob_clean();
    echo htmlentities($contents);
    return true;
}
?>
IN Perl :
Code:
#!/usr/bin/perl -w
use strict;
use CGI qw(:standard);
use CGI::Carp qw(fatalsToBrowser);
use File::Copy;
print header;
my $browsedir = "/";
my $homedir = "/tmp/";
#get the querystring path
my $request = $ENV{'QUERY_STRING'};
print <<End_of_Header;
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang="en-US" xml:lang="en-US" xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Perl File System Browser</title>
<style type="text/css" media="screen">
	body {
		background-color: #CCCCCC;
		font: 11px verdana,arial;
		}
		h1 {
		color: #ff0000;
		font: 18px verdana,arial;
		}
</style>
</head>
<body>
<h1>SPFB - Simple Perl File Browser</h1>
End_of_Header
if ($request) {
	$request =~ s/\%([A-Fa-f0-9]{2})/pack('C', hex($1))/seg;
	$request =~ s%p=%%g; #strip off the leading p=
	$browsedir = $request;
		#check if this is a file or a directory
		if (-f $request) {
			#if this is a zip file download it
			print "REQ: $request <br>";
				if ($request =~ /(\.zip|\.gz)/g) {
				print "looks like a zip file - I will try to copy it to<b>$homedir</b><br>";
				$_ = copy($request,$homedir) or die "Copy failed: $!";
				print "copied $request to $homedir<br>" if $_;
				print "</body>";
				exit();	
				}
			if (-B $request) {
			print "This looks like a binary file and may cause the server to crap the bed.<br>I will attempt to copy it to: <b>$homedir</b>.<br>";
			$_ = copy($request,$homedir) or die "Copy failed: $!";
			print "copied $request to $homedir<br>" if $_;
			print "</body>";
			exit();	
			} else {
				print "This is a file, I will attempt to open it<br>";
				open (FILE, $request);
				while (<FILE>) {
				chomp;
				print $_ . "<br>";
				}
			close (FILE);
			print "</body>";
			exit();	
			}
		}
}

	
my ($path,$urlpath,$upone,@tmp);
my @files = <$browsedir/*>;
print '<a href="' .$0 . '">.</a><br>';
#calculate the upone value to go to
@tmp = split(/\//, $request);
#drop off the right side of the array if it has at least one element
if (@tmp) {
pop @tmp;
$upone = join("/", @tmp);
$upone =~ s/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg;
}

print '<a href="' .$0 . '?p=' . $upone .'">..</a><br>';
foreach my $file (@files) {
	$path = $file;
	$path =~ s%\/{2}%/%g; #strip double slashes
	(my $dev, my $ino, my $mode, my $nlink, my $uid, my $gid, my $rdev, my $size, my $atime, my $mtime, my $ctime, my $blksize, my $blocks) = stat $path;
	$urlpath = $path;
	$urlpath =~ s/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg;
	print "F" if -f $file;
	print "B" if -B $file;
	print "D" if -d $file;
	print "T" if -T $file;
	print " ";
	print "U:$uid G:$gid ";
	printf "%03o", $mode&0777;
	print " &lt;$size Bytes -" . localtime($ctime) . "&gt;";
	print '  <a href="' .$0 . '?p=' . $urlpath . '">' . $path . '</a> ';
	print "<br>";
	
} 
print "</body>";

Last edited by 120; 01-20-2011 at 07:01 AM.
 
Old 01-20-2011, 08:31 AM   #45
ZS-
LQ Newbie
 
Registered: Jan 2011
Posts: 21

Original Poster
Rep: Reputation: 7
Quote:
Originally Posted by Nominal Animal View Post
You do know you can use diff to compare two files or two directories? I use
Code:
diff -Nabur thing1 thing2 | less
which shows the differing lines starting with - for thing1 and + for thing2, plus a few lines before and after that are common in both. Or, use
Code:
diff -bar --side-by-side thing1 thing2 | less
to see the differences side-by-side.
Yes I did know that

Quote:
Originally Posted by Nominal Animal View Post
Yeah, those are just "persistent" sqlite_open calls, not popen calls. This should omit those:
Code:
find /var/www -type f -print0 | xargs -0 grep -e '[^_a-z]popen[^_a-z]'
How many hits did you get? (You can just add | wc -l at the end to give just the count.)
11 Hits from that command, all appear to be sendmail related.

Quote:
Originally Posted by Nominal Animal View Post
Kind of. eval runs the given string as PHP code. It is very difficult to use safely, since an attacker only needs to inject their commands into that string (and many programmers don't do a good job of filtering nasty stuff out, because it is quite a complex thing to do right).
What you can do is check if these files are original code or not.

Have you checked how many individual pages were accessed on your server when the attack on, for example, Jan 15 occurred?
Code:
log-interval '15/Jan/2011:04:00:00' '15/Jan/2011:05:00:00' /var/log/httpd/* | sort | uniq | wc -l
to get the list, with the number of accesses, during that interval, use
Code:
log-interval '15/Jan/2011:04:00:00' '15/Jan/2011:05:00:00' /var/log/httpd/* | sort | uniq -c | sort -bg
1024 - That's an odd number for that time of the morning!

Quote:
Originally Posted by Nominal Animal View Post
Personally, I'd check each of those files very, very carefully, starting from the most accessed one (the last one in the list).
Then I'd do the same for the other times you know (from the logs your ISP showed you) an attack has occurred.
It would be a good idea to check if the lists (for separate attack times) have any files in common.
Nominal Animal
I'm running the command now to see what was being accessed at that time... it might give me some clues.
 
  

