Linux - Security: This forum is for all security related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.
If you were hacked, how would you know this? If you're using for example, "cat" to look at your password file for changes, who says the cracker didn't just install some custom version of cat that filters out the new user from /etc/passwd? You don't. That's the point.
Once something's been sufficiently compromised, you're basically screwed and you have to start over from a point where you trust the source. Of course, the hard part is determining (1) when you've been sufficiently violated, and (2) how far back you have to go before it's safe. If you're 99.99% sure you've been cracked, my advice is to wipe it all and start from scratch. (You do have recent backups, right? If not, that certainly complicates the question.)
As for switching ports, I don't think it really matters. A portscan will still reveal that your SSH server port is open, and connecting to it will reveal the fact that it's SSH. In other words, it can (and will) still be attacked, which is why I say switch to keypair auth only. The length of time it would take to successfully brute-force a keypair system is several orders of magnitude longer than that required to brute-force a password system. The same argument goes for switching your username, IMO. Anytime you merely hide information away (e.g., switching ports or usernames) you're only buying yourself a small amount of time up front, without actually boosting security. It's like comparing steganography to cryptography; obviously, crypto is better. Always operate with the least privilege and most rigorous authentication mechanism possible.
While I agree with almost everything you said, I do think that switching ports `matters' in a practical sense. While this issue has been discussed ad nauseam, I just thought I'd throw in my two cents.
It's not wrong to switch to a nonstandard port in addition to 1) disallowing root login and 2) disallowing password authentication. It will lower the probability of a `routine 5|<|pT |<iDDi3' attack. Only more advanced attackers (those who even understand how the internet works) will be trying to connect. That way, your SSH logs become smaller and easier to parse (by hand).
For example (back when I ran SSH on the standard port 22), about 70% of the IP addresses associated with `break-in' attempts in my logs did not do a full portscan first. There are `exploits' whose main purpose is to attempt to break into many SSH systems, hoping to get lucky with at least one; a full-blown portscan would be too time-consuming for such a program. Extrapolating from this example, imagine that tomorrow a GAPING_SECURITY_HOLE is announced in OpenSSH (along with a patch). Now suppose that half of the potential script kiddies use an exploit that can take control of an unpatched OpenSSH (the other half are clueless about this 0-day and continue what they were doing before). If only half of those are able to find your unpatched SSH on its nonstandard port, your likelihood of being attacked within a given timespan is halved. (The 70% figure is the only statistic here with any factual backing, generated from my own sample; I assume the kiddies who know about the exploit are, as a whole, more educated, so an estimated adjustment is made. YMMV.) Thus, you do `buy time' to patch your system.
Nonstandard ports are therefore beneficial from a general, security-conscious, business point of view. Methods such as port knocking decrease attack probability much more. At the bare minimum, you are no worse off than you were before (a nonstandard port requires no extra resources, and with the iptables `recent' module, port knocking is only a few extra rules in the kernel packet filter, introducing unnoticeable overhead). But it's not voodoo or rubbing a dead chicken: such things are not to be depended on, and they don't increase security (if your definition of security is the absolute ability of someone dedicated to get into your system given enough time), but they do decrease the likelihood of your system being compromised.
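To make that concrete, here is a rough sketch of the iptables `recent'-module knock mentioned above. The knock port (1234) and the 30-second window are arbitrary choices of mine, not anything from this thread; rule order matters, and the established-connections rule must come first or you'll cut off your own live sessions.

```
# Illustrative port-knocking sketch using the iptables "recent" match.
# Keep already-established connections alive (must precede the DROPs).
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# A packet to the secret knock port records the source address on the
# KNOCKED list, then is dropped silently (no reply to portscanners).
iptables -A INPUT -p tcp --dport 1234 -m recent --name KNOCKED --set -j DROP
# SSH is accepted only from addresses that knocked within the last 30 s.
iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCKED --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

With these rules a plain portscan sees port 22 as filtered; only someone who first touches the knock port gets an SSH banner at all.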
Normally, "ls -l /etc/shadow". This won't tell you WHICH password was changed, however. And if you were hacked, it's a simple and standard thing to do to cover your (the cracker's) tracks. This may include changing timestamps on /etc/shadow, deleting or altering logfiles, replacing the ls command with a trojan version, etc. Basically, anything you think you can trust ... you can't.
If you were hacked, how would you know this? If you're using for example, "cat" to look at your password file for changes, who says the cracker didn't just install some custom version of cat that filters out the new user from /etc/passwd? You don't. That's the point. You might be able to expose this trick by booting your system from a Knoppix CD and using the known-good "cat", "ls", "ps", etc. from there.
I am going to use knoppix to do the autopsy, but I just logged on as my regular account and fingered root. Sure enough 100% hacked. Root logged on yesterday at 5:01 EDT from some foreign IP, I'll check it out later. What info should I save to show as proof of being hacked, as if the other ISP will care? Looks like a long night tonight!
Definitely get rid of password authentication in SSH. Pubkey only.
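A minimal sshd_config fragment for that, as a sketch (the AllowUsers name is a placeholder; reload sshd after editing and test from a second session before logging out of the first):

```
# /etc/ssh/sshd_config (fragment) -- pubkey-only, no root login
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
AllowUsers yourname
```

Generate the keypair on the client with ssh-keygen and append the public key to ~/.ssh/authorized_keys on the server before turning password auth off.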
If you cannot limit the IP addresses allowed to connect to only a small known set (via iptables or hosts.allow), you can use an adaptive firewall strategy to dynamically limit them. For example, here are the tracks of an intrusion attempt from my logfiles. You can see that the intruder got three attempts to get in (via pubkey only), and then was banished. The program doing the banishing was one I wrote for myself, "banssh", a simple shell script triggered out of hosts.allow. There are other varieties readily available on the net.
Code:
FIRST ATTEMPT
-------------
Jun 11 09:33:07 familyroom sshd[29469]: Connection from 72.22.82.147 port 57127
Jun 11 09:33:07 familyroom sshd[29469]: Did not receive identification string from 72.22.82.147
Jun 11 09:33:07 familyroom banssh: Initial authentication for 72.22.82.147 pending, flagging
SECOND ATTEMPT
--------------
Jun 11 09:36:50 familyroom sshd[29573]: Connection from 72.22.82.147 port 50378
Jun 11 09:36:51 familyroom sshd[29573]: User root from 72.22.82.147 not allowed because not listed in AllowUsers
Jun 11 09:36:51 familyroom banssh: Previous authentications for 72.22.82.147 have failed, flagging
THIRD ATTEMPT
-------------
Jun 11 09:36:51 familyroom sshd[29591]: Connection from 72.22.82.147 port 50446
Jun 11 09:36:51 familyroom sshd[29591]: User root from 72.22.82.147 not allowed because not listed in AllowUsers
Jun 11 09:36:51 familyroom banssh: Previous authentications for 72.22.82.147 have failed, flagging
Jun 11 09:36:51 familyroom banssh: Multiple authentications for 72.22.82.147 have failed, BLOCKED
(That "BLOCKED" indication above means banssh added a tcpwrapper rule to deny
all future connections from this IP address)
FOURTH AND ALL SUBSEQUENT ATTEMPTS
----------------------------------
Jun 11 09:36:52 familyroom sshd[29608]: refused connect from 72.22.82.147 (72.22.82.147)
You can see that the first hit by the intruder did not attempt to send any authentication. It was probably just a port scan that identified the port was open, and scraped the returned header to determine it was sshd. Then all the following hits were rapid fire - seconds apart - and were most likely attempts to brute-force guess a root password. Doomed to fail since I only allow pubkey authentication. Double-doomed because they only got two futile guesses before "banssh" locked them out. I chose to have banssh create a tcpwrapper rule, but that could just as easily have been an iptables rule if I were so inclined. I will probably create both rules in the future.
Can you post the banssh script? I would like to see it. Thanks
Certainly. It's only about 40 lines of executable code; the rest is comments and whitespace.
First, I'll explain how it works. The script is called from two places: (1) /etc/hosts.allow and (2) /etc/ssh/sshrc
The call from hosts.allow occurs on each ssh incoming attempt and passes the incoming IP address as a parameter. This IP address is added to hosts.allow as a DENY rule, but initially prepended with two comment characters ("#"). Subsequent calls from hosts.allow find the line already existing and strip one of the leading comment chars. Eventually the comment chars are all stripped, and the line becomes active. The number of failed attempts an intruder is allowed is determined by how many initial comment chars the script puts there.
All modifications to hosts.allow are confined between "banssh_START" and "banssh_END" comment lines (which is why you'll see me using sed where at first you'd think grep would be the command of choice).
Upon successful authentication/login, sshd normally calls /etc/ssh/sshrc if that file exists. Mine didn't - I added it. This sshrc file simply calls my banssh script again, but WITHOUT any parameters. This triggers banssh to parse the incoming IP address from the environment and then REMOVE any lines referencing that IP from hosts.allow.
My /etc/hosts.deny file is empty.
Note: Since banssh modifies /etc/hosts.allow, it needs root permissions. When called from hosts.allow, it has these by default. When called from /etc/ssh/sshrc it does not. So I use sudo to give it needed permissions in this case.
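The comment-stripping mechanism described above can be tried out in isolation on a scratch file, so nothing touches the real /etc/hosts.allow. This is my own illustrative walk-through (GNU sed assumed), using the same sed commands the script uses:

```shell
# Scratch copy of the banssh-managed region of hosts.allow
FILE=$(mktemp)
IP="10.0.0.5"
printf '%s\n%s\n' "# banssh_START" "# banssh_END" > "$FILE"

# Failed attempt 1: insert the DENY rule, disabled behind two comment chars
sed -i -e "/banssh_START/a##ALL : $IP : DENY" "$FILE"
C1=$(grep -c "^##ALL : $IP : DENY" "$FILE")   # rule present, two strikes left

# Failed attempt 2: strip one leading "#"
sed -i -r -e "/banssh_START/,/banssh_END/s/#(.*$IP)/\1/" "$FILE"
C2=$(grep -c "^#ALL : $IP : DENY" "$FILE")    # one strike left

# Failed attempt 3: strip the last "#"; the DENY rule is now live
sed -i -r -e "/banssh_START/,/banssh_END/s/#(.*$IP)/\1/" "$FILE"
C3=$(grep -c "^ALL : $IP : DENY" "$FILE")     # intruder is now blocked
rm -f "$FILE"
```

Each failed connection removes one "#", so the number of comment characters initially prepended is exactly the number of strikes an intruder gets.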
Now that the explanation is out of the way, here's the script and associated files:
/etc/hosts.allow (initial setup)
Code:
# IP addresses you NEVER want to block come BEFORE the banssh stuff
ALL : 192.168.0.1 : ALLOW
############################### banssh_START ##################################
############################### banssh_END ##################################
sshd : ALL : spawn (/usr/local/sbin/banssh %a)&
/etc/sudoers (initial setup)
Code:
ALL ALL = NOPASSWD: /usr/local/sbin/banssh
/etc/ssh/sshrc (initial setup)
Code:
#!/bin/bash
# Call the "banssh" program to advise that the connecting IP address
# (i.e., the one that spawned this program) was successfully authenticated.
# We must use sudo to execute banssh as root because it needs to modify
# /etc/hosts.allow. Make sure all users who are allowed to ssh into this
# computer have appropriate sudo permissions. The following /etc/sudoers
# line should work:
# ALL ALL = NOPASSWD: /usr/local/sbin/banssh
sudo /usr/local/sbin/banssh
And finally, /usr/local/sbin/banssh
Code:
#!/bin/bash
# The file this program will modify, file extension to use for the backup copy
FILE=/etc/hosts.allow
BAK=banssh
# Delete any older backup files that may have been left laying around
rm -f "$FILE.$BAK"
# Determine the initial (pre-editing) size of FILE
SZ_START=`wc -c $FILE | cut -f1 -d ' '`
# Was an IP address passed to this program via the commandline?
if [ "$1" ]
then
# Set IP address to the commandline parameter (passed via /etc/hosts.allow)
IP=$1
# This regex check needs bash; comment it out for other/older shells.
# (The pattern is left unquoted so it also works in bash 3.2+, where a
# quoted pattern is matched as a literal string instead of a regex.)
if [[ ! "$IP" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]] ; then exit ; fi
# Determine if a line containing this IP already exists in FILE
EXISTS=`sed -n -e"/banssh_START/,/banssh_END/s/$IP/ZZZ/p" $FILE`
if [ "$EXISTS" ]
then
# A line containing this IP is already present in FILE, so modify it
logger -p auth.info -t "$0" \
"Previous authentications for $IP have failed, flagging ($FILE)"
sed -i.$BAK -r -e"/banssh_START/,/banssh_END/s/#(.*$IP)/\1/" $FILE
else
# FILE contains no lines referencing this IP, so add one
logger -p auth.info -t "$0" \
"Initial authentication for $IP pending, flagging ($FILE)"
sed -i.$BAK -e"/banssh_START/a##ALL : $IP : DENY" $FILE
fi
# Check to see if this IP is now blocked, if so note this in the logfile
BLOCKED=`sed -r -n -e"/banssh_START/,/banssh_END/s/^[^#].*$IP/ZZZ/p" $FILE`
if [ "$BLOCKED" ]
then
logger -p auth.info -t "$0" \
"Multiple authentications for $IP have failed, BLOCKED ($FILE)"
fi
else
# No commandline parameter, so parse the IP address from the environment
# variable set by sshd (in this case, we were called from /etc/ssh/sshrc)
IP=`echo $SSH_CONNECTION | cut -f1 -d ' '`
# Same bash regex check as above; unquoted so bash 3.2+ also treats it
# as a regex. Comment it out for other/older shells.
if [[ ! "$IP" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]] ; then exit ; fi
# A more basic check, for shells that don't support the above regex check
if [ ! "$IP" ] ; then exit ; fi
# Delete all lines from FILE that reference IP
logger -p auth.info -t "$0" \
"Authentication successful, clearing $IP ($FILE)"
sed -i.$BAK -e "/banssh_START/,/banssh_END/s/$IP/DELETE_ME/" $FILE
sed -i -e "/DELETE_ME/d" $FILE
fi
# Determine the final (post-editing) size of FILE
SZ_END=`wc -c $FILE | cut -f1 -d ' '`
# This sanity check needs bash; comment it out for other/older shells.
# Sanity check: did the size of FILE change too much? If so, restore the backup.
if [[ ($(($SZ_END - $SZ_START)) -gt 32) || ($(($SZ_START - $SZ_END)) -gt 32) ]]
then
logger -p auth.err -t "$0" \
"Size of $FILE changed too much after editing, restoring backup copy"
mv "$FILE.$BAK" "$FILE"
fi
# Delete the backup file
rm -f "$FILE.$BAK"
exit
That's it! I wrote this particular script, but credit must go to others for initial concepts. Mostly Bob Toxen for his "Cracker Trap" mentioned in his book "Real World Linux Security".
Pulled the box offline last night, used a rescue disk to change root's PW so I could log in as root and look around, and this is what I found.
Root's last login was from a Romanian ISP.
He/she broke in around 5 PM on the 10th; the shadow file confirms the PW was changed on the 10th.
The last command in the history file is what I did last, so he covered his tracks that way.
My 'Logwatch' that was e-mailed to me says someone uploaded 83 kB, but the FTP logs were wiped clean.
I had to download a new copy of knoppix last night, so I will poke around with that later today, but a few questions are in my mind.
How do I find what they uploaded?
Is there a way to notify someone when root's PW is changed, via logs or some other way?
Why did they change root's PW? Wouldn't it be beneficial to them to be a little more covert? There is nothing subtle about changing root's PW.
When I use Knoppix later today, I'll see if they added any users, or groups.
Thanks!!
Everyone has been great, I just need to secure the new box a little better this time!!
------------
Without a tool like Tripwire, that is going to be a bit tough. I believe you can search for altered RPMs with rpm -Va, but that won't tell you if anything new was added. You could run chkrootkit and rkhunter to see if they pick up anything. You might also search for files added after the break-in occurred. However, if they are good at covering their tracks, you might not get a satisfactory answer.
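As a sketch of that "search for files changed after the break-in" idea (demonstrated here on a scratch directory; GNU touch/find assumed): on the real box you would run the find against / from a known-good environment like the Knoppix CD, and remember that mtimes can be forged, which is why pairing this with rpm -Va is worthwhile.

```shell
# Scratch-directory demo of finding files modified after a cutoff time
DIR=$(mktemp -d)
touch -d "2006-06-09 12:00" "$DIR/old-file"   # last modified before the hack
touch -d "2006-06-11 12:00" "$DIR/new-file"   # modified after the hack

# GNU find's -newermt compares each file's mtime against a timestamp string;
# here the cutoff is the suspected break-in time from the logs.
CHANGED=$(find "$DIR" -type f -newermt "2006-06-10 17:00")
rm -rf "$DIR"
```

On a real system you would also check ctimes (find -newerct), since ctime is harder to forge than mtime.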
Quote:
Why did they change root's PW? Wouldn't it be beneficial to them to be a little more covert? There is nothing subtle about changing root's PW.
I agree that it isn't subtle, but then again it does lock you out of root. I suppose it's also possible that they don't know root's password, since it was guessed by a dictionary attack. I don't know if those kinds of scripts leave a record of what password worked.
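As for the earlier question about being notified when root's PW changes: a simple cron-driven checksum watch would do it. A minimal sketch of mine (illustrative names throughout; demonstrated on a scratch file, since reading /etc/shadow needs root, and on a real box you'd mail yourself or call logger instead of echo):

```shell
# Compare a stored checksum of a watched file (e.g. /etc/shadow) against
# the current one; report when it has changed since the last run.
watch_file() {
    local target=$1 state=$2 current
    current=$(sha256sum "$target" | cut -f1 -d' ')
    if [ -f "$state" ] && [ "$(cat "$state")" != "$current" ]
    then
        echo "CHANGED"    # real version: logger -p auth.warning or mail
    fi
    echo "$current" > "$state"    # record the new baseline
}

TARGET=$(mktemp); STATE=$(mktemp -u)
echo "root:oldhash" > "$TARGET"
FIRST=$(watch_file "$TARGET" "$STATE")    # first run: just records baseline
echo "root:newhash" > "$TARGET"
RESULT=$(watch_file "$TARGET" "$STATE")   # second run: detects the change
rm -f "$TARGET" "$STATE"
```

Of course, this has the same weakness discussed earlier in the thread: a cracker with root can disable the watch itself, so it only catches the sloppy ones.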
It's been good watching this thread progress and see there's been lots of advice given that is on the mark.
Apologies to the OP because I would like to add some comments (basically addressing LQSEC regulars and those with a +1K postcount) and recap a wee bit. Please accept this as additional information to keep in mind and not as me lecturing you. The main reason I want to do this is that LQSEC IMHO doesn't have that many experienced incident handlers and I hope this would increase your mana the next time you go down the incident handler path, which I hope you do.
First of all I would like to talk about handling in general. As with all troubleshooting you need to have a plan, and handling breach-of-security incidents is no different, except that mistakes can have consequences that are irreversible. For instance, forgetting to have the data frozen can hamper forensics a lot later on. To me this means that if you rise to the call, you take the responsibility to handle the case and must understand the risks. A plan or checklist also serves to keep both you and the "victim" focused, as victims tend to digress from any steps given, based on impulse, frustration levels and other influences that are often irrelevant. Make sure the OP follows the plan and signs off on each phase. (If the OP stalls, digresses without solid reasoning or just behaves stubbornly, let him know he's fscking up and drop your responsibilities. Perhaps the OP has some expertise-fu going on I would not understand as well.)
- The LQ FAQ: Security references should be able to point you towards checklists, checking CERT and the SANS Reading Room first would be a good choice. Next to that this forum has some incident handling threads I think could be useful too.
On to handling the case itself.
- The description of the incident and the current situation wasn't made one hundred percent clear in the first post. This means that if you do not address that by asking questions first, you come out of your assessment phase blindfolded. Building an understanding of the situation (audit data, auth data, logs, installed SW, running services) is crucial, because otherwise you cannot give advice tailored to the situation, which you should if you want to handle (suspected) breach-of-security incidents properly and responsibly.
- Damage control / risk mitigation. Knowing the location and purpose of the box are important starting points because once (perceived) untrusted the box should be isolated immediately and parties involved should be warned pending further investigation.
- It's only after the situation is stabilised you should start a more elaborate investigation or have the OP initiate a mop up.
- Noticing and addressing the OP's questions is also important. You need to place yourself in the position of the OP and remember that the side effects of such an incident can cause major uncertainty, which has to be addressed and replaced by reassurance (help is on the way, it's under control) and confidence (it will work out following these steps). Also keep an eye on each other's questions and answers, and do try to fill in gaps. After all, we're all trying to solve the OP's problem, and combined efforts have more chance of covering it all.
On to specific advice.
- However passionately argued by some, moving a service to another port is not a way to enhance security. Period.
- Not an ad hominem, but more the example itself: "That's a nice hacker, he changed your password because it was too simple. In case you get hacked again and he would lose his zombie." While this may be the case, there was no evidence given to support it. Handling should revolve around facts, because you know what assumptions make...
- "Basically, anything you think you can trust ... you can't." That is an excellent remark and starting point. Shame it wasn't used earlier.
All in all I think most of the advice given was solid. The only major annoyance I've seen was the banssh hijack. No, don't look at the messenger: look at what it adds to addressing the OP's problem. See what I mean? Nothing. Next time please make your own thread or ask via email. TIA
* Moderators nota bene: for those of you that want to reply in anger for slagging off your efforts: you're probably missing the point, but OK: be my guest, but please do so by email. TIA
Well, the question itself is still unanswered: Why did they change the root password? Because once they broke in and got (probably limited) root access, they had to change the password to something they knew, so they could log in the regular way and start using your box.
Would it be bad form to ask what IP the attack came from? I'd like to add it to my black list.
I don't think it would be bad form, but others may disagree. I am going to go through the logs this weekend to pinpoint the IP that did the breach; what I can give you so far is that it was a Romanian IP address that the hacker used to log in as root. It also looks like the initial crack may have come from a different IP, but only maybe. I don't have time to do a full autopsy until the 'new' box is back up 100%. If it's OK, I'll post IP ranges (or the actual IP) along with clips of the logs.