Old 05-11-2019, 03:40 PM   #1
timsoft
Member
 
Registered: Oct 2004
Location: scotland
Distribution: slackware 15.0 64bit, 14.2 64 and 32bit and arm, ubuntu and raspbian
Posts: 495

Rep: Reputation: 144
Question: Server has been hacked. Any ideas on how to find/remove the hacker(s)?


Hi, I have had 2 servers at different locations hacked by what I assume is a brute-force ssh attack, even though I have denyhosts running with a 5-second refresh rate, and only ssh (via a non-standard port) is forwarded by the routers involved. One server is Slackware 13.1 and the other is 14.1 64bit. They had been patched with security patches at least up to 2014.
I discovered a compromised user on the 13.1 machine, which I removed, and found a hidden directory in /tmp containing a cryptocurrency miner (which used up my internet data allowance :-( ).
I gzipped it and removed it. I will redo this machine with 14.2, but it will take a while as it runs mail, file, database and remote monitoring services.

The 14.1 machine, on the other hand, shows signs of attack (running
Code:
netstat -pn
as root shows ssh connections to known brute-force ssh attack IPs), but no process IDs or program names ("-" is all I get).
I tried
Code:
lsof -n -i
which gave me a PID and [accepted].
I've tried
Code:
w
but it only shows my own login, and I tried
Code:
ps -A
(all as root), but nothing stands out. I've checked /etc/passwd for new users, checked for the existence of /root/.ssh/* (the .ssh directory is not there), and looked for hidden dirs in /tmp like on the other server, but there is no sign of anything. I've looked through /var/log/message* and /var/log/sys* and not spotted anything, but there are still these ssh connections showing in netstat.
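
For reference, the kind of wider sweep I still mean to run (rough commands only; expect plenty of legitimate hits, especially from the hidden-directory search):
Code:
find / -xdev -type d -name '.*'                        # hidden directories anywhere on the root filesystem
find /tmp /var/tmp /dev/shm -xdev -mtime -30 -ls       # anything recently written in the world-writable spots
find / -xdev -type f -perm -4000 -mtime -30 -ls        # recently changed setuid binaries
ls -la /var/spool/cron /etc/cron.*                     # system and per-user cron entries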

Any ideas of what else to check? Again, only ssh, port-forwarded from a non-standard port, has access from the internet to the server. Thanks.
Has anyone any ideas?

Last edited by timsoft; 05-11-2019 at 03:41 PM. Reason: fix typo
 
Old 05-11-2019, 04:05 PM   #2
upnort
Senior Member
 
Registered: Oct 2014
Distribution: Slackware
Posts: 1,893

Rep: Reputation: 1162
Everything I have read about hacked servers is the same -- reinstall. Update all security patches. Use SSH keys and disable remote password logins. Carefully and selectively restore data files.

Basically, do not try to outguess the people who hacked the system. They already own the system and few people can find all the tracks.
 
7 members found this post helpful.
Old 05-11-2019, 04:06 PM   #3
hoodlum7
Member
 
Registered: May 2016
Posts: 40

Rep: Reputation: Disabled
I would boot your suspect system off a known good USB/DVD image and inspect the system. It is entirely possible they installed hacked utilities which hide their directories and processes. The only way to be sure is to use outside good utils to investigate.

Once you are sure they are no longer on the system and you have stopped their scripts from starting up automatically, I would then reinstall all packages:

Code:
slackpkg reinstall a ap d e f k kde l n t tcl x xap xfce y

Last edited by hoodlum7; 05-11-2019 at 04:12 PM.
 
1 members found this post helpful.
Old 05-11-2019, 04:27 PM   #4
astrogeek
Moderator
 
Registered: Oct 2008
Distribution: Slackware [64]-X.{0|1|2|37|-current} ::12<=X<=15, FreeBSD_12{.0|.1}
Posts: 6,269
Blog Entries: 24

Rep: Reputation: 4196
I agree with upnort, unfortunately. Trying to clean up a compromised server is a losing proposition.

If you want to do forensics, take it offline or mirror it into a VM where you can run it without internet access.

For the live host, reformat, reinstall from the ground up and carefully 100% verify anything that you must reuse from the old one - configs, web site directories, etc. Then secure it thoroughly before reconnecting to the net.

Configure SSH to use shared key authentication only, disable root login, and further restrict by firewall rules if possible.

Sorry, and good luck!
 
2 members found this post helpful.
Old 05-11-2019, 04:51 PM   #5
hitest
Guru
 
Registered: Mar 2004
Location: Canada
Distribution: Debian, Void, Slackware, VMs
Posts: 7,342

Rep: Reputation: 3746
Quote:
Originally Posted by astrogeek View Post
I agree with upnort, unfortunately. Trying to clean up a compromised server is a losing proposition.
I agree completely. Format and restore from back-ups that are known to be secure. If possible, when the infected system is disconnected from the network, try to determine how they penetrated your system. Then take steps to prevent a recurrence of the event. Perhaps you can ask a network sysadmin to help you with your detective work. Best of luck, man!
When I started with Linux back in 2002 I ran a home server. My unit was owned and I was locked out. I know how depressing this is.
 
1 members found this post helpful.
Old 05-11-2019, 05:21 PM   #6
timsoft
Member
 
Registered: Oct 2004
Location: scotland
Distribution: slackware 15.0 64bit, 14.2 64 and 32bit and arm, ubuntu and raspbian
Posts: 495

Original Poster
Rep: Reputation: 144
Yes, I will be doing what you suggested for the first server; in the meantime I disabled port forwarding in the router, which stopped external logins. I am not looking forward to having to use keys for ssh (although I did a test on a couple of VMs, which seemed to work) because of the inconvenience of having to use a specific key regardless of which machine I am connecting from.
From my own documentation, on the client I run
Code:
ssh-keygen -b 521 -t ecdsa
and save the private and public key in a directory with 700 permissions, with the keys themselves at 600 permissions. Then create /root/.ssh/authorized_keys on the server containing the .pub key (only up to and including -- and all on one line).
Then log in from the client with ssh -i /private/key root@serverip and accept the server's host key.
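
For completeness, the server-side part of that is roughly (a sketch, assuming the key pair was generated as above and the .pub file has already been copied to the server):
Code:
mkdir -p /root/.ssh && chmod 700 /root/.ssh
cat id_ecdsa.pub >> /root/.ssh/authorized_keys   # the .pub file copied over from the client
chmod 600 /root/.ssh/authorized_keys
and on the client an entry in ~/.ssh/config saves typing -i and the port every time (the host name, address, port and key path below are just placeholders):
Code:
Host office-server
    HostName 203.0.113.10
    Port 2222
    User root
    IdentityFile ~/.ssh/id_ecdsa
After that, a plain "ssh office-server" picks up the right key and port.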

For the 14.1 server I'll hunt for a 14.1 boot/install disk. I was planning on just installing all the patches/packages/*.txz files for 14.1 in the hope that it would overwrite any compromised binaries. Unfortunately, as well as running samba and vsftpd, it also runs a database and proprietary software which was installed by the developer, so I don't have documentation or installation media for that software. I do have the drive data and /etc config files rsynced to a second drive.
Does anyone have comparative experience of denyhosts versus fail2ban? I have historically used denyhosts, and recently tightened it up (reduced the number of failed attempts before blocking), but either someone got lucky, or there was an unpatched ssh bug.
If I had a preference I'd go up to 14.2, but samba works differently there regarding permissions and smb1/windows access, and I don't suppose the other software people will be available on a Sunday. I may have to just turn off port forwarding for that server as well, but that will mean travelling to do any admin. I forgot to mention that I have checked crontab on both machines (the 13.1 one's was hacked, so I fixed it once I stopped the malware running), but the 14.1 machine's crontab was clean.
It is rather unfortunate, and has had me checking 6 other servers. I may have to change ssh access on all of them. Trawling through the forums here also brought up rkhunter and chkrootkit, which I haven't tried, but I am more concerned with how they got in, so that if I redo everything from scratch it won't just happen again.
Doing forensics is a good idea. If I have a spare 2TB drive, I might just try that.
 
Old 05-11-2019, 06:00 PM   #7
astrogeek
Moderator
 
Registered: Oct 2008
Distribution: Slackware [64]-X.{0|1|2|37|-current} ::12<=X<=15, FreeBSD_12{.0|.1}
Posts: 6,269
Blog Entries: 24

Rep: Reputation: 4196
I do not mean to cast any negative light on fail2ban or denyhosts, but I would not consider either a first line of defense.

Both monitor logs for failed attempts, but that still leaves the service open to access and some number of failed attempts from a single IP - and there are a lot of IPs! The bots run by the human scum who do these things are infinitely patient and are now "smart" enough to simply discover the number of attempts and the time window before ban, then run that right to the edge of the limit from as many IPs as needed to produce a hit in some desired time with some specified certainty of success.

IMO, the only defense strategy which can succeed in the current environment is to not expose the service to any more attempts than is absolutely necessary - deny them access to make those failed or successful attempts.

The first line of defense is of course the firewall rules. If you do not need global access via SSH, then only accept packets from the specific IP ranges, or addresses from which you normally connect. Consider also imposing a time window during which access is allowed.
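
For illustration, something like this (a sketch only; the port, source range and hours are placeholders, and the time match works in UTC unless you add --kerneltz):
Code:
# accept SSH only from a known range and only during a work-hours window, drop the rest
iptables -A INPUT -p tcp --dport 2222 -s 203.0.113.0/24 \
         -m time --timestart 07:00 --timestop 19:00 -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j DROP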

Port knocking is also an option to allow access from other addresses. Simple knocking schemes can be easily discovered, so many dismiss it as ineffective. But there are ways to use it effectively, and easily, with a little thought.
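
As a concrete illustration (not a recommendation of these exact values), knockd (there is a SlackBuild for it, if I remember right) can watch for a port sequence and open the firewall only for the IP that knocked; the sequence, SSH port and iptables command below are placeholders:
Code:
# /etc/knockd.conf (sketch)
[options]
    UseSyslog

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /usr/sbin/iptables -I INPUT -s %IP% -p tcp --dport 2222 -j ACCEPT

[closeSSH]
    sequence    = 9000,8000,7000
    seq_timeout = 5
    tcpflags    = syn
    command     = /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 2222 -j ACCEPT
The client then runs something like "knock server.example.com 7000 8000 9000" before connecting, and the reverse sequence afterwards.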

Use shared keys with pass phrase protection, so if the keys are stolen they still cannot be used. You can push a single key to each of your own devices so that you only have to manage one key and pass phrase - and change it regularly.

Again, the main idea is not to block an IP after failed attempts, but to deny them the opportunity to make the attempts. Then use fail2ban and denyhosts to hold the bridge temporarily if the first line is breached.

UPDATED: I could not find this thread when I first posted, but follow the link here to an excellent alternative idea which is very effective!

Last edited by astrogeek; 05-11-2019 at 07:31 PM. Reason: typos, added comments
 
7 members found this post helpful.
Old 05-11-2019, 06:15 PM   #8
ChuangTzu
Senior Member
 
Registered: May 2015
Location: Where ever needed
Distribution: Slackware/Salix while testing others
Posts: 1,718

Rep: Reputation: 1857
This is a good thread for reference purposes as well. I've bookmarked it along with the other good tips threads.

astro made excellent points.

PS: disconnect that server from the net, reinstall from the latest stable version, and lock it down as tightly as you can going forward. If they got in once, then when they realize you have made changes they may be tempted to come back. Depending on what they installed, they could be notified as soon as you lock them out.
 
3 members found this post helpful.
Old 05-11-2019, 07:19 PM   #9
Gerard Lally
Senior Member
 
Registered: Sep 2009
Location: Leinster, IE
Distribution: Slackware, NetBSD
Posts: 2,184

Rep: Reputation: 1765
Use a known-good 14.1 host at the same patch level to do a comparative checksum on at least the ssh, ls, w, and ps binaries. Scp the suspect binaries to the clean system and checksum them there, so you are not relying on checksum tools on the suspect host that may themselves have been compromised.
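
Something along these lines, run entirely from the known-good box (a rough sketch; the binary paths are from memory and the port is a placeholder, so adjust both):
Code:
# record checksums of the good box's own binaries (same 14.1 patch level)
sha256sum /usr/bin/ssh /bin/ls /usr/bin/w /bin/ps > good.sums

# pull the same files from the suspect host and checksum them here,
# so nothing on the suspect machine is trusted to do the comparison
mkdir suspect && cd suspect
scp -P 2222 root@suspect-host:/usr/bin/ssh root@suspect-host:/bin/ls \
    root@suspect-host:/usr/bin/w root@suspect-host:/bin/ps .
sha256sum ssh ls w ps
cat ../good.sums    # compare the hashes by eye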

Do not assume it was a brute force attack; test the newly installed system before and after you reinstall the proprietary software.

Last edited by Gerard Lally; 05-11-2019 at 07:35 PM.
 
3 members found this post helpful.
Old 05-11-2019, 10:06 PM   #10
drgibbon
Senior Member
 
Registered: Nov 2014
Distribution: Slackware64 15.0
Posts: 1,221

Rep: Reputation: 943
On the new installs I'd take steps for OpenSSH hardening. There are many good tips on that page, but I think at least the following (see the sketch at the end of this post):
  • No default port.
  • No password authentication (keys only).
  • Increase key strength.
  • Configure for strong ciphers, Kex, MAC (needs server and client config).
  • Allow only relevant users.
  • Use strict mode.
  • Disable root logins (maybe not a big deal with key login only, but still).
Obviously strict firewall rules should be used, lock down TCP wrappers, and denyhosts could also be useful. Use of ~/.ssh/config and keychain (it's on SBo) on the clients should make connecting easier.
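
A minimal /etc/ssh/sshd_config sketch covering several of the points above; the port, user name and algorithm lists are only example values and should be checked against what your OpenSSH version actually supports:
Code:
Port 2222                           # anything but the default 22
PermitRootLogin no                  # no direct root logins
PasswordAuthentication no           # keys only
ChallengeResponseAuthentication no
PubkeyAuthentication yes
AllowUsers tim                      # only the accounts that really need SSH
StrictModes yes                     # refuse keys/dirs with sloppy permissions
MaxAuthTries 3
LoginGraceTime 30
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
Test with "sshd -t" and restart sshd (/etc/rc.d/rc.sshd restart on Slackware) from a session you keep open, so a typo doesn't lock you out.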
 
Old 05-11-2019, 10:40 PM   #11
glorsplitz
Senior Member
 
Registered: Dec 2002
Distribution: slackware!
Posts: 1,310

Rep: Reputation: 368
Quote:
Originally Posted by drgibbon View Post
Obviously strict firewall rules should be used
When I first tried iptables some time ago there were all kinds of attempts to access my server, so I shut down everything external and only SSH'd in from the internal network. I'm not sure about tightening external access; I imagine the points drgibbon mentions cover it.
 
Old 05-11-2019, 11:37 PM   #12
denydias
Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 297

Rep: Reputation: Disabled
Quote:
Originally Posted by drgibbon View Post
Obviously strict firewall rules should be used, lock down TCP wrappers, and denyhosts could also be useful.
my 2¢: add fwknop to any server hardening toolkit.
 
3 members found this post helpful.
Old 05-12-2019, 12:48 AM   #13
gus3
Member
 
Registered: Jun 2014
Distribution: Slackware
Posts: 490

Rep: Reputation: Disabled
And my 2¢:

When you first spot that a server has been compromised, it might have been so for 6 seconds, 6 minutes, or 6 hours. Depending on the type of server (web, FTP, ssh, rsync, or some other service), you might want to spend a few precious minutes to get some forensic info. Focus on finding the unauthorized entry point.

First rule: assume that every command-line tool has been compromised as well. Don't use "netstat" or "lsof" or even "ps". An external attack may have replaced those commands, or replaced a critical library that they depend on. "netstat" can be made to lie to you. BTW, that also applies to "cat"! It's an external CLI tool, not a shell built-in.

Second rule: There is one shell built-in that you can count on: "echo". The shell itself may also be compromised, but "echo" is just about guaranteed to work, every time, the way you want. Plus, it works in Every CLI Shell In Unix!

So if you need to find a PID that "ps" or "top"/"htop" has hidden, get the full list of PIDs from
Code:
# echo /proc/* # showing a list of all PID's
and compare it with something like
Code:
# ps axf | awk '{ print $1 }' # if "ps" is compromised, some PID's might be hidden
It's a lot of numbers to look at, but you're looking for things that aren't there. Look for things like how many PIDs are in even or odd decades ([24680]N or [13579]N), and count them up. Do you get a different count between the "ps" command and the "echo" command above? Find the PID that shows up in the "echo" command but not in the "ps" command. That's the PID you want to focus on.
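
A rough way to automate that comparison, leaning on shell globbing and parameter expansion rather than external tools wherever possible (a sketch only; "ps" still has to supply its side of the comparison, and a process that exits between the two snapshots shows up as a harmless false positive):
Code:
#!/bin/bash
ps_list=$(ps -A -o pid=)           # what ps is willing to admit to
ps_list=" ${ps_list//$'\n'/ } "    # flatten into one space-separated string

for d in /proc/[0-9]*; do          # shell globbing, no ls involved
    pid=${d#/proc/}
    case "$ps_list" in
        *" $pid "*) ;;                                  # ps knows about it
        *) echo "in /proc but hidden from ps: $pid" ;;  # worth a closer look
    esac
done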

At this point, you might want to move your CLI work to another machine on the same hub. Not a switch! You need to see some network traffic.

(If you don't have a few cheap Cat-5 hubs to insert into a network, you really should get some at the first available opportunity.)

Assume that your compromised system is under command from outside. Another machine (on the same hub) can see lots of network traffic into that command channel. For example, with a mis-configured Apache server, another system on the same hub can spot a compromised web server on WWW1.example.com.

When your regular tools are compromised, get ready to think like a hacker: You have lots of tools available. Some of them won't work the way you want them to. So, you use the other tools to get something done.

I speak from experience, as my signature says:
 
3 members found this post helpful.
Old 05-12-2019, 02:41 AM   #14
LuckyCyborg
Senior Member
 
Registered: Mar 2010
Posts: 3,530

Rep: Reputation: 3367
Quote:
Originally Posted by timsoft View Post
Hi, I have had 2 servers at different locations hacked by what I assume is brute force ssh attack...
Excuse me and my ignorance, because I am not a webmaster and I have never touched a server.

But I remember that the former forum member Darth Vader said a long time ago (5 or even 8 years ago?) that using SSH passwords instead of keys on a webserver is plain crazy, but that brute-force attacks on SSH passwords these days are only done by 12-year-olds; the grownups instead exploit an issue in some famous CMS like WordPress, Joomla or Drupal to inject their "little nuisance" onto a server.

What rang a bell is that I remember him saying that usually the objective is not a root takeover but to inject a spam sender or, later, miners, and that these are two-stage attacks: first the attacker adds a small PHP script, colloquially called a "proxy", which can be a single line, or just a line added to the top of an otherwise innocent file that is exposed to the web and executable by PHP; then a more complex second stage is injected somewhere in /tmp or another place writable by Apache and PHP.

Apparently, this story looks like a live example of those long-forgotten explanations.
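
If those boxes run Apache and PHP at all, I suppose a rough sweep for that pattern would look something like this (a sketch only; /var/www/htdocs is the Slackware default DocumentRoot, and the user that httpd runs as depends on httpd.conf, so adjust both):
Code:
# PHP files under the web root modified recently
find /var/www/htdocs -name '*.php' -mtime -30 -ls

# common obfuscation calls used by injected "proxy" lines
grep -rlE --include='*.php' 'eval *\(|base64_decode *\(|gzinflate *\(' /var/www/htdocs

# second-stage payloads dropped where the web server user can write
find /tmp /var/tmp /dev/shm -user apache -ls    # substitute whatever user httpd runs as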

Last edited by LuckyCyborg; 05-12-2019 at 03:29 AM.
 
1 members found this post helpful.
Old 05-12-2019, 05:36 AM   #15
timsoft
Member
 
Registered: Oct 2004
Location: scotland
Distribution: slackware 15.0 64bit, 14.2 64 and 32bit and arm, ubuntu and raspbian
Posts: 495

Original Poster
Rep: Reputation: 144
I think I'll have to use iptables to block all external IPs apart from my own, although it does mean that if I am out and about I won't be able to do maintenance, as wherever I am, that IP will be blocked.
Useful tip about comparing /proc/* with ps -aux, thanks. At the moment there is nothing hidden, but in the process of remote updating, the machine's ssh got borked, so as well as locking me out, it locked them out as well. :-)
Now to work out how to block incoming and outgoing ssh from all but the local LAN and my external IP.
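
Something along these lines is what I have in mind (a rough sketch; the port, LAN range and external address are placeholders for my real ones):
Code:
SSHPORT=2222                 # the non-standard sshd port
LAN=192.168.1.0/24           # the local LAN
ADMIN=203.0.113.45           # my external IP

# incoming ssh: allow the LAN and my own address, drop everyone else
iptables -A INPUT -p tcp --dport $SSHPORT -s $LAN   -j ACCEPT
iptables -A INPUT -p tcp --dport $SSHPORT -s $ADMIN -j ACCEPT
iptables -A INPUT -p tcp --dport $SSHPORT -j DROP

# outgoing ssh from this server: only towards the LAN and my address
iptables -A OUTPUT -p tcp --dport 22 -d $LAN   -j ACCEPT
iptables -A OUTPUT -p tcp --dport 22 -d $ADMIN -j ACCEPT
iptables -A OUTPUT -p tcp --dport 22 -j DROP
saved into something like /etc/rc.d/rc.firewall so it survives a reboot.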
 
  

