Linux - Security
This forum is for all security-related questions. Questions, tips, system compromises, firewalls, etc. are all included here.
So if your server was sending out suspicious packets to external IPs located somewhere around the world (and it isn't supposed to), what would you check first? Would you run an AV scanner for Linux servers? We are contacting a Linux specialist in the meantime, but I got curious myself and started messing with my Linux test server at home. I also ran a quick netstat on our actual server and saw the following program names, to provide at least some info about our situation. Hopefully someone can tell me what the heck some of the programs below are and what else you would do... THANK YOU!
netstat -anutp
(address columns trimmed; only the State and PID/Program name columns are shown below)
LISTEN 4654/xinetd
LISTEN 7282/Xvnc
LISTEN 5474/vadmind
LISTEN 5201/smbd
LISTEN 5810/vino-serve
LISTEN 7282/Xvnc
LISTEN 5483/vrsched
LISTEN 4807/rpc.statd
LISTEN 7353/vino-serve
LISTEN 4773/portmap
LISTEN 7282/Xvnc
LISTEN 5485/vxmld
LISTEN 4654/xinetd
LISTEN 5040/cupsd
LISTEN 5086/sendmail:
LISTEN 5201/smbd
LISTEN 5459/tfodbcd
LISTEN 5029/python
ESTABLISHED 5225/winbindd
ESTABLISHED 19068/in.telnet
CLOSE_WAIT 5225/winbindd
ESTABLISHED 7638/in.telnetd
ESTABLISHED 19404/in.telnet
ESTABLISHED 20855/in.telnet
ESTABLISHED 20888/in.telnet
ESTABLISHED 28084/in.telnet
ESTABLISHED 5248/winbindd
ESTABLISHED 15445/in.telnet
ESTABLISHED 9717/in.telnetd
ESTABLISHED 5248/winbindd
ESTABLISHED 8910/in.telnetd
ESTABLISHED 10162/in.telnet
ESTABLISHED 26188/in.telnet
ESTABLISHED 29377/in.telnet
ESTABLISHED 11079/in.telnet
ESTABLISHED 14273/in.telnet
ESTABLISHED 7282/Xvnc
ESTABLISHED 9601/in.telnetd
ESTABLISHED 21734/in.telnet
ESTABLISHED 21227/in.telnet
ESTABLISHED 20988/in.telnet
ESTABLISHED 13348/in.telnet
LISTEN 7282/Xvnc
LISTEN 5054/sshd
LISTEN 5040/cupsd
5204/nmbd
5204/nmbd
5204/nmbd
5204/nmbd
5299/avahi-daem
5483/vrsched
5486/vnetfax
5484/vgsched
5485/vxmld
5487/vmaild
5490/fim-sm
ESTABLISHED 5490/fim-sm
5489/fim-lb
ESTABLISHED 5489/fim-lb
5488/fim-cx
ESTABLISHED 5488/fim-cx
5485/vxmld
4807/rpc.statd
5299/avahi-daem
4807/rpc.statd
5474/vadmind
4773/portmap
5040/cupsd
5299/avahi-daem
5299/avahi-daem
Quote:
Originally Posted by robrdz
So if your server was sending out suspicious packets to external IP's located somewhere around the world (and not supposed to), what would you check first?
Assuming you were notified about this situation (by a security team, for instance), I would first fire up tcpdump and filter for the IPs I was given to verify the claim.
If I see the traffic being generated, I would then look at the source port on my system and use lsof to see which process has the socket open. This is fairly easy if the connections are long-lived (like a daemon's). However, it's quite possible they're short-lived processes that get maliciously forked to make the call (e.g. a compromised web application forking processes to make network calls). In that scenario I would use something like auditctl to create an audit trail of opened and closed network sockets. Be sure your system has sufficient disk space, because this audit trail will be really noisy.
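The steps above can be sketched as shell commands (run as root). The interface name, local port, and destination address are placeholders (203.0.113.7 is a documentation IP), so substitute the details you were actually given:

```shell
# 1) Verify the claim: watch for traffic to the reported address
#    (eth0 and 203.0.113.7 are placeholders).
tcpdump -nn -i eth0 host 203.0.113.7

# 2) Tie an established connection to a process, given its local port
#    (45678 is a placeholder taken from the tcpdump output).
lsof -nP -i TCP:45678

# 3) For short-lived processes, build an audit trail of connect() calls,
#    then search it; this gets noisy fast, so watch your disk space.
auditctl -a always,exit -F arch=b64 -S connect -k egress
ausearch -k egress
```

These are diagnostic commands against the live host, so there's no harm in running the tcpdump and lsof steps first before enabling auditing.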
Once I figure out which application is doing the wrongdoing, I would start inspecting the application's source and components to determine where the issue is occurring. I would also review the settings the application runs under (e.g. the Apache conf, to see whether there are security settings I can turn on, like restricting the ability to execute processes), and put in a firewall rule blocking the outgoing traffic to the malicious destination.
From there I would take steps to resolve the root cause of the issue (in the application itself). That might mean restoring from an earlier point and checking whether that earlier point is still compromised (because I would have backups). If the earlier point is not compromised, I would install it in a sandboxed environment and then implement a fix in the application, which likely involves upgrading it to the latest release so that it is patched. I would then migrate the patched software back into production and bring down the compromised service.
This assumes the compromised application is not mission-critical to the end users. The steps vary depending on the type and scope of the application. In some cases I would start with a block and throw up a maintenance page for users so they're not harmed by the found vulnerability. If the application is a login portal then I'd force a password reset and suggest users change their passwords at other sites as well. It may also involve remuneration, such as an apology letter and, depending on the scope of the application, more. That would involve multiple departments in a large organization, such as PR, marketing, security, engineering operations, and development. When it comes to compromised services it's a big "it depends".
First of all: welcome to LQ, hope you like it here.
I noticed that you have not been giving us detailed information. Realize posting just "red hat" (instead of exact distribution and release), "suspicious packets" (deemed suspicious by whom and based on what criterion?), "external IP's located somewhere around the world" (just post them) and partial "netstat -anutp" output is not an efficient way of transferring information. Remember you need to help us help you.
That said and in addition to what sag47 already posted about:
- generated traffic (that's 'netstat -antupe;' in full and do obfuscate your external IP addresses if you need to) and
- the use of lsof (that's 'lsof -Pwln;'), and
- if auditctl is not available you could use an outbound iptables filter (like 'iptables -I OUTPUT 1 ! -d [local_address] -m state --state NEW -j LOG --log-prefix "egress_NEW ";' but do adjust to your chain use and naming conventions).
Also regardless of which distribution you're running look into:
0) (edge) router, switch and IDS (if any) log entries for clues with respect to connections if they're logged,
1) adjacent machines on your own network or other locations this machine has access to or provides access for including employee and client machines,
2) look at local login records ('lastb', 'last -wai;') and /var/log/secure including logrotated copies,
3) /var/log/messages and any daemon logs including logrotated copies, dmesg, kernel log,
4) and when running RHEL or derivatives also check package integrity ('rpm -qVva 2>&1|grep -v "^\.\{8\}";') and
5) check for files in /boot, /etc, /home, /tmp and other common directories that are not part of any package, are owned by root or a daemon or unprivileged user, have a setuid or setgid bit set or look otherwise out of place by name or modification time.
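A minimal sketch of item 5 above; the directories, the 14-day window, and the two example paths are assumptions to adjust to your incident timeline:

```shell
# Files with a setuid or setgid bit under common system directories:
find /bin /sbin /usr /etc -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null

# Recently modified files in places attackers favour:
find /etc /tmp /var/tmp /dev/shm -xdev -type f -mtime -14 -ls 2>/dev/null

# On RHEL, flag files that no installed package owns
# (the two paths are examples; loop over whatever looks out of place):
for f in /lib/libgcc.so /etc/cron.hourly/cron.sh; do
    rpm -qf "$f" >/dev/null 2>&1 || echo "not owned by any package: $f"
done
```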
*One tool that comes in handy is Logwatch; however, you should not install software on, or otherwise alter, a system with a (perceived?) breach of security. Best to copy the log files to a known-safe workstation and process them there.
A few more questions:
- are you responsible for this machine or else: who is?
- what's the exact distribution and release?
- has it been hardened properly?
- has it been updated regularly?
- have any breaches of security happened before on this machine or network?
**Since this is a potential breach of security you should react fast, notify responsible personnel and users and as soon as you have information post it here. Add any details you think may be helpful. Please stay with this thread until completion and respond in a timely fashion.
Thanks... The Linux server is RedHat Version 5.3 (Tikanga). It's not been hardened properly or updated regularly from my knowledge. This is the first time we've experienced a security breach. I am not responsible for this machine. My co-worker is and he is working with someone as we speak [or type]. Here is his report after running some tests:
The Linux server shows signs of misbehavior when its default gateway is in place. Some evidence indicated unauthorized connections to external IPs. The affected system would become unresponsive within two minutes. The only known way to keep the server from becoming unreachable is to remove its default gateway or block all outgoing traffic at the main firewall. Such actions likely prevent the excessive connections to outside sources that could have triggered the firewall's IPS (Intrusion Prevention System) to block the server.
A quick analysis revealed a suspicious binary and scripts. A crontab entry was discovered in the /etc/crontab file, scheduled to run every three minutes. The program called by the crontab entry mimics a known library (libgcc.so); on quick examination, the "library" appears to be an executable hidden behind a well-known name. The suspicious code was temporarily quarantined with the following commands.
# mv /lib/libgcc.so /root/.bad/.bad.libgcc.so
# mv /etc/cron.hourly/cron.sh /root/.bad/cron.sh
# vi /etc/crontab (and comment out the following line)
*/3 * * * * root /etc/cron.hourly/cron.sh
However, putting the default gateway back still renders the server unreachable after two minutes. That indicates there is likely other suspicious code that has not been completely removed. Based on that observation, there are a couple of short- and long-term recommendations.
Short term solution:
- Leave the default gateway unconfigured on the affected server
- Impose access restrictions on the PIX firewall so the servers may only talk to authorized IP blocks; traffic to other, unknown IPs is denied.
Medium term solution:
- Re-install a fresh operating system with the latest Redhat RPMs plus re-install applications. However this effort is only effective if the source of breach can be identified and eliminated.
Long term solution:
- Recommend to review overall security environment of the whole company in order to eliminate future breaches.
Quote:
Originally Posted by robrdz
A quick analysis revealed suspicious binary and scripts. [...] However, putting back the default gateway still renders the server unreachable after two minutes.
Doing a package integrity check as unSpawn recommended will help to catch any other possibly malicious files masquerading as common system files. Be sure to read up on all of the options before executing any commands from an Internet forum and understand what they do. Because the system had modified files in /etc and /lib it is likely the breach involved gaining root access (unless the filesystem permissions were messed up). Since that is the case it is less likely the logs on the local system are any good (unless the adversary was sloppy). Hopefully, your team has implemented centralized logging. It's still worth checking out the local system logs (inspect them from another computer as unSpawn suggested). Also, inventory the kernel modules as it's possible malicious code was installed there. Compare the running kernel modules to a similar system if you have one.
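One way to do that kernel-module comparison, assuming you have SSH access to a similar, known-good machine (called goodhost here as a placeholder):

```shell
# Snapshot the loaded module names on both machines, then diff them;
# anything present only on the suspect box deserves a close look.
lsmod | awk 'NR>1 {print $1}' | sort > /tmp/modules.suspect
ssh goodhost "lsmod | awk 'NR>1 {print \$1}' | sort" > /tmp/modules.good
diff /tmp/modules.good /tmp/modules.suspect

# Package integrity check: the flag column marks size (S), checksum (5)
# or mtime (T) changes in files that packages installed.
rpm -Va 2>/dev/null
```

A missing module in the diff isn't automatically malicious (hardware differs between boxes), but an extra module with an unfamiliar name is worth tracing back to a package with rpm -qf.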
Quote:
Originally Posted by robrdz
Short term solution:
- Leave the default gateway unconfigured on the affected server
- Impose access restrictions on the PIX firewall so the servers may only talk to authorized IP blocks; traffic to other, unknown IPs is denied.
Medium term solution:
- Re-install a fresh operating system with the latest Redhat RPMs plus re-install applications. However this effort is only effective if the source of breach can be identified and eliminated.
Long term solution:
- Recommend to review overall security environment of the whole company in order to eliminate future breaches.
Those sound like sane recommendations. I would add: if you don't already have it, centralized system logging (rsyslog and syslogd can ship logs to another system via UDP or TCP; syslog-ng works well as an aggregator), then a log ingest tool like Splunk or Logstash. All forward-looking, of course, after the matter at hand has been handled. Gathering system trends is also a good thing (Graphite/collectd, Munin, Nagios/PNP4Nagios, Cacti, etc.). At my previous place of work we would get hourly emails on suspicious logs for all systems; if the flagged logs turned out to be normal, we would add them to a filter.
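A minimal client-side sketch of that centralized logging with rsyslog; loghost.example.com is a placeholder for your aggregator, and @@ forwards over TCP (a single @ would be UDP):

```shell
# Append a forwarding rule to the local rsyslog config, then restart:
# "*.*" means every facility and priority goes to the remote host.
cat >> /etc/rsyslog.conf <<'EOF'
*.* @@loghost.example.com:514
EOF
service rsyslog restart
```

Do this on the rebuilt system, not the compromised one; a rooted box can lie to its own syslog daemon, which is exactly why remote copies matter.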
The very last release in the old legacy-support 5 series is RHEL 5.12.
If this box is still on 5.3, then the missing four years of security updates and fixes might be the bigger issue.
Quote:
- Re-install a fresh operating system with the latest Redhat RPMs plus re-install applications. However this effort is only effective if the source of breach can be identified and eliminated.
Are you paying Red Hat for the server subscriptions?
If so, contact Red Hat support.
Installing RHEL 7.0 might be the best option, IF it will install on your older hardware.
Hey guys... just want to say thank you for all your replies. I checked some of the secure logs in /var/log and saw that, starting a couple of weeks ago, someone seems to have been running some kind of brute-force attack against SSH on the Linux server.
..............
Feb 1 04:12:11 SEVR-LIN sshd[12056]: Failed password for root from 115.239.228.14 port 51611 ssh2
Feb 1 04:12:12 SEVR-LIN sshd[12058]: Failed password for root from 103.41.124.39 port 58250 ssh2
Feb 1 04:12:13 SEVR-LIN sshd[12056]: Failed password for root from 115.239.228.14 port 51611 ssh2
Feb 1 04:12:14 SEVR-LIN sshd[12058]: Failed password for root from 103.41.124.39 port 58250 ssh2
....................
There were so many failed password attempts until they finally got in... the timestamps match. The IT manager blocked all ports and is now only allowing SSH connections from those that need them... not from everyone around the world :|
We are re-evaluating our network and will re-install Linux and reload the software on it. I'm kind of new here and it has been an overwhelming experience, but I learn something every day... and this has been a big lesson for us all.
By the way... the IPs above are [seemingly] from somewhere halfway around the world... no bueno.
If you need to expose an SSH port widely, implementing fail2ban would help with that. It definitely sounds like you need centralized logging with alerting on failed password attempts. Requiring public-key auth and denying password auth would be good as well.
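A sketch of both suggestions. The sshd_config directives are standard OpenSSH; the fail2ban jail name and defaults vary by version, so treat that part as an assumption to check against your installed fail2ban:

```shell
# Deny root and password logins; allow keys only. Keep a second session
# open while testing, or you can lock yourself out.
cat >> /etc/ssh/sshd_config <<'EOF'
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
EOF
service sshd reload

# fail2ban: ban a source for an hour after 5 failed SSH logins
# (the jail name "sshd" matches recent fail2ban releases; older
# RHEL-era packages used "ssh-iptables").
cat > /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
bantime  = 3600
EOF
service fail2ban restart
```

Public-key-only auth makes the brute forcing you saw in the logs a dead end, and fail2ban keeps the noise out of your logs on top of that.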