Configuring and Hardening a New Server to Replace Compromised Machine
As detailed in this other thread, my server has been compromised. I need to (quickly) set up a new server and migrate my site to this new machine. Because my client/partner is affiliated with Amazon, he has determined that I should use Amazon Web Services to host the machine. I'm hoping to get some guidance here about how I can:
1) Install and configure this machine to be as secure as reasonably possible.
2) Migrate my website to the new server without bringing any compromised files, trojan horses, malware, etc. with it.
3) Automate software patching so that the web stack and any other packages are patched as quickly as possible without bringing my website down.
4) Establish audit and backup procedures to ensure the server keeps running safely and is backed up as frequently as possible.

I expect to create an Amazon EC2 instance based on one of the AMIs listed here: http://uec-images.ubuntu.com/releases/11.04/release/ Can anyone comment on the security of these instances or which instance might be most appropriate? I expect a 64-bit instance is in order and that the region is not critical. I would appreciate some guidance about how I may use the checksum to ensure the integrity of my installation. As far as I know, these are instantiated not from downloadable files but directly from the AMIs, which are hosted in the Amazon network.

Here are some questions posed to me by Unspawn and Noway2:

Are you buying one server or a cluster of linked servers? If you have multiple servers, will there be traffic between them?
I was in the process of expanding our server configuration to use two machines: one for the app server and one for MySQL. Given that we are moving to AWS, I expect we'll try to make use of their services, so I'm not entirely sure this question applies. We've been experiencing growth and want something scalable. Redundancy is also desired. EC2 lets one scale a given machine to a large computing capacity and I expect we'll gain a lot of headroom that way. On the other hand, we'll probably be using RDS.

Also, what services do you think you need to run (less is better, as in more secure, of course)?
The old system was a LAMP stack and it handled mail/spam filtering/anti-virus scanning, etc. That's about it really. 
I don't recall the exact requirements for PHP but I do know that we need the curl extension. It should be pretty trim otherwise. We can probably use Amazon Simple Email Service (SES) for email. Not sure about that. We have perhaps a dozen email accounts and the server needs to send email notifications using the PHP mail() command.

Are you comfortable with the command line or do you really need a GUI?
I'm quite comfy with the command line and am really new to this Ubuntu desktop I have. I've never been exposed to a GUI for a remote server and find the thought intriguing, but not at all necessary. I definitely want something like Cacti so I can keep track of my server load. When the server gets slow under load, I want to be able to figure out why.

What web-based apps do you need, such as phpMyAdmin?
I really like phpMyAdmin and hope to use that in the future. The old system had Postfix Admin on it, which was quite convenient for handling email. I like Webalizer and have been using it to track stats of our traffic, so I'd like to have that. Also, a web-based email interface (RoundCube or SquirrelMail or some such) will be important. I've also been considering Git or SVN for source management.

Do you need to run your own DNS or is the one from your registrar or ISP sufficient?
We are currently using a wildcard subdomain scheme so that we can offer a special portal page to some of our customers (e.g., http://somecustomer.ourdomain.com). Managing this through our ISP before was cumbersome and costly. I'm not sure at this moment what options we may have through our domain registrar. I could use some guidance here. Aside from the customer subdomains, we have mail, www, and I hope to also have a dev subdomain for development. Looking forward to learning about this. |
Alrighty. I have acquired the necessary Amazon AWS credentials so that I may instantiate my machine. I'm hoping to accomplish the following as soon as possible:
1) Instantiate a new server using one of the official Ubuntu images. Instantiating one of these is quite easy using the browser-based AWS console. I wonder if I should try to perform any package validations on this system once it is up and running?

2) Disable any non-critical services and ideally lock down the machine so that only the SSH service is permitted, and that only from a reasonable IP range that would cover me wherever I might need to work from. Note that in addition to iptables, AWS offers Security Groups. At the moment, I'm wondering a) how to get a list of running services and b) what is a reasonable IP range restriction for incoming traffic? Assuming my IP address is currently WWW.XXX.YYY.ZZZ, I know that I can do WWW.XXX.YYY.0/24 or perhaps even WWW.XXX.0.0/16.

3) Once the server is locked down so that I can reasonably expect SSH access only from me or someone at my ISP (a reasonable limitation, I think), I want to establish secure apt such that all packages installed will be checked for checksum and signature. Wondering a couple of things here: a) what my sources.list should contain, and b) which keys need to go into my keyring.

Once I've got these established, I'll have questions about installing software: Apache, PHP 5, MySQL, and possibly other tools. |
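On 2b), a quick sanity check of what those CIDR ranges actually cover: a /24 fixes the first three octets (256 addresses) while a /16 fixes only the first two (65,536 addresses). A small sketch, using a placeholder address:

```shell
# Derive candidate /24 and /16 ranges from a client IP.
# MYIP is a placeholder; substitute the address you actually connect from.
MYIP="76.173.201.55"

NET24="$(echo "$MYIP" | cut -d. -f1-3).0/24"    # covers 256 addresses
NET16="$(echo "$MYIP" | cut -d. -f1-2).0.0/16"  # covers 65,536 addresses

echo "$NET24"    # 76.173.201.0/24
echo "$NET16"    # 76.173.0.0/16
```

For 2a), `sudo netstat -tlnp` (or `ss -tlnp`) lists listening sockets along with the owning process, and `service --status-all` shows init-managed services.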
2 Attachment(s)
I've followed the instructions here. I have created a large EC2 compute instance based on one of the official Ubuntu 64-bit EBS images provided on the Ubuntu site. The setup process had me create a key pair using the Amazon console, and I downloaded the private key to my local machine. I'm not certain, but I believe that the public key is stored on the compute instance in /home/ubuntu/.ssh/authorized_keys. I just logged in using a command like this:
Code:
ssh -i ~/.ec2/MyPrivateKey.pem ubuntu@ec2-WWW-XXX-YYY-ZZZ.compute-1.amazonaws.com

The compute instance has a "security policy" enforced by Amazon that only permits inbound SSH traffic from 76.173.0.0/16. I can modify this security policy at any time to permit additional inbound requests on other ports. I've run a command to get a list of installed packages and I've attached the output (see packages.txt):

Code:
dpkg -l > packages.txt

I've also captured the list of running processes (see processes.txt):

Code:
ps -eo euser,ruser,suser,fuser,f,comm,label > processes.txt

Finally, the apt keyring on this instance is at:

Code:
/etc/apt/trusted.gpg |
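One way to gain a little confidence in that key pair: derive the public half from the downloaded private key locally and compare its fingerprint against the key the instance holds. A sketch, assuming the .pem path used above and that ssh-keygen is installed locally:

```shell
# Derive the public key from the downloaded private key...
ssh-keygen -y -f ~/.ec2/MyPrivateKey.pem > /tmp/mykey.pub

# ...and print its fingerprint.
ssh-keygen -lf /tmp/mykey.pub

# On the instance, fingerprint the installed key the same way and compare:
#   ssh-keygen -lf ~/.ssh/authorized_keys
```

If the two fingerprints differ, the key on the instance is not the one you generated.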
OK so I've re-read my emails from Noway2 and Unspawn and I've re-read the articles I've managed to find and have a better understanding of the keys that are in my default keyring.
I believe (and someone please correct me if I'm wrong here) that I have located the keyserver for Ubuntu at http://keyserver.ubuntu.com/ Interestingly, one cannot access this keyserver using HTTPS. Perhaps I'm misunderstanding things, but that seems like quite an oversight to me. I've tried sudo apt-key finger, which yields the fingerprints of the keys in my apt keyring:

Code:
prompt:~$ sudo apt-key finger

I then fetched the key from the keyserver and imported it to compare:

Code:
prompt:~/$ GET "http://keyserver.ubuntu.com:11371/pks/lookup?search=0x437D05B5&fingerprint=on&op=get" | gpg --import

I've also checked the other key (FBB75451) both visually and using the command-line tools and gotten identical results. A few things bother me:

* I'm not entirely sure I'm doing the right things here or interpreting the output from the keyserver site or these commands correctly. Could someone please give me a hell yeah or a hell no?
* I'm relying on the output of certain command-line tools on this machine. If the binaries are compromised, these verification steps are not particularly meaningful. The keys and the software came pre-installed on an EC2 machine image that was listed on the Ubuntu site as an official release. That sounds fairly trustworthy to me. Thoughts?
* Ultimately I don't know any of the key signers directly and, as far as I can tell, I could upload my own key to keyserver.ubuntu.com, then generate 39 other keys and sign my key with all of them.

Anyways, I could really use some outside input here. I think I'm understanding the key relationships and using the commands as they are intended, but would appreciate some feedback. Also, before installing anything at all or proceeding another inch, I'd like to make sure the installed packages are all legit and not compromised. Advice much needed! |
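When comparing fingerprints by eye between `apt-key finger` and the keyserver page, spacing and case differences invite mistakes. A tiny sketch that normalizes both strings before comparing; the fingerprint shown is the widely published one for 437D05B5, but verify it independently rather than trusting this post:

```shell
# Fingerprint as apt-key finger prints it (grouped four hex digits at a time).
fp_apt="630C 4CF5 5D5B B9E5 1C77 FB05 40BB EA73 437D 05B5"
# Fingerprint as the keyserver page shows it.
fp_server="630C4CF55D5BB9E51C77FB0540BBEA73437D05B5"

# Strip whitespace and uppercase before comparing.
norm() { echo "$1" | tr -d '[:space:]' | tr 'a-f' 'A-F'; }

if [ "$(norm "$fp_apt")" = "$(norm "$fp_server")" ]; then
    echo "fingerprints match"
else
    echo "MISMATCH - do not trust this key"
fi
```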
Compromise notes
Before I get into hardening I would like to summarize things for this particular case:
- The OS (CentOS 5.0) was never updated to "current" (5.6), so you missed out on enhancements but most of all bug and security fixes.
- Certain services were not installed as packages but compiled from source, making update checking harder.
- Services were exposed to the 'net that should not have been: NTP, MySQL and dccifd.
- Of services that were exposed, like FTP, SSH, SMTP, HTTP and DNS, it is unclear (service configuration, firewall, fail2ban) how they were hardened.
- The default logrotate configuration wiped out logging that could have aided in learning the compromise point(s) of entry.
- No separate, autonomous integrity checking was available.
- No regular auditing was done and no alerting was in use.
- No off-site backups exist (AFAIK). |
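On the logrotate point above: stock configurations typically keep only four weekly rotations, which is how the forensic evidence got rotated away. A sketch of a longer-retention override; the file name and values are illustrative, not from the compromised box:

```
# /etc/logrotate.d/long-retention (illustrative)
/var/log/auth.log /var/log/syslog {
    weekly
    rotate 52         # keep a year of history instead of ~4 weeks
    compress
    delaycompress
    missingok
    notifempty
}
```

Shipping logs to a separate syslog host protects them even when the box itself is compromised.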
* In your case Security Groups work for you as they operate in a "default DENY policy" kind of way: only traffic you ALLOW will reach your instances (1|2|3), so for now you could confine access to them from only your management IP range (but watch ranges if your ISP's DHCP changes leases often). Note that using Security Groups should not mean you don't run a firewall on your systems: layering measures is good (wrt single point of failure), and it allows you to restrict access (services, 'net ranges, fail2ban) and "guide" traffic in certain ways, like rate limiting. For now the only 'net-facing service that should be enabled is SSH. Ensure you have access to an unprivileged user account (pubkey auth only!) that you can 'sudo' with, restrict access using sshd_config AllowUsers and AllowGroups, and deny root access over the 'net.

* Wrt "known good sources": in your case the instance you use is signed by Ubuntu with their 'gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0x7DB87C81' key. Having a good signature on the image is good enough evidence the image was created and shipped by Ubuntu, but getting acquainted with verifying package contents inside your live instance is something you should practice regardless. Other than that, I would at this time suggest against pouring more effort into verifying image integrity.

Wrt establishing a baseline, I like to install GNU/Tiger. Running it without having done any configuration will provide you with more than a few leads to follow up on. Also, an instance is easily destroyed and created, so in essence your backup should only contain the differences. Once you have configured the system (accounts, password policy, aging, access to services like cron), install Aide or Samhain and ensure a copy of the binaries, configuration and database reside on another server. Same goes for your backups, and ensure you automate the process.
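A minimal sshd_config fragment implementing the SSH advice above (a sketch; the AllowUsers value is the stock EC2 account from this thread, so adjust it to your own unprivileged user):

```
# /etc/ssh/sshd_config (relevant directives only)
Protocol 2
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers ubuntu
# or restrict by group instead: AllowGroups sshusers
```

Reload with `sudo /etc/init.d/ssh reload` and keep your current session open while testing, so a typo can't lock you out of the instance.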
Join the Ubuntu security mailing list or otherwise ensure you get notified stat of any security updates. Ensure rsyslog logs what you need to see (this will require tweaking), ponder usage of a separate syslog server (or mirror logs elsewhere) and set up Logwatch or OSSEC or equivalent to email you reports regularly.

While I think it's best to read documentation before installing your OS, I can understand you're eager to return to a "business as usual" situation. Please read:
- Ubuntu 10.04 LTS docs
- Ubuntu docs/security
- Securing Debian Manual
- Ubuntu/AppArmor, wiki.apparmor.net (SSH, protecting SSL and merchant information?)
(- and maybe http://rkhunter.wiki.sourceforge.net/SECREF?f=print)

While GNU/Tiger is good for testing defaults from a local point of view, you should also test the system from remote each time you complete a phase or make a change that affects 'net-facing services. Use what you are familiar with or are willing to invest time in: OpenVAS, Nessus, etc. Please note running 'nmap' may be fun and it may be sufficient at this stage, but it will no longer be after you have enabled your full web stack. At this point you should also investigate:
- CIS Debian Benchmark, CIS MySQL Benchmark and CIS Apache Benchmark
- Suhosin PHP hardening (in Ubuntu/Universe)
- OWASP Top Ten 2010, OWASP ModSecurity Core Rule Set Project, OWASP Testing Guide.
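Wrt getting security updates applied promptly (point 3 of the original wish list), Ubuntu's unattended-upgrades package can install security fixes automatically. A sketch of the two apt conf fragments involved on 10.04 "lucid"; the values are illustrative and the mail address hypothetical:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

# /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Allowed-Origins {
    "Ubuntu lucid-security";
};
Unattended-Upgrade::Mail "admin@example.com";
```

Restricting Allowed-Origins to the security pocket keeps automatic runs from pulling in feature updates that could disturb the web stack.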
|
2 Attachment(s)
Thanks for the excellent information. I hope you can answer some quick questions:
1) This machine is currently assigned to a Security Group that only permits SSH traffic for incoming connections from 76.173.0.0/16. nmap -PN only detects SSH and no other services. I can block incoming SSH access entirely and also alter the IP range by editing the Security Group via the AWS console. In your opinion, does this sound like adequate protection against unauthorized SSH access, or should I also use the 'bastion' technique described in your second link?

2) Not sure what you mean by an unprivileged user that can also sudo. The configuration out of the box does not permit login as root and instead has the user ubuntu, who can log in via cert and who can sudo without using any password, ostensibly because of this line in /etc/sudoers:

Code:
# ubuntu user is default user in ec2-images.

2a) I've been looking deeper into the no-root-login situation and noticed that the file /root/.ssh/authorized_keys does in fact contain a key, but it also contains some text which does appear to prevent root login. Does this look safe or should I just remove the key from the root authorized_keys file? Does the attached sshd_config look OK?

Code:
# in /root/.ssh/authorized_keys

4) Assuming apt is performing the signature verification I've been reading so much about, does this mean that it accepts only signatures made with keys in my apt keyring, or does it also accept the more extensive body of signatures that have some chain-of-trust relationship to the keys in my keyring? |
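For reference on 2) and 2a), the stock Ubuntu EC2 images of this era shipped roughly the following (reconstructed from memory of those images; check your own /etc/sudoers and /root/.ssh/authorized_keys rather than trusting this verbatim):

```
# /etc/sudoers
# ubuntu user is default user in ec2-images.
ubuntu  ALL=(ALL) NOPASSWD:ALL

# /root/.ssh/authorized_keys -- the command= prefix runs instead of a
# shell, printing a notice and closing the session, which is what
# blocks direct root logins even though a valid key is present:
command="echo 'Please login as the ubuntu user rather than root.';echo;sleep 10" ssh-rsa AAAA... mykeypair
```

Leaving the key in place with its command= prefix is the intended setup; removing the prefix (not the key) is what would re-enable root logins.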
Thanks for the additional info and advice.
Those caveats about the universe repository don't make me feel very comfortable at all. I'll need to take stock of the packages I expect to install and see if I can rule out the universe repository. I'm wondering if apt-get install/update/upgrade will fail if any unsigned packages or signature failures are encountered. My preference would be that it fail with a fairly blatant warning. A hard/noisy failure would probably go a long way toward helping me keep my new machine clean. While I'm fairly good at sorting out missing packages and installing things I need, my knowledge of the inner workings of apt is unfortunately limited. |
So, with the non-LTS 11.04, you'll be looking at rebuilding everything in late 2012, but with 10.04 LTS that date pushes out to mid-2015. I know all this rebuilding stuff is fun, but you can have too much of a good thing! (BTW: as you are setting up this system, take copious notes; when rebuild time comes around, it will be non-obvious what you did this time, and having your notes to refer to will be a help, even if you do things differently next time.) Again, by the way, Rackspace has a nice, gentle amble through Cloud Servers here, which, while not focused on security, is a nice, easy read. And their only Ubuntu cloud offering is 10.04 LTS...
Thanks for the input, Salasi. It comes as a total surprise to me that 10.04 will be supported longer than 11.04. There have been so many other things to consider that one hadn't even crossed my mind. Do you have any links you can refer me to read more about this? As for rackspace, I've used their servers for some projects in the past. That link looks really helpful.
At the moment, I'm trying to determine a) if I really need the universe repository to get my server back up and b) will apt reject any packages that are either signed with untrusted signatures or not signed at all. |
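On b): apt-get does distinguish signed from unsigned packages. Interactively it prompts ("Install these packages without verification?"), and with -y it aborts unless --allow-unauthenticated is explicitly given. To pin that behavior down in configuration (the file name here is my choice; the option is a standard apt setting whose default is already false):

```
# /etc/apt/apt.conf.d/99require-auth
# Refuse unauthenticated packages instead of quietly allowing them.
APT::Get::AllowUnauthenticated "false";
```

With this in place, a signature failure on `apt-get install` surfaces as "WARNING: The following packages cannot be authenticated!" and a non-zero exit under -y; roughly the hard, noisy failure being hoped for here.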
2 Attachment(s)
I found a page in the Ubuntu Wiki which appears to corroborate Salasi's point that 11.04 runs out of steam in a couple of years whereas 10.04 server is around for the long haul.
I created a 10.04 LTS instance at Amazon and took note of the installed packages and running processes. Although 10.04 and 11.04 are close in the number of packages installed (388 and 389, respectively), there are differences. More notably, 10.04 has 94 running processes whereas 11.04 has only 63. Although Unspawn has approved the package and process list for 11.04, I would certainly hate to have to reinstall everything in just a year when support gets dropped for 11.04. My instinct tells me that going with 10.04 is probably the better idea. I've attached the process list and package list for 10.04 to this post in the forlorn hope that I might get some fairly prompt feedback on how clean the procs/pkgs look. |
Package list for LTS looks good IMO. BTW, while you're at it, could you please provide an account of what you've done so far wrt configuration and hardening aspects? TIA.
|