
sneakyimp 07-14-2011 02:06 PM

Configuring and Hardening a New Server to Replace Compromised Machine
 
As detailed in this other thread, my server has been compromised. I need to (quickly) set up a new server and migrate my site to this new machine. Because my client/partner is affiliated with Amazon, he has determined that I should use Amazon Web Services to host the machine. I'm hoping to get some guidance here about how I can:
1) Install and configure this machine to be as secure as reasonably possible.
2) Migrate my website to the new server without bringing any compromised files, trojan horses, malware, etc. with it.
3) Automate software patching so that the web stack and any other packages are patched as quickly as possible without bringing my website down.
4) Establish audit and backup procedures to ensure the server keeps running safely and is backed up as frequently as possible.

I expect to create an Amazon EC2 instance based on one of the AMIs listed here:
http://uec-images.ubuntu.com/releases/11.04/release/
Can anyone comment on the security of these instances or which instance might be most appropriate? I expect a 64-bit instance is in order and that the region is not critical. I would appreciate some guidance about how I may use the checksum to ensure the integrity of my installation. As far as I know, these are instantiated not from downloadable files but directly from the AMIs, which are hosted in the Amazon network.
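
From what I've read so far, if I were verifying a downloaded image file, the check would go something like this (the filename is a placeholder, and I'm assuming the release page publishes SHA256SUMS with a detached SHA256SUMS.gpg signature; please correct me if the AMI case differs):
Code:

gpg --verify SHA256SUMS.gpg SHA256SUMS                # check Ubuntu's signature on the checksum file (needs their key in your gpg keyring)
grep the-image-filename SHA256SUMS | sha256sum -c -   # compare the image against its listed checksum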

Here are some questions posed to me by unSpawn and Noway2:
Are you buying one server or a cluster of linked servers? If you have multiple servers, will there be traffic between them?
I was in the process of expanding our server configuration to use two machines: 1 for the app server and 1 for MySQL. Given that we are moving to AWS, I expect we'll try and make use of their services so I'm not entirely sure this question applies. We've been experiencing growth and want something scalable. Redundancy is also desired. EC2 lets one scale a given machine to a large computing capacity and I expect we'll gain a lot of headroom that way. On the other hand, we'll probably be using RDS.

Also, what services do you think you need to run (less is better, as in more secure, of course)?
The old system was a LAMP stack and it handled mail/spam filtering/anti virus scanning, etc. That's about it really. I don't recall the exact requirements for PHP but I do know that we need the curl extension. It should be pretty trim otherwise. We can probably use Amazon Simple Email Service (SES) for email. Not sure about that. We have perhaps a dozen email accounts and the server needs to send email notifications using the PHP mail() command.

Are you comfortable with command line or do you really need a GUI?
I'm quite comfy with the command line and am really new to this Ubuntu desktop I have. I've never been exposed to a GUI for a remote server and find this thought intriguing, but not at all necessary. I definitely want something like cacti so I can keep track of my server load. When the server gets slow under load, I want to be able to figure out why.

What web-based apps do you need, such as phpMyAdmin?
I really like phpMyAdmin and hope to use that in the future. The old system had postfix admin on it which was quite convenient for handling email. I like webalizer and have been using it to track stats of our traffic so I'd like to have that. Also, a web-based email interface (RoundCube or Squirrelmail or some such) will be important. I've also been considering GIT or SVN for source management.

Do you need to run your own DNS, or is the one from your registrar or ISP sufficient?
We are currently using a wildcard subdomain scheme so that we can offer a special portal page to some of our customers (e.g., http://somecustomer.ourdomain.com). Managing this through our ISP before was cumbersome and costly. I'm not sure at this moment what options we may have through our domain registrar. I could use some guidance here. Aside from the customer subdomains, we have mail, www and I hope to also have a dev subdomain for development.
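
For reference, my understanding is that the wildcard itself is a single record in a BIND-style zone file, roughly like this (the names and address are placeholders):
Code:

; hypothetical zone fragment for wildcard subdomains
*.ourdomain.com.      IN  A  192.0.2.10   ; catches somecustomer.ourdomain.com etc.
www.ourdomain.com.    IN  A  192.0.2.10   ; explicit records take precedence over the wildcard
mail.ourdomain.com.   IN  A  192.0.2.10
dev.ourdomain.com.    IN  A  192.0.2.10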

Looking forward to learning about this.

sneakyimp 07-18-2011 03:02 PM

Alrighty. I have acquired the necessary Amazon AWS credentials so that I may instantiate my machine. I'm hoping to accomplish the following as soon as possible:
1) Instantiate a new server using one of the Official Ubuntu Images.
Instantiating one of these is quite easy using the browser-based AWS console. I wonder if I should try to perform any package validations on this system once it is up and running?

2) Disable any non-critical services and ideally lock down the machine so that only the SSH service is permitted, and that only to a reasonable IP range that would cover me wherever I might need to work from. Note that in addition to iptables, AWS offers security groups. At the moment, I'm wondering a) how to get a list of running services and b) what is a reasonable IP range restriction for incoming traffic? Assuming my IP address is currently WWW.XXX.YYY.ZZZ, I know that I can do WWW.XXX.YYY.0/24 or perhaps even WWW.XXX.0.0/16.
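
For (a), my plan so far is something like the following (corrections welcome):
Code:

sudo netstat -tulpn     # listening TCP/UDP sockets and the processes that own them
service --status-all    # SysV service status (the output is a bit noisy on Ubuntu)
ps -ef                  # full process listing, for anything the above misses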

3) Once the server is locked down so that I can reasonably expect SSH access only from me or someone at my ISP (a reasonable limitation, I think), I want to establish secure apt such that all installed packages are checked for checksum and signature. I'm wondering a couple of things here: a) what my sources.list should contain, and b) which keys need to go into my keyring.

Once I've got these established, I'll have questions about installing software: Apache, PHP 5, MySQL, and possibly other tools.

sneakyimp 07-19-2011 11:29 AM

2 Attachment(s)
I've followed the instructions here. I have created a large EC2 compute instance based on one of the official Ubuntu 64-bit EBS images provided on the Ubuntu site. The setup process had me create a key pair using the Amazon console, and I downloaded the private key to my local machine. I'm not certain, but I believe that the public key is stored on the compute instance in /home/ubuntu/.ssh/authorized_keys. I just logged in using a command like this:
Code:

ssh -i ~/.ec2/MyPrivateKey.pem ubuntu@ec2-WWW-XXX-YYY-ZZZ.compute-1.amazonaws.com

The compute instance has a "security policy" enforced by Amazon that only permits inbound traffic for SSH from 76.173.0.0/16. I can modify this security policy at any time to permit additional inbound requests on other ports.

I've run a command to get a list of installed packages and I've attached the output (see packages.txt):
Code:

dpkg -l > packages.txt
I've run a command to get a list of all running processes and attached the output (see processes.txt):
Code:

ps -eo euser,ruser,suser,fuser,f,comm,label > processes.txt
According to this page, the apt keyring is kept in the file /etc/apt/trusted.gpg. The command "sudo apt-key list" yields this output:
Code:

/etc/apt/trusted.gpg
--------------------
pub  1024D/437D05B5 2004-09-12
uid                  Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>
sub  2048g/79164387 2004-09-12

pub  1024D/FBB75451 2004-12-30
uid                  Ubuntu CD Image Automatic Signing Key <cdimage@ubuntu.com>

Obviously, I'm interested in finding out more about these keys. I'll continue reviewing what I've been reading about apt-secure and the validation of keys. In the meantime, any tips about how to proceed with my hardening would be most appreciated.

sneakyimp 07-19-2011 05:29 PM

OK, so I've re-read my emails from Noway2 and unSpawn, re-read the articles I've managed to find, and now have a better understanding of the keys that are in my default keyring.

I believe (and someone please correct me if I'm wrong here) that I have located the keyserver for Ubuntu at http://keyserver.ubuntu.com/
Interestingly, one cannot access this keyserver using HTTPS. Perhaps I'm misunderstanding things, but that seems like quite an oversight to me.

I've tried sudo apt-key finger which yields the fingerprints of the keys in my apt keyring:
Code:

prompt:~$ sudo apt-key finger
/etc/apt/trusted.gpg
--------------------
pub  1024D/437D05B5 2004-09-12
      Key fingerprint = 6302 39CC 130E 1A7F D81A  27B1 4097 6EAF 437D 05B5
uid                  Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>
sub  2048g/79164387 2004-09-12

pub  1024D/FBB75451 2004-12-30
      Key fingerprint = C598 6B4F 1257 FFA8 6632  CBA7 4618 1433 FBB7 5451
uid                  Ubuntu CD Image Automatic Signing Key <cdimage@ubuntu.com>

I have located a verbose detail page on the first key, 437D05B5, here. Unless I'm misreading this page, it would appear that this particular key has been self-signed and also directly signed by many people -- none of whom I know personally. I have visually/manually compared the fingerprint reported by the command line against the fingerprint displayed on the Ubuntu site, and they match. I've also tried the method recommended in this blog post (linked from this article): importing the key into my gpg keyring in order to "verify the key's integrity":
Code:

prompt:~/$ GET "http://keyserver.ubuntu.com:11371/pks/lookup?search=0x437D05B5&fingerprint=on&op=get" | gpg --import
gpg: keyring `/home/jaith/.gnupg/secring.gpg' created
gpg: key 437D05B5: public key "Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>" imported
gpg: Total number processed: 1
gpg:              imported: 1
gpg: no ultimately trusted keys found

prompt:~/prompt$ gpg --check-sigs --fingerprint 437D05B5
pub  1024D/437D05B5 2004-09-12
      Key fingerprint = 6302 39CC 130E 1A7F D81A  27B1 4097 6EAF 437D 05B5
uid                  Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>
sig!3        437D05B5 2004-09-12  Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>
sub  2048g/79164387 2004-09-12
sig!        437D05B5 2004-09-12  Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>

39 signatures not checked due to missing keys

Unless I misunderstand, those 39 signatures are all the folks who signed the key, and because I don't know any of them, I cannot verify this key with any existing chain of trust I have. I can choose either to trust it blindly (an option I am considering!) or I can try to get in touch with one of the other people, or try to establish some other six-degrees-of-Kevin-Bacon connection wherein I tie myself to one of these people.

I've also checked the other key (FBB75451) both visually and using the command line tools and I've gotten identical results. A few things bother me:
* I'm not entirely sure I'm doing the right things here or interpreting the output from the keyserver site or these commands correctly. Could someone please give me a hell yeah or a hell no?
* I'm relying on the output of certain command-line functions on this machine. If the binaries are compromised, these verification steps are not particularly meaningful. The keys and the software came pre-installed on an EC2 machine image that was listed on the Ubuntu site as an official release. That sounds fairly trustworthy to me. Thoughts?
* Ultimately I don't know any of the key signers directly and as far as I can tell, I can upload my own key to keyserver.ubuntu.com and then generate 39 other keys and sign my key with all those other keys.

Anyways, I could really use some outside input here. I think I'm understanding the key relationships and using the commands as they are intended, but would appreciate some feedback.

Also, before installing anything at all or proceeding another inch, I'd like to make sure the installed packages are all legit and not compromised. Advice much needed!
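
One approach I'm considering, assuming debsums itself can be trusted once installed over secure apt, is:
Code:

sudo apt-get install debsums
sudo debsums -s     # -s: silent, print only files whose md5sums don't match their package
# caveat: if the image itself were compromised, debsums and dpkg output couldn't be trusted either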

unSpawn 07-19-2011 05:45 PM

Compromise notes
 
Before I get into hardening I would like to summarize things for this particular case:
- The OS (Centos-5.0) was never updated to "current" (5.6) so you missed out on enhancements but most of all bug and security fixes.
- Certain services were not installed as package but compiled from source, making update checking harder.
- Services were exposed to the 'net that should not have been: NTP, MySQL and dccifd.
- For services that were exposed, like FTP, SSH, SMTP, HTTP and DNS, it is unclear how they were hardened (service configuration, firewall, fail2ban).
- Default logrotate configuration wiped out logging that could have aided in learning the compromise point(s) of entry.
- No separate, autonomous integrity checking was available.
- No regular auditing was done and no alerting was in use.
- No off-site backups exist (AFAIK).

sneakyimp 07-19-2011 05:58 PM

Quote:

Originally Posted by unSpawn (Post 4419525)
Before I get into hardening I would like to summarize things for this particular case:
- The OS (Centos-5.0) was never updated to "current" (5.6) so you missed out on enhancements but most of all bug and security fixes.
- Certain services were not installed as package but compiled from source, making update checking harder.
- Services were exposed to the 'net that should not have been: NTP, MySQL and dccifd.
- For services that were exposed, like FTP, SSH, SMTP, HTTP and DNS, it is unclear how they were hardened (service configuration, firewall, fail2ban).
- Default logrotate configuration wiped out logging that could have aided in learning the compromise point(s) of entry.
- No separate, autonomous integrity checking was available.
- No regular auditing was done and no alerting was in use.
- No off-site backups exist (AFAIK).

I humbly acknowledge these failures and assert it is my ardent desire to avoid repeating these mistakes. I am anxiously hankering for any guidance you might provide to address these problems and pledge to pounce on any tips you may give me. If there's anything I can do in order to develop guides or how-to documents that may be of use to you or the community here, please let me know what I can do.

unSpawn 07-19-2011 07:25 PM

Quote:

Originally Posted by sneakyimp (Post 4415020)
I'm hoping to get some guidance here about how I can:
1) Install and configure this machine to be as secure as reasonably possible.

When using known good sources, installing an OS is considered safe until you connect it to a network. Establishing a baseline, setting up update, auditing and backup procedures, and hardening of 'net-facing services should happen prior to that.
* In your case Security Groups work for you as they operate in a "default DENY policy" kind of way: only the traffic you ALLOW will reach your instances (1|2|3), so for now you could confine access to them to only your management IP range (but watch ranges if your ISP's DHCP changes leases often). Note that using Security Groups does not mean you shouldn't also run a firewall on your systems: layering measures is good (wrt single points of failure), and a host firewall allows you to restrict access (services, 'net ranges, fail2ban) and "guide" traffic in certain ways, like rate limiting. For now the only 'net-facing service that should be enabled is SSH. Ensure you have access to an unprivileged user account (pubkey auth only!) that you can 'sudo' with, restrict access using the sshd_config AllowUsers and AllowGroups directives, and deny root access over the 'net.
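
For example, a minimal sketch of those sshd_config directives ('admin' and 'sshusers' are placeholder names, pick your own):
Code:

# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no    # pubkey auth only
PubkeyAuthentication yes
AllowUsers admin             # a login must satisfy both AllowUsers and AllowGroups if both are set
AllowGroups sshusers
# reload afterwards: sudo /etc/init.d/ssh reload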
* Wrt "known good sources": in your case the instance you use is signed by Ubuntu with their 'gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0x7DB87C81' key. Having a good signature on the image is good enough evidence the image was created and shipped by Ubuntu but getting acquainted with verifying package contents inside your live instance is something you should practice regardless. Other than that I would at this time suggest against pouring more effort than that into verifying image integrity.
* Wrt establishing a baseline I like to install GNU/Tiger. Running it w/o having done any configuration will provide you with more than a few leads to follow up on. Also, an instance is easily destroyed and created, so in essence your backup should only contain the differences. Once you have configured the system (accounts, password policy, aging, access to services like cron), install Aide or Samhain and ensure a copy of the binaries, configuration and database reside on another server. The same goes for your backups, and ensure you automate the process. Join the Ubuntu security mailing list or otherwise ensure you get notified immediately of any security updates. Ensure rsyslog logs what you need to see (this will require tweaking), ponder usage of a separate syslog server (or mirror logs elsewhere) and set up Logwatch or OSSEC or equivalent to email you reports regularly.
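
As a sketch, getting Aide to a first baseline on Ubuntu looks roughly like this (paths per the Ubuntu packaging):
Code:

sudo apt-get install aide
sudo aideinit                                     # build the initial database
sudo cp /var/lib/aide/aide.db.new /var/lib/aide/aide.db
sudo aide --config /etc/aide/aide.conf --check    # subsequent integrity checks
# then copy the aide binary, config and database off to another server, as noted above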

While I think it's best to read documentation before installing your OS I can understand you're eager to return to a "business as usual" situation. Please read:
- Ubuntu 10.04 LTS docs
- Ubuntu docs/security
- Securing Debian Manual
- Ubuntu/AppArmor, wiki.apparmor.net, (SSH, protecting SSL and merchant information?)
(- and maybe http://rkhunter.wiki.sourceforge.net/SECREF?f=print)

While GNU/Tiger is good for testing defaults from a local point of view you should also test the system from remote each time you complete a phase or make a change that affects 'net-facing services. Use what you are familiar with or are willing to invest time in: OpenVAS, Nessus, etc, etc. Please note running 'nmap' may be fun and it may be sufficient at this stage but it will no longer be after you have enabled your full web stack. At this point you should also investigate:
- CIS Debian benchmark, CIS MySQL Benchmark and CIS Apache Benchmark

- Suhosin PHP hardening (in Ubuntu/Universe)
- OWASP Top Ten 2010, OWASP ModSecurity Core Rule Set Project, OWASP Testing Guide.
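
As a rough example of the remote testing mentioned above, a quick scan from a machine outside your allowed Security Group range might look like this (the hostname is a placeholder):
Code:

sudo nmap -PN -sS -p 1-65535 ec2-WWW-XXX-YYY-ZZZ.compute-1.amazonaws.com
# -PN: skip ping discovery (EC2 filters ICMP); -sS: SYN scan; only 22/tcp should show open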


Quote:

Originally Posted by sneakyimp (Post 4415020)
2) Migrate my website to the new server without bringing any compromised files, trojan horses, malware, etc. with it.

While you harden and test your server, fetch your user files, cron jobs, mail spools, /etc configuration, MySQL backups and whatnot over a secure connection to a known good machine, preferably not in the same network, and store them there for reference. Most configuration on the new machine you will want to do from scratch. In those cases where you can't, you should visually inspect human-readable file contents to ensure nothing untoward enters the new system. Bringing over binaries should be thought of as strictly forbidden. Data that cannot be visually inspected may be exportable. If not, it could be run in a safe (virtualized?) environment.
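
A sketch of fetching those files to a known good machine (run from the good machine; the key, user, hostname and paths are placeholders):
Code:

rsync -av -e "ssh -i ~/.ssh/old-server-key" olduser@old.example.com:/etc/  /forensics/old-server/etc/
rsync -av -e "ssh -i ~/.ssh/old-server-key" olduser@old.example.com:/home/ /forensics/old-server/home/
# inspect everything there first; bring no binaries across to the new system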


Quote:

Originally Posted by sneakyimp (Post 4415020)
3) Automate software patching so that it patches the web stack and any other packages as quick as possible without bringing my website down.

While automated updating has its benefits, I would suggest running a (virtualized?) mirror of the server. Having a "staging area" ensures you can roll out (manually or automagically) and roll back any changes that prove detrimental, lets you test enhancements, and provides you with a backup environment.


Quote:

Originally Posted by sneakyimp (Post 4415020)
We can probably use Amazon Simple Email Service (SES) for email.

Off-loading stress to a scalable system, if cost-effective, seems a good way to go...


Quote:

Originally Posted by sneakyimp (Post 4415020)
I definitely want something like cacti so I can keep track of my server load. When the server gets slow under load, I want to be able to figure out why.

Running management tools is definitely good. Also note there are EC2 management tools. If not appropriate now then for future reference: Cloudmin, Eucalyptus, openQRM, nimbul, OpenNebula, Scalr.


Quote:

Originally Posted by sneakyimp (Post 4415020)
We are currently using a wildcard subdomain scheme so that we can offer a special portal page to some of our customers (..). Managing this through our ISP before was cumbersome and costly. I'm not sure at this moment what options we may have through our domain registrar. I could use some guidance here.

// I hope somebody with expert NS knowledge will jump in here.


Quote:

Originally Posted by sneakyimp (Post 4418408)
Wondering a couple of things here: a) What my sources.list should contain,

I'm not an Ubuntu man so I'd say "whatever Ubuntu officially supports for its LTS release". Maybe Noway2 could add some pointers?


Quote:

Originally Posted by sneakyimp (Post 4419164)
I've run a command to get a list of installed packages and I've attached the output (see packages.txt):
Code:

dpkg -l > packages.txt
I've run a command to get a list of all running processes and attached the output (see processes.txt):
Code:

ps -eo euser,ruser,suser,fuser,f,comm,label > processes.txt

Packages look good: lean, not much that could be removed (minor packages like wireless tools but they may be dependencies). Same for processes. I only see DHCP client and sshd enabled.


Quote:

Originally Posted by sneakyimp (Post 4419164)
* I'm not entirely sure I'm doing the right things here or interpreting the output from the keyserver site or these commands correctly. Could someone please give me a hell yeah or a hell no? * I'm relying on the output of certain command-line functions on this machine. If the binaries are compromised, these verification steps are not particularly meaningful. The keys and the software came pre-installed on an EC2 machine image that was listed on the Ubuntu site as an official release. That sounds fairly trustworthy to me. Thoughts?

At this time I would suggest against pouring more effort than what I wrote above into verifying image integrity in this way.

unSpawn 07-19-2011 07:29 PM

Quote:

Originally Posted by sneakyimp (Post 4419534)
I humbly acknowledge these failures and assert it is my ardent desire to avoid repeating these mistakes. I am anxiously hankering for any guidance you might provide to address these problems and pledge to pounce on any tips you may give me.

The only reason I wrote the summary is for us to use as a guideline, and there's really no need to say more than that about it: I know you've been hit more than enough by the compromise and the aftermath.

sneakyimp 07-19-2011 08:48 PM

2 Attachment(s)
Thanks for the excellent information. I hope you can answer some quick questions:
1) This machine is currently assigned to a Security Group that only permits SSH traffic for incoming connections from 76.173.0.0/16. nmap -PN only detects SSH and no other services. I can block incoming SSH access entirely and also alter the IP range by editing the Security Group via the AWS console. In your opinion, does this sound like adequate protection against unauthorized SSH access, or should I also use the 'bastion' technique described in your second link?

2) Not sure what you mean by an unprivileged user that can also sudo. The configuration out of the box does not permit login as root; instead it has the user ubuntu, who can log in via the key pair and can sudo without any password, ostensibly because of this line in /etc/sudoers:
Code:

# ubuntu user is default user in ec2-images. 
# It needs passwordless sudo functionality.
ubuntu  ALL=(ALL) NOPASSWD:ALL

Do you reckon it's safe to keep the ubuntu user or should I create a different user with those same privileges? Is sudo ability without a password acceptable?

2a) I've been looking deeper into the no-root-login situation and noticed that the file /root/.ssh/authorized_keys does in fact contain a key, but it also contains a forced command which appears to prevent root login. Does this look safe, or should I just remove the key from the root authorized_keys file? Does the attached sshd_config look OK?
Code:

# in /root/.ssh/authorized_keys
command="echo 'Please login as the user \"ubuntu\" rather than the user \"root\".';echo;sleep 10" ssh-rsa AAAARRGGB_SAME_EXACT_KEY_AS_IN_UBUNTU_AUTHORIZED_KEYS MyAmazonCertName

3) Assuming we are comfortable with the integrity of the image (are we?), I expect to start installing things shortly using apt. Before I do so, I believe we should take a look at sources.list (see attached). Big question: If I run apt-get install BLAHBLAH, can I expect apt to verify the checksums and signatures on any installed packages, or do I need to install some other version of apt that implements these security checks?

4) Assuming apt is performing the signature verification I've been reading so much about, does this mean that it accepts only signatures made with keys in my apt keyring, or does it also accept the more extensive body of signatures that have some chain-of-trust relationship to the keys in my keyring?

unSpawn 07-20-2011 01:15 AM

Quote:

Originally Posted by sneakyimp (Post 4419654)
1) This machine is currently assigned to a Security Group that only permits SSH traffic for incoming connections from 76.173.0.0/16. nmap -PN only detects SSH and no other services. I can block incoming SSH access entirely and also alter the IP range by editing the Security Group via the AWS console. In your opinion, does this sound like adequate protection against unauthorized SSH access, or should I also use the 'bastion' technique

No need for a bastion host at this stage but please ensure you add the other measures: pubkey auth only, AllowUsers, AllowGroups, fail2ban.
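
For fail2ban the stock Ubuntu package already ships an ssh jail; a minimal setup is roughly:
Code:

sudo apt-get install fail2ban
# /etc/fail2ban/jail.local (local overrides of jail.conf), for reference:
#   [ssh]
#   enabled  = true
#   port     = ssh
#   filter   = sshd
#   logpath  = /var/log/auth.log
#   maxretry = 5
sudo service fail2ban restart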


Quote:

Originally Posted by sneakyimp (Post 4419654)
2) Not sure what you mean by an unprivileged user that can also sudo. The configuration out of the box does not permit login as root; instead it has the user ubuntu, who can log in via the key pair and can sudo without any password, ostensibly because of this line in /etc/sudoers:
Code:

# ubuntu user is default user in ec2-images. 
# It needs passwordless sudo functionality.
ubuntu  ALL=(ALL) NOPASSWD:ALL

Do you reckon it's safe to keep the ubuntu user or should I create a different user with those same privileges? Is sudo ability without a password acceptable?

Even though it apparently uses pubkey auth, as it is a well-known default I would create a different user with the same privileges. Sudo without a password is OK right now, but it will cease to be appropriate once you add user accounts that should not be able to perform commands as root: at that time you will need to make distinctions.
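
A rough sketch ('deploy' is only a placeholder name; on Ubuntu the 'admin' group carries sudo rights via /etc/sudoers):
Code:

sudo adduser deploy
sudo usermod -aG admin deploy    # matches the '%admin ALL=(ALL) ALL' sudoers rule
sudo mkdir -p /home/deploy/.ssh
sudo cp /home/ubuntu/.ssh/authorized_keys /home/deploy/.ssh/    # or install a fresh pubkey
sudo chown -R deploy:deploy /home/deploy/.ssh
sudo chmod 700 /home/deploy/.ssh
sudo chmod 600 /home/deploy/.ssh/authorized_keys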


Quote:

Originally Posted by sneakyimp (Post 4419654)
2a) I've been looking deeper into the no-root-login situation and noticed that the file /root/.ssh/authorized_keys does in fact contain a key, but it also contains a forced command which appears to prevent root login. Does this look safe, or should I just remove the key from the root authorized_keys file? Does the attached sshd_config look OK?

Good as long as sshd_config denies root logins (default method).


Quote:

Originally Posted by sneakyimp (Post 4419654)
3) Assuming we are comfortable with the integrity of the image (are we?),

Image key and package verification should do, yes.


Quote:

Originally Posted by sneakyimp (Post 4419654)
I expect to start installing things shortly using apt. Before I do so, I believe we should take a look at sources.list (see attached). Big question: If I run apt-get install BLAHBLAH, can I expect apt to verify the checksums and signatures on any installed packages, or do I need to install some other version of apt that implements these security checks?

You're the Ubuntu expert. Do the comments for the "universe" repo provide enough guarantee for you? Apt is able to verify sigs by default AFAIK but for instance not all upstream (Debian) packagers sign packages.


Quote:

Originally Posted by sneakyimp (Post 4419654)
4) Assuming apt is performing the signature verification I've been reading so much about, does this mean that it accepts only signatures made with keys in my apt keyring, or does it also accept the more extensive body of signatures that have some chain-of-trust relationship to the keys in my keyring?

Only direct sigs. The chained keys are not an attribute of Apt but of GPG.

sneakyimp 07-20-2011 01:53 AM

Thanks for the additional info and advice.

Quote:

Originally Posted by unSpawn (Post 4419807)
You're the Ubuntu expert. Do the comments for the "universe" repo provide enough guarantee for you? Apt is able to verify sigs by default AFAIK but for instance not all upstream (Debian) packagers sign packages.

*shudder*
They don't make me feel very comfortable at all. I'll need to take stock of the packages I expect to install and see if I can rule out the universe repository. I'm wondering if apt-get install/update/upgrade will fail if any unsigned packages or signature failures are encountered. My preference would be that it fail with a fairly blatant warning. A hard/noisy failure would probably go a long way toward helping me keep my new machine clean. While I'm fairly good at sorting out missing packages and installing things I need, my knowledge of the inner workings of apt is unfortunately limited.
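
I suppose I can test this directly. My understanding so far (corrections welcome) is that apt's default configuration already refuses unauthenticated packages unless you explicitly consent:
Code:

sudo apt-get update
# repos whose Release file fails GPG verification get flagged here, e.g.:
#   W: GPG error: ... NO_PUBKEY ...
# and installing an unauthenticated package prompts:
#   WARNING: The following packages cannot be authenticated!
#   Install these packages without verification [y/N]?
# with -y (non-interactive) apt aborts instead, unless the default is overridden:
apt-config dump | grep -i Unauthenticated    # empty output or "false" means the safe default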

salasi 07-20-2011 06:51 AM

Quote:

Originally Posted by sneakyimp (Post 4419164)
I've followed the instructions here. I have created a large EC2 compute instance based on one of the official Ubuntu 64-bit EBS images provided on the Ubuntu site.

Have you considered that carefully? As far as I am aware, the support period for the cloud instance is the same as the standard server (I don't know this, and I can't instantly find the relevant documentation, but it seems like the only reasonable guess - can anyone confirm or deny this as a fact?), and support for 11.04 terminates in late 2012. In comparison, the latest LTS version (10.04) has its support terminate in early 2013 for the desktop version and mid-2015 for the server version.

So, with the non-LTS 11.04, you'll be looking at rebuilding everything in late 2012, but with 10.04 LTS that date pushes out to mid-2015. I know all this rebuilding stuff is fun, but you can have too much of a good thing! (BTW: as you are setting up this system, take copious notes; whenever rebuild time comes around, it will be non-obvious what you did this time, and having your notes to refer to will be a help, even if you do things differently next time.)

Again, by the way, Rackspace has a nice, gentle amble through Cloud Servers here which, while not focused on security, is a nice, easy read. And their only Ubuntu cloud offering is 10.04 LTS...

sneakyimp 07-20-2011 01:25 PM

Thanks for the input, Salasi. It comes as a total surprise to me that 10.04 will be supported longer than 11.04. There have been so many other things to consider that one hadn't even crossed my mind. Do you have any links you can refer me to read more about this? As for rackspace, I've used their servers for some projects in the past. That link looks really helpful.

At the moment, I'm trying to determine a) if I really need the universe repository to get my server back up and b) will apt reject any packages that are either signed with untrusted signatures or not signed at all.

sneakyimp 07-20-2011 02:27 PM

2 Attachment(s)
I found a page in the Ubuntu Wiki which appears to corroborate Salasi's point that 11.04 runs out of steam in a couple of years whereas 10.04 server is around for the long haul.

I created a 10.04 LTS instance at Amazon and took note of the installed packages and running processes. Although 10.04 and 11.04 are close in the number of packages installed (388 and 389, respectively), there are differences. More notably, 10.04 has 94 running processes whereas 11.04 has only 63. Although unSpawn has approved the package and process list for 11.04, I would certainly hate to have to reinstall everything in just a year when support gets dropped for 11.04. My instinct tells me that going with 10.04 is probably the better idea. I've attached the process list and package list for 10.04 to this post in the forlorn hope that I might get some fairly prompt feedback on how clean the procs/pkgs look.
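
For anyone who wants to compare the two lists themselves, something like this works (the filenames are whatever you saved the dpkg -l output as):
Code:

awk '/^ii/ {print $2}' packages-10.04.txt | sort > pkgs-10.04
awk '/^ii/ {print $2}' packages-11.04.txt | sort > pkgs-11.04
comm -3 pkgs-10.04 pkgs-11.04    # packages unique to either release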

unSpawn 07-20-2011 04:59 PM

Package list for LTS looks good IMO. BTW, while you're at it, could you please provide an account of what you've done so far wrt configuration and hardening aspects? TIA.

