Thanks for the input. Do you have any thoughts on the list of processes? Remember that 10.04 has some thirty extra processes running versus 11.04.
As Salasi recommended, I'm assembling a detailed document for my own purposes that includes URLs, credentials, and everything, but it's not for public consumption. I've decided to go with 10.04 LTS and so far I have:
* Log in to my Amazon EC2 account and create a new key pair.
* Create a security group that permits inbound traffic on port 22 from IP range 76.173.0.0/16 ONLY. AFAIK, this should be the only inbound connection permitted for this virtual machine.
* Instantiate a Large EC2 Compute Instance using this AMI, which was linked from this page at Ubuntu. The Ubuntu system that resulted is configured to allow ssh login only via certificate as user ubuntu. While sshd_config had PermitRootLogin set to 'yes', the authorized_keys file for the root user had a command value which instructed anyone using root to log in as 'ubuntu' instead. PasswordAuthentication and PermitEmptyPasswords are set to no by default. RSAAuthentication and PubkeyAuthentication are set to yes by default.
* List the trusted keys using sudo apt-key finger. Verify their key fingerprints visually against data located at http://keyserver.ubuntu.com as detailed above. Also use gpg commands to import and attempt to verify these keys (and the one subkey) on a separate machine (my Ubuntu desktop) using techniques described above. I'm still not clear on why it's OK to trust these keys, because they have no chain-of-trust link to me. While this is not surprising, the keyserver neither delivers key information via HTTPS, nor would it be impossible to create some random key and 40 fake email addresses and sign it all myself. Per unSpawn, I'm giving the key verification a rest in favor of progress.
* Ask unSpawn (and the community at large) to inspect the installed package list, running process list, sources.list, and sshd_config.
* Create a new account for myself with no password.
* Add the public key from my personal key pair to the ~/.ssh/authorized_keys file for this new user.
* Test login using this new account.
* Using the ubuntu account, add this new user to sudoers with ALL=(ALL) NOPASSWD:ALL, which gives this new user sudo without requiring any password.
* Comment out the public key in /home/ubuntu/.ssh/authorized_keys, thereby disabling login entirely for user ubuntu.
* Alter the command directive in /root/.ssh/authorized_keys so that it tells users to use their own login rather than the ubuntu login.
* Test that the new user has sudo capability and that the ubuntu login is disabled.
* Edit /etc/ssh/sshd_config so that PermitRootLogin is no, PermitEmptyPasswords is no, PasswordAuthentication is no, and AllowUsers contains only the name of my newly added user.
* Restart sshd: sudo /etc/init.d/ssh restart
* Test the effectiveness of AllowUsers by re-enabling login (but not sudo) for ubuntu while excluding it from AllowUsers.
* Confirm that both the root and ubuntu logins are no longer permitted, whether using the key or otherwise.

At the moment, I'm trying to figure out fail2ban and Tiger (which is brand new to me) and thinking that I'll be removing the universe repositories from my sources.list to see how far I get with package installs. I'm still wondering if apt-get install will fail if it encounters a) some unsigned package or dependency or b) a package signed by a key other than the two in my apt keys.
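For anyone following along, a minimal sketch of the account-swap steps above. The username sneakyadmin and the key filename are placeholders, and every command assumes you are still logged in as the stock ubuntu user:

```shell
# create the new passwordless admin account
sudo adduser --disabled-password --gecos "" sneakyadmin

# install my personal public key for the new user
sudo mkdir -p /home/sneakyadmin/.ssh
sudo cp my_personal_key.pub /home/sneakyadmin/.ssh/authorized_keys
sudo chown -R sneakyadmin:sneakyadmin /home/sneakyadmin/.ssh
sudo chmod 700 /home/sneakyadmin/.ssh
sudo chmod 600 /home/sneakyadmin/.ssh/authorized_keys

# grant passwordless sudo via a sudoers drop-in file
# (avoids hand-editing /etc/sudoers itself)
echo 'sneakyadmin ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/sneakyadmin
sudo chmod 440 /etc/sudoers.d/sneakyadmin
```

Test the new login in a second terminal before disabling the ubuntu account, and keep the first session open until you're sure it works.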
By the way, I just noticed that the 10.04 image has a totally different sources.list:
Code:
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid main universe
Unspawn, thanks yet again for the priceless input. I wait anxiously for every morsel of info you are kind enough to give.
Before I install anything, before I run apt-get update or apt-get upgrade, I feel like I need to resolve the universe vs. main question for repositories. Given that universe is in the default sources.list, I'm guessing that there may already be installed packages that come from universe. My inclination is to trust it, but following my nose is precisely what got me in trouble before.
* Can anyone recommend any command(s) or process by which I might determine (quickly) whether my installed packages (or other packages I plan to install) are in main or universe? The only way I can imagine doing so -- and I don't much like this idea -- is to restrict my sources.list to main and try updating/installing things and see what fails. I'm on #ubuntu IRC now trying to get answers.

UPDATE: I believe this command lists my installed packages and calls apt-cache policy on all of them:

dpkg --get-selections | grep -oE '^[+\.a-z0-9\-]+\s' | xargs apt-cache policy

Some additional filtering yields the main/universe/multiverse bit:

dpkg --get-selections | grep -oE '^[+\.a-z0-9\-]+\s' | xargs apt-cache policy | grep -E ' (lucid.*/[a-z]+)'

and a final pipe tells me that not one of these packages is from universe:

dpkg --get-selections | grep -oE '^[+\.a-z0-9\-]+\s' | xargs apt-cache policy | grep -E ' (lucid.*/[a-z]+)' | grep universe

Does this mean that if I were to disable the universe repository options now, I could be sure to exclude all but main from my system?
* Can anyone recommend a way to test whether unsigned or untrusted packages cause apt-get failure and/or noisy notification?
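A possibly simpler variant of the pipeline above, assuming a Debian-style system where dpkg-query and apt-cache are available (dpkg-query -W with a format string avoids having to grep the --get-selections output):

```shell
# list installed package names one per line, ask apt-cache which
# repository component each came from, and print only the
# universe/multiverse hits
dpkg-query -W -f '${Package}\n' \
  | xargs apt-cache policy \
  | grep -E 'lucid.*/(universe|multiverse)' \
  || echo "no universe/multiverse packages found"
```

An empty grep result here (and hence the fallback message) would support the same conclusion: nothing installed so far comes from universe.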
Also, if my dpkg/grep/apt-cache/grep commands above do what I think they do, we can assume that there are no universe packages used in this basic machine image. Does that sound like a reasonable conclusion?
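On the untrusted-package question, one way to answer it experimentally is to build a tiny local repository with no signed Release file and watch what apt-get does. This is only a sketch: dpkg-scanpackages comes from the dpkg-dev package, the paths and package name are placeholders, and from memory apt responds with a loud "cannot be authenticated" prompt rather than a hard failure:

```shell
# build a throwaway local repo containing one .deb and no GPG signature
mkdir -p /tmp/unsigned-repo
cp some-package.deb /tmp/unsigned-repo/
cd /tmp/unsigned-repo
dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz

# point apt at it and refresh
echo 'deb file:/tmp/unsigned-repo ./' | sudo tee /etc/apt/sources.list.d/unsigned-test.list
sudo apt-get update

# installing from it should trigger the authentication warning/prompt
sudo apt-get install some-package
```

Remove the test sources.list.d entry afterwards so the unsigned repo doesn't linger.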
I figure I might as well state a few goals here to see if it looks good for me to proceed.
Tomorrow, I hope to:
* run apt-get update and apt-get upgrade to bring the machine up to date
* install Tiger, fail2ban, and other security and diagnostic tools
* set up iptables or other rules to lock the machine down properly
* start setting up the web stack: PHP 5.x, MySQL 5.x.x, and any required modules (curl, suhosin, possibly others)
* determine the DNS situation. Please recall that we are using a LOT of subdomains. Hopefully we won't have to use BIND, but this is a big question mark.
* create a cert signing request for a new security certificate for www.mydomain.com

To give an idea of the PHP stuff I might need, I ran this command on the old server: Code:
[adminuser@nameserver ~]$ php -me
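For the cert-signing-request item above, a minimal sketch with openssl. The key filename, CSR filename, and subject fields are placeholders; your CA will dictate the exact subject contents:

```shell
# generate a new 2048-bit RSA key and a CSR for the www host in one step
openssl req -new -newkey rsa:2048 -nodes \
    -keyout www.mydomain.com.key \
    -subj "/C=US/ST=California/L=Los Angeles/O=My Company/CN=www.mydomain.com" \
    -out www.mydomain.com.csr

# sanity-check the request before sending it off
openssl req -in www.mydomain.com.csr -noout -text -verify
```

Keep the .key file private on the server; only the .csr goes to the certificate authority.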
Good day.
So I bit the bullet and decided to proceed with apt-get update and apt-get upgrade. Only a couple of things were updated: Code:
sneakyimp@machine:~$ sudo apt-get upgrade

As for the question of whether to include universe packages, it seems quite likely that I will need to. I did an apt-cache search for tiger: Code:
apt-cache search tiger Code:
aide - Advanced Intrusion Detection Environment - static binary

apt-cache policy tells me this is a universe package: Code:
sneakyimp@machine:~$ apt-cache policy tiger

Code:
jason@ip-10-100-237-252:~$ apt-cache policy fail2ban
Code:
# Do not enable debsig-verify by default; since the distribution is not using
HTH
Code:
messagebus:x:102:107::/var/run/dbus:/bin/false
Code:
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid main universe
* establish iptables rules
* install fail2ban, configure it
* install tiger, configure it
* install Apache and PHP and harden as directed by the Securing Debian Manual
* set up Amazon RDS to host the MySQL database; limit DB access to either my security group or this machine specifically
* use Amazon Route 53 to handle DNS
* set up Amazon SES to handle outgoing mail
* incoming mail?? Google Apps? ??? Need to migrate existing email and accounts to a new system.
* antivirus? ClamAV? Email and image upload are the only ways that files are introduced by users.
* set up automated apt-get update/upgrade as described here. I'm not really sure what the tradeoffs are. I understand that unattended updates can introduce security issues. On the other hand, no updates also introduces security problems. What would Bruce Schneier do?
* create an AMI from the hardened, configured machine, both for backup and for creating a staging area as needed
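For the automated-update item in the list above, a sketch of the stock Ubuntu approach using the unattended-upgrades package (the file paths are the standard apt ones, but worth double-checking on Lucid):

```shell
sudo apt-get install unattended-upgrades

# /etc/apt/apt.conf.d/20auto-upgrades -- refresh package lists and
# run unattended-upgrade once a day
sudo tee /etc/apt/apt.conf.d/20auto-upgrades <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
```

The companion file /etc/apt/apt.conf.d/50unattended-upgrades controls which origins get auto-applied, which speaks directly to the tradeoff: a common middle ground is applying only the -security pocket automatically while leaving everything else to a manual apt-get upgrade.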
Per unSpawn's email previously, I would like to:
I've also tried reading this ponderous document and the man pages. I could certainly use some tips here. In particular, these are my goals, in order:
* not to exclude myself from SSH access to the server, even if my IP address changes, which it will
* block unwelcome visitors (meaning pretty much the entire world) from ever speaking to my machine via ssh
* allow incoming requests for web traffic on ports 80 and 443
* allow this machine to make MySQL queries to another machine
* allow this machine to make secure (and possibly non-secure) curl requests to another web server
* allow this machine to send mail via another machine
* close every other port down

Any thoughts or input would be most appreciated.
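A sketch of a ruleset matching those goals. The trusted subnet is the one from the security group earlier in the thread and is a placeholder like everything else here; test from a second session before committing to it:

```shell
#!/bin/sh
# default: drop everything inbound, allow everything outbound
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# never drop loopback
iptables -A INPUT -i lo -j ACCEPT

# allow replies to connections this machine initiated
# (this is what lets outbound MySQL, curl, and SMTP traffic get answers)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# ssh only from the trusted range
iptables -A INPUT -p tcp --dport 22 -s 76.173.0.0/16 -j ACCEPT

# web traffic from anywhere
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
```

Because OUTPUT stays ACCEPT, the MySQL, curl, and mail goals need no explicit outbound rules; only the return traffic has to be allowed in, which the ESTABLISHED,RELATED rule covers.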
+ Wrt iptables "-m recent": what I do is move inbound SSH traffic from the filter table INPUT chain to a separate chain (default INPUT chain DROP policy) in which fail2ban creates its fail2ban-[name] rules; then I allow certain ranges and finally trap offenders with "-m recent --name SSH --set" and drop the traffic. In the raw table PREROUTING chain there's a "-m tcp -p tcp --dport 22 -m recent --name SSH --update --seconds n --hitcount n -j DROP". The added value is that, now that /proc/net/ipt_recent/SSH exists, you can use it to manage blocks (and this goes for every service you have a bucket for) and remove or add IP addresses without having to muck with iptables rules (in contrast with tools that dump just about everything in the filter table INPUT chain, which, given the way filters are traversed, is not good for performance, let alone easy to manage...).
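The scheme described above might look something like this as concrete rules (a sketch: the chain name, seconds, and hitcount values are arbitrary, and the trusted range is the one used earlier in the thread):

```shell
# raw table: drop repeat offenders early, before connection tracking
iptables -t raw -A PREROUTING -p tcp -m tcp --dport 22 \
    -m recent --name SSH --update --seconds 600 --hitcount 4 -j DROP

# filter table: divert inbound ssh into its own chain
iptables -N SSH_IN
iptables -A INPUT -p tcp --dport 22 -j SSH_IN

# allow the trusted range, then tag-and-drop everyone else
iptables -A SSH_IN -s 76.173.0.0/16 -j ACCEPT
iptables -A SSH_IN -m recent --name SSH --set -j DROP

# later: manage the block bucket directly, no iptables edits needed
# echo +1.2.3.4 > /proc/net/ipt_recent/SSH   # add an address
# echo -1.2.3.4 > /proc/net/ipt_recent/SSH   # remove an address
```

The proc interface is what makes this pleasant to operate: unblocking a locked-out address is a one-line write rather than a rule hunt.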
/* Note to self: add MySQL security best practices. */
+ AFAIK you did use FTP as well, right?
Amazing info, thank you so much. Kind of choking on the information overload at the moment. Both ufw and iptables have pretty epic man pages. I think I'll go with iptables directly because it seems more precise. I understand a few things (and please do not hesitate to correct me or add detail):
* if any rule results in ACCEPT, this overrules any DROP, regardless of rule sequence (do I have that right?)
* be careful not to DROP loopback; i.e., make sure your first rule is to permit loopback/localhost access
* for this hardening project, I'm really just interested in the INPUT stage and not so much FORWARD or OUTPUT (or does ESTABLISHED,RELATED somehow affect OUTPUT?)
* I could very well lock myself out of this server permanently and, because it's an Amazon cloud instance, nobody can just walk up and plug a keyboard in to correct my mistake. I'm really sweating the subnet thing and expect I should build in access for some other IP addresses or subnets just in case. Suggestions welcome here for avoiding lockout.
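On avoiding lockout: one common safety net is to schedule an automatic rollback before applying new rules, and cancel it only once a fresh login succeeds. A sketch, assuming the at daemon is installed and running:

```shell
# save the known-good rules
sudo iptables-save > /tmp/iptables.good

# schedule a restore in 10 minutes, no matter what happens
echo 'iptables-restore < /tmp/iptables.good' | sudo at now + 10 minutes

# ...apply the new rules, then open a NEW ssh session to test them...

# still able to log in? cancel the pending rollback
sudo atq            # find the job number
sudo atrm <jobnum>  # cancel it (<jobnum> from atq output)
```

If the new rules lock you out, you simply wait out the timer and the old rules come back on their own.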
OK, my belief that an accept rule can occur anywhere no longer seems correct. I think I misread the tutorial here, which uses the -j flag for each rule. ORDER IS IMPORTANT.
Based on that tutorial, I have concocted these iptables rules: Code:
# allow established sessions to receive traffic

Comments welcome. I really don't want to lock myself out of my server. I'll be working through the connection to fail2ban next.
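One caveat worth flagging about rules entered this way: they vanish on reboot. A sketch of persisting them on a Lucid-era system (assuming the EC2 image honors /etc/network/if-pre-up.d scripts; the file paths are conventional, not mandatory):

```shell
# dump the current, tested rules to a file
sudo sh -c 'iptables-save > /etc/iptables.rules'

# reload them before the network interface comes up at boot
sudo tee /etc/network/if-pre-up.d/iptables <<'EOF'
#!/bin/sh
iptables-restore < /etc/iptables.rules
EOF
sudo chmod +x /etc/network/if-pre-up.d/iptables
```

Re-run the iptables-save step whenever the ruleset changes, or the boot-time restore will silently revert to the older snapshot.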
*AT*...brilliant.
Code:
*filter