I would really appreciate some help from the community - I'm just about to leave a job, and I've chosen a bad time to lock myself out of the server...
Salient facts: I am locked out of my Amazon EC2 instance (no SSH access). The instance is fully backed up as an AMI, and I have snapshots of both volumes from less than an hour ago. The data and machine images are therefore secure, and given enough time I could simply rebuild the server on a new instance, point our Elastic IP at it, and be done. However, the server hosts three websites that depend on both MySQL and MongoDB databases, so that route would take a lot of time and pain to get working properly again.
Instead, I want to restore SSH access so I don't have to do that, but so far my attempts have failed. Because I'm on AWS, I can get at my Linux install by detaching the boot volume and mounting it on another instance. Through this I've tried a few things: writing a recovery script and adding an @reboot crontab entry, then unmounting the volume and reattaching it to the original instance as its boot volume to see if that fixes things.
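In case it helps, here's roughly what my recovery attempt looked like from the rescue instance (a sketch from memory; the device name `/dev/xvdf1`, mount point `/mnt/rescue`, and the script's contents are placeholders, not my exact commands):

```shell
# On the rescue instance, after attaching the detached boot volume
# (device name and mount point are placeholders)
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue

# Drop a recovery script onto the broken root filesystem
sudo tee /mnt/rescue/root/recover.sh > /dev/null <<'EOF'
#!/bin/sh
# Attempt to undo whatever broke SSH: stop vsftpd, restart sshd
service vsftpd stop
service sshd restart
EOF
sudo chmod +x /mnt/rescue/root/recover.sh

# Run it once at boot via root's crontab on the broken system
echo '@reboot /root/recover.sh' | sudo tee -a /mnt/rescue/var/spool/cron/root

# Unmount cleanly before reattaching the volume to the original instance
sudo umount /mnt/rescue
```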
I have this indirect access via the boot volume, so in theory I should be able to fix this; I just don't know how.
I've attached my /var/log/secure log file, restricted to the lines that lead up to the point where SSH access dies.
To clarify: when I attempt to connect via SSH, the server appears to accept my private key but then disconnects me. I'm almost certain this is related to the fact that I installed vsftpd just before the problem started, because it looks as though my SSH logins are being handled by vsftpd rather than sshd - though I could be completely wrong about that. I've attached the log, plus my sshd and vsftpd config files - please help!
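For what it's worth, here's what I've been running to diagnose it (a sketch; `ec2-user`, the key path, and `/mnt/rescue` are placeholders for my actual user, key, and mount point):

```shell
# Verbose client output shows exactly where the session dies
# after the key is accepted
ssh -vvv -i ~/.ssh/mykey.pem ec2-user@my-elastic-ip

# From the rescue instance, with the broken volume mounted at /mnt/rescue:

# Did the vsftpd install change my user's login shell?
grep ec2-user /mnt/rescue/etc/passwd

# Anything suspicious in the sshd config that could kill the session?
grep -Ei 'port|forcecommand|subsystem|match|allowusers|denyusers' \
    /mnt/rescue/etc/ssh/sshd_config
```

If anyone can tell me what to look for in that output, I'd be grateful.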
Contents of /var/log/secure:
http://pastebin.com/WMrAjXq4
My vsftpd.conf:
http://pastebin.com/UH8q0ENU
My sshd.conf:
http://pastebin.com/SEQFWLBz