Linux - Security: This forum is for all security related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.
I need to log in as root, or at least obtain root privileges, in a cron-triggered backup run. The straightforward way to do this would be for the backup server to make an ssh connection to the server being backed up (in this direction because I want to avoid many servers being backed up in parallel; the backup server itself would manage that scheduling), with the rsync command performing the backup's synchronization step.
I'm looking for alternatives to this in some form. I'd like to disallow direct root login to my ssh port (not 22).
One idea I have is to have the backup server initiate an ssh login as a non-root user, either to the actual source server or to a server that can reach the source server, and set up port forwarding. Over the forwarded port, it would then initiate the rsync, which logs in as root via another port that allows direct root login but cannot be reached from the internet at all (because the border firewall doesn't allow this port in).
FYI, these logins will use ssh keys, not passwords. I need to preserve ownership metadata for the files being backed up, which is why I am using root. Also, rsync is needed for incremental updates to keep bandwidth usage low (otherwise I could just transfer a tarball each day).
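For what it's worth, the tunnel-then-rsync idea might look something like this. Host names, port numbers, and paths here are my own placeholders, not details from the actual setup:

```shell
#!/bin/sh
# Sketch only: open the tunnel as an unprivileged user, then rsync as
# root over the forwarded port. All names and numbers are placeholders.
BACKUP_USER=backup            # unprivileged login on the source host
SRC_HOST=source.example.com   # host being backed up
ROOT_PORT=2222                # sshd port that permits root, firewalled off
LOCAL_PORT=9022               # local end of the forwarded tunnel

# 1. Tunnel as non-root: -f backgrounds it, -N runs no remote command.
#    The remote end connects to the source host's own root-capable port,
#    which is reachable from localhost even though the firewall blocks it.
ssh -f -N -L "${LOCAL_PORT}:localhost:${ROOT_PORT}" "${BACKUP_USER}@${SRC_HOST}"

# 2. rsync as root over the tunnel; -a preserves ownership and
#    permissions, which is the reason root is needed in the first place.
rsync -a --delete -e "ssh -p ${LOCAL_PORT}" \
    "root@localhost:/srv/data/" /backups/source/current/
```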
Anyone have any other ideas or comments, for security issues, based on experience doing things like this (backups, routine data replication, etc)?
I'm sure you've already considered using a complete backup solution like Bacula or Amanda? If those aren't acceptable, it sounds like you've got a good solution figured out for your particular case.
To allow root logins you need to set the `PermitRootLogin` parameter to `yes` in `/etc/ssh/sshd_config`.
Yeah, I've got dual port setups for sshd already, with one allowing root and the other not allowing root.
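A dual-port setup like that can be expressed in a single sshd_config with a Match block (a sketch with placeholder port numbers; `Match LocalPort` and `prohibit-password` need a reasonably recent OpenSSH — older versions spell the latter `without-password`):

```
# Internet-facing port: no root
Port 2201
# Backup-only port: blocked at the border firewall
Port 2202

PermitRootLogin no
# keys only, as described above
PasswordAuthentication no

Match LocalPort 2202
    PermitRootLogin prohibit-password
```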
Although these other backup solutions have to-disk capability, they were originally designed for tape, and tape has always been painful for incrementals, especially given their reliance on file dates to determine whether a backup is needed. Even so, they still seem to need periodic full backups, which just doesn't make sense when going to disk and rsync is available, which can do incrementals indefinitely.
I have my own program that "peels off" incrementals from the on-disk copy of the rsync target. After rsync runs for the backup cycle, from the place being backed up to the backup location, my program runs at the backup location. Whatever changed since the last cycle is archived as an incremental. If a file was deleted by rsync, it will still be present in the reference tree kept by this program, which then moves that file over to the incremental tree for this cycle. Newly created files are just logged, so the date each one appeared is known and a restore for a previous date won't include it. It works in such a way that you can remove the older increments any time you wish. For really long-term archiving, copy those older increments to a couple of tapes and remove the on-disk copy.
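A minimal sketch of that peel-off step, assuming plain POSIX tools (the actual program isn't shown in this thread, and note that `cmp` only catches content changes, not ownership-only changes):

```shell
#!/bin/sh
# Hypothetical sketch of the "peel off" step described above.
#   current/ = fresh rsync target for this cycle
#   ref/     = reference tree kept from the previous cycle
#   incr/    = where this cycle's increment is collected
peel_increment() {
    current=$1; ref=$2; incr=$3
    mkdir -p "$incr"

    # Newly created files are only logged, with the date they appeared,
    # so a restore for an earlier date won't include them.
    ( cd "$current" && find . -type f ) | while read -r f; do
        [ -e "$ref/$f" ] || echo "$(date +%F) $f"
    done >> "$incr/new-files.log"

    # Any file that changed or disappeared since the last cycle moves
    # from the reference tree into this cycle's increment.
    ( cd "$ref" && find . -type f ) | while read -r f; do
        if [ ! -e "$current/$f" ] || ! cmp -s "$ref/$f" "$current/$f"; then
            mkdir -p "$incr/$(dirname "$f")"
            mv "$ref/$f" "$incr/$f"
        fi
    done

    # Bring the reference tree up to date for the next cycle.
    cp -a "$current/." "$ref/"
}
```

Since the increment only ever receives files moved out of the reference tree, each increment directory is self-contained and older ones can be deleted (or shipped to tape) independently, just as described.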
We're doing these backups to multiple sites, over the internet, so avoiding full backups is highly desirable. A full backup of what we are backing up now takes about 20 hours over gigabit ethernet within the LAN. Imagine what it would take over a T3 link (and we don't even have that yet).
Good analysis. Have you ever heard of rdiff-backup? It's built on rsync and requires only the first backup to be full; after that it does incrementals.
Yes, I looked at rdiff-backup. It integrates the rsync step, and that just didn't seem to be as flexible as I'd like. For example, in some cases I'd like to run the rsync every 2nd hour but generate the delta only daily.
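Decoupling those two schedules is straightforward with cron; a hypothetical crontab on the backup server (script names and times are placeholders):

```
# Pull changes every 2nd hour; peel off the daily increment once, at night.
0 */2 * * * /usr/local/bin/backup-rsync.sh
30 3 * * *  /usr/local/bin/peel-increment.sh
```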
You can lock down the ssh access to only a certain username and host, for example root@remotesite.com. On top of that, using tcpwrappers you can specify which hostnames or addresses can have access to sshd, not to mention iptables rules that you can specify as well. You can also enable and disable all this with a script during a certain hour of the night to lock it down even further. Last, I would recommend using a squashfs filesystem for random access to the restores. It beats having to restore from a huge linear tar file.
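Those pieces might look like this (hostnames, ports, and addresses are placeholders, not details from the thread; the `from=` option in authorized_keys additionally ties the key itself to one source address):

```
# /etc/hosts.allow -- tcpwrappers: only the backup host may reach sshd
sshd: backupserver.example.com

# iptables: accept the root-capable port only from the backup server's IP
iptables -A INPUT -p tcp --dport 2222 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j DROP

# ~root/.ssh/authorized_keys -- restrict where this key may log in from
from="203.0.113.10" ssh-rsa AAAA... backup-key
```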