Because my storage server lacks SSH/rsync support, I tried using a mounted, encrypted container file to store a large amount of data.
Setup: dedicated server (source) and storage server (target), connected via 1 GBit.
I mounted and built the encrypted container file as follows:
# mount the remote storage via sshfs
sshfs firstname.lastname@example.org:/ /mnt/backup
# create a ~2 TB container file filled with random data
dd if=/dev/urandom of=/mnt/backup/container.img bs=1M count=2000000
# attach the container file to a loop device
losetup /dev/loop1 /mnt/backup/container.img
# set up LUKS on the loop device and open it
cryptsetup luksFormat /dev/loop1
cryptsetup luksOpen /dev/loop1 container
# create an ext2 filesystem on the decrypted volume and mount it
mkfs.ext2 /dev/mapper/container
mount /dev/mapper/container /mnt/container
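For reference, the filesystem parameters can be inspected while the container is mapped (standard e2fsprogs/coreutils commands):
# dump filesystem features, block size and inode counts
tune2fs -l /dev/mapper/container
# inode usage of the mounted filesystem
df -i /mnt/container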
This is my setup for the mounted container file, and it works like a charm. An rsync:
rsync -aAXv /xxx/* /mnt/container
runs with incredible speed.
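For reference, the time rsync spends just building the file list can be measured with standard options (`-n` is `--dry-run`, `--stats` prints file counts and list-generation time):
# dry run: only walks the tree and builds the file list, then prints statistics
time rsync -aAXn --stats /xxx/* /mnt/container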
Roughly 800 GB of data were backed up correctly this way.
The second rsync run was also very fast, just a few minutes.
Now my problem:
After some days, some data changed, and there are now roughly 100 GB of new data to be stored.
Rsync now seems to run forever. There is still activity from rsync and on the network interface, but it comes nowhere near using the available bandwidth, and as said, it won't even finish a single run within 24 hours...
After running `ls` in various directories of the mounted device, I noticed that every `ls` command takes a very long time in each directory (up to 30 seconds).
This leads me to the conclusion that it must have something to do with the filesystem structure and inode reads. That would also explain why rsync, which spends most of its time sending and reading the incremental file list, takes forever.
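To put numbers on this, the metadata latency can be timed directly with standard tools (the directory name below is just a placeholder):
# time a directory listing inside the container
time ls -l /mnt/container/somedir > /dev/null
# per-syscall summary; time dominated by getdents/stat points at metadata reads
strace -c ls -l /mnt/container/somedir > /dev/null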
As you can see in the code above, I formatted the volume as ext2. I have now tried converting it to ext3 with directory indexing (`tune2fs -j -O dir_index`). The filesystem should now be ext3, but this did not solve my problem.
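One caveat I am aware of: enabling dir_index with tune2fs should only hash directories created afterwards, so existing directories would have to be reindexed offline, roughly like this (a sketch, not yet tested on this setup):
umount /mnt/container
# -f forces the check, -D rebuilds/optimizes existing directory indexes
e2fsck -fD /dev/mapper/container
mount /dev/mapper/container /mnt/container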
Does anyone have an idea how I can improve the performance, and what the best way to mount such a large container file is? Do I have to change the block size, convert to ext4, use ReiserFS, or is this just not possible at all?
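In case ext4 turns out to be the answer, I assume an in-place conversion would look roughly like this (the usual ext3-to-ext4 feature flags; note that extents would only apply to files written after the conversion, and I have not verified any of this on a loop device over sshfs):
umount /mnt/container
# enable the main ext4 on-disk features on the existing filesystem
tune2fs -O extents,uninit_bg,dir_index /dev/mapper/container
# a full check is mandatory after changing features; -D also rebuilds directory indexes
e2fsck -fD /dev/mapper/container
# remount as ext4; noatime avoids extra metadata writes over the slow link
mount -t ext4 -o noatime /dev/mapper/container /mnt/container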
Thanks for your suggestions and feedback.