Mount 2TB encrypted-container file through SSHFS (performance question)
Because the storage server offers no shell access or rsync support, I tried using a mounted, encrypted container file to store a huge amount of data.
Setup: dedicated server (source) and storage server (target), connected via 1 Gbit.
I created and mounted the encrypted container file as follows:
This way, approx. 800 GB of data were backed up correctly.
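The exact commands were lost above; the following is only a plausible reconstruction, assuming LUKS via cryptsetup, a loop device, and the /mnt/backup and /mnt/container paths mentioned later in the thread (host name and file name are hypothetical):

```shell
# Mount the remote storage share via sshfs (hypothetical host/path)
sshfs user@storage-server:/backup /mnt/backup

# Create a sparse 2 TB container file on the share
truncate -s 2T /mnt/backup/container.img

# Attach it to a local loop device and set up LUKS encryption
losetup /dev/loop0 /mnt/backup/container.img
cryptsetup luksFormat /dev/loop0
cryptsetup open /dev/loop0 backup_crypt

# Create the ext2 filesystem described in the question, then mount it
mkfs.ext2 /dev/mapper/backup_crypt
mount /dev/mapper/backup_crypt /mnt/container
```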
The second rsync run was also very fast, just a few minutes.
Now my problem:
After a few days some data changed, and there were approx. 100 GB of new data to be stored.
Rsync now seems to run forever. There is still activity from rsync and on the network interface, but it is nowhere near using the available bandwidth, and as said it won't even finish a single rsync run within 24 hours...
After running `ls` in various directories of the mounted device, I noticed that every `ls` command takes a very long time (up to 30 seconds).
This brings me to the conclusion that it must have something to do with the filesystem structure and inode reads. That would also explain why rsync, which mainly builds and transfers incremental file lists, takes forever.
As you can see in the code above, I formatted it as an ext2 filesystem. I then tried converting to ext3 with directory indexing enabled (`tune2fs -j -O dir_index`). The filesystem should now be ext3, but this did not solve my problem.
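One caveat with that conversion: enabling dir_index only affects directories created afterwards; existing directories are not indexed until they are rebuilt offline. A sketch of the re-indexing step, assuming the container is unmounted first (device name hypothetical):

```shell
# e2fsck must run on an unmounted filesystem
umount /mnt/container

# -f forces a full check; -D rebuilds and optimizes directory
# indexes so existing directories actually benefit from dir_index
e2fsck -fD /dev/mapper/backup_crypt

# Confirm that has_journal and dir_index now appear in the feature list
tune2fs -l /dev/mapper/backup_crypt | grep features

mount /dev/mapper/backup_crypt /mnt/container
```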
Does anyone have an idea how I can increase the performance, and what would be the best way to mount such a large container file? Do I have to change block sizes, convert to ext4, use ReiserFS, or is this just not possible at all?
Thanks for your suggestions and feedback.
Is the target a host you control?
Do you need the traffic to it to be encrypted? You are using a user-mode filesystem (sshfs) to loop-mount an encrypted filesystem.
Could you have an encrypted partition on the backup server instead? I'm suggesting using the network block device (nbd). If you can't repartition the backup server, you could loop-mount the encrypted file on the server and export the loopback device as the nbd source, then use cryptsetup on the local machine. I think this may give you higher throughput, but you'd want to test it to be sure.
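A sketch of that nbd approach, assuming you can run commands on the backup server (host name and port are hypothetical, and newer nbd-server versions expect a config file with named exports instead of this old-style invocation):

```shell
# On the backup server: export the encrypted container file as a block device
nbd-server 10809 /backup/container.img

# On the local machine: attach the remote export
nbd-client storage-server 10809 /dev/nbd0

# Decrypt locally, so only ciphertext ever crosses the wire
cryptsetup open /dev/nbd0 backup_crypt
mount /dev/mapper/backup_crypt /mnt/container
```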
Since the file is already encrypted, you don't need ssh to encrypt the traffic.
P.S. The article also creates a file on the server, but nbd-server uses the file directly instead of a loop device, so their example is even closer to what you are doing than I first thought.
Unfortunately I don't have control over the target host. The only ways to connect to it are FTP/SFTP/SCP/Samba/CIFS.
I think an sshfs mount over SFTP is still the fastest way. I don't see any other way to use rsync in combination with hardlinking.
Do you think the encryption reduces the speed or affects how files and inodes are read from the target? The source host has 12 cores, and the first rsync run used the full 1 Gbit bandwidth, so encryption does not seem to hurt throughput. Do you think reading encrypted data from the target influences how directory and inode listings are read?
From the ext4 wiki I read the following:
I don't know how well sshfs handles seeking into the encrypted file. The decryption and the filesystem are handled locally.
Try catting a large file to /mnt/container through a `pv` pipe to measure the bandwidth. Then mount the /mnt/backup/ share using CIFS instead, replacing your first step, and see if there is an improvement.
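The measurement and the CIFS remount suggested above might look like this (share name, credentials, and the test file path are hypothetical):

```shell
# Measure sequential write throughput into the mounted container;
# pv shows a live rate while copying
pv /path/to/largefile > /mnt/container/testfile

# Replace the sshfs mount with a CIFS mount of the same share
fusermount -u /mnt/backup
mount -t cifs //storage-server/backup /mnt/backup -o username=user
```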
It turned out that there were performance issues on the network/storage server itself.
Now everything seems to work fluently - so there seems to be no problem mounting a 2TB image file over a network connection.
In any case, CIFS was more stable than sshfs in my setup.