When I asked the question, it was not possible to get an external 1 TB disk. I also saw this as a challenge that should not be so difficult to solve, but it seems it is.
Both solutions #4 and #7 work if all the pieces of the huge tar file can be stored on the target machine before extracting. But neither works using one and only one piece at a time.
Solution #4 works if the target machine has free space for about twice the size of hugedir (the split tar file is xz-compressed, though). All the pieces of the split tar file have to be concatenated on the target machine before they can be extracted.
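For the record, once all the pieces are on the target, the concatenated copy at least never has to exist as a file: you can pipe the pieces straight into tar. A small self-contained sketch (the file names and sizes are illustrative stand-ins for the real pieces):
Code:

```shell
set -e
work=$(mktemp -d); cd "$work"
mkdir hugedir
printf 'hello\n' > hugedir/file.txt

# Stand-ins for the transferred pieces of the real archive.
tar -cJf hugedir.tar.xz hugedir
split -b 64 hugedir.tar.xz hugedir.tar.xz.
rm hugedir.tar.xz

# Concatenate straight into tar: the joined archive never hits the disk,
# so the extra space needed is only for the pieces themselves.
mkdir target
cat hugedir.tar.xz.* | tar -C target -xJf -
cat target/hugedir/file.txt    # prints: hello
```

This still needs all the pieces present at once, so it only shaves off the space for the joined copy, not the 2x requirement on the pieces plus extracted data.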
I tried to pipe it through a FIFO (mkfifo), but that didn't work either:
Extract1:
Code:
mkfifo /tmp/tarfifo.tar.xz
tar xJf /tmp/tarfifo.tar.xz
It will hang and wait for data from fifo.
Extract2: Concatenate the first transferred piece into the FIFO:
Code:
cat /mnt/usb64g/hugedir.tar.xz.1 >>/tmp/tarfifo.tar.xz
Fails with error message:
Code:
xz: (stdin): Unexpected end of input
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
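The reason Extract2 fails: each "cat ... >> fifo" opens the FIFO, writes, and closes it again, and as soon as the last writer closes, the reader sees end-of-file even though more pieces are still coming, so xz treats the stream as truncated. A minimal illustration (the FIFO path is illustrative):
Code:

```shell
set -e
fifo=$(mktemp -u)                 # fresh path for a demo FIFO
mkfifo "$fifo"
( printf 'part1' > "$fifo" ) &    # writer: open, write one piece, close
out=$(cat "$fifo")                # reader gets the piece, then immediate EOF
echo "reader saw: $out"           # prints: reader saw: part1
rm "$fifo"
```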
Solution #7 using dar worked, but it seems to require all the pieces twice; at least I couldn't get it to work if I removed each transferred chunk as soon as "dar --extract" had processed it once.
Create:
Code:
dar --verbose --create /mnt/usb64g/hugedir_xz --compression=xz --execute 'read -p "Wrote %p/%b.dar.%n : Hit RETURN"' --slice 60G --fs-root /home/user/hugedir
Extract:
Code:
dar --extract /mnt/usb64g/hugedir_xz --execute 'read -p "Wants %p/%b.dar.%n - Hit RETURN"'
When extracting, it asks for volumes 1, 2, 3, 4, ... but after the last volume/chunk it wants volume 1 again, then 2, and so on: it reads every chunk twice. So here again, if there is not enough space to hold hugedir (256 GB) twice (though the second copy is xz-compressed), this solution does not work.
Using tar without compression works:
Create:
Code:
cd /home/user
tar --create --tape-length=60G --file /mnt/usb64g/hugedir.tar hugedir
Prepare volume #2 for ‘hugedir.tar’ and hit return:
Prepare volume #3 for ‘hugedir.tar’ and hit return:
Prepare volume #4 for ‘hugedir.tar’ and hit return:
In between these prompts, the 64 GB USB flash drive is carried between the source and target machines twice per volume, i.e. four round trips in total.
Extract:
Code:
cd /home/user2
tar --extract --tape-length=60G --file /mnt/usb64g/hugedir.tar
Prepare volume #2 for ‘hugedir.tar’ and hit return:
Prepare volume #3 for ‘hugedir.tar’ and hit return:
Prepare volume #4 for ‘hugedir.tar’ and hit return:
(Disclaimer: I didn't actually try this with 60G "tapes", only with a smaller directory and 100k "tapes".)
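That small-scale rehearsal can be scripted end to end: in multi-volume mode GNU tar accepts several --file options and moves on to the next one automatically, so no prompting is needed. A hedged sketch (volume names and sizes are illustrative):
Code:

```shell
set -e
work=$(mktemp -d); cd "$work"
mkdir hugedir
dd if=/dev/zero of=hugedir/data bs=1k count=30 2>/dev/null

# --tape-length counts units of 1024 bytes, so 20 means 20 KiB per volume.
tar --create --multi-volume --tape-length=20 \
    --file vol1.tar --file vol2.tar --file vol3.tar hugedir

mkdir out
tar --extract --multi-volume \
    --file vol1.tar --file vol2.tar --file vol3.tar -C out

cmp hugedir/data out/hugedir/data && echo "round trip OK"
```

With a single --file per run, as in the commands above, tar instead prompts "Prepare volume #N", which is what allows swapping the USB drive between volumes.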
The --tape-length (-L) method with tar does not work with compression, though. In fact, even if split had a --pause option it would not help, because tar refuses to extract files from a compressed tar archive piece by piece.
One solution is to compress the files first on the source machine, transfer them with "tar -L", and then decompress on the target machine:
Code:
find /home/user/hugedir -type f -exec xz --compress '{}' ';'
Transfer with "tar -L" in pieces and then decompress:
Code:
find /home/user2/hugedir -type f -name "*.xz" -exec xz --decompress '{}' ';'
Compression with tar through a FIFO (mkfifo) would work if the FIFO somehow did not deliver EOF (end of file) while the parts are being concatenated into it and "tar xJf" reads from the other end. What is needed is a FIFO where the writer explicitly signals EOF: as long as files are concatenated into it and read out, it should not report that there is nothing left to read. (I tried Googling around but found nothing suitable.)
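A plain FIFO actually behaves this way already: the reader only sees EOF when the *last* writer closes the write end. So the trick is to keep one write descriptor open in the shell across all the pieces and close it explicitly at the end. A self-contained sketch with small demo files (the real paths would be the USB pieces, and each piece can be deleted as soon as it is fed in):
Code:

```shell
set -e
work=$(mktemp -d)
mkdir "$work/hugedir"
printf 'hello\n' > "$work/hugedir/file.txt"

# Stand-ins for the real archive: a small xz-compressed tar, split in pieces.
tar -C "$work" -cJf "$work/hugedir.tar.xz" hugedir
split -b 64 "$work/hugedir.tar.xz" "$work/hugedir.tar.xz."

mkfifo "$work/fifo"
mkdir "$work/out"
tar -C "$work/out" -xJf "$work/fifo" &   # reader blocks on the FIFO

exec 3>"$work/fifo"                      # hold the write end open on fd 3
for piece in "$work"/hugedir.tar.xz.*; do
    cat "$piece" >&3                     # feed one piece...
    rm "$piece"                          # ...and delete it before the next
done
exec 3>&-                                # close fd 3: only now tar sees EOF
wait                                     # let tar finish
cat "$work/out/hugedir/file.txt"         # prints: hello
```

In the real scenario the loop body would be "transfer the next chunk on the USB drive, cat it to fd 3, delete it", so only one piece ever lives on the target at a time.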