Maybe not useful, but why not use zstd to compress the file? It is VERY fast, many times faster than xz or gzip.
Then you can split the compressed output, with something like:
Code:
zstd -c <your big file> | split -b200M
This command compresses the file and cuts the compressed stream into pieces (in this case 200MB each).
When done, you find files named xaa, xab, xac, etc.
You can send the pieces one by one, so in case of an error you only have to transfer that one 200MB piece again.
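To see the whole sending side in action, here is a small end-to-end sketch. The file name bigfile and the piece sizes are just examples for the demo; the checksum step is an extra suggestion so the receiver can verify each piece before reassembling:

```shell
# create a sample file standing in for your real big file (5 MB of random data)
head -c 5242880 /dev/urandom > bigfile
# compress and cut the compressed stream into 1 MB pieces (xaa, xab, ...)
zstd -c bigfile | split -b 1M
# record a checksum per piece; the receiver re-runs 'sha256sum -c' after transfer
sha256sum x?? > pieces.sha256
sha256sum -c pieces.sha256
```

If a piece arrives corrupted, `sha256sum -c` names it, so you know exactly which 1MB (or 200MB) chunk to resend.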
At the other site, the commands could be:
Code:
cat x?? > "original file name".zst
and then
Code:
unzstd "original file name".zst
Note the single ">" (not ">>"), so you overwrite any leftover file instead of appending to it, and the .zst suffix, which unzstd expects.
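If you don't need to keep the compressed copy on the receiving side, the two steps can be combined into one pipe. A runnable sketch (bigfile and restored are stand-in names for the demo):

```shell
# demo: make a sample, split its compressed form, then restore it in one pass
head -c 3145728 /dev/urandom > bigfile   # 3 MB stand-in for your real file
zstd -c bigfile | split -b 1M            # produces xaa, xab, xac
cat x?? | zstd -dc > restored            # reassemble + decompress, no temp .zst
cmp bigfile restored && echo "files match"
```

This way no intermediate .zst file ever touches the disk on the receiving machine.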
Of course, everything said about the network speed still applies. But zstd is VERY fast, so you don't have to wait for hours while compressing. The same goes for decompressing.
You could check how big the compressed file is and, based on your network speed, estimate how long the transfer will take.
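That estimate is easy to script. The 100 Mbit/s figure below is just an assumed link speed, and bigfile.zst is a demo file created on the spot; swap in your own numbers:

```shell
# build a stand-in compressed file for the demo (your real .zst goes here)
head -c 20971520 /dev/zero | zstd -c > bigfile.zst
BYTES=$(wc -c < bigfile.zst)        # compressed size in bytes
SPEED=$((100 * 1000 * 1000 / 8))    # assumed 100 Mbit/s link, in bytes/second
echo "roughly $((BYTES / SPEED + 1)) second(s) to transfer"
```

Highly compressible data (like the zeros above) shrinks to almost nothing, so the real transfer time depends entirely on how well your data compresses.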
Using public services (like git hosting, gdrive, etc.) could be against company policy anyway. Be careful!
PS: zstd can be installed on Fedora with dnf. It should be available for other distros too; maybe someone here can help with that.