My approach would be this:
Make a list of all files that will be backed up, along with each file's size. For each file, the size should include not just the size of the actual file, but also any overhead that tar adds per file. To determine that overhead (you don't need to be exact, but the closer the better), run a few tar experiments and examine hex dumps of the output files.
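For example (just a sketch; the test file names here are made up), something like this shows the pattern:

dd if=/dev/zero of=test1 bs=1k count=100    # a couple of small test files
dd if=/dev/zero of=test2 bs=1k count=100
tar cf one.tar test1                        # archive one file...
tar cf two.tar test1 test2                  # ...then two, and compare
ls -l test1 test2 one.tar two.tar           # note: GNU tar pads the whole archive to a 10KB record by default
od -A d -c two.tar | less                   # the dump shows a 512-byte header per file, data padded to 512-byte blocks

The rule of thumb that falls out is roughly 512 bytes of header per file, with each file's data rounded up to the next 512-byte boundary, plus some zero padding at the end of the archive.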
On each pass, pick 499MB worth of files, tar them, and send them along to the appropriate disk.
I'm thinking that's easier than keeping track of the size of the tar file as you go and figuring out where to pick up on the next pass; it's simpler to split the files into passes before you start.
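A rough sketch of that splitting loop (sizes.txt, the output file names, and the per-file overhead value are all placeholders; adapt to taste):

#!/bin/bash
# sizes.txt holds one "size filename" pair per line, e.g. from: find /backup/dir -type f -printf '%s %p\n'
limit=$((499 * 1024 * 1024))    # 499MB budget per pass
overhead=1024                   # per-file tar overhead you measured; this number is a guess
pass=1
total=0
batch=()
while read -r size name; do
    need=$((size + overhead))
    if (( total + need > limit )) && (( ${#batch[@]} > 0 )); then
        tar cf "pass$pass.tar" "${batch[@]}"    # or pipe it straight to the receiving end
        pass=$((pass + 1))
        total=0
        batch=()
    fi
    batch+=("$name")
    total=$((total + need))
done < sizes.txt
if (( ${#batch[@]} > 0 )); then                 # whatever is left makes the final pass
    tar cf "pass$pass.tar" "${batch[@]}"
fi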
Your approach is to build the 499MB of tar data on standard output and send it on the fly, right? In that case, you're right: rsync doesn't look too promising. Have you considered nc6? If it's not on your system, it's available at http://www.freshmeat.net
The "nc" stands for "netcat"; it's useful for piping stuff through standard output on the sending system and receiving it through standard input on the other end.
If you're not familiar with bash scripting, google this:
bash script tutorial
and have yourself a read-fest.
Also, do this at the command line:
man bash # of course!
man tar # of course!
man od # for dumping a tar file to determine overhead per file
man less # you're not likely to fit the dump of a tar file on one screen
man nc6 # if that's the way you want to go
I'm sure someone will come along with a complete solution, but you'll have more fun if you do it yourself!
Hope this helps.