Look in Section 4.6 of the tar info manual.
tar -C sourcedir -cf - . | tar -C targetdir -xf -
tar --create \
    --file - \
    --format=gnu <directory list> | ssh user@host "cat > $str"
This would stream the entire archive to the storage server instead of breaking it up into a multi-volume set, and it won't create a temporary tar file locally that then has to be moved to the remote server. Note the quotes around the remote cat, so the redirection happens on the storage server rather than locally.
AFAIK this won't make the backup itself any easier, but it might make restoring easier. So maybe keeping the same backup script, but using ssh to restore through a pipe, is what you are looking for. I have restored from a single backup this way, but not from a multi-volume backup. I don't know whether you can cat the volumes of a multi-volume tar backup (on the storage server) and use tar on the local side to restore files, but I think it will work.
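For the single-archive case, the restore through a pipe is just the reverse direction (user@host, $str, and targetdir are the same placeholders as above):

ssh user@host "cat $str" | tar -C targetdir -xvf -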
Using pipes this way may be more flexible if you use cpio instead of tar. I haven't tried it using dar.
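If you want to try the cpio route, a minimal untested sketch using the standard find | cpio pattern (backup.cpio is a made-up filename):

find sourcedir -depth -print0 | cpio --null --create --format=newc | ssh user@host "cat > backup.cpio"
ssh user@host "cat backup.cpio" | cpio --extract --make-directories

Run the second command from inside targetdir, since cpio extracts relative to the current directory.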
You would need to use pubkey authentication. Locally, the script needs to be run as a backup user or root, but on the storage server, a normal user account would do.
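Assuming OpenSSH, the key setup is the usual ssh-keygen/ssh-copy-id dance (the key path is arbitrary; an empty passphrase lets the backup run unattended, so protect the key file):

ssh-keygen -t ed25519 -f ~/.ssh/backup_key -N ""
ssh-copy-id -i ~/.ssh/backup_key.pub user@host

Then add -i ~/.ssh/backup_key to the ssh calls in the pipelines above.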
I would also recommend using the -g (--listed-incremental) option with tar for incremental backups. This will cut down the size of the backups taken in between weekly/bi-weekly/monthly full backups.
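In a script, that looks something like this (paths and filenames are made up; the first run with a fresh snapshot file is a full backup, and later runs reusing the same file only archive what changed):

tar -C / -g /var/backups/home.snar -cf - home | ssh user@host "cat > full.tar"
tar -C / -g /var/backups/home.snar -cf - home | ssh user@host "cat > incr-$(date +%F).tar"

To restore, you extract the full archive first and then each incremental in order.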
My favorite one that I have played with is something like:
tar -C / -g podcasts/.snar -cf - <directory list> | tee /mnt/ndas/backupfile.tar | ssh user@host "tar -C / -xvf -" > backuplog
I used this one-liner to replicate new podcasts from one computer to another while simultaneously creating an incremental backup on a mounted NAS share. This is from memory; IIRC I also used the -v option on the remote tar so that the list of extracted files ended up in backuplog.