The "set" command will show you what the arguments to a command will look like when a command runs.
$0 is the command. $1 is the first argument. $2 is the second argument, etc. This was just for a demonstration.
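Here is a minimal sketch of the idea. `set' replaces the positional parameters with its arguments, so you can inspect them one at a time (the words alpha, beta, gamma are just made-up stand-ins):

    $ set alpha beta gamma
    $ echo "$1 $2 $3"
    alpha beta gamma
    $ echo "$#"
    3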
With just 4 directories and no files in /mnt/1TB/, you won't run out of memory when the bash shell expands the wildcard.
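You can see for yourself what the shell expands a wildcard to before handing it to a command (the directory names here are hypothetical):

    $ echo /mnt/1TB/*
    /mnt/1TB/dir1 /mnt/1TB/dir2 /mnt/1TB/dir3 /mnt/1TB/dir4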
For incremental backups, I would use tar with the -g argument. See the info manual; there is a section called "Incremental Dumps" that you want to read. Here is the short description of this option:
    During a `--create' operation, specifies that the archive that
    `tar' creates is a new GNU-format incremental backup, using
    SNAPSHOT-FILE to determine which files to backup.  With other
    operations, informs `tar' that the archive is in incremental
    format.  *Note Incremental Dumps::.
Only new or changed files would be copied.
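For example, something along these lines (the snapshot and archive filenames are placeholders; pick your own):

    # First run: full backup; tar creates the snapshot file
    tar -g /root/1TB.snar -C /mnt/1TB -cf /mnt/backup/full.tar .
    # Later runs with the same snapshot file only pick up new/changed files
    tar -g /root/1TB.snar -C /mnt/1TB -cf /mnt/backup/incr-1.tar .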
Maybe I should explain the line I gave better.
`tar -C /mnt/1TB'
The -C option changes the working directory to /mnt/1TB, which is the base of the filesystem you want to back up.
`-cf - .'
The -c option tells tar that you are creating an archive. The -f <filename> argument tells tar the filename of the archive. The dash `-' tells tar to stream the archive to stdout instead of writing it to a file. The dot character refers to the current directory. Normally you would list the directories or files to be backed up; the `.' says to back up the files & directories in the current directory.
The vertical bar `|' is the pipe character. The output of the command to the left of the `|' becomes the input of the command to the right.
`tar -C /mnt/backup'
The tar command on the right-hand side runs at the same time as the tar command on the left-hand side. Its -C option changes the current directory to the destination.
`-xvf - > logfile'
The -x tells tar that you want to extract. The -v tells tar to list the files as they are extracted. The `-f -' tells tar to read the archive from stdin instead of reading it from a file. The `> logfile' redirects that listing to a file, so you have a record of what was copied.
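Putting the segments above back together, the whole line would be:

    tar -C /mnt/1TB -cf - . | tar -C /mnt/backup -xvf - > logfile

(Adjust /mnt/backup and logfile to match your own setup.)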
On the right-hand side, you could even use ssh to run the extraction on another computer. This allows you to back up from one computer to another, even securely across the Internet. (If you use pubkey authentication, you won't have the password prompt getting in the way of the pipe.)
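Something like this, for example (user@backuphost is a placeholder for your own login and remote machine):

    tar -C /mnt/1TB -cf - . | ssh user@backuphost 'tar -C /mnt/backup -xvf -' > logfile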
I'm wondering if there might be a problem with the /mnt/1TB filesystem causing the lockup. Another possibility is that the destination filesystem is FAT and you are trying to save a file over its maximum file size (2GB on FAT16, 4GB on FAT32).
If one of these mounts is a network mount, then you have to consider whether the network protocol imposes a limit on filesystem size or on file size. This can be a problem if you use samba, for example. There may also be limitations on the remote host.
Another possible problem is filesystem corruption or bad blocks on either drive. Running a filesystem check on both would be a good idea.
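Roughly like this; unmount first, and note that /dev/sdb1 is a placeholder for whatever device is actually mounted on /mnt/1TB:

    umount /mnt/1TB
    fsck -f /dev/sdb1
    mount /mnt/1TB

(The last line assumes /mnt/1TB has an fstab entry.)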
For such large backups, I would work from the terminal instead of a graphical file manager. The graphical client may be doing extra work, such as sorting the filenames in every subdirectory.
Also check your /var/log/messages log. If there is a filesystem problem, the kernel might log it.
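To watch for such messages while the backup runs, you can leave this running in another terminal:

    tail -f /var/log/messages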