The 3 GB overhead may be due to a large number of files: each hard link uses a small amount of disk space, and every directory takes at least 4096 bytes.
So with 'find folder | wc -l' you can count the entries and see whether you have a huge number of files.
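As a minimal sketch (with made-up paths under /tmp), this is what the count looks like; a few million entries at ~4 KiB of metadata each add up to gigabytes quickly:

```shell
# Demo layout with hypothetical paths: two files and one subfolder.
mkdir -p /tmp/demo_backup/sub
touch /tmp/demo_backup/a /tmp/demo_backup/sub/b

# 'find' lists the folder itself, the subfolder, and both files.
find /tmp/demo_backup | wc -l   # → 4
```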
Another possible cause is that rsync could not use hard links because the original folder has different timestamps (or permissions, or ownership) than the existing backup. You can use 'stat file' to check whether the original and backup files are really identical.
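A sketch of that check, assuming GNU coreutils 'stat' and hypothetical paths: compare size, mtime, mode, and owner side by side, then compare inode numbers to see whether the two entries are really one hard-linked file or two separate copies.

```shell
# Hypothetical demo: a file and a metadata-preserving copy of it.
mkdir -p /tmp/orig /tmp/backup
echo data > /tmp/orig/f
cp -p /tmp/orig/f /tmp/backup/f     # -p preserves mode and timestamps

# Size, mtime (epoch), mode, owner:group — these must match for rsync
# to consider hard-linking against the previous backup.
stat -c '%s %Y %a %u:%g  %n' /tmp/orig/f /tmp/backup/f

# Different inode numbers mean the files are still two separate copies,
# even though their metadata is identical.
stat -c '%i %n' /tmp/orig/f /tmp/backup/f
```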
You can also find the responsible folder by running 'du -chs /newbackup/folder1 /oldbackup/folder1', then the same on folder2, folder3, etc.
That way you will find out where hard links were used and where they were not; then use 'stat' to check why rsync did not create a hard link.
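A sketch of that comparison loop, with made-up /tmp paths standing in for /newbackup and /oldbackup: 'du' counts a hard-linked file only once per invocation, so when the combined total of a folder pair is close to the size of a single copy, deduplication worked there.

```shell
# Hypothetical demo: one 100 KiB file shared between old and new backup
# via a hard link.
mkdir -p /tmp/newbackup/folder1 /tmp/oldbackup/folder1
dd if=/dev/zero of=/tmp/oldbackup/folder1/big bs=1k count=100 2>/dev/null
ln /tmp/oldbackup/folder1/big /tmp/newbackup/folder1/big

# Compare each subfolder pair; the 'total' line stays near 100K because
# du counts the hard-linked file only once.
for d in /tmp/newbackup/*/; do
  name=$(basename "$d")
  du -chs "/tmp/newbackup/$name" "/tmp/oldbackup/$name" | tail -n 1
done
```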
Finally, you can replace duplicate files with hard links using 'fdupes -r1L /newbackup /oldbackup'.
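If fdupes is not installed, the replacement it performs can be sketched manually for a single pair of files (hypothetical paths; this is not fdupes itself, just the same idea: if the contents match, relink the duplicate):

```shell
# Hypothetical demo: two identical files on separate inodes.
mkdir -p /tmp/dedup/new /tmp/dedup/old
echo "same content" > /tmp/dedup/old/f
echo "same content" > /tmp/dedup/new/f

# Compare contents byte for byte; on a match, replace the duplicate
# with a hard link to the original.
if cmp -s /tmp/dedup/old/f /tmp/dedup/new/f; then
  ln -f /tmp/dedup/old/f /tmp/dedup/new/f
fi

# Both names now point at the same inode, so the data is stored once.
stat -c '%i %n' /tmp/dedup/old/f /tmp/dedup/new/f
```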
You can find a script to back up the whole disk with rsync here: http://blog.pointsoftware.ch/index.p...th-hard-links/
It deduplicates files via hard links, and also offers MD5 integrity signatures, 'chattr' protection, filter rules, disk quotas, and a retention policy with exponential distribution (backup rotation that keeps more recent backups than older ones).
It has already been used in disaster recovery plans for banking companies to replicate datacenters, using only a little network bandwidth and an encrypted transport tunnel.
It can be used locally on each server, or over the network with a central remote backup server.
And it is free, of course ^^