Well, the file system is ext3.
I read up on the reserved-blocks overhead topic you mentioned, but it doesn't seem to explain this problem.
Also, I tried "du -sh ." to check the size actually used, and this is what I get (this result is the correct space used):
hp#1 - 325G .
hp#2 - 325G .
The df -h returns:
hp#1 - /dev/sdc1 429G 48G 359G 12% /r03
hp#2 - /dev/sdb1 429G 325G 82G 80% /n03
Maybe there isn't much of a problem here, but it just seems really strange that df -h shows such skewed results on the two machines. And I worry that down the line the machines will complain about not having enough disk space.
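In case it helps, this is roughly the comparison I'm running on each box (the mount point and device name below are placeholders, not necessarily my exact paths):

```shell
#!/bin/sh
# Compare what the files actually occupy (du) against what the
# filesystem itself reports (df) for a given mount point.
# Pass the mount point as the first argument, e.g. /r03 or /n03.
MNT="${1:-/tmp}"

echo "== apparent usage under $MNT (du) =="
du -sh "$MNT" 2>/dev/null

echo "== filesystem accounting for $MNT (df) =="
df -h "$MNT"

# ext3 reserves about 5% of blocks for root by default; on a 429G
# volume that is roughly 21G, nowhere near the 277G gap above.
# To check the reserve (needs root; device name is a placeholder):
#   tune2fs -l /dev/sdc1 | grep -i 'reserved block count'
```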
Of course, this problem never occurred when using NFS, but backups took 6x as long.. as did restores.
FEARS COME TRUE
Backups to the mounted drive now fail because the system thinks there isn't enough drive space.
Maybe this will help: the old setup, which it looks like I'll be returning to, was..
hp#1 - directly mounts the SAN logical drives (like it does now)
hp#2 - connects to the drives via NFS through hp#1
Again, backups ran slowly going through NFS, so instead I had hp#2 directly mount the SAN drives like hp#1.. Backups are much faster, but the free space isn't being reported correctly..??..
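For reference, the difference between the two setups on hp#2 looks roughly like this in /etc/fstab (the hostnames, device name, export path, and options here are placeholders, not my actual config):

```
# old setup: hp#2 reaches the SAN through hp#1 over NFS
hp1:/r03    /n03    nfs     defaults    0 0

# new setup: hp#2 mounts the SAN logical drive directly
/dev/sdb1   /n03    ext3    defaults    0 0
```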