Long story short:
I backed up ~5TB of media to a 6TB USB3 array. I think the array ended up with two different mount points, and I didn't verify the data on the USB3 array before killing the original.
I'm running fsck on the array and it's taking all day. It ran out of memory, so I added a spare 130GB partition as swap. Writing to it isn't horribly slow at ~80MB/s.
Code:
[root@svrns ~]# free -g
             total       used       free     shared    buffers     cached
Mem:            15         15          0          0          0          0
-/+ buffers/cache:         15          0
Swap:          130         55         74
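For anyone in the same spot, the "add a spare partition as swap" step above can be sketched as follows. This is a minimal sketch using a swap file (a dedicated partition works the same way, with mkswap/swapon pointed at the device node instead); the path /swapfile and the 1G size are placeholders, not from the original post:

```shell
# Preallocate backing storage (use dd if fallocate isn't supported on this fs)
fallocate -l 1G /swapfile
# Swap must not be readable by other users
chmod 600 /swapfile
# Write the swap signature
mkswap /swapfile
# Enable it (needs root); the kernel can start paging to it immediately
swapon /swapfile
# Confirm it is active
swapon --show
```

To keep it across reboots you would also add a matching line to /etc/fstab, but for a one-off fsck run that isn't necessary.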
My question is:
Am I wasting my time running fsck on a 6TB array with only 145GB of memory (15GB RAM + 130GB swap)? Does this task really require memory on the order of the array size, ~6TB?
EDIT:
Apparently it needs a lot more than RAM alone, anyway:
Code:
[root@svrns ~]# free -g
             total       used       free     shared    buffers     cached
Mem:            15         15          0          0          0          0
-/+ buffers/cache:         15          0
Swap:          130        115         14
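One note in case it helps: if the filesystem is ext3/ext4 (so fsck here is really e2fsck), e2fsck can be told to keep its in-memory tables in scratch files on disk instead of filling RAM and swap. A sketch of the relevant config, assuming e2fsck and with /var/cache/e2fsck as an example directory (it must exist and have plenty of free space):

```
# /etc/e2fsck.conf
# With scratch_files set, e2fsck spills its internal tables to files in
# this directory rather than holding them in memory -- slower, but with
# a bounded memory footprint.
[scratch_files]
directory = /var/cache/e2fsck
```

This trades speed for memory, which is usually a better deal than thrashing through swap on the same bus as the array being checked.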