elcore: the drive has a single ext4 partition, and yes, I had to use GPT as it is over 2 TB in size.
It's a WD drive. It is not being used as a boot drive (I have a separate DOS-partition-style 2 TB drive for swap and the root mount).
Apparently badblocks uses 32-bit variables for its block counts. That could be fixed in the source, but maybe there is a reason it hasn't been.
I am aware of 48-bit LBA as well as 32-bit LBA, and would have thought it could handle either, but maybe not.
The workaround is to specify the block size in the badblocks command (rather than leaving it at the default of 512 bytes). The actual block size in use can be obtained from
Code:
blockdev --getbsz /dev/sdx
(where /dev/sdx is your drive)
Then use (in my case)
Code:
badblocks -s -b 4096 /dev/sdx
This keeps the number of blocks small enough to fit in a 32-bit counter.
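To see why the larger block size helps, here is a back-of-envelope sketch of the arithmetic (the 3 TB drive size is an assumption for illustration; substitute the output of blockdev --getsize64 /dev/sdx for your own drive):

```shell
# Rough check (no real drive needed): does the block count
# fit in a 32-bit counter at 512-byte vs 4096-byte blocks?
bytes=$((3 * 1000**4))      # assumed 3 TB drive, decimal
limit=$((1 << 32))          # max count a 32-bit variable can hold
echo "512-byte blocks:  $((bytes / 512))"    # exceeds $limit
echo "4096-byte blocks: $((bytes / 4096))"   # well under $limit
```

At 512 bytes the count is roughly 5.86 billion, over the 2^32 (~4.29 billion) ceiling; at 4096 bytes it drops to about 732 million.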
I get the impression, though, that on even larger drives, such as 20 TB, either the filesystem block size will be large enough that lots of space is wasted on small files, or, if the block size is capped at 4 KiB or 8 KiB, badblocks will again hit its 32-bit limit (at 4 KiB blocks, 2^32 blocks is only 16 TiB).
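The same arithmetic can be run for a hypothetical 20 TB drive to see where the counter overflows again (the drive size and block sizes here are assumptions, not measurements):

```shell
# Where a 32-bit block counter lands for an assumed 20 TB drive.
bytes=$((20 * 1000**4))     # 20 TB, decimal
limit=$((1 << 32))          # 2^32 blocks
for bs in 4096 8192; do
  blocks=$((bytes / bs))
  if [ "$blocks" -gt "$limit" ]; then status=overflows; else status=fits; fi
  echo "bs=$bs: $blocks blocks ($status)"
done
```

By this reckoning 4 KiB blocks overflow at 20 TB (the cap is 16 TiB), while 8 KiB blocks push the ceiling to 32 TiB, so they only buy headroom up to roughly 35 TB of decimal capacity.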