Quote:
Originally Posted by Sydney
You can run...
Code:
df -h | grep /I/assume/you/know/the/mount | awk '{print $5}'
into a variable while it is not 100%. Once it is 100% a while loop can end and spit out what you want.
In most cases that won't work as expected. If you have 5k left and you try to add a 6k file, it will fail, but the drive will still have 5k free. This would, at best, confuse the user. At worst, it could generate an endless loop.
This is a classic case of a simple concept that's not so simple to implement. It ended up being more complex than I initially thought it would be once several fine points of "the way things work" surfaced.
If you're really masochistic, you could write an "add to thumbdrive" script:
Run df on the thumbdrive and capture the fourth field of the second line of output with sed or awk. That's the number of (1k) free blocks on the drive.
Run du on the file or directory to be transferred and capture the first field on the last line which is the total number of (1k) blocks to be copied.
Test that the number of blocks to be copied (rounded up to the nearest multiple of 4) is less than the number of free blocks available.
If it's less, copy it. If not, issue a "not enough space" message. The 4 is the device block size, which for most modern disks is 4k. Files are allocated in whole blocks, not bytes, so a file's physical size is rounded up to a multiple of 4k. If your drive uses a different block size, use that instead.
(Upon rereading this: even that isn't good enough, because you have to round up the size of each file individually! It would be easier to just add a fudge factor of "reserved" extra free space: 2k times the maximum number of files you'll copy at one time. That would eliminate *almost* all false positives.)
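If you want to try it anyway, here's a minimal sketch of the steps above as a shell function. DEST (the drive's mount point) and the name add_to_drive are my own invented names, and a 4k device block size is assumed; treat it as a starting point, not a finished script.

```shell
# Sketch only: assumes DEST is set to the drive's mount point and a 4k
# device block size. Rounding is applied to the total, not per file, so
# the "fudge factor" caveat above still applies.
add_to_drive() {
    # free 1k blocks: fourth field of the second line of df's output
    free=$(df -k "$DEST" | awk 'NR==2 {print $4}')
    # 1k blocks to be copied: first field of du's summary line
    need=$(du -ks "$1" | awk '{print $1}')
    # round up to the nearest multiple of 4 (4k allocation blocks)
    need=$(( (need + 3) / 4 * 4 ))
    if [ "$need" -lt "$free" ]; then
        cp -r "$1" "$DEST"
    else
        echo "not enough space: need ${need}k, free ${free}k" >&2
        return 1
    fi
}
```

Call it as `add_to_drive somefile` after setting DEST.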
This algorithm could also generate false negatives (failure to copy) if some of the files to be copied were already present on the destination - so that some of the destination space would be reused.
These things can be fixed, but it turns a single command into a 30-to-50-line script, or even larger.
Although this would work, it's a lot more trouble than just letting cp complain as suggested above.
Where it would make a difference is when copying multiple files at a time. In that case, some of the files may be copied before space runs out, and those partial results would have to be dealt with manually. Alternatively, the files could first be placed in a temporary location on the destination where they can easily be deleted on failure. After a successful copy, they would just be moved (mv) to their final destination, which only adjusts directory entries and doesn't move any actual content a second time.
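A sketch of that staging idea, with DEST and the name stage_copy again being invented names. The temp directory has to live on the destination drive itself, since mv is only cheap within one filesystem.

```shell
# Sketch only: DEST must be the drive's mount point so the staging
# directory and the final location are on the same filesystem.
stage_copy() {
    tmp="$DEST/.incoming.$$"          # temp area on the destination drive
    mkdir -p "$tmp" || return 1
    if cp -r "$@" "$tmp"; then
        # success: mv just rewrites directory entries, no second copy
        mv "$tmp"/* "$DEST"/ && rmdir "$tmp"
    else
        # failure: partial files are all in one place, easy to delete
        rm -rf "$tmp"
        return 1
    fi
}
```

Usage: `stage_copy file1 file2 dir3` with DEST set beforehand.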