LinuxQuestions.org

copy sparse files (https://www.linuxquestions.org/questions/linux-newbie-8/copy-sparse-files-818089/)

tincboy 07-05-2010 09:41 AM

copy sparse files
 
I have a big sparse file (about 100 GB, of which only about 1 GB is actually used; it's a .raw file).
I want to copy this file faster than a normal copy with the cp command.
Is anyone familiar with this?

pixellany 07-05-2010 09:50 AM

My first reaction: Why would you want or need such a file?

Regardless, I'd assume that compressing the file before copying would take more time than simply copying it. But if you need to copy it many times, then just compress it once.

You could also try "dd", but I have no idea if it would be faster. Maybe try it on a smaller file.
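
If your dd supports a conv=sparse option, something like this might be worth a try (untested here; the file names are just placeholders):

Code:
dd if=big.raw of=big-copy.raw bs=1M conv=sparse

conv=sparse makes dd seek over all-NUL output blocks instead of writing them, so the copy stays sparse - though it still has to read the whole 100 GB of input.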

onebuck 07-05-2010 09:51 AM

Hi,

Quote:

Originally Posted by tincboy (Post 4024293)
I have a big sparse file (about 100 GB, of which only about 1 GB is actually used; it's a .raw file).
I want to copy this file faster than a normal copy with the cp command.
Is anyone familiar with this?

Look at 'sparse file:copying'.
:hattip:

business_kid 07-05-2010 10:04 AM

Quote:

Originally Posted by tincboy (Post 4024293)
I have a big sparse file (about 100 GB, of which only about 1 GB is actually used; it's a .raw file).
I want to copy this file faster than a normal copy with the cp command.
Is anyone familiar with this?

Yes, the concept is called impatience ;-).
Do you need the bloat - the extra 99 GB? If so, and you have another 30 or 40 GB free for a temporary file, why not 'gzip sparse_file'? If you don't need the crap, give details on what you want to keep and what can be left behind.

vikas027 07-05-2010 10:26 AM

Use 'bzip2 -9 filename'. This gives maximum compression.

It may take considerable time compressing/uncompressing the file, though.

I would suggest you bzip2 it if you need to copy it again and again.
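
A minimal example (the file name is just a placeholder):

Code:
bzip2 -9 bigfile.raw        # replaces bigfile.raw with bigfile.raw.bz2
bunzip2 bigfile.raw.bz2     # restores bigfile.raw

One caveat: decompressing writes the zero regions back out as real blocks, so the restored copy won't be sparse on disk unless you re-copy it with something sparse-aware.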

MTK358 07-05-2010 11:42 AM

xz takes very, very, very long to compress, but it produces much smaller files and, ironically, decompresses really fast!

tincboy 07-06-2010 12:51 AM

I have many of these files, and copying them is an everyday job of mine.
The most important factor for me is time;
I want to do it faster than the cp command.

syg00 07-06-2010 02:10 AM

You probably can't.
I just ran some tests, and "cp" appears to recognise sparse (input) files OK. However, strace shows it issuing a read every 32k, with a corresponding seek on the output fd.
All that takes time, even if the file is completely empty (as in my test).

Update: it's got me wondering now - how much benefit is there in that? A file of the same size full of random data issues the same number of reads, just with writes in place of the seeks. That takes much longer of course, but if a sparse file is (actually) zero bytes on disk, why all the reads ...

I'll see if I can chase this up tomorrow.
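
For anyone who wants to see this for themselves, something along these lines will show the syscall pattern (file names are placeholders):

Code:
strace -c cp big.raw copy.raw                          # per-syscall counts and timings
strace -e trace=read,write,lseek cp big.raw copy.raw   # reads on the input, lseeks/writes on the output

The first form just summarizes; the second prints the individual calls.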

business_kid 07-06-2010 02:42 AM

Just for sport, try a race. I would suggest gzip -1, as you are not particularly pressed for space. You could also set up a cron job to have the zipping done while you are at home :-D
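
A rough way to run that race (file names and target paths are placeholders; none of this has been timed here):

Code:
time cp bigfile.raw /target/bigfile.raw
time gzip  -1 -c bigfile.raw > /target/bigfile.raw.gz
time bzip2 -9 -c bigfile.raw > /target/bigfile.raw.bz2
time xz    -6 -c bigfile.raw > /target/bigfile.raw.xz

The -c flag writes to stdout, so the original file is left in place for the next contestant.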

catkin 07-06-2010 03:17 AM

rsync's --sparse option makes it "handle sparse files efficiently" (so says the man page).
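
Something like this, if it helps (the destinations are placeholders):

Code:
rsync --sparse --progress big.raw /mnt/backup/
# or, with the short option, to a remote host:
rsync -S big.raw user@host:/some/path/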

syg00 07-06-2010 04:05 AM

I was thinking about this on the ride home. Looking at the manpage confirms that "cp" is only looking to see if a sparse output allocation is required.
I'll check rsync tomorrow.

onebuck 07-06-2010 06:42 AM

Hi,

Quote:

Excerpt from 'sparse file: copying':

cp --sparse=always formerly-sparse-file recovered-sparse-file

It should be noted that some cp implementations do not support the --sparse option and will always expand sparse files, like FreeBSD's cp. A viable alternative on those systems is to use rsync with its own --sparse option[3] instead of cp.

'rsync --sparse' is a viable alternative to 'cp --sparse=always formerly-sparse-file recovered-sparse-file'.
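
To check that a copy really stayed sparse, something like this works (file names are placeholders):

Code:
cp --sparse=always big.raw big-copy.raw
ls -lh big.raw big-copy.raw     # apparent size - roughly 100G for both
du -h  big.raw big-copy.raw     # blocks actually allocated - should stay around 1G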
:hattip:

syg00 07-06-2010 07:40 PM

O.K., some more testing showed the above "cost" for cp is all in the set-up of a new file. Repeated copies into the (pre-allocated) destination file showed minimal reads and writes.
Far better than rsync (-b -S), in fact. Both cp and rsync created a sparse output, but rsync continued to read and write the entire file when only a couple of sectors out of 1 Gig had non-zero data; cp was much more efficient.
Similar results for a 5 Meg input.

My test, my data, my machine, YMMV, <blah>, <blah>, <blah> ...
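
For anyone wanting to repeat that sort of comparison, a rough sketch (not the exact commands or data used here; file names are placeholders):

Code:
dd if=/dev/urandom of=test.raw bs=4k count=2    # 8 KB of real data
truncate -s 1G test.raw                         # extend to 1 GiB; the rest is a hole
time cp --sparse=always test.raw copy-cp.raw
time rsync -tS test.raw copy-rsync.raw
du -h test.raw copy-cp.raw copy-rsync.raw       # all three should show only a few KB allocated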

tincboy 07-07-2010 01:24 AM

Quote:

Originally Posted by syg00 (Post 4025680)
O.K., some more testing showed the above "cost" for cp is all in the set-up of a new file. Repeated copies into the (pre-allocated) destination file showed minimal reads and writes.
Far better than rsync (-b -S), in fact. Both cp and rsync created a sparse output, but rsync continued to read and write the entire file when only a couple of sectors out of 1 Gig had non-zero data; cp was much more efficient.
Similar results for a 5 Meg input.

My test, my data, my machine, YMMV, <blah>, <blah>, <blah> ...

So do you think normal use of cp is the best choice?

syg00 07-07-2010 01:47 AM

Yes - especially if you can re-use the output files each day (after the first, obviously). That is, don't delete the output files each day; overwrite them.
The "-b" on the rsync was *bad* - but even with "-t" (or "-a"), "cp" was still marginally faster, which surprised me, I must admit.

