How To Resume Failed copy ( cp command ) where it left off?
In Windows there is a utility called "robocopy" that can resume a partially copied file where the original copy left off. I checked the man pages for "cp", "rcp", and "rsync" and I don't see the functionality I am looking for. I tried to make wget (which does resume just fine) work on the file system using the syntax "file://..../filename", but it doesn't work.
Anyone know of a tool that has the ability to do what I am looking for? I could put the file on a local web server then use wget, but is there a simpler solution to resuming the failed copy?
I made a little script for you, but it copies quite slowly because it copies in 1 byte blocks. Here it is in case it's of some use:
Code:
#!/bin/bash
#set -x

if [ $# -ne 2 ]
then
    echo "$0: Usage: $0 original copy"
    exit 1
fi

# We need some more meaningful names.
ORIGINAL="$1"
COPY="$2"

if [ ! -f "$ORIGINAL" ]
then
    # ORIGINAL wasn't found.
    echo "$0: $ORIGINAL: No such file"
    exit 1
fi

# Calculate the number of bytes to skip before we start copying.
if [ -f "$COPY" ]
then
    # This is a continuation of an earlier, interrupted copy.
    WC_OUTPUT=$(wc --bytes "$COPY")
    # Since wc pads its output with spaces, making it hard to reliably
    # parse, we have to do a little hack job:
    # Make sure there's at least one space at the beginning.
    WC_OUTPUT=" $WC_OUTPUT"
    # Squeeze out excess spaces, making the size the second field.
    SKIP_BYTES=$(echo "$WC_OUTPUT" | tr -s ' ' | cut -d' ' -f2)
else
    # This is the first attempt at copying--there's no COPY yet.
    SKIP_BYTES=0
fi

# Do the actual copying.
dd if="$ORIGINAL" of="$COPY" conv=notrunc bs=1 skip="$SKIP_BYTES" seek="$SKIP_BYTES"
I was thinking after I wrote it that it might be possible to optimize it by:
Using the cp command in the case it's the first attempt at copying (super easy to do).
In the case it's a continuation, use dd to copy one byte at a time until $COPY's size is a multiple of a much larger block size, then call dd again having it continue at that larger blocksize.
Unfortunately, I have no time or energy for such a task at the moment :-)
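The continuation idea above can be sketched roughly as follows. This is my own untested sketch of the optimization described, not part of the original script; the function name `resume_copy` and the 64 KiB block size are arbitrary choices. It copies single bytes only until the destination's size reaches a block boundary, then switches dd to large blocks.

Code:
```shell
#!/bin/bash
# Sketch: resume a copy byte-by-byte up to the next block boundary,
# then continue in large blocks.
resume_copy() {
    local ORIGINAL="$1" COPY="$2" BS=65536
    local SKIP REMAINDER PAD
    if [ -f "$COPY" ]; then
        SKIP=$(wc -c < "$COPY")
    else
        SKIP=0
    fi
    REMAINDER=$(( SKIP % BS ))
    if [ "$REMAINDER" -ne 0 ]; then
        # Copy single bytes until the copy's size is a multiple of BS.
        PAD=$(( BS - REMAINDER ))
        dd if="$ORIGINAL" of="$COPY" conv=notrunc bs=1 \
           skip="$SKIP" seek="$SKIP" count="$PAD" 2>/dev/null
        SKIP=$(( SKIP + PAD ))
    fi
    # SKIP is now block-aligned; copy the rest at the larger block size.
    dd if="$ORIGINAL" of="$COPY" conv=notrunc bs="$BS" \
       skip=$(( SKIP / BS )) seek=$(( SKIP / BS )) 2>/dev/null
}
```
If the source is shorter than one full block past the resume point, the byte-at-a-time dd simply stops at end of input and the second dd copies nothing, so the fast path only kicks in on genuinely large files.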
I'd have thought rsync would work. One of its main selling points is that it only copies file changes (including new files), so it ought to be able to handle that.
It does work; however, it will need to read BOTH the source and destination files (to compare them and figure out what needs to be copied). If the source or destination is on a USB stick and you've already transferred 650 MB out of 700, rsync will take forever.
By contrast, cURL will simply append to the destination (trusting the already-transferred bits), so for a failed cp it is the better choice.
The easiest way is the script given by Geremia above.
Some other examples of using rsync are as follows:
Quote:
# rsync -a /from/file /dest/file
# rsync -aP file user@host2:/path/to/new/dir/
# rsync -v --append /path/to/afile /mnt/server/dest/afile
If you know you simply need to append to the local file and do not want to use rsync (which could potentially take a long time calculating checksums), you can use curl. For example, if you have a large file on a slow removable USB stick mounted at /media/CORSAIR/somefile.dat and only half of it is in the current directory, to resume:
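The command would look something like the following. This is my reconstruction of the example the post leads up to, assuming your curl build supports the file:// protocol; with `-C -`, curl works out the resume offset from the current size of the output file and appends from there.

Code:
```shell
# Resume copying somefile.dat from the USB stick into the current directory.
# -C -  : continue from wherever the existing output file left off
# -o    : name of the local (destination) file
curl -C - -o somefile.dat file:///media/CORSAIR/somefile.dat
```
Like the dd approach, this trusts the bytes already on disk rather than re-checking them, which is exactly why it is fast on slow media.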