Old 05-18-2004, 08:50 PM   #1
fortezza
Member
 
Registered: Mar 2003
Location: Colorado
Distribution: Fedora Core 4
Posts: 297

Rep: Reputation: 30
How To Resume a Failed Copy (cp command) Where It Left Off?


In Windows there is a utility called "robocopy" that can resume partially copied files from where the original copy left off. I checked the man pages for "cp", "rcp", and "rsync" and I don't see the functionality I am looking for. I tried to make wget (which resumes just fine) work on the file system using the syntax "file://..../filename", but it doesn't work.

Anyone know of a tool that can do what I am looking for? I could put the file on a local web server and then use wget, but is there a simpler way to resume the failed copy?
 
Old 05-18-2004, 10:10 PM   #2
TheOther1
Member
 
Registered: Feb 2003
Location: Atlanta, GA
Distribution: RHAS 2.1, RHEL3, RHEL4, SLES 8.3, SLES 9, SLES9_64, SuSE 9.3 Pro, Ubuntu, Gentoo
Posts: 335

Rep: Reputation: 32
Most FTP daemons support resuming an upload/download. Check out vsftpd for a good, secure, fast FTP daemon.
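For example, assuming the file is being served by an FTP daemon, a client like lftp can continue a dead transfer with get -c (the host and path here are invented):

Code:
lftp -e 'get -c /pub/somefile.dat; quit' ftp.example.com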
 
Old 05-19-2004, 09:49 PM   #3
lyle_s
Member
 
Registered: Jul 2003
Distribution: Slackware
Posts: 392

Rep: Reputation: 55
I made a little script for you, but it copies quite slowly because it copies in 1-byte blocks. Here it is in case it's of some use:

Code:
#!/bin/bash

# Resume an interrupted copy of ORIGINAL into COPY.

if [ $# -ne 2 ]
then
	echo "$0: Usage: $0 original copy"
	exit 1
fi


# We need some more meaningful names.
ORIGINAL="$1"
COPY="$2"


if [ ! -f "$ORIGINAL" ]
then
	# ORIGINAL wasn't found.
	echo "$0: $ORIGINAL: No such file"
	exit 1
fi


# Calculate the number of bytes to skip before we start copying.
if [ -f "$COPY" ]
then
	# This is a continuation of an earlier, interrupted copy.
	# Reading the file on stdin makes wc print just the byte count,
	# with no filename or padding that would need to be stripped off.
	SKIP_BYTES=$(wc --bytes < "$COPY")
else
	# This is the first attempt at copying--there's no COPY yet.
	SKIP_BYTES=0
fi

# Do the actual copying.  conv=notrunc keeps dd from truncating the
# partial COPY before the remaining bytes are appended.
dd if="$ORIGINAL" of="$COPY" conv=notrunc bs=1 skip="$SKIP_BYTES" seek="$SKIP_BYTES"
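
If you save that as, say, resume-cp.sh (name invented), usage looks like this; rerunning the same command after an interruption picks up at the end of the partial copy:

Code:
./resume-cp.sh /mnt/usb/big.iso ./big.iso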

Lyle
 
Old 05-25-2004, 06:02 PM   #4
fortezza
Member
 
Registered: Mar 2003
Location: Colorado
Distribution: Fedora Core 4
Posts: 297

Original Poster
Rep: Reputation: 30
Thanks

I appreciate the script. Maybe I will write a copy utility in C++ that can resume at a given byte offset and share it open-source style.

Thanks Again.
 
Old 05-25-2004, 09:16 PM   #5
lyle_s
Member
 
Registered: Jul 2003
Distribution: Slackware
Posts: 392

Rep: Reputation: 55
I was thinking after I wrote it that it might be possible to optimize it by:
  1. Using the cp command when it's the first attempt at copying (super easy to do).
  2. When it's a continuation, using dd to copy one byte at a time until $COPY's size is a multiple of a much larger block size, then calling dd again to continue at that larger block size (see the sketch below).
Unfortunately, I have no time or energy for such a task at the moment :-)
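
A rough, untested sketch of idea 2, reusing the ORIGINAL and COPY variables from the script above (the 64 KiB block size is an arbitrary choice):

Code:
# Hypothetical continuation-only copy step.
BLOCK_SIZE=65536

SKIP_BYTES=$(wc --bytes < "$COPY")
REMAINDER=$(( SKIP_BYTES % BLOCK_SIZE ))

if [ "$REMAINDER" -ne 0 ]
then
	# Copy byte-by-byte just far enough to reach a block boundary.
	PAD=$(( BLOCK_SIZE - REMAINDER ))
	dd if="$ORIGINAL" of="$COPY" conv=notrunc bs=1 \
		skip="$SKIP_BYTES" seek="$SKIP_BYTES" count="$PAD"
	SKIP_BYTES=$(( SKIP_BYTES + PAD ))
fi

# skip and seek are now whole blocks, so dd can run at the full block size.
SKIP_BLOCKS=$(( SKIP_BYTES / BLOCK_SIZE ))
dd if="$ORIGINAL" of="$COPY" conv=notrunc bs="$BLOCK_SIZE" \
	skip="$SKIP_BLOCKS" seek="$SKIP_BLOCKS"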

Lyle
 
Old 10-19-2008, 02:18 PM   #6
danutz
LQ Newbie
 
Registered: Oct 2008
Posts: 5

Rep: Reputation: 0
use cURL

Hi, for the benefit of others searching for a solution, I have found something more efficient:

http://www.omnigia.com/news/2008/10/...cp-using-curl/

curl -C - -O file:///media/memstick/somefile.dat

This resumes copying somefile.dat (to the current directory).
 
Old 10-19-2008, 08:05 PM   #7
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,358

Rep: Reputation: 2751
I'd have thought rsync would work. One of its main points is that it only copies file changes (including new files), so it ought to be able to handle that.
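Something along these lines (paths invented) should append to the partial file instead of starting over:

Code:
rsync --progress --append /media/memstick/somefile.dat /home/user/somefile.dat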
 
Old 10-25-2008, 01:42 AM   #8
danutz
LQ Newbie
 
Registered: Oct 2008
Posts: 5

Rep: Reputation: 0
Quote:
Originally Posted by chrism01 View Post
I'd have thought rsync would work. One of its main points is that it only copies file changes (including new files), so it ought to be able to handle that.
It does work; however, it will need to read BOTH the source and destination files (to compare them and figure out what needs to be copied). If the source or destination is on a USB stick, and you've already transferred 650 MB out of 700, rsync will take forever.

By contrast, cURL will simply append to the destination (trusting the bits that were already transferred). So for a failed cp it is the better choice.

Last edited by danutz; 10-25-2008 at 01:44 AM.
 
Old 12-23-2011, 01:18 PM   #9
Geremia
Member
 
Registered: Apr 2011
Distribution: slackware64-current
Posts: 497

Rep: Reputation: 45
pretty sure rsync can do this

Try this:
Code:
rsync --progress --partial --append source_file destination_file
 
1 member found this post helpful.
Old 12-23-2011, 01:34 PM   #10
Satyaveer Arya
Senior Member
 
Registered: May 2010
Location: Palm Island
Distribution: RHEL, CentOS, Debian, Oracle Solaris 10
Posts: 1,420

Rep: Reputation: 305Reputation: 305Reputation: 305Reputation: 305
The easiest way is the rsync command Geremia gave above.

Some other examples of using rsync are as follows:

Quote:
#rsync -a /from/file /dest/file
or
#rsync -aP file user@host2:/path/to/new/dir/
or
#rsync -v --append /path/to/afile /mnt/server/dest/afile

If you know you simply need to append to the local file and do not want to use rsync (which could potentially take a long time calculating checksums), you can use curl. For example, if you have a large file on a slow removable USB stick at /media/CORSAIR/somefile.dat and only half of it has been copied to the current directory, resume with:

Quote:
#curl -C - -O "file:///media/CORSAIR/somefile.dat"

rsync is only good if you have rsync on the destination server. In that case, it's indeed the best solution.

Last edited by Satyaveer Arya; 12-23-2011 at 01:35 PM.
 
  

