LinuxQuestions.org > Forums > Linux Forums > Linux - Newbie
Linux - Newbie: This Linux forum is for members who are new to Linux.
Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!

Old 09-09-2014, 03:05 PM   #16
jlinkels
Senior Member
 
Registered: Oct 2003
Location: Bonaire
Distribution: Debian Wheezy/Jessie/Sid, Linux Mint DE
Posts: 4,497

Rep: Reputation: 636

Quote:
Originally Posted by evo2 View Post
Hi,


Not true. You can list the files contained with
Code:
tar tf foo.tar.gz
Then you can extract just the file you want
Code:
tar xf foo.tar.gz some/file/bar.baz
Evo2.
Sure, I know the list command. But imagine you are looking for a long-lost file. The first things you want to know are the time stamp and file size. So you have to list the tar, extract the file you need, and then look at the time stamp. That is a lot of extra work just so you can type "tar" instead of "rsync". Why?

jlinkels
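For reference, the time stamp and size jlinkels asks about are available straight from the listing: tar's verbose mode (`tvf`) prints them per entry, so nothing has to be extracted just to inspect metadata. A minimal sketch, with made-up file names:

```shell
# Build a small demo archive, then inspect it without extracting.
# "tar tvf" prints permissions, owner, size and modification time
# for every member of the archive.
mkdir -p demo/some/file
echo "hello" > demo/some/file/bar.baz
tar czf foo.tar.gz demo
tar tvf foo.tar.gz                           # full verbose listing
tar tvf foo.tar.gz demo/some/file/bar.baz    # just one member
```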
 
Old 09-09-2014, 03:06 PM   #17
jlinkels
Senior Member
 

Rep: Reputation: 636
Quote:
Originally Posted by joe_2000 View Post
Generally I would agree, but the OP stated that he has no physical access to the machine and wants to run something remotely as a preparation.
rsync is an excellent solution for copying to remote machines over SSH.

jlinkels
 
Old 09-09-2014, 03:37 PM   #18
joe_2000
Member
 
Registered: Jul 2012
Location: Aachen, Germany
Distribution: Void, Debian
Posts: 822

Rep: Reputation: 237
Quote:
Originally Posted by jlinkels View Post
rsync is an excellent solution for copying to remote machines over SSH.

jlinkels
40GB over ssh?!? I don't know if that makes sense...
 
Old 09-09-2014, 04:00 PM   #19
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,433

Rep: Reputation: 2043
Quote:
Originally Posted by joe_2000 View Post
40GB over ssh?!? I don't know if that makes sense...
Why not? With a decent connection (>50 Mb/s) it shouldn't take more than a few hours. And with rsync you can pick up where you left off if the connection drops.
 
1 member found this post helpful.
Old 09-09-2014, 06:02 PM   #20
Yura_Ts
LQ Newbie
 
Registered: Jun 2011
Location: Moscow
Distribution: Gentoo Is Power!
Posts: 6

Rep: Reputation: Disabled
Quote:
Originally Posted by suicidaleggroll View Post
Why not? With a decent connection (>50 Mb/s) it shouldn't take more than a few hours. And with rsync you can pick up where you left off if the connection drops.
Some processes may write to some of the files while you are copying all 40 GB over SSH. That makes this method of backing up inconsistent.

-------
Sorry for my simple English
 
Old 09-09-2014, 07:23 PM   #21
suicidaleggroll
LQ Guru
 

Rep: Reputation: 2043
Quote:
Originally Posted by Yura_Ts View Post
Some processes may write to some of the files while you are copying all 40 GB over SSH. That makes this method of backing up inconsistent.

-------
Sorry for my simple English
That's why you run it again once it finishes, to pick up any changes. It's worth noting that a giant tarball would have the same problem: no 40 GB backup process is going to run instantaneously. At least rsync lets you run it again to pick up the changes to any files that were modified while it was running the first time.
 
1 member found this post helpful.
Old 09-09-2014, 08:30 PM   #22
jlinkels
Senior Member
 

Rep: Reputation: 636
Quote:
Originally Posted by joe_2000 View Post
40GB over ssh?!? I don't know if that makes sense...
I routinely use rsync to copy > 1 TB between machines.
 
Old 09-09-2014, 08:41 PM   #23
rjo98
Senior Member
 
Registered: Jun 2009
Location: US
Distribution: RHEL, CentOS
Posts: 1,683

Original Poster
Rep: Reputation: 48
Same here, works great, especially in catching updates.
 
Old 09-10-2014, 12:50 AM   #24
evo2
LQ Guru
 
Registered: Jan 2009
Location: Japan
Distribution: Mostly Debian and Scientific Linux
Posts: 5,753

Rep: Reputation: 1288
Hi,
Quote:
Originally Posted by jlinkels View Post
Sure I know the list command.
Your previous post implied that you didn't.
Quote:
Originally Posted by jlinkels View Post
But imagine you are looking for a long-lost file. The first things you want to know are the time stamp and file size. So you have to list the tar, extract the file you need, and then look at the time stamp. That is a lot of extra work just so you can type "tar" instead of "rsync". Why?
Are you asking me? I never argued for or against either tar or rsync. I merely pointed out that the method you described for extracting a single file from a tar archive was not optimal.

Evo2.
 
Old 09-10-2014, 10:53 AM   #25
joe_2000
Member
 

Rep: Reputation: 237
Quote:
Originally Posted by jlinkels View Post
I routinely use rsync to copy > 1 TB between machines.
Wow. Sounds crazy to me, but it's interesting to know that it's possible. Is that over a remote connection as well? If it is, I assume we are talking about a symmetric upload/download connection on the uploading side?
 
Old 09-10-2014, 11:19 AM   #26
jlinkels
Senior Member
 

Rep: Reputation: 636
Well, it is on the local area network, but over TCP/IP, yes. My internet speed is only 1 Mb / 5 Mb. But it doesn't really make a difference whether it is local or over the internet, especially when you use a VPN.

I do sync tens of GB over that slow connection, though. Rsync has a bandwidth-limiting feature, which I use to prevent saturating my uplink. Once the bulk of the data is transferred, maintaining the changes is easy and does not involve much traffic.

jlinkels
 
Old 09-10-2014, 11:22 AM   #27
joe_2000
Member
 

Rep: Reputation: 237
Quote:
Originally Posted by jlinkels View Post
Well, it is on the local area network, but over TCP/IP, yes. My internet speed is only 1 Mb / 5 Mb. But it doesn't really make a difference whether it is local or over the internet, especially when you use a VPN.

I do sync tens of GB over that slow connection, though. Rsync has a bandwidth-limiting feature, which I use to prevent saturating my uplink. Once the bulk of the data is transferred, maintaining the changes is easy and does not involve much traffic.

jlinkels
Ah, OK, that explains it. So I still think that for the OP's use case, preparing a tarball upfront so that it can later be pulled onto an external drive would be the most reasonable way to go...
 
Old 09-10-2014, 11:38 AM   #28
jlinkels
Senior Member
 

Rep: Reputation: 636
Does a tarball transfer any faster, then?
Besides, it is monolithic: if the transfer fails at 90%, you have to start again from scratch.

jlinkels
 
Old 09-10-2014, 01:10 PM   #29
joe_2000
Member
 

Rep: Reputation: 237
Quote:
Originally Posted by jlinkels View Post
Does a tarball transfer any faster, then?
Besides, it is monolithic: if the transfer fails at 90%, you have to start again from scratch.

jlinkels
No, it does not transfer faster over the network. But if you read the OP's question you'd see that he just wanted to compress the data to a tarball on the same drive and copy the resulting tarball onto an external disk later on...

EDIT: Actually, it would transfer faster since it would be a compressed tarball. But that wasn't the point anyway.

Last edited by joe_2000; 09-10-2014 at 01:12 PM.
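The workflow joe_2000 describes, creating a compressed tarball on the same drive now and copying it to the external disk later, could be sketched like this (file names and the mount point are placeholders):

```shell
# Step 1, run now: pack and compress the data in place.
mkdir -p data && echo "example" > data/file.txt   # demo data
tar czf backup.tar.gz data
# Step 2, later, once the external drive is mounted (e.g. /mnt/ext):
#   cp backup.tar.gz /mnt/ext/
tar tzf backup.tar.gz   # quick sanity check of the contents
```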
 
  

