LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   Best way to copy all files from server for backup before decommission (https://www.linuxquestions.org/questions/linux-newbie-8/best-way-to-copy-all-files-from-server-for-backup-before-decommission-4175517896/)

jlinkels 09-09-2014 03:05 PM

Quote:

Originally Posted by evo2 (Post 5234724)
Hi,


Not true. You can list the files contained with
Code:

tar tf foo.tar.gz
Then you can extract just the file you want
Code:

tar xf foo.tar.gz some/file/bar.baz
Evo2.

Sure I know the list command. But imagine you are looking for a long-lost file. The first thing you want to know is the timestamp and file size. So you have to list the tar, extract the file you need, and then look at the timestamp. That is a lot more work just so you can type "tar" instead of "rsync". Why?

jlinkels

jlinkels 09-09-2014 03:06 PM

Quote:

Originally Posted by joe_2000 (Post 5234720)
Generally I would agree, but the OP stated that he has no physical access to the machine and wants to run something remotely as a preparation.

rsync is an excellent solution for copying to remote machines over SSH.

jlinkels

joe_2000 09-09-2014 03:37 PM

Quote:

Originally Posted by jlinkels (Post 5235172)
rsync is an excellent solution for copying to remote machines over SSH.

jlinkels

40GB over ssh?!? I don't know if that makes sense...

suicidaleggroll 09-09-2014 04:00 PM

Quote:

Originally Posted by joe_2000 (Post 5235190)
40GB over ssh?!? I don't know if that makes sense...

Why not? With a decent connection (>50Mb) it shouldn't take more than a few hours. And with rsync you can pick up where you left off if the connection drops.

Yura_Ts 09-09-2014 06:02 PM

Quote:

Originally Posted by suicidaleggroll (Post 5235198)
Why not? With a decent connection (>50Mb) it shouldn't take more than a few hours. And with rsync you can pick up where you left off if the connection drops.

Some processes may still be writing to files while you copy all 40GB over SSH. That makes this method of backup inconsistent.

-------
Sorry for my simple English :)

suicidaleggroll 09-09-2014 07:23 PM

Quote:

Originally Posted by Yura_Ts (Post 5235235)
Some processes may still be writing to files while you copy all 40GB over SSH. That makes this method of backup inconsistent.

-------
Sorry for my simple English :)

That's why you run it again once it finishes to pick up any changes. It's worth noting that a giant tarball would have the same problem: no 40 GB backup process is going to run instantaneously. At least rsync lets you run it again to pick up changes to any files that were modified while it was running the first time.

jlinkels 09-09-2014 08:30 PM

Quote:

Originally Posted by joe_2000 (Post 5235190)
40GB over ssh?!? I don't know if that makes sense...

I routinely use rsync to copy > 1 TB between machines.

anon091 09-09-2014 08:41 PM

Same here, works great, especially in catching updates.

evo2 09-10-2014 12:50 AM

Hi,
Quote:

Originally Posted by jlinkels (Post 5235171)
Sure I know the list command.

Your previous post implied that you didn't.
Quote:

Originally Posted by jlinkels (Post 5235171)
But imagine you are looking for a long-lost file. The first thing you want to know is the timestamp and file size. So you have to list the tar, extract the file you need, and then look at the timestamp. That is a lot more work just so you can type "tar" instead of "rsync". Why?

Are you asking me? I never argued either for or against tar or rsync. I merely pointed out that the method you described for extracting a single file from a tar file was not optimal.
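For the record, the verbose listing already shows size and timestamp without extracting anything. A self-contained sketch (throwaway paths under /tmp, illustrative file names):

```shell
# Build a throwaway archive to demonstrate with.
mkdir -p /tmp/tardemo/some/file
echo "hello" > /tmp/tardemo/some/file/bar.baz
tar czf /tmp/tardemo/foo.tar.gz -C /tmp/tardemo some

# Verbose listing: permissions, owner, size and timestamp for every member.
tar tvf /tmp/tardemo/foo.tar.gz

# Then extract just the one file you were after.
mkdir -p /tmp/tardemo/out
tar xf /tmp/tardemo/foo.tar.gz -C /tmp/tardemo/out some/file/bar.baz
```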

Evo2.

joe_2000 09-10-2014 10:53 AM

Quote:

Originally Posted by jlinkels (Post 5235306)
I routinely use rsync to copy > 1 TB between machines.

Wow. Sounds crazy to me, but interesting to know that it's possible. Is that over a remote connection as well? If it is, I assume we are talking about a symmetric upload/download connection on the uploading side?

jlinkels 09-10-2014 11:19 AM

Well, it is on the local area network, but over TCP/IP, yes. My internet speed is only 1Mb/5Mb. But it doesn't really make a difference whether it is local or over the internet, especially when you use a VPN.

I do sync tens of GB over that slow connection, though. Rsync has a bandwidth-limiting feature which I use to prevent saturating my uplink. Once the bulk of the data is transferred, maintaining the changes is easy and does not involve much traffic.

jlinkels

joe_2000 09-10-2014 11:22 AM

Quote:

Originally Posted by jlinkels (Post 5235678)
Well, it is on the local area network, but over TCP/IP, yes. My internet speed is only 1Mb/5Mb. But it doesn't really make a difference whether it is local or over the internet, especially when you use a VPN.

I do sync tens of GB over that slow connection, though. Rsync has a bandwidth-limiting feature which I use to prevent saturating my uplink. Once the bulk of the data is transferred, maintaining the changes is easy and does not involve much traffic.

jlinkels

Ah, OK, that explains it. So I still think that for the OP's use case, preparing a tarball upfront to be able to later pull it onto an external drive would be the most reasonable way to go...
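Something like this, roughly (paths are illustrative; the sketch stands in /tmp for the real data so it can be run as-is):

```shell
# Stand-in for the data to back up:
mkdir -p /tmp/tbdemo/srv/data
echo "z" > /tmp/tbdemo/srv/data/file.txt

# Create the compressed tarball upfront, on the same machine:
tar czf /tmp/tbdemo/backup.tar.gz -C /tmp/tbdemo srv/data

# Later, from wherever the external drive is mounted (placeholder command):
#   scp server:/tmp/tbdemo/backup.tar.gz /media/external/
```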

jlinkels 09-10-2014 11:38 AM

Does a tarball transmit faster, then?
Besides, it is monolithic. If the transfer fails at 90%, you have to start all over again.

jlinkels

joe_2000 09-10-2014 01:10 PM

Quote:

Originally Posted by jlinkels (Post 5235692)
Does a tarball transmit faster, then?
Besides, it is monolithic. If the transfer fails at 90%, you have to start all over again.

jlinkels

No, it does not transfer faster over the network. But if you read the OP's question, you'd see that he just wanted to compress the data into a tarball on the same drive and copy the resulting tarball onto an external disk later on...

EDIT: Actually, it would transfer faster, since it would be a compressed tarball. But that wasn't the point anyway.

