LinuxQuestions.org > Solaris / OpenSolaris
How to copy a zpool safely? rsync? cp? (https://www.linuxquestions.org/questions/solaris-opensolaris-20/how-to-copy-a-zpool-safely-rsync-cp-695591/)

kebabbert 01-07-2009 10:14 AM

How to copy a zpool safely? rsync? cp?
 
I have a ZFS raid with 4 Samsung 500GB disks. I now want 5 Samsung 1TB drives instead. So I connect the 5 drives, create a raidz1 zpool, and copy the content from the old zpool to the new zpool.

Is there a way to safely copy the zpool, and to make sure it really has been copied correctly? Ideally I would like a tool that copies from source to destination and checks that the copy went through. A nightmare would be if the copy gets interrupted and I have to copy again. How can I be sure that the new invocation has copied everything from the point of interruption? Using gnu commander feels a bit unsafe. It will only copy blindly(?), and no more. Will it tell me if something went wrong?

How do you make sure the copy is correct? Is there any utility that does exactly that? (Does cp warn if there was an error?)

crisostomo_enrico 01-07-2009 10:23 AM

I would copy snapshots of the file systems using zfs send, and zfs recv to receive them on the other machine. As far as I remember, it's the only documented way to make a "dump" of ZFS file systems. I'm using it to do incremental backups and it works like a charm.

Check the ZFS documentation.

You can send a snapshot this way:
Code:

machine0$ pfexec zfs snapshot mypool/myfs@now
machine0$ pfexec zfs send mypool/myfs@now > myfile
machine1$ pfexec zfs receive anotherpool/anotherfs@anothersnap < myfile

or directly
Code:

machine0$ pfexec zfs send mypool/myfs@now | ssh machine1 zfs receive anotherpool/anotherfs@anothersnap
Hope this helps,

crisostomo_enrico 01-07-2009 10:35 AM

By the way, kebabbert, if you're concerned that rsync, scp, or yet another such tool might make a bad "copy" of a file, I think the safest way to check it (even if with such tools I wouldn't bother) is to use the digest command with the algorithm you like most and compare the outputs.
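
For instance, a minimal sketch of that check, assuming Solaris' digest(1) with the sha256 algorithm and a single copied file (the paths are just examples):
Code:

$ digest -a sha256 /mypool/myfs/somefile
$ digest -a sha256 /anotherpool/anotherfs/somefile

If the two printed hashes match, that file made it over intact.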

kebabbert 01-07-2009 10:43 AM

Ok, that sounds good. I'll copy via zfs send and receive. You listed a command I can try, but I have to modify it because I only have one machine:
Code:

machine0$ pfexec zfs send mypool/myfs@now | ssh machine1 zfs receive anotherpool/anotherfs@anothersnap

The "| ssh machine1" part should be omitted, right? So I will instead use:
Code:

machine0$ pfexec zfs send mypool/myfs@now | zfs receive anotherpool/anotherfs@anothersnap

Right?

I have 1.4TB to copy. I can't digest 1.4TB; it would take too long.

Also, I don't have any snapshots on my zpool yet, and I don't want any snapshots at all right now, not on my new zpool. After zfs receive, how do I delete the snapshot on the new zpool that zfs receive gave me?

crisostomo_enrico 01-07-2009 10:50 AM

Snapshotting is necessary, but that's not a problem: it's an "almost no-cost" operation, and afterwards you remove your snapshot with
Code:

$ pfexec zfs destroy mypool/myfs@now
Don't forget to include the @snapshot part of the name, otherwise you'll destroy the file system itself! ;)
At the destination, it creates a zfs file system, which must not already exist (unless the send is incremental), along with a snapshot. That snapshot, too, can be deleted. You won't be able to make incremental sends afterwards, but you don't seem interested in them anyway.
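
A sketch of the cleanup on the receiving side (assuming the stream was received into anotherpool/anotherfs, which then carries a snapshot named now):
Code:

$ zfs list -t snapshot
$ pfexec zfs destroy anotherpool/anotherfs@now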

The command is correct. If you prefer, you can dump the fs by redirecting the send operation's output with > and receive it by redirecting input with <. You don't even need to specify the entire name of the destination file system, because the snapshot name can be retrieved from the stream you're sending. Check the zfs man page or the documentation for all of the available options.
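
As a sketch of that redirection variant (the dump file name is just an example; the -d option lets receive derive the file system name from the stream itself):
Code:

$ pfexec zfs send mypool/myfs@now > /var/tmp/myfs.dump
$ pfexec zfs receive -d anotherpool < /var/tmp/myfs.dump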

I assume you're moving a file system from one zpool to another, of course; otherwise a clone operation would be wiser.

kebabbert 01-07-2009 11:46 AM

Thanx for your help.


"The command is correct." Which command is correct? 1 or 2?

1) machine0$ pfexec zfs send mypool/myfs@now | ssh machine1 zfs receive anotherpool/anotherfs@anothersnap
2) machine0$ pfexec zfs send mypool/myfs@now | zfs receive anotherpool/anotherfs@anothersnap

It doesnt matter if I copy or move the zpool to the new zpool. I can clone if that is better.

So I do a zfs send, via a snapshot. Then send the zpool to the other computer with command 2) and then destroy the snapshot on my old zpool and on my new zpool. This way I have an exact replica. Right so?

crisostomo_enrico 01-07-2009 03:37 PM

Hi kebabbert, sorry for the wait.

Command 2, because you don't need ssh to connect to the local machine. You don't even need to name a snapshot on the receiving side, just:
Code:

$ pfexec zfs send mypool/myfs@now | zfs receive anotherpool/anotherfs
In the pool anotherpool (or even in the same pool), the anotherfs file system will be created, and it will also have a snapshot called now. You can destroy that snapshot, as said above. The two filesystems will be absolutely identical.
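
Putting the whole local copy together, a minimal sketch with the names used so far (note the receiving side needs privileges too, hence the second pfexec):
Code:

$ pfexec zfs snapshot mypool/myfs@now
$ pfexec zfs send mypool/myfs@now | pfexec zfs receive anotherpool/anotherfs
$ pfexec zfs destroy mypool/myfs@now
$ pfexec zfs destroy anotherpool/anotherfs@now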

The clone is a possibility only inside the same pool, that's why I asked.

Let us know and hope this helps.
Enrico.

kebabbert 01-07-2009 06:38 PM

Ok, great! I've done:

Code:

# zfs snapshot mypool/myfs@now
# zfs send mypool/myfs@now | zfs receive anotherpool/anotherfs

And anotherfs is automatically created. It is now copying all the data, 1.5TB. :) Going to bed now. Thanx for your help! :)

crisostomo_enrico 01-07-2009 06:41 PM

Great!
You're welcome.

Bye,
Enrico

socalrover 06-20-2009 03:31 AM

Is it possible to send the contents of an entire zpool? So for example:

Code:

zfs snapshot mypool@today
zfs send mypool@today | ssh somehost "zfs receive mypool"

The zpool mypool already exists on the destination, so I receive an error message to that effect, telling me to use -F. When I do, any previously existing zfs filesystems disappear from ls -l /mypool. However, zfs list will still show them on the destination host.

Thanks

crisostomo_enrico 06-20-2009 05:23 PM

zfs send can send hierarchies of filesystems, with the latest semantic changes having been introduced in subsequent versions of OpenSolaris. If you read the documentation, even just the zfs man page, you'll notice that

Code:

    zfs receive [-vnF] filesystem|volume|snapshot
    zfs receive [-vnF] -d filesystem

        Creates a snapshot whose contents are as specified in
        the stream provided on standard input. If a full stream
        is received, then a new file system is created as well.

[...]
-F

            Force a rollback of the file system to the most
            recent snapshot before performing the receive
            operation. If receiving an incremental replication
            stream (for example, one generated by "zfs send
            -R -[iI]"), destroy snapshots and file systems
            that do not exist on the sending side.

This explains the requirement for a new filesystem, and why, with -F, the existing filesystem is "rolled back", destroying snapshots and file systems that do not exist on the sending side. You cannot "merge" filesystems sent via zfs send into an already existing filesystem on the receiving side.
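
One way to avoid -F clobbering the destination, as a sketch, is to receive into a child file system that does not exist yet (mypool/restored is a hypothetical name here):
Code:

zfs snapshot mypool@today
zfs send mypool@today | ssh somehost "zfs receive mypool/restored"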

If you want to make a backup of a zfs hierarchy via send/receive, you can use the -r and -R options of the zfs send command. As I said, their availability depends on the Solaris version you're running:
Code:

zfs send [-vR] [-[iI] snapshot] snapshot

-R

            Generate a replication stream package, which will
            replicate the specified filesystem, and all
            descendant file systems, up to the named snapshot.
            When received, all properties, snapshots, descendent
            file systems, and clones are preserved.

            If the -i or -I flags are used in conjunction with
            the -R flag, an incremental replication stream is
            generated. The current values of properties, and
            current snapshot and file system names are set when
            the stream is received. If the -F flag is specified
            when this stream is received, snapshots and file
            systems that do not exist on the sending side are
            destroyed.
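
For example, a sketch of a full-hierarchy replication with those options (assuming a release that supports -R, and a destination pool named backuppool; -d derives the file system names from the stream, and -F will destroy anything on the receiving side that is missing from the sender):
Code:

$ pfexec zfs snapshot -r mypool@today
$ pfexec zfs send -R mypool@today | ssh somehost "zfs receive -Fd backuppool"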


choogendyk 06-21-2009 07:19 PM

Cool. Good to see we have some zfs support on the forums. I may post a question of my own, but I'll start a new thread if I do. It's also good to see that zfs is maturing. I thought it was unable to do incrementals with zfs send.

Anyway, I just wanted to post a comment on this thread regarding the choice of tools. Kebabbert is going from one zpool to another, so piping zfs send into zfs receive works for him. I have a zfs file system on a newer server that I rsync to a ufs file system on an older server. It works perfectly well in both directions. rsync also transparently deals with the situation where a large part of the transfer completes and then the connection is lost, the power drops, or something else goes wrong: run rsync again, and it only transfers the differences (see the sketch below). When we first set this up, it took overnight to complete the transfer. Now we can do an rsync in a few minutes before proceeding with some update work. The two servers happen to hold our radmind directories for two different buildings and allow us to keep all the lab and classroom computers in those two buildings in sync.
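
A sketch of that kind of transfer (the paths and host name are hypothetical; -a preserves permissions and timestamps, and rerunning the identical command after an interruption transfers only the differences):
Code:

$ rsync -av /tank/radmind/ oldserver:/export/radmind/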

I'm also likely to use gtar within Amanda to back up the zfs systems. I don't have to deal with that yet, since we have the rsync and are just getting going with zfs. But it seems that zfs send/receive has some shortcomings as a backup system. See, for example, http://www.zmanda.com/blogs/?p=128 .

crisostomo_enrico 06-22-2009 03:58 AM

Hi choogendyk.

Yes, rsync would also work. I'm not sure how rsync, or even gtar, would deal with ZFS specifics such as ZFS ACLs. Your mileage may vary, and it also depends on what you need. Also be aware that in some cases you'd be better off using rsync's --inplace option: search for it on the opensolaris.org site.
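
A sketch of that --inplace variant (paths hypothetical): it rewrites changed files in place instead of writing a temporary copy and renaming it, which wastes far less space when the destination is itself a snapshotted, copy-on-write ZFS file system.
Code:

$ rsync -av --inplace /tank/fs/ /backup/fs/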

If you don't need cross-platform restore, I'd really go with zfs send for backups. You've got replication streams and incremental streams. Less hassle and a better result, IMHO.

I was looking at the link you posted, and the "shortcomings" derive from the semantics of snapshots and sends: that explains why you don't get file-level restore, because you can only clone/promote/send/receive snapshots. Anyway, also be aware that, if you're taking regular snapshots of your filesystems, you can simply have a look into the hidden .zfs directory and you'll be able to look into every snapshot and restore every single file. That's more or less what the time slider does. ZFS scales well with a great number of snapshots, so don't worry and let it snapshot as often as you need.
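
As a sketch of that file-level restore (the snapshot and file names are hypothetical):
Code:

$ ls /mypool/myfs/.zfs/snapshot/
$ cp /mypool/myfs/.zfs/snapshot/now/docs/report.txt /var/tmp/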

choogendyk 06-22-2009 06:42 AM

Quote:

Originally Posted by crisostomo_enrico (Post 3582052)
Anyway, also be aware that, if you're taking regular snapshots of your filesystems, you can simply have a look into the hidden .zfs directory and you'll be able to look into every snapshot and restore every single file.

How would that translate for tape backups?

For example, if I use ufsdump/ufsrestore, I can do an interactive extraction, tag the items I want to recover, and let it run. In Amanda, that translates directly, so amrestore essentially gives me the ufsrestore interface. Then it tells me what tape it needs and does the restore -- just one file, in my restore directory, if that's the way I request it. Virtually all the restores I ever have to do are for individual files or directories.

jlliagre 06-22-2009 07:50 AM

You don't need tape backups if you use snapshots as a backup strategy.

The old individual files and directories you want to recover are directly reachable on the disk through the snapshots .zfs directories.

