LinuxQuestions.org
Solaris / OpenSolaris This forum is for the discussion of Solaris, OpenSolaris, OpenIndiana, and illumos.
General Sun, SunOS and Sparc related questions also go here. Any Solaris fork or distribution is welcome.

Old 01-07-2009, 10:14 AM   #1
kebabbert
Member
 
Registered: Jul 2005
Posts: 489

Rep: Reputation: 45
How to copy a zpool safely? rsync? cp?


I have a ZFS raid with 4 Samsung 500GB disks. I now want 5 Samsung 1TB drives instead. So I connect the 5 drives, create a raidz1 zpool, and copy the contents of the old zpool to the new one.

Is there a way to copy the zpool safely, and to make sure it really has been copied correctly? Ideally I would like a tool that copies from source to destination and verifies that the copy went through. A nightmare would be if the copy gets interrupted and I have to copy again. How can I be sure that the new invocation has copied everything from the point of interruption? Using GNU commander feels a bit unsafe. It will only copy blindly(?), and nothing more. Will it tell me if something went wrong?

How do you make sure the copy is correct? Is there any utility that does exactly that? (Does cp warn if there was an error?)
 
Old 01-07-2009, 10:23 AM   #2
crisostomo_enrico
Member
 
Registered: Dec 2005
Location: Madrid
Distribution: Solaris 10, Solaris Express Community Edition
Posts: 547

Rep: Reputation: 36
I would copy file system snapshots using zfs send and zfs recv to receive them on the remote machine. As far as I remember, it's the only documented way to make a "dump" of ZFS file systems. I'm using it for incremental backups and it works like a charm.

Check ZFS documentation here.

You can send a snapshot this way:
Code:
machine0$ pfexec zfs snapshot mypool/[email protected]
machine0$ pfexec zfs send mypool/[email protected] > myfile
machine1$ pfexec zfs receive anotherpool/[email protected] < myfile
or directly
Code:
machine0$ pfexec zfs send mypool/[email protected] | ssh machine1 zfs receive anotherpool/[email protected]
Hope this helps,

Last edited by crisostomo_enrico; 01-07-2009 at 10:29 AM.
 
2 members found this post helpful.
Old 01-07-2009, 10:35 AM   #3
crisostomo_enrico
Member
 
Registered: Dec 2005
Location: Madrid
Distribution: Solaris 10, Solaris Express Community Edition
Posts: 547

Rep: Reputation: 36
By the way, kebabbert, if you're concerned that rsync, scp, or yet another such tool makes a bad "copy" of the files, I think the safest way to check (even though with such tools I wouldn't bother) is to run the digest command with the algorithm you like most on both sides and compare the output.
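As a hedged sketch of that kind of verification with portable tools (SRC and DST are placeholder paths for the two mountpoints; on Solaris you could substitute digest -a sha256 for cksum):

```shell
#!/bin/sh
# Sketch: compare per-file checksums of a source tree and its copy.
# SRC and DST are placeholders; point them at the two pool mountpoints.
SRC=${SRC:-/mypool/myfs}
DST=${DST:-/anotherpool/anotherfs}
( cd "$SRC" && find . -type f -exec cksum {} \; | sort ) > /tmp/src.sums
( cd "$DST" && find . -type f -exec cksum {} \; | sort ) > /tmp/dst.sums
if diff /tmp/src.sums /tmp/dst.sums > /dev/null; then
    echo "checksums match"
else
    echo "MISMATCH: compare /tmp/src.sums and /tmp/dst.sums" >&2
fi
```

Note this walks every file on both sides, so on a big pool it takes as long as another full read of the data.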
 
Old 01-07-2009, 10:43 AM   #4
kebabbert
Member
 
Registered: Jul 2005
Posts: 489

Original Poster
Rep: Reputation: 45
Ok, that sounds good. I'll copy via zfs send and receive. You listed a command I can try, but I have to modify it, because I only have one machine.
machine0$ pfexec zfs send mypool/[email protected] | ssh machine1 zfs receive anotherpool/[email protected]
The "| ssh machine1" should be omitted, right? So I will instead use:
machine0$ pfexec zfs send mypool/[email protected] | zfs receive anotherpool/[email protected]
Right?

I have 1.4TB to copy. I can't digest 1.4TB; it would take too long.

And I don't have any snapshots on my zpool yet. I don't want any snapshots at all right now, and not on my new zpool either. After zfs receive, how do I delete the snapshot that zfs receive created on the new zpool?
 
Old 01-07-2009, 10:50 AM   #5
crisostomo_enrico
Member
 
Registered: Dec 2005
Location: Madrid
Distribution: Solaris 10, Solaris Express Community Edition
Posts: 547

Rep: Reputation: 36
Snapshotting is necessary, but that's not a problem: it's an almost no-cost operation, and afterwards you remove your snapshot with
Code:
$ pfexec zfs destroy mypool/[email protected]
Don't forget to include the entire name with the snapshot part, otherwise you destroy your file system!
At the destination, it creates a ZFS file system, which must not already exist (unless the send is incremental), along with a snapshot. That snapshot, too, can be deleted. After that you won't be able to make incremental sends, but you don't seem interested in them anyway.

The command is correct. If you prefer, you can dump the file system by redirecting the send output with > and receive it by redirecting input with <. You don't even need to specify the entire name of the destination file system, because the snapshot name can be retrieved from the stream you're sending. Check the zfs man page or the documentation for all of the available options.
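That file-based variant might look like this hypothetical sketch (the file name is a placeholder; -d tells zfs receive to derive the file system name from the stream):
Code:
machine0$ pfexec zfs send mypool/[email protected] > /backup/myfs.stream
machine0$ pfexec zfs receive -d anotherpool < /backup/myfs.stream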

I assume that you're moving one file system from one zpool to another, obviously, otherwise a clone operation would be wiser.

Last edited by crisostomo_enrico; 01-07-2009 at 10:53 AM.
 
Old 01-07-2009, 11:46 AM   #6
kebabbert
Member
 
Registered: Jul 2005
Posts: 489

Original Poster
Rep: Reputation: 45
Thanks for your help.


"The command is correct." Which command is correct, 1 or 2?

1) machine0$ pfexec zfs send mypool/[email protected] | ssh machine1 zfs receive anotherpool/[email protected]
2) machine0$ pfexec zfs send mypool/[email protected] | zfs receive anotherpool/[email protected]

It doesn't matter if I copy or move the zpool to the new zpool. I can clone if that is better.

So I do a zfs send via a snapshot, then send the zpool to the other computer with command 2), and then destroy the snapshot on my old zpool and on my new zpool. This way I have an exact replica. Right?
 
Old 01-07-2009, 03:37 PM   #7
crisostomo_enrico
Member
 
Registered: Dec 2005
Location: Madrid
Distribution: Solaris 10, Solaris Express Community Edition
Posts: 547

Rep: Reputation: 36
Hi kebabbert, sorry for the wait.

Command 2, because you don't need ssh to connect to the local machine. And you don't need to keep the snapshot afterwards, just a:
Code:
$ pfexec zfs send mypool/[email protected] | zfs receive anotherpool/anotherfs
In the pool anotherpool (or even in the same pool), the anotherfs file system will be created, and it will also have a snapshot called now. You can destroy that snapshot, as said. The two file systems will be absolutely identical.
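Spelled out end to end, the local sequence is roughly this (zfs send needs an existing snapshot to send, so one is created first, and both copies of it are destroyed at the end):
Code:
machine0$ pfexec zfs snapshot mypool/[email protected]
machine0$ pfexec zfs send mypool/[email protected] | zfs receive anotherpool/anotherfs
machine0$ pfexec zfs destroy mypool/[email protected]
machine0$ pfexec zfs destroy anotherpool/[email protected]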

A clone is only possible inside the same pool; that's why I asked.

Let us know and hope this helps.
Enrico.
 
Old 01-07-2009, 06:38 PM   #8
kebabbert
Member
 
Registered: Jul 2005
Posts: 489

Original Poster
Rep: Reputation: 45
Ok, great! I've done:

# zfs snapshot mypool/[email protected]
# zfs send mypool/[email protected] | zfs receive anotherpool/anotherfs

anotherfs was created automatically, and it is now copying all the data, 1.5TB. Going to bed now. Thanks for your help!
 
Old 01-07-2009, 06:41 PM   #9
crisostomo_enrico
Member
 
Registered: Dec 2005
Location: Madrid
Distribution: Solaris 10, Solaris Express Community Edition
Posts: 547

Rep: Reputation: 36
Great!
You're welcome.

Bye,
Enrico
 
Old 06-20-2009, 03:31 AM   #10
socalrover
LQ Newbie
 
Registered: Jun 2009
Posts: 1

Rep: Reputation: 0
Is it possible to send the contents of an entire zpool? For example:

zfs snapshot [email protected]
zfs send [email protected] | ssh somehost "zfs receive mypool"

The zpool mypool already exists, so I get an error message to that effect telling me to use -F. When I do, any existing ZFS file system disappears from ls -l /mypool. However, zfs list still shows my previously existing ZFS file systems on the destination host.

Thanks
 
Old 06-20-2009, 05:23 PM   #11
crisostomo_enrico
Member
 
Registered: Dec 2005
Location: Madrid
Distribution: Solaris 10, Solaris Express Community Edition
Posts: 547

Rep: Reputation: 36
zfs send can send hierarchies of file systems, with the latest semantics changes introduced in more recent versions of OpenSolaris. If you read the documentation, or even the zfs man page, you'll notice that

Code:
     zfs receive [-vnF] filesystem|volume|snapshot
     zfs receive [-vnF] -d filesystem

         Creates a snapshot whose contents are  as  specified  in
         the  stream provided on standard input. If a full stream
         is received, then a new file system is created as  well.

[...]
-F

             Force a rollback of the  file  system  to  the  most
             recent snapshot before performing the receive opera-
             tion. If receiving an incremental replication stream
             (for example, one generated by "zfs send -R -[iI]"),
             destroy snapshots and file systems that do not exist
             on the sending side.
This explains the requirement of a new file system and why, with -F, the file system is "rolled back", destroying snapshots and file systems that do not exist on the sending side. You cannot "merge" file systems sent via zfs send into an already existing file system on the receiving side.

If you want to make a backup of a ZFS hierarchy via send/receive, you could use the -R option of the zfs send command (together with a recursive snapshot). As I said, that depends on the Solaris version you're running:
Code:
zfs send [-vR] [-[iI] snapshot] snapshot

-R

             Generate a replication stream  package,  which  will
             replicate  the specified filesystem, and all descen-
             dant file systems, up to the  named  snapshot.  When
             received, all properties, snapshots, descendent file
             systems, and clones are preserved.

             If the -i or -I flags are used in  conjunction  with
             the  -R  flag,  an incremental replication stream is
             generated. The current  values  of  properties,  and
             current  snapshot and file system names are set when
             the stream is received. If the -F flag is  specified
              when  this  stream  is  received, snapshots and file
             systems that do not exist on the  sending  side  are
             destroyed.
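Putting those options together, a hypothetical full-hierarchy replication might look like this (pool and host names are placeholders; the recursive snapshot is taken first with zfs snapshot -r):
Code:
machine0$ pfexec zfs snapshot -r [email protected]
machine0$ pfexec zfs send -R [email protected] | ssh somehost pfexec zfs receive -F -d backuppool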
 
Old 06-21-2009, 07:19 PM   #12
choogendyk
Senior Member
 
Registered: Aug 2007
Location: Massachusetts, USA
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,194

Rep: Reputation: 105Reputation: 105
Cool. Good to see we have some ZFS support on the forums. I may post a question of my own, but I'll start a new thread if I do. It's also good to see that ZFS is maturing; I thought it was unable to do incrementals with zfs send.

Anyway, I just wanted to post a comment on this thread regarding the choice of tools. Kebabbert is going from one zpool to another, so zfs send | zfs receive works for him. I have a ZFS file system on a newer server that I rsync to a UFS file system on an older server. It works perfectly well in both directions. rsync also transparently deals with the situation where a large part of the transfer completed and then the connection was lost, the power dropped, or something else went wrong: run rsync again, and it only transfers the differences. When we first set this up, it took overnight to complete the transfer. Now we can do an rsync in a few minutes before proceeding with update work. The two servers happen to hold our radmind directories for two different buildings and let us keep all the lab and classroom computers in those buildings in sync.

I'm also likely to use gtar within Amanda to back up the ZFS systems. I don't have to deal with that yet, since we have the rsync setup and are just getting going with ZFS. But it seemed that zfs send/receive had some shortcomings as a backup system. See, for example, http://www.zmanda.com/blogs/?p=128 .
 
Old 06-22-2009, 03:58 AM   #13
crisostomo_enrico
Member
 
Registered: Dec 2005
Location: Madrid
Distribution: Solaris 10, Solaris Express Community Edition
Posts: 547

Rep: Reputation: 36
Hi choogendyk.

Yes, rsync would also work. I'm not sure how rsync, or even gtar, would deal with ZFS specifics such as ZFS ACLs. Your mileage may vary, and it also depends on what you need. Also be aware that in some cases you'd better use rsync's --inplace option; Google for it on the opensolaris.org site.

If you don't need cross-platform restores, I'd really go for zfs send for backups. You get replication streams and incremental streams. Less hassle and a better result, IMHO.

I was looking at the link you posted, and the "shortcomings" derive from the semantics of snapshots and sends: that explains why you don't get file-level restore, because you can only clone/promote/send/receive snapshots. Anyway, also be aware that, if you're taking regular snapshots of your file systems, you can simply look into the hidden .zfs directory, browse every snapshot, and restore every single file. That's more or less what the Time Slider does. ZFS scales well with a great number of snapshots, so don't worry and let it snapshot as often as you need.
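For instance, hypothetically (all names are placeholders; each snapshot appears as a read-only directory under .zfs/snapshot named after the part following the @):
Code:
$ ls /mypool/myfs/.zfs/snapshot
$ cp /mypool/myfs/.zfs/snapshot/now/path/to/file /tmp/file.restored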
 
Old 06-22-2009, 06:42 AM   #14
choogendyk
Senior Member
 
Registered: Aug 2007
Location: Massachusetts, USA
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,194

Rep: Reputation: 105Reputation: 105
Quote:
Originally Posted by crisostomo_enrico View Post
Anyway, also be aware that, if you're taking regular snapshots of your file systems, you can simply look into the hidden .zfs directory, browse every snapshot, and restore every single file.
How would that translate for tape backups?

For example, if I use ufsdump/ufsrestore, I can do an interactive extraction, tag the items I want to recover, and let it run. In Amanda, that translates directly, so amrestore essentially gives me the ufsrestore interface. Then it tells me what tape it needs and does the restore -- just one file, in my restore directory, if that's the way I request it. Virtually all the restores I ever have to do are for individual files or directories.
 
Old 06-22-2009, 07:50 AM   #15
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Tribblix, Ubuntu/WSL
Posts: 9,761

Rep: Reputation: 459Reputation: 459Reputation: 459Reputation: 459Reputation: 459
You don't need tape backups if you use snapshots as a backup strategy.

The old individual files and directories you want to recover are directly reachable on the disk through the snapshots .zfs directories.
 
  


