Disk-to-disk backup using the dd command on SUSE 11 SP4
Hi,
I am trying to create a disk-to-disk backup on Cisco hardware that has two RAID 1 groups, each using 1+1 disks.
The first RAID volume is /dev/sda and the second is /dev/sdb.
I used the command below for the backup:
# dd if=/dev/sda of=/dev/sdb conv=noerror,sync
But when I remove the first RAID group's disks, the system does not boot. After re-inserting them, the system boots, but the data comes from the second RAID disk only.
Our scenario is to do a disk-to-disk backup so that if the first RAID disk fails, the system stays available on the second RAID disk.
It's a production system, so I can't do much testing.
Waiting for a reply if anyone has faced the same scenario.
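For anyone wanting to reproduce the copy step safely, here is a sketch using ordinary files standing in for /dev/sda and /dev/sdb (the filenames `src.img`/`dst.img` are made up for the demo; on the real system the `if=`/`of=` arguments would be the whole disks, and both must be quiesced during the copy):

```shell
# Sketch of the clone-and-verify step, using ordinary files in place of
# /dev/sda and /dev/sdb so it is safe to run anywhere.
dd if=/dev/urandom of=src.img bs=1M count=4 2>/dev/null       # stand-in for /dev/sda
dd if=src.img of=dst.img bs=1M conv=noerror,sync 2>/dev/null  # the backup copy
cmp -s src.img dst.img && echo "clone matches source"
```

Note that `conv=noerror,sync` silently pads failed or short reads with zeros, so a verify step like `cmp` is the only way to know the copy is actually faithful.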
OK, I think your problem is going to be your boot configuration. If you simply copy the entire disk from one to the other, you need to update GRUB on disk 2 so that, when it boots, it looks at disk 2 for its files. Right now, if disk 2 boots, GRUB is still looking for its files on disk 1.
The easiest way to take care of this is to re-install GRUB on disk 2, so that when it boots it knows to look at disk 2 for its files.
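On SLES 11 that means GRUB legacy, so the re-install on the clone could look something like the session below. This is a sketch only: `(hd0,0)` assumes /boot lives on the first partition of /dev/sdb, so adjust it to the actual layout.

```
# grub
grub> device (hd0) /dev/sdb   # remap "hd0" to the clone
grub> root (hd0,0)            # partition holding /boot/grub
grub> setup (hd0)             # write stage1 to the clone's MBR
grub> quit
```

Even then, if the clone's menu.lst and /etc/fstab reference the first disk by ID (SLES defaults to /dev/disk/by-id paths), those entries would also need to be pointed at the clone before it can boot on its own.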
Quote:
I am trying to create a disk-to-disk backup on Cisco hardware that has two RAID 1 groups, each using 1+1 disks. The first RAID volume is /dev/sda and the second is /dev/sdb. I used the command below for the backup:
# dd if=/dev/sda of=/dev/sdb conv=noerror,sync
But when I remove the first RAID group's disks, the system does not boot; after re-inserting them, the system boots, but the data comes from the second RAID disk only. Our scenario is to do a disk-to-disk backup so that if the first RAID disk fails, the system stays available on the second RAID disk. It's a production system, so I can't do much testing.
You've been here for NINE YEARS now...you've asked about backups and related things previously. Since you need someone to 'revert', and you're using SLES 11, have you contacted SUSE support, since you're paying for it??? Further, how do you KNOW it isn't booting, since (as you say) it's a production system and you can't do testing?
And you say nothing about the hardware, other than "cisco hardware"...what kind of 'cisco hardware'? What kind of RAID controller? Are you saying you have TWO disks? Or do you have FOUR disks, two RAID 1 pairs that are mirrors of each other? You're posting /dev/sda and /dev/sdb...which indicate single disks.
If you have 4 disks, and the first two are mirrored, and the second set is a mirror of the first...what you're posting makes zero sense, since they should ALL be identical. And you don't say how you're testing this or how you're failing it over...just replacing one disk in the set won't do it.
Exactly, your understanding is the same, but the customer requirement is that when the first RAID is corrupted, the system should start from the second RAID.
Right...so we're back to "why is this not working, if you have a MIRROR of a MIRRORED SET???", and what kind of controller/hardware are you using, and how is the RAID set up??? You only mention /dev/sda and /dev/sdb...if those are two-disk sets created on the RAID controller when the system was built, GRUB should already be configured.
And as lazydog pointed out, did you try re-installing GRUB? And again, you've been working with Linux for NINE YEARS, and you're getting paid to fix this for a client...it seems fairly rude to ask us to solve YOUR problem (that you're getting PAID to solve) for free.
I think you have not read my query properly.
I have already mentioned it's RAID 1, with 1+1 disks each.
Anyway, this is an open-source forum where any query and technology can be posted.
If everything worked with OEM support only, there would be no point to this forum.
If you cannot share your experience, please don't try to pull anyone down.
Quote:
I think you have not read my query properly. I have already mentioned it's RAID 1, with 1+1 disks each.
...which is why YOU WERE ASKED TO CLARIFY things, and provide more information...and you didn't. Re-stating what you already said tells us nothing more.
Quote:
Anyway, this is an open-source forum where any query and technology can be posted. If everything worked with OEM support only, there would be no point to this forum.
No idea what you're trying to say here. You still haven't answered any questions put to you, to clarify things. There is no point in POSTING a question, if you're not going to answer questions when asked, or participate in a conversation.
Quote:
If you cannot share your experience, please don't try to pull anyone down.
Trying to, but you don't answer questions. And again, this is VERY rude, period...YOU are getting paid by someone to solve this problem...and you are asking US to do it for you, FOR FREE, then complaining when you're asked for more information.
Again, "RAID 1 having 1+1 disks each" is fairly meaningless when you mention /dev/sda and /dev/sdb and hint at FOUR disks. You won't answer questions about the hardware, what kind of controller, or what (if any) testing you did or can do.
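For what it's worth, a couple of read-only commands would answer most of those hardware questions without any risk to a production box (assuming `lspci` is installed, which it normally is on SLES 11):

```shell
# Read-only look at what the OS actually sees -- safe on production.
cat /proc/partitions                      # kernel's view of block devices and sizes
lspci 2>/dev/null | grep -i raid || true  # RAID controller model, if any
```

If /proc/partitions shows only sda and sdb, the mirroring (if any) is happening on the controller, below the OS.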
Quote:
Exactly, your understanding is the same, but the customer requirement is that when the first RAID is corrupted, the system should start from the second RAID.
What I have written is how you get the second HD to boot when the first one fails. The second one doesn't know to look at itself for the files and this is only fixed when you update Grub to look at itself.
Quote:
What I have written is how you get the second HD to boot when the first one fails. The second one doesn't know to look at itself for the files and this is only fixed when you update Grub to look at itself.
Yes...but I get confused here, because the OP said "two raid1 group using 1+1 disk for both raid." To me, that says they have a mirrored set being mirrored by ANOTHER set; so, four disks. And the OP goes on to reference /dev/sda and /dev/sdb, which would indicate hardware RAID (**ASSUMING** here, since if you created the arrays on the controller, you'd only see one 'disk' for each connected mirrored pair).
And that brings me back to "why are things like this?" Because if the devices are mirrored on the controller and the system was BUILT that way, the first set (/dev/sda) should be EXACTLY the same as the second (/dev/sdb)...and should boot just fine. The OP also hints at replacing just ONE disk out of a mirrored set...and wanting it to boot....(????)