Old 01-16-2017, 04:17 PM   #1
circus78
Member
 
Registered: Dec 2011
Posts: 273

Rep: Reputation: Disabled
Migrate /boot to RAID 1


Hi,
I have a physical server with two 80 GB hard disks.

Code:
# fdisk -l /dev/sda

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c12b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        9605    76635136   8e  Linux LVM
root@server:~# fdisk -l /dev/sdb

Disk /dev/sdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c12b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2              64        9605    76635136   8e  Linux LVM

I don't remember why, but I didn't create /boot on a RAID device.
Now I would like to "upgrade" sda1 and sdb1 to a RAID 1 array.

Which steps should I follow?

I think:

1. change the partition type with fdisk (on both disks at the same time?)
2. create software raid device with mdadm
3. mount new raid device somewhere (eg. /mnt/newboot)
4. copy /boot/* to /mnt/newboot

... at this point I need some help with the remaining steps: grub? /etc/fstab?
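Something like this is what I have in mind - an untested sketch, where /dev/md0 and the metadata version are my guesses (metadata at the end of the device, so tools that don't know about RAID still see a plain filesystem):

Code:
# 1) set the partition type to "fd" (Linux raid autodetect) on each disk with fdisk

# 2) create a degraded RAID 1 from sdb1 only, metadata at the end of the device
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 missing /dev/sdb1
mkfs.ext4 /dev/md0

# 3) + 4) mount it and copy the current /boot over
mkdir /mnt/newboot
mount /dev/md0 /mnt/newboot
cp -a /boot/. /mnt/newboot/

# then point /etc/fstab at the new device's UUID ...
blkid /dev/md0

# ... and only after /boot is switched over and grub is sorted out, add sda1:
mdadm --add /dev/md0 /dev/sda1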

Some other info:

Code:
# uname -ar
Linux server 2.6.32-642.6.1.el6.x86_64 #1 SMP Wed Oct 5 00:36:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux


# cat /etc/redhat-release
CentOS release 6.8 (Final)

# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Nov 14 10:50:34 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_server-lv_root /                       ext4    defaults        1 1
UUID=a5533f23-9746-4bf5-8085-1fd0626cae22 /boot                   ext4    defaults        1 2
/dev/mapper/vg_server-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0


Thank you
 
Old 01-27-2017, 02:18 PM   #2
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
Hi

I have absolutely no experience with LVM.

Anyway, concerning a "pure" server using only RAID (and no LVM), this is the /etc/fstab of my server, which boots from a RAID 1 (the second line is shown just to demonstrate that there are no other tricks for the other RAID partitions):
Code:
/dev/md1                /boot           ext4            noatime         0 2
/dev/md2                /               ext4            noatime         0 1
Important - 1
You'll need to install GRUB on the MBR of both disks (e.g. "grub-install /dev/sda" AND "grub-install /dev/sdb"), so that if, say, your 1st HDD is kaputt and you are a very lucky person with a BIOS that is able to understand that and decides to boot from the 2nd HDD, the PC/server will still be able to load GRUB from the 2nd HDD.
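For example (this is how I do it here with grub2; adjust to whatever your distro provides):
Code:
# write the GRUB boot code to the MBR of each disk, so either one can boot alone
grub-install /dev/sda
grub-install /dev/sdb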

(maybe) Important - 2
Not 100% sure if this is really needed, but I'm currently passing the option "domdadm" to the kernel (for some kind of early RAID-member autodiscovery? Cannot remember...).
Therefore, in "/etc/default/grub" of grub2 I currently have this line:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="net.ifnames=0 domdadm"
Not sure if this is really needed: it is explicitly mentioned in Gentoo's instructions, but I've seen other threads that don't mention it - in any case it won't hurt.
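One more thing: after editing "/etc/default/grub" you have to regenerate the grub2 config, otherwise the change does nothing, e.g.:
Code:
# regenerate grub.cfg (on some distros the path is /boot/grub2/grub.cfg instead)
grub-mkconfig -o /boot/grub/grub.cfg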


I don't remember if having the mdadm stuff compiled into the kernel (instead of as modules) was a must, but I would go for it if you can - just to be on the safe side...
 
Old 01-27-2017, 06:17 PM   #3
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,126

Rep: Reputation: 4120
Nothing is ever as simple as it first looks.
As well as copying the data, you have to install grub to both devices (note that grub-install is a Debian-ism; CentOS likely won't have it). You also have to ensure that each install of grub refers only to the disk it is installed on - i.e. you can't simply run setup from the good system against the second disk, as it will refer back to the first disk for stage2 loading. Bad things happen if the first disk fails.
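With legacy grub that means remapping the second disk inside the grub shell so that setup writes everything relative to that disk - along these lines (illustrative only; check the device names against your own setup):
Code:
# grub
grub> device (hd0) /dev/sdb    # tell grub to treat sdb as the first BIOS disk
grub> root (hd0,0)             # the /boot partition on that disk
grub> setup (hd0)              # install stage1 to sdb's MBR, stage2 read from sdb
grub> quit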
Then you have to make sure grub and the initrd can handle mdadm devices - on both disks. This is very version/distro specific.
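On CentOS 6 the initrd side would be a dracut rebuild so that the mdadm bits get included - something like this (a sketch; --mdadmconf pulls your /etc/mdadm.conf into the image):
Code:
# rebuild the initramfs for the running kernel with mdadm support
dracut -f --mdadmconf /boot/initramfs-$(uname -r).img $(uname -r)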

Can be done, but as you are currently stuck with legacy grub, you might as well upgrade to CentOS 7 and set it up from the start - and get grub2 as a bonus.
Not to mention systemd ....
 
Old 01-27-2017, 08:54 PM   #4
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
GRUB legacy doesn't understand RAID devices. The only way /boot can be on a RAID device is if you use RAID header format 0.9 or 1.0, which are placed at the end of the device. An unaware program will just see the filesystem. The danger there is that anything that writes to the filesystem in that condition (and that includes just mounting it read/write) will desynchronize the array and compromise the data.
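You can check which format an existing array uses with mdadm, e.g.:
Code:
# "Version : 0.90" or "Version : 1.0" means the metadata sits at the end of
# the device, where an unaware program (like GRUB legacy) won't see it
mdadm --detail /dev/md0 | grep -i version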

Beyond that, if you're lucky and the BIOS treats whatever disk it booted from as the "first BIOS disk" (0x80), then simply running "grub-install /dev/sdb" should "just work". If not, the suggestions I've seen are to have a fallback stanza in the GRUB menu with "root (hd1,0)" in place of "root (hd0,0)", like the sketch below. No, I don't think that fallback is going to be automatic, and that would pose a problem for an unattended reboot. Testing whether all of that works, for the various ways the first disk might fail and how your BIOS might handle it, is quite a challenge.
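For illustration, a fallback stanza in /boot/grub/grub.conf might look like this (kernel and initrd names taken from your uname output; whether the fallback actually triggers depends on how the first disk fails):
Code:
default 0
fallback 1

title CentOS 6 (first disk)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-642.6.1.el6.x86_64 ro root=/dev/mapper/vg_server-lv_root
        initrd /initramfs-2.6.32-642.6.1.el6.x86_64.img

title CentOS 6 (fallback, second disk)
        root (hd1,0)
        kernel /vmlinuz-2.6.32-642.6.1.el6.x86_64 ro root=/dev/mapper/vg_server-lv_root
        initrd /initramfs-2.6.32-642.6.1.el6.x86_64.img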
 
  

