How to add RAID1 without a reboot
I recently learned how to do this and thought posting it might help others, since I had a difficult time finding exact instructions for it on the web. It's laid out step by step to hopefully help you get the job done "quick and dirty," so to speak.
How to add a 2nd mirrored (RAID 1) set of SCSI drives on a Linux server without a reboot.

STEP 1: After inserting the new disks, rescan the SCSI bus. The "0 0 2 0" part in the example below should be changed to reflect the Host, Channel, SCSI ID, and LUN. In most cases only the SCSI ID changes. The command to add SCSI ID 2 is:

echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi

Next check that the drives came online:

cat /proc/scsi/scsi

The output should look something like this if you have 4 drives on one SCSI controller:

Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: DDYS-T18350M   Rev: S80D
  Type:   Direct-Access                  ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: IBM      Model: DDYS-T18350M   Rev: S80D
  Type:   Direct-Access                  ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 02 Lun: 00
  Vendor: IBM      Model: DDYS-T18350M   Rev: S80D
  Type:   Direct-Access                  ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 03 Lun: 00
  Vendor: IBM      Model: DDYS-T18350M   Rev: S80D
  Type:   Direct-Access                  ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 06 Lun: 00
  Vendor: ESG-SHV  Model: SCA HSBP M10   Rev: 0.03
  Type:   Processor                      ANSI SCSI revision: 02

STEP 2: Find the existing block device names in use and define the new RAID device in /etc/raidtab.

cat /etc/raidtab

The output will look something like this:

raiddev /dev/md0              <- 1st RAID device, mounted on /boot
    raid-level            1
    nr-raid-disks         2
    chunk-size            64k
    persistent-superblock 1
    #nr-spare-disks       0
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1

raiddev /dev/md1              <- 2nd RAID device, mounted on /
    raid-level            1
    nr-raid-disks         2
    chunk-size            64k
    persistent-superblock 1
    #nr-spare-disks       0
    device                /dev/sda5
    raid-disk             0
    device                /dev/sdb5
    raid-disk             1

The /dev/sda1 and /dev/sdb1 entries tell you the block device names in use. They translate like this: "sd" means SCSI device. The letter that follows indicates which SCSI device it is: "a" is the first, "b" the second, and so on.
The "1" means the first partition on that particular SCSI device. So you will be adding devices /dev/sdc1 and /dev/sdd1. Edit /etc/raidtab with the vi editor like so:

vi /etc/raidtab

Define the new mirror set (RAID device) at the end of the file like this:

raiddev /dev/md2
    raid-level            1
    nr-raid-disks         2
    chunk-size            64k
    persistent-superblock 1
    #nr-spare-disks       0
    device                /dev/sdc1
    raid-disk             0
    device                /dev/sdd1
    raid-disk             1

Use i to enter insert mode, then type in the information above after the 2nd RAID device's entry. The order of this information is important. When finished, hit Escape and type :wq to write the file and quit the editor.

STEP 3: Run fdisk to define the RAID partition types. This needs to be done for each physical device in the new array. Use the m command for help.

fdisk /dev/sdc

- Delete any previously existing partitions on this device.
- Create the new partition.
- Change the partition's system id to fd (Linux RAID autodetect).
- Finally, w writes the table to disk and exits the fdisk utility.

Do this for each disk of every new RAID device defined in the /etc/raidtab file.

STEP 4: Run the mkraid utility to create the RAID device.

mkraid /dev/md2

If the following message appears, follow its instructions, but use the --really-force argument if -f won't work:

handling MD device /dev/md2
analyzing super-block
disk 0: /dev/sdc1, 17920476kB, raid superblock at 17920384kB
/dev/sdc1 appears to be already part of a raid array -- use
-f to force the destruction of the old superblock
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
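Taken together, steps 1 through 4 can be sketched as a small shell script. This is only an illustration under the assumptions of this post (a raidtools-era system, two new disks at SCSI IDs 2 and 3 showing up as /dev/sdc and /dev/sdd); the DRY_RUN guard is my addition so the destructive commands can be previewed before they touch real hardware:

```shell
# Sketch of steps 1-4. With DRY_RUN=1 (the default here) each command is
# printed instead of executed, since rescanning the bus, repartitioning,
# and mkraid are destructive operations on a live box.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        printf '+ %s\n' "$*"
    else
        "$@"
    fi
}

# Step 1: rescan the SCSI bus for the new disks (host 0, channel 0, LUN 0).
for id in 2 3; do
    run sh -c "echo 'scsi add-single-device 0 0 $id 0' > /proc/scsi/scsi"
done

# Step 3: one full-size partition of type fd (Linux RAID autodetect) per
# disk; the printf feeds fdisk the keystrokes you would type interactively
# (n=new, p=primary, 1=first partition, two defaults, t=type, fd, w=write).
for disk in /dev/sdc /dev/sdd; do
    run sh -c "printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk $disk"
done

# Step 4: build the mirror already declared in /etc/raidtab (step 2).
run mkraid /dev/md2
```

Once the printed commands look right for your hardware, run it again as root with DRY_RUN=0.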
If you wish to check the status of your RAID devices, run:

cat /proc/mdstat

You should see something like this:

Personalities : [linear] [raid0] [raid1] [raid5] [translucent]
read_ahead 1024 sectors
md2 : active raid1 sdd1[1] sdc1[0]
      17920384 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
      56128 blocks [2/2] [UU]
md1 : active raid1 sdb5[1] sda5[0]
      17334016 blocks [2/2] [UU]
unused devices: <none>

If the new RAID device does not show up, try running:

raidstart /dev/<device>

STEP 5: Format the RAID device with the ext2 file system.

mke2fs /dev/md2

STEP 6: Define a mount point. Create a directory with a name that describes the new mirror array, like so:

mkdir /mnt/raid1data

Next edit the /etc/fstab file to include the mount point. This allows the array to mount automatically on reboot.

vi /etc/fstab

The file may look something like this if you already have 2 RAID arrays:

/dev/md1      /               ext2     defaults         1 1
/dev/md0      /boot           ext2     defaults         1 2
/dev/cdrom    /mnt/cdrom      iso9660  noauto,owner,ro  0 0
/dev/fd0      /mnt/floppy     auto     noauto,owner     0 0
none          /proc           proc     defaults         0 0
none          /dev/pts        devpts   gid=5,mode=620   0 0
/dev/sda6     swap            swap     defaults         0 0
/dev/sdb6     swap            swap     defaults         0 0

Add a line to the bottom of the file to define the mount point, like this:

/dev/md2      /mnt/raid1data  ext2     defaults         0 0

STEP 7: Mount the device into the file system tree:

mount /dev/md2 (or mount /mnt/raid1data)

The RAID volume is now ready to be used. :)

Let me know what you think. Thanks.
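One thing worth knowing when reading /proc/mdstat: the [2/2] [UU] pairs mean both members of each mirror are active, while a failed or missing member shows up as an underscore (e.g. [U_]). A small health check along those lines (a sketch only; the function name is mine, not a standard tool):

```shell
# Succeed if no array in the given mdstat-format file is degraded.
# Healthy mirrors show [UU]; a missing/failed member appears as _ ([U_], [_U]).
mdstat_healthy() {
    ! grep -Eq '\[[U_]*_[U_]*\]' "$1"
}
```

On the sample output above, mdstat_healthy /proc/mdstat would succeed; it can be handy in a cron job that warns you when a disk drops out of a mirror.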
Step-by-step instructions for setting up RAID 1 after compiling kernel 2.4.18
Hi,
I am new to Linux. I am using Red Hat 7.2. I would appreciate it if you could provide step-by-step instructions for setting up RAID 1 after recompiling with the new 2.4.18 kernel. Regards, x2000koh@yahoo.com.sg