LinuxQuestions.org (/questions/)
-   Linux - Hardware (http://www.linuxquestions.org/questions/linux-hardware-18/)
-   -   problems adding disks/ raid configuration - intel embedded hardware raid (http://www.linuxquestions.org/questions/linux-hardware-18/problems-adding-disks-raid-configuration-intel-embedded-hardware-raid-834075/)

birdmanpdx 09-23-2010 02:17 PM

problems adding disks/ raid configuration - intel embedded hardware raid
 
Have a server running CentOS 5 with 2 disks in a RAID 1 configuration, using Intel embedded server RAID. We set up the RAID configuration and it all works fine.

Now we are trying to add 2 more disks to the system to increase space. We don't want to touch the existing disks, so we wanted to add a new logical drive configuration for the 2 new physical drives. We installed the disks and entered the BIOS config utility, attempting to create a new logical array for these two new physical drives.

When we got to the appropriate menu (we tried this using "easy" configuration, and the behavior was the same if we used "view/add configuration"), it showed 4 drives: 0, 1, 2, and 3. Drives 0 and 1 each had a drive number next to them, whereas 2 and 3 did not. This is what I would expect, since 0 and 1 are already configured into an array. However, if I used the arrow keys to move down to 2 and 3, it wouldn't let me; it just skipped over them. I was only allowed to select 0 or 1. It was almost as if 2 and 3 simply weren't available.

I then booted the machine normally to check the filesystem and see whether the new drives were visible there. If I ran fdisk -l, I got this output:

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14        2563    20482875   83  Linux
/dev/sdb3            2564        3085     4192965   82  Linux swap / Solaris
/dev/sdb4            3086       60779   463427055   83  Linux

and if I run df -k, I get this:

[root@localhost dev]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2             19840924   3433940  15382844  19% /
/dev/sda4            448913220 417168396   8573472  98% /u1
/dev/sda1               101086     16336     79531  18% /boot
tmpfs                  1036096         0   1036096   0% /dev/shm

These reflect the 2 original disks, but show nothing about the two new ones. I'm not sure whether anything is even supposed to show up here; we were just trying to dig for info.
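For reference, a few other places to look for whether the kernel sees the new drives at all (a sketch assuming CentOS 5's 2.6 kernel; exact output will vary by system):

```shell
# Sketch of diagnostics for checking which drives the kernel knows about
# (assumes a 2.6 kernel as shipped with CentOS 5).

# Ask the SCSI layer which devices it has registered:
cat /proc/scsi/scsi 2>/dev/null || true

# List the block device nodes the kernel has created:
ls -l /dev/sd* 2>/dev/null || true

# Look for drive-detection messages from boot:
dmesg 2>/dev/null | grep -i 'scsi\|sd[a-z]' | tail -20 || true
```

Note that with hardware RAID the OS only sees the logical drives the controller exports, not the raw physical disks, so until the two new disks are configured into an array in the BIOS utility they normally won't appear in any of these listings.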

So, are we doing this right? And/or what might this behavior indicate?

thanks
