LinuxQuestions.org


Myiagros 10-26-2009 07:43 AM

Adding RAID 10 to current setup
 
I've been attempting to get a RAID 10 setup going with a current Linux install of CentOS 5.3. I have set the RAID up with the onboard Intel driver (JMicron) and it shows as RAID10 when I boot up. Once I get into Linux, however, and run fdisk -l, I get the following output.
Quote:

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdd: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 91202 732574583+ ee EFI GPT

Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sde1 * 1 13 104391 83 Linux
/dev/sde2 14 60801 488279610 8e Linux LVM
The four 750 GB drives are the ones set up in the RAID array, so I'm assuming the GPT warning is for that. My other guess as to why the drives are shown without partitions is that I have yet to add any. Would I do that by using GParted and creating a max-size partition on each drive? Then of course I would have to add an entry to fstab; which device would I put as the source?
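Since fdisk cannot read GPT, GNU Parted (as the warning above suggests) will show what is actually on those disks. A minimal read-only check, assuming the four array members really are sda through sdd as in the output above:
Quote:

# Print whatever label (GPT or MBR) and partitions each array member carries.
# Read-only; assumes the four 750 GB members are sda through sdd.
for d in /dev/sd[a-d]; do
    parted -s "$d" print
done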

cardy 10-26-2009 08:30 AM

The JMicron controller is supported through the dmraid (Device Mapper RAID) tool. You will need to make sure you have the dmraid RPM package installed.

I would suggest doing:
Quote:

rpm -qa | grep dmraid
If it comes back with a package name then you have it installed; if not, try
Quote:

yum install dmraid

Myiagros 10-26-2009 08:57 AM

dmraid is installed and I just updated it. I'm unsure how to use the program, though. dmraid -r gave this output:
[homeworld] ~ > dmraid -r
/dev/sda: isw, "isw_dijhgahgdb", GROUP, ok, 1465149165 sectors, data@ 0
/dev/sdb: isw, "isw_dijhgahgdb", GROUP, ok, 1465149165 sectors, data@ 0
/dev/sdc: isw, "isw_dijhgahgdb", GROUP, ok, 1465149165 sectors, data@ 0
/dev/sdd: isw, "isw_dijhgahgdb", GROUP, ok, 1465149165 sectors, data@ 0
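For reference, dmraid can also report the detected sets themselves rather than just the member disks; a quick read-only check of what it found might be:
Quote:

# List the discovered RAID sets with their status (read-only query).
dmraid -s
# List the member disks again for comparison.
dmraid -r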

Myiagros 10-26-2009 10:35 AM

I'm playing around with some commands at the moment, trying to get it set up properly. I'll put down whatever commands I use and hopefully it works out right.

Quote:

[homeworld] ~ > dmraid -a yes
RAID set "isw_dijhgahgdb_Volume0-0" was activated
RAID set "isw_dijhgahgdb_Volume0-1" was activated
RAID set "isw_dijhgahgdb_Volume0" was activated
device "isw_dijhgahgdb_Volume0-0" is now registered with dmeventd for monitoring
device "isw_dijhgahgdb_Volume0-1" is now registered with dmeventd for monitoring
device "isw_dijhgahgdb_Volume0" is now registered with dmeventd for monitoring

[homeworld] ~ > ls /dev/mapper
control isw_dijhgahgdb_Volume0-0 VolGroup00-LogVol00
isw_dijhgahgdb_Volume0 isw_dijhgahgdb_Volume0-1 VolGroup00-LogVol01

Volume0 refers to the RAID volume name I set with the Intel controller at boot, and isw marks it as Intel Software RAID. To set this up in fstab, my guess is that /dev/mapper/isw_dijhgahgdb_Volume0 has to be the source. If that is correct, then the only issue I am still trying to figure out is the partitioning of the array.
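For what it's worth, a minimal sketch of that idea, assuming you skip partitioning and put a filesystem straight onto the mapped volume (the ext3 choice and the /mnt/raid10 mount point are only examples, not from this thread):
Quote:

# Create a filesystem directly on the activated dmraid volume (destroys any data on it).
mkfs.ext3 /dev/mapper/isw_dijhgahgdb_Volume0

# Example mount point and fstab entry, then mount it.
mkdir -p /mnt/raid10
echo '/dev/mapper/isw_dijhgahgdb_Volume0  /mnt/raid10  ext3  defaults  0 2' >> /etc/fstab
mount /mnt/raid10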

Myiagros 10-26-2009 02:49 PM

I tried another approach to setting up the array. I started over and used mdadm to create the arrays, which went fine. I enabled monitoring for the arrays in case anything happened to them, and right away I got an email saying that the array was degraded; the same thing was being shown by the Intel manager on the first attempt. I shut down and re-organized all the cables thinking something was loose, started back up, and got the same thing. Here is the output of the email.
Quote:

This is an automatically generated mail message from mdadm
running on homeworld

A DegradedArray event had been detected on md device /dev/md1.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sdc1[0]
732571904 blocks [2/1] [U_]

md0 : active raid1 sdb1[1]
732571904 blocks [2/1] [_U]

unused devices: <none>
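As an aside, a quick way to see exactly which member is missing or failed in a degraded array (read-only queries, using the device names from the mdstat output above):
Quote:

# Show each array's state and which slot is removed or faulty.
mdadm --detail /dev/md0
mdadm --detail /dev/md1

# The same summary that mdadm mails out, on demand.
cat /proc/mdstat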
Output of fdisk -l
Quote:

[root@homeworld ~]# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 60801 488279610 8e Linux LVM

Disk /dev/sdb: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 91201 732572001 fd Linux raid autodetect

Disk /dev/sdc: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 * 1 91201 732572001 fd Linux raid autodetect

Disk /dev/sdd: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 * 1 91201 732572001 fd Linux raid autodetect

WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sde: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sde1 1 91202 732574583+ ee EFI GPT

Disk /dev/md0: 750.1 GB, 750153629696 bytes
2 heads, 4 sectors/track, 183142976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

This doesn't look like a partition table
Probably you selected the wrong device.

Device Boot Start End Blocks Id System
/dev/md0p1 ? 27266189 240014990 850995205 72 Unknown
Partition 1 does not end on cylinder boundary.
/dev/md0p2 ? 91131273 159128113 271987362 74 Unknown
Partition 2 does not end on cylinder boundary.
/dev/md0p3 ? 21081743 21081743 0 65 Novell Netware 386
Partition 3 does not end on cylinder boundary.
/dev/md0p4 336617473 336623927 25817+ 0 Empty
Partition 4 does not end on cylinder boundary.

Partition table entries are not in disk order

Disk /dev/md1: 750.1 GB, 750153629696 bytes
2 heads, 4 sectors/track, 183142976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table
By the looks of it, /dev/sde is the problem. It has one more block than the other three identical drives, and somehow it ended up with partition ID ee (EFI GPT) instead of the fd (Linux raid autodetect) type I told it to format with.
It's the end of the day, however, so if there are any solutions I'll try them out in the morning.
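If /dev/sde really is just carrying a stale GPT label, one possible way to wipe it and re-create a single Linux raid autodetect partition is sketched below. This is destructive and assumes there is nothing on that drive worth keeping:
Quote:

# WARNING: destroys everything on /dev/sde.
# Replace the GPT label with a plain msdos (MBR) label.
parted -s /dev/sde mklabel msdos

# Create one partition spanning the disk with type fd (Linux raid autodetect).
echo ',,fd' | sfdisk /dev/sde

# Verify the result.
fdisk -l /dev/sde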

cardy 10-28-2009 06:55 AM

Mmm, the dmraid solution seemed to have defined the device. If you want to use mdadm then you would have to turn off all RAID in the BIOS so the disks show up as plain individual disks. You can then use mdadm to set up software RAID on them.
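For example, once the BIOS RAID is switched off and the four drives show up individually, a native mdadm RAID 10 could be created roughly like this. This is only a sketch: it assumes the members are sdb1 through sde1, each already partitioned as type fd, and it is a single RAID 10 rather than the nested RAID 1 pairs shown in the mdstat output above:
Quote:

# Create a single four-disk RAID 10 array (destroys any data on the listed partitions).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Record the array so it is assembled on boot, and set a mail address
# for degraded-array notifications.
mdadm --detail --scan >> /etc/mdadm.conf
echo 'MAILADDR root' >> /etc/mdadm.conf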

Quote:

[homeworld] ~ > ls /dev/mapper
control isw_dijhgahgdb_Volume0-0 VolGroup00-LogVol00
isw_dijhgahgdb_Volume0 isw_dijhgahgdb_Volume0-1 VolGroup00-LogVol01
For the above command I would suggest doing:

Quote:

ls -l /dev/mapper
That should show more clearly what device files are available after running the

Quote:

dmraid -a yes

It seems to me that at this point you had the RAID device activated and working; you just needed to partition it and apply a file system.
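A rough sketch of those last two steps on the activated volume (the partition layout, the kpartx step, and the ext3 filesystem are only examples; kpartx, if installed, creates the device node for the new partition, and the resulting name may end in p1 or 1 depending on the version):
Quote:

# Label the mapped volume and create one partition spanning it (destructive).
parted -s /dev/mapper/isw_dijhgahgdb_Volume0 mklabel msdos
parted -s /dev/mapper/isw_dijhgahgdb_Volume0 mkpart primary 0% 100%

# Create the device node for the new partition.
kpartx -a /dev/mapper/isw_dijhgahgdb_Volume0

# Filesystem choice is just an example.
mkfs.ext3 /dev/mapper/isw_dijhgahgdb_Volume0p1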

Myiagros 10-28-2009 07:27 AM

Yesterday I managed to get the array going in RAID 10 using mdadm. As soon as I turned on monitoring, I got an email saying the array was degraded. It looks like one of the drives is no good anymore, so it's getting RMA'd.

