LinuxQuestions.org

-   Slackware (http://www.linuxquestions.org/questions/slackware-14/)
-   -   Question about Raid disk format (http://www.linuxquestions.org/questions/slackware-14/question-about-raid-disk-format-4175440307/)

vdemuth 12-06-2012 11:47 AM

Question about Raid disk format
 
A quick overview: I have just converted an old server, which was previously set up as JBOD with 4 drives, to 2 distinct drives plus a further 2 as a Raid0 array.
This is a precursor to getting some new large drives and doing the job properly, by way of a bit of self-learning.
Previously I had 4 drives, /dev/sda to /dev/sdd set up as follows:
All under Slackware 13.37

/dev/sda=/dev/root (160Gb)
/dev/sdb=/media (160Gb)
/dev/sdc=/home (80Gb)
/dev/sdd=/var/www (80Gb)

All formatted as ReiserFS

So, I moved the /home and /var/www partitions onto /dev/sda and then, using WEBMIN, created a Raid0 array formatted as Ext4.

Set as /dev/md0=MediaFiles

Activated the array and tested it by transferring some of the files from /media to MediaFiles, and it all seems to work OK.

However, interrogating the drives individually still shows /dev/sdc and /dev/sdd as being formatted as ReiserFS, whilst interrogating /dev/md0 shows the array formatted as Ext4.

The ultimate aim is four 1Tb drives set up as Raid10.

So, to a couple of questions. How is it possible to have this discrepancy about the formatted drive types and is there any danger in leaving them well alone?

How do I know that the stripe is actually striping?

Is there any way of looking at the data and seeing half of it on /dev/sdc and the other half on /dev/sdd?

As I said, just a self education exercise at the moment.

wildwizard 12-07-2012 01:47 AM

Quote:

Originally Posted by vdemuth (Post 4843926)
How is it possible to have this discrepancy about the formatted drive types and is there any danger in leaving them well alone?

Ignore it. It's not so much your change to RAID that did it as the way that filesystems and partitions lay out their data on the disk. (If you want to avoid it in future, try writing zeros to the first 1MB of the disk when you want a "clean slate".)
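If you want to try that "clean slate" wipe safely first, here is a sketch against a scratch image file standing in for a disk (disk.img is just an example name; on a real disk you would point of= at /dev/sdX, which destroys any signatures there):

```shell
# create a 4 MiB scratch image standing in for a disk (safe to experiment on)
dd if=/dev/urandom of=disk.img bs=1M count=4 status=none
# zero the first 1 MiB, where filesystem and md signatures normally live;
# conv=notrunc leaves the rest of the image untouched
dd if=/dev/zero of=disk.img bs=1M count=1 conv=notrunc status=none
```

After this, tools that probe the start of the device no longer see any stale signature there.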

Quote:

How do I know that the stripe is actually striping?
Code:

cat /proc/mdstat

Though in RAID 0 it's either working completely or not at all.

Quote:

Is there any way of looking at the data and seeing half of it on /dev/sdc and the other half on /dev/sdd?
No. The way RAID 0 works means the data is broken in 2, not split in 2. While some forensic tools may be able to extract fragments from such a disk, in normal use, if a RAID 0 member disk goes offline, the data is lost.
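To see that "broken in 2" concretely, here is a toy sketch of RAID 0 chunking using two small image files as stand-in member disks (the filenames and the 2-byte chunk size are arbitrary, and real md chunks are much larger):

```shell
# stripe an 8-byte "array" across two member images in 2-byte chunks:
# chunks 0 and 2 land on disk0, chunks 1 and 3 on disk1, just as RAID 0
# alternates chunks across members
printf 'AABBCCDD' > data.bin
for i in 0 1 2 3; do
  dd if=data.bin of=disk$((i % 2)).img bs=2 skip=$i seek=$((i / 2)) \
     count=1 conv=notrunc status=none
done
# disk0.img now holds AACC and disk1.img holds BBDD; neither member
# contains a readable copy of the file on its own
```

Lose either image and only interleaved fragments remain, which is why a failed RAID 0 member takes the whole array's data with it.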

vdemuth 12-07-2012 02:27 AM

Hi wildwizard,

Thanks for that. A bit clearer now.

slugman 12-08-2012 02:54 AM

hmm
 
Anyone with more knowledge and experience, please correct me if I am wrong... However, according to my research, the superblock locations of an md array and of a ReiserFS filesystem are different.

For both ReiserFS and md arrays, the superblock contains the metadata. The exact information kept, and its location, differ between the two.

According to the info I read for ReiserFS, the superblock is the 16th block from the start of the disk, where the first 64k is kept free for partition information/boot loader stuff.

The superblock location for an md array depends on the metadata version it uses. Modern versions of mdadm use version 1.2 metadata by default, and a v1.2 superblock is located 4k from the beginning of the device.

I believe that when you query /dev/sdc and /dev/sdd individually, the residual ReiserFS superblock data still exists, which is why you still see them reported as ReiserFS.

Unless you defined a different blocksize for your ReiserFS disks, the default is 4k. As an experiment, you can try the following (note it is seek, not skip, that positions the write within the output device, and seeking 16 blocks of 4096 bytes lands exactly on the ReiserFS superblock at the 64k offset; only do this on disks whose contents you can afford to lose):

Code:

dd if=/dev/zero of=/dev/sdc bs=4096 seek=16 count=1

Then retry querying /dev/sdc and /dev/sdd to see if they still show as ReiserFS disks, i.e.

Quote:

fsck -N /dev/sdc /dev/sdd
Also, the best way to determine whether your raid 0 array is seeing any benefit from striping is to run an I/O test against the array.

I recommend creating two test files: one on the array, /MediaFiles/test, and an arbitrary test file on one of the non-raid disks, i.e. /home/test, and then performing the following:
Code:

dd if=/dev/zero of=/home/test bs=1M count=256
dd if=/dev/zero of=/MediaFiles/test bs=1M count=256

Compare the times of the two I/O tests. If the second test is significantly faster than the first, you'll know striping is in effect.
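One caveat with that test as written: dd returns before the page cache is flushed, so the reported rate can largely reflect RAM, not the disks. Adding conv=fdatasync makes dd include the flush in its own timing. A sketch against a local scratch file (the ddtest.bin path and the smaller 16 MiB size are just for illustration; point of= at the filesystem you want to measure):

```shell
# sequential-write test; conv=fdatasync forces the data to disk before
# dd reports its elapsed time, so the rate reflects the drive, not the cache
dd if=/dev/zero of=ddtest.bin bs=1M count=16 conv=fdatasync
```

Run the same command once on the plain disk and once on the array, then compare the MB/s figures dd prints.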

It is important to note, however, that this is not a complete apples-to-apples test (more like Gala apples and Granny Smith): the filesystem overhead differs between your root partition and /MediaFiles, considering root is still ReiserFS while /MediaFiles is Ext4.

Note: I/O performance increases significantly for Raid arrays when the block size is a multiple of the chunk size. I can't recall the sweet spot, but I remember you will get different results for different block sizes.
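As a worked example of that alignment point (the 512 KiB chunk size and 2-disk count below are assumptions for illustration, not values read from this array; check yours with mdadm --detail /dev/md0): a full stripe is the chunk size times the number of data disks, so a dd block size that is a multiple of the stripe keeps every write hitting all members evenly.

```shell
# full-stripe width for a hypothetical 2-disk RAID 0 with 512 KiB chunks
chunk_kib=512
data_disks=2
stripe_kib=$((chunk_kib * data_disks))
echo "${stripe_kib} KiB full stripe"   # prints: 1024 KiB full stripe
```

With these assumed numbers, bs=1M in the dd tests above would be exactly one full stripe per block.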

Sources:
a. Linux MD Raid Wiki: https://raid.wiki.kernel.org/index.p...rblock_formats

b. ReiserFS: http://homes.cerias.purdue.edu/~flor...r/reiserfs.php

