12-06-2012, 11:47 AM   #1
vdemuth
Question about RAID disk format


A quick overview: I have just converted an old server, previously set up as JBOD with four drives, to two distinct drives plus a further two as a RAID 0 array.
This is a precursor to getting some new large drives and doing the job properly, by way of a bit of self-learning.
Previously I had four drives, /dev/sda to /dev/sdd, set up as follows, all under Slackware 13.37:

/dev/sda = /dev/root (160GB)
/dev/sdb = /media (160GB)
/dev/sdc = /home (80GB)
/dev/sdd = /var/www (80GB)

All formatted as ReiserFS

So, I moved the /home and /var/www partitions onto /dev/sda and then, using Webmin, created a RAID 0 array formatted as ext4.

Set as /dev/md0=MediaFiles
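For reference, the command-line equivalent of what Webmin did here would be roughly the following (just a sketch; Webmin may have partitioned the disks first, in which case the members would be /dev/sdc1 and /dev/sdd1 rather than the whole disks):

Code:
# create a two-disk RAID 0 array from the two 80GB drives, then put ext4 on it
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0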

I activated the array and tested it by transferring some of the files from /media to MediaFiles, and it all seems to work OK.

However, interrogating the drives individually still shows /dev/sdc and /dev/sdd as being formatted as ReiserFS, whilst interrogating /dev/md0 shows the array formatted as ext4.

The ultimate aim is four 1TB drives set up as RAID 10.

So, on to a couple of questions. How is it possible to have this discrepancy about the formatted drive types, and is there any danger in leaving them well alone?

How do I know that the stripe is actually striping?

Is there any way of looking at the data and seeing half of it on /dev/sdc and the other half on /dev/sdd?

As I said, this is just a self-education exercise at the moment.
 
12-07-2012, 01:47 AM   #2
wildwizard

Quote:
Originally Posted by vdemuth View Post
How is it possible to have this discrepancy about the formatted drive types and is there any danger in leaving them well alone?
Ignore it. It's not so much your change to RAID that did it as the way the filesystems and partitions lay their data out on the disk. (If you want to avoid it in future, try writing zeros to the first 1MB of the disk when you want a "clean slate".)
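Something like this would do it (destructive, so the target is written here as the placeholder /dev/sdX; double-check the device name before running it):

Code:
# WARNING: wipes the partition table and any filesystem/RAID signatures in the first 1MB
dd if=/dev/zero of=/dev/sdX bs=1M count=1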

Quote:
How do I know that the stripe is actually striping?
Code:
cat /proc/mdstat

Though in RAID 0 it's either working completely or not at all.
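If the array is assembled and striping, you should see both member disks listed on the md0 line, something along these lines (the device names and sizes here are only illustrative):

Code:
Personalities : [raid0]
md0 : active raid0 sdd[1] sdc[0]
      156155904 blocks super 1.2 512k chunks

unused devices: <none>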

Quote:
Is there any way of looking at the data and seeing half of it on /dev/sdc and the other half on /dev/sdd?
No. The way RAID 0 works means the data is broken in two rather than split in two: files are interleaved across both disks in chunks, not stored whole on one disk or the other. Some forensic tools may be able to extract fragments from a single member disk, but in normal use, if a RAID 0 member disk goes offline, the data is lost.
 
12-07-2012, 02:27 AM   #3
vdemuth

Hi wildwizard,

Thanks for that. A bit clearer now.
 
12-08-2012, 02:54 AM   #4
slugman

Anyone with more knowledge and experience, please correct me if I am wrong... However, according to my research, the superblock locations of an md array and of a ReiserFS filesystem are different.

For both ReiserFS and md arrays, the superblock contains the metadata; the exact information kept, and where it sits on the disk, differs between the two.

According to the info I read for ReiserFS, the superblock sits 64k from the start of the disk (block 16 with the default 4k block size); the first 64k is kept free for partition information/boot loader stuff.
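If you want to see that for yourself, you can read the region directly. This is a read-only check and assumes the default 4k block size, so the superblock is at the 64k mark:

Code:
# dump 64k starting at the 64k offset and look for the ReiserFS magic string
dd if=/dev/sdc bs=64k skip=1 count=1 2>/dev/null | hexdump -C | grep -i reiser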

The superblock location for an md array depends on the metadata version it uses. All modern versions of mdadm use version 1.2 metadata by default, and a v1.2 superblock is located 4k from the beginning of the device.
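You can confirm which metadata version your array actually uses with mdadm (assuming the array is /dev/md0 and /dev/sdc is one of its members):

Code:
# read the md superblock straight from a member disk
mdadm --examine /dev/sdc
# or ask the assembled array
mdadm --detail /dev/md0 | grep -i version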

I believe that when you query /dev/sdc and /dev/sdd individually, the residual ReiserFS superblock data still exists on the disks, which is why they still show up as ReiserFS.
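One way to see both signatures side by side is wipefs from util-linux; run without options it only lists what it finds (it erases nothing), and the offsets it reports should match the locations described above:

Code:
wipefs /dev/sdc
wipefs /dev/sdd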

Unless you defined a different block size for your ReiserFS disks, the default is 4k. As an experiment, you can try the following to wipe the old ReiserFS superblock on /dev/sdc:

Code:
# zero the old ReiserFS superblock: one 4k block at the 64k mark (note seek, not skip)
dd if=/dev/zero of=/dev/sdc bs=4096 seek=16 count=1
And then retry querying /dev/sdc and /dev/sdd to see if they still show up as ReiserFS disks, i.e.

Code:
fsck -N /dev/sdc /dev/sdd
Also, the best way to determine whether your RAID 0 array is getting any benefit from striping is to perform an I/O test against the array.

I recommend creating two test files: one on the array, /MediaFiles/test, and an arbitrary test file on one of the non-RAID disks, e.g. /home/test, and performing the following:
Code:
# conv=fdatasync forces the data to disk before dd reports its timing,
# so the figures reflect real write speed rather than the page cache
dd if=/dev/zero of=/home/test bs=1M count=256 conv=fdatasync
dd if=/dev/zero of=/MediaFiles/test bs=1M count=256 conv=fdatasync
Compare the times of the two I/O tests. If the second test is significantly faster than the first, you'll know striping is in effect.

It is important to note, however, that this is not a complete apples-to-apples test (more like Gala apples and Granny Smith). The filesystem overhead will differ between your root partition and /MediaFiles, considering the root is still ReiserFS and /MediaFiles is ext4.

Note: I/O performance increases significantly for RAID arrays when the block size is a multiple of the array's chunk size. I can't recall the sweet spot, but I remember you will get different results for different block sizes.
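To find out what chunk size the array was created with (assuming it is /dev/md0), either of these will tell you, and you can then pick dd block sizes that are multiples of it:

Code:
mdadm --detail /dev/md0 | grep -i chunk
cat /proc/mdstat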

Sources:
a. Linux MD Raid Wiki: https://raid.wiki.kernel.org/index.p...rblock_formats

b. ReiserFS: http://homes.cerias.purdue.edu/~flor...r/reiserfs.php
 
  

