LinuxQuestions.org
Old 11-04-2010, 04:55 AM   #1
djspits
LQ Newbie
 
Registered: Oct 2009
Location: The Hague, The Netherlands
Distribution: Ubuntu 10.04 LTS
Posts: 15

Rep: Reputation: 0
The intended use of RAID and LVM?


Using: Linux server 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010 x86_64 GNU/Linux

I have two identical 2 TB eSATA external hard disks. Until recently I had them configured as a RAID-1 array. I find that resyncing takes a lot of time, reducing availability too much.

Question: is it a good idea to split each disk into 4 partitions of 500 GB and use LVM to make them appear as one again?

The idea is to reduce syncing time to a quarter, but I might be over-designing here. How likely is it that only 1 partition would need resyncing? Is there a significant performance penalty when combining RAID and LVM? Is there a simpler solution? How about using inotify to automatically back up essential changes?
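For the inotify idea, this is roughly the sort of thing I have in mind (untested sketch; the source and backup paths are just placeholders, and it needs the inotify-tools package):

Code:
#!/bin/bash
# Watch the shared data directory and copy each changed file to a backup
# location as soon as it is written.
SRC=/srv/data          # placeholder: directory to protect
DST=/mnt/backup/data   # placeholder: where the copies go

inotifywait -m -r -e close_write,moved_to --format '%w%f' "$SRC" |
while read -r file; do
    # recreate the file's full path under $DST
    rsync -a --relative "$file" "$DST/"
done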

Looking forward to your ideas.

Dirk Jan Spits.
 
Old 11-04-2010, 08:15 AM   #2
alli_yas
Member
 
Registered: Apr 2010
Location: Johannesburg
Distribution: Fedora 14, RHEL 5.5, CentOS 5.5, Ubuntu 10.04
Posts: 559

Rep: Reputation: 92
Hi

Not following what you mean by resyncing - are you referring to restoring data onto a corrupt device?

Having the drives configured as a RAID 1 array is actually quite a good idea as it will increase your read speed since the data is contained on both drives - and you have double the IOPS available for reads.

Conversely, write speed will be adversely affected since both drives need to be written to.

I don't follow what you mean by splitting into 4 partitions and then using LVM to combine again. When you install the OS, you can create 4 LVM Physical Volumes (PVs) OR partition these as "ordinary" non-LVM partitions.

If you do the latter (that is, normal ext3/ext4 partitions), you cannot use LVM to "combine" these - you can only add physical volumes to volume groups, and logical volumes can only be created from volume groups.
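To illustrate the hierarchy (the device and volume names below are just examples):

Code:
# Mark the partitions as LVM physical volumes
pvcreate /dev/sdb1 /dev/sdc1

# Add the physical volumes to a volume group (the storage "pool")
vgcreate vg_data /dev/sdb1 /dev/sdc1

# Logical volumes are then carved out of the volume group
lvcreate -L 900G -n lv_share vg_data
mkfs.ext4 /dev/vg_data/lv_share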

Coming back to your "re-syncing": if you're using RAID, I'm 100% sure that you can't synchronize at the partition level via a HW RAID controller - and I doubt SW RAID would allow you to do this either, though I could be wrong.

You would usually synchronize the entire disk in the case of corruption/loss of data.

Perhaps you could explain a bit more clearly what you're trying to achieve and I can try to assist further.
 
Old 11-04-2010, 11:18 AM   #3
djspits
LQ Newbie
 
Registered: Oct 2009
Location: The Hague, The Netherlands
Distribution: Ubuntu 10.04 LTS
Posts: 15

Original Poster
Rep: Reputation: 0
Thanks for your reply, Yas.

I really, really, really tried to present my problem in a clear and concise way - in the hope of generating a lot of feedback - but from your questions I surmise I failed miserably...

The OS is installed on an internal drive. The two disks I am configuring are external, for storage only.

Resynchronisation is actually the term the SW RAID itself uses for the process that follows the creation of an array or the detection of a problem, e.g. after one of the cables has come loose or after a power outage.

These things have happened and the resync took more than 12 hours.

What I'm trying to do is protect a bunch of files. The first step was to centralize the lot from several desktops and laptops and make them available over the network. This helped create awareness of the size, importance and value of the data, and called for extra security measures. Off-site backup is one of them; using mirrored storage is another.

After a previous RAID-1 array actually suffered a hardware failure, I discovered spares were not available and that particular model of disk is no longer sold. So we bought two shiny new disks... But the sheer size of them is now posing a problem: reduced availability due to very long-running maintenance operations. I'm shooting myself in the foot, because this quickly reduces the users' enthusiasm for central storage.

Guides, HowTo's and man-pages do not answer questions like mine.

So one approach I have to this problem is to break up the 2 TB into 4 partitions of 500 GB, hoping to reduce the offline time to a couple of hours. The smaller size of the partitions should reduce the time needed for syncing/formatting etc. Of course a chunk size of 500 GB is an arbitrary one, but that's where LVM might help by joining the PVs again into one logical volume.
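Roughly something like this (untested, and the device names are only examples):

Code:
# Each 2 TB disk is split into four 500 GB partitions (sdb1-4 and sdc1-4);
# matching partitions are mirrored into four small RAID-1 arrays
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb4 /dev/sdc4

# LVM then joins the four mirrors back into one large logical volume
pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3
vgcreate vg_storage /dev/md0 /dev/md1 /dev/md2 /dev/md3
lvcreate -l 100%FREE -n lv_storage vg_storage
mkfs.ext4 /dev/vg_storage/lv_storage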

At least, that was the plan.

Cheers, DJ
 
Old 11-05-2010, 12:37 AM   #4
alli_yas
Member
 
Registered: Apr 2010
Location: Johannesburg
Distribution: Fedora 14, RHEL 5.5, CentOS 5.5, Ubuntu 10.04
Posts: 559

Rep: Reputation: 92
Hey DJ,

Quote:
So one approach I have to this problem is to break up the 2 TB into 4 partitions of 500 GB, hoping to reduce the offline time to a couple of hours. The smaller size of the partitions should reduce the time needed for syncing/formatting etc. Of course a chunk size of 500 GB is an arbitrary one, but that's where LVM might help by joining the PVs again into one logical volume.
I think the above is where we're missing each other. The SW RAID is a level above your partitioning, in the sense that you add physical devices (disks) into your RAID group, which is managed by the SW RAID. In the case of the failure of a disk, you can't rebuild selected partitions; you have to rebuild the entire disk - that's how RAID works (whether it's HW or SW) - it's meant to protect against physical failures of a disk and rebuilds a failed disk completely.

The reason your rebuild is taking so long is the sheer size of the drive combined with the speed of the drive (I'm assuming you're using a 7200 rpm SCSI disk?).

Quote:
I'm shooting myself in the foot because this quickly reduces the user's enthusiasm for central storage.
My suggestion here is that you look at a smallish NAS device and configure perhaps a 4-disk RAID 5 group (or RAID 1/0 if you wish) - it will give you a quicker rebuild time and better speed. Also, you don't strictly need to rebuild the disk in offline mode when it fails.

If you have the funds a hardware RAID controller may also be a good idea as it will be quicker in rebuilding a failed drive.
 
Old 11-05-2010, 01:05 AM   #5
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,261

Rep: Reputation: 2028
Actually, you can use fdisk partitions as RAID members. I've done it myself whilst practicing on a single disk machine.
Ditto LVM.
I've even built LVM on top of RAID on 1 disk; again just to practice the cmds.
In the real world however, RAID is usually there to protect against a disk failure. If a disk fails, all partitions are suspect or worse.
Not forgetting you've only got one set of R/W heads per physical disk.
Using RAID 5 with a hot spare is recommended http://en.wikipedia.org/wiki/RAID
To get faster re-syncs, use faster disks & check the bandwidth on the controller.
LVM is designed (primarily) to allow you to keep adding disks to a pool at random times in the future. These constitute a pool of space, which you can slice 'n' dice with VGs & LVs, but it does NOT provide any redundancy, unlike RAID.
2 different concepts.
http://en.wikipedia.org/wiki/Logical_volume_management
http://tldp.org/HOWTO/LVM-HOWTO/
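E.g. growing the pool later on is just a couple of commands (device and volume names are examples only, assuming an ext3/ext4 filesystem on the LV):

Code:
# Add a new disk (or partition) to the existing pool
pvcreate /dev/sdd1
vgextend vg_storage /dev/sdd1

# Grow a logical volume into the new space and resize the filesystem online
lvextend -l +100%FREE /dev/vg_storage/lv_storage
resize2fs /dev/vg_storage/lv_storage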

Off-site backups enable you to recover from file deletions/corruptions & worse.
(Assuming you have a cycle of backups & notice before all become corrupted.)
 
Old 11-05-2010, 10:49 AM   #6
djspits
LQ Newbie
 
Registered: Oct 2009
Location: The Hague, The Netherlands
Distribution: Ubuntu 10.04 LTS
Posts: 15

Original Poster
Rep: Reputation: 0
Thanks guys,

If you don't mind, I would like to continue this discussion under the assumption that I'm not able or willing to change the hardware situation. Also, I'm well aware of the theoretical differences between RAID, backup and LVM. At the moment I have an off-site backup and a functioning file server. I also have the two new disks and am searching for the optimal way to deploy them. I was consciously steering away from the backup issue; maybe we can bring that back into the equation later on.

So, we've established that it is possible to build a SW RAID array based on partitions. If you split two disks and build an array on two of the halves, you can still use the two remaining halves for normal storage. If one of the drives physically and permanently breaks down, the RAID part is saved but the data on the normal partition is lost. However, temporary failures exist too. They can break the equivalence of the two devices and cause the md-drive to resynchronize. In fact, it takes a lot of time to analyze the differences. A resync can actually propagate errors onto the disk that had the best state! In this kind of scenario RAID actually performs worse than a normal backup, I would say.
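For reference, this is roughly how one can keep an eye on a running resync (the md device name is just an example):

Code:
# Overview of all arrays, including resync progress and estimated time left
cat /proc/mdstat

# More detail on one array (state, failed/spare devices, rebuild status)
mdadm --detail /dev/md0

# The kernel throttles resync speed between these two limits (KB/s per device);
# raising the minimum can shorten a resync at the cost of normal I/O
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max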

But, I'm assuming that during repair the data on normal partitions is not touched. Am I right?

So, if you can classify the data into two availability classes, i.e. "high: RAID" and "normal: backup", that would be a good solution - would you agree?

On top of that, you could abstract away the (size of the) normal partitions by putting them in an LVM volume group.

Is this a case of over-engineering?

As ever, interested in your thoughts.

Cheers,
DJ
 
Old 11-07-2010, 11:23 PM   #7
alli_yas
Member
 
Registered: Apr 2010
Location: Johannesburg
Distribution: Fedora 14, RHEL 5.5, CentOS 5.5, Ubuntu 10.04
Posts: 559

Rep: Reputation: 92
Quote:
They can break the equivalence of the two devices and cause the md-drive to resynchronize. In fact, it takes a lot of time to analyze the differences. A resync can actually propagate errors onto the disk that had the best state! In this kind of scenario RAID actually performs worse than a normal backup, I would say.
You are quite right about this. It depends on error classification - the way I think about it is "physical" errors and "logical" errors. A physical error is when you have a disk failure of some sort where the disk needs to be physically replaced. In this scenario, a resync would restore all your data - which should be all you need to do.

A logical error is when your data becomes out of sync or corrupt for some reason. In this scenario, doing a sync between drives will result in both drives becoming corrupt, since RAID has propagated the error onto both drives. The thing is that RAID is meant to provide redundancy against physical disk failure and not really against logical errors. Thus for critical data you'd need to have both RAID as well as some sort of off-site backup.

This is common practice in, for example, enterprise Oracle Database deployments, where RAID is used at the primary site and RMAN backups are shipped somewhere offsite (or sometimes kept onsite) to cater for logical errors.
 
1 member found this post helpful.
Old 11-07-2010, 11:47 PM   #8
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,261

Rep: Reputation: 2028
You could split the disks half RAID, half un-RAIDed, but be aware that a problem on one 'half' can affect the other half, as per your example.
Twice as much effort to recover.
Also, controlling the performance of the physical disk is now twice as hard.
In fact, I've never heard of anyone doing it for real, i.e. on a serious server.
Normally I'd put the most important stuff (usually a DB or web server, for example) on RAID, and all the other 'stuff' on normal disks, if you want to minimize the use of RAID, i.e. the number of allocated disks.

Similarly, you can put LVM on top of RAID, but consider the complications when it breaks (and it will, sooner or later). You now have to take both LVM and RAID issues into account during recovery.

I'm not saying don't do it, just plan ahead and test, test & test again your recovery procedures.
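E.g. a dry run of a disk failure on a test array looks something like this (device names are examples only):

Code:
# Mark one mirror half as failed and pull it out of the array
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
cat /proc/mdstat                  # the array is now running degraded

# "Replace" the disk and watch the rebuild run
mdadm --manage /dev/md0 --add /dev/sdc1
watch cat /proc/mdstat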
You are going to document all this aren't you?
 
Old 11-10-2010, 01:06 PM   #9
djspits
LQ Newbie
 
Registered: Oct 2009
Location: The Hague, The Netherlands
Distribution: Ubuntu 10.04 LTS
Posts: 15

Original Poster
Rep: Reputation: 0
Thanks again.

chrism01: You could split the disks half RAID, half un-RAIDed, but be aware that a problem on one 'half' can affect the other half, as per your example.

How? In what way do failures (logical only, as we have determined that the physical case is trivial) on one partition influence the availability (e.g. time to restore) of the others? I don't see that. Which example are you referring to? Ah, the twelve-hour rebuild. Yes, but that's why I started this discussion. By splitting large disks into smaller partitions you improve the availability of data (1) on RAID partitions, by shortening the time needed for rebuilds, and (2) on partitions protected only by backups, by removing the penalty of using RAID for data that doesn't need it in the first place.


Looking forward to your thoughts.
DJ
 
Old 11-11-2010, 12:16 AM   #10
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,261

Rep: Reputation: 2028
Certainly a physical failure will require the recovery of both areas. By not splitting disks down the middle you only have one thing to worry about (per disk). Depending on your business, you may even be able to limp along (basic service) on just the non-broken areas for a short time.
At the logical level, you are sharing the performance across 2 'apps', so you can't optimize for one. This may or may not be an issue, e.g. if you get a runaway or overload on one 'side' of the disk, it affects the performance of the other 'side'.
E.g. if you had a corruption on the RAID and had to re-sync, that load could affect the whole physical disk, depending on how the RAID & HW drivers work.
There's really no exact answer; you just need to consider what works best for you, taking into account that everything breaks sooner or later, and for a business server that usually means a monetary loss.
It may make sense to split the disks, especially if you've only got a few and you need to use every byte of space with max efficiency.
 
Old 11-11-2010, 01:15 AM   #11
btncix
Member
 
Registered: Aug 2009
Location: NC, USA
Distribution: Slackware x86
Posts: 141

Rep: Reputation: 26
Greetings. I've enjoyed following this thread. Please allow me to ask two questions to help me follow it better.

Does it take equal or less time to resync four 500 GB RAID partitions on two disks compared to the time it takes to resync one 2 TB RAID partition on two disks?

In the case of having four 500 GB partitions, after a power failure to the hard drives (whether by a power outage or a loose cable), would all four partitions have to be resynced with their corresponding partitions on the second disk, or just one or two?
 
Old 11-11-2010, 12:40 PM   #12
djspits
LQ Newbie
 
Registered: Oct 2009
Location: The Hague, The Netherlands
Distribution: Ubuntu 10.04 LTS
Posts: 15

Original Poster
Rep: Reputation: 0
Hi again to all.

btncix: Comparison of resync-times between 1 x 2TB versus 4 x 500GB?

I can only answer the question in part. I verified that the time is proportional to the size of the partition. However, (at least on my machine) a resync of two partitions on the same disk is _not_ executed in parallel. The second manually triggered sync is started but waits for the first to finish. I do not know if other HW setups exist that would allow the resync processes to run in parallel.
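For those who want to reproduce it, triggering a manual check on two arrays that share the same physical disk looks roughly like this (the md names are examples):

Code:
# Start a consistency check on two arrays that live on the same disks
echo check > /sys/block/md0/md/sync_action
echo check > /sys/block/md1/md/sync_action

# In /proc/mdstat the second check shows up as delayed and only starts
# once the first one has finished
cat /proc/mdstat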

Of course, if all logical errors would lead to a resync of all partitions there would be no gain in splitting the volume into smaller partitions. However, I don't see why that would be so. Each pair of physical partitions is managed as a single virtual device, completely independent of any or all other arrays on the same disks. Each RAID array monitors its own internal state and takes action if something goes awry. Logical errors or imbalances are caused by (pending) write operations at the time of failure. An array that was mounted read-only (for the sake of argument) would never suffer any such damage. Would you agree?

btncix: ...a power failure to the hard drives (whether by a power outage or loose cable)...

Mind you, I noticed a misunderstanding in your question: the cables we were talking about earlier in this discussion are data cables, not power cables. The point here is that a power outage probably hits both physical disks at the same time, but a faulty data connection hits only one, causing precisely the kind of problems we are discussing here.

I would like to add that I see a fresh relevance in this discussion. This might be a problem with existing technology that only recently _could_ come to light.
1) Developments in price and performance have suddenly increased the number of these large devices in use, and configuring SW RAID has become a lot more user-friendly, leading to an increase in the installed base.
2) With new standards like USB 3.0 and external SATA there are more data cables lying around and more separately powered devices, giving both more potential points of failure and more of this particular type of failure.
3) There is one brand (the one I bought, of course) that ships its disks with an environmentally friendly tri-state On/Off/Auto switch. The latter setting is intended to reduce power consumption by switching the drive off if there is no traffic for a certain amount of time. (Kids, do not try this at home!)
 
  

