LVM: If a hard disk fails, do I lose it all?
I have three simple questions about LVM that just could not be answered after 4 hours of searching the LVM HOWTO, Google, LinuxQuestions and the linux-lvm archives.
I have a small file server with 600 GB in one linear volume group consisting of three 200 GB hard disks. My questions are:
1. If one hard disk fails (hardware), do I lose all the data stored on the VG?
2. Can I add a new hard disk to the VG without having to format it first? (I mean, if it is full of data, can I just add it?)
3. In case of failure, can I recover the data from a single disk on another box?
Thanks in advance
I think you can remove volumes from LVM control without destroying them, so you could remove the good devices.
Another thing I've had to do (not your question, but somewhat related) was to convert an ext3 volume to an ext2 one when a sector in the journal file area went bad.
There is an option to reduce the VG, and to move the data from one disk to another so the bad disk can be removed. But that only works before it fails, and I am no prophet who knows when a disk is going to fail.
"The failure or removal of a drive that LVM is currently using will cause problems with current use and future activations of the VG that was using it." - TLDP LVM2 FAQ
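Roughly, the reduce-and-move the poster describes looks like this with the standard LVM tools. This is only a sketch: it assumes the volume group is named vg0 and the disk to retire is /dev/sdc (both hypothetical names), that the disk is still readable, and that the remaining PVs have enough free extents to absorb its data.

```shell
pvmove /dev/sdc        # migrate all allocated extents off /dev/sdc
vgreduce vg0 /dev/sdc  # drop the now-empty PV from the volume group
pvremove /dev/sdc      # wipe the LVM label so the disk can be pulled
```

As noted, this only helps before the disk actually dies; pvmove has to be able to read every extent it moves.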
If you don't do some kind of RAID (other than a straight stripe), then there is no mechanism to prevent data loss when you lose a disk.
RAID 5 is the common way of combining 3 disks in a fast yet safe manner. It works by striping data across two of the disks and using the third for parity (in practice, RAID 5 rotates the parity across all three disks). This way, if any one disk fails, you either still have all the data or can easily recalculate what the lost data was.
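To make the parity idea concrete, here is a toy sketch in shell arithmetic. The byte values are made up, and real RAID 5 works on whole blocks, but the XOR principle is the same:

```shell
# Two data chunks and their parity, as small integers:
d1=173                 # chunk on disk 1
d2=90                  # chunk on disk 2
parity=$((d1 ^ d2))    # chunk on disk 3 (XOR of the data chunks)

# Disk 2 fails; XOR the survivors to rebuild its chunk:
rebuilt=$((d1 ^ parity))
echo "$rebuilt"        # prints 90 -- the lost chunk
```

XOR is its own inverse, which is why any single missing chunk can be recomputed from all the others.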
It seems to me that I have to go for RAID 5. The controllers are expensive, so I will try it in software. Back to studying. Two quick questions:
1. Can I do it with disks of different sizes?
2. Is the usable/raw space ratio 4/5? (If I have 5 × 200 GB disks, will I have 800 GB of space?) Is there another RAID level with a better ratio when using more disks?
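For what it's worth, the capacity arithmetic in question 2 checks out: with n equal disks in RAID 5, one disk's worth of space holds parity, so usable space is (n - 1)/n of the raw total.

```shell
# RAID-5 usable capacity for n equal disks: (n - 1) / n of raw space.
disks=5
size_gb=200
raw=$((disks * size_gb))             # 1000 GB raw
usable=$(( (disks - 1) * size_gb ))  # 800 GB usable (one disk's worth to parity)
echo "$usable of $raw GB usable"
```

The ratio improves as you add disks (e.g. 8 disks give 7/8), at the cost of longer rebuilds and higher odds of a second failure during a rebuild.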
Which is more expensive... a controller-card, or your data? :jawa: Consider your moves carefully so that you are not "penny wise and pound foolish." Hardware-RAID is no panacea, but it does tend to be a good bit more efficient.
Notice, however, that hardware-efficiency is most important for the "first-tier" data; the data which is accessed most frequently (continuously...) and which needs the fastest delivery-time. Your logical-volume structure may contain other storage areas which are somewhat less speedy, and it may contain external devices such as FireWire or USB-2.0 units. These might be appropriate for the use of software-RAID.
One of the goals of an LVM is to allow data to be migrated off a drive so that the drive can be taken out of the installation even while the storage-cluster is running. But the data on any drive is subject to the vagaries of the device itself. If the device fails, and there is no RAID and/or no current, usable backup, then the data is gone no matter what you do.
Hard drives have built-in diagnostic capabilities, such as SMART (see man smartd, man smartctl) which can be set up for periodic polling by the operating system to alert you to possible failures. If you are managing a large storage cluster, you should have this in place.
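A minimal SMART setup looks something like this; the device name /dev/sda and the test schedule are examples, and "man smartd.conf" has the full directive syntax:

```shell
# One-off health query for a drive:
smartctl -H /dev/sda

# Example line for /etc/smartd.conf: monitor all attributes,
# run a short self-test daily at 2 AM, and mail root on trouble:
/dev/sda -a -s (S/../.././02) -m root
```

After editing smartd.conf, restart the smartd daemon so it picks up the change.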
Related to post #1:
1) You should be able to recover the data on the other disks, but there are no guarantees. Look at the options in “man lvm”.
2) Yes, you can add an entire disk as a physical volume, but not the data (there may be some exceptions if the drive was already a physical volume on another system). You would need to mount the drive/partitions as usual and copy the contents into the LVM.
You need to zero out the initial sectors to make an entire drive into a physical volume. That’s kind of tough on the partition/filesystem structure. Look at the warning early in “man pvcreate”.
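Putting the two answers together, bringing a fresh (or expendable) disk into an existing VG looks roughly like this. The names vg0, /dev/sdd and the LV path are hypothetical, and the first step is destructive to whatever was on /dev/sdd:

```shell
pvcreate /dev/sdd                # label the whole disk as a PV (destroys its contents)
vgextend vg0 /dev/sdd            # add the new PV to the volume group
lvextend -L +200G /dev/vg0/data  # grow a logical volume into the new space
resize2fs /dev/vg0/data          # grow the ext2/ext3 filesystem to match
```

If the disk holds data you want to keep, mount it normally first and copy the files into the LVM before running pvcreate.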
Related to post #5:
1) The drives can be different sizes, unless you’re doing a striped LVM (essentially a raid0).
2) Regarding increased reliability issues, software raid is a lot cheaper than a hardware raid controller, but it takes a lot of CPU power to do fast writes to a software raid5. On the flip side, the reads from a software raid5 tend to be blazingly fast, similar to the case for raid1 reads.
If you have limited CPU power, a good option is software raid10, where you write as a raid0 to pairs of raid1 drives. Obviously, you would need twice as many drives as you would need for a non-raid situation.
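The pairs-of-mirrors layout described above can be built with mdadm as nested arrays; the device names here are examples, and the partitions are assumed to already exist:

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # mirror pair 1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1  # mirror pair 2
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1    # stripe across the mirrors
```

Recent md drivers can also build this as a single array with --level=10.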
Another software raid option is to separate heavily used folders onto different raid1 devices, but it takes a lot of advance planning to get it right. For instance, in the case of a web server, putting /var on one pair of raid1 drives and putting /usr (or “/”) on a different pair of raid1 drives tends to give decent write rates and fast read rates. For a user file server, you might want to separate /home from /usr. Again, you have to think through it and plan for future needs before you set it up.
First of all, thanks to sundialsvcs and WhatsHisName; your help was very useful.
I'm now considering hardware solutions with a hardware RAID 5 controller. The problem is quite simple, actually: I have a budget of 3000 euros (about $3500) to build a 3 or 4 terabyte file server. This file server is not very demanding in bandwidth, since it will mainly hold movies and MP3s, and there won't be more than 400 users. We are all connected on 100 Mbit interfaces, so the maximum bandwidth of the whole network will be 8 buildings * 100 Mbits per building = 800 Mbits. One option I was considering is this: http://www.linuxquestions.org/questi...d.php?t=390115
but it seems to me it's very complex, so I was thinking of going like the rest of the world with a 16-port SATA controller and cheap 300 GB disks.
I will start researching which filesystem and OS I'll use. I'm torn between FC4 and FreeBSD 6. Many things to worry about...