[SOLVED] virtualized datacenter solution considerations on storage
Linux - Virtualization and Cloud: This forum is for the discussion of all topics relating to Linux Virtualization and Linux Cloud platforms. Xen, KVM, OpenVZ, VirtualBox, VMware, Linux-VServer and all other Linux Virtualization platforms are welcome. OpenStack, CloudStack, ownCloud, Cloud Foundry, Eucalyptus, Nimbus, OpenNebula and all other Linux Cloud platforms are welcome. Note that questions relating solely to non-Linux OS's should be asked in the General forum.
Hi, I'm trying to find the best solution for my small-scale datacenter. I have about 10 HP servers, all connected to an HP EVA SAN, with virtual machines (KVM and VMware) on the servers' local storage. Now I'm considering whether it's better to move the virtual machines to the SAN. Do you think that is wise or not? How about the performance? If someone like me already has SAN storage available, isn't it better to remove the extra server hard disks and use only the SAN storage? (That way we could save some of the cost of new servers.) I'm also aware of some great advantages of shared storage like a SAN, for example the live migration features and so on, but I think the downside is that using those features needs non-free VMware products. What do you think?
Distribution: CentOS, RHEL, Solaris 10, AIX, HP-UX
We use shared storage in all of our virtualized environments. We are not using the SAN boot feature, which got us into trouble in several environments. Performance depends on your use case, so it's best to do a SAN capacity analysis first. We are using 8 Gb/s HBAs at the moment, which work well. Storage mirroring can also be worth considering, to protect against a single-storage failure; this can be done with storage virtualization solutions like those FalconStor or DataCore provide.
Shared storage gives us reliable, high-performing, centralized storage, plus backup and live migration capabilities, which are required in high-availability environments.
So would you please tell me how you implement your virtual machines? As far as I know, if you put the virtual machine's disk image on the SAN, then you need to boot from it, so how are you not using SAN boot? Also, what technology do you use (VMware, KVM or Xen)?
libvirt supports live migration, and it is free, so with a SAN hosting the KVM/Xen VM images you get that capability without the non-free VMware products.
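As a rough sketch of what that looks like with libvirt (the VM name and destination hostname are placeholders, and both hosts must be able to reach the same SAN-backed storage):

```
# Live-migrate the running guest "vm1" from this host to host2.
# Only works if vm1's disks live on storage both hosts can see (e.g. the SAN).
virsh migrate --live vm1 qemu+ssh://host2/system
```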
dyasny, I've been using libvirt for quite a long time, but the problem I encountered with my KVM images and libvirt is this: I defined several KVM VMs on the local storage of a machine connected to my SAN through FC HBA modules, and I tried to add some SAN partitions to each VM, but the VMs can't see the defined partitions because they can't recognize the HBA modules. If I'm right, this needs a technology called N_Port ID Virtualization (NPIV), and I think KVM and/or libvirt doesn't support it yet. Am I wrong? So do you suggest that I define the whole VM on the SAN partition? (And this may need boot from SAN.)
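For reference, later libvirt versions did gain NPIV support through the node-device API: you can create a virtual HBA (vport) on a physical HBA, assuming the HBA and the FC switch both support NPIV. A minimal sketch (the parent name `scsi_host2` and the WWNs are placeholders for your own hardware):

```
<!-- vhba.xml: ask libvirt to create an NPIV vport on the parent HBA -->
<device>
  <parent>scsi_host2</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>20000000c9831b4b</wwnn>
      <wwpn>10000000c9831b4b</wwpn>
    </capability>
  </capability>
</device>
```

It would then be created with `virsh nodedev-create vhba.xml`, after which the SAN can zone LUNs to the vport's WWPN.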
I've just found that I had a misunderstanding about not using the SAN boot feature. In VMware ESX terms, you mean you set up VMware ESX on a machine connected to the SAN, and then define the guest machines on a mounted SAN partition used as a datastore. This can easily be implemented with KVM and libvirt too. Did I get it right?
Yes, you are right. The virtual machines are located on the SAN; the ESX/ESXi/Xen etc. is installed locally. Yes, I also use the local storage, that's right, but only for things like test images and the like, not for production machines.
So you do not lose the local storage; it's additional space for playing and testing.
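The KVM/libvirt equivalent of that datastore setup can be sketched like this (the device path and mount point are placeholders, and it assumes the LUN already carries a filesystem):

```
# Define a filesystem-backed storage pool on a SAN LUN, mount it,
# and have libvirt start it automatically from now on.
virsh pool-define-as sanpool fs --source-dev /dev/mapper/mpatha \
      --target /var/lib/libvirt/images/san
virsh pool-build sanpool      # creates the target directory if needed
virsh pool-start sanpool
virsh pool-autostart sanpool
```

VM disk images created in that pool then live on the SAN while the host itself still boots locally.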
dyasny, I've been using libvirt for quite a long time, but the problem I encountered with my KVM images and libvirt is this: I defined several KVM VMs on the local storage of a machine connected to my SAN through FC HBA modules, and I tried to add some SAN partitions to each VM, but the VMs can't see the defined partitions because they can't recognize the HBA modules,
The idea is to put the VM's entire set of drives on the SAN; whether they are image files or reserved LVs/partitions doesn't matter.
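For example, a raw SAN LUN (or an LV carved out of one) can be handed to a guest as a plain block device in the domain XML, with no NPIV needed, because the host's HBA does the FC work (the device path and target name here are placeholders):

```
<!-- In the guest's domain XML: expose a multipath SAN LUN as a virtio disk -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/mapper/mpathb'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```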
if I'm right, this needs a technology called N_Port ID Virtualization (NPIV), and I think KVM and/or libvirt doesn't support it yet. Am I wrong? So do you suggest that I define the whole VM on the SAN partition? (And this may need boot from SAN.)
The host will boot from its local drives, then bring up the KVM VMs, whose virtual disks are on the SAN. You'll need to set up GFS or something similar to manage the LUNs on that SAN; otherwise you might end up with several hosts trying to write to the same space and corrupting the virtual disks, but that's a secondary concern.
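One common way to arrange that (a sketch; the device path, volume group and LV names are placeholders, and it assumes the cluster locking daemon is already running on all hosts) is to carve the shared LUN into one logical volume per VM with clustered LVM, so each host only ever writes to the LVs of the VMs it is running:

```
# On one host: put the shared SAN LUN under clustered LVM control
pvcreate /dev/mapper/mpathb
vgcreate --clustered y vmvg /dev/mapper/mpathb

# One LV per virtual machine; each LV becomes that VM's virtual disk
lvcreate -L 20G -n vm1-disk vmvg
lvcreate -L 20G -n vm2-disk vmvg
```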
BTW, for RHEV there is no need to configure clustering; all you need is a set of hosts that can access the same LUNs on the SAN.
So yes, everything the VM has should be on the SAN, but the host OS shouldn't be there if you don't want to boot from SAN.