KVM I/O performance: Raw partitions versus image files
In this case, our VM would have two virtual drives, /dev/vda (hosting the OS) and /dev/vdb (hosting the VM's core data). These are mapped against physical partitions 3 and 4 of the host's /dev/sda drive. These have been previously created and sized to meet our needs. Additional VMs may share the same physical drive using other partitions.
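A setup like the one described could be expressed with a qemu-kvm invocation along these lines. This is only a sketch: the VM name, memory size, and the exact partition numbers are illustrative assumptions, not taken from the original configuration.

```shell
# Sketch: attach two host partitions to a guest as virtio disks.
# /dev/sda3 -> guest /dev/vda (OS), /dev/sda4 -> guest /dev/vdb (data).
# VM name and memory size are illustrative assumptions.
qemu-system-x86_64 \
    -enable-kvm \
    -name vm-test \
    -m 4096 \
    -drive file=/dev/sda3,format=raw,if=virtio \
    -drive file=/dev/sda4,format=raw,if=virtio
```

With libvirt, the equivalent would be two `<disk type='block'>` entries pointing at the same partitions.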
We were wondering whether VMs mapped to physical partitions have any performance advantage over VMs that use pre-allocated image files. For example, instead of partitions sda3 and sda4 above, let's say we did this:
where vm-test-1.img and vm-test-2.img would have been previously created image files sized appropriately. I sometimes create VMs in exactly this manner, but I was always under the assumption that using a raw partition would yield better virtual disk performance than using image files. Our VMs can potentially be very I/O intensive, especially with the VM's data drive (/dev/vdb), so we're interested in what is the best approach to take.
The image files I am referring to here would be created using something like dd. I have never experimented with qcow2 image files. Would these be an option?
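For reference, pre-allocated raw images of the kind described can be created with dd, and qcow2 images with qemu-img. The file names and the 20 GiB size below are illustrative assumptions:

```shell
# Fully allocated raw image, created the dd way (20 GiB of zeros).
dd if=/dev/zero of=vm-test-1.img bs=1M count=20480

# qemu-img can also create a raw image, but by default it makes a
# sparse file, not a fully pre-allocated one.
qemu-img create -f raw vm-test-2.img 20G

# A qcow2 image grows on demand and supports snapshots.
qemu-img create -f qcow2 vm-test-3.img 20G
```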
I have seen some tests of raw (as in dd-created) images versus qcow2. I'd expect raw device files to be faster in many cases, but that raises the question of how the drive mapping and related settings are configured for the image.
The advantages of qcow versions 1, 2, and 3 have little to do with speed. They offer many other features that you may or may not want to use. A qcow image that grows on demand does incur a time penalty, and snapshots and other features also carry a cost in proportion to how much they are used.
As for a dd-created file versus a single partition, I'd guess that if both were optimized for the device, the test results would be roughly equal.
From my simple tests, all hard drive access is counted the same. Only with some compressed formats do the effective disk numbers increase. That is to say, if you had data that compresses well and you decompress it on a fast CPU, throughput could exceed the raw speed of the drive.
Intuitively, I'd think using partitions directly, as opposed to image files (even non-sparse ones), would be faster. Wouldn't a VM accessing its virtual drive have to go through an additional translation layer if the virtual file system were mapped to a file instead of a raw partition? Or does KVM's drive management bypass the host's file system layer and talk directly to the device when image files are used?
I don't believe it bypasses the host's drive access mechanisms. In some newer VM implementations the host may expose more real hardware to the guest, but I don't think that applies in your case.
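One thing worth noting here: whether the host page cache sits in the I/O path is controlled by QEMU's per-drive cache mode rather than by partition-versus-image as such. A sketch, with the device path as an illustrative assumption:

```shell
# cache=none opens the backing file or device with O_DIRECT, so guest
# I/O bypasses the host page cache (though not the host block layer).
# The default, cache=writeback, goes through the host page cache.
qemu-system-x86_64 \
    -enable-kvm \
    -drive file=/dev/sda4,format=raw,if=virtio,cache=none
```

The same `cache=` option applies whether the drive is backed by a partition or by an image file, so it is worth holding it constant in any comparison.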
The only way to prove it would be to run many tests comparing a VM on a raw image file against a VM using a real partition. I'd guess them to be almost the same, but I have never seen actual test results posted. Perhaps the QEMU or KVM developers, or advanced users, would know for sure.
That's an interesting post, but unfortunately there are no follow-ups, and it doesn't compare raw partitions against image files. Ultimately it appears we will have to run our own benchmarks. There's not a lot of information out there...
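If you do end up running your own benchmarks, fio inside the guest is a common tool for this kind of comparison. A sketch, where the job parameters (block size, queue depth, runtime) are illustrative assumptions you would tune to match your workload:

```shell
# Random 4k read test against the guest's data disk (reads only, so it
# does not destroy data). Run the identical job in a partition-backed
# guest and an image-backed guest, then compare IOPS and latency.
fio --name=randread --filename=/dev/vdb --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based --group_reporting
```

Keeping everything else identical between the two guests (cache mode, virtio driver, host hardware) is what makes the comparison meaningful.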