LVM vs qcow2 on KVM - strange results with disk benchmarks
Just testing our new KVM setup in different scenarios, and I keep getting strange results with LVM vs qcow2 disk I/O.
The hardware in question:
Dell Equallogic PS M4110 with 10Gb controllers
Dell PE M620 blades running Ubuntu server 12.10 as KVM host
No multipath, just LACP bonding for now.
Running simple disk I/O benchmarks with DiskMark on a Windows Server 2012 VM gave me the following results:
LVM - NO cache
Writes: 9.92 MB/s
Reads: 23.25 MB/s
LVM - Cache
Writes: 1.66 GB/s
Reads: 2.06 GB/s
QCOW2 - NO cache
Writes: 15.76 MB/s
Reads: 88.84 MB/s
QCOW2 - Cache
Writes: 3.08 GB/s
Reads: 3.25 GB/s
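For context, the "cache" in these tests is the per-disk cache mode passed to QEMU/libvirt. A minimal sketch of the relevant libvirt disk definition, with placeholder device names (not my actual config):

```xml
<!-- cache='none' opens the backing device with O_DIRECT, bypassing the
     host page cache, so the benchmark actually hits the iSCSI backend.
     cache='writeback' lets writes complete out of host RAM, which is
     how the cached numbers above can exceed what a 10Gb link could
     physically deliver (~1.25 GB/s). -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg_vms/win2012'/>  <!-- hypothetical LV path -->
  <target dev='vda' bus='virtio'/>
</disk>
```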
Clearly, these results are rather strange, at least to my eye.
The qcow2 file-based setup is much faster than LVM.
Any reason why? Either qcow2 has gotten much faster, or there is something seriously wrong with my LVM setup.
LVM setup: Separate iSCSI volumes for each blade sliced into logical volumes for VMs.
QCOW2 setup: Shared iSCSI volume formatted with the OCFS2 clustered filesystem and housing the qcow2 files.
It is worth mentioning that I tried similar tests on a Linux VM; the figures were different, but the end result was more or less the same.
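On the Linux VM side, the quick check I mean is a sequential write that forces data to stable storage so the host page cache can't inflate the number. A rough sketch (paths are examples, not my actual mount points):

```shell
# conv=fdatasync makes dd fsync before exiting, so the reported rate
# reflects the real backend, not host RAM. oflag=direct would be
# stricter still, but can fail on some filesystems.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync

# clean up the test file
rm -f /tmp/ddtest
```

Something like fio would give more controlled random-I/O numbers, but even this simple test showed the same LVM-vs-qcow2 gap.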
Thanks for any input.