Testing out install to virtio disk - performance sucks!
Some kind person seems to have added to -current the code liloconfig needs to install to /dev/vda, which is excellent, even if it's done like this:
Code:
if dmidecode 2> /dev/null | grep -q QEMU 2> /dev/null ; then
  if [ -r /dev/vda ]; then
    MBR_TARGET=/dev/vda
    echo $MBR_TARGET > $TMP/LILOMBR
  fi
fi
Side note: IMHO Slack shouldn't be looking for the string QEMU. If the device is there, the driver will have already detected it, so /proc/partitions will contain vda. Checking for the existence of a /dev node is not a good way to check whether a device is present. But I'll let that one slide for the moment...
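A check along those lines could look like the sketch below. This is a hypothetical alternative, not what liloconfig actually does; the variable names just mirror the snippet above:

```shell
# Hypothetical alternative: consult the kernel's partition list instead
# of grepping dmidecode output for "QEMU". If the virtio driver has
# bound the disk, "vda" will appear in /proc/partitions.
if grep -qw vda /proc/partitions 2> /dev/null ; then
  MBR_TARGET=/dev/vda
  echo $MBR_TARGET > $TMP/LILOMBR
fi
```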
My problem is that when I do this and add the option -drive file=mydisk.img,if=virtio to the qemu command line, an install of Slackware runs significantly slower compared to just using -drive file=mydisk.img. I thought it'd be quicker; is there something else I have to do (qemu command-line options or whatever) to make the Slackware install run faster?
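For reference, the two invocations being compared look something like this; the binary name and memory size are illustrative assumptions, only the -drive options come from the post:

```shell
# Default: QEMU emulates an IDE/SATA disk; the guest sees /dev/sda.
qemu-system-x86_64 -m 2G -drive file=mydisk.img

# Paravirtual: virtio-blk; the guest sees /dev/vda and needs virtio_blk.
qemu-system-x86_64 -m 2G -drive file=mydisk.img,if=virtio
```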
are you already including the virtio_blk module in the initrd of the emulated Slackware ?
Hi Matteo, with -current that's only required for the first boot after install, whereas I'm only testing the install. My issue is just the performance of the install, which seems to be poor.
that's just strange, because the install disk should already use the virtio disk module, or you won't see /dev/vda at all, I think...
how are you measuring the performance of the install compared to one to a virtual disk not using virtio? I suppose in that case you are installing to /dev/sda, right?
Quote:
Originally Posted by ponce
that's just strange, because the install disk should already use the virtio disk module, or you won't see /dev/vda at all, I think...
Yes, strange indeed. The virtio install goes exactly as I would expect. First I fdisk /dev/vda (instead of /dev/sda) and create my partition there; then I run setup, which detects /dev/vda instead of /dev/sda and gives the usual prompts about formatting the disk; then for the lilo step it figures out that it should install lilo to the MBR on /dev/vda, etc. The virtio block driver is obviously there in the huge kernel, otherwise nothing would work. I haven't given the emulated system any other disks that it could install to, and I've checked that:
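The manual sequence described above, roughly, as commands run inside the installer's shell (the partition number is an assumption):

```shell
fdisk /dev/vda    # create the target partition, e.g. /dev/vda1
setup             # installer detects /dev/vda1, prompts to format it,
                  # and offers to install lilo to the MBR on /dev/vda
```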
Quote:
Originally Posted by ponce
how are you measuring the performance of the install compared to one to a virtual disk not using virtio?
That's a bit complicated, but I'm automating the install with an expect script; the script just detects what is there, either /dev/sda or /dev/vda, runs setup, and powers off when setup completes.
Quote:
Originally Posted by ponce
I suppose in that case you are installing to /dev/sda, right?
Right; as above, the script just installs to whichever of /dev/sda or /dev/vda it detects.
as I'm not experiencing the same here, maybe you could get some hints by doing the operations manually (e.g. not using expect)?
Quote:
Originally Posted by ponce
as I'm not experiencing the same here, maybe you could get some hints by doing the operations manually (e.g. not using expect)?
Yes, worth doing. So what was your performance like? Any improvement with virtio, and if so, roughly what %? I saw about a 30% slowdown using virtio, with a qcow2 disk.
now I'm confused, because it seems to me you are introducing two variables: are you doing one test with a raw disk image and the standard disk driver, and another with virtio and a qcow2 image file?
how can you be sure that the virtio driver is the cause of the procedure being slower if you also change the disk image format?
also, are you using a preallocated qcow2 file or one that grows dynamically?
IMHO you should test one step at a time if you want to pinpoint what is happening there.
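The two qcow2 variants ponce distinguishes can be created like this (file name and size are illustrative):

```shell
# Grows dynamically: the file starts small and allocates clusters
# on first write.
qemu-img create -f qcow2 mydisk.img 20G

# Fully preallocated: all space is reserved up front, avoiding
# allocation overhead while the guest writes.
qemu-img create -f qcow2 -o preallocation=full mydisk.img 20G
```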
Quote:
Originally Posted by ponce
now I'm confused, because it seems to me you are introducing two variables: are you doing one test with a raw disk image and the standard disk driver, and another with virtio and a qcow2 image file?
No, and I'd be bonkers to knowingly do so.
Quote:
Originally Posted by ponce
how can you be sure that the virtio driver is the cause of the procedure being slower if you also change the disk image format?
also, are you using a preallocated qcow2 file or one that grows dynamically?
I brought 'qcow2' into the thread only so you can compare with any results you might have, not because I varied it between my own tests. Of course I used it in both my test cases, deleting the disk each time, something like:
I switched to dynamically growing qcow2 images (the ones you seem to use too, as you create them with that syntax) ages ago, so I don't have any tests available, sorry.
but yes, performance is a bit slower with those compared to preallocated images; that's the price you pay for dynamic resizing during use. They also have another advantage: you can use qemu-img to compress them (when the VM is down) so that they are smaller.
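The compression mentioned here can be sketched as follows; the file names are hypothetical, and the VM must be shut down first:

```shell
# qemu-img has no in-place compress: convert the image to a new
# compressed qcow2 copy, then swap it in.
qemu-img convert -c -O qcow2 mydisk.img mydisk.compressed
mv mydisk.compressed mydisk.img
```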
the last time I tested (still ages ago), virtio was way faster than the standard block driver: I basically use just virtio now, also with Windows guests, so I cannot compare.
Quote:
Originally Posted by ponce
the last time I tested (still ages ago), virtio was way faster than the standard block driver: I basically use just virtio now, also with Windows guests, so I cannot compare.
I've re-tested this and got different results on a different machine. This time round I saw no noticeable difference between virtio and the normal block driver, at least for the Slackware install. Perhaps compiles would run faster, though.
iirc, virtio-scsi is more performant than the older virtio-blk. All my VMs use virtio-scsi, and I have no problems, as these devices present as SCSI in the VM.
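An illustrative virtio-scsi attachment for comparison (the device ids and file name are assumptions, not from the thread):

```shell
# Attach the disk through a virtio-scsi controller instead of
# virtio-blk; the guest sees it as an ordinary SCSI disk (/dev/sda).
qemu-system-x86_64 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=mydisk.img,if=none,id=hd0 \
  -device scsi-hd,drive=hd0,bus=scsi0.0
```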