kvm and Slackware - slimming it down
Hey all,
I've started experimenting with using Slackware 13 as a host for server consolidation using KVM. So far it's working pretty well, but I do have some questions I hope some experienced Slackware/KVM users can help answer:

1: I'd like the host to be headless. What is considered "the best way" to manage a KVM host? virt-manager? libvirt? Plain old SSH access and qemu? Or?
2: So far I've experimented mostly with raw qemu images. I/O is fast, and it seems to just work, but it appears to me that snapshots are not possible with raw images, unless I use LVM? If I'm forced to use LVM, would it be considered good practice to use LVM on top of my regular software RAID1 setup?
3: Network. Coming from VirtualBox, where setting up the network is a breeze, I feel a bit in the dark with qemu-kvm. For my tests I've just used the built-in DHCP/NAT system, but for actual live servers I'd much prefer that they can be assigned one or more IP addresses on my LAN. What is considered best practice for that?
4: Upgrading. When I upgrade my current VirtualBox setup, I just stop all the guests, upgrade VirtualBox, start the guests, and wham bam job done. Is the procedure with qemu-kvm as simple?
5: virtio - what's the deal with that? I've read a bit about it, but I'm not quite sure where it fits in.
6: I will be running Slackware VMs exclusively. Are there any optimization tricks I can do in the VM?
7: Filesystem. For my tests I've just used ext4 for host and guests. Is ext4 a good choice, or should I go with something different?

That should just about cover it for now. :)
I'm running KVM with Slackware on both hosts and guests so here goes.
1) I access my hosts and guests through both SSH and Webmin. I do have a shared keyboard, video and mouse available for the hosts. I used it much more when I was learning, but now it is rarely used.
2) I use raw images too. Snapshots are available if you use the qcow2 format. I don't use LVM, but IIRC LVM is only needed if you want live migration between hosts.
3) My guests pull IPs from my LAN DHCP server just like the hosts. I use tunctl to build an ethernet bridge on the host and connect to that with tap commands on guest start-up.
4) You don't need to stop the guests while you upgrade qemu-kvm on the host; however, after the host upgrade I stop and start the guests so they pick up the new code.
5) Speed is the reason for using virtio. They are paravirtual drivers that give guests more direct access to host devices. I use virtio for ethernet and drive access whenever possible.
6) Tricks? I'll have to think about this one. I use a lot of homegrown scripting to manage hosts and guests, but I can't remember any Slackware-specific tricks.
7) I primarily use ext3 on the hosts and guests because it has yet to fail me. I don't see why ext4 should be a problem if that's what you want to run.
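The tunctl-based setup described in 3) might look roughly like the following. This is a minimal sketch, not the poster's actual commands: the interface names (eth0, br0, tap0), the use of bridge-utils, and dhcpcd for the bridge address are all assumptions, and the commands need root.

```shell
#!/bin/sh
# Sketch of a host-side bridge + tap setup (assumed names: eth0, br0, tap0).
# Requires bridge-utils and tunctl (from uml-utilities); must run as root.
setup_bridge() {
    brctl addbr br0               # create the bridge
    brctl addif br0 eth0          # enslave the physical NIC
    tunctl -t tap0 -u root        # create a persistent tap device
    brctl addif br0 tap0          # attach the tap to the bridge
    ifconfig eth0 0.0.0.0 up      # clear the IP from eth0 ...
    ifconfig tap0 0.0.0.0 up
    dhcpcd br0                    # ... and put one on the bridge instead
}

# Only attempt the setup when running as root with the tools installed.
[ "$(id -u)" = "0" ] && command -v brctl >/dev/null 2>&1 && setup_bridge
```

With this in place the guest's tap traffic goes straight onto the LAN, so the guests can take addresses from the regular DHCP server as described.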
I use slackware64-13 as host to several windows guests.
1 - I use SSH.
2 - I use only qcow2.
3 - I use VDE + a bridge. It just magically works :) I used VDE to create the tap0 device, then bridged tap0 with eth0. This setup allows me to have any VM talking with the physical machines. VMs just get an IP address from the DHCP server like any physical machine would. Also, all VMs share tap0 as their network device (as long as each VM has a unique MAC address), so I don't need to make a new tap interface for each new VM I decide to create.
4 - I stop the guests, upgrade kvm, then start the guests again. It has never failed to work :)
5 - Using virtio drivers for HD and NIC in Windows guests makes a huge difference in performance. Never used it on Linux guests though...
6 - No Linux guest experience, sorry.
7 - I use ext3 and have had zero problems so far, although ext4 should be OK as well.

EDIT: I use kvm (www.linux-kvm.org), not qemu + kvm.
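Attaching a guest to the shared VDE switch with its own MAC could look like the sketch below. The image name, memory size, MAC address and socket path are placeholders, and the `kvm` binary name is an assumption (it may be `qemu-kvm` or `qemu-system-x86_64` depending on the build); the `-net vde` option requires kvm built with VDE support.

```shell
#!/bin/sh
# Sketch: start a guest on the shared VDE switch with a unique MAC address.
# disk.qcow2, the MAC and /var/run/vde.ctl are placeholder values.
start_guest() {
    kvm -m 512 \
        -drive file=disk.qcow2,if=virtio \
        -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
        -net vde,sock=/var/run/vde.ctl \
        -daemonize
}

# Only start when the switch socket and the kvm binary actually exist.
[ -e /var/run/vde.ctl ] && command -v kvm >/dev/null 2>&1 && start_guest
```

Because every guest plugs into the same switch socket, only the MAC address needs to change per VM, which matches the "one tap0 for everything" point above.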
Quote:
Hmmm, actually I don't really need snapshots either, as I already do backups of every bit of data that has any kind of significance. Must think a bit on that. Quote:
Is there a super-fast "local" connection between VMs? Something equivalent to VirtualBox's internal/host-only network setup? Quote:
I'll do some tests with ext3. How about keeping all the raw images on a ZFS filesystem (OpenSolaris or something like that) and exporting them via NFS? :)

/Thomas
Quote:
Man, I really need to learn more about networking on Linux. I utterly suck at this. Any and all advice, hints, tips and links are more than welcome. :)

/Thomas
Quote:
I prepare the VMs on a slackware 13 laptop, so I can configure it properly and turn on RDP and whatnot, but then migrate them to a slackware64 13 headless server. Quote:
I recommend you convert one of your VMs from raw to qcow2 and run both "versions" in parallel :) Quote:
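The raw-to-qcow2 conversion is a one-liner with qemu-img. In the sketch below a tiny scratch image stands in for a real guest disk (the file names are placeholders; in practice you would point `convert` at your existing raw image while the guest is shut down):

```shell
#!/bin/sh
# Sketch: convert a raw disk image to qcow2 (file names are placeholders).
# Guarded so it only runs where qemu-img is installed.
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f raw disk.raw 10M       # scratch stand-in for a real image
    qemu-img convert -O qcow2 disk.raw disk.qcow2
    qemu-img info disk.qcow2                  # verify format and size
fi
```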
Think of it as a "virtual network switch" + "virtual network cables" that you can use to make a network environment on a single machine. You can build very complex networks, and even connect virtual switches across the internet through ssh :)

I recommend you read AlienBOB's wiki on VDE: http://alien.slackbook.org/dokuwiki/...=slackware:vde Although he does not use a bridge, he pointed me in the right direction. You can also get his VDE package here: http://connie.slackware.com/~alien/slackbuilds/vde/

Here is my script to start VDE and the bridge. I call this script from rc.inet1, so it will configure the br0 interface (the bridge) automatically. Code:
#! /bin/sh |
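A minimal sketch of what such a VDE + bridge start script could contain, based only on the description above: the interface names (tap0, br0, eth0), the vde_switch options and dhcpcd for the bridge address are all assumptions, not the poster's actual script.

```shell
#!/bin/sh
# Sketch: one vde_switch owns tap0; tap0 is then bridged with eth0,
# so guests on the switch appear directly on the LAN.
# Requires vde2 and bridge-utils; must run as root (e.g. from rc.inet1).
start_vde_bridge() {
    vde_switch -tap tap0 -daemon -mod 660 -group users   # switch creates tap0
    brctl addbr br0                                      # create the bridge
    brctl addif br0 eth0                                 # add the physical NIC
    brctl addif br0 tap0                                 # add the VDE tap
    ifconfig eth0 0.0.0.0 up                             # IP lives on br0 ...
    ifconfig tap0 0.0.0.0 up
    dhcpcd br0                                           # ... not on eth0
}

# Only attempt the setup when running as root with the tools installed.
[ "$(id -u)" = "0" ] && command -v vde_switch >/dev/null 2>&1 && start_vde_bridge
```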
Quote:
I succeeded in getting a bridge and a tap up and running today, so my test VMs (or one of them at least) could connect to my LAN and my LAN could connect to them. I'm wondering whether I will have to create bridges and taps for each of the VMs, or if they can share them? Also, if I have a few VMs that will have to transmit lots of data between them, should I just use the bridge that connects the VMs to my LAN, or should I set up a "private" bridge between the VMs? So many questions... :)

/Thomas
Quote:
They REALLY make a difference.
Quote:
I think I'm going to do some more experimenting with this method, before I look into VDE. Quote:
Yes, I must admit that the performance numbers are very clear. :) A few of the more experienced IRC #kvm people have expressed concerns about the quality of the virtio block device drivers. I wonder if the same goes for the ethernet drivers? I guess I'm just going to have to try it. /Thomas |
Quote:
To answer some of your other questions from my perspective:

1. My hosts are headless and so are my clients. With bridged networking, it's no different than managing regular servers.
2. As others have said, the qcow2 format supports VM snapshots. Unrelated: using LVM on top of software RAID1 is fine. From LVM's perspective, a RAID1 is just another block device that can be used as a physical volume.
3. As above. You can have bind or dnsmasq give out static and dynamic addresses.
4. I started another thread on here yesterday because I upgraded the kvm userspace and it stopped working, possibly due to having an older kvm kernel module. I haven't got around to building a new one just yet. It may just work then, but at least from my point of view, upgrading was not as simple as removing the old and configure && make && make install'ing the new.
5. No experience with virtio, but this thread seems positive, so I might try it at least for the networking drivers.
6. n/a
7. ext3/ext4 are fine. Do note: if your host does write caching and crashes, your guests may lose data even if they use a journaling filesystem.
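The qcow2 snapshot handling mentioned in point 2 is done offline with qemu-img. A sketch, using a scratch image as a placeholder for a real guest disk (the snapshot name is made up, and the guest should be shut down when you do this):

```shell
#!/bin/sh
# Sketch: internal snapshots on a qcow2 image (disk.qcow2 is a placeholder).
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f qcow2 disk.qcow2 10M          # scratch image for illustration
    qemu-img snapshot -c before-upgrade disk.qcow2   # create a snapshot
    qemu-img snapshot -l disk.qcow2                  # list snapshots
    qemu-img snapshot -a before-upgrade disk.qcow2   # revert to it
    qemu-img snapshot -d before-upgrade disk.qcow2   # delete it
fi
```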
Quote:
A single tap device is created by Slackware's rc scripts and is then used by any and all VMs you want. There is no need to run a script that creates tap devices each time a VM is started, and no scripts to delete said tap devices once the VM is stopped. It just works :) Here is my typical VM-starting script. It can't be any simpler than this: Code:
#!/bin/sh |
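A start script along the lines described above could be as simple as the sketch below. The disk name, MAC address and memory size are placeholders, the `kvm` binary name is an assumption (it may be `qemu-kvm` on other builds), and tap0 is assumed to already exist from the rc scripts per the post above.

```shell
#!/bin/sh
# Sketch: start one guest on the pre-existing tap0 device with virtio
# disk and NIC. disk.qcow2, the MAC and the memory size are placeholders.
start_vm() {
    kvm -m 1024 \
        -drive file=disk.qcow2,if=virtio \
        -net nic,model=virtio,macaddr=52:54:00:00:00:01 \
        -net tap,ifname=tap0,script=no,downscript=no \
        -daemonize
}

# Only start when running as root with a kvm binary available.
[ "$(id -u)" = "0" ] && command -v kvm >/dev/null 2>&1 && start_vm
```

`script=no,downscript=no` tells kvm not to run any ifup/ifdown helper for the tap, matching the "no scripts on start or stop" point above.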