LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   kvm and Slackware - slimming it down (https://www.linuxquestions.org/questions/slackware-14/kvm-and-slackware-slimming-it-down-793173/)

TL_CLD 03-04-2010 08:57 AM

kvm and Slackware - slimming it down
 
Hey all,

I've started experimenting with using Slackware 13 as a host for server consolidation using kvm.

So far it's working pretty well, but I do have some questions I hope some experienced Slackware/kvm users can help answer:

1:
I'd like the host to be headless. What is considered "the best way" to manage a kvm host? virt-manager? libvirt? Plain old SSH access and qemu? Or?

2:
So far I've experimented mostly with raw qemu images. I/O is fast and it seems to just work, but it appears to me that snapshots are not possible with raw images, unless I use LVM? If I'm forced to use LVM, would it be considered good practice to use LVM on top of my regular software RAID1 setup?

3:
Network. Coming from VirtualBox where setting up the network is a breeze, I feel a bit in the dark with qemu-kvm. For my tests I've just used the built-in DHCP/NAT system, but for actual live servers I'd much prefer if they can be assigned one or more IP addresses on my LAN. What is considered best practice for that?

4:
Upgrading. When I upgrade my current VirtualBox setup, I just stop all the guests, upgrade VirtualBox, start the guests and wham bam job done. Is the procedure with qemu-kvm as simple?

5:
virtio - what's the deal with that? I've read a bit about it, but I'm not quite sure where it fits in.

6:
I will be running Slackware VM's exclusively. Are there any optimization tricks I can do in the VM?

7:
Filesystem. For my tests I've just used ext4 for host and guests. Is ext4 a good choice, or should I go with something different?

That should just about cover it for now. :)

Chuck56 03-04-2010 09:26 AM

I'm running KVM with Slackware on both hosts and guests so here goes.

1) I access my hosts and guests through both SSH and Webmin. I also have a shared keyboard, video and mouse available for the hosts. I used it much more when I was learning, but now it is rarely used.

2) I use raw images too. Snapshots are available if you use qcow2 format. I don't use LVM but IIRC LVM is only needed if you want live migration between hosts.

3) My guests pull IPs from my LAN DHCP server just like the hosts. I use tunctl to build an ethernet bridge on the host and connect to that with tap commands on guest start up.

4) You don't need to stop the guests while you upgrade qemu-kvm on the host. However, after the host upgrade I stop and start the guests so they pick up the new code.

5) Speed is the reason for using virtio. They are paravirtual drivers that give guests direct access to host devices. I use virtio for ethernet and drive access whenever possible.

6) Tricks? I'll have to think about this one. I use a lot of homegrown scripting to manage hosts and guests but I can't remember any Slackware specific tricks.

7) I primarily use ext3 on the hosts and guests because it has yet to fail me. I don't see why ext4 should be a problem if that's what you want to run.
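
For reference, the qcow2 snapshot workflow from point 2 looks roughly like this. This is just a sketch: the image name and snapshot name are examples, and the block skips quietly on machines without qemu-img installed.

```shell
# Sketch: internal snapshots on a qcow2 image (names are examples).
# Guarded so it exits cleanly on machines without qemu-img.
STATUS=skipped
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img create -f qcow2 /tmp/guest-disk.qcow2 8G          # new qcow2 image
  qemu-img snapshot -c before-upgrade /tmp/guest-disk.qcow2  # create a snapshot
  qemu-img snapshot -l /tmp/guest-disk.qcow2                 # list snapshots
  qemu-img snapshot -a before-upgrade /tmp/guest-disk.qcow2  # revert to it
  rm -f /tmp/guest-disk.qcow2
  STATUS=ok
fi
echo "$STATUS"
```

The snapshot lives inside the image file itself, which is exactly what the raw format can't do; LVM snapshots underneath raw images are the usual workaround.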

Slax-Dude 03-05-2010 09:20 AM

I use slackware64-13 as host to several windows guests.

1 - I use SSH.

2 - I use only qcow2.

3 - I use VDE + bridge. It just magically works :)
I used VDE to create tap0 device, then bridged tap0 with eth0.
This setup allows me to have any VM talking with the physical machines.
VMs just get an IP address from dhcp server like any physical machine would.
Also, all VMs share tap0 as their network device (as long as each VM has a unique mac address) so I don't need to make a new tap interface for each new VM I decide to create.

4 - I stop the guests, upgrade kvm, then start guests again. It never failed to work :)

5 - Using virtio drivers for HD and NIC in windows guests makes a huge difference in performance. Never used it on linux guests though...

6 - No linux guest experience, sorry.

7 - I use ext3 and had zero problems so far, although ext4 should be ok as well.

EDIT: I use kvm (www.linux-kvm.org) not qemu + kvm.

TL_CLD 03-05-2010 11:13 AM

Quote:

Originally Posted by Chuck56 (Post 3885596)
2) I use raw images too. Snapshots are available if you use qcow2 format. I don't use LVM but IIRC LVM is only needed if you want live migration between hosts.

So you don't do snapshots?

Hmmm, actually I don't really need snapshots either, as I already do backups of every bit of data that has any kind of significance... Must think a bit on that.

Quote:

Originally Posted by Chuck56 (Post 3885596)
3) My guests pull IPs from my LAN DHCP server just like the hosts. I use tunctl to build an ethernet bridge on the host and connect to that with tap commands on guest start up.

I guess I have some reading to do. tunctl, ethernet bridge, tap - fairly unknown stuff to me.

Is there a super-fast "local" connection between VM's? Something equivalent to VirtualBox internal/host network setup?

Quote:

Originally Posted by Chuck56 (Post 3885596)
5) Speed is the reason for using virtio. They are paravirtual drivers that give guests direct access to host devices. I use virtio for ethernet and drive access whenever possible.

The virtio drivers are part of the kernel right?

Quote:

Originally Posted by Chuck56 (Post 3885596)
7) I primarily use ext3 on the hosts and guests because it has yet to fail me. I don't see why ext4 should be a problem if that's what you want to run.

I've done some tests with ext4 on both host and guest, and the I/O performance seems pretty good.

I'll do some tests with ext3.

How about keeping all the raw images on a ZFS filesystem (OpenSolaris or something like that) and exporting them via NFS?

:)
/Thomas

TL_CLD 03-05-2010 11:31 AM

Quote:

Originally Posted by Slax-Dude (Post 3887026)
I use slackware64-13 as host to several windows guests.

1 - I use SSH.

So you're running your Windows VM's on a headless Slackware 13 server? How do you access them? Built-in VNC? RDP?

Quote:

Originally Posted by Slax-Dude (Post 3887026)
2 - I use only qcow2.

I've read there are some significant performance issues with qcow2, and since I intend to use KVM for servers, I need as much performance as I can get. Maybe I should try qcow2 before dismissing it. :)

Quote:

Originally Posted by Slax-Dude (Post 3887026)
3 - I use VDE + bridge. It just magically works :)
I used VDE to create tap0 device, then bridged tap0 with eth0.
This setup allows me to have any VM talking with the physical machines.
VMs just get an IP address from dhcp server like any physical machine would.
Also, all VMs share tap0 as their network device (as long as each VM has a unique mac address) so I don't need to make a new tap interface for each new VM I decide to create.

VDE?

Man, I really need to learn more about networking on Linux. I utterly suck at this.

Any and all advice, hints, tips and links are more than welcome. :)

/Thomas

Chuck56 03-05-2010 12:01 PM

Quote:

Originally Posted by TL_CLD (Post 3887143)
So you don't do snapshots?

Nope. I use rsnapshot nightly to backup critical files but the name is a coincidence.

Quote:

Originally Posted by TL_CLD (Post 3887143)
Is there a super-fast "local" connection between VM's? Something equivalent to VirtualBox internal/host network setup?

VDE might be what you're looking for but I've never used it. It may have a slight impact on ethernet performance because it introduces additional emulators for some of the virtual ethernet components.

Quote:

Originally Posted by TL_CLD (Post 3887143)
The virtio drivers are part of the kernel right?

Yep. They are built as kernel modules in Slackware.
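
A quick way to check on a given guest kernel is to ask modinfo about the standard upstream module names (what's actually present depends on how the kernel was configured):

```shell
# Sketch: check which virtio guest modules this kernel ships.
# Module names are the standard upstream ones; availability varies per kernel,
# so missing modules are reported rather than treated as an error.
FOUND=0; CHECKED=0
for m in virtio virtio_pci virtio_net virtio_blk; do
  CHECKED=$((CHECKED + 1))
  if modinfo "$m" >/dev/null 2>&1; then
    echo "$m: available"
    FOUND=$((FOUND + 1))
  else
    echo "$m: not built for this kernel"
  fi
done
echo "$FOUND of $CHECKED virtio modules found"
```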

Quote:

Originally Posted by TL_CLD (Post 3887143)
How about keeping all the raw images on a ZFS filesystem (OpenSolaris or something like that) and exporting them via NFS?

Use whatever file system you are most comfortable with. I prefer ext3/4 due to the journal and recovery. Once your raw files are in place they don't move around unless you are making copies of them. The data inside the VM images is what is changing.

Slax-Dude 03-08-2010 06:51 AM

Quote:

Originally Posted by TL_CLD (Post 3887166)
So you're running your Windows VM's on a headless Slackware 13 server? How do you access them? Builtin VNC? RDP?

I use the guest RDP.
I prepare the VMs on a slackware 13 laptop, so I can configure them properly and turn on RDP and whatnot, and then migrate them to a slackware64 13 headless server.



Quote:

Originally Posted by TL_CLD (Post 3887166)
I've read there are some significant performance issues with qcow2, and since I intend to use KVM for servers, I need as much performance as I can get. Maybe I should try qcow2 before dismissing it. :)

Never noticed any severe performance issues with qcow2.
I recommend you convert one of your VMs from raw to qcow2 and run both "versions" in parallel :)
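
The conversion itself is a single qemu-img call. A guarded sketch with placeholder file names (a small stand-in raw image is created first, and the whole thing skips if qemu-img isn't installed):

```shell
# Sketch: convert a raw image to qcow2 (file names are placeholders).
RESULT=skipped
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img create -f raw /tmp/demo.raw 64M >/dev/null      # stand-in raw image
  qemu-img convert -O qcow2 /tmp/demo.raw /tmp/demo.qcow2  # the actual conversion
  qemu-img info /tmp/demo.qcow2                            # confirm the new format
  rm -f /tmp/demo.raw /tmp/demo.qcow2
  RESULT=converted
fi
echo "$RESULT"
```

Since the original raw file is untouched, booting one VM from each copy makes the side-by-side performance comparison easy.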



Quote:

Originally Posted by TL_CLD (Post 3887166)
VDE?

Man, I really need to learn more about networking on Linux. I utterly suck at this.

VDE = Virtual Distributed Ethernet http://wiki.virtualsquare.org/index....sic_Networking
Think of it as "virtual network switch" + "virtual network cables" that you can use to make a network environment on a single machine.
You can build very complex networks, and even connect virtual switches across the internet through ssh :)

I recommend you read AlienBOB's wiki on VDE.
http://alien.slackbook.org/dokuwiki/...=slackware:vde
Although he does not use a bridge, he pointed me in the right direction.

You can also get his VDE package here: http://connie.slackware.com/~alien/slackbuilds/vde/

Here is my script to start VDE and the bridge.
I call this script from rc.inet1, so it configures the br0 interface (the bridge) automatically.
Code:

#!/bin/sh
# /etc/rc.d/rc.bridge
# This script is used to bring up the network bridge.
#

#########################
# NETWORK BRIDGE CONFIG #
#########################

# Config information for br0:
NICS_IN_BRIDGE="eth0 tap0"

###########
# LOGGING #
###########

# If possible, log events in /var/log/messages:
if [ -f /var/run/syslogd.pid -a -x /usr/bin/logger ]; then
  LOGGER=/usr/bin/logger
else # output to stdout/stderr:
  LOGGER=/bin/cat
fi

####################
# BRIDGE FUNCTIONS #
####################

# Function to start a network bridge.
bridge_start() {
  modprobe tun
  echo "STARTING VDE AND CREATING TAP DEVICE" | $LOGGER
  echo "/etc/rc.d/rc.bridge:  vde_switch -t tap0 -d" | $LOGGER
  vde_switch -t tap0 -d
  echo "STARTING BRIDGE br0" | $LOGGER
  echo "/etc/rc.d/rc.bridge:  brctl addbr br0" | $LOGGER
  brctl addbr br0
  for NIC_TO_BRIDGE in $NICS_IN_BRIDGE; do
    ifconfig $NIC_TO_BRIDGE down
    ifconfig $NIC_TO_BRIDGE 0.0.0.0 promisc up
    echo "/etc/rc.d/rc.bridge:  brctl addif br0 $NIC_TO_BRIDGE" | $LOGGER
    brctl addif br0 $NIC_TO_BRIDGE
  done
}

# Function to stop a network bridge.
bridge_stop() {
  echo "STOPPING BRIDGE br0" | $LOGGER
  for NIC_TO_BRIDGE in $NICS_IN_BRIDGE; do
    echo "/etc/rc.d/rc.bridge:  brctl delif br0 $NIC_TO_BRIDGE" | $LOGGER
    brctl delif br0 $NIC_TO_BRIDGE
  done
  ifconfig br0 down
  echo "/etc/rc.d/rc.bridge:  brctl delbr br0" | $LOGGER
  brctl delbr br0
  echo "STOPPING VDE AND REMOVING TAP DEVICE" | $LOGGER
  echo "/etc/rc.d/rc.bridge:  killall vde_switch" | $LOGGER
  killall vde_switch
  modprobe -r tun
}

############
### MAIN ###
############

case "$1" in
'start') # "start" starts the bridge:
  bridge_start
  ;;
'stop') # "stop" stops the bridge:
  bridge_stop
  ;;
'restart') # "restart" restarts the bridge:
  bridge_stop
  bridge_start
  ;;
*) # The default is to start the bridge:
  bridge_start
esac

# End of /etc/rc.d/rc.bridge


TL_CLD 03-08-2010 03:45 PM

Quote:

Originally Posted by Chuck56 (Post 3887197)
Use whatever file system you are most comfortable with. I prefer ext3/4 due to the journal and recovery. Once your raw files are in place they don't move around unless you are making copies of them. The data inside the VM images is what is changing.

I was thinking about ZFS because of its ability to do really fast and cheap snapshots. I've not been able to find a Linux filesystem with similar capabilities.
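
The closest thing I've seen on the block level is an LVM snapshot (pre-allocated copy-on-write, so not as cheap as ZFS, but workable for backups). A dry-run sketch, with vg0/vmstore as assumed names; the commands are printed rather than executed, since snapshotting needs root and a real volume group:

```shell
# Dry-run sketch: LVM snapshot of a volume holding VM images.
# vg0/vmstore/vmstore-snap are assumed names; commands are printed, not run.
VG=vg0 LV=vmstore SNAP=vmstore-snap
PLAN="lvcreate --snapshot --size 2G --name $SNAP /dev/$VG/$LV
mount -o ro /dev/$VG/$SNAP /mnt/snap    # back up from the frozen view
umount /mnt/snap
lvremove -f /dev/$VG/$SNAP"
echo "$PLAN"
```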

TL_CLD 03-08-2010 03:54 PM

Quote:

Originally Posted by Slax-Dude (Post 3890113)
Never noticed any severe performance issues with qcow2.
I recommend you convert one of your VMs from raw to qcow2 and run both "versions" in parallel :)

Good plan. I will do that.

Quote:

Originally Posted by Slax-Dude (Post 3890113)
VDE = Virtual Distributed Ethernet http://wiki.virtualsquare.org/index....sic_Networking
Think of it as "virtual network switch" + "virtual network cables" that you can use to make a network environment on a single machine.
You can build very complex networks, and even connect virtual switches across the internet through ssh :)

VDE looks very interesting, though I'm puzzled as to why the KVM/Qemu combo has made networking so "complicated". Hopefully future versions will make it a bit simpler. :)

I succeeded in getting a bridge and a tap up and running today, so my test-VM's (or one of them at least) could connect to my LAN and my LAN could connect to them.

I'm wondering whether I will have to create bridges and taps for each of the VM's, or if they can share them?

Also, if I have a few VM's that will have to transmit lots of data between them, should I just use the bridge that connects the VM's to my LAN, or should I set up a "private" bridge between the VM's?

So many questions... :)
/Thomas

Slax-Dude 03-08-2010 05:22 PM

Quote:

Originally Posted by TL_CLD (Post 3890686)
I'm puzzled as to why the KVM/Qemu combo have made networking so "complicated".

I was wondering the same thing :)

Quote:

Originally Posted by TL_CLD (Post 3890686)
I'm wondering whether I will have to create bridges and taps for each of the VM's, or if they can share them?

With VDE, you only need 1 bridge and 1 tap device.

Quote:

Originally Posted by TL_CLD (Post 3890686)
Also, if I have a few VM's that will have to transmit lots of data between them, should I just use the bridge that connects the VM's to my LAN, or should I set up a "private" bridge between the VM's?

I became happy with the network performance between VMs once I started using virtio drivers.
They REALLY make a difference.

TL_CLD 03-09-2010 12:35 AM

Quote:

Originally Posted by Slax-Dude (Post 3890763)
I was wondering the same thing :)

With VDE, you only need 1 bridge and 1 tap device.

I had a chat with some of the #kvm people yesterday, and they pointed out that one bridge could be shared among all the VM's but each of them would need their own tap device.

I think I'm going to do some more experimenting with this method, before I look into VDE.
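
For the record, the per-VM-tap recipe they described boils down to something like this. Interface names are examples, and the commands are printed rather than executed, since they need root plus tunctl/brctl:

```shell
# Dry-run sketch: one shared bridge, one tap device per VM.
# br0/tap1 are example names; commands are printed, not run.
BRIDGE=br0 TAP=tap1
SETUP="tunctl -b -t $TAP                  # create a persistent tap device
ifconfig $TAP 0.0.0.0 promisc up
brctl addif $BRIDGE $TAP                  # attach the tap to the bridge"
echo "$SETUP"
echo "qemu then attaches with: -net nic -net tap,ifname=$TAP,script=no"
```

For a second VM you'd repeat the three setup commands with tap2, and so on; the bridge itself is shared.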

Quote:

Originally Posted by Slax-Dude (Post 3890763)
I became happy with the network performance between VMs once I started using virtio drivers.
They REALLY make a difference.

http://www.linux-kvm.org/page/Using_VirtIO_NIC

Yes, I must admit that the performance numbers are very clear. :)

A few of the more experienced IRC #kvm people have expressed concerns about the quality of the virtio block device drivers. I wonder if the same goes for the ethernet drivers?

I guess I'm just going to have to try it.

/Thomas

[GOD]Anck 03-09-2010 02:25 AM

Quote:

Originally Posted by TL_CLD (Post 3891083)
I had a chat with some of the #kvm people yesterday, and they pointed out that one bridge could be shared among all the VM's but each of them would need their own tap device.

I think I'm going to do some more experimenting with this method, before I look into VDE.

Yes, that is possible; this is how I set it up as well. This page might be helpful: http://blog.cynapses.org/2007/07/12/...network-setup/

To answer some of your other questions from my perspective:

1. My hosts are headless and so are my clients. With bridged networking, it's no different than managing regular servers.

2. As others have said, the qcow2 format supports VM snapshots. Unrelated: using LVM on top of software RAID1 is fine. From LVM's perspective, a RAID1 array is just another block device that can be used as a physical volume.

3. As above. You can have dhcpd or dnsmasq give out static and dynamic addresses.

4. I started another thread on here yesterday because I upgraded the kvm userspace and it stopped working, possibly due to having an older kvm kernel module. I haven't got around to building a new one just yet. It may just work then, but at least from my point of view, upgrading was not as simple as removing the old version and configure && make && make install'ing the new one.

5. No experience with virtio, but this thread seems positive, so I might try it at least for networking drivers.

6. n/a

7. ext3/ext4 are fine. Do note: if your host does write caching and crashes, your guests may lose data even if they use a journaling filesystem.
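
To illustrate point 2, stacking LVM on an existing md RAID1 array is only a few commands. A dry-run sketch with assumed device and group names (/dev/md0, vg0); printed rather than executed, since it needs root and would destroy data on the array:

```shell
# Dry-run sketch: LVM on top of an existing software RAID1 array.
# /dev/md0 and vg0 are assumed names; commands are printed, not run.
PLAN="pvcreate /dev/md0                   # the md array becomes a physical volume
vgcreate vg0 /dev/md0                     # a volume group on top of it
lvcreate --size 20G --name guest1 vg0     # one logical volume per guest disk"
echo "$PLAN"
```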

Slax-Dude 03-09-2010 04:54 AM

Quote:

Originally Posted by TL_CLD (Post 3891083)
I had a chat with some of the #kvm people yesterday, and they pointed out that one bridge could be shared among all the VM's but each of them would need their own tap device.

That's why I like VDE so much: it is not needlessly complex.
A single tap device is created by slackware's rc.scripts and is then used by any and all VMs you want.
There is no need to run a script that creates a tap device each time a VM is started, nor a script to delete it once the VM is stopped.

It just works :)

Here is my typical VM-starting script.
It can't be any simpler than this:
Code:

#!/bin/sh
# script to start windows 2003 server
vdeqemu \
-k pt \
-m 1024 \
-nographic \
-no-fd-bootchk \
-localtime \
-drive file=/root/vm/win2k3/win2k3-hermes-disk0.qcow2,if=virtio,boot=on \
-net vde \
-net nic,model=virtio,macaddr=52:54:00:00:00:02 \
-name "hermes" $*
# NOTE: macaddr value should be unique per VM


