Solaris 10 guest endless reboot under KVM/Qemu
I've been searching the forum and I have not found specific information about my problem.
The situation is that I have managed to install a Solaris 10 guest under KVM/Qemu on an Ubuntu Gutsy machine with an AMD Turion X2 processor and 512 MiB for the VM. The problem is that after the install, when I try to boot the VM (either with KVM/Qemu or plain Qemu), it goes into an endless reboot loop.
Thanks in advance!!
Hi! The name may look a bit different but I am that same guy who posted on the Sun forums a year ago. I have to say that I have seen a lot of progress since then. My original suspicions were eventually confirmed, it really was an issue with the newer Intel CPUs not being (properly) supported. But that part got solved with the next Solaris release although it still left me without sound or network. Both worked fine when I switched over to the Solaris Express edition, though. It tends to be a bit ahead in terms of hardware support. I am now a happy (though occasional) Solaris user.
As for your issue, I suspect that it is indirectly related to mine. I imagine that Qemu is having the same issues with the bootloader that VMware or native installs on Core 2 Duo systems had back then. I have heard about similar Qemu/Solaris issues over the last two years, so this may be a structural limitation. Unless the issue gets addressed by Qemu or the folks at Sun, there isn't all that much that you can do. Maybe try Solaris Express or one of the live distros that are based on OpenSolaris.
OK, I get it.
I'll be getting a copy of Solaris Express. I'll tell you what happens next.
Did you have any successes with Solaris on kvm in the meantime?
Must only be every few months that someone tries to get Solaris happening with QEMU. I'm on Ubuntu 9.04. Just installed the latest Solaris 10 x86_64 (not OpenSolaris) and I am having the constant reboot issue. Have tried multiple configurations and not getting anywhere.
Good to see I'm not the only one having this problem, but it's not the best news that there isn't a quick fix available.
I don't want to use OpenSolaris, for consistency in testing, but I thought I'd post to say that this is still ongoing, and I feel sorry for you if you have spent any time on this problem ;) .
CPU is a Core 2 Duo at 2.4 GHz.
Off to install VMware; I have had success with Sol10 in VMware Workstation, for those looking for an alternative.
VirtualBox works nicely with Solaris too.
Second, you should give it at least 1024 MiB of RAM for the install.
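For example, a minimal install invocation along those lines might look like this (the disk and ISO file names are placeholders, not from this thread):

```shell
# Boot the Solaris 10 installer with 1 GiB of RAM (-m is in MiB)
qemu-kvm -m 1024 \
  -hda sol10.img \
  -cdrom sol10-dvd.iso \
  -boot d
```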
Just 'cause I have been working on this for a while now with Fedora 8-12 :) I'll share some notes.
They are motoring at fixing things right now. In Fedora it is actually pretty decent, if I can work out some of the speed issues with 64-bit.
You need 768 MiB minimum RAM to install Solaris 10 u8 (u7 was 768 with ZFS and 384 without it); previous versions had some issues with low memory. A gig is about the minimum you really want to give the install. It helps for speed, and it does weird things when it runs out of memory.
Then you can knock it down to 512 MiB after you remove the ZFS packages, zone packages, and Live Upgrade, and possibly the DTrace packages (SUNWlxr and SUNWlxu need to go as well if you remove DTrace). There are also a ton of fibre NIC drivers and a whole host of other things that can come out of the kernel.
Make sure that after removing packages you init 6 rather than reboot, to update the kernel.
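A rough sketch of that removal step, assuming the usual Solaris 10 SUNW package names (these are my guesses except for the two named above; verify each with pkginfo before removing anything):

```shell
# Example only: confirm names first, e.g.  pkginfo | grep -i zfs
pkgrm SUNWzfsr SUNWzfsu SUNWzfskr        # ZFS (assumed names)
pkgrm SUNWzoner SUNWzoneu                # zones (assumed names)
pkgrm SUNWlur SUNWluu                    # Live Upgrade (assumed names)
pkgrm SUNWdtrc SUNWlxr SUNWlxu           # DTrace plus the two mentioned above
init 6                                   # proper reboot so the kernel/boot archive gets updated
```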
To turn off the gui login:
svcadm disable cde-login
(The GUI actually runs pretty well on 32-bit/32-bit, as does Solaris in general. If you run 64-bit Solaris on a 32-bit machine it takes like 8 hours to compile Samba. :P 64-bit/64-bit leaves a bit to be desired at this point, and a single CPU seems to be better than multiple CPUs for 64-bit.)
Use virtio if you can get it to work. It is actually what I was looking for. :) I couldn't get it to work with 64-bit/64-bit.
I am not sure I had it working with 32-bit/32-bit either.
Supposedly you just need to add kernel/unix to the kernel line in grub after installing. I haven't tried it yet.
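If I understand that correctly, the change would go in the Solaris entry of /boot/grub/menu.lst inside the guest, something like the following (untested, per the above; forcing the 32-bit kernel/unix instead of letting it pick the 64-bit one):

```
title Solaris 10 (32-bit kernel)
kernel /platform/i86pc/multiboot kernel/unix
module /platform/i86pc/boot_archive
```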
You can't use hugepages if you are using libvirtd, but if you are just running qemu from the command line you can. (libvirtd needs to parse the options..)
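For the command-line case, the guest RAM gets backed by hugepages via -mem-path; a sketch as root (the mount point and page count are examples):

```shell
# Reserve some 2 MiB hugepages and mount hugetlbfs
echo 600 > /proc/sys/vm/nr_hugepages
mkdir -p /dev/hugepages
mount -t hugetlbfs hugetlbfs /dev/hugepages

# Then point qemu at the mount
qemu-kvm -m 1024 -mem-path /dev/hugepages -hda sol10.img
```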
It seems to work best with raw file images rather than qcow2.
(fmd seems to not appreciate qcow2 and core dumps a lot.)
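Creating the disk as raw instead of qcow2 is just (size is an example):

```shell
qemu-img create -f raw sol10.img 12G
```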
Depending on the version of qemu you may have display issues; however, the 0.10.x and 0.11 releases have been pretty decent.
libvirt 0.7 is better.
I'm just geeked they abstracted the network layer, so I can stay sshed into the VM, switch from wireless to hardwired for the outside network, and put the virtual and physical machines to sleep and still remain connected to the SSH session using their NAT.
Don't reconfigure the devices, i.e., don't touch /reconfigure.
It kills the performance for whatever reason, and be a little shy about patching it too; a patch might trigger a reconfigure. :)
I probably should file a bug on it..
Hope that helps.