Linux - Virtualization and Cloud

This forum is for the discussion of all topics relating to Linux Virtualization and Linux Cloud platforms. Xen, KVM, OpenVZ, VirtualBox, VMware, Linux-VServer and all other Linux Virtualization platforms are welcome. OpenStack, CloudStack, ownCloud, Cloud Foundry, Eucalyptus, Nimbus, OpenNebula and all other Linux Cloud platforms are welcome. Note that questions relating solely to non-Linux OSes should be asked in the General forum.
I have only used qemu a few times, so I am a real newbie to qemu virtualization (though I have been using Linux for several years). In Linux, I know I can measure a command with the time command. However, when I boot up an image using qemu-system-x86_64, how do I know how long qemu takes to boot the image until, e.g., the login screen is displayed (or until a specific process starts)?
The only information I have found so far is https://stefano-garzarella.github.io...nux-boot-time/, but that approach looks like it requires patching Linux and qemu from source. So I am curious: is there any way to measure such metrics (boot time) without applying patches? For instance, is there a flag in the qemu command that can measure or monitor this?
The qemu version I use is 4.1.0; the Linux kernel is 5.2.0-2-amd64.
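One patch-free approach is to measure from outside the guest: start a timer when qemu is launched, route the guest console to stdout (e.g. with -nographic), and stop the timer when a known marker string such as "login:" appears. This is a minimal sketch under those assumptions; the image path and qemu options in the comment are placeholders, not a tested command line, and the marker you wait for depends on your distro's console output:

```python
import subprocess
import time

def time_until_marker(cmd, marker, timeout=300.0):
    """Start `cmd`, stream its combined stdout/stderr, and return the
    seconds elapsed until `marker` appears in the output."""
    start = time.monotonic()
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    window = ""
    try:
        while True:
            # Read character by character: a login prompt usually has no
            # trailing newline, so line-based reading would miss it.
            ch = proc.stdout.read(1)
            if not ch:
                break  # process closed its output
            # Keep a sliding window the size of the marker and compare.
            window = (window + ch)[-len(marker):]
            if window == marker:
                return time.monotonic() - start
            if time.monotonic() - start > timeout:
                raise TimeoutError("marker not seen within %.0fs" % timeout)
    finally:
        proc.terminate()
    raise RuntimeError("process exited before the marker appeared")

# Hypothetical usage (disk.img is a placeholder):
# elapsed = time_until_marker(
#     ["qemu-system-x86_64", "-m", "1G", "-nographic", "-hda", "disk.img"],
#     "login:")
# print("boot took %.1f s" % elapsed)
```

Note this measures wall-clock time from process start, so it includes qemu's own startup overhead as well as guest boot; for per-phase numbers (firmware vs. kernel vs. userspace) you would still need something like the tracepoint-based approach in the linked article.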
By that I mean, why do you want to measure boot time, and what are the criteria? It helps to know what you REALLY want to measure and why before suggesting an answer.
Update:
I managed to install the 5.4 kernel source with apt-get source linux-source-5.4 and set PERF_EXEC_PATH to /path/to/linux-source-5.4/tools/perf. But the result is the same (the qemu_init_end error is still thrown).
Thanks for all the replies. I came across https://github.com/stefano-garzarella/qemu-boot-time and it looks like what I need. So I ran the commands from the "How to use" section, replacing the qemu command with the one I use, and it generates qemu_perf.data. However, when I try the command from the "Example of output" section, I get:
in trace_begin
Traceback (most recent call last):
File "qemu-boot-time/perf-script/qemu-perf-script.py", line 195, in kvm__kvm_entry
if (events.traces[pid].qemu_init_end != 0):
AttributeError: 'collections.defaultdict' object has no attribute 'qemu_init_end'
Fatal Python error: problem in Python trace event handler
It seems to me the problem stems from line 125 of qemu-perf-script.py (https://github.com/stefano-garzarell...script.py#L125), because it uses autodict(), which by default does not have qemu_init_end (but I could be wrong). However, when searching packages with pip, pip3, or apt-cache, I don't find anything related (e.g. pip3 search autodict, apt-cache search autodict). What package should I install to fix this issue? Or is there anything I should add to the code to fix it? (I haven't used autodict() before, so I am not sure how it works, and I can't find any docs for that library.)
Thanks for the help.
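On the autodict question: it is not a pip or apt package — as far as I know it ships with perf's Python trace utilities in the kernel source (under tools/perf/scripts/python/Perf-Trace-Util/), which is why PERF_EXEC_PATH matters. Below is a minimal reimplementation sketch of how such an auto-vivifying dict behaves, plus a getattr() guard that would avoid the AttributeError in the traceback. Treat the guard as an assumption about the script's intent, not a confirmed fix: the underlying cause may be that the qemu:qemu_init_end tracepoint was never recorded in qemu_perf.data.

```python
# Minimal sketch of an auto-vivifying dict in the style of perf's
# autodict helper (the real one comes with the kernel's perf sources,
# not from PyPI, which is why pip/apt searches find nothing).
class autodict(dict):
    """Accessing a missing key creates and stores a nested autodict."""
    def __getitem__(self, key):
        try:
            return dict.__getitem__(self, key)
        except KeyError:
            value = self[key] = type(self)()
            return value

# Per the traceback, events.traces[pid] is auto-created on first access
# as an empty container, so it has no qemu_init_end attribute until that
# tracepoint is actually seen. A defensive guard (hypothetical change to
# line 195 of qemu-perf-script.py) would be:
#
#     if getattr(events.traces[pid], "qemu_init_end", 0) != 0:
#         ...
#
# which falls back to 0 instead of raising AttributeError.
```

If the guard makes the crash go away but the script then reports nothing, that would point to the tracepoints not being present in the recorded data rather than a missing Python package.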
Last edited by shogun1234; 04-01-2020 at 04:08 PM.