qemu
There are many different ways of using qemu. Hopefully, people will add notes for what they do.
Use as a server
qemu can be built without any graphical environment and run on a server, where the display will be made available by the integrated vnc server.
Building qemu without any graphical environment on a system that has a graphical environment installed seems to be a bit tricky. I was successful with the following commands:
pathremove $XORG_PREFIX/bin             PATH
pathremove $XORG_PREFIX/lib/pkgconfig   PKG_CONFIG_PATH
pathremove $XORG_PREFIX/share/pkgconfig PKG_CONFIG_PATH
pathremove $XORG_PREFIX/lib             LIBRARY_PATH
pathremove $XORG_PREFIX/include         C_INCLUDE_PATH
pathremove $XORG_PREFIX/include         CPLUS_INCLUDE_PATH
export PATH PKG_CONFIG_PATH ACLOCAL LIBRARY_PATH C_INCLUDE_PATH CPLUS_INCLUDE_PATH MANPATH
unset XORG_CONFIG XORG_PREFIX    # XORG_PREFIX was /opt/X11 here

mkdir build
cd build || exit

../configure --prefix=/usr --disable-pa --disable-jack \
    --disable-vte --sysconfdir=/etc --disable-sdl --disable-sdl-image \
    --disable-gtk --enable-vnc --audio-drv-list=alsa --disable-vnc-sasl --enable-kvm \
    --disable-opengl --target-list="$TARGET_LIST"
make
Note that this probably requires Xorg to be installed outside /usr, as it is here.
However, you should also be able to run qemu with the display on the integrated vnc server even if it has been linked against xorg, gtk and whatnot; the important thing is to build qemu with vnc support, which is achieved by configuring it with '--enable-vnc --disable-vnc-sasl'.
In order to then run a qemu virtual machine on a server and connect to its display from a client, you will only need vncviewer on the client.
VNC display numbers map onto TCP ports starting at 5900: qemu adds 5900 to the display number you give it to obtain the real port number.
The following will work:
#
# start the qemu vm with vnc display on the server
#
QEMU_LOG_FILE=/tmp/myvm.log
QEMU_SERVER_VNC_ADDR=192.168.219.5
QEMU_SERVER_VNC_PORT=1

nohup /usr/bin/qemu $ALL_YOUR_USUAL_QEMU_OPTIONS_FOR_THIS_VM \
      -vnc $QEMU_SERVER_VNC_ADDR:$QEMU_SERVER_VNC_PORT       \
      > $QEMU_LOG_FILE 2>&1 &
qemu will then make the display of the vm available on 192.168.219.5:5901 (display 1, i.e. port 5900+1).
So, on your client desktop, you can then connect to it by:
vncviewer 192.168.219.5:5901
Xorg drivers if using qemu to build an LFS desktop guest
For a plain BLFS x86 or x86_64 guest system running Xorg without spice, use the modesetting driver that is part of Xorg-server (provided you enabled libepoxy).
It requires CONFIG_DRM_FBDEV_EMULATION and CONFIG_DRM_BOCHS, and you should also enable CONFIG_DRM_VGEM in your kernel config to speed up video output (e.g. when listing files). For me this defaults to a 1024x768 framebuffer, but it offers modes which will depend on your machine. Although you can force the standard VGA card with '-vga std', that is now the default.
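As a sketch (menu text varies between kernel versions), the corresponding entries are found under Graphics support:

Device Drivers --->
  Graphics support --->
    <*> Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)  CONFIG_DRM
    [*]   Enable legacy fbdev support for your modesetting driver        CONFIG_DRM_FBDEV_EMULATION
    <*> DRM Support for bochs dispi vga interface (qemu stdvga)          CONFIG_DRM_BOCHS
    <*> Virtual GEM provider                                             CONFIG_DRM_VGEM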
If the 1024x768 framebuffer console is too small to read, enable the 12x22 font in the kernel. You can then optionally use the LatGrkCyr-12x22 font in /etc/sysconfig/console, provided you are running a UTF-8 system.
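A minimal sketch of the corresponding /etc/sysconfig/console entries (LFS format; a UTF-8 system is assumed):

# Begin /etc/sysconfig/console
UNICODE="1"
FONT="LatGrkCyr-12x22"
# End /etc/sysconfig/console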
Other notes on setting up Xorg in a guest
To get other screen sizes, use xrandr. Read the subsection named "Fine Tuning Display Settings" in the section "Xorg-7 Testing and Configuration" of the BLFS book for details.
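For example (the output name Virtual-1 is typical for the modesetting driver under qemu, but check xrandr's own output for yours):

xrandr                                       # list outputs and available modes
xrandr --output Virtual-1 --mode 1280x1024   # switch to another advertised mode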
HOWEVER, in some circumstances (e.g. using icewm as the wm in the guest, with xscreensaver running on the host machine), the desktop of the guest can occasionally be mostly black when you return to it. The reason for this is not understood. For icewm, right-click on what should be an empty area of the desktop (if you have one), open the 'logout' menu entry by clicking on the triangle at its right end, then select 'restart icewm' to restore the desktop.
Integrating with KVM and SPICE
Please note that these instructions are preliminary. The following packages need to be added in addition to those currently listed in BLFS:
lz4
There are no additional dependencies for lz4.
wget https://www.github.com/lz4/lz4/archive/v1.9.1/lz4-1.9.1.tar.gz &&
tar -xf lz4-1.9.1.tar.gz &&
cd lz4-1.9.1 &&
make -C lib PREFIX=/usr &&
make -C programs PREFIX=/usr lz4 lz4c &&
sudo make install PREFIX=/usr MANDIR=/usr/share/man/man1
usbredir
Usbredir depends only on libusb.
wget http://www.spice-space.org/download/usbredir/usbredir-0.8.0.tar.bz2 &&
tar -xf usbredir-0.8.0.tar.bz2 &&
cd usbredir-0.8.0 &&
./configure --prefix=/usr &&
make &&
sudo make install
celt-0.5.1 (this specific version is required; it does not interfere with newer versions)
Celt-0.5.1 depends on libogg.
wget https://downloads.us.xiph.org/releases/celt/celt-0.5.1.3.tar.gz &&
tar -xf celt-0.5.1.3.tar.gz &&
cd celt-0.5.1.3 &&
./configure --prefix=/usr &&
make &&
sudo make install
spice-protocol
No additional dependencies.
wget https://gitlab.freedesktop.org/spice/spice-protocol/uploads/f18acfa4a10482062b3f3484bddeb9fa/spice-protocol-0.14.0.tar.bz2 &&
tar -xf spice-protocol-0.14.0.tar.bz2 &&
cd spice-protocol-0.14.0 &&
./configure --prefix=/usr &&
make &&
sudo make install
orc
No additional dependencies.
git clone git://anongit.freedesktop.org/gstreamer/orc &&
cd orc &&
mkdir build &&
cd build &&
meson --prefix=/usr .. &&
ninja &&
sudo ninja install
spice-server
Depends on celt-0.5.1, cyrus-sasl, glib2, gtk2, gst-libav, libjpeg, lz4, openssl, opus, orc, pixman, and python2. Optional dependencies are libvpx, valgrind, and x264.
wget https://www.spice-space.org/download/releases/spice-server/spice-0.14.1.tar.bz2 &&
tar -xf spice-0.14.1.tar.bz2 &&
cd spice-0.14.1 &&
./configure --prefix=/usr --disable-static --enable-client \
            --disable-werror --enable-celt051 &&
# add --enable-smartcard if desired
make &&
sudo make install
spice-gtk
Depends on Cyrus-SASL, DBUS, gst-libav, gtk+-3.x, lz4, PulseAudio, and spice-server. Optionally depends on usbredir for usb redirection to host.
wget https://www.spice-space.org/download/gtk/spice-gtk-0.36.tar.bz2 &&
tar -xf spice-gtk-0.36.tar.bz2 &&
cd spice-gtk-0.36 &&
./configure --prefix=/usr --enable-celt051 &&
make &&
sudo make install
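spice-gtk ships a small test client, spicy, which can be used to check a spice-enabled guest. A hedged sketch (the port number is arbitrary; older qemu versions take the bare disable-ticketing flag instead of disable-ticketing=on):

# on the host, start the guest with something like:
#   -spice port=5930,disable-ticketing=on
# then connect from the client with:
spicy -h 192.168.219.5 -p 5930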
xf86-video-qxl
This is needed on the guest, not the host; however, xspice can also be built on a real host to allow remote control. This should generally perform better than VNC or XFWD, and in some cases better than NX server. Instructions for use on real hardware will be added at a later time. Depends on spice-protocol and xorg-server.
wget http://xorg.freedesktop.org/releases/individual/driver/xf86-video-qxl-0.1.5.tar.bz2 &&
tar -xf xf86-video-qxl-0.1.5.tar.bz2 &&
cd xf86-video-qxl-0.1.5 &&
./configure $XORG_CONFIG &&
# add --enable-xspice=yes if you want to use spice as a remote control application on the host
make &&
sudo make install
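If built with --enable-xspice, the package installs an Xspice wrapper script. A usage sketch (flag names from memory; check Xspice --help on your version):

# run a spice-reachable X server as display :1
Xspice --port 5900 --disable-ticketing :1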
spice-vdagent
This is needed on your guests (and can be useful on a real host if using spice for remote control of the host). Depends on libinput, libpciaccess, and xf86-video-qxl.
wget https://www.spice-space.org/download/releases/spice-vdagent-0.19.0.tar.bz2 &&
tar -xf spice-vdagent-0.19.0.tar.bz2 &&
cd spice-vdagent-0.19.0 &&
./configure --prefix=/usr --sysconfdir=/etc --with-init-script=systemd &&
# or --with-init-script=redhat if using the SysV book (but it needs to be modified)
sed 's@-Werror@@g' -i Makefile &&
make &&
sudo make install
libvirt
Libvirt is a generic interface for managing multiple types of hypervisors.
wget https://libvirt.org/sources/libvirt-5.4.0.tar.xz &&
tar -xf libvirt-5.4.0.tar.xz &&
cd libvirt-5.4.0 &&
./configure --prefix=/usr --with-storage-lvm --without-xen \
            --with-udev --without-hal --disable-static \
            --with-init-script=systemd --with-qemu-user=nobody \
            --with-qemu-group=nobody --with-interface &&
# or --with-init-script=none if using the SysV book (you will need to create the
# scripts yourself; look at the systemd service files in
# tools/libvirt-guests.service.in, src/remote/libvirtd.service.in,
# src/logging/virtlogd.service.in, and src/locking/virtlockd.service.in)
# add --with-netcf --with-storage-iscsi if you have these available
make &&
sudo make install
Now build qemu as per BLFS instructions, then continue on with virt-manager.
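Before moving on, a quick sanity check that libvirt works (assuming the systemd units installed above and a rebuilt qemu):

systemctl start libvirtd &&
virsh -c qemu:///system list --all    # should print an (empty) table of guests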
virt-manager
Virt-manager is a PyGTK control interface for libvirt that allows you to create, modify, monitor, and operate your VMs.
Depends on several python modules. Those in BLFS are pycairo, pygobject, and pygtk, built against python2, along with VTE. Those not in BLFS are requests and ipaddrinfo. libosinfo is also needed (all to be added later).
wget https://virt-manager.org/download/sources/virt-manager/virt-manager-2.1.0.tar.gz &&
tar -xf virt-manager-2.1.0.tar.gz &&
cd virt-manager-2.1.0 &&
python setup.py build &&
sudo python setup.py install --optimize=1
Using Virtio devices
Instead of completely emulating existing hardware devices (network interface, block device, GPU, ...), qemu can present virtual devices to the guest which access the host's hardware directly. This gives near bare-metal performance in virtual machines. These devices are known as virtio devices, and the linux kernel has powerful drivers for them.
Virtio block device
Qemu command line options are numerous and redundant: there are often several ways to do the same thing. One way to enable a virtio block device is described in the "Meaning of the command line options" paragraph on the book's page. Here I'll describe "stacking" block devices and using this stack as a device on the guest, which allows more flexibility in defining what is passed to the guest. For more information, have a look at the qemu documentation. Usually, a whole disk on the guest is actually only a file on the host. Suppose a qcow2 image has been created with qemu-img as /path/to/image.qcow2. Then the first level of the stack is given by:
-blockdev driver=file,node-name=img1,filename=/path/to/image.qcow2
It just says that you want to access a file named /path/to/image.qcow2, and creates a handle (the "node-name") that the second level of the stack can use. The second level is given by:
-blockdev driver=qcow2,file=img1,node-name=disk1
Now it says that the file will be accessed as a qcow2 image, and provides a handle to the next level. The stack as a whole describes a disk drive, which we can use as a device on the guest. This device is defined as virtio-blk:
-device virtio-blk,drive=disk1
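Putting the three options together, a minimal invocation could look like the following (the memory size and KVM switch are illustrative only, not required by the block stack):

qemu-system-x86_64 -enable-kvm -m 4G                                    \
    -blockdev driver=file,node-name=img1,filename=/path/to/image.qcow2 \
    -blockdev driver=qcow2,file=img1,node-name=disk1                   \
    -device virtio-blk,drive=disk1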
Now that we have the machine, we need to enable the appropriate options in the guest kernel:
Device Drivers --->
  [*] Block devices --->                           CONFIG_BLK_DEV
    <*> Virtio block driver                        CONFIG_VIRTIO_BLK
  SCSI Device support --->
    [*] SCSI low-level drivers --->                CONFIG_SCSI_LOWLEVEL
      <*> virtio-scsi support                      CONFIG_SCSI_VIRTIO
  [*] Virtio drivers --->                          CONFIG_VIRTIO_MENU
    <*> PCI driver for virtio devices              CONFIG_VIRTIO_PCI
Note that you first need to select CONFIG_VIRTIO_MENU and CONFIG_VIRTIO_PCI so that the other options appear.
The last thing to remember is that virtio drives are named /dev/vdXn by the kernel, where X is the drive letter and n the partition number. Don't forget to update your /etc/fstab file and to change the linux command line in grub!
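For example, with a single virtio drive whose first partition holds the root filesystem (a hypothetical layout):

# /etc/fstab
/dev/vda1   /   ext4   defaults   1   1

# and in grub.cfg the kernel line becomes something like:
linux /boot/vmlinuz-<version> root=/dev/vda1 ro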
Virtio GPU
There are two main ways to obtain graphics acceleration in qemu:
- GPU pass-through: the guest accesses the hardware GPU directly through MMIO, which provides native performance. The drawback is that the GPU is reserved for the guest, so if there is only one GPU, the host display is lost. This configuration is therefore only usable on machines with two GPUs (either two video cards, or a video card and a GPU integrated in the CPU). It will not be described further here; there are many tutorials on the internet.
- Virtio GPU: the graphics primitives are converted to GL and sent directly to the GL engine on the host. This can be used inside a window on the host display.
The virtio GPU is only enabled in qemu if it is built against virglrenderer. Grab the most recent version, and build it with:
mkdir build &&
cd build &&
meson --prefix=/usr -Dvideo=true .. &&
ninja

# as root:
ninja install
The -Dvideo=true switch enables video acceleration. Then rebuild qemu; virgl should be enabled automatically.
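To check that the rebuilt qemu picked up virglrenderer, look for the GL-capable device (assuming an x86_64 target binary):

qemu-system-x86_64 -device help | grep virtio-vga-gl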
On the guest kernel, two switches need to be enabled:
Device Drivers --->
  Graphics support --->
    <*> Virtio GPU driver                          CONFIG_DRM_VIRTIO_GPU
  Character devices --->
    <*> Virtio console                             CONFIG_VIRTIO_CONSOLE
On the guest, mesa needs to be built with the virgl driver enabled. Furthermore, for mesa-22.2.0, a patch needs to be applied; it has been committed upstream and should not be needed in later versions.
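As a sketch of the mesa side (the exact driver list is an assumption; adjust -Dgallium-drivers to your needs):

meson setup build --prefix=/usr        \
      -Dgallium-drivers=virgl,swrast &&
ninja -C build &&
sudo ninja -C build install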
Last but not least, the command line options to pass to qemu are:
-device virtio-vga-gl \
-display gtk,gl=on
Note that the gtk window starts very small (600x400 or so), and does not grow automatically when the kernel enters graphics mode; it can be resized manually. If you prefer, you can use the sdl window instead (replace gtk with sdl, as shown below). It has fewer features, but seems less buggy.
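That is, the sdl variant of the options above:

-device virtio-vga-gl \
-display sdl,gl=on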