Using KVM with Qemu on ARM

This is part two of my blog post about the Kernel-based Virtual Machine (KVM) on the 32-bit ARM architecture. The post is meant as a starting point for those who want to play with KVM and provides a useful collection of Qemu commands for virtualization.

Virtualization host setup

The kernel configuration I used for my platform’s host kernel can be found here. Since I run my experiments on a Toradex Colibri iMX7D module, I started with the v4.1 configuration of the BSP kernel, updated it to v4.8, and enabled KVM as well as KSM (Kernel Samepage Merging).

As the root file system I use a slightly modified version of the Ångström distribution’s “development-image”, version 2015.12 (built from scratch with OpenEmbedded). Any recent ARM root file system should do. I preinstalled Qemu v2.6.0 (by simply adding “qemu” to the image and specifying ANGSTROM_QEMU_VERSION = "2.6.0" in conf/distro/angstrom-v2015.12.conf).
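
In OpenEmbedded terms, the two settings mentioned above boil down to the following configuration lines (the paths match my Ångström build; adjust them to your own distro configuration):

```
# conf/local.conf (or your image recipe): add Qemu to the host image
IMAGE_INSTALL_append = " qemu"

# conf/distro/angstrom-v2015.12.conf: pin the Qemu version
ANGSTROM_QEMU_VERSION = "2.6.0"
```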

Virtualization guest setup

For the virtualization guest setup I was looking for something minimalistic. I uploaded the compiled binaries of the kernel (as a tarred zImage) and the initramfs (as cpio.gz).

I built a custom kernel directly from the v4.7 sources using a modified, stripped-down version of vexpress_defconfig (virt_guest_defconfig). I found it useful to look into Qemu’s “virt” machine setup code (hw/arm/virt.c) to understand which peripherals are actually emulated (and hence which drivers are actually required).

As the root file system I was looking for something I could easily spawn multiple instances of, e.g. a squashfs or initramfs. I ended up building the Yocto Project’s “poky-tiny” distribution. I used the following local.conf configuration:

MACHINE ??= "qemuarm"
PREFERRED_PROVIDER_virtual/kernel="linux-yocto"
EXTRA_IMAGE_FEATURES ?= "debug-tweaks read-only-rootfs"

And I adjusted the machine configuration (meta/conf/machine/qemuarm.conf) slightly to suit my hardware virtualization needs:

+++ b/meta/conf/machine/qemuarm.conf
@@ -3,10 +3,11 @@
 #@DESCRIPTION: arm_versatile_926ejs
 
 require conf/machine/include/qemu.inc
-require conf/machine/include/tune-arm926ejs.inc
+require conf/machine/include/tune-cortexa7.inc
 #require conf/machine/include/tune-arm1136jf-s.inc
 
 KERNEL_IMAGETYPE = "zImage"
 
-SERIAL_CONSOLES = "115200;ttyAMA0 115200;ttyAMA1"
+SERIAL_CONSOLES = "115200;ttyAMA0"

The initramfs (cpio.gz) archive ended up just slightly above 700 kB. I also made a squashfs image of the same root file system.

Some useful Qemu/KVM commands

The following command starts a virtual machine and redirects stdin/stdout directly to its serial console (in this case an emulated PL011). If required, additional kernel parameters can be passed using the -append option. In my case, the required console specification (“console=ttyAMA0”) was already built into the kernel.

qemu-system-arm -enable-kvm -M virt -cpu host \
-kernel zImage -initrd core-image-minimal-qemuarm.cpio.gz \
-nographic -serial stdio -monitor none

Note that we don’t need to specify a device tree: Qemu’s “virt” machine creates one on the fly (implemented in hw/arm/virt.c). If you wonder what the device tree looks like, you can browse it under /proc/device-tree in your guest (or get the flattened device tree binary from /sys/firmware/fdt). A boot log of the guest can be found here.
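
To inspect that generated device tree in source form, you can decompile the flattened blob from inside the guest. This is a sketch assuming the dtc utility is installed in the guest image; the dump_fdt helper name is mine:

```shell
# Decompile the Qemu-generated flattened device tree to readable source.
# Requires the dtc utility (device-tree-compiler package) in the guest.
dump_fdt() {
    if command -v dtc >/dev/null 2>&1 && [ -r /sys/firmware/fdt ]; then
        # -I dtb: input is a flattened device tree blob
        # -O dts: output device tree source to stdout
        dtc -I dtb -O dts /sys/firmware/fdt
    else
        echo "dtc or /sys/firmware/fdt not available" >&2
        return 1
    fi
}
```

Calling dump_fdt in the guest prints the same nodes you can browse under /proc/device-tree, including the virtio-mmio transports discussed below.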

Besides ARM PrimeCell peripherals such as the PL011 (UART, ttyAMA0) or PL031 (RTC), Qemu also generates 32 MMIO-mapped VirtIO transports. Qemu assigns VirtIO-based peripherals to those transports dynamically. A device can be created using Qemu’s -device parameter. For instance, to create a VirtIO-based console:

qemu-system-arm -enable-kvm -M virt -cpu host \
-kernel zImage -initrd core-image-minimal-qemuarm.cpio.gz \
-nographic -monitor none -serial none \
-device virtio-serial-device -device virtconsole,chardev=char0 -chardev stdio,id=char0 \
-append "console=hvc0"

Note that we need to pass -serial none, otherwise Qemu would allocate a PL011-based UART and redirect it to stdio.

You need to make sure that a getty gets started on /dev/hvc0 to actually get a login prompt. With OpenEmbedded I got a login shell after extending SERIAL_CONSOLES (and fixing a bug in the inittab generation for BusyBox):

SERIAL_CONSOLES = "115200;ttyAMA0 115200;hvc0"
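
With BusyBox init, that setting should result in an /etc/inittab entry along these lines (a hypothetical entry for illustration; adjust the baud rate and getty path to your image):

```
# /etc/inittab: spawn a login getty on the VirtIO console
hvc0::respawn:/sbin/getty 115200 hvc0
```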

To use a VirtIO-based block device as the root file system, use the following command line:

qemu-system-arm -enable-kvm -M virt -cpu host \
-kernel zImage -nographic -serial stdio -monitor none \
-drive if=none,file=core-image-minimal-qemuarm.ext4,id=rfs -device virtio-blk-device,drive=rfs \
-append "root=/dev/vda"

Similarly, a VirtIO network device can be added using:

qemu-system-arm -enable-kvm -M virt -cpu host \
-kernel zImage -initrd core-image-minimal-qemuarm.cpio.gz \
-nographic -serial stdio -monitor none \
-netdev user,id=net0 -device virtio-net-device,netdev=net0

I found the following two commands useful for getting a list of VirtIO devices and the properties they support:

qemu-system-arm -M virt -device help 2>&1 | grep virtio
qemu-system-arm -M virt -device virtio-net-device,help

Run many machines using Qemu and KVM

How many virtual machines running Linux can my embedded device with just 512 MiB of RAM execute? A little shell script and serial consoles over TCP should answer the question:

#!/bin/bash
PORT=4500
while [[ $PORT -le 4530 ]]; do
        echo "Starting virtual machine on telnet port $PORT"
        # 24 MiB of RAM and one CPU per guest; serial console served over TCP
        qemu-system-arm -enable-kvm -M virt -cpu host \
          -kernel zImage -initrd core-image-minimal-qemuarm.cpio.gz \
          -nographic -monitor none -m 24 -smp 1 \
          -chardev socket,host=127.0.0.1,telnet,port=$PORT,server,nowait,id=char0 \
          -serial chardev:char0 -append "console=ttyAMA0,115200 quiet" &
        PORT=$((PORT+1))
        sleep 10
done

With that, I could use telnet 127.0.0.1 <port> to connect to the individual virtual machines. All machines were really responsive, and CPU usage was not that high. But after 15 machines, Qemu failed to allocate enough memory for further virtual machines.
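
A rough back-of-the-envelope budget makes the limit plausible: each guest gets 24 MiB of RAM (-m 24), but every Qemu process also carries its own overhead. The overhead and host-reservation numbers below are assumptions for illustration, not measurements:

```shell
# Rough VM-count budget for a 512 MiB host running 24 MiB guests.
GUEST_RAM=24       # MiB per guest, matches -m 24 above
QEMU_OVERHEAD=8    # MiB per Qemu process; assumed, varies with version/devices
HOST_RAM=512       # MiB total on the Colibri iMX7D
HOST_RESERVED=128  # MiB assumed for the host kernel and userspace

VMS=$(( (HOST_RAM - HOST_RESERVED) / (GUEST_RAM + QEMU_OVERHEAD) ))
echo "rough VM budget: $VMS"
```

With these assumed numbers the budget lands in the same ballpark as the 15 machines observed; without page sharing, every guest pays its full footprint.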

In a second try I enabled KSM (Kernel Samepage Merging), which allows the host to share pages with identical content across different user space processes. This should help quite a bit, since we have an unpacked kernel image for each virtual machine in memory…

The feature needs to be enabled using the following command:

echo 1 > /sys/kernel/mm/ksm/run
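
KSM’s progress can be monitored through the same sysfs directory. The ksm_stats helper name is mine; the attribute files are the standard KSM sysfs interface:

```shell
# Print KSM page-sharing statistics (counts are in pages, typically 4 KiB).
ksm_stats() {
    for f in pages_shared pages_sharing pages_unshared full_scans; do
        [ -r "/sys/kernel/mm/ksm/$f" ] && \
            printf '%s: %s\n' "$f" "$(cat /sys/kernel/mm/ksm/$f)"
    done
    return 0
}
ksm_stats
```

A high pages_sharing-to-pages_shared ratio means many duplicates collapse onto few pages, i.e. KSM is earning its scan overhead.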

With that I reached 27 virtual machines! Not too bad…

A more lightweight alternative: kvmtool

There is a more lightweight alternative to Qemu for running KVM guests: kvmtool. I will explore this option in another blog post.

  1. I have a question: if you use the “virt” machine but boot the guest with qemu-uefi, won’t qemu-uefi create a device tree for the guest based on the host device tree? Does this mean the guest will use devices based on the UEFI-created device tree (like the GIC) instead of the emulated GIC?

  2. Not sure, what is qemu-uefi? Is that a fork of Qemu?

    I doubt that it would create a dynamic device tree based on the hardware; regular Qemu is not doing that either, as far as I can tell.

  3. Hi, can you give a bit more detail on how to preinstall Qemu as part of the host image? I am using Yocto to build my Linux kernel for a Cortex-A7.

  4. IMAGE_INSTALL_append = " qemu" should add Qemu to your host image.
