QEMU#

Configure and run virtual machines on a server via QEMU, using a declarative YAML configuration

Running#

See also

  • A collection of scripts I have written and/or adapted that I currently use on my systems as automated tasks [1]

Basic steps#

  1. install these packages

    apt-get install qemu-system-x86 openssh-client python3-yaml
    
  2. install fpyutils. See reference

  3. create a new user

    useradd --system -s /bin/bash -U qvm
    passwd qvm
    usermod -aG jobs qvm
    
  4. create the jobs directories. See reference

    mkdir -p /home/jobs/{scripts,services}/by-user/qvm
    
  5. create the qvm script

    /home/jobs/scripts/by-user/qvm/qvm.py#
    #!/usr/bin/env python3
    #
    # qvm.py
    #
    # Copyright (C) 2021,2023 Franco Masotti (franco \D\o\T masotti {-A-T-} tutanota \D\o\T com)
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with this program.  If not, see <http://www.gnu.org/licenses/>.
    #
    #
    # Original license header:
    #
    # qvm - Trivial management of 64 bit virtual machines with qemu.
    #
    # Written in 2016 by Franco Masotti (franco \D\o\T masotti {-A-T-} tutanota \D\o\T com)
    #
    # To the extent possible under law, the author(s) have dedicated all
    # copyright and related and neighboring rights to this software to the public
    # domain worldwide. This software is distributed without any warranty.
    #
    # You should have received a copy of the CC0 Public Domain Dedication along
    # with this software. If not, see
    # <http://creativecommons.org/publicdomain/zero/1.0/>.
    r"""Run virtual machines."""
    
    import shlex
    import sys
    
    import fpyutils
    import yaml
    
    
    def build_remote_command(prf: dict) -> str:
        r"""Build the SSH command used to connect to the virtual machine."""
        if prf['system']['display']['enabled']:
            # See https://unix.stackexchange.com/a/83812
            # See also the 'TCP FORWARDING' section in man 1 ssh.
            ssh = '-f -p ' + prf['system']['network']['ports']['host'][
                'ssh'] + ' -L ' + prf['system']['network']['ports']['local'][
                    'vnc'] + ':127.0.0.1:' + prf['system']['network']['ports'][
                        'host']['vnc'] + ' -l ' + prf['system']['users'][
                            'host'] + ' ' + prf['system']['network']['addresses'][
                                'host']
            ssh += ' sleep 10; vncviewer 127.0.0.1::' + prf['system']['network'][
                'ports']['local']['vnc']
        else:
            ssh = '-p ' + prf['system']['network']['ports']['guest'][
                'ssh'] + ' -l ' + prf['system']['users']['guest'] + ' ' + prf[
                    'system']['network']['addresses']['host']
    
        return (prf['executables']['ssh'] + ' ' + ssh)
    
    
    def build_local_command(prf: dict) -> str:
        r"""Build the QEMU command line from a configuration profile."""
        head = ''
    
        # Generic options.
        generic_options: str = '-enable-kvm'
    
        # Memory.
        memory = '-m ' + prf['system']['memory']
    
        # CPU.
        cpu = '-smp ' + prf['system']['cpu']['cores'] + ' -cpu ' + prf['system'][
            'cpu']['type']
    
        # Display.
        if prf['system']['display']['enabled']:
            if prf['system']['display']['vnc']['enabled']:
                display_number = int(
                    prf['system']['display']['vnc']['port']) - 5900
                display = '-display none -monitor pty -vnc 127.0.0.1:' + str(
                    display_number)
            else:
                display = '-display gtk'
        else:
            display = '-display none'
    
        # Audio.
        if prf['system']['audio']['enabled']:
            audio = '-device ' + prf['system']['audio']['device']
            head += 'export QEMU_AUDIO_DRV=alsa;'
        else:
            audio = ''
    
        # Network.
        user_net: str = ''
        for i in prf['system']['network']['ports']:
            if i['type'] == 'user':
                # User Networking (SLIRP).
                if i['enabled']:
                    user_net += ',hostfwd=tcp::' + i['host'] + '-:' + i['guest']
            else:
                # TAP, VDE, socket.
                print('error: not implemented. setting empty network')
    
        user_net = '-netdev user,id=user.0' + user_net + ' -device virtio-net-pci,netdev=user.0'
    
        # nw = user_nw + ' ' + tap_nw + ' ' + vde_nw + ' ' + socker_nw
        net: str = user_net
    
        # CD-ROM.
        if prf['system']['cdrom']['enabled']:
            cdrom = '-cdrom ' + prf['system']['cdrom']['device'] + ' -boot order=d'
        else:
            cdrom = ''
    
        # Mounts.
        mnt = ''
        for i in prf['system']['mount']:
            if i['enabled']:
                if i['type'] == 'virtfs':
                    mnt += ' -virtfs local,path=' + i[
                        'path'] + ',security_model=passthrough,mount_tag=' + i[
                            'mount_tag']
                elif i['type'] == 'virtiofs':
                    mnt += (
                        ' -object memory-backend-file,id=mem,size=' +
                        prf['system']['memory'] + ',mem-path=/dev/shm,share=on' +
                        ' -numa node,memdev=mem' +
                        ' -chardev socket,id=char0,path=' + i['path'] +
                        ' -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag='
                        + i['mount_tag'])
                else:
                    print('error: not implemented. cannot set mount')
    
        # Mass memory.
        hdd = ''
        for drive in prf['system']['drives']:
            hdd += ' -drive file=' + drive
    
        return (head + ' ' + prf['executables']['qemu'] + ' ' +
                prf['options']['start'] + ' ' + generic_options + ' ' + memory +
                ' ' + cpu + ' ' + display + ' ' + net + ' ' + cdrom + ' ' + audio +
                ' ' + mnt + ' ' + prf['options']['end'] + ' ' + hdd)
    
    
    if __name__ == '__main__':
        configuration_file = shlex.quote(sys.argv[1])
        with open(configuration_file) as f:
            config = yaml.safe_load(f)
        vm_type = shlex.quote(sys.argv[2])
        profile = shlex.quote(sys.argv[3])

        prf = config[vm_type][profile]
        if prf['enabled']:
            if vm_type == 'local':
                command = build_local_command(prf)
            elif vm_type == 'remote':
                command = build_remote_command(prf)
            else:
                sys.exit('error: unknown type "' + vm_type + '"')

            fpyutils.shell.execute_command_live_output(command, dry_run=False)
    
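    The core of the network handling in build_local_command can be shown in isolation. The following sketch (with a hypothetical port list mirroring the structure of qvm.yaml, and the two nested checks collapsed into one condition) reproduces the hostfwd loop: every enabled port of type user becomes a hostfwd rule on the SLIRP backend

```python
# Hypothetical port list mirroring the structure of qvm.yaml.
ports = [
    {'name': 'ssh', 'type': 'user', 'enabled': True,
     'host': '2222', 'guest': '22'},
    {'name': 'other_0', 'type': 'user', 'enabled': False,
     'host': '5555', 'guest': '3050'},
]

# Same loop as in build_local_command: each enabled 'user' port
# is appended as a hostfwd rule.
user_net = ''
for i in ports:
    if i['type'] == 'user' and i['enabled']:
        user_net += ',hostfwd=tcp::' + i['host'] + '-:' + i['guest']

user_net = ('-netdev user,id=user.0' + user_net +
            ' -device virtio-net-pci,netdev=user.0')
print(user_net)
# -netdev user,id=user.0,hostfwd=tcp::2222-:22 -device virtio-net-pci,netdev=user.0
```
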
  6. create the configuration for qvm

    /home/jobs/scripts/by-user/qvm/qvm.yaml#
    #
    # qvm.yaml
    #
    # Copyright (C) 2021-2023 Franco Masotti (franco \D\o\T masotti {-A-T-} tutanota \D\o\T com)
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with this program.  If not, see <http://www.gnu.org/licenses/>.
    #
    #
    # Original license header:
    #
    # qvm - Trivial management of 64 bit virtual machines with qemu.
    #
    # Written in 2016 by Franco Masotti (franco \D\o\T masotti {-A-T-} tutanota \D\o\T com)
    #
    # To the extent possible under law, the author(s) have dedicated all
    # copyright and related and neighboring rights to this software to the public
    # domain worldwide. This software is distributed without any warranty.
    #
    # You should have received a copy of the CC0 Public Domain Dedication along
    # with this software. If not, see
    # <http://creativecommons.org/publicdomain/zero/1.0/>.
    
    local:
        test:
            enabled: true
    
            executables:
                qemu: '/usr/bin/qemu-system-x86_64'
    
            options:
                start: ''
                end: ''
    
            system:
                memory: '12G'
                cpu:
                    cores: '6'
    
                    # See
                    # $ qemu -cpu help
                    type: 'host'
    
                # Mass memory. Use device name with options.
                drives:
                    # IMPORTANT: enable the following during the setup.
                    # - '/home/user/qvm/development.qcow2'
    
                    # Enable the following after the setup.
                    - '/home/user/qvm/development.qcow2.mod'
    
                    # Mass memory. Use device name with options.
                    # - '/dev/sdx,format=raw'
                    # - '/dev/sdy,format=raw'
    
                # Enable this for maintenance or installation.
                cdrom:
                    # IMPORTANT: set the following to true during the setup.
                    # enabled: true
    
                    # Set the following to false after the setup.
                    enabled: false
    
                    # Device or file name.
                    device: debian.iso
    
                # Shared data using virtfs or virtiofs.
                mount:
                    # Enable this if you need to mount a directory using 9p.
                    - name:         'shared'
                      enabled:      false
                      type:         'virtfs'
                      path:         '/home/qvm/shares/test'
                      mount_tag:    'shared'
                    # This type of mounting requires more configuration
                    - name:         'shared_performance'
                      enabled:      false
                      type:         'virtiofs'
                      path:         '/var/run/qemu-vm-001.sock'
                      mount_tag:    'shared'
    
                # If enabled is false never show a display.
                display:
                    enabled: true
                    vnc:
                        enabled: true
                        port: 5900
    
                audio:
                    enabled: true
                    device: 'AC97'
    
                network:
                    # TCP ports only.
                    ports:
                        - name:      'ssh'
                          type:      'user'
                          enabled:    true
                          host:      '2222'
                          guest:     '22'
                        - name:      'other_0'
                          type:      'user'
                          enabled:    true
                          host:      '5555'
                          guest:     '3050'
                        - name:      'other_1'
                          type:      'user'
                          enabled:    true
                          host:      '5556'
                          guest:     '3051'
    
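    Two details of this file are worth spelling out: qvm.py reaches a profile with config[type][profile] (here local and test), and build_local_command turns the VNC port into a QEMU display number by subtracting 5900. A minimal sketch with an inline, abridged copy of the relevant keys:

```python
# Inline copy of the relevant qvm.yaml keys (abridged).
config = {
    'local': {
        'test': {
            'enabled': True,
            'system': {
                'display': {'enabled': True,
                            'vnc': {'enabled': True, 'port': 5900}},
            },
        },
    },
}

# Profile lookup, as done in qvm.py's main block.
prf = config['local']['test']

# QEMU display numbers are offset from TCP port 5900,
# so port 5900 is display 0.
display_number = int(prf['system']['display']['vnc']['port']) - 5900
display = '-display none -monitor pty -vnc 127.0.0.1:' + str(display_number)
print(display)  # -display none -monitor pty -vnc 127.0.0.1:0
```
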
  7. create the Systemd service unit file for qvm

    /home/jobs/services/by-user/qvm/qvm.local_test.service#
    [Unit]
    Description=Run qvm local test
    Requires=network-online.target
    # Requires=qvm-virtiofs.local_test-qemu-vm-001.service
    After=network-online.target
    # After=qvm-virtiofs.local_test-qemu-vm-001.service
    
    [Service]
    Type=simple
    # ExecStartPre=/usr/bin/sleep 4
    ExecStart=/home/jobs/scripts/by-user/qvm/qvm.py /home/jobs/scripts/by-user/qvm/qvm.yaml local test
    ExecStop=/usr/bin/bash -c '/usr/bin/ssh -p 2222 powermanager@127.0.0.1 sudo poweroff; sleep 30'
    User=qvm
    Group=qvm
    
    [Install]
    WantedBy=multi-user.target
    
  8. fix the permissions

    chmod 700 -R /home/jobs/{scripts,services}/by-user/qvm
    chown -R qvm:qvm /home/jobs/{scripts,services}/by-user/qvm
    
  9. create a new virtual hard disk

    sudo -i -u qvm
    qemu-img create -f qcow2 development.qcow2 64G
    
  10. modify the configuration to point to development.qcow2, the virtual hard disk. See the commented local.test.system.drives[0] key

    Also set the device and enabled values in local.test.system.cdrom

  11. run the installation

    ./qvm.py ./qvm.yaml local test
    
  12. once the installation is finished power down the machine

  13. create a backup virtual hard disk

    qemu-img create -f qcow2 -b development.qcow2 development.qcow2.mod
    
  14. set local.test.system.cdrom.enabled to false. Set local.test.system.drives[0] to development.qcow2.mod

  15. run the virtual machine

    ./qvm.py ./qvm.yaml local test
    
  16. quit the virtual machine and go back to the root user

    exit
    
  17. run the deploy script

  18. if you are using iptables rules on the host machine modify the rules to let data through the shared ports

  19. continue with the client configuration

Automatic shutdown#

See also

  • SSH config host match port [2]

To shut the virtual machine down automatically when the Systemd service is stopped follow these instructions

  1. connect to the guest machine

  2. create a new user

    sudo -i
    useradd -m -s /bin/bash -U powermanager
    passwd powermanager
    
  3. change the sudoers file

    visudo
    

    Add this line

    powermanager ALL=(ALL) NOPASSWD:/sbin/poweroff
    
  4. go back to the host machine and create an SSH key so that the qvm host user can connect to the powermanager guest user. Do not encrypt the key with a passphrase

    sudo -i -u qvm
    ssh-keygen -t rsa -b 16384 -C "qvm@host-2022-01-01"
    

    Save the keys as ~/.ssh/powermanager_test.

    Have a look at the ExecStop command in the Systemd service unit file

  5. in the host machine, configure the SSH config file like this

    /home/qvm/.ssh/config#
    Match host 127.0.0.1 user powermanager exec "test %p = 2222"
        IdentitiesOnly yes
        IdentityFile ~/.ssh/powermanager_test
    
  6. copy the content of /home/qvm/.ssh/powermanager_test.pub into /home/powermanager/.ssh/authorized_keys of the guest machine

Using physical partitions#

Instead of using QCOW2 disk files you can use existing physical partitions and filesystems.

Warning

Remember NOT to mount the partitions on the host while the virtual machine is running: doing so causes data loss.

  1. uncomment and edit the highlighted lines and comment the original drive line

    /home/jobs/scripts/by-user/qvm/qvm.yaml#
                # Mass memory. Use device name with options.
                drives:
                    # IMPORTANT: enable the following during the setup.
                    # - '/home/user/qvm/development.qcow2'

                    # Enable the following after the setup.
                    # - '/home/user/qvm/development.qcow2.mod'

                    # Mass memory. Use device name with options.
                    - '/dev/sdx,format=raw'
    
  2. run the deploy script

Share a directory#

9p filesystem#

If you need to share a directory you can use a 9p filesystem if the guest kernel supports it. This is the simplest way to share a directory and it does not require the root user. The downside is low performance.

  1. connect to the guest machine and run the following

    sudo -i
    modprobe 9pnet_virtio
    

    If your kernel does not support 9p you might get a message like the following. This is the case, for example, with Debian's linux-image-cloud-amd64 kernel

    modprobe: FATAL: Module 9pnet_virtio not found in directory /lib/modules/5.10.0-9-cloud-amd64
    

    If that is the case you need to find a different method such as SSHFS

  2. create the mount directory

    exit
    mkdir -p /home/qvm/shares/test
    
  3. add a shared directory in the configuration

    /home/jobs/scripts/by-user/qvm/qvm.yaml#
                mount:
                    # Enable this if you need to mount a directory using 9p.
                    - name:         'shared'
                      enabled:      false
                      type:         'virtfs'
                      path:         '/home/qvm/shares/test'
                      mount_tag:    'shared'
                    # This type of mounting requires more configuration
    
  4. connect to the guest machine and add an fstab entry. In this example the directory is mounted in /home/vm/shared

    /etc/fstab#
    shared    /home/vm/shared   9p      auto,access=any,x-systemd.automount,msize=268435456,trans=virtio,version=9p2000.L       0        0
    
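    The msize mount option in this entry is the maximum 9p message size in bytes; the value used above is 256 MiB:

```python
# msize in the fstab line above, expressed in bytes: 256 MiB.
msize = 256 * 1024 * 1024
print(msize)  # 268435456
```
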
  5. create the shared directory

    mkdir /home/vm/shared
    
  6. quit the virtual machine

  7. restart the virtual machine

    systemctl restart qvm.local_test.service
    

virtiofs#

As an alternative to the 9p filesystem you can use virtiofs to get better performance.

See also

  • virtiofs - shared file system for virtual machines / Standalone usage [7]

  • QEMU/KVM + virtio-fs - Sharing a host directory with a virtual machine - TauCeti blog [8]

  • QEMU - ArchWiki - Using filesystem passthrough and VirtFS [9]

  1. add a shared directory in the configuration

    /home/jobs/scripts/by-user/qvm/qvm.yaml#
                      mount_tag:    'shared'
                    # This type of mounting requires more configuration
                    - name:         'shared_performance'
                      enabled:      false
                      type:         'virtiofs'
                      path:         '/var/run/qemu-vm-001.sock'
                      mount_tag:    'shared'
    
                # If enabled is false never show a display.
    

    With virtiofs you need to pass the socket path, not the actual path of the shared directory on the host. See below

  2. create this script

    /home/jobs/scripts/by-user/root/qvm_virtiofs.sh#
    #!/usr/bin/env bash
    
    set -euo pipefail
    
    SOCKET_PATH="${1}"
    SOURCE_DIR="${2}"
    PID_FILE="${3}"
    
    /usr/lib/qemu/virtiofsd --socket-path="${SOCKET_PATH}" -o source="${SOURCE_DIR}" -o cache=always &
    pid=${!}
    echo "virtiofsd pid: ${pid}"
    sleep 1
    chgrp --verbose kvm "${SOCKET_PATH}"
    chmod --verbose g+rxw "${SOCKET_PATH}"
    echo ${pid} > "${PID_FILE}"
    wait ${pid}
    
  3. create the Systemd service unit file for the qvm virtiofs script

    /home/jobs/services/by-user/root/qvm-virtiofs.local_test-qemu-vm-001.service#
    [Unit]
    Description=Run qvm-virtiofs for local test shared_performance
    Requires=qvm.mount
    After=qvm.mount
    
    [Service]
    Type=simple
    ExecStart=/home/jobs/scripts/by-user/root/qvm_virtiofs.sh /var/run/qemu-vm-001.sock /tmp/vm-001 /run/qemu-vm-001.pid
    User=root
    Group=root
    

    The path of the shared directory on the host filesystem is /tmp/vm-001

  4. modify the Systemd service unit file for qvm by uncommenting the highlighted lines

    /home/jobs/services/by-user/qvm/qvm.local_test.service#
    [Unit]
    Description=Run qvm local test
    Requires=network-online.target
    Requires=qvm-virtiofs.local_test-qemu-vm-001.service
    After=network-online.target
    After=qvm-virtiofs.local_test-qemu-vm-001.service
    
    [Service]
    Type=simple
    # ExecStartPre=/usr/bin/sleep 4
    ExecStart=/home/jobs/scripts/by-user/qvm/qvm.py /home/jobs/scripts/by-user/qvm/qvm.yaml local test
    ExecStop=/usr/bin/bash -c '/usr/bin/ssh -p 2222 powermanager@127.0.0.1 sudo poweroff; sleep 30'
    User=qvm
    Group=qvm
    
    [Install]
    WantedBy=multi-user.target
    
  5. connect to the guest machine and add an fstab entry. In this example the directory is mounted in /mnt

    /etc/fstab#
    shared     /mnt   virtiofs    auto,rw,_netdev     0       0
    
  6. quit the virtual machine

  7. restart the virtual machine

    systemctl restart qvm.local_test.service
    

Important

I noticed a very high memory usage with the provided options (~= 13 GB resident memory for ~= 40 GB of ~= 46000 files). Try setting, for example, -o cache=none instead of -o cache=always in the qvm_virtiofs.sh script.

Resize disks#

See also

  • How to Expand QCOW2 [3]

  • Provider::VMonG5K — EnOSlib 8.0.0a12 documentation [4]

  1. stop the virtual machine

  2. install this package

    apt-get install libguestfs-tools
    

    Have a look at this bug report if you have problems installing

  3. resize the disk, for example by adding 40G, where ${virtual_hard_disk} is the file created with a backing file. I always give that file a name ending in .mod

    qemu-img resize ${virtual_hard_disk} +40G
    
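    qemu-img accepts binary suffixes, so +40G adds 40 GiB to the virtual size; for the 64G disk created earlier the new virtual size would be 104 GiB:

```python
GiB = 1024 ** 3

old_size = 64 * GiB             # disk created with: qemu-img create ... 64G
new_size = old_size + 40 * GiB  # after: qemu-img resize ... +40G
print(new_size // GiB)  # 104
```
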
  4. make a backup

    cp -aR ${virtual_hard_disk} ${virtual_hard_disk}.bak
    
  5. get the partition name you want to expand. Usually partition names start from /dev/sda1 (note: these partition names are not the same as the host system ones!)

    virt-filesystems --long --human-readable --all --add ${virtual_hard_disk}
    
  6. execute the actual resize operation on the virtual partition and filesystem. You can use sda1 for example as partition_name

    virt-resize --expand ${partition_name} ${virtual_hard_disk}.bak ${virtual_hard_disk}
    
  7. start the virtual machine

  8. if everything works remove the ${virtual_hard_disk}.bak file

Rename disk files#

See also

  • Qemu-img Cheatsheet | Programster’s Blog [5]

Rename backing file#

TODO

FSCK#

You must run FSCK while the virtual machine is off if you want to fix the root partition.

See also

  • kvm virtualization - How to run fsck on guest VMs from KVM - Server Fault [6]

  1. identify the broken filesystem by running the virtual machine in “display” mode rather than via SSH: if the broken partition is the root one the virtual machine might not get as far as starting SSHD

  2. stop the virtual machine

  3. load the virtual hard disk

    guestfish -a ${virtual_hard_disk}
    run
    list-filesystems
    

    You get a list of partitions with the last command, for example /dev/sda1.

  4. you can run various fsck commands, such as

    e2fsck-f ${partition_name}
    e2fsck ${partition_name} forceall:true
    e2fsck ${partition_name} correct:true
    fsck ext4 ${partition_name}
    
  5. quit the program and start the virtual machine

    exit
    

Footnotes