LXD 4.0 quick recipe: LXC and KVM coexisting

A quick guide showing how to set up LXD to manage KVM guests and LXC containers

Before I begin…

I’m NOT an engineer from the LXD/LXC development team. I am an Ubuntu Core Developer, mainly focused on packages managed by the Ubuntu/Canonical Server Team. I’m stating this to give myself poetic license not to talk about LXD as a product, its roadmap or beta features, but as a tool. A tool that surprises me with each release and is, without a doubt, one of the best, if not THE best, projects Canonical has done so far.

Why I find LXD so Amazing

Over the years I have found myself writing numerous shell scripts to manage containers and virtual machines on the ZFS filesystem - back when it wasn’t yet used by Ubuntu - trying to get quick clones for development and/or laboratory purposes. Some leftovers of those scripts, which were constantly changing, can be found HERE just for curiosity (please don’t judge too much; most of them were created for a very specific need and I had to rush to get things done).

Lately I have been using a combination of QEMU image creation + cloud-init templates AND LXD with similar cloud-init templates for the containers, as you can see HERE. I had even created a KVM-specific set of scripts to take advantage of cloud-init while still keeping control over what I was doing with the images.

Well, after LXD was released all my LXC stuff was gone right away. There was no longer a need for me to worry about managing ZFS filesystems and snapshots, cloning the existing containers into new ones, managing the templates and clones, etc. (some may say the LXC zfs backend does that, but that wasn’t true when I first started with LXC). Magically I was able to download images and have them managed by this “daemon” that would create new containers and clone them for me. I wasn’t much into all the other features of LXD yet, but that, per se, was already awesome.

As time passed, I started creating my entire development environment - to manage Ubuntu source packages, debugging, development, sponsoring, merges, reviews - with LXD, and I even have a directory that I share between my host machine and ALL my containers: the WORK directory. With my work directory shared with all LXD containers - by using templates - I can have Debian SID, Ubuntu LTS and Ubuntu Devel, all together, in parallel, accessing the same working directory, and do all the development work I need.

I’m such a fan of LXD that my LXD profiles create a perfect development environment in all the images I use to create my containers. I can run Eclipse with all -dbgsym packages installed to analyse a core dump inside an Ubuntu Xenial container while doing the same thing for an Ubuntu Focal one, for example. At the same time I can quickly create a Focal container to review a Ruby merge being done by a friend of mine and give his merge a +1.

I hope you enjoy this discourse topic.

Creating the environment I had so far

Install LXD using snap:

rafaeldtinoco@lxdexample:~$ sudo snap install lxd
lxd 4.0.0 from Canonical✓ installed

Do the initial LXD configuration. In this case I’m using ZFS on a loopback file but, for performance reasons, you might want to use a block device as the ZFS backend for the storage pool you are creating. Remember, the idea here is “to enable” one to use LXD, not to teach best practices or anything (poetic license, remember?).

rafaeldtinoco@lxdexample:~$ sudo /snap/bin/lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=15GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
Would you like LXD to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]:
Trust password for new clients:
Again:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
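
As an aside, those same answers can be replayed non-interactively with a preseed. Here is a minimal sketch of that, under the same loop-backed ZFS assumptions as above (the trust password is just a placeholder):

rafaeldtinoco@lxdexample:~$ sudo /snap/bin/lxd init --preseed <<EOF
config:
  core.https_address: '[::]:8443'
  core.trust_password: please-change-me
storage_pools:
- name: default
  driver: zfs
  config:
    size: 15GB
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
EOF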

Let’s add my user to the lxd group, check that AppArmor is loaded, and see if I can list the (not yet existing) containers:

rafaeldtinoco@lxdexample:~$ sudo usermod -a -G lxd rafaeldtinoco

rafaeldtinoco@lxdexample:~$ sudo aa-status
apparmor module is loaded.
28 profiles are loaded.
28 profiles are in enforce mode.
   /snap/core/8689/usr/lib/snapd/snap-confine
   /snap/core/8689/usr/lib/snapd/snap-confine/mount-namespace-capture-helper
...

rafaeldtinoco@lxdexample:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

And install the zfsutils-linux package just so we can see the ZFS volumes being created:

rafaeldtinoco@lxdexample:~$ sudo apt-get install zfsutils-linux

rafaeldtinoco@lxdexample:~$ sudo zfs list
NAME                               USED  AVAIL     REFER  MOUNTPOINT
default                            490K  13.1G       24K  none
default/containers                  24K  13.1G       24K  none
default/custom                      24K  13.1G       24K  none
default/deleted                    120K  13.1G       24K  none
default/deleted/containers          24K  13.1G       24K  none
default/deleted/custom              24K  13.1G       24K  none
default/deleted/images              24K  13.1G       24K  none
default/deleted/virtual-machines    24K  13.1G       24K  none
default/images                      24K  13.1G       24K  none
default/virtual-machines            24K  13.1G       24K  none

Important Note: I don’t want to go into details here BUT do keep in mind that LXD runs in a different namespace than your current environment… especially the mount namespace. It is possible that all ZFS filesystems created by LXD are seen by the “zfs” userland tool, BUT it may also be that you won’t be able to see the filesystems of stopped containers, or other ZFS-related things LXD might do under the covers.
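
If the zfs tool does not show what you expect, you can always ask LXD itself about its storage pools and volumes, for example:

rafaeldtinoco@lxdexample:~$ lxc storage list
rafaeldtinoco@lxdexample:~$ lxc storage info default
rafaeldtinoco@lxdexample:~$ lxc storage volume list default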

And test if I can reach the image repository:

rafaeldtinoco@lxdexample:~$ lxc image list images:ubuntu/focal amd64 -c lfpasut
+-----------------------------+--------------+--------+--------------+----------+------------------------------+-----------------+
|            ALIAS            | FINGERPRINT  | PUBLIC | ARCHITECTURE |   SIZE   |         UPLOAD DATE          |      TYPE       |
+-----------------------------+--------------+--------+--------------+----------+------------------------------+-----------------+
| ubuntu/focal (7 more)       | 47e9e45537dd | yes    | x86_64       | 97.10MB  | Apr 2, 2020 at 12:00am (UTC) | CONTAINER       |
+-----------------------------+--------------+--------+--------------+----------+------------------------------+-----------------+
| ubuntu/focal (7 more)       | 645fa1179e2c | yes    | x86_64       | 231.50MB | Apr 2, 2020 at 12:00am (UTC) | VIRTUAL-MACHINE |
+-----------------------------+--------------+--------+--------------+----------+------------------------------+-----------------+
| ubuntu/focal/cloud (3 more) | 2bb112ae3d8b | yes    | x86_64       | 111.79MB | Apr 2, 2020 at 12:00am (UTC) | CONTAINER       |
+-----------------------------+--------------+--------+--------------+----------+------------------------------+-----------------+
| ubuntu/focal/cloud (3 more) | a53cab9c344b | yes    | x86_64       | 254.63MB | Apr 2, 2020 at 12:00am (UTC) | VIRTUAL-MACHINE |
+-----------------------------+--------------+--------+--------------+----------+------------------------------+-----------------+

Preparing LXD for MY stuff

Okay, so we are now able to create containers based on images. LXD container images are obtained from HERE, but the LXD daemon does that for you: every LXD daemon can also be an image server (lxc remote list ← give it a try), but I won’t go there.

The idea for this section is simple: all LXD container images are ready for cloud-init, and I explore that to create “templates” for the containers I’m about to create. I usually do pacemaker/corosync development, so I have a template called “cluster”. I also do virtualization work, and reviews, for a friend of mine… so I have a template called “qemu”.

Why is that? Why have 2 templates instead of a generic one? That is simple!

That’s because in the cluster template I have MULTIPLE networks defined: internal01, internal02, public01, public02, iscsi01, iscsi02. This way I can test multipath access to disks coming from different iSCSI paths, I can have 2 dedicated networks for the cluster interconnects… and 2 public network interfaces to simulate virtual IPs floating between interfaces, etc.

Now, in the qemu template I have my host’s directories /etc/libvirt/qemu and /var/lib/libvirt/images shared with every container using that template. This way I can test MULTIPLE QEMU versions with the same images I have in my host environment. So I can test the different QEMUs of Ubuntu Bionic, Eoan and Focal, for example… with 3 different containers all sharing the same images. A sketch of how such a profile could be created follows.
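
Something along these lines would do it (the profile and device names here are just illustrative):

rafaeldtinoco@lxdexample:~$ lxc profile create qemu
rafaeldtinoco@lxdexample:~$ lxc profile device add qemu qemuconf disk source=/etc/libvirt/qemu path=/etc/libvirt/qemu
rafaeldtinoco@lxdexample:~$ lxc profile device add qemu qemuimages disk source=/var/lib/libvirt/images path=/var/lib/libvirt/images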

Cutting the crap, let’s do it. I’m gonna create just 1 network and 1 profile as examples:

rafaeldtinoco@lxdexample:~$ lxc network create lxdbr0
Network lxdbr0 created

rafaeldtinoco@lxdexample:~$ lxc network set lxdbr0 <tab><tab>
bridge.driver               fan.overlay_subnet          ipv4.dhcp.ranges            ipv6.address                ipv6.nat.address
bridge.external_interfaces  fan.type                    ipv4.firewall               ipv6.dhcp                   ipv6.nat.order
bridge.hwaddr               fan.underlay_subnet         ipv4.nat                    ipv6.dhcp.expiry            ipv6.routes
bridge.mode                 ipv4.address                ipv4.nat.address            ipv6.dhcp.ranges            ipv6.routing
bridge.mtu                  ipv4.dhcp                   ipv4.nat.order              ipv6.dhcp.stateful          maas.subnet.ipv4
dns.domain                  ipv4.dhcp.expiry            ipv4.routes                 ipv6.firewall               maas.subnet.ipv6
dns.mode                    ipv4.dhcp.gateway           ipv4.routing                ipv6.nat                    raw.dnsmasq

rafaeldtinoco@lxdexample:~$ lxc network set lxdbr0 ipv4.address 172.16.0.1/24
rafaeldtinoco@lxdexample:~$ lxc network set lxdbr0 ipv4.nat="true"
rafaeldtinoco@lxdexample:~$ lxc network set lxdbr0 ipv4.dhcp="true"
rafaeldtinoco@lxdexample:~$ lxc network set lxdbr0 ipv4.dhcp.ranges "172.16.0.100-172.16.0.254"
rafaeldtinoco@lxdexample:~$ lxc network set lxdbr0 ipv6.address=""
rafaeldtinoco@lxdexample:~$ lxc network set lxdbr0 ipv6.nat="false"
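
As an aside, all of the above could likely be done in one shot by passing the keys directly to the create command, something like:

rafaeldtinoco@lxdexample:~$ lxc network create lxdbr0 ipv4.address=172.16.0.1/24 ipv4.nat=true ipv4.dhcp=true ipv4.dhcp.ranges=172.16.0.100-172.16.0.254 ipv6.address=none ipv6.nat=false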

rafaeldtinoco@lxdexample:~$ lxc network show lxdbr0
config:
  ipv4.address: 172.16.0.1/24
  ipv4.dhcp: "true"
  ipv4.dhcp.ranges: 172.16.0.100-172.16.0.254
  ipv4.nat: "true"
  ipv6.nat: "false"
description: ""
name: lxdbr0
type: bridge
used_by: []
managed: true
status: Created
locations:
- none

With the network lxdbr0 created, let’s now create our default profile:

rafaeldtinoco@lxdexample:~$ cat default.yaml | pastebinit
https://paste.ubuntu.com/p/8BXwm5tBnD/

rafaeldtinoco@lxdexample:~$ lxc profile edit default < default.yaml

I have defined the default profile with a YAML file I already had. You may find examples HERE or in the pastebin from the command above, which is the one I’m going to use for this example.

We are all set to create our first container. A few things to keep in mind (a minimal sketch of such a profile is shown after this list):

  • I am using a YAML profile template that creates the user “rafaeldtinoco” automatically
  • My SSH key is imported into the “rafaeldtinoco” user automatically
  • The template also shares /root and /home with the container to be created
  • I have the following 2 options set so I don’t have to worry about the shared folders:
    • security.nesting: “true”
    • security.privileged: “true”
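
For reference, in case the links above ever go stale, here is a minimal sketch of what such a profile could look like. The user name, SSH key and shared paths below are placeholders and not my actual profile (that one is in the pastebin above):

config:
  security.nesting: "true"
  security.privileged: "true"
  # user name, key and shared paths below are placeholders
  user.user-data: |
    #cloud-config
    package_update: true
    package_upgrade: true
    users:
      - name: rafaeldtinoco
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...placeholder... rafaeldtinoco@host
description: example default profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
  home:
    path: /home
    source: /home
    type: disk
  roothome:
    path: /root
    source: /root
    type: disk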

Creating the container

rafaeldtinoco@lxdexample:~$ lxc launch ubuntu-daily:focal focalcontainer                                                                      
Creating focalcontainer
Retrieving image: rootfs: 87% (12.96MB/s)
...
Creating focalcontainer
Starting focalcontainer

rafaeldtinoco@lxdexample:~$ lxc list -c ns4t
+----------------+---------+---------------------+-----------+
|      NAME      |  STATE  |        IPV4         |   TYPE    |
+----------------+---------+---------------------+-----------+
| focalcontainer | RUNNING | 172.16.0.242 (eth0) | CONTAINER |
+----------------+---------+---------------------+-----------+

You can follow cloud-init working by executing

rafaeldtinoco@lxdexample:~$ lxc console focalcontainer
To detach from the console, press: <ctrl>+a q

Password:
Login timed out after 60 seconds.
[  OK  ] Stopped Console Getty.
[  OK  ] Started Console Getty.

Ubuntu Focal Fossa (development branch) focalcontainer console

focalcontainer login:
focalcontainer login:          Mounting Mount unit for core, revision 8689...
[  OK  ] Mounted Mount unit for core, revision 8689.
[  OK  ] Stopped Snap Daemon.
         Starting Snap Daemon...
[  OK  ] Started Snap Daemon.
         Mounting Mount unit for lxd, revision 14133...
[  OK  ] Mounted Mount unit for lxd, revision 14133.
[  OK  ] Listening on Socket unix for snap application lxd.daemon.
         Starting Service for snap application lxd.activate...
[  OK  ] Finished Service for snap application lxd.activate.
[  OK  ] Finished Wait until snapd is fully seeded.
         Starting Apply the settings specified in cloud-config...
[  OK  ] Reached target Multi-User System.
[  OK  ] Reached target Graphical Interface.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Finished Update UTMP about System Runlevel Changes.
[ 2222.174841] cloud-init[1542]: Get:1 http://us.archive.ubuntu.com/ubuntu focal InRelease [265 kB]
[ 2222.975069] cloud-init[1542]: Get:2 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease [89.1 kB]
[ 2223.140668] cloud-init[1542]: Get:3 http://us.archive.ubuntu.com/ubuntu focal-proposed InRelease [265 kB]
[ 2223.240950] cloud-init[1542]: Get:4 http://us.archive.ubuntu.com/ubuntu focal/main Sources [841 kB]
[ 2223.388595] cloud-init[1542]: Get:5 http://us.archive.ubuntu.com/ubuntu focal/universe Sources [9707 kB]

The container will execute everything we have defined in the default template and, once everything has finished, it will reboot. You can check whether it is done with the command:

rafaeldtinoco@lxdexample:~$ lxc shell focalcontainer
(c)root@focalcontainer:~$ cloud-init status
status: done
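
Or, from the host, if you prefer a command that blocks until cloud-init has finished, something like:

rafaeldtinoco@lxdexample:~$ lxc exec focalcontainer -- cloud-init status --wait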

And voilà =). We have a “focalcontainer” installed.

Advantages of having containers on ZFS

rafaeldtinoco@lxdexample:~$ time lxc copy focalcontainer clonedcontainer

real    0m0.367s
user    0m0.029s
sys     0m0.050s
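
Under the hood that copy is a ZFS clone, which is why it is almost instantaneous and initially uses close to no extra space. Subject to the mount-namespace caveat mentioned earlier, you can peek at it with something like:

rafaeldtinoco@lxdexample:~$ sudo zfs list -r -o name,used,refer default/containers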

rafaeldtinoco@lxdexample:~$ lxc list -c ns4t
+-----------------+---------+---------------------+-----------+
|      NAME       |  STATE  |        IPV4         |   TYPE    |
+-----------------+---------+---------------------+-----------+
| clonedcontainer | STOPPED |                     | CONTAINER |
+-----------------+---------+---------------------+-----------+
| focalcontainer  | RUNNING | 172.16.0.242 (eth0) | CONTAINER |
+-----------------+---------+---------------------+-----------+

rafaeldtinoco@lxdexample:~$ lxc start clonedcontainer

The cloned container will run the cloud-init scripts again - from the default template - so, in our case, it will run “apt-get update, apt-get dist-upgrade, etc.”… and it will reboot, being ready for you. You could use an “empty” template here so your cloned image does not run any commands from the template’s cloud-init YAML, for example =) (a sketch of that is shown below).
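
One way of doing that - just a sketch, done before starting the clone, and assuming the cloud-init payload lives in the profile’s user.user-data key as in the earlier profile sketch - would be:

rafaeldtinoco@lxdexample:~$ lxc profile copy default noinit
rafaeldtinoco@lxdexample:~$ lxc profile unset noinit user.user-data
rafaeldtinoco@lxdexample:~$ lxc profile assign clonedcontainer noinit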

rafaeldtinoco@lxdexample:~$ lxc shell clonedcontainer 

(c)root@clonedcontainer:~$ sudo su - rafaeldtinoco

(c)rafaeldtinoco@clonedcontainer:~$ ls
default.yaml  desktop  devel  downloads  patches  snap  technical  work

The cool thing is that, because of my profile, my whole home directory is ready in both containers!

Adding virtual machines to the same environment

Alright, alright… I already knew everything you said in the previous sections… show me how to have virtual machines with LXD…

Taking advantage of the environment we already created… in order to have virtual machines mixed in with the containers we just created, we can execute:

rafaeldtinoco@lxdexample:~$ lxc init ubuntu-daily:focal focalvm --vm
Creating focalvm
Retrieving image: metadata: 100% (2.74GB/s)
...

rafaeldtinoco@lxdexample:~$ lxc list -c ns4t
+-----------------+---------+---------------------+-----------------+
|      NAME       |  STATE  |        IPV4         |      TYPE       |
+-----------------+---------+---------------------+-----------------+
| clonedcontainer | RUNNING | 172.16.0.130 (eth0) | CONTAINER       |
+-----------------+---------+---------------------+-----------------+
| focalcontainer  | RUNNING | 172.16.0.242 (eth0) | CONTAINER       |
+-----------------+---------+---------------------+-----------------+
| focalvm         | STOPPED |                     | VIRTUAL-MACHINE |
+-----------------+---------+---------------------+-----------------+

Let’s add a config disk (the disk that will tell cloud-init all the options we want - coming from our default profile - to run during cloud-init initialization) and start the VM:

rafaeldtinoco@lxdexample:~$ lxc config device add focalvm config disk source=cloud-init:config
Device config added to focalvm

rafaeldtinoco@lxdexample:~$ lxc start focalvm

rafaeldtinoco@lxdexample:~$ lxc list -c ns4t
+------------+---------+------------------------+-----------------+
|    NAME    |  STATE  |          IPV4          |      TYPE       |
+------------+---------+------------------------+-----------------+
| focalvm    | RUNNING | 10.250.97.249 (lxdbr0) | VIRTUAL-MACHINE |
+------------+---------+------------------------+-----------------+

AND, just like with the container, “lxc console” will show cloud-init running and configuring the image automatically until it reboots and is ready for you.
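
That is, the same command we used earlier for the container:

rafaeldtinoco@lxdexample:~$ lxc console focalvm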

If you want to change the virtual machine’s CPU and memory configuration, you can execute the following commands while the virtual machine is off:

$ lxc config set focalvm limits.memory 8GiB
$ lxc config set focalvm limits.cpu 4
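
You can confirm the result afterwards with something like:

$ lxc config show focalvm --expanded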

Feel free to clone your new virtual machine as well, just like you did with the container:

rafaeldtinoco@lxdexample:~$ time lxc copy focalvm focalvmcopy
real    0m0.898s
user    0m0.035s
sys     0m0.035s

rafaeldtinoco@lxdexample:~$ lxc list -c ns4t
+-----------------+---------+---------------------+-----------------+
|      NAME       |  STATE  |        IPV4         |      TYPE       |
+-----------------+---------+---------------------+-----------------+
| clonedcontainer | RUNNING | 172.16.0.130 (eth0) | CONTAINER       |
+-----------------+---------+---------------------+-----------------+
| focalcontainer  | RUNNING | 172.16.0.242 (eth0) | CONTAINER       |
+-----------------+---------+---------------------+-----------------+
| focalvm         | STOPPED |                     | VIRTUAL-MACHINE |
+-----------------+---------+---------------------+-----------------+
| focalvmcopy     | STOPPED |                     | VIRTUAL-MACHINE |
+-----------------+---------+---------------------+-----------------+

I hope you liked this!


Rafael David Tinoco
rafaeldtinoco@ubuntu.com
Ubuntu Linux Core Developer
Canonical Server Team
