An update on the licence change and community image server

Change to AGPL-3.0-only

As stated in the LXD 5.20 release notes, LXD is now released under AGPL-3.0-only. All Canonical contributions have been relicensed and are now under AGPL-3.0-only. Community contributions remain under Apache-2.0; this applies to all contributions made before December 12th, 2023.

Going forward, all contributions will be released as AGPL-3.0-only, unless specified otherwise in the commit message. In other words, all Apache-2.0 code will be identifiable by the commit messages or, where applicable, by the file header.

Reiterating what was stated in the release notes, the change in the licence does not prevent our users from using, modifying, or providing LXD-based software solutions, provided that, if they modify it and make it available to others, they also share the source code. It is designed to encourage those looking to modify the software to contribute back to the project and the community.

What about the Go SDK client package?

Following the announcement, several community members voiced their concerns about the licence of the Go SDK client package (which is statically linked with users' own software). We have no intention of hindering community usage or integrations; the Go SDK client package will remain under Apache-2.0, and we will shortly update the package to reflect that. The Python SDK client package also remains Apache-2.0 licensed.

I'm concerned about losing access to the community image server

The LinuxContainers project has decided to restrict access to the community image server for LXD users. We regret their decision and the disruption it will cause for the community.

We have no intention of restricting LXD only to Ubuntu users and will be providing a replacement image server serving images for other Linux distributions. The work is in progress and we'll be providing an update on the initiative in the near future. In the meantime, we are happy to receive feedback from our users on which images we should prioritise.

4 Likes

I'd love to see the Slackware image (already available on the LinuxContainers image server) included in the Canonical one.

Definitely Alpine images. Do you have any timelines? The community server is going to cut off LXD access in May.

1 Like

The plan is to have a replacement ready before the cutoff date. We'll give a specific update as soon as we finalize the details.

I'm using LXD on Arch Linux, and since the 15th of January I can no longer use the 'images:' remote. Is there any workaround for that?

As for distros, I would like to see Debian (especially the cloud variant), because I use it for all my VMs.

2 Likes

Rocky Linux (8 and 9) would be most important for us. This change hit us today and I'm not very happy about it. Quoting myself from a thread in another forum (https://discuss.linuxcontainers.org/t/important-notice-for-lxd-users-image-server/18479/31?u=perlun):

Also seeing similar problems… We have been using images: with LXD (LTS and non-LTS) until now. Was working on some infrastructure changes for our LXD-based CI runners and discovered this weirdness with LTS-based LXD and "normal" LXD seeing a different set of images… until I found this thread half an hour ago. :neutral_face:

My problem is that LXD LTS does not work for our use case, because we use functionality which was released as part of LXD 5.1 (https://discuss.linuxcontainers.org/t/lxd-5-1-has-been-released/13956#setting-profiles-during-an-image-copy-8), specifically the lxc image copy --profile flag…
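
For anyone who has not used it, the flag records which profiles new instances created from the copied image should get; a minimal sketch (the remote alias and profile name below are illustrative):

```shell
# Copy an image into the local store and attach a profile to it,
# so instances launched from it pick up that profile by default.
# "rocky9-ci" and "ci-rocky" are illustrative names.
lxc image copy images:rockylinux/9 local: \
    --alias rocky9-ci \
    --profile ci-rocky
```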

So I'm currently stuck with an LXD LTS which doesn't work (because we can't differentiate the profile being used on Ubuntu and Rocky Linux-based guests, which is needed for proper operation) but can access the images, or an LXD non-LTS which works but can't access the required images. :sob:

Or, we can switch to Incus, which is not released as a stable version yet. I'm not sure I'm willing to put this on our CI infrastructure just yet.

Sorry if this sounds very negative, I'm just frustrated after wasting some time on this today. I know this is not the fault of LinuxContainers.org. It's just so incredibly sad to see all the extra work and pain this incurs on a lot of people in the community. :confused:

5 Likes

Just a clarification on this part of what I wrote: Incus 0.x releases are "as stable and on a similar schedule to LXD 5.x releases", per one of their maintainers. I wanted to make this clear because my previous message was a bit factually incorrect. I won't discuss this any further in this forum, since Incus != LXD; I just wanted to get the facts straightened out.

We are using CentOS Stream 8 and planning to upgrade to Stream 9 in due course, so it would be really nice if you could prioritise those two.

2 Likes

AlmaLinux 9 and Arch Linux containers

Don't prioritize, just do what they are doing. Build the images with distrobuilder, set up CI/CD, and host the images. This way we will get all the images we had before. This change is already affecting users in production environments, which is really bad. Waiting for May will make things even worse: a lot of people will migrate to Incus because of the delay, and LXD will lose users. It's not like we were not warned about it by the LinuxContainers maintainers.

1 Like

Any progress update on this? We'd love to know how it's going.

2 Likes

Don't prioritize, just do what they are doing.

I'd like to echo @sekurilab's suggested course of action. Cloud-init needs to be able to test everything we support. See our supported distro list here if you need to prioritize. In the meantime we can use Incus locally for manual testing, but the tooling we use for testing currently only supports LXD; regaining the lost testing capabilities ASAP would be best.

6 Likes

Please add CentOS 7.

By the way, is there a way to build a self-hosted image server, and is there documentation on how to build images?

I have a controversial request, an area where Incus is weak as well. Create a way to:

  • create your own self-hosted repositories via HTTP/simplestreams
  • document how to transform a supported OS distro ISO into an LXD/LXC image
  • document how to integrate it into the hosted repository

Probably some integration with Sonatype Nexus OSS could be made, or a dedicated Docker/LXC-based image running an "image" server for LXD over the simplestreams/HTTP protocol.
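
For what it's worth, once a static simplestreams tree is hosted over HTTPS, LXD can already consume it as a remote; a minimal sketch (the server URL and image alias below are hypothetical):

```shell
# Register a self-hosted simplestreams tree as a remote and launch
# from it. The URL and the image alias are hypothetical examples.
lxc remote add my-images https://images.example.com --protocol simplestreams
lxc launch my-images:alpine/3.19 c1
```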

P.S.
And yes, because of the delay I was already forced to migrate my home lab to Incus, since I'm a Fedora server user alongside Ubuntu, and I'm currently stumped by how bad the situation is with creating and hosting your own images in both Incus (LinuxContainers) and LXD (Canonical).

He's not one of the maintainers but actually a core developer, and what he described is basically the point at which Incus forked from LXD. But going forward, nobody guarantees their compatibility, nor how bright the Incus future will be (or how long that person will be able to support the infrastructure).

On the LXD side nothing is better. The thing is that for half a year nothing has been done to enable creating, or migrating to, your own LXD repositories alongside the "ubuntu:" one, and the absence of that possibility, and of clear guides on how to create your own, raises concerns, especially for environments without internet access.

Hi All

I've begun working on the image server, and have also set up the build system for the images. You can find more info here. The goal is to get something up and running within 2 weeks.

We're prioritizing getting an MVP up and running. Given that we have limited resources, I'm going to focus primarily on Alpine container images for aarch64 and amd64 for LXD. That's the basic setup we need for our product.

The basic build configuration is there, but it's still missing one piece that pushes the built image to the server, which I will solve in due time.

I'm making good progress and am positive about getting the MVP up in 2 weeks. If there is any particular OS you'd like to see, feel free to submit an issue on the opsmaru-images repo or submit a pull request. Documentation is a little sparse right now, but you can read the code; it's pretty straightforward.

Project Goals

To create an open image server that is transparent and can be easily managed by the community.

Deployment Options

Users will either be able to sign up to our image server, get a token, and use the image server managed by upmaru, or they will be able to host their own image server.

MVP Features

  • Provide basic functionality for serving LXD / Incus images
  • Provide a way to build / push images to S3-compatible storage
  • Provide a way to put a CDN in front to ensure fast performance

Roadmap

  • A UI to let users easily issue and manage their own tokens.
  • A UI to let users bring their own CDN so they can be responsible for their own bandwidth / usage.

2 Likes

I wanted to update everyone on my progress.

I got the hash calculation working, and have set up the repo that will host the GitHub Action.

You can see how the metadata for the items is being built here:

[
  %{
    name: "incus.tar.xz",
    size: 880,
    path: "images/alpine/3.19/amd64/default/1/incus.tar.xz",
    source: "/home/zacksiri/test/incus.tar.xz",
    file_type: "incus.tar.xz",
    hash: "45c9f70358879bc8163843ef86d51592a6ff8d1177db0f1df9b6c6e571322eab",
    combined_hashes: [
      %{
        name: "combined_squashfs_sha256",
        hash: "0d0c64a632ead7ac29516562cdcff63359e49544214ac08dd2dbc43aa9fc3ed4"
      }
    ],
    is_metadata: true
  },
  %{
    name: "lxd.tar.xz",
    size: 880,
    path: "images/alpine/3.19/amd64/default/1/incus.tar.xz",
    file_type: "lxd.tar.xz",
    hash: "45c9f70358879bc8163843ef86d51592a6ff8d1177db0f1df9b6c6e571322eab",
    combined_hashes: [
      %{
        name: "combined_squashfs_sha256",
        hash: "0d0c64a632ead7ac29516562cdcff63359e49544214ac08dd2dbc43aa9fc3ed4"
      }
    ],
    is_metadata: true
  },
  %{
    name: "root.squashfs",
    size: 2850816,
    path: "images/alpine/3.19/amd64/default/1/rootfs.squashfs",
    source: "/home/zacksiri/test/rootfs.squashfs",
    file_type: "squashfs",
    hash: "b5409f8817f9e297c51fa96a426f5876c20447c1b1767902ea84f199853889b9"
  }
]
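
If it helps anyone following along, the combined_squashfs_sha256 above appears to be a SHA-256 computed over the metadata tarball followed by the rootfs, as if the two files were one concatenated stream; a minimal Python sketch of that calculation (the file paths are illustrative, not icepak's actual code):

```python
import hashlib

def sha256_file(path):
    """Plain SHA-256 of a single file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def combined_sha256(metadata_path, rootfs_path):
    """SHA-256 over the metadata tarball followed by the rootfs,
    i.e. the hash of the two files as one concatenated stream."""
    h = hashlib.sha256()
    for path in (metadata_path, rootfs_path):
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
    return h.hexdigest()
```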

The goal of the icepak repo is that it will run inside the GitHub Action. It will essentially do the following:

  • Push built artifacts to S3 storage
  • Compute all the hashes / metadata necessary
  • Create a version for a particular product

This will essentially give polar all the information needed to generate:

  • /streams/v1/index.json
  • /streams/v1/images.json

Those two JSON endpoints are based on the simplestreams protocol.
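
For anyone unfamiliar with the protocol: index.json is essentially a table of contents pointing at images.json, which lists each product's files together with the hashes shown above. A rough sketch of the index, with an illustrative product name:

```json
{
  "format": "index:1.0",
  "index": {
    "images": {
      "format": "products:1.0",
      "datatype": "image-downloads",
      "path": "streams/v1/images.json",
      "products": [
        "alpine:3.19:amd64:default"
      ]
    }
  }
}
```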

I don't mean to bug anyone, but is there an update on this? We had to move to Incus on one project because we needed a Debian image. Both projects are great, but I would have loved not to spend a day migrating things.

Also, it would be great to have Ubuntu images with the generic kernel, instead of the "kvm" kernel, for VMs by default.

We are actively working on it.

Regarding generic kernels: I believe that from 23.10 onwards (including the forthcoming 24.04 LTS release), the images from ubuntu: and ubuntu-daily: use the generic kernel.

lxc launch ubuntu:23.10 --vm v1
Creating v1
Starting v1                                   
lxc exec v1 -- uname -a
Linux v1 6.5.0-17-generic #17-Ubuntu SMP PREEMPT_DYNAMIC Thu Jan 11 14:01:59 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

2 Likes