FAQ: LXD has been moved to Canonical

IMHO the LXD Discourse was a lifesaver for LXD users. If this rate limit (or something similar) is imposed, I’m not sure it would make sense for the average user to wait for approval or put up with delays. I sometimes think it would be nicer to move to GitHub Discussions, since it is one place for both asking questions and seeing related issues.

4 Likes

Was the move of LXD questions and answers from discuss.linuxcontainers.org to discourse.ubuntu.com really such a good idea? I doubt that searching for LXD-related problems and solutions among all these Ubuntu questions will work as expected. Maybe it would have been better to start an AskLXD on Stack Exchange or a separate Discourse platform - but why???
Marketing probably had a hand in the decision to move, without thinking it through … all very difficult to understand, and I’m not convinced …

1 Like

All previous (and future) release announcements are available here on Github.

Thank you for the information. I would greatly appreciate it if you could revive the translations of past release notes, even on GitHub.

but all release announcements will be posted here (and linked on Github) and translations to other languages are more than welcome.

Do you have any tips for translating and publishing useful posts like the release announcements on discourse.ubuntu.com? Should we simply post the translated version as a comment? Is there a way for multiple users to review and correct a translation, like a git PR/MR?

We understand that this might cause some inconvenience to the users.

I understand that it’s inevitable to have dead links at this point. I’m looking forward to them being fixed in the future.

Thank you for your feedback. This is important for us, and we are discussing how to set up things in a better way in the long term.

Clarifying the issue you experienced earlier - this is the place to ask support questions. Not everyone was aware of the changes we are considering and we are working with the Community here to find the best approach for this. We have set up a new LXD - Support sub-category. This category is set up in a way that it does not interfere too much with regular usage of the Ubuntu Discourse. The LXD team will be answering questions there, as usual.

I have updated the FAQ with a section on this as well.

2 Likes

I found my way here because images.linuxcontainers.org suddenly removed all the 32-bit Debian images for armhf. My apologies if this is the wrong place to ask, but will the architectures removed from images.linuxcontainers.org be restored using Canonical resources, or am I out of luck and now need to start building my own armhf images by hand? I was relying on this infrastructure for building deb packages for Raspberry Pi 3B+ boards using 32-bit Debian images. Fortunately I still have the last published images stored on several cloud VMs and was able to export them, so my CI pipelines are not completely broken, but I would like to know what the plan is for these lost architectures moving forward.

Thank you for your time and attention.

-A

1 Like

The LXD Go code import path has changed to github.com/canonical/lxd.
Wouldn’t it be better to have set it to canonical.com/lxd? The code can still be on GitHub, but setting the import path to the Canonical domain would be closer to using Canonical’s infrastructure and branding, instead of GitHub’s. All it takes is a simple HTTP server on Canonical’s domain that points Go imports to the GitHub repository. It’s like having email on one’s own domain while using another company’s email servers.
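For context, this is exactly how Go’s vanity import mechanism works: the server at the import path answers `?go-get=1` requests with a `go-import` meta tag pointing at the real repository. A minimal sketch, assuming a hypothetical `canonical.com/lxd` endpoint (no such endpoint actually exists):

```html
<!-- Served at https://canonical.com/lxd?go-get=1 (hypothetical endpoint) -->
<!DOCTYPE html>
<meta name="go-import" content="canonical.com/lxd git https://github.com/canonical/lxd">
```

With that in place, `go get canonical.com/lxd` would fetch the code from GitHub while the import path stays on Canonical’s domain.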

I absolutely second that.
Dropping community support (or severely limiting bandwidth) is going to strangle LXD.
I understand the need to lower the burden on the core developers, but that’s exactly what community-powered forums are for.

Note: I wouldn’t have posted this (contributing to sheer noise) if not for the need to “upgrade” my user status.

Hi @ablomberg

There are no plans to restore building images for those architectures and distributions.

The images supported by Canonical are those available on the ubuntu: remote.

See:

lxc image list ubuntu:

The community images are maintained by the https://linuxcontainers.org/ project for use with LXD and LXC.

That’s what I was afraid of. I guess the days of a free lunch are over. It was good while it lasted.

Thank you for the confirmation.

3 Likes

Hm, that’s sad. I’m troubleshooting a bug in ubuntu armhf that does NOT happen in debian armhf, and I was hoping to be able to spin up a debian armhf container, but I see the armhf images are gone from the default images: remote.

Since I was working on this bug on a Sunday in my free time, I guess I’ll stop here.

1 Like

I’m disappointed to learn that the images: remote will no longer be available for LXD, as announced here.

It appears that I now have to decide between continuing to use LXD or transitioning to Incus.

Is there any plan for LXD to introduce a fork of images: or alternative methods to address this change?

Additionally, I’m concerned about the future compatibility of ubuntu: images with Incus. Will Incus still be able to use these images, or are there potential issues on the horizon?

Thank you.

1 Like

Yeah, this is a major blow. We rely on images:, specifically the Alpine container images. Without images: support we will be forced to move to Incus.

@tomp does Canonical have plans to continue supporting images:?

You can use Incus to export an image from images: and then import it to LXD. I use both Incus and LXD, and I typically build and export images on one platform that I then use on both Incus and LXD systems.
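The flow above can be sketched as follows. These commands need live Incus and LXD daemons, and the alias and file names are just examples (the exact exported file names depend on whether the image is split or unified; check `incus image export --help`):

```sh
# Copy an image from the images: remote into the local Incus image store
incus image copy images:alpine/3.18 local: --alias alpine-3.18

# Export it to tarballs (a split image produces metadata + rootfs archives)
incus image export alpine-3.18 alpine-3.18

# Import the exported tarballs into LXD under the same alias
lxc image import alpine-3.18.tar.xz alpine-3.18.root.tar.xz --alias alpine-3.18
```

After the import, `lxc launch alpine-3.18 mycontainer` uses the local copy with no dependency on the images: remote.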

It’s a good idea to use images that you’ve imported to your system, rather than getting the latest image from “images:” whenever you need it, because the latest image may have bugs or changes that break your instances. This has happened to me several times in the past, so I no longer build containers directly from “images:”, except once in a while to build a container that I then publish as an image.

It’s also a good idea to learn to use distrobuilder so you can build your own images from scratch, just in case. For example, there is no images:alpine/3.19 image yet, but you can make your own using distrobuilder.
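As a starting point, a distrobuilder definition is a YAML file. A heavily trimmed sketch for Alpine follows; the real definitions maintained in the lxc-ci repository also include package lists, actions, and target sections that this omits:

```yaml
image:
  distribution: alpinelinux
  release: "3.19"
  architecture: x86_64

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
```

Building with something like `distrobuilder build-lxd alpine.yaml` then produces the metadata and rootfs tarballs that `lxc image import` accepts.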

As for compatibility, I hope LXD and Incus can agree on a standard image format that they both use. Apparently the format is the same for now, but there is no guarantee that it will remain so.

1 Like

I’ve decided to embark on the journey of setting up my own image building and hosting. Apparently it’s conceptually quite simple: it’s just a static HTTP server with a few JSON endpoints. Since I’m not able to use Incus (they dropped support for Fan networking), I will need to stay with LXD.

The challenge is the maintenance of keeping it up to date, which can be solved with a basic web app, CI, and some automation. I’m going to build this out and release it as a solution. It should make building and hosting images much easier.

I also hope the formats do not diverge, that would be another problem to solve.

Could you elaborate? Perhaps create a project page somewhere (forum topic, github repository) to discuss?

I haven’t started yet; I’m basically going to design and start this soon. I have to finish the initial MVP before May, since I have a project we’re launching soon that currently depends on images:. You can imagine the predicament I’m in: on one hand there is the LXD / Incus fork, then Incus drops Fan networking, and then images: is announced to stop working for LXD.

My initial solution was to switch to Incus, but without Fan networking some features we require (cross-host networking for containers) become cumbersome to manage and set up. OVN is just overkill for what we need. So given that Incus won’t work, we have to stick with LXD, which brings us to the fact that images: will be shut down for LXD by May 2024. Which leads me to take this matter into my own hands.

You can see the website for the project here: https://opsmaru.com. Basically it’s a self-hosted PaaS built on top of LXD, hosted in any cloud (launching with AWS / DigitalOcean) and running entirely in the customer’s cloud account. It uses LXD as the container engine and enables users to deploy their Rails / PHP / Elixir / Go / Python / whatever apps on top in a few clicks.

We’ve been developing the project for the past year; what you see on the website is the old version. The new version automates away all the complexity of setting up an LXD cluster.

We use Alpine Linux as the main OS in the containers. So the MVP image server we set up will host the Alpine x86_64 and arm64 default variants, for LXD only.

Essentially, though, the main design behind it is:

  1. Uses distrobuilder to build the requested images
  2. Database to store the different versions / metadata
  3. Repository on GitHub responsible for building the image using GitHub Actions
  4. GitHub Actions push the built artifacts to this web app via an API
  5. Web app receives and processes the artifact (generates all necessary hashes, etc.)
  6. Web app pushes artifacts to an S3-compatible bucket of choice
  7. Web app serves the files requested by LXD
  8. CDN in front to cache images
  9. Some kind of user authentication token in the path

The web app will give each user a private URL, for example:

https://images.example.com/:token

which is then added to the LXD cluster as the image source, and will in turn be consumed by LXD as:

https://images.example.com/:token/streams/v1/index.json
https://images.example.com/:token/streams/v1/images.json

This way it can track and throttle requests to prevent abusive behaviour.
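For context, the two endpoints above follow the simplestreams format that LXD consumes. A minimal sketch of what the `streams/v1/index.json` document might look like (the field values here are illustrative, not a complete spec; the product IDs must match entries in `images.json`):

```json
{
  "format": "index:1.0",
  "index": {
    "images": {
      "datatype": "image-downloads",
      "format": "products:1.0",
      "path": "streams/v1/images.json",
      "products": [
        "alpinelinux:3.19:amd64:default"
      ]
    }
  }
}
```

`images.json` then lists, per product and version, the downloadable items (metadata and rootfs tarballs) with their paths, sizes, and SHA-256 hashes, which is what the hash-generation step in the list above would produce.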

Initially we will build this to serve our own customers, since that’s what is primarily needed. It’ll be open-sourced, so anyone who wishes to run their own instance can.

Created a github repo here https://github.com/upmaru/polar

Development will probably start sometime in late January 2024, as we’re going to launch our service first; there is a lot of work that has been scheduled to go out as a public beta. That leaves us just about 2-3 months to develop, QA, and deploy the image server.

1 Like

We have published an update regarding LXD images here. Let us know which distros you would like us to prioritize.

Regarding OVN being overkill: I don’t know if you are referring to getting it up and running, but you can get LXD and OVN working together quickly with our recent Micro* releases (MicroOVN, MicroCloud).

Yeah, I tried MicroOVN; it works. Fan networking is still simpler.