In addition to the online documentation of LXD, we want to expose the LXD documentation website as part of a LXD deployment. The static website is already generated in the /doc/html folder as part of our doc pipeline. We would ‘just’ need to serve these static assets through a web server embedded within LXD.
Exposing the documentation website as part of a LXD deployment has the following advantages:
It provides better accessibility: including the documentation within the binary means it’s always available, even if the user is offline or has intermittent network connectivity. It also eliminates the need to visit an external website to understand how to use the project, making it more convenient for users.
It ensures synchronization between the project and its documentation: hosting the documentation with the project binary ensures that the version of the documentation always matches the version of the software being used. This avoids any confusion or errors that might result from discrepancies between the documentation and the software version.
It makes it possible to link from the LXD UI to the docs: even though we could link to the online documentation, if a link changes, or a section is moved to another page, the link in the UI would break. So having the docs embedded avoids that issue.
However, it’s important to note that this approach also has potential downsides, such as increasing the size of the binary and the need to update the entire binary to update the documentation (87MB is the size of the LXD binary with the embedded website, compared to 61MB without it).
We’ll introduce a new HTTP endpoint within LXD at /documentation/. This endpoint is served by a file server handling pre-compressed (gzip or deflate) assets stored within the snap package.
The generated static documentation (make doc in our pipeline) will need to be copied into the source tree (introduction of a lxd/gendocs/html folder to hold the html/css/js/img assets). Note that this resonates with the other spec document introducing the generated JSON config option metadata in lxd/metadata/configuration.json, which would be exposed over LXD’s REST API.
Just as with the LXD UI, we need a flag at the snap package level to enable or disable the documentation. I propose snap set lxd documentation.enable=true followed by sudo systemctl reload snap.lxd.daemon, which will set an internal environment variable to enable this feature.
For this to work (and again, similar to the LXD UI), we need to expose the server to the network: lxc config set core.https_address <ADDR>
No API changes (this is a new endpoint, not a new API route)
I always like having offline documentation of anything. My preference would be that the documentation can be easily downloaded separately and installed anywhere. A downloadable tar file would be a great start, although this would not fit with the snap behaviour of getting automatic updates.
If I have multiple LXD installations, I wouldn’t want to have multiple copies of the documentation.
I believe something like this belongs in a container, not in the LXD server itself. I already have documentation for many things in a documentation LXD container. I currently use a container that runs nginx and serves documentation files.
I think of LXD as a tool that makes my server modular. If I want to add functionality to my server, such as documentation, I want to add it in a container, not in the LXD server itself. In fact, I wish the LXD Object storage was added as a container, not in the LXD server.
Would it make sense to have an “LXD documentation container image”? But then that would suffer from bundling the documentation with a guest OS (choices, choices…) and not being able to easily use it outside LXD. It would also currently not be automatically updated, as containers do not automatically update. That’s something to think about. Why does LXD update automatically, but its containers don’t? Should there be a mechanism to update containers automatically? I think so, eventually.
I think another important rationale to mention here is that having the documentation always available as part of LXD makes it possible to link from the UI to the docs. We could of course also link to the online documentation, but if a link changes, or a section is moved to another page, the link in the UI would break.
This is something we can avoid by shipping the docs with the product.
Btw, regarding offline documentation - it is possible to download a PDF of the documentation. We haven’t put any work into making this look nice so far, but it does contain the content of the docs.
Also, we don’t bundle the UI in the binary, so I don’t see why we need to bundle the docs either. We can just serve them from a known location through LXD and bundle them in the snap. We should still figure out and fix why they are so oversized, though.
After removing .doctrees (13MB), optimizing the assets (minification of html/css/js + svg/png optimization) and finally gzipping them, we get down to a meager 4.9MB. Is that acceptable?
And @sdeziel1, as this is a simple website, can’t we just have our gzip handler as part of the binary to avoid the burden of an nginx deployment? The setup for the end user would be much simpler IMHO.
Oh no, sorry for not being clear, I was not suggesting to add nginx to the mix, just pointing out that it’s possible to serve statically gzip’ed files. Dunno if the Go HTTP handler we have can do that; if not, then no worries.
The figures you got sound way better to me. Thanks Ruth for the pointer!
@sdeziel1 I realized there might be an issue with that. Indeed, if we were serving the assets separately (a.k.a. not in the binary), we could have support for gzip encoding in a Go handler and let nginx compress/decompress on the fly, and that’s fine.
What I want to do is embed the files in the binary so that even a non-snap user can access the website without relying on any dependencies. For that, the only solution is to embed the assets in the binary. The problem is that this makes the binary heavy.
So I came up with this that I think checks all the boxes:
We optimize + pre-compress everything at compile time (except maybe the audio/video files that can’t be compressed further).
We load everything in the binary (around 4.9MB of extra weight which I think is reasonable)
We implement a custom http.FileServer to map a normal asset path to its compressed asset path.
This is in theory even faster than nginx because we only need to let the client decompress the data. No compression needed at runtime.
Yep, I understand the benefits of embedding the assets and that makes sense to me assuming they are small-ish. Now that said, if you take the pre-compression route as I suggested, you need some way to handle simple clients (like wget) that by default don’t advertise gzip support. For those, you want the HTTP handler to decompress the embedded gzip'ed file and hand it to the simple client in non-compressed form.
I’ve provided basically the same feedback in the PR you’ve put up. BTW, shrinking it down to <5MB is pretty nice, well done!
Right, as of now this case is not handled other than by returning http.NotFound. I’ll take care of that. Regarding the size, yeah, it’s definitely way better. I think I could even pre-compress using Brotli encoding to save maybe a few hundred kilobytes (1MB tops). Is Brotli widely supported now in 2023? What do you think?