Using livecd-rootfs on its own is best described as “extremely challenging.” It’s not really “designed” to be executed as a single-call script.
The closest example of how one can call livecd-rootfs is in the source code, in debian/tests/default-bootstraps.
I won’t try to document all of its usage here, because it’s a never-ending black hole of possible variables. I’ll only break down what’s in the call in that script, as it’s a useful starting point, along with a very high-level summary of what a livecd-rootfs build looks like.
Summary
livecd-rootfs creates an image, be it an ISO, a set of layers meant for an ISO, or a pre-installed image. It runs through a series of steps, normally calling lb config and then lb build. What happens in those config and build steps is defined in live-build/auto/config and live-build/auto/build. They’re long shell scripts with tons of if statements keyed on environment variables. The most basic run:
- runs debootstrap to set up an initial chroot
- runs through config
- runs through build
There is then a concept of hooks, documented in live-build/ubuntu-cpc/README.cpc.md.
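The steps above, modeled loosely on how debian/tests/default-bootstraps drives the tool, can be sketched roughly as follows. The suite, architecture, and install path are illustrative assumptions, and the build itself needs root:

```shell
#!/bin/sh
set -e

# Which live-build/<PROJECT> scripts to use, and what to build.
# Values are illustrative; adjust for your release and architecture.
export PROJECT=ubuntu-cpc
export SUITE=noble
export ARCH=amd64

# live-build looks for ./auto/config and ./auto/build in the working
# directory, so copy them out of the installed livecd-rootfs package.
mkdir -p build && cd build
cp -a /usr/share/livecd-rootfs/live-build/auto .

lb config          # generates the live-build configuration from the env vars
sudo -E lb build   # debootstrap, hooks, image assembly (needs root)
```

This is a sketch of the shape of a build, not a guaranteed-working recipe; the real test script loops over several projects and does more setup.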
During the build portion, many commands are executed in the chroot. I won’t say all of them, because some are executed on the host while affecting the chroot.
PROJECT and SUBPROJECT
Projects in livecd-rootfs generally map to a folder in live-build/, so PROJECT=ubuntu-cpc would execute scripts from within that area. From there, SUBPROJECT is used to denote variations on those projects. The most common is minimized.
SUITE: the version of Ubuntu you wish to build. Fun note: the livecd-rootfs git repo has one branch per suite. Can you build jammy with the resolute branch? Probably? But the differences between versions of Ubuntu are encoded in the many bash scripts, so if a tool changed its API between releases, the command being run may no longer work. That’s also why I mention the difference between commands executed in a chroot and commands executed on the host against a chroot: fun things can happen there, so it is best if host and chroot are the same SUITE.
ARCH: CPU architecture. Again, cross-building isn’t really supported. Weird things can happen.
NOW: a build timestamp. I’m just being pedantic by listing it.
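Pulling those variables together, the environment for a minimized cloud-image build might look like this (the specific values are assumptions, not the only valid ones):

```shell
export PROJECT=ubuntu-cpc     # maps to live-build/ubuntu-cpc/
export SUBPROJECT=minimized   # variation on the project
export SUITE=noble            # Ubuntu release; best to match the host
export ARCH=amd64             # no real cross-build support; match the host
export NOW="$(date +%Y%m%d)"  # build serial/timestamp
```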
Should you use livecd-rootfs?
Likely not, at least not directly. There are projects out there that wrap livecd-rootfs to help create suitable environments that mimic Launchpad (where all official Ubuntu builds happen). These are not officially supported Ubuntu tools (meaning you can use them, but don’t expect much support). I won’t even link them for that reason, but know they exist. One will mix you a drink, likely an old-fashioned (hint of the day).
There are also tools that help with modifying existing images. Packer is of course the most well known, but it has drawbacks. Notably, most targets do not execute against a mounted system operating as a chroot; instead they boot the image, run the modifications, and then run cleanup. There have been many cases of failed cleanup, so always double-check important things, like having a unique machine-id, by booting a couple of instances. There are targets that operate in a mounted-chroot fashion, e.g. there’s a “surrogate-ec2” builder (exact name I can’t remember). There are also scripts put together to customize ISOs or pre-installed images. These are all “mileage-may-vary” and are not used “officially.”
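As a concrete example of that double-checking, here’s a hedged sketch that compares machine-ids across two freshly booted instances built from the same image; the hostnames and user are placeholder assumptions:

```shell
#!/bin/sh
# Boot two instances from the image first, then compare their machine-ids.
# "instance-a"/"instance-b" and the ubuntu user are placeholders.
id_a=$(ssh ubuntu@instance-a cat /etc/machine-id)
id_b=$(ssh ubuntu@instance-b cat /etc/machine-id)

if [ "$id_a" = "$id_b" ]; then
    echo "duplicate machine-id: image cleanup likely failed" >&2
    exit 1
fi
echo "machine-ids differ, cleanup looks sane"
```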
For Ubuntu and Canonical sponsored tools, we are working on a craft tool to replace livecd-rootfs. It’s still in development, though, and can’t yet build every type of image that livecd-rootfs can.
The last option, which is better suited to a cloud world, is using the image builders provided by the clouds themselves, or a VM-snapshot-style workflow plus cloud-init. I’ll link some official docs on building “Pro” images, but the only difference between “Pro” and “Free” when using these tools is which image you start with, so just substitute a “free” image for a “pro” one and they should work.
cloud-init isn’t “officially” documented for golden images these days, but you can encode customizations into cloud-init, then run cloud-init clean and save a snapshot of the VM. Docs on clean.
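A rough sketch of that snapshot workflow, run inside the VM you’ve customized; note the --machine-id flag only exists on newer cloud-init releases, so treat that part as an assumption about your version:

```shell
# ...apply your customizations first (packages, config files, etc.)...

# Remove cloud-init's per-instance state and logs so the next boot
# is treated as a first boot on a brand-new instance.
sudo cloud-init clean --logs

# Newer cloud-init releases can also reset /etc/machine-id:
#   sudo cloud-init clean --logs --machine-id

# Then power off and snapshot the VM with your cloud's tooling.
sudo shutdown -h now
```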
My Suggestion
As an Ubuntu member, I’d say “use the method that is closest to what you plan on deploying.” If you’re bound for a cloud, using the image builders provided by the clouds is the “safest” bet. Until forthcoming-image-building-craft-tool is officially launched, Packer is your best alternative for “one set of scripts that should build anywhere.” If you’re looking for something more in your control, taking a “launch, customize, clean” approach, be it with cloud-init clean or manually, is an option. Keeping all your configuration in source control and utilizing something like Ansible is also really solid. My life pre-Canonical was a lot of Ansible, so deploy-then-customize rather than launching golden images. But I didn’t need to scale fast, as it was much more a pet shop (we were running large instances of things, in HA, all over the world, but we weren’t cloud-native; more “old school,” but we tested the install scripts and environments continuously).