Single-node guided

This tutorial shows how to install OpenStack (based on project Sunbeam). It will deploy an OpenStack 2024.1 (Caracal) cloud.


You will need a single machine whose requirements are:

  • physical or virtual machine running Ubuntu 22.04 LTS
  • a multi-core amd64 processor (ideally with 4+ cores)
  • a minimum of 16 GiB of free memory
  • 50 GiB of SSD storage available on the root disk
  • two network interfaces
    • primary: for access to the OpenStack control plane
    • secondary: for remote access to cloud VMs

Caution: Any change in IP address of the local host will be detrimental to the deployment. A virtual host will generally have a more stable address.

Important: For environments constrained by a proxy server, the target machine must first be configured accordingly. See section Configure for the proxy at the OS level on the Manage a proxied environment page before proceeding.

Control plane networking

The network associated with the primary network interface requires a range of approximately ten IP addresses that will be used for API service endpoints.

For the purposes of this tutorial, the following configuration is in place:

Network component Value
Address range
Interface name on machine eno1

External networking

The network associated with the secondary network interface requires a range of IP addresses that will be sufficient for allocating floating IP addresses to VMs. This will, in turn, allow them to be contacted by remote hosts.

For the purposes of this tutorial, the following configuration is in place:

Network component Value
Address range
Interface name on machine eno2
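Before proceeding, it is worth confirming the interface names and current addressing on the machine. A quick check (the names eno1 and eno2 are this tutorial's example values; yours may differ):

```shell
# Show all interfaces in brief form: name, state, and addresses
ip -br addr show

# Inspect the tutorial's two interfaces individually
ip -br addr show eno1
ip -br addr show eno2
```

The secondary interface is typically left unconfigured (no IP address assigned), since Sunbeam will take it over for external traffic.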

Deploy the cloud

The cloud deployment process consists of several stages: installing a snap, preparing the cloud node machine, bootstrapping the cloud, and finally configuring the cloud.

Note: During the deployment process you will be asked to input information in order to configure your new cloud. These questions are explained in more detail on the Interactive configuration prompts page in the reference section.

Install the openstack snap

Begin by installing the openstack snap:

sudo snap install openstack --channel 2024.1/edge

Caution: It is highly recommended to use the --channel 2024.1/edge switch, which includes all the latest bug fixes and updates ahead of the next stable release, due this summer.
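After installation, you can confirm the snap version and the channel it tracks using a standard snapd command:

```shell
# Verify the installed snap, its revision, and its tracking channel
snap list openstack
```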

Prepare the machine

Sunbeam can generate a script that ensures the machine has all of the required dependencies installed and is configured correctly for use with OpenStack. You can review this script using:

sunbeam prepare-node-script

or execute the script directly:

sunbeam prepare-node-script | bash -x && newgrp snap_daemon

The script will ensure some software requirements are satisfied on the host. In particular, it will:

  • install openssh-server if it is not found
  • configure passwordless sudo for all commands for the current user (NOPASSWD:ALL)
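If you prefer to inspect the script before running it, you can write it to a file first. This is plain shell plumbing around the commands shown above:

```shell
# Save the generated script for inspection
sunbeam prepare-node-script > prepare-node.sh
less prepare-node.sh

# Run it once satisfied, then pick up the new group membership
bash -x prepare-node.sh
newgrp snap_daemon
```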

Bootstrap the cloud

Deploy the OpenStack cloud using the cluster bootstrap command:

sunbeam cluster bootstrap

On snap channel 2023.2/edge and later, you will first be prompted whether or not to enable network proxy usage. If you answer ‘Yes’, several sub-questions will be asked.

Use proxy to access external network resources? [y/n] (y):
Enter value for http_proxy: ():
Enter value for https_proxy: ():
Enter value for no_proxy: ():

Note that proxy settings can also be supplied by using a manifest (see Deployment manifest).
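As a sketch of the manifest route (the --manifest option and the file name here are illustrative; consult the Deployment manifest page for the exact flag and schema):

```shell
# Hypothetical: bootstrap with proxy settings supplied in a
# manifest file rather than answered at the interactive prompts
sunbeam cluster bootstrap --manifest my-manifest.yaml
```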

When prompted, enter the CIDR and the address range for the control plane networking. Here we use the values given earlier:

Management networks shared by hosts (CIDRs, separated by comma) (
MetalLB address allocation range (supports multiple ranges, comma separated) (

Configure the cloud

Now configure the deployed cloud using the configure command:

sunbeam configure --openrc demo-openrc

The --openrc option specifies a regular user (non-admin) cloud init file (demo-openrc here).

A series of questions will now be asked. Below is a sample output of an entire interactive session. The values in square brackets, when present, provide acceptable values. A value in parentheses is the default value. Here we use the values given earlier:

Local or remote access to VMs [local/remote] (local): remote
CIDR of network to use for external networking (
IP address of default gateway for external network (
Populate OpenStack cloud with demo user, default images, flavors etc [y/n] (y):
Username to use for access to OpenStack (demo):
Password to use for access to OpenStack (mt********):
Network range to use for project network (
Enable ping and SSH access to instances? [y/n] (y):
Start of IP allocation range for external network (
End of IP allocation range for external network (
Network type for access to external network [flat/vlan] (flat):
Writing openrc to demo-openrc ... done
Free network interface that will be configured for external traffic [eno1/eno2] (eno1): eno2

Any remote hosts intending to connect to VMs on this node (remote access in first question) must have connectivity with the interface selected for external traffic (last question above).
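To exercise the cloud from the command line as the demo user, you can source the generated openrc file. This assumes an OpenStack CLI client is available; the openstackclients snap named below is one way to get it, not a step from this tutorial:

```shell
# Install a CLI client if one is not already present (assumed package)
sudo snap install openstackclients

# Load the demo user's credentials and list what is visible to it
source demo-openrc
openstack network list
openstack image list
```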

Launch a VM

Verify the cloud by launching a VM called ‘test’ based on the ‘ubuntu’ image (Ubuntu 22.04 LTS), using the launch command:

sunbeam launch ubuntu --name test

Sample output:

Launching an OpenStack instance ...
Access instance with `ssh -i /home/ubuntu/.config/openstack/sunbeam ubuntu@`

Connect to the VM over SSH. Because remote VM access has been enabled, you will need the private SSH key given in the above output from the launching node. Copy it to the connecting host. Note that the VM will not be ready instantaneously; waiting time is mostly determined by the cloud’s available resources.
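The copy-then-connect sequence might look like this (the host name, key path, and floating IP are placeholders for values from your own deployment):

```shell
# On the remote host: fetch the private key from the launching node
scp ubuntu@<launching-node>:/home/ubuntu/.config/openstack/sunbeam ./sunbeam.key
chmod 600 sunbeam.key

# Connect to the VM's floating IP once it has finished booting
ssh -i sunbeam.key ubuntu@<floating-ip>
```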

Related how-tos

Now that OpenStack is set up, be sure to check out the following how-to guides:


Hello! The latest tutorial for MicroStack looks great! I have tried it, but I am facing an issue with the cinder-ceph workload.

After I use the cluster bootstrap command, the output message says that the node has been bootstrapped. However, if I check juju status:

cinder-ceph/0*               blocked   idle         (ceph) integration missing

The cinder-ceph workload shows ‘blocked’ and the app itself is stuck in the ‘waiting’ state:

cinder-ceph                                        waiting      1  cinder-ceph-k8s            2023.1/stable   19  no       waiting for units to settle down

Is there a step to manually integrate the cinder-ceph unit?

Hello @flaringpants.

Yes, this is normal. By default, Ceph is not deployed, so cinder-ceph has nothing to integrate with.

If you want to follow the tutorial while also deploying Ceph, you need to add the storage role:

sunbeam cluster bootstrap --role control --role compute --role storage

By default, bootstrap assigns the roles control and compute only.

P.S.: Make sure to have free block devices to use Ceph with.
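To check for candidate disks, a standard lsblk listing will do; a disk with no partitions and no mountpoint is a candidate to back Ceph (device names depend on your machine):

```shell
# List block devices; unpartitioned, unmounted disks can be given to Ceph
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```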


Great thank you for the info!

Another issue I faced with the last version was that whenever the host PC (VM) went through a reboot, the horizon IP address changed, and sometimes other services/apps got stuck in the ‘waiting’ state for over 30 minutes.

I always solved it by deleting the relevant pod manually using the microk8s.kubectl delete po command, which did the trick most of the time.

However, after a reboot in this version, all of the components show as ‘active’, but the images, flavors, key pairs, and volumes are nowhere to be found.

Thanks for your feedback @flaringpants, this is a known issue, and a fix is currently being reviewed 🙂


Btw, I tried this method by adding the ‘storage’ role as you mentioned, with a free block device for Ceph to use; however, I ended up with this:

Is there some manual integration that has to be configured?

Update: It was fixed, and a new volume can be created manually using the Dashboard or the CLI. However, whenever I try to create an instance and a volume at the same time, the majority of the microk8s pods running OpenStack workloads crash.


I am getting a timeout waiting for OpenStack at the step “Deploying OpenStack Control Plane” when running the “sunbeam cluster bootstrap” command. I waited for around an hour before the program quit with the timeout error. I have tried it twice now, once in a VM, and it never worked. I double-checked, and my system meets all the requirements.

When I try to re-run the “sunbeam cluster bootstrap” command after the timeout, I get this error: “Error: Leader for application ‘mysql’ is missing from model ‘openstack’”.

Is this issue known? I really want to get this working on my system.

Hey, whenever I run this command, I always open a new terminal and run:

watch --color -- juju status --color -m openstack

This watches the juju status live, which helps you see which app is stuck in the ‘waiting’ state and whether there are any errors during the bootstrap process.


Glad this was fixed. Do you have enough resources to create the VM? We’ve seen pod restarts when allocating a VM with too much RAM on a single node.

This is not a known issue; can you give us the output of juju status -m openstack?

Timeouts are known to occur at the deploy control plane step (re-running bootstrap should fix things), but not at the stage you’re referring to.

I am currently running a VM with 16 GB RAM and a 6-core processor for this lab, so that might be the case! I will let you know after trying this on a better-specced VM.

Something I failed to mention: there’s a known bug with volumes. It should not restart the pods, but the volume attach will fail nonetheless.


Good afternoon, everyone. I hope you are well. I would like to ask for help: I am trying to follow the steps at this URL and I am hitting this error:

zuccadev@DESKTOP-2DVUOQF:~$ sunbeam cluster bootstrap
    Error creating user DESKTOP-2DVUOQF. in Juju
    Traceback (most recent call last):
      File "/snap/openstack/182/lib/python3.10/site-packages/sunbeam/commands/", line 279, in run
        process =, capture_output=True, text=True, check=True)
      File "/usr/lib/python3.10/", line 524, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command '['/snap/openstack/182/juju/bin/juju', 'add-user', 'DESKTOP-2DVUOQF.']' returned
    non-zero exit status 1.
    ERROR invalid user name "DESKTOP-2DVUOQF."

    Error: Command '['/snap/openstack/182/juju/bin/juju', 'add-user', 'DESKTOP-2DVUOQF.']' returned non-zero exit status 1.

I don’t know where to start checking, since I executed the first commands of the guide and everything went normally. I should mention that I’m doing this from Ubuntu in WSL.

Hi, what kind of host are you installing upon?

Let me explain what I did. I am a newbie to this kind of virtualization, but I have found some fascinating things to do with this project. On Windows 11 WSL I am trying to install everything in order to practice and learn how OpenStack works. I don’t know whether it’s more convenient to install it on a physical machine with its own OS, or whether it works in WSL? That is my first question. I followed the steps from that same documentation. I am trying to manage my own server, to build my own DigitalOcean, but locally, for internal use.

Currently, the host requirements are: “physical or virtual machine running Ubuntu 22.04 LTS”

I don’t believe WSL applies since it uses software emulation and there could be many things that don’t work because of that. We certainly do not test on WSL.


zuccadev: Is a domain name set on this node? What does hostname -f show?

The error seems to be a problem with the creation of the Juju user.
Sunbeam creates an internal Juju user named after the FQDN, and Juju does not accept names ending with characters other than [a-zA-Z0-9].
Can you raise a bug @

Setting a domain name should allow you to proceed further.
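A sketch of that check and fix (the FQDN shown is a placeholder; pick one appropriate for your host):

```shell
# Show the fully qualified hostname; it must end in [a-zA-Z0-9]
hostname -f

# Hypothetical example: set a proper FQDN before re-running bootstrap
sudo hostnamectl set-hostname sunbeam-node.example.com
```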

Hello! Is there any possibility of integrating Sunbeam with a state/configuration manager like Salt or Puppet? Thanks in advance.

Hello! I have this error:

Timed out while waiting for model 'openstack' to be ready
Error: Timed out while waiting for model 'openstack' to be ready

While running this: sunbeam cluster bootstrap --accept-defaults

Someone here also had the same problem:

Can we maybe somehow resolve it ASAP? @pmatulis

Hello there. Could you test --channel 2023.1/edge to see whether the efforts being made to improve install speeds are making a difference?

sudo snap install openstack --channel 2023.1/edge

Bear in mind that you’ll be on software that is in active development and that the chance of breakages is much higher.