Multi-node with MAAS

This tutorial shows how to install a multi-node MicroStack cluster using MAAS as the machine provider. It will deploy an OpenStack 2024.1 (Caracal) cloud.

Some steps provide estimated completion times. They are based on an average internet connection.

Prerequisite knowledge

A MAAS cluster is needed, so familiarity with the MAAS machine provisioning system is a necessity. You should also be acquainted with MAAS concepts such as tags, spaces, and reserved IP ranges, all of which are used later in this tutorial.

Some knowledge of OpenStack networking will be helpful.

Hardware requirements

You will need a total of six machines: one to act as a client host and five to make up the MAAS cluster.

Client

The client machine will act as an administrative host. It does not require significant resources and it need not be dedicated to this task.

All the commands in this tutorial are run on the client.

Important: For environments constrained by a proxy server, the client machine must first be configured accordingly. See section Configure for the proxy at the OS level on the Manage a proxied environment page before proceeding.

MAAS cluster

The MAAS cluster will consist of five machines (the MAAS nodes): one will manage software orchestration, one will manage internal components such as clusterd, and three will host the actual cloud (the cloud nodes). MAAS needs to be set up in advance.

Orchestration node

The requirements for the orchestration node are:

  • physical or virtual machine
  • a dual-core processor
  • a minimum of 4 GiB of free memory
  • 128 GiB of storage available on the root disk
  • one network interface

Sunbeam node (previously ‘infra’ node)

The requirements for the sunbeam node are:

  • physical or virtual machine
  • a dual-core processor
  • a minimum of 4 GiB of free memory
  • 128 GiB of storage available on the root disk
  • one network interface

Cloud nodes

The requirements for each of the three cloud nodes are:

  • physical machine
  • a 16+ core amd64 processor
  • a minimum of 32 GiB of free memory
  • 500 GiB of SSD storage available on the root disk
  • at least one un-partitioned disk of at least 200 GiB in size
  • two network interfaces
    • primary: for access to the OpenStack control plane
    • secondary: for remote access to cloud VMs

Summary of MAAS nodes

In this tutorial, the five MAAS nodes that comprise the MAAS cluster are described in this way:

Machine     FQDN                    Storage device   Purpose
sunbeam00   sunbeam00.example.com   -                orchestration node
sunbeam01   sunbeam01.example.com   -                sunbeam node
sunbeam02   sunbeam02.example.com   /dev/sdb         cloud node
sunbeam03   sunbeam03.example.com   /dev/sdb         cloud node
sunbeam04   sunbeam04.example.com   /dev/sdb         cloud node

Ensure that your MAAS cluster is built before proceeding.

Prepare cloud networking

Some planning in your environment and corresponding configuration in MAAS are needed. This affects the primary and secondary network interfaces on the three cloud nodes.

Primary interface

In this tutorial, a single subnet will be used for the primary network interface on all five MAAS nodes: 10.5.0.0/16.

Ensure you have a subnet connected to the primary interface.

Note: The Network traffic isolation with MAAS page contains background information on the purpose of using multiple subnets.

Secondary interface

The secondary network interface must be set as ‘Unconfigured’ in MAAS and be connected to a subnet that has unused (available) IP addresses. This requirement permits the VMs to be contacted by remote hosts and comprises the “external networking” of the cloud. This interface must be tagged with a network tag neutron:physnet1.

In this tutorial, the following values are used:

External networking parameter   Value
CIDR                            172.16.2.0/24
default gateway                 172.16.2.1
address range                   172.16.2.2 - 172.16.2.254

The number of addresses needed is dependent upon the number of VMs you wish to be remotely contactable (simultaneously). When used in this way, the addresses are known as “floating IP addresses”.

You will be asked at a later step, via an interactive prompt, what addressing to use for external networking.

Configure MAAS

Several aspects of MAAS need to be configured. Perform the steps in the following sections within the MAAS web UI.

Configure MAAS Reserved IP Ranges

Two particular cloud networks need to be assigned their own (labelled) Reserved IP Range with a minimum number of available IP addresses. A label is created by using the Comment field for the range. The label’s name is based upon the chosen deployment name, which for this tutorial is mycloud (created later).

This is what is needed:

  1. a range for network internal (minimum of five addresses) and with label mycloud-internal-api
  2. a range for network public (minimum of ten addresses) and with label mycloud-public-api

In this tutorial, the ranges are defined in this way:

Cloud network   Reserved IP Range label   IP range
internal        mycloud-internal-api      172.16.1.201 - 172.16.1.205
public          mycloud-public-api        172.16.1.206 - 172.16.1.215

Configure your ranges now.
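
If you prefer the MAAS CLI over the web UI, the ranges can be created with commands along these lines. This is a sketch: $PROFILE is a placeholder for your MAAS CLI profile name, and the addresses must match your own subnets:

maas $PROFILE ipranges create type=reserved start_ip=172.16.1.201 end_ip=172.16.1.205 comment=mycloud-internal-api
maas $PROFILE ipranges create type=reserved start_ip=172.16.1.206 end_ip=172.16.1.215 comment=mycloud-public-api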

Create network space

Create the network space that all the MAAS nodes will use.

In this tutorial, a single space is used, called myspace.

The space will be mapped to cloud networks in a later step.

Caution: While MAAS supports _ in space names, sunbeam and juju do not. Avoid using _ in space names.
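
For reference, a CLI sketch of the same step is shown below. In MAAS, a space is attached to VLANs, so after creating the space you move the relevant VLAN into it ($PROFILE, $FABRIC_ID and $VID are placeholders to resolve against your own MAAS):

maas $PROFILE spaces create name=myspace
maas $PROFILE vlan update $FABRIC_ID $VID space=myspace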

Choosing the MAAS resource tag

Sunbeam will look for machines bearing the “resource tag”. This tag is used to identify the machines that will be used for the deployment. It is built from the deployment name: openstack-<deployment name>.

In this tutorial, the resource tag is called openstack-mycloud.
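
If you are working from the CLI, the tag can be created ahead of time with a command like this ($PROFILE is again a placeholder for your MAAS CLI profile):

maas $PROFILE tags create name=openstack-mycloud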

Prepare the client

Duration: 5 minutes

Prepare the client host by installing and configuring software.

Begin by installing the openstack snap:

sudo snap install openstack --channel 2024.1/beta

Caution: It is highly recommended to use the --channel 2024.1/beta switch, which includes all the latest bug fixes and updates ahead of the next stable release, coming in Q4 2024.

MicroStack can generate a script to ensure that the client has all of the required dependencies installed and is configured correctly for use with MicroStack. You can review this script using:

sunbeam prepare-node-script --client

or the script can be executed directly in this way:

sunbeam prepare-node-script --client | bash -x

The script will ensure some software requirements are satisfied on the host. In particular, it will:

  • install orchestration software (i.e. Juju)
  • create any necessary data directories
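
As a quick sanity check after the script has run, you can confirm that Juju is now available on the client (assuming the script installed it as a snap):

juju version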

Prepare the MAAS nodes

In this tutorial, the five MAAS nodes look like this:

Machine     Machine tags                                   Storage device   Storage tag   Network tag
sunbeam00   openstack-mycloud, juju-controller             -                -             -
sunbeam01   openstack-mycloud, sunbeam                     -                -             -
sunbeam02   openstack-mycloud, control, compute, storage   /dev/sdb         ceph          neutron:physnet1
sunbeam03   openstack-mycloud, control, compute, storage   /dev/sdb         ceph          neutron:physnet1
sunbeam04   openstack-mycloud, control, compute, storage   /dev/sdb         ceph          neutron:physnet1

To prepare the MAAS nodes for the deployment, use the above information to perform the following:

  1. assign the machine tags (to the machines)
  2. assign the storage tags (to the storage devices)
  3. assign the network tags (to the secondary interfaces)

The tags must be named as per the above table, but the machine names can be anything you like.
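
For those scripting the node preparation instead of using the web UI, the assignments can be made with MAAS CLI calls roughly like the following. This is a sketch: $SYSTEM_ID, $BLOCK_DEVICE_ID and $INTERFACE_ID must be looked up for each machine (e.g. via maas $PROFILE machines read):

# create and assign a machine tag (repeat for each tag and machine)
maas $PROFILE tags create name=control
maas $PROFILE tag update-nodes control add=$SYSTEM_ID

# tag the un-partitioned disk for Ceph
maas $PROFILE block-device add-tag $SYSTEM_ID $BLOCK_DEVICE_ID tag=ceph

# tag the secondary interface for external networking
maas $PROFILE interface add-tag $SYSTEM_ID $INTERFACE_ID tag=neutron:physnet1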

Add the MAAS deployment

Adding the MAAS deployment informs the orchestration node about the MAAS cluster.

To do this you will need to pass options that describe the deployment. They are:

  • name: an arbitrary name (e.g. mycloud)
  • token: a MAAS API key (e.g. z6sbVdQTuKWPFCFvPF:WkRdtsJnwXu38aRHUz:77SqG9DmaugFRHNT4SFtyGqubmLawNBJ)
  • url: the MAAS URL (e.g. http://10.236.110.5:5240/MAAS)

Add the MAAS deployment now:

sunbeam deployment add maas mycloud <token> <maas url>

The above command will check for the following:

  • working authentication
  • uniqueness of the deployment name
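
Assuming your snap revision provides the deployment list subcommand, you can confirm afterwards that the deployment was registered:

sunbeam deployment list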

Map network spaces to cloud networks

Certain machines need access to certain cloud networks.

In this tutorial, because we’re using a single subnet (and space) for the primary network interface on each cloud node, map the same space (myspace) to each supported cloud network:

sunbeam deployment space map myspace # will use myspace as default for every network

These mappings tell MicroStack where to route certain types of cloud traffic.

Validate the added deployment

MicroStack expects a correctly configured MAAS, which includes adequate networking.

To check whether your environment is ready, use the deployment validate command:

sunbeam deployment validate

Example output:

Checking machines, roles, networks and storage... OK
Checking zone distribution... OK
Checking networking... OK
Report saved to '/home/ubuntu/snap/openstack/common/reports/validate-deployment-mycloud-<...>.yaml'

A report will be generated under $HOME/snap/openstack/common/reports if a failure is detected. A sample failure looks like this:

- diagnostics: A machine root disk needs to be at least 500GB to be a part of an openstack
    deployment.
  machine: sunbeam02
  message: root disk is too small
  name: Root disk check
  passed: warning

Note: A validation warning will lessen the chances of a successful deployment but it will not block an attempted deployment.

Initialise the orchestration and sunbeam layer

Duration: 30 minutes

Set up the orchestration and sunbeam layer using the cluster bootstrap command. This will provision the MAAS nodes that are assigned the juju-controller and sunbeam tags:

sunbeam cluster bootstrap

You will first be prompted whether or not to enable network proxy usage. If ‘Yes’, several sub-questions will be asked.

Use proxy to access external network resources? [y/n] (y):
http_proxy ():
https_proxy ():
no_proxy ():

Note that proxy settings can also be supplied by using a manifest (see Deployment manifest).
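
Once bootstrap completes, you can sanity-check the result by listing the cluster members and their roles (this assumes the cluster list subcommand is available in your snap revision):

sunbeam cluster list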

Deploy the cloud

Duration: 30 minutes
This estimate does not take into account base-level provisioning (operating system install). This can take a while for some systems (e.g. bare metal).

Deploy the cloud using the cluster deploy command. This will provision the remaining three MAAS nodes:

sunbeam cluster deploy
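
Progress can be followed from another terminal with Juju. The model names used here are taken from the community discussion at the end of this page (openstack-machines for the machine layer, openstack for the Kubernetes-based control plane):

juju status -m openstack-machines
juju status -m openstack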

Configure the cloud

Duration: 5 minutes

Configure the deployed cloud using the configure command:

sunbeam configure --openrc demo-openrc

The --openrc option specifies a regular user (non-admin) cloud init file (demo-openrc here).

A series of questions will now be asked interactively. Below is a sample session. The values in square brackets, when present, provide acceptable values. A value in parentheses is the default value. We use the values for “external networking” given earlier:

External network (172.16.2.0/24):
External network's gateway (172.16.2.1):
Populate OpenStack cloud with demo user, default images, flavors etc [y/n] (y):
Username to use for access to OpenStack (demo):
Password to use for access to OpenStack (mt********):
Project network (192.168.0.0/24):
Enable ping and SSH access to instances? [y/n] (y):
External network’s allocation range (172.16.2.2-172.16.2.254):
External network’s type [flat/vlan] (flat):
Writing openrc to demo-openrc ... done

The network range for the initial project defaults to 192.168.0.0/24, as shown in the sample session. This is for OpenStack internal purposes (“private networking”) and should suffice for most clouds.

These questions are explained in more detail on the Interactive configuration prompts page in the reference section.

Launch a VM

Duration: 2 minutes
The first launch will take longer than any subsequent launches due to caching.

Verify the cloud by launching a VM called ‘test’ based on the ‘ubuntu’ image (Ubuntu 22.04 LTS), using the launch command:

sunbeam launch ubuntu --name test

Sample output:

Launching an OpenStack instance ...
Access instance with `ssh -i /home/ubuntu/.config/openstack/sunbeam ubuntu@172.16.2.200`

Connect to the VM over SSH using the provided command.
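
If you also have an OpenStack command-line client available on the client host (it is not installed by this tutorial), you can inspect the VM by sourcing the cloud init file generated earlier:

source demo-openrc
openstack server list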

Tear down

Tear down the cloud by running:

sunbeam cluster destroy

Sample output:

This will destroy the deployment. Are you sure? [y/n]: y
Deployment destroyed.

Related how-tos

Now that OpenStack is set up, be sure to check out the related how-to guides in this documentation.


@gboutry / @pmatulis under the MAAS cluster section, the requirements for both the Orchestration node and Cloud nodes should say “machine commissioned with Ubuntu 22.04 LTS” rather than “machine running Ubuntu 22.04 LTS” as Sunbeam expects those machines to be ready to be acquired in MAAS, but not running yet. I’ve just faced that on my own and thought that it’s worth improving. You can also grant me edit rights to this post so that I can fix it on my own.

@tkurek I simply removed the OS requirement from both sections

@gboutry Shouldn’t the Network space section talk about spaces in general? It currently says to create one space, as that is what the tutorial uses, but it should mention that you can have several different spaces.
Also, what does “Create the network space that all the MAAS nodes will use.” mean? There is no command or anything.

This tutorial is a simple deployment, and teaching how to actually manage spaces on the MAAS side was deemed out of scope for this document.

To provide more information about networks and spaces, this document’s Network space section was created.

@gboutry Really THANKS!!! This worked perfectly.
Just one of the million questions sourced from this deployment: what’s the best way to add a new additional node (controller, storage, compute)? Can I leverage MAAS to add a new node? Or do I have to install a physical machine and then add the node to the cluster? Can I do it with sunbeam + juju? Thanks and have a nice day

@giancarlo-birello

For nodes of type controller, storage, compute:
The intended way to add new nodes in Sunbeam + MAAS is to have the physical nodes added to MAAS, with the right tags. They should then show up in sunbeam deployment machine list.

You can then run the validate cli sunbeam deployment machine validate <machine>.

(validate whole deployment with sunbeam deployment validate)

If everything looks ok, just re-run sunbeam cluster deploy, and the new nodes should be picked up automatically!

You’ll also need to re-run sunbeam configure to make sure the hypervisors are correctly configured.
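
Pulling those steps together, the sequence for growing the cluster looks roughly like this (a recap of the commands above, not an exact transcript):

sunbeam deployment machine list
sunbeam deployment machine validate <machine>
sunbeam deployment validate
sunbeam cluster deploy
sunbeam configure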


@gboutry I followed the steps to add a 4th node and all was good. Just one doubt:
juju status -m openstack-machines returns all app/units x 4
App                   Version  Status  Scale  Charm                 Channel        Rev  Exposed  Message
microceph                      active      4  microceph             reef/edge       69  no
microk8s                       active      4  microk8s              legacy/stable  121  no
openstack-hypervisor           active      4  openstack-hypervisor  2024.1/edge    200  no
sunbeam-machine                active      4  sunbeam-machine       2024.1/edge     33  no

Unit          Workload  Agent  Machine  Public address  Ports  Message
microceph/0*  active    idle   3        10.20.100.58
microceph/1   active    idle   4        10.20.100.62
microceph/2   active    idle   5        10.20.100.63
microceph/3   active    idle   7        10.20.100.54



while juju status -m openstack returns only units x3
Unit                         Workload  Agent  Address      Ports  Message
certificate-authority/0*     active    idle   10.1.38.194
cinder-ceph-mysql-router/0   active    idle   10.1.38.217
cinder-ceph-mysql-router/1   active    idle   10.1.26.23
cinder-ceph-mysql-router/2*  active    idle   10.1.243.93
cinder-ceph/0                active    idle   10.1.38.219
cinder-ceph/1                active    idle   10.1.243.95
cinder-ceph/2*               active    idle   10.1.26.25



Do I need another step? Is this right?

Thanks!!!

I’ll try to answer myself: do I need sunbeam cluster resize to align the cluster to the new topology (4 nodes)?

No, this is not needed; you do indeed have 4 machines. Since the control plane runs on k8s, the unit count does not need to match the machine count.

There are different scaling strategies applied depending on the number of nodes.

At deploy time (the first one), the only difference between 3 and 4 control nodes is how k8s pods are spread across the cluster.

Deploying 3 control nodes at first, then adding 1, won’t bring much benefit without manually rebalancing the stateful set.

We plan to automate this re-balancing, but we need work to happen in some of our dependencies for that.

Let me add that it would have been perfectly valid for the 4th node not to be a control node (having just the storage and compute tags, for example).

And at this point in time, it’s not possible to remove a role from a node.


Thanks! Let me try to summarise to check if I understood:

  • the number of control roles does not need to match the machine count
  • the control role lives on the control plane, which runs on k8s
  • the first deploy spreads k8s pods across the machines; that is, if I have 5 machines, k8s pods will be spread onto all 5 machines, obviously deploying all 3 roles (control, storage and compute)
  • at this point, adding a machine after the first deploy doesn’t extend k8s pods onto the new machine even if it has the control role, so I have 2 options:
    A) add a machine after the first deploy without the control role, i.e. storage and compute only
    B) manually re-balance k8s pods to include the new machine as well

Can I try to remove the 4th machine and re-add it with only the storage and compute tags?
Just for my information, how can I rebalance k8s pods?

Really thanks for your time and attention.
Have a nice day

I started from a fresh install with 3 nodes and then added a 4th with only compute and storage roles and it worked like a charm!! Thanks again