Single-node guide

Hi @zachthill, this is a known issue with the mysql-operator. When it queries Kubernetes for permissions too early, it fails to get them (most often with a timeout error), but this should go away after some time, once it realizes it does have permission.

Having only HDDs will make the bootstrap really slow and prone to timeouts.

Moreover, how did you run bootstrap? I can see that you’re getting one MySQL per service. That’s the default behavior when the host has more than 32GB of RAM.
But if you don’t plan to add other nodes, you could pass the option --database single on a new installation to disable MySQL HA. (This will reduce the number of MySQL applications from ~7 to 1.)
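
As a sketch (assuming the option is accepted at bootstrap time, as described above; check sunbeam cluster bootstrap -h on your version):

    # fresh single-node install, one MySQL instance instead of one per service
    $ sunbeam cluster bootstrap --database single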

Just for exploration, you could use an SSD for the root disk only (if you have no special configuration for storing /var/snap and /snap), since that is where most of the writing happens during installation. This is not advised for production use, however.

@gboutry

Thank you for the prompt reply. I will try again later today with your suggestions. I am just a lone DevOps engineer trying to run MicroStack on my homelab so I can create my own code ecosystem with Terraform and other automation tools. The reason I chose OpenStack instead of Proxmox or vSphere is that I want to capitalize on the strengths of the cloud at as minimal a cost as possible.

My thought for a workaround was to purchase two SSDs that I would turn into a mirror and set as the root volume, giving me some fault tolerance. After that I’ll keep my HDDs for block storage and the like.

Trying to set up a single-node MicroStack and running into a problem. When I get to the step to run sunbeam configure --openrc demo-openrc, I get an error.

This is the error.

Deprecation warning https://docs.pyroute2.org/ipdb_toc.html
To remove this DeprecationWarning exception, start IPDB(deprecation_warning=False, ...)
An unexpected error has occurred. Please run 'sunbeam inspect' to generate an inspection report.
Error: Node does not exist in the sunbeam cluster

This appears right after specifying the network interface for external traffic.

The next step in the tutorial is not working; it looks like this is preventing the single-node cluster from being created. Any help would be appreciated.

Hello.
I am looking for some help.

I have been trying to set up a single-node MicroStack, following the guide.
My setup:
one 16-core chip, 64GB RAM, 2 disks (both SSD, one kept aside with a view to MicroCeph later), 2 NICs (Ethernet and Wi-Fi).
I have tried with Ubuntu Server minimal and full installs, and recently with a minimal desktop install.
I have tried with OpenStack channels 2023.1 and 2023.2.

Where I end up stuck:
The last step and the last question:
“Free network interface that will be configured for external traffic”
With both server and desktop installs I only ever get the above text, with no mention of possible NICs, such as in the example: [eno1/eno2] (eno1):
It makes me think something failed in finding them; perhaps some assumption does not hold?
If useful: my external network is the Ethernet and the backplane is the Wi-Fi. The backplane can be reached and I can view the web dashboard.
I have tried entering the NIC I wish to use, but it asks: are you sure? This NIC is set up.

What can I do to narrow down the issue, and at last play with MicroStack?
Or is it because I have used Wi-Fi?

Improvements for the single node guide.

The guide should start by assuming a fresh install of the latest Ubuntu LTS (or whatever you support).
That means the initial setup script also does,
for a minimal server:
apt install rsyslog

For the full server install and other flavours of Ubuntu:
edit /etc/hosts so that there is a valid entry (ask a question)
chmod 640 ~/.ssh/ (probably not your bug, but in the way)

Then I would like to see a list of checks to make before going further. The last two suggestions above could be part of this also.

Example check:
The NICs need to be set up first.
I need to give Wi-Fi priority and Ethernet none, setting up the metrics and default gateways. I am thinking that the NICs will always need a bit of setting up beforehand.

Hope that helps

Hi @llerrac, thanks for providing feedback on the tutorial. It’s not entirely clear, however, what specific problems you encountered. What platform/environment were you using, and what errors surfaced?

Distro I used:
Ubuntu Server 22.04.4 LTS, minimal install and normal install.
Ubuntu 22.04.4 LTS, minimal install.
Both versions were downloaded last week. The desktop installs were then updated to whatever was available at the time.
I am not using Ubuntu Pro (more faffing needed).

Should I be using a cloud version?

For the issue which has me stuck, I will re-install and step through again; I did not copy the exact text of the second question. The operation appears to trash the external network.

My question, however, was: where can I look for more information? Or, perhaps better, what can I do to log more?

Then perhaps I can supply a better question/answer here.

For the second post, these are fixes to issues I hit with every install (I have done at least 5 minimal server, one normal server, and 5 desktop installs). I can repeat them every time.
With the minimal install, syslog is required but is not installed on a fresh Ubuntu Server minimal install. For the other issues I can give a basic guide to how I fix my setup.

I did deviate from the guide to start with.
The channel for OpenStack I used on the minimal server was 2023.2, and once 2023.1.
After that I stuck to the guide, only channel 2023.1.
Thanks for the reply, Alex

My previous post about improvements was not clear enough. Didn’t have enough time.

So, issues I hit every time:
With a fresh install of Ubuntu 22.04 LTS minimal, with no updates, there is no syslog to be found when running the steps in the script.
The script dies when running $ sunbeam prepare-node-script | bash -x && newgrp snap_daemon;
this fixes the issue:
$ sudo apt install rsyslog

While running apt, install an easy-to-use editor, then:
sudo "your text editor of choice" /etc/hosts
Change the entry for the server to include a canonical hostname containing a period, and then give the plain hostname as an alias.
so
entries go from
127.0.0.1 localhost
127.0.1.1 blackbox-l

to:
127.0.0.1 localhost
127.0.1.1 blackbox-l.home blackbox-l
Without this, the services being used do not agree on the hostname and the script stops.
Perhaps mine is a cheeky solution that is not good enough.
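
A quick sanity check after editing /etc/hosts (my own habit, not from the guide; names as in the example above):

    $ hostname -f                    # should print the dotted name, e.g. blackbox-l.home
    $ getent hosts blackbox-l.home   # should resolve via /etc/hosts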

Then I begin to step through the guide.
after running:
sunbeam prepare-node-script | bash -x && newgrp snap_daemon

I check:
$ ls -al ~/.ssh
There will be an entry like:
-rw-rw-r-- 1 snap_daemon … authorized_keys
For the next step to work, this command is needed:
chmod 640 ~/.ssh/authorized_keys

Then I can run
$ sunbeam cluster bootstrap

The NICs comment:
I have two NICs; it’s not your issue, but a comment noting how they should be set up could be useful.
I want Wi-Fi for internet/backplane.
It would be useful to know if the NICs need to be up or down before/during the install, or need to be set up in a specific manner.

For completeness, if anyone else wants to do this:
I want Wi-Fi to carry the default routing, so I run (substituting my own connection names):
nmcli connection modify <wifi-connection> ipv4.route-metric 50 ipv6.route-metric 50
nmcli connection modify <ethernet-connection> ipv4.never-default yes ipv6.never-default yes

(Ethernet has its default routing turned off; the fresh-install default metric for Ethernet is 100, so by setting the Wi-Fi metric below Ethernet’s I give it priority.)
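
To verify the result, the standard NetworkManager/iproute2 tooling is enough (nothing Sunbeam-specific):

    $ nmcli connection show     # lists the connection names to substitute above
    $ ip route show default     # the Wi-Fi route should now carry the lower metric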

I hope this is useful
alex

Hi,
As others have mentioned in their comments, I have got stuck at similar points. My explanation may be lacking, but I would appreciate your support.

The problem I am facing:

  • After creating an instance using the command as per the procedure, its status is RUNNING, but I cannot make an SSH connection.
  • I cannot create a volume from an image.

The log displayed on the Horizon GUI screen is as follows.

Error: Failed to perform requested operation on instance "testvm1", the instance has an error status: Please try again later [Error: Build of instance a5e17acd-9970-45ea-b656-b828141a7716 aborted: Volume e0450ae3-7c52-43bf-9152-9bef868c34ca did not finish being created even after we waited 0 seconds or 1 attempts. And its status is error.].

My background:
I am in the process of validating OpenStack functionality on an instance on AWS; AWS allows neither tagged VLANs nor nested VMs. To overcome this, I thought it would be possible to build a complete system on one metal server. I tried to build it with OpenStack-Ansible, but it didn’t work, so I tried this procedure (trusting Canonical).

My environment information:

  • Cloud service: AWS EC2
  • Instance type: c5n.metal (cheapest! 72 cores, 192GiB); note that I can use a bare-metal instance
  • Storage: EBS 100GB (as root device), gp2 -> SSD
  • Network: 2 network interfaces attached on the same subnet
  • Subnet CIDR: 10.0.128.0/20
  • Interfaces (device name: IP address):
    • enp126s0: 10.0.128.24
    • enp127s0: 10.0.128.176
  • OS: Ubuntu 22.04

My Procedure:

$ sudo su
# apt update && apt dist-upgrade -y
# apt install -y ubuntu-desktop xrdp
 ***To launch a browser on ubuntu-desktop to access Horizon.***
# passwd ubuntu
xxxx
xxxx
# reboot
$ sudo snap install openstack --channel 2023.2
$ sunbeam prepare-node-script | bash -x && newgrp snap_daemon
$ sunbeam cluster bootstrap
10.0.128.0/20
10.0.128.201-10.0.128.220
$ sudo microk8s.kubectl get po -A
$
$ sunbeam configure --openrc demo-openrc
Local or remote access to VMs [local/remote] (local) : remote
CIDR of network to use for external networking (10.20.20.0/24): 10.0.128.0/20
IP address of default gateway for external network (10.0.128.1): 10.0.128.1
Start of IP allocation range for external network (10.0.128.2): 10.0.128.2
End of IP allocation range for external network (10.0.143.254): 10.0.143.254
Network range to use for project network [flat/vlan] (flat): flat
Populate OpenStack cloud with demo user, default images, flavors etc [y/n] (y): y
Username to use for access to OpenStack (demo): demo
Password to use  for access to OpenStack (v8******): 
Network range to use for project network (192.168.122.0/24): 
List of nameservers guests should use for DNS resolution (10.0.0.2):
Enable ping and SSH access to instances? [y/n] (y):
Writing openrc to demo-openrc ... done
Free network interface that will be configured for external traffic: enp127s0
WARNING: Interface enp127s0 is configured. Any configuration will be lost, are you sure you want to continue? [y/n]: y
Deprecation warning https://docs.pyroute2.org/ipdb_toc.html
To remove this DeprecationWarning exception, start IPDB(deprecation_warning=False, ...)
$
$ sunbeam launch ubuntu --name test
Launching an OpenStack instance ...
Access instance with `ssh -i /home/ubuntu/snap/openstack/324/sunbeam ubuntu@10.0.0.274`
$
$ ssh -i /home/ubuntu/snap/openstack/324/sunbeam ubuntu@10.0.0.274
ssh: connect to host 10.0.128.97 port 22: No route to host
$ . demo-openrc
$ openstack server list
:
| xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | test | ACTIVE | demo-network=10.0.0.274, 192.168.122.227 | ubuntu | m1.tiny |
:
$ openstack server ssh xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
sh: 1: Permission denied
$ sunbeam dashboard-url
http://10.0.128.204:80/openstack-horizon
$
After this I connected to Ubuntu via RDP, launched a browser on the desktop, and accessed the Horizon URL.
Both instance creation and volume creation error out there as well.

Questions:

  • I need to know the correct external network settings for an SSH connection.
  • Do I need to configure Ceph to enable volume creation? If there is a procedure, please share it.

Hello, when I run sunbeam configure --accept-defaults --openrc demo-openrc, Terraform fails with these errors:

Plan: 19 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + OS_PASSWORD            = (sensitive value)
  + OS_PROJECT_DOMAIN_NAME = "users"
  + OS_PROJECT_NAME        = (sensitive value)
  + OS_USERNAME            = (sensitive value)
  + OS_USER_DOMAIN_NAME    = "users"
openstack_compute_flavor_v2.m1_large: Creating...
openstack_compute_flavor_v2.m1_medium: Creating...
openstack_networking_network_v2.external_network: Creating...
openstack_compute_flavor_v2.m1_tiny: Creating...
openstack_identity_project_v3.users_domain: Creating...
openstack_compute_flavor_v2.m1_small: Creating...
openstack_images_image_v2.ubuntu: Creating...
openstack_identity_project_v3.users_domain: Creation complete after 3s [id=8fbf9093327143bd99fe82babc4a9876]
openstack_identity_project_v3.user_project: Creating...
openstack_compute_flavor_v2.m1_tiny: Creation complete after 4s [id=90fcf667-151a-4437-b3da-2cb8d9da5b99]
openstack_compute_flavor_v2.m1_medium: Creation complete after 4s [id=fb8091fc-ae94-41bf-b086-2dd8617eb16a]
openstack_identity_project_v3.user_project: Creation complete after 2s [id=88b4d2d392c8460c89217c8c2557544a]
openstack_identity_user_v3.user: Creating...
data.openstack_networking_secgroup_v2.secgroup_default: Reading...
openstack_compute_quotaset_v2.compute_quota: Creating...
openstack_networking_quota_v2.network_quota: Creating...
openstack_networking_network_v2.user_network: Creating...
openstack_compute_flavor_v2.m1_large: Creation complete after 5s [id=619d9dc5-d3f4-43ae-85bf-f215c28599be]
openstack_compute_flavor_v2.m1_small: Creation complete after 6s [id=2af899fe-d1b9-48c1-a0e1-a39c1ea622ea]
openstack_identity_user_v3.user: Creation complete after 1s [id=2e7ed5aab53c4fe8beb6984f54472b7e]
openstack_identity_role_assignment_v3.role_assignment_1: Creating...
openstack_identity_role_assignment_v3.role_assignment_1: Creation complete after 2s [id=/88b4d2d392c8460c89217c8c2557544a//2e7ed5aab53c4fe8beb6984f54472b7e/4608edd2a8e04dbcad1d33c2673238fa]
openstack_networking_quota_v2.network_quota: Creation complete after 3s [id=88b4d2d392c8460c89217c8c2557544a/]
data.openstack_networking_secgroup_v2.secgroup_default: Read complete after 3s [id=f12013cc-15cd-4618-9d79-b4c6fd6f40aa]
openstack_networking_secgroup_rule_v2.secgroup_rule_ping_ingress[0]: Creating...
openstack_networking_secgroup_rule_v2.secgroup_rule_ssh_ingress[0]: Creating...
openstack_compute_quotaset_v2.compute_quota: Creation complete after 3s [id=88b4d2d392c8460c89217c8c2557544a/]
openstack_networking_secgroup_rule_v2.secgroup_rule_ssh_ingress[0]: Creation complete after 1s [id=c209dffd-3528-4e6c-9c1c-94c25e7682bf]
openstack_networking_network_v2.external_network: Still creating... [10s elapsed]
openstack_images_image_v2.ubuntu: Still creating... [10s elapsed]
openstack_networking_secgroup_rule_v2.secgroup_rule_ping_ingress[0]: Creation complete after 2s [id=fc3bc936-4e23-408b-9d41-9330aade938e]
openstack_networking_network_v2.external_network: Creation complete after 11s [id=d2a9f599-f9d0-42c0-8672-e531b293a054]
openstack_networking_subnet_v2.external_subnet: Creating...
openstack_networking_network_v2.user_network: Still creating... [10s elapsed]
openstack_networking_network_v2.user_network: Creation complete after 11s [id=6de2a3b8-7514-4cd6-b535-e7d35d7361cf]
openstack_networking_subnet_v2.user_subnet: Creating...
openstack_networking_subnet_v2.external_subnet: Creation complete after 6s [id=c455cbaf-8bae-4935-8273-6ed3545e2c1a]
openstack_networking_router_v2.user_router: Creating...
openstack_images_image_v2.ubuntu: Still creating... [20s elapsed]
openstack_networking_subnet_v2.user_subnet: Creation complete after 8s [id=b1bd1ba2-4469-4485-85bb-ce880b9bae3f]
openstack_networking_router_v2.user_router: Creation complete after 10s [id=b35e843c-6c2e-4009-803d-117cba64e934]
openstack_networking_router_interface_v2.user_router_interface: Creating...
openstack_images_image_v2.ubuntu: Still creating... [30s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [10s elapsed]
openstack_images_image_v2.ubuntu: Still creating... [40s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [20s elapsed]
openstack_images_image_v2.ubuntu: Still creating... [50s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [30s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [40s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [50s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [1m0s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [1m10s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [1m20s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [1m30s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [1m40s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [1m50s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [2m0s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [2m10s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [2m20s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [2m30s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [2m40s elapsed]
openstack_networking_router_interface_v2.user_router_interface: Still creating... [2m50s elapsed]


Error: Error while uploading file "/home/openstackcct/snap/openstack/324/.terraform/image_cache/c276b9b0caf2cb0105c5b96245b44372.img": Unable to re-authenticate: Expected HTTP response code [204] when accessing [PUT http://10.20.21.10:80/openstack-glance/v2/images/ca12345b-590a-4ac4-b048-a315b76e7e41/file], but got 401 instead
{"message": "This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.<br /><br />\n\n\n", "code": "401 Unauthorized", "title": "Unauthorized"}: Internal Server Error

  with openstack_images_image_v2.ubuntu,
  on main.tf line 46, in resource "openstack_images_image_v2" "ubuntu":
  46: resource "openstack_images_image_v2" "ubuntu" {


Error: Error waiting for openstack_networking_router_interface_v2 b35e843c-6c2e-4009-803d-117cba64e934 to become available: Internal Server Error

  with openstack_networking_router_interface_v2.user_router_interface,
  on main.tf line 141, in resource "openstack_networking_router_interface_v2" "user_router_interface":
 141: resource "openstack_networking_router_interface_v2" "user_router_interface" {


⠹ Creating demonstration user, project and networking ... Error configuring cloud
Traceback (most recent call last):
  File "/snap/openstack/324/lib/python3.10/site-packages/sunbeam/commands/terraform.py", line 200, in apply
    process = subprocess.run(
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/snap/openstack/324/bin/terraform', 'apply', '-auto-approve', '-no-color']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/snap/openstack/324/lib/python3.10/site-packages/sunbeam/commands/configure.py", line 585, in run
    self.tfhelper.apply()
  File "/snap/openstack/324/lib/python3.10/site-packages/sunbeam/commands/terraform.py", line 214, in apply
    raise TerraformException(str(e))
sunbeam.commands.terraform.TerraformException: Command '['/snap/openstack/324/bin/terraform', 'apply', '-auto-approve', '-no-color']' returned non-zero exit status 1.
Error: Command '['/snap/openstack/324/bin/terraform', 'apply', '-auto-approve', '-no-color']' returned non-zero exit status 1.

It seems like it is not able to authenticate. Why?
The only thing I changed in the procedure was modifying /etc/hosts by replacing 127.0.1.1 with the actual IP address of my VM. Thank you

Hello everyone, I just wanted to let you know that I have been working through this issue.

I was not aware of the cause at first, but I have taken the following actions and all the problems have been resolved.

  • On AWS, the network interfaces need to be attached to completely separate subnets. I was attaching both network interfaces to the same subnet, which was wrong (see the CLI sketch after the transcript below).

  • I was able to successfully bootstrap Ceph by granting --role storage at bootstrap and specifying /dev/nvme1n1, the second storage device name as reported by fdisk -l.

    $ sunbeam cluster bootstrap --role control --role compute --role storage
    Management network shared by hosts (CIDRs, separated by comma) (10.0.128.0/20):
    MetalLB address allocation range (supports multiple ranges, comma separated) (10.0.128.201-10.0.128.220): 
    Disks to attach to MicroCeph (/dev/sdb): /dev/nvme1n1
    Node has been bootstrapped with roles: control, compute, storage
    $
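
For anyone hitting the same subnet problem, a rough AWS CLI sketch of giving the second interface its own subnet (all IDs below are hypothetical placeholders; pick a CIDR that fits your VPC):

    $ aws ec2 create-subnet --vpc-id vpc-0123 --cidr-block 10.0.144.0/20   # new, separate subnet
    $ aws ec2 create-network-interface --subnet-id subnet-4567             # ENI in the new subnet
    $ aws ec2 attach-network-interface --network-interface-id eni-89ab --instance-id i-cdef --device-index 1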
    

Continuing, I am faced with the following problem.

Event:

  • After booting 2 instances (tiny ubuntu) and logging in via SSH, I run “nslookup test-vm”, but no DNS resolution is performed.

My understanding:

  • dnsmasq should be attached to demo-subnet but is not running.

What I tried:

  • Set a nameserver on the subnet.

    $ openstack subnet set --dns-nameserver 192.168.122.2 demo-subnet
    
  • Set a dns-name on the instance port. (I ran it, but the setting was not reflected.)

    $ openstack port set --dns-name "test-vm" 888e44el-c6712-3342-9991-efa37d990cc01
    

The situation remains the same: DNS resolution is not possible.

Does Neutron need some additional options for DNS resolution? I am also wondering about the “Designate” keyword.

I appreciate your kind support. Thank you.

Hi @tatsuromakita, thank you for raising this issue regarding instance DNS resolution.

Currently, MicroStack does not enable Neutron’s internal DNS settings, which would be needed to provide this for instances. I have opened bug #2062053 accordingly. We will work to incorporate this capability soon.
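
For context, in a hand-rolled Neutron deployment this internal DNS integration is usually enabled with the standard upstream settings below; whether and how Sunbeam’s charms expose them is exactly what the bug above tracks, so treat this as background rather than a supported procedure:

    # /etc/neutron/neutron.conf
    [DEFAULT]
    dns_domain = example.org.

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2]
    extension_drivers = port_security,dns_domain_ports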


Good afternoon, I am sorry to bother you with something that may be very basic for you, but I am stuck at this point.
I have deployed OpenStack following the instructions you show here, and the demo environment works without problems.
But here comes my question: I launched all the commands with “--accept-defaults”, so when I want to enter the environment as administrator to test configuration, creation, etc., I don’t know where the admin credentials were generated or where I can see that username and password.

Sorry if this is too basic for you, but I am trying to understand the OpenStack environment and make it work my way.

Thank you very much for your time. Jesus



I’ll answer myself, in case someone else encounters the same problem:
sunbeam openrc > admin_openrc
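
Once that file exists, standard OpenStack CLI usage applies:

    $ source admin_openrc      # exports the OS_* variables for the admin user
    $ openstack project list   # any admin-scoped command should now work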

Thanks


Hi, I am new to MicroStack and I was trying to deploy it on a single machine (bare metal) following the instructions here.
The machine is running Ubuntu 24.04.1 LTS (GNU/Linux 6.8.0-47-generic x86_64).
The first step (snap install openstack …) was successful, but the second, “Prepare a machine”, failed with the following error:

# sunbeam prepare-node-script | bash -x && newgrp snap_daemon
++ lsb_release -sc
+ '[' noble '!=' jammy ']'
+ echo 'ERROR: Sunbeam deploy only supported on jammy'
ERROR: Sunbeam deploy only supported on jammy
+ exit 1

Is it possible to install MicroStack on Ubuntu 24.04.1 LTS, or do I have to downgrade to Ubuntu 22.04 (Jammy)?
I tried several snap channels (--channel 2024.1/beta, --channel 2024.1/edge, and stable) without success. Thank you.
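
One workaround I have not verified: run the whole deployment inside a 22.04 (Jammy) VM, e.g. with Multipass; note that nested virtualization support will matter for the compute role:

    $ multipass launch 22.04 --name sunbeam --cpus 8 --memory 32G --disk 200G
    $ multipass shell sunbeam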

I get to this in the sunbeam cluster bootstrap command:

⠸ Adding K8S unit to machine …

On another terminal screen, I am running this command:

watch -n 2 -c juju status --color

And this is the error I am seeing:

ERROR unable to connect to API: tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-ca")

Here is where my setup is in this process:

ubuntu@ringleader:~$ juju status

Model Controller Cloud/Region Version SLA Timestamp
controller sunbeam-controller moral-swan/default 3.6.1 unsupported 10:10:47-07:00

App Version Status Scale Charm Channel Rev Exposed Message
controller active 1 juju-controller 3.6/stable 116 no
k8s waiting 1 k8s 1.31/candidate 141 no Waiting to bootstrap k8s snap
sunbeam-machine active 1 sunbeam-machine 2024.1/beta 49 no

Unit Workload Agent Machine Public address Ports Message
controller/0* active idle 0 172.17.0.1
k8s/0* waiting idle 0 172.17.0.1 Waiting to bootstrap k8s snap
sunbeam-machine/0* active idle 0 172.17.0.1

Machine State Address Inst id Base AZ Message
0 started 172.17.0.1 manual: ubuntu@24.04 Manually provisioned machine

If I rerun sunbeam -v cluster bootstrap, here is where it errors:

⠙ Adding K8S unit to machine … DEBUG Skipping step Add K8S unit common.py:262
DEBUG Starting step 'Store K8S kubeconfig' common.py:252
DEBUG [get] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/K8SKubeConfig, args={'allow_redirects': service.py:148
True}
DEBUG http://localhost:None "GET /1.0/config/K8SKubeConfig HTTP/1.1" 404 125 connectionpool.py:474
DEBUG Response(<Response [404]>) = {"type":"error","status":"","status_code":0,"operation":"","error_code":404,"error":"ConfigItem not service.py:159
found","metadata":null}

DEBUG Running step Store K8S kubeconfig common.py:268

⠋ Storing K8S configuration in sunbeam database … DEBUG Connector: closing controller connection connector.py:131
⠙ Storing K8S configuration in sunbeam database … DEBUG k8s/0 k8s.py:435
⠹ Storing K8S configuration in sunbeam database … DEBUG Connector: closing controller connection connector.py:131
⠼ Storing K8S configuration in sunbeam database … [10:44:03] DEBUG Connector: closing controller connection connector.py:131
⠧ Storing K8S configuration in sunbeam database … [10:44:04] DEBUG Failed to store k8s config k8s.py:452
Traceback (most recent call last):
  File "/snap/openstack/637/lib/python3.12/site-packages/sunbeam/steps/k8s.py", line 436, in run
    result = run_sync(
             ^^^^^^^^^
  File "/snap/openstack/637/lib/python3.12/site-packages/sunbeam/core/juju.py", line 73, in run_sync
    result = asyncio.get_event_loop().run_until_complete(coro)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/snap/openstack/637/lib/python3.12/site-packages/sunbeam/core/juju.py", line 670, in run_action
    raise ActionFailedException(output)
sunbeam.core.juju.ActionFailedException: {'return-code': 0}
DEBUG Finished running step 'Store K8S kubeconfig'. Result: ResultType.FAILED common.py:271
Error: {'return-code': 0}

Hello, --bootstrap is no longer working:

sunbeam prepare-node-script --bootstrap
Usage: sunbeam prepare-node-script [OPTIONS]
Try 'sunbeam prepare-node-script -h' for help.

Error: No such option: --bootstrap

Hi, I am having some difficulties… I am installing this setup on a laptop: 32 GB RAM, more than enough cores, and a 1 TB SSD. How do I choose the management network and the external one? My laptop is on 192.168.1.0/24. Do I have to create a subnet for the management network and use 192.168.1.0/24 as external? Should the management network be accessible by other devices on my LAN? Sorry for the rookie questions, I am still getting used to things. Thanks in advance.

The guide was great! I was able to set up a basic single node, but since I don’t need a demo user, images, etc., I did not go through that step. What are the default admin username, password, and domain? These don’t appear to be documented anywhere in this guide, the quick-start guide, or by OpenStack. If we do not create the demo user, how do we log in to the dashboard?
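
As noted earlier in this thread, the admin credentials can be dumped with sunbeam openrc; the OS_USERNAME, OS_PASSWORD, and OS_USER_DOMAIN_NAME values in the resulting file should be what the dashboard login form expects:

    $ sunbeam openrc > admin_openrc
    $ grep -E 'OS_(USERNAME|PASSWORD|USER_DOMAIN_NAME)' admin_openrc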

Attempting to install it on some bare-metal servers, specifically a Dell R730 with Ubuntu installed. I get stuck at this command, though:

sunbeam prepare-node-script --bootstrap | bash -x && newgrp snap_daemon

I’m able to connect to the IP to get the script, but it fails when running, with the following error:

subprocess.CalledProcessError: Command ['/snap/openstack/669/juju/bin/juju', 'bootstrap', '--config', 'controller-service-type=loadbalancer', '--model-default=secret-backend=internal', 'rapid-gnat-k8s', 'sunbeam-controller'] returned non-zero exit status 1. 

Looking further into it, it seems to fail when attempting to bootstrap the K8s pod, as it also says this:

ERROR failed to bootstrap model: creating controller stack: creating statefulset for controller: timed out waiting for controller pod: pending - 
WARNING destroy k8s model timeout