CaaS

This plugin deploys Magnum, the OpenStack CaaS service.

Note: This feature is currently only supported in channel 2023.2 of the openstack snap.

Enabling CaaS

To enable CaaS, run the following command:

sunbeam enable caas

Use the OpenStack CLI to manage container infrastructures. See the upstream Magnum documentation for details.

Note: The Secrets and Orchestration plugins are dependencies of the CaaS plugin. Make sure to enable them.
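The note above can be sketched as a sequence of commands. This is a minimal sketch using plugin names that appear elsewhere in this thread; depending on your channel, the Secrets plugin may additionally require Vault to be enabled first:

```shell
# Enable the CaaS dependencies first, then CaaS itself
sunbeam enable secrets        # Barbican
sunbeam enable orchestration  # Heat
sunbeam enable caas           # Magnum
```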

When using the CaaS plugin in conjunction with the Load Balancer plugin, you are subject to the same limitations as the latter plugin. In particular, the OVN provider only supports the SOURCE_IP_PORT load balancing algorithm.

Configuring CaaS

To configure the cloud for CaaS usage, run the following command:

sunbeam configure caas

Disabling CaaS

To disable CaaS, run the following command:

sunbeam disable caas

Usage

Create a cluster template using the following command:

openstack coe cluster template \
   create k8s-cluster-template-ovn \
   --image fedora-coreos-38 \
   --keypair sunbeam \
   --external-network external-network \
   --flavor m1.small \
   --docker-volume-size 15 \
   --master-lb-enabled \
   --labels octavia_provider=ovn \
   --labels octavia_lb_algorithm=SOURCE_IP_PORT \
   --network-driver flannel \
   --coe kubernetes

Sample output:

Request to create cluster template k8s-cluster-template-ovn accepted
+-----------------------+-----------------------------------------------------------------------+
| Field                 | Value                                                                 |
+-----------------------+-----------------------------------------------------------------------+
| insecure_registry     | -                                                                     |
| labels                | {'octavia_provider': 'ovn', 'octavia_lb_algorithm': 'SOURCE_IP_PORT'} |
| updated_at            | -                                                                     |
| floating_ip_enabled   | True                                                                  |
| fixed_subnet          | -                                                                     |
| master_flavor_id      | -                                                                     |
| uuid                  | 4d675c2b-c4e6-4877-a949-987195125fbc                                  |
| no_proxy              | -                                                                     |
| https_proxy           | -                                                                     |
| tls_disabled          | False                                                                 |
| keypair_id            | sunbeam                                                               |
| public                | False                                                                 |
| http_proxy            | -                                                                     |
| docker_volume_size    | 15                                                                    |
| server_type           | vm                                                                    |
| external_network_id   | external-network                                                      |
| cluster_distro        | fedora-coreos                                                         |
| image_id              | fedora-coreos-38                                                      |
| volume_driver         | -                                                                     |
| registry_enabled      | False                                                                 |
| docker_storage_driver | overlay2                                                              |
| apiserver_port        | -                                                                     |
| name                  | k8s-cluster-template-ovn                                              |
| created_at            | 2023-10-16T09:45:24.751362+00:00                                      |
| network_driver        | flannel                                                               |
| fixed_network         | -                                                                     |
| coe                   | kubernetes                                                            |
| flavor_id             | m1.small                                                              |
| master_lb_enabled     | True                                                                  |
| dns_nameserver        | 8.8.8.8                                                               |
| hidden                | False                                                                 |
| tags                  | -                                                                     |
+-----------------------+-----------------------------------------------------------------------+

Create a Kubernetes cluster using the following command:

openstack coe cluster create --cluster-template k8s-cluster-template-ovn --node-count 1 --timeout 60 sunbeam-k8s-ovn

Sample output:

Request to create cluster 27eba31c-66a5-4efe-8373-49dd186567e6 accepted

Check cluster list status using the following command:

openstack coe cluster list

+--------------------------------------+-----------------+---------+------------+--------------+-----------------+---------------+
| uuid                                 | name            | keypair | node_count | master_count | status          | health_status |
+--------------------------------------+-----------------+---------+------------+--------------+-----------------+---------------+
| 27eba31c-66a5-4efe-8373-49dd186567e6 | sunbeam-k8s-ovn | sunbeam |          1 |            1 | CREATE_COMPLETE | HEALTHY       |
+--------------------------------------+-----------------+---------+------------+--------------+-----------------+---------------+

Note: You may need to wait a few minutes before the cluster is ready.

Check cluster status using the following command:

openstack coe cluster show sunbeam-k8s-ovn

+----------------------+---------------------------------------------------------------------------------------------------------------------------+
| Field                | Value                                                                                                                     |
+----------------------+---------------------------------------------------------------------------------------------------------------------------+
| status               | CREATE_COMPLETE                                                                                                           |
| health_status        | HEALTHY                                                                                                                   |
| cluster_template_id  | 4d675c2b-c4e6-4877-a949-987195125fbc                                                                                      |
| node_addresses       | ['10.20.20.227']                                                                                                          |
| uuid                 | 27eba31c-66a5-4efe-8373-49dd186567e6                                                                                      |
| stack_id             | a4221337-395e-4328-a878-de3f08a29bb2                                                                                      |
| status_reason        | None                                                                                                                      |
| created_at           | 2023-10-16T11:11:37+00:00                                                                                                 |
| updated_at           | 2023-10-16T11:18:24+00:00                                                                                                 |
| coe_version          | v1.18.16                                                                                                                  |
| labels               | {'octavia_provider': 'ovn', 'octavia_lb_algorithm': 'SOURCE_IP_PORT'}                                                     |
| labels_overridden    | {}                                                                                                                        |
| labels_skipped       | {}                                                                                                                        |
| labels_added         | {}                                                                                                                        |
| fixed_network        | None                                                                                                                      |
| fixed_subnet         | None                                                                                                                      |
| floating_ip_enabled  | True                                                                                                                      |
| faults               |                                                                                                                           |
| keypair              | sunbeam                                                                                                                   |
| api_address          | https://10.20.20.215:6443                                                                                                 |
| master_addresses     | ['10.20.20.52']                                                                                                           |
| master_lb_enabled    | True                                                                                                                      |
| create_timeout       | 60                                                                                                                        |
| node_count           | 1                                                                                                                         |
| discovery_url        | https://discovery.etcd.io/e98c17817a572118135f4cfa60397792                                                                |
| docker_volume_size   | 15                                                                                                                        |
| master_count         | 1                                                                                                                         |
| container_version    | 1.12.6                                                                                                                    |
| name                 | sunbeam-k8s-ovn                                                                                                           |
| master_flavor_id     | None                                                                                                                      |
| flavor_id            | m1.small                                                                                                                  |
| health_status_reason | {'sunbeam-k8s-ovn-fvwzbaayuols-master-0.Ready': 'True', 'sunbeam-k8s-ovn-fvwzbaayuols-node-0.Ready': 'True', 'api': 'ok'} |
| project_id           | cf669675a9784b84805a5aa42afb21fe                                                                                          |
+----------------------+---------------------------------------------------------------------------------------------------------------------------+

Access your Kubernetes cluster using the following commands:

mkdir config-dir
openstack coe cluster config sunbeam-k8s-ovn --dir config-dir/
export KUBECONFIG=/home/ubuntu/config-dir/config
kubectl get pods -A

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-56448757b9-km7qj                     1/1     Running   0          4m43s
kube-system   coredns-56448757b9-w46cq                     1/1     Running   0          4m43s
kube-system   dashboard-metrics-scraper-67f57ff746-6phd6   1/1     Running   0          4m40s
kube-system   k8s-keystone-auth-4sqx8                      1/1     Running   0          4m39s
kube-system   kube-dns-autoscaler-6d5b5dc777-wbt4w         1/1     Running   0          4m42s
kube-system   kube-flannel-ds-c8dqt                        1/1     Running   0          2m44s
kube-system   kube-flannel-ds-t5kc8                        1/1     Running   0          4m42s
kube-system   kubernetes-dashboard-7b88d986b4-2qgm5        1/1     Running   0          4m40s
kube-system   magnum-metrics-server-6c4c77844b-p2ws4       1/1     Running   0          4m34s
kube-system   npd-h7xsg                                    1/1     Running   0          2m23s
kube-system   openstack-cloud-controller-manager-j8l4l     1/1     Running   0          4m43s
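To sanity-check the new cluster and its OVN load balancer integration, a throwaway workload can be exposed through a LoadBalancer service. This is an illustrative sketch; the deployment name and image are arbitrary choices, not part of the original walkthrough:

```shell
# Deploy a small test workload on the Magnum-provisioned cluster
kubectl create deployment lb-test --image=nginx --replicas=2
# Expose it via a LoadBalancer service; with the octavia_provider=ovn
# template label, this should be fulfilled by an OVN load balancer
# using the SOURCE_IP_PORT algorithm
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
# Wait for an EXTERNAL-IP to be assigned, then verify on the OpenStack side
kubectl get service lb-test
openstack loadbalancer list
```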

Hello, I am very interested in building a small OpenStack environment with MicroStack and have followed this procedure.
I am experiencing the following issue and would appreciate any comments.

[my environment]

  • I am using AWS.
  • Bare-metal instance type: c5n.metal
  • 1 instance
  • Storage: 80 GB (root, SSD) + 50 GB (additional device, SSD)
  • Network: 2 NICs (fully separated subnets, both with internet access)
  • OpenStack: 2023.2

[Problems I am facing]

  • The status is CREATE_FAILED when creating a COE cluster.
  • openstack coe cluster show xxx shows the following:
  {'default-master': 'Resource CREATE failed: DBConnectionError: resources.kube_masters: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'heat-mysql-router.openstack.svc.cluster.local\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)', 'default-worker': 'Resource CREATE failed: DBConnectionError: resources.kube_masters: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'heat-mysql-router.openstack.svc.cluster.local\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)'}
  • ‘heat-mysql-0’ shows restarts in the output of sudo microk8s.kubectl get pod -A.

  • Logs from around the time ‘heat-mysql-0’ restarted:

  • 2024-04-26T06:24:06.663Z [container-agent] 2024-04-26 06:24:06 ERROR juju-log Failed to flush [<MySQLTextLogs.ERROR: 'ERROR LOGS'>, <MySQLTextLogs.GENERAL: 'GENERAL LOGS'>, <MySQLTextLogs.SLOW: 'SLOW LOGS'>] logs.
    2024-04-26T06:24:06.663Z [container-agent] Traceback (most recent call last):
    2024-04-26T06:24:06.663Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/src/mysql_k8s_helpers.py", line 666, in _run_mysqlsh_script
    2024-04-26T06:24:06.663Z [container-agent]     stdout, _ = process.wait_output()
    2024-04-26T06:24:06.663Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1540, in wait_output
    2024-04-26T06:24:06.663Z [container-agent]     raise ExecError[AnyStr](self._command, exit_code, out_value, err_value)
    2024-04-26T06:24:06.663Z [container-agent] ops.pebble.ExecError: non-zero exit code 1 executing ['/usr/bin/mysqlsh', '--no-wizard', '--python', '--verbose=1', '-f', '/tmp/script.py', ';', 'rm', '/tmp/script.py'], stdout='', stderr='Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory\nverbose: 2024-04-26T06:24:06Z: Loading startup files...\nverbose: 2024-04-26T06:24:06Z: Loading plugins...\nverbose: 2024-04-26T06:24:06Z: Connecting to MySQL at: serverconfig@heat-mysql-0.heat-mysql-endpoints.openstack.svc.cluster.local\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\nmysqlsh.DBError: MySQL Error (2003): Shell.connect: Can\'t connect to MySQL server on \'heat-mysql-0.heat-mysql-endpoints.openstack.svc.cluster.local:3306\' (111)\n'
    2024-04-26T06:24:06.663Z [container-agent] 
    2024-04-26T06:24:06.663Z [container-agent] During handling of the above exception, another exception occurred:
    2024-04-26T06:24:06.663Z [container-agent] 
    2024-04-26T06:24:06.663Z [container-agent] Traceback (most recent call last):
    2024-04-26T06:24:06.663Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/lib/charms/mysql/v0/mysql.py", line 2532, in flush_mysql_logs
    2024-04-26T06:24:06.663Z [container-agent]     self._run_mysqlsh_script("\n".join(flush_logs_commands))
    2024-04-26T06:24:06.663Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/src/mysql_k8s_helpers.py", line 669, in _run_mysqlsh_script
    2024-04-26T06:24:06.663Z [container-agent]     raise MySQLClientError(e.stderr)
    2024-04-26T06:24:06.663Z [container-agent] charms.mysql.v0.mysql.MySQLClientError: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
    2024-04-26T06:24:06.663Z [container-agent] verbose: 2024-04-26T06:24:06Z: Loading startup files...
    2024-04-26T06:24:06.663Z [container-agent] verbose: 2024-04-26T06:24:06Z: Loading plugins...
    2024-04-26T06:24:06.663Z [container-agent] verbose: 2024-04-26T06:24:06Z: Connecting to MySQL at: serverconfig@heat-mysql-0.heat-mysql-endpoints.openstack.svc.cluster.local
    2024-04-26T06:24:06.663Z [container-agent] Traceback (most recent call last):
    2024-04-26T06:24:06.663Z [container-agent]   File "<string>", line 1, in <module>
    2024-04-26T06:24:06.663Z [container-agent] mysqlsh.DBError: MySQL Error (2003): Shell.connect: Can't connect to MySQL server on 'heat-mysql-0.heat-mysql-endpoints.openstack.svc.cluster.local:3306' (111)
    2024-04-26T06:24:06.663Z [container-agent] 
    2024-04-26T06:24:47.496Z [container-agent] 2024-04-26 06:24:47 INFO juju-log Unit workload member-state is offline with member-role unknown
    2024-04-26T06:24:47.511Z [container-agent] 2024-04-26 06:24:47 INFO juju-log Attempting reboot from complete outage.
    2024-04-26T06:24:51.746Z [container-agent] 2024-04-26 06:24:51 INFO juju.worker.uniter.operation runhook.go:186 ran "update-status" hook (via hook dispatching script: dispatch)
    2024-04-26T06:29:02.010Z [container-agent] 2024-04-26 06:29:02 ERROR juju-log Uncaught exception while in charm code:
    2024-04-26T06:29:02.010Z [container-agent] Traceback (most recent call last):
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/./src/charm.py", line 770, in <module>
    2024-04-26T06:29:02.010Z [container-agent]     main(MySQLOperatorCharm)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/main.py", line 456, in main
    2024-04-26T06:29:02.010Z [container-agent]     _emit_charm_event(charm, dispatcher.event_name)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/main.py", line 144, in _emit_charm_event
    2024-04-26T06:29:02.010Z [container-agent]     event_to_emit.emit(*args, **kwargs)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/framework.py", line 351, in emit
    2024-04-26T06:29:02.010Z [container-agent]     framework._emit(event)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/framework.py", line 853, in _emit
    2024-04-26T06:29:02.010Z [container-agent]     self._reemit(event_path)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/framework.py", line 943, in _reemit
    2024-04-26T06:29:02.010Z [container-agent]     custom_handler(event)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/src/rotate_mysql_logs.py", line 55, in _rotate_mysql_logs
    2024-04-26T06:29:02.010Z [container-agent]     self.charm._mysql._execute_commands(["logrotate", "-f", LOG_ROTATE_CONFIG_FILE])
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/src/mysql_k8s_helpers.py", line 628, in _execute_commands
    2024-04-26T06:29:02.010Z [container-agent]     stdout, stderr = process.wait_output()
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1535, in wait_output
    2024-04-26T06:29:02.010Z [container-agent]     exit_code: int = self._wait()
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1474, in _wait
    2024-04-26T06:29:02.010Z [container-agent]     change = self._client.wait_change(self._change_id, timeout=timeout)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1992, in wait_change
    2024-04-26T06:29:02.010Z [container-agent]     return self._wait_change_using_wait(change_id, timeout)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 2013, in _wait_change_using_wait
    2024-04-26T06:29:02.010Z [container-agent]     return self._wait_change(change_id, this_timeout)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 2027, in _wait_change
    2024-04-26T06:29:02.010Z [container-agent]     resp = self._request('GET', f'/v1/changes/{change_id}/wait', query)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1754, in _request
    2024-04-26T06:29:02.010Z [container-agent]     response = self._request_raw(method, path, query, headers, data)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1789, in _request_raw
    2024-04-26T06:29:02.010Z [container-agent]     response = self.opener.open(request, timeout=self.timeout)
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/urllib/request.py", line 519, in open
    2024-04-26T06:29:02.010Z [container-agent]     response = self._open(req, data)
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/urllib/request.py", line 536, in _open
    2024-04-26T06:29:02.010Z [container-agent]     result = self._call_chain(self.handle_open, protocol, protocol +
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
    2024-04-26T06:29:02.010Z [container-agent]     result = func(*args)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 326, in http_open
    2024-04-26T06:29:02.010Z [container-agent]     return self.do_open(_UnixSocketConnection, req,  # type:ignore
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/urllib/request.py", line 1352, in do_open
    2024-04-26T06:29:02.010Z [container-agent]     r = h.getresponse()
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/http/client.py", line 1375, in getresponse
    2024-04-26T06:29:02.010Z [container-agent]     response.begin()
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/http/client.py", line 318, in begin
    2024-04-26T06:29:02.010Z [container-agent]     version, status, reason = self._read_status()
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/http/client.py", line 287, in _read_status
    2024-04-26T06:29:02.010Z [container-agent]     raise RemoteDisconnected("Remote end closed connection without"
    2024-04-26T06:29:02.010Z [container-agent] http.client.RemoteDisconnected: Remote end closed connection without response
    2024-04-26T06:29:02.140Z [container-agent] 2024-04-26 06:29:02 INFO juju.util.exec exec.go:209 run result: exit status 1
    

[QUESTION]

  • Is the cause of the MySQL restarts related to a Juju crash?
  • Not only heat-mysql-0 but also the other MySQL pods are repeatedly restarting.
  • Is this a known bug? Is there a workaround?

Hi @tatsuromakita,

Which version of the snap are you installing?

There was a known issue of MySQL pods being restarted, caused by a bug in a tool used by Juju. This has been fixed in Juju 3.4.1. If you’re on 2023.2/stable, you should easily be able to modify the prepare-node-script to install juju 3.4/stable, and then bootstrap the whole environment. This should solve the MySQL issues you’re facing.
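For example, the channel pinned in the generated script can be rewritten before running it. The sed pattern below is an assumption about what the install line looks like, so check the actual `snap install ... juju` line in your copy of the script:

```shell
# Regenerate the node-preparation script
sunbeam prepare-node-script > prepare-node-script
# Pin juju to 3.4/stable (pattern is illustrative; verify it matches
# the 'snap install ... juju' line in your copy of the script)
sed -i 's|snap install --channel [^ ]* juju|snap install --channel 3.4/stable juju|' prepare-node-script
bash prepare-node-script && newgrp snap_daemon
```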

Juju bug ticket: Bug #2052517 “Workload container probes are too unforgiving” : Bugs : Canonical Juju
Snap Openstack bug ticket: Bug #2051915 “Mysql pods being restarted by possible OOM killer?...” : Bugs : OpenStack Snap

We’re working towards the next stable release for snap-openstack, which should install juju 3.4/stable by default.


Hi @gboutry ,

Thank you for your speedy reply!
Got it! I will upgrade to Juju 3.4.

[my environment : additional info]

  • snap: 2023.2
  • Ubuntu 22.04
  • juju: 3.2.4

    $ juju version
    3.2.4-genericlinux-amd64
    $

I look forward to the 2024.1/stable release!

Thanks and regards.

Hi @gboutry

I tried upgrading Juju to 3.4/stable as you suggested.
However, I am now seeing the following issue.

My environment is the same AWS setup as before, so I will skip the details.

[ Event ]
When executing sunbeam cluster bootstrap, a timeout occurs at 24/31 or 29/31.
Actual message:

$ sunbeam cluster bootstrap --role control --role compute --role storage 
:
Deploying OpenStack Control Plane to Kubernetes (this may take a while) ... Waiting for services to come online (24/31) Timed out while waiting for model 'openstack ' to be ready
Error: Timed out while waiting for model 'openstack' to be ready 
$

I re-ran the command as follows, but hit the same issue,
and I also retried with juju 3.5/stable, with the same result.

$ sunbeam cluster bootstrap --accept-defaults
Deploying OpenStack Control Plane to Kubernetes (this may take a while) ... Waiting for services to come online (24/31) Timed out while waiting for model 'openstack ' to be ready
Error: Timed out while waiting for model 'openstack' to be ready 
$

[ How I upgraded to juju 3.4/stable ]

$ sunbeam prepare-node-script > prepare-node-script
$ vi prepare-node-script
:
sudo snap install --channel 3.4/stable juju
:
$ cat prepare-node-script | bash -x && newgrp snap_daemon
:
Juju (3.4/stable) 2.4.2 from Canonical installed
$

No particular error is output to syslog. Could you give me any advice?

I have made some progress.
When I added the options --topology single --database single to the bootstrap command, all services came online (24/24) and OpenStack was installed.

However, when I go on to enable the sunbeam plugins secrets, orchestration, loadbalancer, and caas, I get the same timeout error.
Only vault succeeds.

$ sunbeam enable vault
OpenStack vault application enabled.
$
$ sunbeam enable secrets
Enabling OpenStack secrets application ... Timed out while waiting for model 'openstack' to be ready 
Error: Timed out while waiting for model 'openstack' to be ready
$
$ sunbeam enable orchestration
Enabling OpenStack orchestration application ... Timed out while waiting for model 'openstack' to be ready 
Error: Timed out while waiting for model 'openstack' to be ready
$
$ sunbeam enable loadbalancer
Enabling OpenStack loadbalancer application ... Timed out while waiting for model 'openstack' to be ready 
Error: Timed out while waiting for model 'openstack' to be ready
$
$ sunbeam enable caas
Enabling OpenStack caas application ... Timed out while waiting for model 'openstack' to be ready 
Error: Timed out while waiting for model 'openstack' to be ready
$

However, all of the pods appear to be up, except Magnum's.

$ sudo microk8s.kubectl get pod -A
NAMESPACE        NAME                                       READY   STATUS    RESTARTS   AGE
kube-system      hostpath-provisioner-7df77bc496-skpnb      1/1     Running   0          172m
metallb-system   controller-5f7bb57799-7rgjp                1/1     Running   0          172m
metallb-system   speaker-zdvr4                              1/1     Running   0          172m
kube-system      coredns-864597b5fd-58hlp                   1/1     Running   0          172m
kube-system      calico-node-hqd7l                          1/1     Running   0          170m
kube-system      calico-kube-controllers-6b86b85cb4-k4gxn   1/1     Running   0          170m
openstack        modeloperator-8574dbb7f8-z6vrf             1/1     Running   0          170m
openstack        certificate-authority-0                    1/1     Running   0          169m
openstack        placement-mysql-router-0                   2/2     Running   0          168m
openstack        nova-mysql-router-0                        2/2     Running   0          168m
openstack        nova-api-mysql-router-0                    2/2     Running   0          168m
openstack        neutron-mysql-router-0                     2/2     Running   0          168m
openstack        horizon-mysql-router-0                     2/2     Running   0          168m
openstack        cinder-ceph-mysql-router-0                 2/2     Running   0          168m
openstack        nova-cell-mysql-router-0                   2/2     Running   0          168m
openstack        keystone-mysql-router-0                    2/2     Running   0          167m
openstack        glance-mysql-router-0                      2/2     Running   0          168m
openstack        cinder-mysql-router-0                      2/2     Running   0          168m
openstack        rabbitmq-0                                 2/2     Running   0          167m
openstack        mysql-0                                    2/2     Running   0          166m
openstack        ovn-relay-0                                2/2     Running   0          167m
openstack        placement-0                                2/2     Running   0          168m
openstack        traefik-0                                  2/2     Running   0          167m
openstack        traefik-public-0                           2/2     Running   0          167m
openstack        ovn-central-0                              4/4     Running   0          167m
openstack        keystone-0                                 2/2     Running   0          166m
openstack        cinder-ceph-0                              2/2     Running   0          166m
openstack        cinder-0                                   3/3     Running   0          167m
openstack        nova-0                                     4/4     Running   0          168m
openstack        neutron-0                                  2/2     Running   0          167m
openstack        horizon-0                                  2/2     Running   0          167m
openstack        glance-0                                   2/2     Running   0          166m
openstack        designate-mysql-router-0                   2/2     Running   0          139m
openstack        designate-0                                2/2     Running   0          139m
openstack        bind-0                                     2/2     Running   0          139m
openstack        vault-0                                    2/2     Running   0          123m
openstack        barbican-mysql-router-0                    2/2     Running   0          121m
openstack        barbican-0                                 3/3     Running   0          121m
openstack        heat-mysql-router-0                        2/2     Running   0          103m
openstack        heat-0                                     4/4     Running   0          103m
openstack        octavia-mysql-router-0                     2/2     Running   0          87m
openstack        octavia-0                                  4/4     Running   0          87m
$ 

[ Question ]
Are there any restrictions with --database single that would explain the caas option still not being enabled?

Sorry for the consecutive posts.

I found that the logs I get from sunbeam inspect repeatedly show DEBUG ConfigItem not found.

Actual log: ~/snap/openstack/common/logs/sunbeam-20240510-05176.150196.log

05:17:26,263 sunbeam.plugins.interface.v1.base DEBUG ConfigItem not found
05:17:26,263 sunbeam.clusterd.service DEBUG [get] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/Plugin-dns, args={'allow_redirects': True}
05:17:26,264 urllib3.connectionpool DEBUG http://localhost:None "GET /1.0/config/Plugin-dns HTTP/1.1" 404 125
05:17:26,264 sunbeam.clusterd.service DEBUG Response(<Response [404]>) = {"type": "error", "status": "", "status_code": 0, "operation": "", "error_code": 404, "error": "ConfigItem not found", "metadata": null}

Is there any way to get past this?