CaaS

Note: The CaaS feature is still considered experimental and is only available when installing the openstack snap from the beta or edge channel.

This feature deploys Magnum, the OpenStack Container as a Service (CaaS) project.

Enabling CaaS

To enable CaaS, run the following command:

sunbeam enable caas

Use the OpenStack CLI to manage container infrastructures. See the upstream Magnum documentation for details.

Note: The Secrets and Orchestration features are dependencies of the CaaS feature. Make sure to enable them first.
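
For example, assuming neither dependency is enabled yet, enable them before CaaS itself:

sunbeam enable secrets
sunbeam enable orchestration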

When using the CaaS feature in conjunction with the Load Balancer feature, you are subject to the same limitations as the latter. In particular, the OVN provider only supports the SOURCE_IP_PORT load balancing algorithm; the cluster template in the Usage section below sets its labels accordingly.

Configuring CaaS

To configure the cloud for CaaS usage, run the following command:

sunbeam configure caas
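
As noted further down in this thread, this step essentially uploads a Fedora CoreOS image to Glance. One way to verify the result afterwards is to list the images and check for the fedora-coreos entry:

openstack image list

The CaaS image is expected to carry the os_distro=fedora-coreos property, which Magnum uses to select the matching cluster driver.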

Disabling CaaS

To disable CaaS, run the following command:

sunbeam disable caas

Usage

Create a cluster template using the following command:

openstack coe cluster template create k8s-cluster-template-ovn \
   --image fedora-coreos-38 \
   --keypair sunbeam \
   --external-network external-network \
   --flavor m1.small \
   --docker-volume-size 15 \
   --master-lb-enabled \
   --labels octavia_provider=ovn \
   --labels octavia_lb_algorithm=SOURCE_IP_PORT \
   --network-driver flannel \
   --coe kubernetes

Sample output:

Request to create cluster template k8s-cluster-template-ovn accepted
+-----------------------+-----------------------------------------------------------------------+
| Field                 | Value                                                                 |
+-----------------------+-----------------------------------------------------------------------+
| insecure_registry     | -                                                                     |
| labels                | {'octavia_provider': 'ovn', 'octavia_lb_algorithm': 'SOURCE_IP_PORT'} |
| updated_at            | -                                                                     |
| floating_ip_enabled   | True                                                                  |
| fixed_subnet          | -                                                                     |
| master_flavor_id      | -                                                                     |
| uuid                  | 4d675c2b-c4e6-4877-a949-987195125fbc                                  |
| no_proxy              | -                                                                     |
| https_proxy           | -                                                                     |
| tls_disabled          | False                                                                 |
| keypair_id            | sunbeam                                                               |
| public                | False                                                                 |
| http_proxy            | -                                                                     |
| docker_volume_size    | 15                                                                    |
| server_type           | vm                                                                    |
| external_network_id   | external-network                                                      |
| cluster_distro        | fedora-coreos                                                         |
| image_id              | fedora-coreos-38                                                      |
| volume_driver         | -                                                                     |
| registry_enabled      | False                                                                 |
| docker_storage_driver | overlay2                                                              |
| apiserver_port        | -                                                                     |
| name                  | k8s-cluster-template-ovn                                              |
| created_at            | 2023-10-16T09:45:24.751362+00:00                                      |
| network_driver        | flannel                                                               |
| fixed_network         | -                                                                     |
| coe                   | kubernetes                                                            |
| flavor_id             | m1.small                                                              |
| master_lb_enabled     | True                                                                  |
| dns_nameserver        | 8.8.8.8                                                               |
| hidden                | False                                                                 |
| tags                  | -                                                                     |
+-----------------------+-----------------------------------------------------------------------+

Create a Kubernetes cluster using the following command:

openstack coe cluster create --cluster-template k8s-cluster-template-ovn --node-count 1 --timeout 60 sunbeam-k8s-ovn

Sample output:

Request to create cluster 27eba31c-66a5-4efe-8373-49dd186567e6 accepted

Check the status of your clusters using the following command:

openstack coe cluster list

+--------------------------------------+-----------------+---------+------------+--------------+-----------------+---------------+
| uuid                                 | name            | keypair | node_count | master_count | status          | health_status |
+--------------------------------------+-----------------+---------+------------+--------------+-----------------+---------------+
| 27eba31c-66a5-4efe-8373-49dd186567e6 | sunbeam-k8s-ovn | sunbeam |          1 |            1 | CREATE_COMPLETE | HEALTHY       |
+--------------------------------------+-----------------+---------+------------+--------------+-----------------+---------------+

Note: You may need to wait a few minutes before the cluster is ready.
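
If you prefer to poll until the cluster settles, one option (a convenience, not a requirement) is:

watch -n 30 openstack coe cluster list

The status typically moves from CREATE_IN_PROGRESS to CREATE_COMPLETE once Heat has finished creating the stack.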

Check cluster status using the following command:

openstack coe cluster show sunbeam-k8s-ovn

+----------------------+---------------------------------------------------------------------------------------------------------------------------+
| Field                | Value                                                                                                                     |
+----------------------+---------------------------------------------------------------------------------------------------------------------------+
| status               | CREATE_COMPLETE                                                                                                           |
| health_status        | HEALTHY                                                                                                                   |
| cluster_template_id  | 4d675c2b-c4e6-4877-a949-987195125fbc                                                                                      |
| node_addresses       | ['10.20.20.227']                                                                                                          |
| uuid                 | 27eba31c-66a5-4efe-8373-49dd186567e6                                                                                      |
| stack_id             | a4221337-395e-4328-a878-de3f08a29bb2                                                                                      |
| status_reason        | None                                                                                                                      |
| created_at           | 2023-10-16T11:11:37+00:00                                                                                                 |
| updated_at           | 2023-10-16T11:18:24+00:00                                                                                                 |
| coe_version          | v1.18.16                                                                                                                  |
| labels               | {'octavia_provider': 'ovn', 'octavia_lb_algorithm': 'SOURCE_IP_PORT'}                                                     |
| labels_overridden    | {}                                                                                                                        |
| labels_skipped       | {}                                                                                                                        |
| labels_added         | {}                                                                                                                        |
| fixed_network        | None                                                                                                                      |
| fixed_subnet         | None                                                                                                                      |
| floating_ip_enabled  | True                                                                                                                      |
| faults               |                                                                                                                           |
| keypair              | sunbeam                                                                                                                   |
| api_address          | https://10.20.20.215:6443                                                                                                 |
| master_addresses     | ['10.20.20.52']                                                                                                           |
| master_lb_enabled    | True                                                                                                                      |
| create_timeout       | 60                                                                                                                        |
| node_count           | 1                                                                                                                         |
| discovery_url        | https://discovery.etcd.io/e98c17817a572118135f4cfa60397792                                                                |
| docker_volume_size   | 15                                                                                                                        |
| master_count         | 1                                                                                                                         |
| container_version    | 1.12.6                                                                                                                    |
| name                 | sunbeam-k8s-ovn                                                                                                           |
| master_flavor_id     | None                                                                                                                      |
| flavor_id            | m1.small                                                                                                                  |
| health_status_reason | {'sunbeam-k8s-ovn-fvwzbaayuols-master-0.Ready': 'True', 'sunbeam-k8s-ovn-fvwzbaayuols-node-0.Ready': 'True', 'api': 'ok'} |
| project_id           | cf669675a9784b84805a5aa42afb21fe                                                                                          |
+----------------------+---------------------------------------------------------------------------------------------------------------------------+

Access your Kubernetes cluster using the following commands:

mkdir config-dir
openstack coe cluster config sunbeam-k8s-ovn --dir config-dir/
export KUBECONFIG=/home/ubuntu/config-dir/config
kubectl get pods -A

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-56448757b9-km7qj                     1/1     Running   0          4m43s
kube-system   coredns-56448757b9-w46cq                     1/1     Running   0          4m43s
kube-system   dashboard-metrics-scraper-67f57ff746-6phd6   1/1     Running   0          4m40s
kube-system   k8s-keystone-auth-4sqx8                      1/1     Running   0          4m39s
kube-system   kube-dns-autoscaler-6d5b5dc777-wbt4w         1/1     Running   0          4m42s
kube-system   kube-flannel-ds-c8dqt                        1/1     Running   0          2m44s
kube-system   kube-flannel-ds-t5kc8                        1/1     Running   0          4m42s
kube-system   kubernetes-dashboard-7b88d986b4-2qgm5        1/1     Running   0          4m40s
kube-system   magnum-metrics-server-6c4c77844b-p2ws4       1/1     Running   0          4m34s
kube-system   npd-h7xsg                                    1/1     Running   0          2m23s
kube-system   openstack-cloud-controller-manager-j8l4l     1/1     Running   0          4m43s
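
As a quick smoke test of the new cluster, and of the OVN load balancer integration selected through the octavia_provider=ovn label, you could deploy a test workload and expose it through a LoadBalancer service (a hypothetical example, not part of the original walkthrough):

kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=LoadBalancer
kubectl get service hello

The openstack-cloud-controller-manager pod listed above is what should translate the LoadBalancer service into an Octavia load balancer, visible via openstack loadbalancer list.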

Hello, I am very interested in building a small OpenStack environment with MicroStack and have followed this procedure.
I am experiencing the following issue and would appreciate any comments.

[my environment]

  • I am using AWS.
  • Bare metal instance type: c5n.metal
  • A single instance
  • Storage: 80 GB (root, SSD) and 50 GB (additional device, SSD)
  • Network: 2 NICs (fully separated subnets, both with internet access)
  • OpenStack: 2023.2

[Problems I am facing]

  • The status is CREATE_FAILED when creating a COE cluster.
  • openstack coe cluster show xxx shows the following:
  {'default-master': 'Resource CREATE failed: DBConnectionError: resources.kube_masters: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'heat-mysql-router.openstack.svc.cluster.local\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)', 'default-worker': 'Resource CREATE failed: DBConnectionError: resources.kube_masters: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'heat-mysql-router.openstack.svc.cluster.local\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)'}
  • ‘heat-mysql-0’ was restarted, as seen when running sudo microk8s.kubectl get pod -A.

  • Logs from around the time ‘heat-mysql-0’ restarted:

  • 2024-04-26T06:24:06.663Z [container-agent] 2024-04-26 06:24:06 ERROR juju-log Failed to flush [<MySQLTextLogs.ERROR: 'ERROR LOGS'>, <MySQLTextLogs.GENERAL: 'GENERAL LOGS'>, <MySQLTextLogs.SLOW: 'SLOW LOGS'>] logs.
    2024-04-26T06:24:06.663Z [container-agent] Traceback (most recent call last):
    2024-04-26T06:24:06.663Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/src/mysql_k8s_helpers.py", line 666, in _run_mysqlsh_script
    2024-04-26T06:24:06.663Z [container-agent]     stdout, _ = process.wait_output()
    2024-04-26T06:24:06.663Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1540, in wait_output
    2024-04-26T06:24:06.663Z [container-agent]     raise ExecError[AnyStr](self._command, exit_code, out_value, err_value)
    2024-04-26T06:24:06.663Z [container-agent] ops.pebble.ExecError: non-zero exit code 1 executing ['/usr/bin/mysqlsh', '--no-wizard', '--python', '--verbose=1', '-f', '/tmp/script.py', ';', 'rm', '/tmp/script.py'], stdout='', stderr='Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory\nverbose: 2024-04-26T06:24:06Z: Loading startup files...\nverbose: 2024-04-26T06:24:06Z: Loading plugins...\nverbose: 2024-04-26T06:24:06Z: Connecting to MySQL at: serverconfig@heat-mysql-0.heat-mysql-endpoints.openstack.svc.cluster.local\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\nmysqlsh.DBError: MySQL Error (2003): Shell.connect: Can\'t connect to MySQL server on \'heat-mysql-0.heat-mysql-endpoints.openstack.svc.cluster.local:3306\' (111)\n'
    2024-04-26T06:24:06.663Z [container-agent] 
    2024-04-26T06:24:06.663Z [container-agent] During handling of the above exception, another exception occurred:
    2024-04-26T06:24:06.663Z [container-agent] 
    2024-04-26T06:24:06.663Z [container-agent] Traceback (most recent call last):
    2024-04-26T06:24:06.663Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/lib/charms/mysql/v0/mysql.py", line 2532, in flush_mysql_logs
    2024-04-26T06:24:06.663Z [container-agent]     self._run_mysqlsh_script("\n".join(flush_logs_commands))
    2024-04-26T06:24:06.663Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/src/mysql_k8s_helpers.py", line 669, in _run_mysqlsh_script
    2024-04-26T06:24:06.663Z [container-agent]     raise MySQLClientError(e.stderr)
    2024-04-26T06:24:06.663Z [container-agent] charms.mysql.v0.mysql.MySQLClientError: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
    2024-04-26T06:24:06.663Z [container-agent] verbose: 2024-04-26T06:24:06Z: Loading startup files...
    2024-04-26T06:24:06.663Z [container-agent] verbose: 2024-04-26T06:24:06Z: Loading plugins...
    2024-04-26T06:24:06.663Z [container-agent] verbose: 2024-04-26T06:24:06Z: Connecting to MySQL at: serverconfig@heat-mysql-0.heat-mysql-endpoints.openstack.svc.cluster.local
    2024-04-26T06:24:06.663Z [container-agent] Traceback (most recent call last):
    2024-04-26T06:24:06.663Z [container-agent]   File "<string>", line 1, in <module>
    2024-04-26T06:24:06.663Z [container-agent] mysqlsh.DBError: MySQL Error (2003): Shell.connect: Can't connect to MySQL server on 'heat-mysql-0.heat-mysql-endpoints.openstack.svc.cluster.local:3306' (111)
    2024-04-26T06:24:06.663Z [container-agent] 
    2024-04-26T06:24:47.496Z [container-agent] 2024-04-26 06:24:47 INFO juju-log Unit workload member-state is offline with member-role unknown
    2024-04-26T06:24:47.511Z [container-agent] 2024-04-26 06:24:47 INFO juju-log Attempting reboot from complete outage.
    2024-04-26T06:24:51.746Z [container-agent] 2024-04-26 06:24:51 INFO juju.worker.uniter.operation runhook.go:186 ran "update-status" hook (via hook dispatching script: dispatch)
    2024-04-26T06:29:02.010Z [container-agent] 2024-04-26 06:29:02 ERROR juju-log Uncaught exception while in charm code:
    2024-04-26T06:29:02.010Z [container-agent] Traceback (most recent call last):
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/./src/charm.py", line 770, in <module>
    2024-04-26T06:29:02.010Z [container-agent]     main(MySQLOperatorCharm)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/main.py", line 456, in main
    2024-04-26T06:29:02.010Z [container-agent]     _emit_charm_event(charm, dispatcher.event_name)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/main.py", line 144, in _emit_charm_event
    2024-04-26T06:29:02.010Z [container-agent]     event_to_emit.emit(*args, **kwargs)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/framework.py", line 351, in emit
    2024-04-26T06:29:02.010Z [container-agent]     framework._emit(event)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/framework.py", line 853, in _emit
    2024-04-26T06:29:02.010Z [container-agent]     self._reemit(event_path)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/framework.py", line 943, in _reemit
    2024-04-26T06:29:02.010Z [container-agent]     custom_handler(event)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/src/rotate_mysql_logs.py", line 55, in _rotate_mysql_logs
    2024-04-26T06:29:02.010Z [container-agent]     self.charm._mysql._execute_commands(["logrotate", "-f", LOG_ROTATE_CONFIG_FILE])
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/src/mysql_k8s_helpers.py", line 628, in _execute_commands
    2024-04-26T06:29:02.010Z [container-agent]     stdout, stderr = process.wait_output()
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1535, in wait_output
    2024-04-26T06:29:02.010Z [container-agent]     exit_code: int = self._wait()
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1474, in _wait
    2024-04-26T06:29:02.010Z [container-agent]     change = self._client.wait_change(self._change_id, timeout=timeout)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1992, in wait_change
    2024-04-26T06:29:02.010Z [container-agent]     return self._wait_change_using_wait(change_id, timeout)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 2013, in _wait_change_using_wait
    2024-04-26T06:29:02.010Z [container-agent]     return self._wait_change(change_id, this_timeout)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 2027, in _wait_change
    2024-04-26T06:29:02.010Z [container-agent]     resp = self._request('GET', f'/v1/changes/{change_id}/wait', query)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1754, in _request
    2024-04-26T06:29:02.010Z [container-agent]     response = self._request_raw(method, path, query, headers, data)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 1789, in _request_raw
    2024-04-26T06:29:02.010Z [container-agent]     response = self.opener.open(request, timeout=self.timeout)
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/urllib/request.py", line 519, in open
    2024-04-26T06:29:02.010Z [container-agent]     response = self._open(req, data)
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/urllib/request.py", line 536, in _open
    2024-04-26T06:29:02.010Z [container-agent]     result = self._call_chain(self.handle_open, protocol, protocol +
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
    2024-04-26T06:29:02.010Z [container-agent]     result = func(*args)
    2024-04-26T06:29:02.010Z [container-agent]   File "/var/lib/juju/agents/unit-heat-mysql-0/charm/venv/ops/pebble.py", line 326, in http_open
    2024-04-26T06:29:02.010Z [container-agent]     return self.do_open(_UnixSocketConnection, req,  # type:ignore
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/urllib/request.py", line 1352, in do_open
    2024-04-26T06:29:02.010Z [container-agent]     r = h.getresponse()
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/http/client.py", line 1375, in getresponse
    2024-04-26T06:29:02.010Z [container-agent]     response.begin()
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/http/client.py", line 318, in begin
    2024-04-26T06:29:02.010Z [container-agent]     version, status, reason = self._read_status()
    2024-04-26T06:29:02.010Z [container-agent]   File "/usr/lib/python3.10/http/client.py", line 287, in _read_status
    2024-04-26T06:29:02.010Z [container-agent]     raise RemoteDisconnected("Remote end closed connection without"
    2024-04-26T06:29:02.010Z [container-agent] http.client.RemoteDisconnected: Remote end closed connection without response
    2024-04-26T06:29:02.140Z [container-agent] 2024-04-26 06:29:02 INFO juju.util.exec exec.go:209 run result: exit status 1
    

[QUESTION]

  • Is the cause of the MySQL restarts related to a Juju crash?
  • Not only heat-mysql-0 but other MySQL pods are also repeatedly restarting.
  • Is this a known bug? Or is there a workaround?

Hi @tatsuromakita,

Which version of the snap are you installing?

There was a known issue of MySQL pods getting restarted, caused by a bug in a tool used by Juju. This has been fixed in Juju 3.4.1. If you’re on 2023.2/stable, you should easily be able to modify the prepare-node script to install juju 3.4/stable, and then bootstrap the whole environment. This should solve the MySQL issues you’re facing.

Juju bug ticket: Bug #2052517 “Workload container probes are too unforgiving” : Bugs : Canonical Juju
Snap Openstack bug ticket: Bug #2051915 “Mysql pods being restarted by possible OOM killer?...” : Bugs : OpenStack Snap

We’re working towards the next stable release for snap-openstack, which should install juju 3.4/stable by default.


Hi @gboutry ,

Thank you for your speedy reply!
Got it! I will upgrade to Juju 3.4.

[my environment : additional info]

  • snap: 2023.2
  • ubuntu 22.04
  • juju: 3.2.4
    $ juju version
    3.2.4-genericlinux-amd64
    $

I look forward to the 2024.1 stable release!

Thanks and regards.

Hi @gboutry

I have tried the Juju upgrade to 3.4/stable that you suggested.
However, I am now seeing the following behaviour.

My environment is the same AWS setup as before, so I will skip the details.

[ Event ]
When executing sunbeam cluster bootstrap, a timeout occurs at 24/31 or 29/31 services.
Actual message:

$ sunbeam cluster bootstrap --role control --role compute --role storage 
:
Deploying OpenStack Control Plane to Kubernetes (this may take a while) ... Waiting for services to come online (24/31) Timed out while waiting for model 'openstack ' to be ready
Error: Timed out while waiting for model 'openstack' to be ready 
$

I re-ran the bootstrap with the following command, but hit the same issue,
and I also re-ran with juju 3.5/stable, with the same result.

$ sunbeam cluster bootstrap --accept-defaults
Deploying OpenStack Control Plane to Kubernetes (this may take a while) ... Waiting for services to come online (24/31) Timed out while waiting for model 'openstack ' to be ready
Error: Timed out while waiting for model 'openstack' to be ready 
$

[ How I upgraded to juju 3.4/stable ]

$ sunbeam prepare-node-script > prepare-node-script
$ vi prepare-node-script
:
sudo snap install --channel 3.4/stable juju
:
$ cat prepare-node-script | bash -x && newgrp snap_daemon
:
juju (3.4/stable) 3.4.2 from Canonical installed
$

No particular error is output to syslog. Could you give me any advice?
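
For reference, two commands commonly used to inspect a stuck Sunbeam deployment (both also appear later in this thread) are:

juju status -m openstack
sunbeam inspect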

I have made some progress.
When I gave the bootstrap command the options --topology single --database single,
all services came online (24/24) and OpenStack was installed.

However, if I then enable the sunbeam features secrets, orchestration, loadbalancer, and caas, I get the same timeout error.
Only vault succeeds.

$ sunbeam enable vault
OpenStack vault application enabled.
$
$ sunbeam enable secrets
Enabling OpenStack secrets application ... Timed out while waiting for model 'openstack' to be ready 
Error: Timed out while waiting for model 'openstack' to be ready
$
$ sunbeam enable orchestration
Enabling OpenStack orchestration application ... Timed out while waiting for model 'openstack' to be ready 
Error: Timed out while waiting for model 'openstack' to be ready
$
$ sunbeam enable loadbalancer
Enabling OpenStack loadbalancer application ... Timed out while waiting for model 'openstack' to be ready 
Error: Timed out while waiting for model 'openstack' to be ready
$
$ sunbeam enable caas
Enabling OpenStack caas application ... Timed out while waiting for model 'openstack' to be ready 
Error: Timed out while waiting for model 'openstack' to be ready
$

However, the pods appear to be up, except that there is no Magnum pod.

$ sudo microk8s.kubectl get pod -A
NAMESPACE        NAME                                       READY   STATUS    RESTARTS   AGE
kube-system      hostpath-provisioner-7df77bc496-skpnb      1/1     Running   0          172m
metallb-system   controller-5f7bb57799-7rgjp                1/1     Running   0          172m
metallb-system   speaker-zdvr4                              1/1     Running   0          172m
kube-system      coredns-864597b5fd-58hlp                   1/1     Running   0          172m
kube-system      calico-node-hqd7l                          1/1     Running   0          170m
kube-system      calico-kube-controllers-6b86b85cb4-k4gxn   1/1     Running   0          170m
openstack        modeloperator-8574dbb7f8-z6vrf             1/1     Running   0          170m
openstack        certificate-authority-0                    1/1     Running   0          169m
openstack        placement-mysql-router-0                   2/2     Running   0          168m
openstack        nova-mysql-router-0                        2/2     Running   0          168m
openstack        nova-api-mysql-router-0                    2/2     Running   0          168m
openstack        neutron-mysql-router-0                     2/2     Running   0          168m
openstack        horizon-mysql-router-0                     2/2     Running   0          168m
openstack        cinder-ceph-mysql-router-0                 2/2     Running   0          168m
openstack        nova-cell-mysql-router-0                   2/2     Running   0          168m
openstack        keystone-mysql-router-0                    2/2     Running   0          167m
openstack        glance-mysql-router-0                      2/2     Running   0          168m
openstack        cinder-mysql-router-0                      2/2     Running   0          168m
openstack        rabbitmq-0                                 2/2     Running   0          167m
openstack        mysql-0                                    2/2     Running   0          166m
openstack        ovn-relay-0                                2/2     Running   0          167m
openstack        placement-0                                2/2     Running   0          168m
openstack        traefik-0                                  2/2     Running   0          167m
openstack        traefik-public-0                           2/2     Running   0          167m
openstack        ovn-central-0                              4/4     Running   0          167m
openstack        keystone-0                                 2/2     Running   0          166m
openstack        cinder-ceph-0                              2/2     Running   0          166m
openstack        cinder-0                                   3/3     Running   0          167m
openstack        nova-0                                     4/4     Running   0          168m
openstack        neutron-0                                  2/2     Running   0          167m
openstack        horizon-0                                  2/2     Running   0          167m
openstack        glance-0                                   2/2     Running   0          166m
openstack        designate-mysql-router-0                   2/2     Running   0          139m
openstack        designate-0                                2/2     Running   0          139m
openstack        bind-0                                     2/2     Running   0          139m
openstack        vault-0                                    2/2     Running   0          123m
openstack        barbican-mysql-router-0                    2/2     Running   0          121m
openstack        barbican-0                                 3/3     Running   0          121m
openstack        heat-mysql-router-0                        2/2     Running   0          103m
openstack        heat-0                                     4/4     Running   0          103m
openstack        octavia-mysql-router-0                     2/2     Running   0          87m
openstack        octavia-0                                  4/4     Running   0          87m
$ 

Question
Are there any restrictions when using --database single, for example that the caas feature still cannot be enabled?

Sorry for the consecutive posts.

I found that the logs I get from sunbeam inspect repeatedly show DEBUG ConfigItem not found.

Actual log: ~/snap/openstack/common/logs/sunbeam-20240510-05176.150196.log

05:17:26,263 sunbeam.plugins.interface.v1.base DEBUG ConfigItem not found
05:17:26,263 sunbeam.clusterd.service DEBUG [get] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/Plugin-dns, args={'allow_redirects': True}
05:17:26,264 urllib3.connectionpool DEBUG http://localhost:None "GET /1.0/config/Plugin-dns HTTP/1.1" 404 125
05:17:26,264 sunbeam.clusterd.service DEBUG Response(<Response [404]>) = {"type": "error", "status": "", "status_code": 0, "operation": "", "error_code": 404, "error": "ConfigItem not found", "metadata": null}

Is there any way to get past this error?

Hi, I installed a microcloud (sunbeam 2024.1, 3 nodes) and everything seems to be working, except this:
sunbeam configure caas

terraform init failed:

Initializing the backend...

Error refreshing state: HTTP remote state endpoint invalid auth

Error: Command '['/snap/openstack/550/bin/terraform', 'init', '-upgrade', '-no-color']' returned non-zero exit status 1.

I have no idea where the error could be.
I report juju status below, and everything is working.
What am I missing?
Thanks

juju status -m openstack
Model Controller Cloud/Region Version SLA Timestamp
openstack sunbeam-controller sunbeam-microk8s/localhost 3.4.3 unsupported 12:35:14Z

SAAS Status Store URL
microceph active local admin/controller.microceph

App Version Status Scale Charm Channel Rev Address Exposed Message
barbican active 3 barbican-k8s 2024.1/edge 43 10.152.183.192 no
barbican-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.246 no
barbican-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.237 no
certificate-authority active 1 self-signed-certificates latest/beta 151 10.152.183.229 no
cinder active 3 cinder-k8s 2024.1/edge 80 10.152.183.151 no
cinder-ceph active 3 cinder-ceph-k8s 2024.1/edge 77 10.152.183.111 no
cinder-ceph-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.92 no
cinder-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.29 no
cinder-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.142 no
glance active 3 glance-k8s 2024.1/edge 98 10.152.183.91 no
glance-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.28 no
glance-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.175 no
heat active 3 heat-k8s 2024.1/edge 62 10.152.183.57 no
heat-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.144 no
heat-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.230 no
horizon active 3 horizon-k8s 2024.1/edge 92 10.152.183.110 no http://172.16.3.205/openstack-horizon
horizon-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.102 no
horizon-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.123 no
images-sync active 1 openstack-images-sync-k8s 2024.1/edge 18 10.152.183.182 no
keystone active 3 keystone-k8s 2024.1/edge 195 10.152.183.186 no
keystone-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.99 no
keystone-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.45 no
magnum active 3 magnum-k8s 2024.1/edge 38 10.152.183.212 no
magnum-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.185 no
magnum-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.200 no
neutron active 3 neutron-k8s 2024.1/edge 100 10.152.183.32 no
neutron-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.19 no
neutron-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.160 no
nova active 3 nova-k8s 2024.1/edge 90 10.152.183.189 no
nova-api-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.247 no
nova-cell-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.219 no
nova-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.106 no
nova-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.145 no
octavia active 3 octavia-k8s 2024.1/edge 41 10.152.183.187 no
octavia-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.195 no
octavia-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.156 no
ovn-central active 3 ovn-central-k8s 24.03/edge 92 10.152.183.26 no
ovn-relay active 3 ovn-relay-k8s 24.03/edge 79 172.16.3.202 no
placement active 3 placement-k8s 2024.1/edge 75 10.152.183.198 no
placement-mysql 8.0.36-0ubuntu0.22.04.1 active 3 mysql-k8s 8.0/stable 153 10.152.183.138 no
placement-mysql-router 8.0.35-0ubuntu0.22.04.1 active 3 mysql-router-k8s 8.0/stable 96 10.152.183.30 no
rabbitmq 3.12.1 active 3 rabbitmq-k8s 3.12/stable 33 172.16.3.201 no
tempest active 1 tempest-k8s 2024.1/edge 43 10.152.183.40 no
traefik 2.10.4 active 3 traefik-k8s 1.0/stable 164 172.16.3.204 no
traefik-public 2.10.4 active 3 traefik-k8s 1.0/stable 164 172.16.3.205 no
traefik-rgw 2.10.4 active 3 traefik-k8s 1.0/stable 164 172.16.3.203 no
vault active 1 vault-k8s latest/edge 61 10.152.183.37 no

Unit Workload Agent Address Ports Message
barbican-mysql-router/0* active idle 10.1.138.245
barbican-mysql-router/1 active idle 10.1.120.183
barbican-mysql-router/2 active idle 10.1.175.241
barbican-mysql/0 active idle 10.1.175.242
barbican-mysql/1 active idle 10.1.138.246
barbican-mysql/2* active idle 10.1.120.184 Primary
barbican/0 active idle 10.1.175.243
barbican/1 active idle 10.1.138.247
barbican/2* active idle 10.1.120.185
certificate-authority/0* active idle 10.1.120.135
cinder-ceph-mysql-router/0* active idle 10.1.120.170
cinder-ceph-mysql-router/1 active idle 10.1.138.237
cinder-ceph-mysql-router/2 active idle 10.1.175.237
cinder-ceph/0* active idle 10.1.120.171
cinder-ceph/1 active idle 10.1.138.200
cinder-ceph/2 active idle 10.1.175.200
cinder-mysql-router/0* active idle 10.1.120.169
cinder-mysql-router/1 active idle 10.1.138.196
cinder-mysql-router/2 active idle 10.1.175.196
cinder-mysql/0* active idle 10.1.120.157 Primary
cinder-mysql/1 active idle 10.1.175.224
cinder-mysql/2 active idle 10.1.138.225
cinder/0* active idle 10.1.120.180
cinder/1 active idle 10.1.175.197
cinder/2 active idle 10.1.138.197
glance-mysql-router/0* active idle 10.1.120.172
glance-mysql-router/1 active idle 10.1.138.198
glance-mysql-router/2 active idle 10.1.175.198
glance-mysql/0* active idle 10.1.120.149 Primary
glance-mysql/1 active idle 10.1.175.229
glance-mysql/2 active idle 10.1.138.228
glance/0* active idle 10.1.120.178
glance/1 active idle 10.1.175.236
glance/2 active idle 10.1.138.236
heat-mysql-router/0 active idle 10.1.175.246
heat-mysql-router/1* active idle 10.1.120.188
heat-mysql-router/2 active idle 10.1.138.250
heat-mysql/0* active idle 10.1.175.245 Primary
heat-mysql/1 active idle 10.1.120.187
heat-mysql/2 active idle 10.1.138.249
heat/0 active idle 10.1.175.247
heat/1 active idle 10.1.138.251
heat/2* active idle 10.1.120.189
horizon-mysql-router/0* active idle 10.1.120.165
horizon-mysql-router/1 active idle 10.1.175.208
horizon-mysql-router/2 active idle 10.1.138.208
horizon-mysql/0* active idle 10.1.120.155 Primary
horizon-mysql/1 active idle 10.1.138.230
horizon-mysql/2 active idle 10.1.175.231
horizon/0* active idle 10.1.120.168
horizon/1 active idle 10.1.138.201
horizon/2 active idle 10.1.175.201
images-sync/0* active idle 10.1.138.216
keystone-mysql-router/0* active idle 10.1.120.160
keystone-mysql-router/1 active idle 10.1.175.202
keystone-mysql-router/2 active idle 10.1.138.202
keystone-mysql/0* active idle 10.1.120.158 Primary
keystone-mysql/1 active idle 10.1.175.227
keystone-mysql/2 active idle 10.1.138.226
keystone/0* active idle 10.1.120.164
keystone/1 active idle 10.1.138.235
keystone/2 active idle 10.1.175.235
magnum-mysql-router/0 active idle 10.1.138.254
magnum-mysql-router/1 active idle 10.1.175.250
magnum-mysql-router/2* active idle 10.1.120.129
magnum-mysql/0 active idle 10.1.175.249
magnum-mysql/1* active idle 10.1.120.191 Primary
magnum-mysql/2 active idle 10.1.138.253
magnum/0 active idle 10.1.175.251
magnum/1 active idle 10.1.138.255
magnum/2* active idle 10.1.120.136
neutron-mysql-router/0* active idle 10.1.120.174
neutron-mysql-router/1 active idle 10.1.138.199
neutron-mysql-router/2 active idle 10.1.175.199
neutron-mysql/0* active idle 10.1.120.148 Primary
neutron-mysql/1 active idle 10.1.175.218
neutron-mysql/2 active idle 10.1.138.219
neutron/0* active idle 10.1.120.175
neutron/1 active idle 10.1.138.207
neutron/2 active idle 10.1.175.207
nova-api-mysql-router/0* active idle 10.1.120.177
nova-api-mysql-router/1 active idle 10.1.138.204
nova-api-mysql-router/2 active idle 10.1.175.204
nova-cell-mysql-router/0* active idle 10.1.120.179
nova-cell-mysql-router/1 active idle 10.1.138.206
nova-cell-mysql-router/2 active idle 10.1.175.206
nova-mysql-router/0* active idle 10.1.120.176
nova-mysql-router/1 active idle 10.1.138.205
nova-mysql-router/2 active idle 10.1.175.205
nova-mysql/0* active idle 10.1.120.146 Primary
nova-mysql/1 active idle 10.1.175.233
nova-mysql/2 active idle 10.1.138.232
nova/0* active idle 10.1.120.181
nova/1 active idle 10.1.175.209
nova/2 active idle 10.1.138.209
octavia-mysql-router/0 active idle 10.1.175.254
octavia-mysql-router/1 active idle 10.1.120.142
octavia-mysql-router/2* active idle 10.1.138.212
octavia-mysql/0 active idle 10.1.175.253
octavia-mysql/1 active idle 10.1.120.141
octavia-mysql/2* active idle 10.1.138.194 Primary
octavia/0 active idle 10.1.175.195
octavia/1 active idle 10.1.138.215
octavia/2* active idle 10.1.120.144
ovn-central/0* active idle 10.1.120.163
ovn-central/1 active idle 10.1.175.216
ovn-central/2 active idle 10.1.138.214
ovn-relay/0* active idle 10.1.120.139
ovn-relay/1 active idle 10.1.138.193
ovn-relay/2 active idle 10.1.175.193
placement-mysql-router/0* active idle 10.1.120.166
placement-mysql-router/1 active idle 10.1.175.203
placement-mysql-router/2 active idle 10.1.138.203
placement-mysql/0* active idle 10.1.120.156 Primary
placement-mysql/1 active idle 10.1.175.217
placement-mysql/2 active idle 10.1.138.218
placement/0* active idle 10.1.120.167
placement/1 active idle 10.1.175.238
placement/2 active idle 10.1.138.238
rabbitmq/0* active idle 10.1.120.138
rabbitmq/1 active idle 10.1.175.210
rabbitmq/2 active idle 10.1.138.210
tempest/0* active idle 10.1.175.239
traefik-public/0* active idle 10.1.120.152
traefik-public/1 active idle 10.1.138.220
traefik-public/2 active idle 10.1.175.221
traefik-rgw/0* active idle 10.1.120.145
traefik-rgw/1 active idle 10.1.175.211
traefik-rgw/2 active idle 10.1.138.211
traefik/0* active idle 10.1.120.140
traefik/1 active idle 10.1.138.223
traefik/2 active idle 10.1.175.222
vault/0* active idle 10.1.138.243

Offer Application Charm Rev Connected Endpoint Interface Role
cert-distributor keystone keystone-k8s 195 1/1 send-ca-cert certificate_transfer provider
certificate-authority certificate-authority self-signed-certificates 151 1/1 certificates tls-certificates provider
cinder-ceph cinder-ceph cinder-ceph-k8s 77 1/1 ceph-access cinder-ceph-key provider
keystone-credentials keystone keystone-k8s 195 1/1 identity-credentials keystone-credentials provider
keystone-endpoints keystone keystone-k8s 195 1/1 identity-service keystone provider
nova nova nova-k8s 90 1/1 nova-service nova provider
ovn-relay ovn-relay ovn-relay-k8s 79 1/1 ovsdb-cms-relay ovsdb-cms provider
rabbitmq rabbitmq rabbitmq-k8s 33 1/1 amqp rabbitmq provider
traefik-rgw traefik-rgw traefik-k8s 164 1/1 traefik-route traefik_route provider

@gboutry I installed everything and it all works fine except for sunbeam configure caas,
as also reported by @cristianomeloni.

The log:

           ...
           DEBUG    Updating /home/giancarlo/snap/openstack/common/etc/fossrc/caas-setup from                            deployment.py:286
                    /snap/openstack/577/lib/python3.10/site-packages/sunbeam/features/caas/etc/caas-setup...                              
           DEBUG    Starting step 'Initialize Terraform'                                                                     common.py:242
           DEBUG    Running step Initialize Terraform                                                                        common.py:258
           DEBUG    Running command /snap/openstack/577/bin/terraform init -upgrade -no-color                             terraform.py:164
[08:23:40] ERROR    terraform init failed:                                                                                terraform.py:177
                    Initializing the backend...                                                                                           
                                                                                                                                          
           WARNING  Error refreshing state: HTTP remote state endpoint invalid auth                                       terraform.py:178
                                                                                                                                          
           DEBUG    Finished running step 'Initialize Terraform'. Result: ResultType.FAILED                                  common.py:261
Error: Command '['/snap/openstack/577/bin/terraform', 'init', '-upgrade', '-no-color']' returned non-zero exit status 1.

Any suggestion?

Thanks
Giancarlo

In addition, I see that even without caas configure, the OpenStack dashboard is able to manage clusters and templates (Infra menu), so the caas configure step is probably not necessary to proceed. Right?

Thanks for reporting, I’ll take a look.

sunbeam configure caas

is basically only uploading the CoreOS image to Glance, so it will work without it if you upload an image yourself

Bugs : OpenStack Snap is our preferred place to report bugs, thank you!

Edit: there was indeed an issue with the authentication credentials for the Terraform backend. I submitted a fix; it should land this week.


FYI, in case it is useful for anyone, I uploaded the image in this manner from the client host:

wget https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/38.20230806.3.0/x86_64/fedora-coreos-38.20230806.3.0-openstack.x86_64.qcow2.xz
unxz fedora-coreos-38.20230806.3.0-openstack.x86_64.qcow2.xz
nohup openstack image create --disk-format=qcow2 --container-format=bare --file=fedora-coreos-38.20230806.3.0-openstack.x86_64.qcow2 --property os_distro='fedora-coreos' fedora-coreos-latest &

Now I will go ahead and test.

OK, I’ll do that.

Thanks !

@gboutry After uploading CoreOS and creating the template, when I run cluster create I receive CREATE_FAILED with status_reason | Programming error choosing an endpoint.
Is this related to the missing sunbeam configure caas step, or could it be a bug to report?
Thanks for any suggestions.

I see the new openstack release 578 (from last night): no more auth issue, but at the end of the upload I receive an error even though the image looks well uploaded, and if I run configure caas again, the previous image is considered “tainted”, deleted, and uploaded again with the same error.
Here the log:

 WARNING                                                                                                        terraform.py:212
                    Error: Error waiting for Image: context deadline exceeded                                                             
                                                                                                                                          
                      with openstack_images_image_v2.caas-image,                                                                          
                      on main.tf line 28, in resource "openstack_images_image_v2" "caas-image":                                           
                      28: resource "openstack_images_image_v2" "caas-image" {                                                             
                                                                                                                                          
                                                                                                                                          
           ERROR    Error configuring Container as a Service feature.                                                       feature.py:113
                    Traceback (most recent call last):                                                                                    
                      File "/snap/openstack/578/lib/python3.10/site-packages/sunbeam/commands/terraform.py", line 199, in                 
                    apply                                                                                                                 
                        process = subprocess.run(                                                                                         
                      File "/usr/lib/python3.10/subprocess.py", line 526, in run                                                          
                        raise CalledProcessError(retcode, process.args,                                                                   
                    subprocess.CalledProcessError: Command '['/snap/openstack/578/bin/terraform', 'apply', '-auto-approve',               
                    '-no-color']' returned non-zero exit status 1.

I don’t really get your issue. You uploaded a first image, and then ran configure caas. Did this result in a failure?

No, sorry, my explanation was bad. The issue occurs when running configure with a large image: probably a slow network and some kind of timeout. I checked with a small image and everything went well.
But now the issue is with cluster create, as posted before: it seems to be an error at the first step of Magnum deploying the cluster, something related to “no endpoint”, as posted in this bug: Bug #2077534 “CAAS cluster create failed with reason Programming...” : Bugs : OpenStack Snap. I debugged a little deeper and the error may come from this function: keystoneauth/keystoneauth1/access/service_catalog.py at cca6c92f038a85f75f697659a21c451b72f9ff1d · openstack/keystoneauth · GitHub.
Do you have any idea about that? Maybe Magnum is missing some configuration/relation or something else?
Thanks anyway for your attention.

@gboutry UPDATE: I started a fresh deployment (1 client, 1 juju controller, 1 infra, 3 nodes) and it went well until coe cluster create, when cluster creation failed with this (new to me) error:

Stack ID
    487dfdea-6ffc-443e-8fb7-d20105bd860e
Stack Faults
    as follows:
default-master
    Resource CREATE failed: StackValidationFailed: resources.api_lb.resources.floating: Property error: floating.Properties.port_id: Multiple port matches found for name '', use an ID to be more specific.
default-worker
    Resource CREATE failed: StackValidationFailed: resources.api_lb.resources.floating: Property error: floating.Properties.port_id: Multiple port matches found for name '', use an ID to be more specific.

This is a few steps further than my previous deployment, but something is wrong: either a bug or my mistake.
Any idea?
Thanks!

@gboutry UPDATE2: I continue to have problems with cluster create. Now cluster create starts and I can see the stack running its resources; some reach Create Complete and others Init Complete, BUT “kube_masters” stays in Create In Progress until cluster create times out.
I also see that in Load Balancers the Operating Status is ERROR.
Any idea? How can I debug why creation times out?
Thanks
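
For anyone debugging a similar timeout, a few OpenStack CLI commands can surface the underlying Heat and Octavia state (a hedged sketch: it assumes the Heat and Octavia client plugins are installed, and the stack ID comes from openstack coe cluster show):

openstack stack resource list --nested-depth 2 <stack-id>
openstack stack failures list <stack-id>
openstack loadbalancer list
openstack loadbalancer status show <loadbalancer-id>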