Manage a proxied environment

This page shows how to configure proxy settings for MicroStack. This is required for an environment that has network egress traffic restrictions placed upon it. These restrictions are typically implemented via a corporate proxy server that is separate from the MicroStack deployment.

The proxy server itself must permit access to certain external (internet) resources in order for MicroStack to deploy (and operate) correctly. These resources are listed on the Proxy ACL access reference page.

Note: A proxied environment is currently only supported in channel 2023.2/candidate and later of the openstack snap.

Configure for the proxy at the OS level

The steps given in the following two sub-sections will allow a network host to “talk” to your local proxy server.

Important: Perform the instructions on all of your MicroStack nodes. Do this before installing your cluster (as described on either the Multi-node or Single-node guided pages).

Provide the initial settings

Set proxy values in the /etc/environment file via the well-known environment variables. Be sure to include the management CIDR and the MetalLB/load balancer CIDR in the NO_PROXY variable.

Below are example commands for providing these initial proxy settings:

echo "HTTP_PROXY=http://squid.proxy:3128" | sudo tee -a /etc/environment
echo "HTTPS_PROXY=http://squid.proxy:3128" | sudo tee -a /etc/environment
echo "NO_PROXY=localhost,127.0.0.1,localhost,10.121.193.0/24,10.20.21.0/27" | sudo tee -a /etc/environment

Restart snapd

Restart snapd so that it becomes aware of the new settings in /etc/environment:

sudo systemctl restart snapd

This will allow snaps to be installed on the configured nodes.
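
As an optional sanity check (not part of the official steps), you can verify that the proxy is reachable from the node and that snap installs now succeed. The commands below assume the example squid.proxy:3128 address used earlier:

curl -sI --proxy http://squid.proxy:3128 https://api.snapcraft.io
sudo snap install hello-world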

Show proxy settings

Run the following command to view the proxy settings:

sunbeam proxy show

Here is sample output from the above command:

Proxy variable   Value
HTTP_PROXY       http://10.121.193.112:3128
HTTPS_PROXY      http://10.121.193.112:3128
NO_PROXY         localhost,127.0.0.1,10.121.193.0/24,10.20.21.0/27

Update proxy settings

You can update the proxy settings at any time after the bootstrap is complete.
To update the proxy settings, run:

sunbeam proxy set --http-proxy <http-proxy> --https-proxy <https-proxy> --no-proxy <no-proxy>
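
For example, reusing the proxy values from the earlier /etc/environment example (adjust them to your environment):

sunbeam proxy set \
  --http-proxy http://squid.proxy:3128 \
  --https-proxy http://squid.proxy:3128 \
  --no-proxy localhost,127.0.0.1,10.121.193.0/24,10.20.21.0/27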

Clear proxy settings

To clear the proxy settings, run the following command:

sunbeam proxy clear

The above command will clear the proxy settings in /etc/environment and in the model configs of the Sunbeam-created models.
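
To confirm that the settings were removed, you can check both places again, for example:

sunbeam proxy show
grep -E '^(HTTP_PROXY|HTTPS_PROXY|NO_PROXY)=' /etc/environment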

$ cat /etc/environment  | grep PROXY
HTTP_PROXY=http://prx.domain.local:3128
HTTPS_PROXY=http://prx.domain.local:3128
NO_PROXY=127.0.0.0/8,10.0.0.0/8,192.168.0.0/16,172.16.0.0/16,localhost,.domain.local
$ sudo snap restart snapd
error: snap "snapd" has no services
$ sunbeam proxy show
Usage: sunbeam [OPTIONS] COMMAND [ARGS]...
Try 'sunbeam -h' for help.

Error: No such command 'proxy'.
$ snap list
Name       Version        Rev    Tracking       Publisher   Notes
core20     20240227       2264   latest/stable  canonical✓  base
core22     20240408       1380   latest/stable  canonical✓  base
juju       3.2.4          25443  3.2/stable     canonical✓  -
lxd        5.0.3-d921d2e  28373  5.0/stable/…   canonical✓  -
openstack  2023.2         335    2023.2/stable  canonical✓  -
snapd      2.62           21465  latest/stable  canonical✓  snapd

What’s wrong?
~$ sunbeam -v cluster bootstrap --role control --role compute --role storage

 Bootstrapping Juju onto machine ... [14:31:50] ERROR    Error bootstrapping Juju                                                                                                                                                                                     juju.py:296
                    Traceback (most recent call last):
                      File "/snap/openstack/335/lib/python3.10/site-packages/sunbeam/commands/juju.py", line 289, in run
                        process = subprocess.run(cmd, capture_output=True, text=True, check=True)
                      File "/usr/lib/python3.10/subprocess.py", line 526, in run
                        raise CalledProcessError(retcode, process.args,
                    subprocess.CalledProcessError: Command '['/snap/openstack/335/juju/bin/juju', 'bootstrap', 'sunbeam', 'sunbeam-controller']' returned non-zero exit status 1.
           WARNING  Creating Juju controller "sunbeam-controller" on sunbeam/default                                                                                                                                             juju.py:297
                    Looking for packaged Juju agent version 3.2.4 for amd64
                    WARNING Got error requesting "https://streams.canonical.com/juju/tools/streams/v1/index2.sjson": Get "https://streams.canonical.com/juju/tools/streams/v1/index2.sjson": dial tcp: lookup
                    streams.canonical.com on 127.0.0.53:53: read udp 127.0.0.1:60200->127.0.0.53:53: i/o timeout
                    WARNING Got error requesting "https://streams.canonical.com/juju/tools/streams/v1/index2.sjson": Get "https://streams.canonical.com/juju/tools/streams/v1/index2.sjson": dial tcp: lookup
                    streams.canonical.com on 127.0.0.53:53: server misbehaving
                    WARNING Got error requesting "https://streams.canonical.com/juju/tools/streams/v1/index2.sjson": Get "https://streams.canonical.com/juju/tools/streams/v1/index2.sjson": dial tcp: lookup
                    streams.canonical.com on 127.0.0.53:53: read udp 127.0.0.1:50459->127.0.0.53:53: i/o timeout
                    ERROR failed to bootstrap model: cannot read index data, attempt count exceeded: cannot access URL "https://streams.canonical.com/juju/tools/streams/v1/index2.sjson": Get
                    "https://streams.canonical.com/juju/tools/streams/v1/index2.sjson": dial tcp: lookup streams.canonical.com on 127.0.0.53:53: read udp 127.0.0.1:50459->127.0.0.53:53: i/o timeout

           DEBUG    Finished running step 'Bootstrap Juju'. Result: ResultType.FAILED                                                                                                                                          common.py:260
Error: Command '['/snap/openstack/335/juju/bin/juju', 'bootstrap', 'sunbeam', 'sunbeam-controller']' returned non-zero exit status 1.

Hi @m4t7e0, proxy support is found in the 2023.2/edge channel of the openstack snap (as stated). You appear to have 2023.2/stable installed.

Thanks @pmatulis, I switched the channel and the steps proceed further now, but MicroStack is hanging at …

07:58:54] DEBUG    {'addons': {'dns': '', 'metallb': '192.168.1.140-192.168.1.149,192.168.1.180-192.168.1.189', 'hostpath-storage': ''}}                                                                                                                           microk8s.py:143
           DEBUG    [put] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/TerraformVarsMicrok8sAddons, args={'data': '{"addons": {"dns": "", "metallb": "192.168.1.140-192.168.1.149,192.168.1.180-192.168.1.189",                service.py:120
                    "hostpath-storage": ""}}'}
           DEBUG    http://localhost:None "PUT /1.0/config/TerraformVarsMicrok8sAddons HTTP/1.1" 200 108                                                                                                                                                      connectionpool.py:456
           DEBUG    Response(<Response [200]>) = {"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":{}}                                                                                                         service.py:122

⠸ Deploying MicroK8S ... [07:58:55] DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
           DEBUG    Skipping step Deploy MicroK8S                                                                                                                                                                                                                     common.py:270
           DEBUG    Starting step 'Add MicroK8S unit'                                                                                                                                                                                                                 common.py:260
           DEBUG    [get] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/nodes, args={'allow_redirects': True}                                                                                                                          service.py:120
           DEBUG    http://localhost:None "GET /1.0/nodes HTTP/1.1" 200 219                                                                                                                                                                                   connectionpool.py:456
           DEBUG    Response(<Response [200]>) =                                                                                                                                                                                                                     service.py:122
                    {"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":[{"name":"node01.domain.local","role":["compute","control","storage"],"machineid":0,"systemid":""}]}

⠋ Adding MicroK8S unit to machine ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
           DEBUG    Skipping step Add MicroK8S unit                                                                                                                                                                                                                   common.py:270
           DEBUG    Starting step 'Store MicroK8S config'                                                                                                                                                                                                             common.py:260
           DEBUG    [get] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/Microk8sConfig, args={'allow_redirects': True}                                                                                                          service.py:120
           DEBUG    http://localhost:None "GET /1.0/config/Microk8sConfig HTTP/1.1" 404 125                                                                                                                                                                   connectionpool.py:456
           DEBUG    Response(<Response [404]>) = {"type":"error","status":"","status_code":0,"operation":"","error_code":404,"error":"ConfigItem not found","metadata":null}                                                                                         service.py:122


           DEBUG    Running step Store MicroK8S config                                                                                                                                                                                                                common.py:276
⠋ Storing MicroK8S configuration in sunbeam database ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
⠙ Storing MicroK8S configuration in sunbeam database ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
⠸ Storing MicroK8S configuration in sunbeam database ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
⠼ Storing MicroK8S configuration in sunbeam database ...

Hello. I’m uncertain about your last message. Are you okay now? I do see:

DEBUG Response(<Response [404]>) = {"type":"error","status":"","status_code":0,"operation":"","error_code":404,"error":"ConfigItem not found","metadata":null} service.py:122

No, I reverted the setup, following the guide “MicroStack - Removing the primary node”,
and ran:

rm -rf /etc/apt/apt.conf.d/95-ju*
rm -rf /etc/systemd/system/snap-ju*
rm -rf /home/ubuntu/.local/share/ju*
rm -rf /tmp/snap-private-tmp/snap.open*
rm -rf /tmp/snap-private-tmp/snap.ju*
rm -rf /tmp/ju*
rm -rf /var/lib/snapd/mount/snap.ju*
rm -rf /var/lib/snapd/inhibit/ju*
rm -rf /var/lib/snapd/seccomp/bpf/snap.ju*
rm -rf /var/lib/snapd/sequence/ju*
rm -rf /var/lib/snapd/cookie/snap.ju*
rm -rf /var/lib/snapd/apparmor/profiles/snap.ju*
rm -rf /var/lib/snapd/apparmor/profiles/snap-update-ns.ju*
rm -rf /var/lib/snapd/snaps/ju*
rm -rf /var/lib/snapd/cgroup/snap.ju*
rm -rf /var/snap/ju*
rm -rf /var/cache/apparmor/72e9179b.0/snap.ju*
rm -rf /var/cache/apparmor/72e9179b.0/snap-update-ns.ju*
rm -rf /var/log/ju*
rm -rf /var/lib/snapd/snapshots/1_open*
rm -rf /var/lib/snapd/sequence/open*
rm -rf /run/snapd/lock/open*
rm -rf /run/snapd/lock/ju*
rm -rf /run/snapd/ns/ju*
rm -rf /run/snapd/ns/snap.ju*

Something has changed, but…

⠦ Deploying MicroK8S ... [14:47:11] DEBUG    Command finished. stdout=data.juju_model.machine_model: Reading...                                                                                                                                                                             terraform.py:197
                    data.juju_model.machine_model: Read complete after 0s [id=b2ccbd26-52af-4d20-84d8-4bc7efd64224]

                    Terraform used the selected providers to generate the following execution
                    plan. Resource actions are indicated with the following symbols:
                      + create

                    Terraform will perform the following actions:

                      # juju_application.microk8s will be created
                      + resource "juju_application" "microk8s" {
                          + config      = {
                              + "addons"                        = "dns: hostpath-storage: metallb:192.168.1.140-192.168.1.149,192.168.1.180-192.168.1.189"
                              + "channel"                       = "1.28-strict/stable"
                              + "disable_cert_reissue"          = "true"
                              + "kubelet_serialize_image_pulls" = "false"
                              + "skip_verify"                   = "true"
                            }
                          + constraints = (known after apply)
                          + id          = (known after apply)
                          + model       = "controller"
                          + name        = "microk8s"
                          + placement   = (known after apply)
                          + principal   = (known after apply)
                          + trust       = true
                          + units       = 0

                          + charm {
                              + base     = "ubuntu@22.04"
                              + channel  = "legacy/stable"
                              + name     = "microk8s"
                              + revision = (known after apply)
                              + series   = (known after apply)
                            }
                        }

                    Plan: 1 to add, 0 to change, 0 to destroy.
                    juju_application.microk8s: Creating...
                    juju_application.microk8s: Creation complete after 1s [id=controller:microk8s]

                    Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
                    , stderr=
⠧ Deploying MicroK8S ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
⠏ Deploying MicroK8S ... [14:47:12] DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
           DEBUG    Application 'microk8s' is in status: 'unknown'                                                                                                                                                                                                      juju.py:589
           DEBUG    Waiting for app status to be: unknown ['active', 'unknown']                                                                                                                                                                                         juju.py:592
           DEBUG    Finished running step 'Deploy MicroK8S'. Result: ResultType.COMPLETED                                                                                                                                                                             common.py:279
           DEBUG    Starting step 'Add MicroK8S unit'                                                                                                                                                                                                                 common.py:260
           DEBUG    [get] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/nodes, args={'allow_redirects': True}                                                                                                                          service.py:120
           DEBUG    http://localhost:None "GET /1.0/nodes HTTP/1.1" 200 219                                                                                                                                                                                   connectionpool.py:456
           DEBUG    Response(<Response [200]>) =                                                                                                                                                                                                                     service.py:122
                    {"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":[{"name":"node03.domain.local","role":["compute","control","storage"],"machineid":0,"systemid":""}]}

⠋ Adding MicroK8S unit to machine ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
           DEBUG    Running step Add MicroK8S unit                                                                                                                                                                                                                    common.py:276
⠙ Adding MicroK8S unit to machine ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
           DEBUG    [get] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/TerraformVarsMicrok8s, args={'allow_redirects': True}                                                                                                   service.py:120
           DEBUG    http://localhost:None "GET /1.0/config/TerraformVarsMicrok8s HTTP/1.1" 200 211                                                                                                                                                            connectionpool.py:456
           DEBUG    Response(<Response [200]>) = {"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":"{\"charm_microk8s_channel\": \"legacy/stable\", \"machine_ids\": [], \"machine_model\": \"controller\"}"}  service.py:122

           DEBUG    [put] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/TerraformVarsMicrok8s, args={'data': '{"charm_microk8s_channel": "legacy/stable", "machine_ids": ["0"], "machine_model": "controller"}'}                service.py:120
⠹ Adding MicroK8S unit to machine ...            DEBUG    http://localhost:None "PUT /1.0/config/TerraformVarsMicrok8s HTTP/1.1" 200 108                                                                                                                                                            connectionpool.py:456
           DEBUG    Response(<Response [200]>) = {"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":{}}                                                                                                         service.py:122

⠸ Adding MicroK8S unit to machine ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                       connector.py:124
           DEBUG    Unit 'microk8s/0' is in status: agent='allocating', workload='waiting'                                                                                                                                                                              juju.py:693
⠏ Adding MicroK8S unit to machine ...

What am I looking at? Can you be specific about what you need help with?

Secondly, I discovered that deploying from the 2023.2/edge channel has some inherent problems. I needed to do the following to ensure that all dependency software uses the correct versions:

sunbeam cluster bootstrap --manifest /snap/openstack/current/etc/manifests/edge.yml

I’m trying to deploy MicroStack in a proxied environment, as described in the web guide…

sudo snap install openstack --channel 2023.2/edge
sunbeam prepare-node-script | bash -x && newgrp snap_daemon
sunbeam -v  cluster bootstrap --role control --role compute --role storage  --manifest /snap/openstack/current/etc/manifests/edge.yml

I’m trying to run MicroStack on my 4 nodes (Ubuntu 22.04).

The network was set up as follows:
bond0: 2x10 Gbps
vlan bond0.20 set as ovn br-api (192.168.20.X)
vlan bond0.21 set as ovn br-floating (192.168.21.X)
bond1: 2x10 Gbps
vlan bond1.31 set as ovn br-storage (192.168.31.X)
1 Gbit ovn br-mgm (192.168.2.X)

with a local proxy for downloading packages from the internet.

I have tested this bootstrap many times to find the right proxy settings:

cat /etc/environment looks like this:

HTTP_PROXY=http://prx.domain.local:3129
HTTPS_PROXY=http://prx.domain.local:3129
NO_PROXY=192.168.0.0/16,127.0.0.1,.domain.local,10.1.0.0/16,10.0.0.0/8,127.0.0.0/8,.domain2.net,172.16.0.0/10,.svc,localhost,10.152.183.0/24,127.0.0.53
no_proxy=localhost,domain.net,.domain.local,127.0.0.1,127.0.0.53,10.0.0.0/8,192.168.0.0/16,172.16.0.0/10,127.0.0.0/8
http_proxy=http://prx.domain.local:3129

At every step I had some issue:
First was the Juju proxy settings… (solved)

Second is MicroK8s (still some issues)

ubuntu@node-03:/$ journalctl -r
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + sleep 2
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + n=5
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: Waiting for default route to appear. (attempt 4)
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + echo 'Waiting for default route to appear. (attempt 4)'
May 06 16:25:35 node-03 microk8s.daemon-kubelite[469397]: + ip -6 route
May 06 16:25:35 node-03 microk8s.daemon-kubelite[469398]: + grep '^default'
May 06 16:25:35 node-03 microk8s.daemon-kubelite[469397]: + ip route
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + default_route_exists
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + '[' 4 -ge 5 ']'
May 06 16:25:34 node-03 microk8s.daemon-apiserver-kicker[469337]: Setting up the CNI
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + sleep 2
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + n=4
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: Waiting for default route to appear. (attempt 3)
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + echo 'Waiting for default route to appear. (attempt 3)'
May 06 16:25:33 node-03 microk8s.daemon-kubelite[469312]: + ip -6 route
May 06 16:25:33 node-03 microk8s.daemon-kubelite[469313]: + grep '^default'
May 06 16:25:33 node-03 microk8s.daemon-kubelite[469312]: + ip route
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + default_route_exists
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + '[' 3 -ge 5 ']'
May 06 16:25:33 node-03 systemd[1]: snap.microk8s.microk8s-ee3f831c-9799-4a05-a1f1-c28e4b55f838.scope: Consumed 2.673s CPU time.
May 06 16:25:33 node-03 systemd[1]: snap.microk8s.microk8s-ee3f831c-9799-4a05-a1f1-c28e4b55f838.scope: Deactivated successfully.
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[190115]: 2024/05/06 16:25:32 Applying /var/snap/microk8s/common/etc/launcher/install.yaml
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[190115]: 2024/05/06 16:25:32 Failed to apply configuration file /var/snap/microk8s/common/etc/launcher/install.yaml: failed to apply config part 0: failed to reconcile addons: failed to enable addon "dns": c>
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: subprocess.CalledProcessError: Command '('/snap/microk8s/6532/microk8s-kubectl.wrapper', 'get', 'all,ingress', '--all-namespaces')' returned non-zero exit status 1.
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     raise CalledProcessError(self.returncode, self.args, self.stdout,
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/usr/lib/python3.8/subprocess.py", line 448, in check_returncode
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     result.check_returncode()
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 69, in run
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     return run(KUBECTL, "get", cmd, "--all-namespaces", die=False)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 248, in kubectl_get
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     kube_output = kubectl_get("all,ingress")
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 566, in get_status
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     enabled_addons_info, disabled_addons_info = get_status(available_addons_info, True)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 514, in unprotected_xable
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     unprotected_xable(action, addon_args)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 498, in protected_xable
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     protected_xable(action, addon_args)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 470, in xable
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     xable("enable", addons)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/scripts/wrappers/enable.py", line 37, in enable
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     return callback(*args, **kwargs)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     return ctx.invoke(self.callback, **ctx.params)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     rv = self.invoke(ctx)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/usr/lib/python3/dist-packages/click/core.py", line 717, in main
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     return self.main(*args, **kwargs)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:     enable(prog_name="microk8s enable")
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]:   File "/snap/microk8s/6532/scripts/wrappers/enable.py", line 41, in <module>
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: Traceback (most recent call last):
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Heartbeat was sent 35.043453831s ago, sleep 15s seconds before retrying"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Matched trusted cert" fingerprint=ba0457eebf511b4262e9e1f7523aa7e5f515823c2cc793d099b243087c9a8a0a subject="CN=root@node-03,O=LXD"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Dqlite connected outbound" local="192.168.2.143:43086" remote="192.168.2.143:7443"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="{true 0 map[]}"

Now, during the bootstrap:


Disks to attach to MicroCeph (comma separated list)
(/dev/disk/by-id/wwn-0x28ab18ee209f8d05,/dev/disk/by-id/wwn-0x28ab18ee209f8d11,/dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee209f8d21,/dev/disk/by-id/wwn-0x28ab18ee209f93fd,/dev/disk/by-id/wwn-0x28ab18ee209f9025,/dev/disk/by-id/wwn-0x28ab18ee209f9
069,/dev/disk/by-id/wwn-0x28ab18ee209f9421,/dev/disk/by-id/wwn-0x28ab18ee209f9439,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20a32375,/dev/disk/by-id/wwn-0x28ab18ee20aff5a9,/dev/disk/by-id/wwn-0x28ab18ee20ac53fd,/dev/disk/by-id/wwn-0x28ab18ee20
ac5425,/dev/disk/by-id/wwn-0x28ab18ee20ac5429):            DEBUG    {'microceph_config': {'node-03.domain.local': {'osd_devices':                                                                                                                                                                microceph.py:241
                    '/dev/disk/by-id/wwn-0x28ab18ee209f8d05,/dev/disk/by-id/wwn-0x28ab18ee209f8d11,/dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee209f8d21,/dev/disk/by-id/wwn-0x28ab18ee209f93fd,/dev/disk/by-id/wwn-0x28ab18ee209f902
                    5,/dev/disk/by-id/wwn-0x28ab18ee209f9069,/dev/disk/by-id/wwn-0x28ab18ee209f9421,/dev/disk/by-id/wwn-0x28ab18ee209f9439,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20a32375,/dev/disk/by-id/wwn-0x28ab18ee20aff5
                    a9,/dev/disk/by-id/wwn-0x28ab18ee20ac53fd,/dev/disk/by-id/wwn-0x28ab18ee20ac5425,/dev/disk/by-id/wwn-0x28ab18ee20ac5429'}}}
           DEBUG    [put] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/TerraformVarsMicroceph, args={'data': '{"microceph_config": {"node-03.domain.local": {"osd_devices":                                  service.py:120
                    "/dev/disk/by-id/wwn-0x28ab18ee209f8d05,/dev/disk/by-id/wwn-0x28ab18ee209f8d11,/dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee209f8d21,/dev/disk/by-id/wwn-0x28ab18ee209f93fd,/dev/disk/by-id/wwn-0x28ab18ee209f9025,
                    /dev/disk/by-id/wwn-0x28ab18ee209f9069,/dev/disk/by-id/wwn-0x28ab18ee209f9421,/dev/disk/by-id/wwn-0x28ab18ee209f9439,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20a32375,/dev/disk/by-id/wwn-0x28ab18ee20aff5a9,/
                    dev/disk/by-id/wwn-0x28ab18ee20ac53fd,/dev/disk/by-id/wwn-0x28ab18ee20ac5425,/dev/disk/by-id/wwn-0x28ab18ee20ac5429"}}}'}
           DEBUG    http://localhost:None "PUT /1.0/config/TerraformVarsMicroceph HTTP/1.1" 200 108                                                                                                                                                      connectionpool.py:456
           DEBUG    Response(<Response [200]>) = {"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":{}}                                                                                                    service.py:122

           DEBUG    Running step node-03.domain.local                                                                                                                                                                                             common.py:276
⠙ Configuring MicroCeph storage ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                  connector.py:124
           DEBUG    Running action add-osd on microceph/0                                                                                                                                                                                                     microceph.py:282
⠹ Configuring MicroCeph storage ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                  connector.py:124
⠸ Configuring MicroCeph storage ...            DEBUG    Connector: closing controller connection                                                                                                                                                                                                  connector.py:124
⠏ Configuring MicroCeph storage ... [16:18:20] DEBUG    Microceph Adding disks /dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee20ac5429,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20aff5a9 failed: {'return-code': 0}                             microceph.py:296
           DEBUG    Finished running step 'node-03.domain.local'. Result: ResultType.FAILED                                                                                                                                                         common.py:279
Error: Microceph Adding disks /dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee20ac5429,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20aff5a9 failed: {'return-code': 0}

journalctl -r

May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Matched trusted cert" fingerprint=ba0457eebf511b4262e9e1f7523aa7e5f515823c2cc793d099b243087c9a8a0a subject="CN=root@node-03,O=LXD"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Dqlite connected outbound" local="192.168.2.143:43086" remote="192.168.2.143:7443"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="{true 0 map[]}"

How do I get this MicroStack deployment working?

Actually, I just need to test some applications, but I will definitely not use the br-mgm IPs as the primary interface for the cluster… (Is it possible to tell the bootstrap process to use the IP instead of the FQDN?)…

@m4t7e0 you need to put the proxy settings into the manifest:

deployment:
  proxy:
    http_proxy: ...
    https_proxy: ...
    no_proxy: ...
    proxy_required: true
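
For example, a minimal sketch (the manifest file name is illustrative and the values simply reuse the proxy endpoints mentioned earlier in this thread; adjust them to your environment):

cat > proxy-manifest.yaml <<'EOF'
deployment:
  proxy:
    proxy_required: true
    http_proxy: http://prx.domain.local:3129
    https_proxy: http://prx.domain.local:3129
    no_proxy: localhost,127.0.0.1,.domain.local,10.0.0.0/8,192.168.0.0/16
EOF

sunbeam cluster bootstrap --manifest proxy-manifest.yaml \
  --role control --role compute --role storage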

@marosg Without a manifest that covers the proxy settings, shouldn’t the interactive prompts ask for the information?

In an environment with a proxy, I am trying to install MicroStack following the guides. As mentioned in the previous post, the network is configured as follows:

Network Topology Diagram

                   +------------+
                   |    br-api  (openvswitch) |
                   |    192.168.20.X/24
                   |    Gateway: 192.168.20.1
                   |    DNS: 10.0.0.1, 10.0.0.2
                   |    Interfaces: bond0.20
                   +------------+
                           |
                           |
                   +------------+
                   | br-floating (openvswitch) |
                   |    Interfaces: bond0.21 
                   |    192.168.21.X/24 (for  public network but NO IP associated to this bridge)
                   +------------+
                           |
                           |
                   +------------+
                   |    br-mgm  (openvswitch) |
                   |  192.168.2.X/24
                   |    DNS: 192.168.2.3, 192.168.2.4, 192.168.2.5
                   |    Interfaces: eno1
                   +------------+
                           |
                           |
                   +------------+
                   |    br-stor (openvswitch) |
                   |  192.168.31.X/24
                   |    DNS: 192.168.31.2, 192.168.31.4, 192.168.31.5
                   |    Interfaces: bond1.31
                   +------------+
                           |
                           |
                   +------------+
                   |    bond0   |
                   |  Interfaces: enp129s0f0np0, enp225s0f0np0
                   +------------+
                           |
                           |
                   +------------+
                   |    bond1   |
                   |  Interfaces: enp129s0f1np1, enp225s0f1np1
                   +------------+
                           |

Since I am in an environment behind a proxy, I am using, as suggested in the documentation, version 2023.2/edge. When running the command “sunbeam cluster bootstrap --manifest /snap/openstack/current/etc/manifests/edge.yml”, the setup prompts for proxy information during the steps, allowing me to proceed. Yesterday, as mentioned, I installed Ubuntu from scratch and tried to redo the setup. However, during the process it gets stuck at the step ‘Store MicroK8S config’. Reviewing the logs, I see the following:

DEBUG    Connector: closing controller connection
DEBUG    Skipping step Add MicroK8S unit
DEBUG    Starting step 'Store MicroK8S config'
DEBUG    [get] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/Microk8sConfig, args={'allow_redirects': True}
DEBUG    http://localhost:None "GET /1.0/config/Microk8sConfig HTTP/1.1" 404 125
DEBUG    Response(<Response [404]>) = {"type":"error","status":"","status_code":0,"operation":"","error_code":404,"error":"ConfigItem not found","metadata":null}
DEBUG    Running step Store MicroK8S config

Reading the journalctl:

May 07 07:58:53 node-01 microk8s.daemon-containerd[18803]: time="2024-05-07T07:58:53.617387098Z" level=error msg="StopPodSandbox for \"18db0f14dd4e0abfd0a05eed4914e64fbcdeaf6fb44e8fec6ebe0614ed6330d6\" failed" error="failed to destroy network for sandbox \"18db0f14d>
May 07 07:58:53 node-01 microk8s.daemon-containerd[18803]: time="2024-05-07T07:58:53.579724923Z" level=info msg="StopPodSandbox for \"18db0f14dd4e0abfd0a05eed4914e64fbcdeaf6fb44e8fec6ebe0614ed6330d6\""
May 07 07:58:48 node-01 microk8s.daemon-kubelite[14687]: E0507 07:58:48.580529   14687 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"speaker\" with CreateContainerConfigError: \"secret \\\"memberlist\\\" not found\"" pod=>
May 07 07:58:48 node-01 microk8s.daemon-kubelite[14687]: E0507 07:58:48.580455   14687 kuberuntime_manager.go:1261] container &Container{Name:speaker,Image:quay.io/metallb/speaker:v0.13.3,Command:[],Args:[--port=7472 --log-level=info],WorkingDir:,Ports:[]ContainerPo>
May 07 07:58:48 node-01 microk8s.daemon-kubelite[14687]: E0507 07:58:48.579409   14687 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.20.2 192.168.31.2 >
May 07 07:58:46 node-01 microk8s.daemon-kubelite[14687]: E0507 07:58:46.384473   14687 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: Internal >
May 07 07:58:46 node-01 microk8s.daemon-kubelite[14687]: W0507 07:58:46.384443   14687 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: Internal error occurred: error resolving resource
May 07 07:58:46 node-01 microk8s.daemon-kubelite[14687]: E0507 07:58:46.383931   14687 customresource_handler.go:301] unable to load root certificates: unable to parse bytes as PEM block
May 07 07:58:45 node-01 microk8s.daemon-kubelite[14687]: E0507 07:58:45.622042   14687 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76fe9fd1-863c-477b-bac6-b79ad994668d\" with KillPodSandboxError: \"rpc error: code = Unk>
May 07 07:58:45 node-01 microk8s.daemon-kubelite[14687]: E0507 07:58:45.621992   14687 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76fe9fd1-863c-477b-bac6-b79ad994668d\" with KillPodSandboxError: \"rpc error: c>
May 07 07:58:45 node-01 microk8s.daemon-kubelite[14687]: E0507 07:58:45.621936   14687 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6997f6b4ef494c2090f0e3ca9371e69aa940dda1c1b640924c3c055791992629"}
May 07 07:58:45 node-01 microk8s.daemon-kubelite[14687]: E0507 07:58:45.621871   14687 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6997f6b4ef494c2090f0e3ca9371e69aa>

I can’t understand if it’s something I’m doing wrong or if there’s something not working as it should… any suggestions?

ubuntu@node-01:~$ juju status -m admin/controller

Model       Controller          Cloud/Region     Version  SLA          Timestamp
controller  sunbeam-controller  sunbeam/default  3.4.2    unsupported  10:36:16Z

App              Version  Status       Scale  Charm            Channel        Rev  Exposed  Message
controller                active           1  juju-controller  3.4/stable      79  no
microk8s                  maintenance      1  microk8s         legacy/stable  121  no       enabling microk8s addons: dns:, hostpath-storage:, metallb:192.168.20.230-192.168.20.239
sunbeam-machine           active           1  sunbeam-machine  2023.2/edge     14  no

Unit                Workload     Agent      Machine  Public address  Ports      Message
controller/0*       active       idle       0        192.168.20.101
microk8s/0*         maintenance  executing  0        192.168.20.101   16443/tcp  (config-changed) enabling microk8s addons: dns:, hostpath-storage:, metallb:192.168.20.230-192.168.20.239
sunbeam-machine/0*  active       idle       0        192.168.20.101

Machine  State    Address        Inst id  Base          AZ  Message
0        started  192.168.20.101  manual:  ubuntu@22.04      Manually provisioned machine

ubuntu@node-01:~$ sudo journalctl -r

May 07 10:13:15 node-01 microk8s.daemon-kubelite[1656153]: E0507 10:13:15.755898 1656153 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b2e4d25eec109dfabe51cae0270f7617f1fd6e8314ffdb4651302e31d2d0c27\": plugin type=\"calico\" failed (delete): stat /var/snap/microk8s/current/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b2e4d25eec109dfabe51cae0270f7617f1fd6e8314ffdb4651302e31d2d0c27"
May 07 10:13:15 node-01 microk8s.daemon-kubelite[1656153]: E0507 10:13:15.755957 1656153 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b2e4d25eec109dfabe51cae0270f7617f1fd6e8314ffdb4651302e31d2d0c27"}
May 07 10:13:15 node-01 microk8s.daemon-kubelite[1656153]: E0507 10:13:15.756004 1656153 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"423a09e3-62a6-4fca-bae6-b210d78705d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b2e4d25eec109dfabe51cae0270f7617f1fd6e8314ffdb4651302e31d2d0c27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/snap/microk8s/current/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 07 10:13:15 node-01 microk8s.daemon-kubelite[1656153]: E0507 10:13:15.756054 1656153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"423a09e3-62a6-4fca-bae6-b210d78705d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b2e4d25eec109dfabe51cae0270f7617f1fd6e8314ffdb4651302e31d2d0c27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/snap/microk8s/current/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="metallb-system/controller-5f7bb57799-cnjws" podUID="423a09e3-62a6-4fca-bae6-b210d78705d5"
May 07 10:13:16 node-01 microk8s.daemon-kubelite[1656153]: E0507 10:13:16.304883 1656153 shared_informer.go:314] unable to sync caches for garbage collector
May 07 10:13:16 node-01 microk8s.daemon-kubelite[1656153]: E0507 10:13:16.304909 1656153 garbagecollector.go:261] timed out waiting for dependency graph builder sync during GC sync (attempt 22)
May 07 10:13:16 node-01 microk8s.daemon-kubelite[1656153]: I0507 10:13:16.404161 1656153 shared_informer.go:311] Waiting for caches to sync for garbage collector
May 07 10:13:17 node-01 microk8s.daemon-kubelite[1656153]: I0507 10:13:17.716127 1656153 scope.go:117] "RemoveContainer" containerID="bacb5bebc39d82af9d2d25105656a9b55f1cc0e49281b202cc83f2d4b594bb91"
May 07 10:13:17 node-01 microk8s.daemon-kubelite[1656153]: E0507 10:13:17.716245 1656153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.20.1 10.0.0.4 10.0.0.2"
May 07 10:13:17 node-01 microk8s.daemon-kubelite[1656153]: E0507 10:13:17.716782 1656153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=calico-node pod=calico-node-rbf2z_kube-system(e24cccdb-0091-41d3-8ef4-bf65837b2111)\"" pod="kube-system/calico-node-rbf2z" podUID="e24cccdb-0091-41d3-8ef4-bf65837b2111"
May 07 10:13:20 node-01 microk8s.daemon-containerd[1660090]: time="2024-05-07T10:13:20.716916419Z" level=info msg="StopPodSandbox for \"2151d6b9bd9a3f7001f64d39dc3d07d271bb6134fa49958fe4794eafc260005d\""
May 07 10:13:20 node-01 microk8s.daemon-containerd[1660090]: time="2024-05-07T10:13:20.755215736Z" level=error msg="StopPodSandbox for \"2151d6b9bd9a3f7001f64d39dc3d07d271bb6134fa49958fe4794eafc260005d\" failed" error="failed to destroy network for sandbox \"2151d6b9bd9a3f7001f64d39dc3d07d271bb6134fa49958fe4794eafc260005d\": plugin type=\"calico\" failed (delete): stat /var/snap/microk8s/current/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"

ubuntu@node-01:~$ sudo microk8s kubectl get pods --all-namespaces

NAMESPACE        NAME                                     READY   STATUS                       RESTARTS         AGE
kube-system      coredns-864597b5fd-727j7                 0/1     ContainerCreating            0                46m
kube-system      calico-kube-controllers-77bd7c5b-pmvsc   0/1     ContainerCreating            0                46m
kube-system      hostpath-provisioner-7df77bc496-gzdj6    0/1     ContainerCreating            0                45m
metallb-system   controller-5f7bb57799-cnjws              0/1     ContainerCreating            0                45m
metallb-system   speaker-qvb4q                            0/1     CreateContainerConfigError   0                45m
kube-system      calico-node-rbf2z                        0/1     CrashLoopBackOff             13 (3m26s ago)   46m

ubuntu@node-01:~$ kubectl describe pod controller-5f7bb57799-cnjws -n metallb-system

Name:             controller-5f7bb57799-cnjws
Namespace:        metallb-system
Priority:         0
Service Account:  controller
Node:             node-01/192.168.20.101
Start Time:       Tue, 07 May 2024 10:01:46 +0000
Labels:           app=metallb
                  component=controller
                  pod-template-hash=5f7bb57799
Annotations:      prometheus.io/port: 7472
                  prometheus.io/scrape: true
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/controller-5f7bb57799
Containers:
  controller:
    Container ID:
    Image:         quay.io/metallb/controller:v0.13.3
    Image ID:
    Ports:         7472/TCP, 9443/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      --port=7472
      --log-level=info
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:monitoring/metrics delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:monitoring/metrics delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      METALLB_ML_SECRET_NAME:  memberlist
      METALLB_DEPLOYMENT:      controller
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5pcxj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  webhook-server-cert
    Optional:    false
  kube-api-access-5pcxj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age                    From     Message
  ----    ------          ----                   ----     -------
  Normal  SandboxChanged  4m28s (x208 over 49m)  kubelet  Pod sandbox changed, it will be killed and re-created.

ubuntu@node-01:~$ kubectl describe pod speaker-qvb4q -n metallb-system


Name:             speaker-qvb4q
Namespace:        metallb-system
Priority:         0
Service Account:  speaker
Node:             node-01/192.168.20.101
Start Time:       Tue, 07 May 2024 10:01:46 +0000
Labels:           app=metallb
                  component=speaker
                  controller-revision-hash=5cb4594ccb
                  pod-template-generation=1
Annotations:      prometheus.io/port: 7472
                  prometheus.io/scrape: true
Status:           Pending
IP:               192.168.20.101
IPs:
  IP:           192.168.20.101
Controlled By:  DaemonSet/speaker
Containers:
  speaker:
    Container ID:
    Image:         quay.io/metallb/speaker:v0.13.3
    Image ID:
    Ports:         7472/TCP, 7946/TCP, 7946/UDP
    Host Ports:    7472/TCP, 7946/TCP, 7946/UDP
    Args:
      --port=7472
      --log-level=info
    State:          Waiting
      Reason:       CreateContainerConfigError
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:monitoring/metrics delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:monitoring/metrics delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      METALLB_NODE_NAME:       (v1:spec.nodeName)
      METALLB_HOST:            (v1:status.hostIP)
      METALLB_ML_BIND_ADDR:    (v1:status.podIP)
      METALLB_ML_LABELS:      app=metallb,component=speaker
      METALLB_ML_SECRET_KEY:  <set to the key 'secretkey' in secret 'memberlist'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-msrzt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-msrzt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                             node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age                  From     Message
  ----     ------            ----                 ----     -------
  Warning  DNSConfigForming  51s (x232 over 50m)  kubelet  Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.20.2 192.168.31.2 10.0.0.1

@m4t7e0 Can you check if the container images are getting downloaded:

sudo microk8s.ctr image ls
sudo microk8s.ctr image pull docker.io/calico/node:v3.25.1

Also, can you provide me the output of the following commands:

sudo microk8s.kubectl -n kube-system describe po calico-node-rbf2z
sudo microk8s.kubectl -n kube-system logs calico-node-rbf2z

~$ sudo microk8s.ctr image ls
REF                                                                                             TYPE                                                      DIGEST                                                                  SIZE      PLATFORMS                                                                    LABELS
docker.io/calico/cni:v3.25.1                                                                    application/vnd.docker.distribution.manifest.list.v2+json sha256:9a2c99f0314053aa11e971bd5d72e17951767bf5c6ff1fd9c38c4582d7cb8a0a 85.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed
docker.io/calico/cni@sha256:9a2c99f0314053aa11e971bd5d72e17951767bf5c6ff1fd9c38c4582d7cb8a0a    application/vnd.docker.distribution.manifest.list.v2+json sha256:9a2c99f0314053aa11e971bd5d72e17951767bf5c6ff1fd9c38c4582d7cb8a0a 85.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed
docker.io/calico/node:v3.25.1                                                                   application/vnd.docker.distribution.manifest.list.v2+json sha256:0cd00e83d06b3af8cd712ad2c310be07b240235ad7ca1397e04eb14d20dcc20f 84.2 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed
docker.io/calico/node@sha256:0cd00e83d06b3af8cd712ad2c310be07b240235ad7ca1397e04eb14d20dcc20f   application/vnd.docker.distribution.manifest.list.v2+json sha256:0cd00e83d06b3af8cd712ad2c310be07b240235ad7ca1397e04eb14d20dcc20f 84.2 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed
quay.io/metallb/speaker:v0.13.3                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:839ca1f96149ec65b3af5aa20606096bf1bd7d43727611a5ae16de21e0c32fcd 44.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed
quay.io/metallb/speaker@sha256:839ca1f96149ec65b3af5aa20606096bf1bd7d43727611a5ae16de21e0c32fcd application/vnd.docker.distribution.manifest.list.v2+json sha256:839ca1f96149ec65b3af5aa20606096bf1bd7d43727611a5ae16de21e0c32fcd 44.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed
registry.k8s.io/pause:3.7                                                                       application/vnd.docker.distribution.manifest.list.v2+json sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed,io.cri-containerd.pinned=pinned
registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c   application/vnd.docker.distribution.manifest.list.v2+json sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed,io.cri-containerd.pinned=pinned
sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165                         application/vnd.docker.distribution.manifest.list.v2+json sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed,io.cri-containerd.pinned=pinned
sha256:a0138614e6094da7ee6a271dece76a7bb2f052000788e4c4e7d8d39ecea28190                         application/vnd.docker.distribution.manifest.list.v2+json sha256:9a2c99f0314053aa11e971bd5d72e17951767bf5c6ff1fd9c38c4582d7cb8a0a 85.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed
sha256:c3f1478f398eabc2c0e77fe95715886fd1e57b6049a12fe4b46eae0dcecded00                         application/vnd.docker.distribution.manifest.list.v2+json sha256:839ca1f96149ec65b3af5aa20606096bf1bd7d43727611a5ae16de21e0c32fcd 44.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed
sha256:cae61b85e9b45aad28474600edd1a81f3de281917516191b160eeed4275977d2                         application/vnd.docker.distribution.manifest.list.v2+json sha256:0cd00e83d06b3af8cd712ad2c310be07b240235ad7ca1397e04eb14d20dcc20f 84.2 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed
~$ sudo microk8s.ctr image pull docker.io/calico/node:v3.25.1
docker.io/calico/node:v3.25.1:                                                    resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:0cd00e83d06b3af8cd712ad2c310be07b240235ad7ca1397e04eb14d20dcc20f:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:68fc6b7a097fab48a442e4572ccb0d3957665ade2a55a65631256500576d89da: done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:de3d34951e105833fda0ccefc8171f7bc42ff2e678eb042ece817a3c2232ed5d:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:cae61b85e9b45aad28474600edd1a81f3de281917516191b160eeed4275977d2:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:6c8ba610e03006748516517622e10428c11d069148b10734becf23f3bf8cb8f7:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.8 s                                                                    total:   0.0 B (0.0 B/s)
unpacking linux/amd64 sha256:0cd00e83d06b3af8cd712ad2c310be07b240235ad7ca1397e04eb14d20dcc20f...
done: 8.732806ms
~$ sudo microk8s.kubectl -n kube-system describe po calico-node-rbf2z
Name:                 calico-node-rbf2z
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      calico-node
Node:                 node-01/192.168.20.101
Start Time:           Tue, 07 May 2024 10:01:14 +0000
Labels:               controller-revision-hash=6fbb45588b
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.20.101
IPs:
  IP:           192.168.20.101
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  containerd://bdaf4be6259e0195f5500497f4e17488c3136d5ffcb6c39548a79495a814cd74
    Image:         docker.io/calico/cni:v3.25.1
    Image ID:      docker.io/calico/cni@sha256:9a2c99f0314053aa11e971bd5d72e17951767bf5c6ff1fd9c38c4582d7cb8a0a
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 07 May 2024 10:01:36 +0000
      Finished:     Tue, 07 May 2024 10:01:36 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6xt5 (ro)
  install-cni:
    Container ID:  containerd://d55ce11319a029b8895877f17d811d56736091d911307b43a9a9a9df5be12527
    Image:         docker.io/calico/cni:v3.25.1
    Image ID:      docker.io/calico/cni@sha256:9a2c99f0314053aa11e971bd5d72e17951767bf5c6ff1fd9c38c4582d7cb8a0a
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 07 May 2024 10:01:37 +0000
      Finished:     Tue, 07 May 2024 10:01:39 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
      CNI_NET_DIR:           /var/snap/microk8s/current/args/cni-network
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6xt5 (ro)
Containers:
  calico-node:
    Container ID:   containerd://51eb9228b36073e7a87671b199d6995e6947bed91976ef5d86fb6a93f0d73cbe
    Image:          docker.io/calico/node:v3.25.1
    Image ID:       docker.io/calico/node@sha256:0cd00e83d06b3af8cd712ad2c310be07b240235ad7ca1397e04eb14d20dcc20f
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 08 May 2024 09:31:24 +0000
      Finished:     Wed, 08 May 2024 09:31:24 +0000
    Ready:          False
    Restart Count:  280
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live] delay=10s timeout=10s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready] delay=0s timeout=10s period=10s #success=1 #failure=3
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      IP:                                 autodetect
      IP_AUTODETECTION_METHOD:            first-found
      CALICO_IPV4POOL_VXLAN:              Always
      CALICO_IPV6POOL_VXLAN:              Never
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_VXLANMTU:                     <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_WIREGUARDMTU:                 <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_IPV4POOL_CIDR:               10.1.0.0/16
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_HEALTHENABLED:                true
      FELIX_FEATUREDETECTOVERRIDE:        ChecksumOffloadBroken=true
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6xt5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/snap/microk8s/current/var/run/calico
    HostPathType:
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/snap/microk8s/current/var/lib/calico
    HostPathType:
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  sys-fs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/
    HostPathType:  DirectoryOrCreate
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/snap/microk8s/current/opt/cni/bin
    HostPathType:
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/snap/microk8s/current/args/cni-network
    HostPathType:
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/snap/microk8s/common/var/log/calico/cni
    HostPathType:
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/snap/microk8s/current/var/lib/cni/networks
    HostPathType:
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/snap/microk8s/current/var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  kube-api-access-x6xt5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age                     From     Message
  ----     ------            ----                    ----     -------
  Warning  DNSConfigForming  4m34s (x7218 over 23h)  kubelet  Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.20.2 192.168.31.2 10.0.0.1
sudo microk8s.kubectl -n kube-system logs calico-node-rbf2z
Defaulted container "calico-node" out of: calico-node, upgrade-ipam (init), install-cni (init)
2024-05-08 09:36:38.841 [INFO][9] startup/startup.go 427: Early log level set to info
2024-05-08 09:36:38.841 [INFO][9] startup/utils.go 126: Using NODENAME environment for node name node-01
2024-05-08 09:36:38.841 [INFO][9] startup/utils.go 138: Determined node name: node-01
2024-05-08 09:36:38.841 [INFO][9] startup/startup.go 94: Starting node node-01 with version v3.25.1
2024-05-08 09:36:38.842 [INFO][9] startup/startup.go 432: Checking datastore connection
2024-05-08 09:36:38.850 [INFO][9] startup/startup.go 456: Datastore connection verified
2024-05-08 09:36:38.850 [INFO][9] startup/startup.go 104: Datastore is ready
2024-05-08 09:36:38.853 [INFO][9] startup/customresource.go 102: Error getting resource Key=GlobalFelixConfig(name=CalicoVersion) Name="calicoversion" Resource="GlobalFelixConfigs" error=the server could not find the requested resource (get GlobalFelixConfigs.crd.projectcalico.org calicoversion)
2024-05-08 09:36:38.862 [INFO][9] startup/startup.go 485: Initialize BGP data
2024-05-08 09:36:38.864 [WARNING][9] startup/autodetection_methods.go 99: Unable to auto-detect an IPv4 address: no valid IPv4 addresses found on the host interfaces
2024-05-08 09:36:38.864 [WARNING][9] startup/startup.go 507: Couldn't autodetect an IPv4 address. If auto-detecting, choose a different autodetection method. Otherwise provide an explicit address.
2024-05-08 09:36:38.864 [INFO][9] startup/startup.go 391: Clearing out-of-date IPv4 address from this node IP=""
2024-05-08 09:36:38.872 [WARNING][9] startup/utils.go 48: Terminating
Calico node failed to start

In a previous post… in the log I saw this, probably a wrong setting; is the IP address missing?

   Liveness:       http-get http://:monitoring/metrics delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:monitoring/metrics delay=10s timeout=1s period=10s #success=1 #failure=3

@m4t7e0 It seems like Calico is not able to autodetect an IPv4 address. I will look into the Calico code logic for autodetecting interfaces. In the meantime, can you provide the output of ip a from the node?

Some references:
https://github.com/projectcalico/calico/issues/3094
https://github.com/projectcalico/calico/issues/5882
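
In addition to ip a, the output of the following (standard iproute2 commands, suggested only to help narrow things down) would show which interfaces hold a routable IPv4 address and where the default route points:

ip -4 addr show
ip route show default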

Thanks @hemanth-n. Following your links, and looking at calico/node/configuration#ip-autodetection-methods:

$ cat /var/snap/microk8s/current/args/cni-network/cni.yaml | grep br-api
              value: "interface=br-api"

line 4576:
# Auto-detect the BGP IP address.
- name: IP
  value: "autodetect"
- name: IP_AUTODETECTION_METHOD
  value: "interface=br-api"

$  microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
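
For anyone following along, a minimal way to confirm the new autodetection method takes effect (assuming the calico-node DaemonSet and the k8s-app=calico-node label shown in the describe output above) is to restart the DaemonSet and watch its pods come back Ready:

sudo microk8s.kubectl -n kube-system rollout restart daemonset/calico-node
sudo microk8s.kubectl -n kube-system get pods -l k8s-app=calico-node -w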

Now the setup is failing in Ceph :smile:. Is there any way to change the Ceph interface during the sunbeam bootstrap?

@m4t7e0 I am still wondering why IP_AUTODETECTION failed. Good that you figured out how to change the configuration. I raised a bug (Bug #2065241 “Ability to change calico IP_AUTODETECTION_METHOD” : Bugs : OpenStack Snap) to ensure this becomes configurable from the sunbeam UX.

Regarding the Ceph interface, this will be supported in future releases, where the user should be able to set Juju spaces and map them.
However, if you want to try a workaround that bypasses sunbeam and sets it at the MicroCeph level, check this: https://canonical-microceph.readthedocs-hosted.com/en/reef-stable/how-to/configure-network-keys/
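
As a rough sketch of that workaround (assuming the cluster_network key and the cluster config commands described in the linked MicroCeph documentation; the CIDR is only a placeholder for your storage subnet):

sudo microceph cluster config set cluster_network 10.30.0.0/24
sudo microceph cluster config list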

I’m seeking your feedback/assistance on the following points. I have completed the deployment of Sunbeam on 3 nodes (a proxied environment, hence 2023.2/edge) and have encountered these anomalies:

  1. The environment is very slow, with very high latencies in the dashboard.
  2. I have performed the bootstrap and, at the end of the bootstrap process, reconfigured Ceph to use a second bond carrying only the storage traffic (but I don’t understand how to tell the cluster to use this new storage network, since the keys have changed).
  3. I have set the correct subnet for the MetalLB IPs, but the range was incorrect, and I would like to modify it.
  4. The br-ex network does not seem to be functioning properly: bond0 VLAN 3 was set during the cluster configuration, but it does not appear to be working.

Could you please provide some assistance with these issues? Please let me know if you need any clarification or have additional questions.

About point 4 … https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1852221

ubuntu@node-01:~$ sudo openstack-hypervisor.ovs-vsctl list-ifaces br-ex

bond0
patch-provnet-31a3abcf-04aa-44b8-bb81-2af259aad1a1-to-br-int

ubuntu@node-01:~$ sudo openstack-hypervisor.ovs-vsctl show

ec60c60b-5fe1-4225-9056-35737749bd5f
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port tap8db0721f-e0
            Interface tap8db0721f-e0
        Port patch-br-int-to-provnet-31a3abcf-04aa-44b8-bb81-2af259aad1a1
            Interface patch-br-int-to-provnet-31a3abcf-04aa-44b8-bb81-2af259aad1a1
                type: patch
                options: {peer=patch-provnet-31a3abcf-04aa-44b8-bb81-2af259aad1a1-to-br-int}
        Port br-int
            Interface br-int
                type: internal
        Port tapf0d9957d-a4
            Interface tapf0d9957d-a4
    Bridge br-ex
        datapath_type: system
        Port patch-provnet-31a3abcf-04aa-44b8-bb81-2af259aad1a1-to-br-int
            Interface patch-provnet-31a3abcf-04aa-44b8-bb81-2af259aad1a1-to-br-int
                type: patch
                options: {peer=patch-br-int-to-provnet-31a3abcf-04aa-44b8-bb81-2af259aad1a1}
        Port bond0
            Interface bond0
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "3.2.1"

ubuntu@node-01:~$ sudo openstack-hypervisor.ovs-vsctl list-ports br-ex

bond0
patch-provnet-31a3abcf-04aa-44b8-bb81-2af259aad1a1-to-br-int

ubuntu@node-01:~$ sudo openstack-hypervisor.ovs-vsctl list-ifaces br-ex

bond0
patch-provnet-31a3abcf-04aa-44b8-bb81-2af259aad1a1-to-br-int

ubuntu@node-01:~$ sudo openstack-hypervisor.ovs-vsctl iface-to-br bond0

br-ex

ubuntu@node-01:~$ sudo openstack-hypervisor.ovs-ofctl show br-ex

2024-05-28T14:27:08Z|00001|vconn|WARN|unix:/var/snap/openstack-hypervisor/common/run/openvswitch/br-ex.mgmt: version negotiation failed (we support version 0x01, peer supports versions 0x04, 0x06)
ovs-ofctl: br-ex: failed to connect to socket (Broken pipe)

Is there some issue with the edge manifest?

Hi @m4t7e0

  1. Currently there is no cache enabled in the Horizon component, hence the slow experience. We are aware of this and addressing it is part of our future plans.

  3. Can you change the manifest with the proper MetalLB IP range and rerun the sunbeam cluster bootstrap command on the bootstrap node (see the sketch below)?
    You can check whether the ippool has been updated using the command sudo microk8s.kubectl -n metallb-system get ipaddresspool -o yaml
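
    As a sketch of that re-run (assuming the --manifest option of the bootstrap command available in recent revisions of the openstack snap, and a manifest file at ~/manifest.yaml that contains the corrected range; adjust the roles to match your node):

    sunbeam cluster bootstrap --manifest ~/manifest.yaml --role control --role compute --role storage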

I will come back to you on points 2 and 4.

just for info…

sudo openstack-hypervisor.ovs-ofctl show br-ex

2024-06-04T12:12:11Z|00001|vconn|WARN|unix:/var/snap/openstack-hypervisor/common/run/openvswitch/br-ex.mgmt: version negotiation failed (we support version 0x01, peer supports versions 0x04, 0x06)
ovs-ofctl: br-ex: failed to connect to socket (Broken pipe)

sudo openstack-hypervisor.ovs-vsctl set bridge br-ex protocols=OpenFlow10

sudo openstack-hypervisor.ovs-ofctl show br-ex

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000aa87d1cc7414
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(patch-provnet-3): addr:f6:68:2d:4d:6c:a5
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(bond3.50): addr:aa:87:d1:cc:74:14
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-ex): addr:aa:87:d1:cc:74:14
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
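
Side note: rather than lowering the bridge to OpenFlow10, ovs-ofctl can be told to speak one of the versions the bridge already supports via its -O flag, which leaves the bridge configuration untouched. For example:

sudo openstack-hypervisor.ovs-ofctl -O OpenFlow13 show br-ex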