I'm trying to deploy MicroStack in a proxied environment, following the web guide:
sudo snap install openstack --channel 2023.2/edge
sunbeam prepare-node-script | bash -x && newgrp snap_daemon
sunbeam -v cluster bootstrap --role control --role compute --role storage --manifest /snap/openstack/current/etc/manifests/edge.yml
I'm trying to run MicroStack on my 4 nodes (Ubuntu 22.04).
The network is set up as follows:
bond0: 2x 10 Gbps
VLAN bond0.20 set as OVN br-api (192.168.20.X)
VLAN bond0.21 set as OVN br-floating (192.168.21.X)
bond1: 2x 10 Gbps
VLAN bond1.31 set as OVN br-storage (192.168.31.X)
1 Gbps: OVN br-mgm (192.168.2.X)
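For reference, a minimal netplan sketch of the bond0 part of this layout (the NIC names, bond mode, and host addresses are assumptions for illustration, not taken from my actual config):

```yaml
# Hypothetical /etc/netplan/00-bonds.yaml sketch (names/addresses assumed)
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters: {mode: 802.3ad}
  vlans:
    bond0.20:
      id: 20
      link: bond0
      addresses: [192.168.20.10/24]   # br-api network
    bond0.21:
      id: 21
      link: bond0
      addresses: [192.168.21.10/24]   # br-floating network
```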
Downloads from the internet go through a local proxy.
I have tested this bootstrap many times to find the right proxy settings.
/etc/environment looks like this:
HTTP_PROXY=http://prx.domain.local:3129
HTTPS_PROXY=http://prx.domain.local:3129
NO_PROXY=192.168.0.0/16,127.0.0.1,.domain.local,10.1.0.0/16,10.0.0.0/8,127.0.0.0/8,.domain2.net,172.16.0.0/10,.svc,localhost,10.152.183.0/24,127.0.0.53
no_proxy=localhost,domain.net,.domain.local,127.0.0.1,127.0.0.53,10.0.0.0/8,192.168.0.0/16,172.16.0.0/10,127.0.0.0/8
http_proxy=http://prx.domain.local:3129
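One thing worth noting: snapd does not read /etc/environment and keeps its own proxy configuration. A sketch, reusing the proxy URL from above:

```bash
# snapd's proxy settings are separate from /etc/environment
sudo snap set system proxy.http="http://prx.domain.local:3129"
sudo snap set system proxy.https="http://prx.domain.local:3129"
```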
At every step I hit some issue:
First were the Juju proxy settings… (solved)
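For anyone hitting the same Juju issue: Juju takes its proxy settings from model config rather than the process environment. A hedged sketch (Sunbeam may manage these keys itself; the no-proxy list here is abbreviated):

```bash
# Juju model-level proxy keys; juju-no-proxy is a comma-separated list
# and accepts CIDR ranges.
juju model-defaults \
  juju-http-proxy="http://prx.domain.local:3129" \
  juju-https-proxy="http://prx.domain.local:3129" \
  juju-no-proxy="localhost,127.0.0.1,.domain.local,10.0.0.0/8,192.168.0.0/16"
```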
Second is MicroK8s (still an issue):
ubuntu@node-03:/$ journalctl -r
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + sleep 2
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + n=5
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: Waiting for default route to appear. (attempt 4)
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + echo 'Waiting for default route to appear. (attempt 4)'
May 06 16:25:35 node-03 microk8s.daemon-kubelite[469397]: + ip -6 route
May 06 16:25:35 node-03 microk8s.daemon-kubelite[469398]: + grep '^default'
May 06 16:25:35 node-03 microk8s.daemon-kubelite[469397]: + ip route
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + default_route_exists
May 06 16:25:35 node-03 microk8s.daemon-kubelite[468900]: + '[' 4 -ge 5 ']'
May 06 16:25:34 node-03 microk8s.daemon-apiserver-kicker[469337]: Setting up the CNI
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + sleep 2
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + n=4
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: Waiting for default route to appear. (attempt 3)
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + echo 'Waiting for default route to appear. (attempt 3)'
May 06 16:25:33 node-03 microk8s.daemon-kubelite[469312]: + ip -6 route
May 06 16:25:33 node-03 microk8s.daemon-kubelite[469313]: + grep '^default'
May 06 16:25:33 node-03 microk8s.daemon-kubelite[469312]: + ip route
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + default_route_exists
May 06 16:25:33 node-03 microk8s.daemon-kubelite[468900]: + '[' 3 -ge 5 ']'
May 06 16:25:33 node-03 systemd[1]: snap.microk8s.microk8s-ee3f831c-9799-4a05-a1f1-c28e4b55f838.scope: Consumed 2.673s CPU time.
May 06 16:25:33 node-03 systemd[1]: snap.microk8s.microk8s-ee3f831c-9799-4a05-a1f1-c28e4b55f838.scope: Deactivated successfully.
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[190115]: 2024/05/06 16:25:32 Applying /var/snap/microk8s/common/etc/launcher/install.yaml
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[190115]: 2024/05/06 16:25:32 Failed to apply configuration file /var/snap/microk8s/common/etc/launcher/install.yaml: failed to apply config part 0: failed to reconcile addons: failed to enable addon "dns": c>
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: subprocess.CalledProcessError: Command '('/snap/microk8s/6532/microk8s-kubectl.wrapper', 'get', 'all,ingress', '--all-namespaces')' returned non-zero exit status 1.
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: raise CalledProcessError(self.returncode, self.args, self.stdout,
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/usr/lib/python3.8/subprocess.py", line 448, in check_returncode
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: result.check_returncode()
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 69, in run
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: return run(KUBECTL, "get", cmd, "--all-namespaces", die=False)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 248, in kubectl_get
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: kube_output = kubectl_get("all,ingress")
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 566, in get_status
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: enabled_addons_info, disabled_addons_info = get_status(available_addons_info, True)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 514, in unprotected_xable
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: unprotected_xable(action, addon_args)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 498, in protected_xable
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: protected_xable(action, addon_args)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/scripts/wrappers/common/utils.py", line 470, in xable
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: xable("enable", addons)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/scripts/wrappers/enable.py", line 37, in enable
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: return callback(*args, **kwargs)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: return ctx.invoke(self.callback, **ctx.params)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: rv = self.invoke(ctx)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/usr/lib/python3/dist-packages/click/core.py", line 717, in main
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: return self.main(*args, **kwargs)
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: enable(prog_name="microk8s enable")
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: File "/snap/microk8s/6532/scripts/wrappers/enable.py", line 41, in <module>
May 06 16:25:32 node-03 microk8s.daemon-cluster-agent[467080]: Traceback (most recent call last):
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Heartbeat was sent 35.043453831s ago, sleep 15s seconds before retrying"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Matched trusted cert" fingerprint=ba0457eebf511b4262e9e1f7523aa7e5f515823c2cc793d099b243087c9a8a0a subject="CN=root@node-03,O=LXD"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Dqlite connected outbound" local="192.168.2.143:43086" remote="192.168.2.143:7443"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="{true 0 map[]}"
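The failed "dns" addon in the log above is often a symptom of containerd being unable to pull images through the proxy. MicroK8s' containerd does not inherit /etc/environment either; a sketch of the documented workaround, assuming the default snap paths:

```bash
# containerd inside MicroK8s reads its environment from this file,
# not from /etc/environment.
echo 'HTTPS_PROXY=http://prx.domain.local:3129' | \
  sudo tee -a /var/snap/microk8s/current/args/containerd-env
echo 'NO_PROXY=10.0.0.0/8,192.168.0.0/16,127.0.0.0/8' | \
  sudo tee -a /var/snap/microk8s/current/args/containerd-env
sudo snap restart microk8s
```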
Now, during the bootstrap:
Disks to attach to MicroCeph (comma separated list)
(/dev/disk/by-id/wwn-0x28ab18ee209f8d05,/dev/disk/by-id/wwn-0x28ab18ee209f8d11,/dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee209f8d21,/dev/disk/by-id/wwn-0x28ab18ee209f93fd,/dev/disk/by-id/wwn-0x28ab18ee209f9025,/dev/disk/by-id/wwn-0x28ab18ee209f9
069,/dev/disk/by-id/wwn-0x28ab18ee209f9421,/dev/disk/by-id/wwn-0x28ab18ee209f9439,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20a32375,/dev/disk/by-id/wwn-0x28ab18ee20aff5a9,/dev/disk/by-id/wwn-0x28ab18ee20ac53fd,/dev/disk/by-id/wwn-0x28ab18ee20
ac5425,/dev/disk/by-id/wwn-0x28ab18ee20ac5429): DEBUG {'microceph_config': {'node-03.domain.local': {'osd_devices': microceph.py:241
'/dev/disk/by-id/wwn-0x28ab18ee209f8d05,/dev/disk/by-id/wwn-0x28ab18ee209f8d11,/dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee209f8d21,/dev/disk/by-id/wwn-0x28ab18ee209f93fd,/dev/disk/by-id/wwn-0x28ab18ee209f902
5,/dev/disk/by-id/wwn-0x28ab18ee209f9069,/dev/disk/by-id/wwn-0x28ab18ee209f9421,/dev/disk/by-id/wwn-0x28ab18ee209f9439,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20a32375,/dev/disk/by-id/wwn-0x28ab18ee20aff5
a9,/dev/disk/by-id/wwn-0x28ab18ee20ac53fd,/dev/disk/by-id/wwn-0x28ab18ee20ac5425,/dev/disk/by-id/wwn-0x28ab18ee20ac5429'}}}
DEBUG [put] http+unix://%2Fvar%2Fsnap%2Fopenstack%2Fcommon%2Fstate%2Fcontrol.socket/1.0/config/TerraformVarsMicroceph, args={'data': '{"microceph_config": {"node-03.domain.local": {"osd_devices": service.py:120
"/dev/disk/by-id/wwn-0x28ab18ee209f8d05,/dev/disk/by-id/wwn-0x28ab18ee209f8d11,/dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee209f8d21,/dev/disk/by-id/wwn-0x28ab18ee209f93fd,/dev/disk/by-id/wwn-0x28ab18ee209f9025,
/dev/disk/by-id/wwn-0x28ab18ee209f9069,/dev/disk/by-id/wwn-0x28ab18ee209f9421,/dev/disk/by-id/wwn-0x28ab18ee209f9439,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20a32375,/dev/disk/by-id/wwn-0x28ab18ee20aff5a9,/
dev/disk/by-id/wwn-0x28ab18ee20ac53fd,/dev/disk/by-id/wwn-0x28ab18ee20ac5425,/dev/disk/by-id/wwn-0x28ab18ee20ac5429"}}}'}
DEBUG http://localhost:None "PUT /1.0/config/TerraformVarsMicroceph HTTP/1.1" 200 108 connectionpool.py:456
DEBUG Response(<Response [200]>) = {"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":{}} service.py:122
DEBUG Running step node-03.domain.local common.py:276
⠙ Configuring MicroCeph storage ... DEBUG Connector: closing controller connection connector.py:124
DEBUG Running action add-osd on microceph/0 microceph.py:282
⠹ Configuring MicroCeph storage ... DEBUG Connector: closing controller connection connector.py:124
⠸ Configuring MicroCeph storage ... DEBUG Connector: closing controller connection connector.py:124
⠏ Configuring MicroCeph storage ... [16:18:20] DEBUG Microceph Adding disks /dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee20ac5429,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20aff5a9 failed: {'return-code': 0} microceph.py:296
DEBUG Finished running step 'node-03.domain.local'. Result: ResultType.FAILED common.py:279
Error: Microceph Adding disks /dev/disk/by-id/wwn-0x28ab18ee209f8d15,/dev/disk/by-id/wwn-0x28ab18ee20ac5429,/dev/disk/by-id/wwn-0x28ab18ee20a89d21,/dev/disk/by-id/wwn-0x28ab18ee20aff5a9 failed: {'return-code': 0}
journalctl -r
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Matched trusted cert" fingerprint=ba0457eebf511b4262e9e1f7523aa7e5f515823c2cc793d099b243087c9a8a0a subject="CN=root@node-03,O=LXD"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="Dqlite connected outbound" local="192.168.2.143:43086" remote="192.168.2.143:7443"
May 06 16:25:32 node-03 microceph.daemon[264165]: time="2024-05-06T16:25:32Z" level=debug msg="{true 0 map[]}"
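The "Adding disks … failed: {'return-code': 0}" message doesn't say much on its own. A troubleshooting sketch querying MicroCeph directly on the failing node (note: --wipe is destructive and clears any existing signatures on the disk):

```bash
# See which disks MicroCeph already knows about, and the cluster state
sudo microceph disk list
sudo microceph.ceph status
# Try re-adding one of the failed disks manually to surface the real error;
# --wipe destroys any data/partitions on the disk!
sudo microceph disk add /dev/disk/by-id/wwn-0x28ab18ee209f8d15 --wipe
```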
How can I get this MicroStack deployment working?
For now I just need to test some applications, but I definitely won't use the br-mgm IPs as the primary interface for the cluster… (Is it possible to tell the bootstrap process to use the IP instead of the FQDN?)