Manage a proxied environment

Regarding this issue: rather than downgrading the protocol level on the bridge (which may have side effects for the OVN integration), you can pass the protocol version to use directly to the ovs-ofctl tool:

sudo openstack-hypervisor.ovs-ofctl show --protocols OpenFlow15 br-ex
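
The same --protocols flag can be passed to the other ovs-ofctl subcommands as well, so dumping the flow table on the same bridge would look like:

sudo openstack-hypervisor.ovs-ofctl dump-flows --protocols OpenFlow15 br-ex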

@m4t7e0 a question on your host networking setup - I’m assuming you’re using Open vSwitch installed on the Ubuntu host to configure all of the bridges that you detailed in your network configuration summary?

If that is the case: we provide Open vSwitch as part of the hypervisor snap (as you have discovered), and I wonder how having two sets of OVS userspace daemons running impacts functionality. I’ve done this when each is in its own discrete network namespace under LXC containers, but not with both on the host OS.
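
One quick way to check how the two OVS instances overlap is to query each set of daemons separately - assuming the hypervisor snap exposes ovs-vsctl in the same way it exposes ovs-ofctl, something like:

sudo ovs-vsctl show
sudo openstack-hypervisor.ovs-vsctl show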

I changed my configuration compared to the first post: I set up a new deployment from scratch on three other nodes, using the experience from the old setup. Now that I’m using standard Linux bridges for the first time, I was able to complete the setup without interruptions. I have encountered the same problems (the four points of my previous post). At the current state I can reach br-ex via the command I indicated, but I still have a blocker on the Ceph side… now I’m trying to fix the Ceph problem and change the IP address of the dashboard (during the first deploy I made a typo, and now I can’t access the dashboard anymore). I think I’ll restart with a new deploy from scratch to solve all these problems soon.

               +--------------------------------------------------+
               | br-api (Linux bridge)
               |   192.168.20.X/24
               |   Gateway: 192.168.20.1
               |   DNS: 10.0.0.1, 10.0.0.2
               |   Interfaces: bond0.20
               +--------------------------------------------------+
                        |
                        |
               +--------------------------------------------------+
               | br-floating (Linux bridge, no IP assigned)
               |   Interfaces: bond0.21
               |   192.168.21.X/24 (for public network, but no IP associated to this bridge)
               +--------------------------------------------------+
                        |
                        |
               +--------------------------------------------------+
               | br-mgm (Linux bridge)
               |   192.168.2.X/24
               |   DNS: 192.168.2.3, 192.168.2.4, 192.168.2.5
               |   Interfaces: eno1
               +--------------------------------------------------+
                        |
                        |
               +--------------------------------------------------+
               | br-stor (Linux bridge)
               |   192.168.31.X/24
               |   DNS: 192.168.31.2, 192.168.31.4, 192.168.31.5
               |   Interfaces: bond1.31
               +--------------------------------------------------+
                        |
                        |
               +--------------------------------------------------+
               | bond0
               |   Interfaces: enp129s0f0np0, enp225s0f0np0
               +--------------------------------------------------+
                        |
                        |
               +--------------------------------------------------+
               | bond1
               |   Interfaces: enp129s0f1np1, enp225s0f1np1
               +--------------------------------------------------+
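
The layout above can be verified on each node with the standard iproute2 and bonding tooling (bridge and bond names follow the diagram), for example:

ip -brief addr show
bridge link show
cat /proc/net/bonding/bond0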

ubuntu@node-01:~$ sudo microceph.ceph status

  cluster:
    id:     2b6bdec7-6575-4c49-b85f-c3bba69ce7f8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node-01.dom, node-03.dom, node-02.dom (age 2d)
    mgr: node-01.dom(active, since 5d), standbys: node-03.dom, node-02.dom
    mds: 2/2 daemons up, 1 standby
    osd: 45 osds: 45 up (since 2d), 45 in (since 2d)

  data:
    volumes: 2/2 healthy
    pools:   8 pools, 897 pgs
    objects: 8.17k objects, 31 GiB
    usage:   76 GiB used, 65 TiB / 65 TiB avail
    pgs:     897 active+clean

ubuntu@node-01:~$ juju status -m admin/controller

Model       Controller          Cloud/Region     Version  SLA          Timestamp       Notes
controller  sunbeam-controller  sunbeam/default  3.4.2    unsupported  11:30:06+02:00  upgrade available: 3.4.3

SAAS                   Status   Store  URL
cert-distributor       waiting  local  node-01.dom/openstack.cert-distributor
certificate-authority  active   local  node-01.dom/openstack.certificate-authority
cinder-ceph            active   local  node-01.dom/openstack.cinder-ceph
keystone               waiting  local  node-01.dom/openstack.keystone
ovn-relay              active   local  node-01.dom/openstack.ovn-relay
rabbitmq               active   local  node-01.dom/openstack.rabbitmq

App                   Version  Status   Scale  Charm                 Channel        Rev  Exposed  Message
controller                     active       1  juju-controller       3.4/stable      79  no
microceph                      blocked      3  microceph             reef/edge       47  no       (workload) Error in charm (see logs): Command '['microceph', 'cluster', 'join', 'eyJuYW1lIjoibWljcm9jZXBoLzEiLCJzZWNy...
microk8s                       active       3  microk8s              legacy/stable  121  no
openstack-hypervisor           active       1  openstack-hypervisor  2023.2/edge    165  no
sunbeam-machine                active       3  sunbeam-machine       2023.2/edge     14  no

Unit                     Workload  Agent  Machine  Public address  Ports      Message
controller/0*            active    idle   0        192.168.20.141
microceph/0*             active    idle   0        192.168.20.141
microceph/1              blocked   idle   2        192.168.20.142              (workload) Error in charm (see logs): Command '['microceph', 'cluster', 'join', 'eyJuYW1lIjoibWljcm9jZXBoLzEiLCJzZWNy...
microceph/2              blocked   idle   1        192.168.20.143              (workload) Error in charm (see logs): Command '['microceph', 'cluster', 'join', 'eyJuYW1lIjoibWljcm9jZXBoLzIiLCJzZWNy...
microk8s/0*              active    idle   0        192.168.20.141   16443/tcp
microk8s/1               active    idle   1        192.168.20.143   16443/tcp
microk8s/2               active    idle   2        192.168.20.142   16443/tcp
openstack-hypervisor/0*  active    idle   0        192.168.20.141
sunbeam-machine/0*       active    idle   0        192.168.20.141
sunbeam-machine/1        active    idle   1        192.168.20.143
sunbeam-machine/2        active    idle   2        192.168.20.142

Machine  State    Address        Inst id               Base          AZ  Message
0        started  192.168.20.141  manual:               ubuntu@22.04      Manually provisioned machine
1        started  192.168.20.143  manual:192.168.20.143  ubuntu@22.04      Manually provisioned machine
2        started  192.168.20.142  manual:192.168.20.142  ubuntu@22.04      Manually provisioned machine

Offer      Application  Charm      Rev  Connected  Endpoint  Interface    Role
microceph  microceph    microceph  47   2/2        ceph      ceph-client  provider
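
To dig into the join failures on the blocked microceph units, the full charm logs can be pulled from the controller model, for example:

juju debug-log -m admin/controller --replay --include microceph/1 --include microceph/2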


@m4t7e0 a few things to note about the local mode (rather than MAAS) of deployment you’re using.

The networking is still very simple and flat for this mode. We have plans to improve support for using multiple networks, and some fixes have been implemented in the 2024.1 track of the openstack charm to resolve issues we’ve seen once MicroK8s gets installed (which results in the machine ending up with lots of IP addresses) - but it’s still only simple, flat networking!

@m4t7e0 thanks for the update on your networking configuration - what’s the rationale for using Linux bridges?

I’m exploring the product, having never used it before, and I’m trying to find a way that works with my cluster. I have 10 nodes to use, and I’m deploying on a set of 3 nodes to verify that the deployment, refresh, and add-node processes work without issues. I ran into a problem when deploying with OVS: it failed to bind to br-ex because the interface was already in use by br-floating. So, simply because I didn’t know what Sunbeam expected, I switched to a Linux bridge configuration. If you don’t see any issues with that, there may be problems creating the cluster with OVS. In the next setup I’ll return to OVS (which is definitely more suitable for my use case).

I have started configuring 5 nodes for the deployment of "OpenStack 2024.1 (Caracal) - multi-node-maas" following the guide, but I am unsure whether I can use the proxy during the deployment. Additionally, MAAS cannot manage the power-on part of the nodes in my case, so I am trying to understand if this configuration can work for my scenario.