Cannot Connect to LXC Container from Host Using MicroOVN Network

Hello everyone,

I have a cluster of 3 Ubuntu 22.04 servers using MicroCloud and MicroOVN (no MicroCeph). The network was entirely set up by MicroCloud + MicroOVN. My setup includes:

  • Server1 (Master)
  • Server2
  • Server3

Goal:
I aim to create an interconnected LXC network across a cluster of 3 nodes, where containers can communicate seamlessly with each other and be accessible from any of the three LXC cluster servers. The ultimate objective is to expose services hosted within the containers, allowing external servers or systems to connect to these services as needed.

To achieve this, I expected MicroCloud and MicroOVN to provide a fully functional and interconnected network for the cluster. However, I am currently facing issues connecting to the containers from the host servers within the cluster, which blocks further testing and use of this setup.

The problem is that I have an LXC container running on one of the nodes (server2), and I cannot connect to it via SSH or ping from any of the three servers.

Here are the details of my configuration:

LXC Container

lxc ls  
| NAME |  STATE  |        IPV4        | IPV6 |   TYPE    | SNAPSHOTS |          LOCATION           |  
+------+---------+--------------------+------+-----------+-----------+-----------------------------+  
| u1   | RUNNING | 10.50.227.3 (eth0) |      | CONTAINER | 0         | server2                     |  

Ping Results

ping 10.50.227.3  
PING 10.50.227.3 (10.50.227.3) 56(84) bytes of data.  
^C  
--- 10.50.227.3 ping statistics ---  
2 packets transmitted, 0 received, 100% packet loss, time 1001ms  

LXD Networks

lxc network ls  
+---------+----------+---------+----------------+---------------------------+---------------------+---------+---------+  
|  NAME   |   TYPE   | MANAGED |      IPV4      |           IPV6            |     DESCRIPTION     | USED BY |  STATE  |  
+---------+----------+---------+----------------+---------------------------+---------------------+---------+---------+  
| UPLINK  | physical | YES     |                |                           |                     | 1       | CREATED |  
+---------+----------+---------+----------------+---------------------------+---------------------+---------+---------+  
| br0     | bridge   | NO      |                |                           |                     | 0       |         |  
+---------+----------+---------+----------------+---------------------------+---------------------+---------+---------+  
| br-int  | bridge   | NO      |                |                           |                     | 0       |         |  
+---------+----------+---------+----------------+---------------------------+---------------------+---------+---------+  
| default | ovn      | YES     | 10.50.227.1/24 | fd42:c0ec:8df1:40da::1/64 | Default OVN network | 3       | CREATED |  
+---------+----------+---------+----------------+---------------------------+---------------------+---------+---------+  
| ens3    | physical | NO      |                |                           |                     | 1       |         |  
+---------+----------+---------+----------------+---------------------------+---------------------+---------+---------+  
| lxdovn1 | bridge   | NO      |                |                           |                     | 0       |         |  
+---------+----------+---------+----------------+---------------------------+---------------------+---------+---------+  

“Default” Network

lxc network show default  
name: default  
description: Default OVN network  
type: ovn  
managed: true  
status: Created  
config:  
  bridge.mtu: "1442"  
  ipv4.address: 10.50.227.1/24  
  ipv4.nat: "true"  
  ipv6.address: fd42:c0ec:8df1:40da::1/64  
  ipv6.nat: "true"  
  network: UPLINK  
  volatile.network.ipv4.address: 10.2.123.1  
  volatile.network.ipv6.address: fd42:2:1234:1234:216:3eff:fe53:b087  
used_by:  
- /1.0/instances/u1  
- /1.0/instances/u5  
- /1.0/profiles/default  
locations:  
- server1  
- server2  
- server3  

“UPLINK” Network

lxc network show UPLINK  
name: UPLINK  
description: ""  
type: physical  
managed: true  
status: Created  
config:  
  dns.nameservers: 10.2.123.36  
  ipv4.gateway: 10.2.123.1/24  
  ipv4.ovn.ranges: 10.2.123.100-10.2.123.120  
  ipv6.gateway: fd42:2:1234:1234::1/64  
  volatile.last_state.created: "false"  
used_by:  
- /1.0/networks/default  
locations:  
- server1  
- server2  
- server3  

Server Routes

ip route  
default via 10.0.0.62 dev br0 proto static  
10.0.0.48/28 dev br0 proto kernel scope link src 10.0.0.49  

Network Interfaces

ip a  
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000  
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00  
    inet 127.0.0.1/8 scope host lo  
       valid_lft forever preferred_lft forever  
    inet6 ::1/128 scope host  
       valid_lft forever preferred_lft forever  
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000  
    link/ether 50:6b:8d:a0:aa:44 brd ff:ff:ff:ff:ff:ff  
    altname enp0s3  
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000  
    link/ether f2:12:6c:97:76:c7 brd ff:ff:ff:ff:ff:ff  
    inet 10.0.0.49/28 brd 10.0.0.63 scope global br0  
       valid_lft forever preferred_lft forever  
4: ovs-system: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000  
    link/ether 52:54:ef:f3:4f:f8 brd ff:ff:ff:ff:ff:ff  
    inet6 fe80::5054:efff:fef3:4ff8/64 scope link  
       valid_lft forever preferred_lft forever  
5: lxdovn1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000  
    link/ether 4e:16:a7:19:e1:44 brd ff:ff:ff:ff:ff:ff  
    inet6 fe80::4c16:a7ff:fe19:e144/64 scope link  
       valid_lft forever preferred_lft forever  
6: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue state UNKNOWN group default qlen 1000  
    link/ether 7a:db:29:cd:3b:ab brd ff:ff:ff:ff:ff:ff  
    inet6 fe80::78db:29ff:fecd:3bab/64 scope link  
       valid_lft forever preferred_lft forever  
7: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000  
    link/ether 52:d0:6a:cf:2b:be brd ff:ff:ff:ff:ff:ff  
    inet6 fe80::ac83:d0ff:fe77:8aa4/64 scope link  
       valid_lft forever preferred_lft forever  
9: veth28d0d963@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue master ovs-system state UP group default qlen 1000  
    link/ether 0a:ff:11:40:aa:31 brd ff:ff:ff:ff:ff:ff link-netnsid 0  

Does anyone have an idea why I can’t connect to the container?

Any help is greatly appreciated!

This is expected, as each LXD OVN network sits behind a virtual OVN router that performs NAT and is connected to your designated uplink network.

You can reach directly inside the instance (container/VM) using lxc exec, or manage files using the lxc file commands.
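For example, using the u1 container from your output:

# Get a shell inside the container without any network path to it.
lxc exec u1 -- bash

# Copy a file out of the container over the LXD API.
lxc file pull u1/etc/hostname .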

But if you want to expose services running on instances to the external uplink network then you can use the LXD Network Forwards feature to forward an IP from the uplink network towards an IP inside your OVN network.

See https://documentation.ubuntu.com/lxd/en/latest/howto/network_forwards/
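As a minimal sketch using the addresses from your outputs (10.2.123.130 is just a hypothetical free address on the UPLINK subnet; it must be one the uplink network allows):

# Reserve an address on the uplink network for the forward.
lxc network forward create default 10.2.123.130

# Forward TCP port 22 on that uplink address to port 22 on the u1 container.
lxc network forward port add default 10.2.123.130 tcp 22 10.50.227.3 22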

And if your LXD hosts are also configured with IPs on the same uplink network, then you'll be able to reach the services from the LXD hosts as well.

However if you’re just trying to get temporary access to services inside your container from one particular LXD host, then you can just use the proxy device to forward port(s) from the host to the guest.

See https://documentation.ubuntu.com/lxd/en/latest/reference/devices_proxy/
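For example (a sketch, assuming you want SSH into u1 and that port 2222 is free on the host where u1 runs; the device name ssh-proxy is arbitrary):

# Forward TCP port 2222 on the host to port 22 inside the container.
lxc config device add u1 ssh-proxy proxy \
    listen=tcp:0.0.0.0:2222 \
    connect=tcp:127.0.0.1:22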

This is not what we are looking for, because we have Ansible and Terraform workflows that create the containers and deploy services inside them, and we would need to assign a dynamic port for every forward we create for each container, for example for SSH access.

What we are looking for is the possibility to route the OVN subnet so that the host can directly see this subnet and interact with the containers without requiring additional forward configurations.

In other words, I need to access the container’s IP directly, in this case, 10.50.227.3, from the host itself.

Do you know how I could configure the network so the host can directly interact with the OVN network? The goal is to be able to do things like SSH into the containers directly from the host, without needing intermediary proxies or NAT.

Do you have any ideas on how to achieve this? Thanks!

There is a LXD connection plugin for Ansible so you shouldn’t need SSH access for that bit at least. https://docs.ansible.com/ansible/latest/collections/community/general/lxd_connection.html#ansible-collections-community-general-lxd-connection
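For example (a sketch; assumes the community.general collection is installed and u1 lives on the local LXD remote):

# Run an ad-hoc module against container u1 over the LXD API, no SSH needed.
ansible all -i 'u1,' -c community.general.lxd -m ping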


I know, but that is not a solution to the issue posed.
We need the network itself to be reachable, as I mentioned. One part of the requirements is to be able to deploy services to the containers with Jenkins pipelines via Ansible, but there are more parts involved, and I need the containers to be reachable from the host.
So… is there a way to make the containers reachable from the host? Something like the forwards that tomp suggested, but that approach doesn't quite work for us.

You can disable NAT on ovn networks via ipv4.nat and ipv6.nat settings (https://documentation.ubuntu.com/lxd/en/latest/reference/network_ovn/#network-ovn-network-conf:ipv4.nat) .

This then allows for the external uplink network to directly reach the OVN network if the uplink router has static route configured for the OVN network’s subnet toward the OVN network’s volatile.network.ipv4.address and volatile.network.ipv6.address addresses.

See https://documentation.ubuntu.com/lxd/en/latest/reference/network_ovn/#ovn-networking-architecture for a diagram of the architecture.
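As a rough sketch with the values from your earlier lxc network show output (the static route normally belongs on the uplink router, but the same command works on any Linux box attached to the uplink network; 10.2.123.1 is the volatile.network.ipv4.address you posted):

# Stop NATing the OVN network so its subnet is routed as-is.
lxc network set default ipv4.nat=false
lxc network set default ipv6.nat=false

# The OVN router's address on the uplink network:
lxc network get default volatile.network.ipv4.address

# On the uplink router (or a Linux host on the uplink network), route the
# OVN subnet towards that address:
ip route add 10.50.227.0/24 via 10.2.123.1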

I would also like to ask why you are looking into using OVN networks at all? It sounds like what you're looking for is a way for instances to join the same layer 2 network as the hosts?

For that, something like an unmanaged bridge network combined with bridged NICs might be simpler.

https://documentation.ubuntu.com/lxd/en/latest/reference/devices_nic/#nic-bridged
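For example (a sketch that reuses the unmanaged br0 bridge from your ip a output; the device name eth0 is just an example, and it assumes something on that LAN hands out addresses):

# Attach u1 directly to the host's br0 bridge so it gets an address on the hosts' network.
lxc config device add u1 eth0 nic nictype=bridged parent=br0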

Is that correct?

Ok, got it. We will test it to see whether it meets our requirements and will comment here when we have results.
In the meantime, we have achieved what we proposed by another route. We can now ping and SSH from Server1, for example, and container1 can also connect outwards.
The configuration we ended up with does not use MicroCloud, only OVN. We have three nodes, each with two Ethernet cards, enp5s0 and enp2s0 (the latter initially unconfigured).

Server1:

network:
    ethernets:
        enp5s0:
            dhcp4: false
            addresses:
            - 192.168.1.246/24
            routes:
            - to: default
              via: 192.168.1.1
            nameservers:
              addresses:
              - 192.168.1.12
              - 1.1.1.1

    version: 2

Server2:

network:
    ethernets:
        enp5s0:
            dhcp4: false
            addresses:
            - 192.168.1.247/24
            routes:
            - to: default
              via: 192.168.1.1
            nameservers:
              addresses:
              - 192.168.1.12
              - 1.1.1.1

    version: 2

Server3:

network:
    ethernets:
        enp5s0:
            dhcp4: false
            addresses:
            - 192.168.1.248/24
            routes:
            - to: default
              via: 192.168.1.1
            nameservers:
              addresses:
              - 192.168.1.12
              - 1.1.1.1

    version: 2

We configured the first node as the primary OVN host. The OVN_CTL_OPTS for each server:

Server1:

    OVN_CTL_OPTS=" \
         --db-nb-addr=192.168.1.246 \
         --db-nb-create-insecure-remote=yes \
         --db-sb-addr=192.168.1.246 \
         --db-sb-create-insecure-remote=yes \
         --db-nb-cluster-local-addr=192.168.1.246 \
         --db-sb-cluster-local-addr=192.168.1.246 \
         --ovn-northd-nb-db=tcp:192.168.1.246:6641,tcp:192.168.1.247:6641,tcp:192.168.1.248:6641 \
         --ovn-northd-sb-db=tcp:192.168.1.246:6642,tcp:192.168.1.247:6642,tcp:192.168.1.248:6642"

Server2:

    OVN_CTL_OPTS=" \
         --db-nb-addr=192.168.1.247 \
         --db-nb-cluster-remote-addr=192.168.1.246 \
         --db-nb-create-insecure-remote=yes \
         --db-sb-addr=192.168.1.247 \
         --db-sb-cluster-remote-addr=192.168.1.246 \
         --db-sb-create-insecure-remote=yes \
         --db-nb-cluster-local-addr=192.168.1.247 \
         --db-sb-cluster-local-addr=192.168.1.247 \
         --ovn-northd-nb-db=tcp:192.168.1.246:6641,tcp:192.168.1.247:6641,tcp:192.168.1.248:6641 \
         --ovn-northd-sb-db=tcp:192.168.1.246:6642,tcp:192.168.1.247:6642,tcp:192.168.1.248:6642"

Server3:

    OVN_CTL_OPTS=" \
         --db-nb-addr=192.168.1.248 \
         --db-nb-cluster-remote-addr=192.168.1.246 \
         --db-nb-create-insecure-remote=yes \
         --db-sb-addr=192.168.1.248 \
         --db-sb-cluster-remote-addr=192.168.1.246 \
         --db-sb-create-insecure-remote=yes \
         --db-nb-cluster-local-addr=192.168.1.248 \
         --db-sb-cluster-local-addr=192.168.1.248 \
         --ovn-northd-nb-db=tcp:192.168.1.246:6641,tcp:192.168.1.247:6641,tcp:192.168.1.248:6641 \
         --ovn-northd-sb-db=tcp:192.168.1.246:6642,tcp:192.168.1.247:6642,tcp:192.168.1.248:6642"

Then, on each server, we point Open vSwitch at the OVN southbound databases and set its Geneve encapsulation IP:

Server1:

    sudo ovs-vsctl set open_vswitch . \
       external_ids:ovn-remote=tcp:192.168.1.246:6642,tcp:192.168.1.247:6642,tcp:192.168.1.248:6642 \
       external_ids:ovn-encap-type=geneve \
       external_ids:ovn-encap-ip=192.168.1.246

Server2:

    sudo ovs-vsctl set open_vswitch . \
       external_ids:ovn-remote=tcp:192.168.1.246:6642,tcp:192.168.1.247:6642,tcp:192.168.1.248:6642 \
       external_ids:ovn-encap-type=geneve \
       external_ids:ovn-encap-ip=192.168.1.247

Server3:

    sudo ovs-vsctl set open_vswitch . \
       external_ids:ovn-remote=tcp:192.168.1.246:6642,tcp:192.168.1.247:6642,tcp:192.168.1.248:6642 \
       external_ids:ovn-encap-type=geneve \
       external_ids:ovn-encap-ip=192.168.1.248
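If it helps anyone reproducing this, the cluster state can be sanity-checked with something along these lines (a sketch; assumes ovn-central and Open vSwitch are running on the node you run it from):

# List the chassis registered in the OVN southbound DB
# (all three servers should appear, each with a Geneve encap entry).
sudo ovn-sbctl show

# Confirm the ovn-remote / encapsulation settings applied above.
sudo ovs-vsctl get open_vswitch . external_ids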

Then we create the uplink using the second card, enp2s0:

lxc network create UPLINK --type=physical parent=enp2s0 --target=Server1
lxc network create UPLINK --type=physical parent=enp2s0 --target=Server2
lxc network create UPLINK --type=physical parent=enp2s0 --target=Server3
lxc network create UPLINK --type=physical \
   ipv4.ovn.ranges=192.168.40.150-192.168.40.160 \
   ipv4.gateway=192.168.40.1/24 \
   dns.nameservers=192.168.1.12
lxc config set network.ovn.northbound_connection tcp:192.168.1.246:6641,tcp:192.168.1.247:6641,tcp:192.168.1.248:6641

Then we create the ovn network

lxc network create bigdata-ovn --type=ovn

We then change the configuration to set NAT to false, assign a DNS domain, and note the gateway IP (192.168.40.150) that was allocated from the uplink's ipv4.ovn.ranges:

root@Server1:~$ lxc network edit bigdata-ovn
config:
  bridge.mtu: "1445"
  dns.domain: lxc.lab
  ipv4.address: 192.168.45.1/24
  ipv4.nat: "false"
  network: UPLINK
  volatile.network.ipv4.address: 192.168.40.150
description: ""
name: bigdata-ovn
type: ovn
used_by:
[...]

Then we add a static route on the firewall (FortiGate) to route 192.168.45.0/24 via the gateway 192.168.40.150.

Finally, we launch one container:

lxc launch images:ubuntu/22.04 c1 --network bigdata-ovn

And now we are facing another issue.
We have two containers, c1 and c2, and they can ping each other, but it is not clear to us which IP is running the DNS that registers those names, so that we can forward the lxc.lab zone to it from our primary DNS server (192.168.1.12).


root@c1:~# ping c2.lxc.lab
PING c2.lxc.lab (192.168.45.3) 56(84) bytes of data.
64 bytes from 192.168.45.3: icmp_seq=1 ttl=64 time=1.37 ms
64 bytes from 192.168.45.3: icmp_seq=2 ttl=64 time=0.218 ms
64 bytes from 192.168.45.3: icmp_seq=3 ttl=64 time=0.217 ms
64 bytes from 192.168.45.3: icmp_seq=4 ttl=64 time=0.215 ms
64 bytes from 192.168.45.3: icmp_seq=5 ttl=64 time=0.209 ms

LXD uses OVN to provide local DNS resolution for instances within the OVN network, as well as DNS forwarding for external names to the uplink network’s DNS servers.

However, LXD can also act as a DNS server itself, which can be used for zone transfers to an external DNS server.

See https://documentation.ubuntu.com/lxd/en/latest/howto/network_zones/
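As a rough sketch of that approach, using the names from this thread (the listen address/port and the peer name "bind9" are just examples):

# Have LXD listen for DNS (zone transfer) requests on this address/port.
lxc config set core.dns_address 192.168.1.246:1053

# Create a forward zone and allow the primary DNS server to do zone transfers.
lxc network zone create lxc.lab
lxc network zone set lxc.lab peers.bind9.address=192.168.1.12

# Tie the OVN network's records to the zone.
lxc network set bigdata-ovn dns.zone.forward=lxc.lab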