Interface name needed in IPv6 address from a Multipass VM (but not from the host or a non-Multipass VM)

I use Multipass 1.5.0 on Windows 10 with Hyper-V, and I observe that an IPv6 address works correctly from the host or from a non-Multipass VM, but an interface name must be added when connecting between Multipass VMs.

There is a workaround for the ping command, because it accepts an extended IPv6 syntax with a suffix such as %eth0 to designate the interface to use. But programs that only accept the standard IPv6 syntax cannot work this way.
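For instance, here is a minimal Python sketch (an illustration, not my actual program) of what such programs run into; it assumes V1's SSH port is reachable and uses V1's link-local address from the reproduction below. The scope_id in the 4-tuple sockaddr is the programmatic equivalent of the %eth0 suffix:

import socket

ADDR = "fe80::215:5dff:fe01:4d84"   # V1's link-local address (see below)

# Standard 2-tuple sockaddr: the kernel cannot pick an outgoing interface
# for a link-local destination, so connect() fails with EINVAL.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
try:
    s.connect((ADDR, 22))
except OSError as e:
    print("without scope_id:", e)    # [Errno 22] Invalid argument

# 4-tuple sockaddr carrying a scope_id: the same effect as ping's %eth0.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.connect((ADDR, 22, 0, socket.if_nametoindex("eth0")))
print("with scope_id: connected")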

Procedure to reproduce the problem:

Create two Multipass VMs:

C:\Users\tfavi\Documents\safe\bin>multipass launch -n V1
Launched: V1

C:\Users\tfavi\Documents\safe\bin>multipass launch -n V2
Launched: V2

Get IPv6 address of V1 on eth0:

C:\Users\tfavi\Documents\safe\bin>multipass exec V1 -- ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:01:4d:84 brd ff:ff:ff:ff:ff:ff
    inet 172.27.104.82/20 brd 172.27.111.255 scope global dynamic eth0
       valid_lft 85627sec preferred_lft 85627sec
    inet6 fe80::215:5dff:fe01:4d84/64 scope link
       valid_lft forever preferred_lft forever

Ping V1 from the host (successful without interface name):

C:\Users\tfavi\Documents\safe\bin>ping -6 fe80::215:5dff:fe01:4d84

Pinging fe80::215:5dff:fe01:4d84 with 32 bytes of data:
Reply from fe80::215:5dff:fe01:4d84: time<1ms
Reply from fe80::215:5dff:fe01:4d84: time<1ms
Reply from fe80::215:5dff:fe01:4d84: time<1ms
Reply from fe80::215:5dff:fe01:4d84: time<1ms

Ping statistics for fe80::215:5dff:fe01:4d84:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

Ping V1 from V2 (succeeds with the interface name but fails without it):

C:\Users\tfavi\Documents\safe\bin>multipass exec V2 -- ping -c4 -6 fe80::215:5dff:fe01:4d84%eth0
PING fe80::215:5dff:fe01:4d84%eth0(fe80::215:5dff:fe01:4d84%eth0) 56 data bytes
64 bytes from fe80::215:5dff:fe01:4d84%eth0: icmp_seq=1 ttl=64 time=0.513 ms
64 bytes from fe80::215:5dff:fe01:4d84%eth0: icmp_seq=2 ttl=64 time=0.642 ms
64 bytes from fe80::215:5dff:fe01:4d84%eth0: icmp_seq=3 ttl=64 time=0.691 ms
64 bytes from fe80::215:5dff:fe01:4d84%eth0: icmp_seq=4 ttl=64 time=0.703 ms

--- fe80::215:5dff:fe01:4d84%eth0 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3077ms
rtt min/avg/max/mdev = 0.513/0.637/0.703/0.075 ms

C:\Users\tfavi\Documents\safe\bin>multipass exec V2 -- ping -c4 -6 fe80::215:5dff:fe01:4d84
PING fe80::215:5dff:fe01:4d84(fe80::215:5dff:fe01:4d84) 56 data bytes
ping: sendmsg: Invalid argument
ping: sendmsg: Invalid argument
ping: sendmsg: Invalid argument
ping: sendmsg: Invalid argument

--- fe80::215:5dff:fe01:4d84 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3061ms

Ping V1 from a non-Multipass VM (a Windows 10 VM added to Hyper-V), successful without interface name:

C:\Users\User>ping -6 fe80::215:5dff:fe01:4d84

Pinging fe80::215:5dff:fe01:4d84 with 32 bytes of data:
Reply from fe80::215:5dff:fe01:4d84: time=1ms
Reply from fe80::215:5dff:fe01:4d84: time<1ms
Reply from fe80::215:5dff:fe01:4d84: time<1ms
Reply from fe80::215:5dff:fe01:4d84: time<1ms

Ping statistics for fe80::215:5dff:fe01:4d84:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 1ms, Average = 0ms

Note: Hyper-V doesn't show any differences between the network adapter parameters of the Multipass VMs and those of the Windows 10 VM.

Do I have to modify something in my Multipass VMs to make a standard IPv6 address work, or is it a bug?

Hi @tfa61, would you be willing to try a “non-Multipass” Ubuntu VM? I suspect it would behave the same way; in fact, enabling IPv6 would be something extra that needs to be done.

You could try adding dhcp6: true to /etc/netplan/50-cloud-init.yaml inside the instance to see if it makes a difference. Multipass doesn't set up IPv6 by default at the moment.

I tried a standard Ubuntu VM created by Hyper-V, and the problem is the same.

Adding dhcp6: true to /etc/netplan/50-cloud-init.yaml in the Multipass VMs didn't change anything. I tried sudo netplan apply, and I also restarted the VMs: still the same.

For reference, the content of the file is now:

network:
    ethernets:
        eth0:
            dhcp4: true
            dhcp6: true
            match:
                macaddress: 00:15:5d:01:4d:84
            set-name: eth0
    version: 2

Hi @tfa61,

Have a search for “ping6 invalid argument”. Because IPv6 isn't really configured (it's a link-local configuration), you have to provide the interface you want the ping to go out on. Otherwise the kernel gives up on finding a route, since all the interfaces can have that subnet.
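You can see the kernel's dilemma by asking it to resolve a route to that address. A sketch of what that looks like (the output, and the source address in particular, are illustrative):

multipass exec V2 -- ip -6 route get fe80::215:5dff:fe01:4d84
RTNETLINK answers: Invalid argument

multipass exec V2 -- ip -6 route get fe80::215:5dff:fe01:4d84 dev eth0
fe80::215:5dff:fe01:4d84 dev eth0 proto kernel src fe80::215:5dff:fe01:4d85 metric 256 pref medium

Without dev, the route lookup fails with the same “Invalid argument” that ping's sendmsg reports.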

You’ll need to provide actual static IPv6 configuration on both sides to avoid this.

But there is only one interface! This is an Ubuntu limitation, because we don't have to provide it from the Windows 10 VM on the same segment.

Is this related to the IP addresses not appearing in the Ubuntu VMs' properties in Hyper-V?

PS C:\Windows\system32> get-vm  | Select -ExpandProperty Networkadapters

Name            IsManagementOs VMName             SwitchName     MacAddress   Status IPAddresses
----            -------------- ------             ----------     ----------   ------ -----------
Carte réseau    False          Ubuntu 20.04.1 LTS Default Switch 00155D014D86 {Ok}   {}
Network Adapter False          V1                 Default Switch 00155D014D84 {Ok}   {}
Network Adapter False          V2                 Default Switch 00155D014D85 {Ok}   {}
Network Adapter False          WinDev2012Eval     Default Switch 00155D014D64 {Ok}   {172.27.98.103, fe80::dd44:e52b:27bb:31}

To recap: I can ping V1 from all the other VMs, but:

  • I need to provide the interface from the Ubuntu VMs (both the Multipass one, “V2”, and the non-Multipass one, “Ubuntu 20.04.1 LTS”)
  • I don't need to provide it from the Windows 10 VM (“WinDev2012Eval”)

How can I do that? I don't see any parameters for that, so I guess this could be done with a cloud-init file. What would the syntax for that be?

My need is to be able to launch a bunch of Multipass VMs that can communicate with each other over IPv6. The aim is simply to verify the connectivity of a program I want to test. This program doesn't have any means to pass the interface name to use (contrary to the ping program).

There will be a dozen VMs, so a unique cloud-init file shared by all of them would be a plus.

If anything, it's a Linux one, not specific to Ubuntu or Multipass.

No, that’s because the Windows VM has guest additions that report the IPs to Hyper-V.

In Ubuntu, you’d use netplan inside the instance.

The Hyper-V Default Switch, which provides default networking to Hyper-V VMs, does not seem to support IPv6 DHCP; that is why link-local addresses are used.

The best I can offer is to use the new --network feature of Multipass (available in the 1.6.0 RC): either ensure that whatever Virtual Switch you choose sets up IPv6 properly over DHCP, or add static netplan configuration for the extra interfaces, having launched with --network "id=…,mode=manual".
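For illustration, a static netplan sketch for such a manual-mode interface might look like this (the eth1 name and the fd00:cafe::/64 ULA prefix are assumptions for the example; each VM needs its own distinct address in the same /64):

network:
    version: 2
    ethernets:
        eth1:
            addresses:
                - "fd00:cafe::1/64"

After sudo netplan apply on each VM (using ::2, ::3, and so on for the others), they should reach one another with plain global-scope addresses like fd00:cafe::2, with no zone suffix required.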

The problem is that I don't know how to do it. The documentation is lacking a simple example for this, and I don't know where to start.

By contrast, I have found that Docker can do this in a very simple way:

Create a bridge network with unique local IPv6 addresses:

C:\Users\tfavi>docker network create -d bridge --subnet=fd2f:9ab3:8b80:69fa::/64 --ipv6 br0
be06ba982d33816ce2939087e7ede406f89d9aa230a77b58f31027b74b636541

Create two containers on this network:

C:\Users\tfavi>docker run -dit --network br0 --name V1 alpine
585ecbfaeeaae464a72df6a5f1764358700d06a4ea8f23ee61babd90bfb0548b

C:\Users\tfavi>docker run -dit --network br0 --name V2 alpine
536c461fdc00294068214e61caea6c7baf5239b022b20562722bc2a8b21b149e

Get IPv6 address of V1 on eth0:

C:\Users\tfavi>docker exec V1 ip addr show dev eth0
21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.2/16 brd 172.20.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd2f:9ab3:8b80:69fa::2/64 scope global flags 02
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe14:2/64 scope link
       valid_lft forever preferred_lft forever

Ping V1 from V2 (successful without interface name):

C:\Users\tfavi>docker exec V2 ping -c4 -6 fd2f:9ab3:8b80:69fa::2
PING fd2f:9ab3:8b80:69fa::2 (fd2f:9ab3:8b80:69fa::2): 56 data bytes
64 bytes from fd2f:9ab3:8b80:69fa::2: seq=0 ttl=64 time=0.070 ms
64 bytes from fd2f:9ab3:8b80:69fa::2: seq=1 ttl=64 time=0.092 ms
64 bytes from fd2f:9ab3:8b80:69fa::2: seq=2 ttl=64 time=0.086 ms
64 bytes from fd2f:9ab3:8b80:69fa::2: seq=3 ttl=64 time=0.045 ms

--- fd2f:9ab3:8b80:69fa::2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.045/0.073/0.092 ms

Note: For a shorter demo I used the Alpine image instead of Ubuntu, because the ip and ping commands are pre-installed on it. But I get the same result with the Ubuntu image.

In conclusion, unless you can point to a solution as simple as this with Multipass, Docker seems better suited to my needs.