
Connect an LXC VM to an Unmanaged Bridge Network on the Host

💡 LXC VMs Cannot Access Other Internal Bridge Networks on the Host

In many setups, you may want an LXC virtual machine (VM) to communicate with services running on the same host but on a different bridge network that is not managed by LXC—such as Docker’s default docker0 bridge. This is especially useful in hybrid environments where containers and VMs are used side by side for flexibility.

However, by default, LXC VMs are attached to their own managed networks and cannot access these host-side bridges unless explicitly configured. As a result, the VM is unable to reach other network peers, such as Docker containers.
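The managed/unmanaged distinction is visible from the host itself. As a rough sketch (not part of the original walkthrough; assumes a Linux host with sysfs, and bridge names such as lxdbr0 or docker0 depend on your setup), you can enumerate kernel bridges and their attached ports directly:

```shell
#!/bin/sh
# Sketch: list Linux bridge devices and their attached ports via sysfs.
# Assumes a Linux host; bridge names (lxdbr0, docker0, ...) vary per setup.
for d in /sys/class/net/*/bridge; do
    [ -d "$d" ] || continue                  # skip when no bridges exist
    br=$(dirname "$d")                       # e.g. /sys/class/net/docker0
    ports=$(ls "$br/brif" 2>/dev/null | tr '\n' ' ')
    echo "bridge: $(basename "$br") ports: ${ports:-<none>}"
done
```

On a typical host running both LXC and Docker, this lists lxdbr0 (managed by LXC) and docker0 (created by Docker) as separate bridges with no ports in common, which is why the VM cannot reach Docker containers out of the box.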


Let’s walk through a practical example to demonstrate this issue:

Understanding the problem statement:

Let's start an LXC VM and run a few ping tests:

$ lxc launch ubuntu:jammy node1 --vm
$
$ ping 8.8.8.8           # Internet: Works
$ ping 192.168.1.8       # Host: Works
$ ping 172.17.0.2        # Docker container: Fails

Observed output for the Docker container ping:

28 packets transmitted, 0 received, 100% packet loss

This confirms that while the VM has internet and host access, it cannot reach Docker containers running on the docker0 bridge.


Solution:

> Attach the LXC VM to the docker0 bridge

The docker0 bridge is unmanaged by LXC: it was created by Docker, not LXC, so LXC provides no DHCP or DNS on it and the VM's interface must be configured manually.

To enable connectivity between the LXC VM and Docker containers, we will add a second network interface to the VM and attach it to the docker0 bridge on the host:

# Create a new NIC device `eth1` and attach it to docker0 as parent.
$ lxc config device add node1 eth1 nic nictype=bridged parent=docker0

Verify the new interface with `lxc exec node1 -- ip a`:

3: enp6s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

The new interface (here enp6s0) is present, but it is down and has no IP address.


> Next, configure the new interface

Assign an unused static IP address from Docker's subnet (here we use 172.17.0.3) to the new interface:

$ docker network inspect bridge | grep Subnet
#   "Subnet": "172.17.0.0/16",
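Before settling on 172.17.0.3, it's worth confirming the address is inside Docker's subnet and not already held by a container. A minimal sketch (the `in_subnet` helper is a hypothetical convenience of ours, not part of any tool; `docker network inspect -f` with a Go template is standard Docker CLI):

```shell
#!/bin/sh
# Sketch: sanity-check a candidate address against Docker's default
# 172.17.0.0/16 subnet. The in_subnet helper is hypothetical, not a real tool.
in_subnet() {
    case "$1" in
        172.17.*) echo "$1 is inside 172.17.0.0/16" ;;
        *)        echo "$1 is OUTSIDE 172.17.0.0/16" ;;
    esac
}
in_subnet 172.17.0.3    # -> 172.17.0.3 is inside 172.17.0.0/16

# On the host, list addresses already allocated to running containers,
# so the VM's address does not clash with them:
#   docker network inspect bridge -f '{{range .Containers}}{{.IPv4Address}} {{end}}'
```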

$ lxc exec node1 bash
node1$
node1$ ip addr add 172.17.0.3/16 dev enp6s0  # assign ip
node1$ ip link set enp6s0 up                 # bring up the interface
node1$
node1$ ip a show enp6s0

Expected output:

3: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet 172.17.0.3/16 scope global enp6s0
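Note that an address added with `ip addr add` does not survive a reboot of the VM. One way to make it persistent on an Ubuntu guest is a netplan drop-in (a sketch; the file name is our choice and the interface name must match yours):

```yaml
# /etc/netplan/60-docker0.yaml  (inside the VM; filename is arbitrary)
network:
  version: 2
  ethernets:
    enp6s0:
      addresses:
        - 172.17.0.3/16
```

Apply it inside the VM with `netplan apply`.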

Verify:

$ ip r                    # New route for the Docker network
...
172.17.0.0/16 dev enp6s0 proto kernel scope link src 172.17.0.3



$ ping 172.17.0.2         # Docker container is reachable.

PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.079 ms
^C
--- 172.17.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms



$ curl 172.17.0.2:80      # curl Docker container running nginx.

<html>
<head><title>Welcome to nginx!</title></head>
...


Conclusion

Our LXC VM is now successfully connected to the unmanaged host bridge and can communicate with services running inside Docker containers. This approach is ideal for hybrid environments where different virtualization technologies need to coexist on the same network.


To clean up, simply delete the LXC VM; the network devices we added are removed along with it: `lxc delete node1 --force` (the `--force` flag stops a running instance first).



