
macvlan not passing back bridged traffic #2587

Open
jgilbert20 opened this issue Oct 19, 2020 · 1 comment

Comments

@jgilbert20

I can't tell if this is an intentional design or actually a bug.

I'd like to run a docker container that operates a layer 2 bridge to its host's wired LAN. This would let me encapsulate a VPN setup in a nice, composable docker image. However, I can't figure out how to persuade macvlan to pass traffic back out from a bridge running inside the container. This is my setup:

sudo docker network create -d macvlan --subnet=10.0.0.0/24 --gateway=10.0.0.1 -o parent=eth0 macnet32

Start the container:

sudo docker run -it --rm --net=macnet32 --privileged --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --device=/dev/net/tun balenalib/rpi-raspbian:stretch

Install tools we might need:

apt-get update -qq && apt-get install -qq telnet net-tools bash bridge-utils iproute2 tcpdump netcat iputils-ping

At this point, I have a virtualized MAC address:

root@47a06376b029:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
42: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0

I can also see broadcast traffic on the docker host's LAN if I snoop using tcpdump on eth0. So far so good.
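For reference, the snoop step above can be done with something like the following (one possible invocation; the exact flags used in the original test aren't given in the report):

```shell
# Show link-layer headers (-e) without name resolution (-n) on eth0.
# Broadcast/ARP traffic from the docker host's LAN should be visible here.
tcpdump -eni eth0 arp
```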

Next, I reconfigure eth0 onto a bridge:

ip addr del 10.0.0.2/24 dev eth0
ip link add br0 type bridge
ip link set eth0 master br0
ip link set br0 up
ip addr add 10.0.0.2/24 dev br0
ip route add default via 10.0.0.1

Now I can tcpdump on br0, see the host's network traffic, etc. Good so far.
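As a sanity check on the bridge setup above (a sketch, using the iproute2 and bridge-utils tools installed earlier), the bridge membership and addressing can be confirmed with:

```shell
# eth0 should be listed as a port with "master br0"
bridge link show

# Equivalent view via bridge-utils
brctl show br0

# br0 should be UP and carry 10.0.0.2/24
ip addr show br0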

The problem comes when I add a virtual NIC from ZeroTier into br0:

ip link set ztzlgjibd7 master br0

Once I do this, the docker container can see traffic and IP addresses both locally and on the VPN, and can freely ping and make TCP connections to services on both the LAN and the virtual network. Nodes on the VPN are able to see broadcast traffic originating from the docker host's physical network. So packets are flowing from the docker host's LAN to the br0 device inside the container and then getting pushed out to the VPN. But this appears to be one way only: traffic on the VPN network is not getting mirrored back to the docker host's LAN. For instance, if I ping a client on the VPN from the local host network, I can see the ARP requests for that IP address arrive at the remote client, as they should. However, traffic does not pass back from the VPN to the local host network, so I can't ping or connect between the networks.

Is this a limitation of macvlan?
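To localize where the return traffic dies, one diagnostic sketch (interface names taken from the setup above) is to capture on both sides of the bridge inside the container and compare:

```shell
# Watch frames arriving from the VPN side of the bridge...
tcpdump -eni ztzlgjibd7 &

# ...and whether the bridge forwards them out toward the LAN
tcpdump -eni eth0 &

# Now ping a LAN host from a VPN node. If the replies appear on both
# ztzlgjibd7 and eth0 inside the container but never reach the host's
# physical network, the bridge is forwarding correctly and the frames
# are being dropped in the macvlan path, not in br0.
```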

@somova commented Feb 19, 2021

I noticed the same issue, so it's good to know that further tests on my side are pointless. I hope this issue will be solved soon.
I think the problem lies in the macvlan driver itself: only packets whose destination MAC address matches a virtual network link's own MAC are forwarded to that link.
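That hypothesis matches how macvlan is generally understood to work: the parent interface demultiplexes incoming frames by destination MAC to the MAC of each macvlan instance, so frames addressed to MACs that live behind the container's bridge (e.g. VPN nodes) are never delivered to the container at all. A sketch for checking this (interface name from the report above; this is a diagnostic, not a fix):

```shell
# Confirm the container NIC is a macvlan and see its mode
ip -d link show eth0        # look for "macvlan mode bridge" in the output

# Compare non-promiscuous vs promiscuous capture: frames from the LAN
# addressed to a VPN node's MAC should show up only in the promiscuous
# capture, i.e. they reach the parent but are filtered before delivery.
tcpdump -eni eth0 -p        # -p: do NOT enable promiscuous mode
tcpdump -eni eth0           # promiscuous capture for comparison
```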
