Internal macvlan network doesn't work in swarm #2418
@lemrouch: Can you please provide more information, like the commands you executed?
Hi, my goal:
There was just one problem: it didn't work.
@lemrouch: I am seeing that the `--config-from` network uses the parent interface from the config-only network. Let me know if I am misunderstanding your query.
To prevent any possible misunderstanding I have to ask first: do you have the patch from #2414 applied?
And the correct answer is...?
Ping. Anybody out there?
@lemrouch Some background with regard to swarm networking so we speak on the same wavelength.
Typically a container in swarm mode started with `--network A_MAC_VLAN_NETWORK` has the following interfaces (as shown by `sudo ip -d add` and `sudo nsenter -t 16614 -n ip route`): eth1 is a veth pair attaching to the swarm ingress overlay network, and eth0 is a macvlan interface with direct external access from the host. Say this container also provides an HTTP service on port 80; then HTTP requests arrive at, and are replied to on, eth1 (the veth pair), and the macvlan interface is not in play. If the container itself requires an external service, say `ping www.google.com`, then as dictated by the routing table above it goes through the eth0 (macvlan) interface directly. Thus, to summarize:
Hope this partially helps you determine your requirement.
Just to be clear about what I am trying to achieve: my goal is to allow containers running on swarm to access a legacy server which runs without any external connectivity at all, in its own subnet in a separate VLAN. There is just one small problem with your background point 2). I'm afraid your summary point 3) is also not correct. I was able to join the VXLAN of such an internal overlay network, but it's quite difficult to maintain because docker manages the ARP tables on the nodes internally. The MACVLAN approach is a little more complicated to set up, as one has to use config-from networks, but it's easy to maintain.
Happy New Year!
hi @lemrouch, I'm not sure how an [...] This can be taken care of in overlay networks by not connecting the container endpoints to the [...] This also makes sense for bridge drivers, which can apply iptables policies to achieve this.
My original task: Therefore I need some kind of swarm network which will not mess with the default gateway and will allow a non-swarm server to join it. I can create an internal overlay network, which gives me almost exactly what I need, but it's quite complicated to connect a non-swarm server to the overlay network's VXLAN and to tell each container where to find the legacy system in it. If I go the MACVLAN way it's a little more complicated to set up, but the legacy network segment can be connected to the underlay VLAN just fine, and each container needs only a simple static route to the legacy system subnet. From my point of view it's perfectly fine for the traffic to be limited to east-west.
Thanks for the clarification, so the issue is that this statement doesn't hold true for you: [...] Sharing some [...]
Yes! The default gateway problem is described in #2406. This issue is about the underlay device. The problem is that when I finally got the internal MACVLAN network working, it was using just a dummy interface instead of the real VLAN device. I have no idea who wrote such code, or why. It might be OK for a single node, but not for swarm.
@lemrouch
Frankly, this makes no sense. I think @chiragtayal agreed and went on a mission to fix this, but got lost in the woods in the process.
Is there any chance that this annoying bug will get fixed? |
Working example as requested in #2419. Let's say we have a legacy server (apex1) running PostgreSQL. It has an interface in VLAN 49, and PostgreSQL is listening there.
Next to it are two docker swarm nodes (apex1, apex2) which have interfaces in the same VLAN 49:
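The VLAN subinterface on each host could be prepared with iproute2 along these lines. The VLAN ID 49 and parent name `enp1s0.49` come from the thread; the physical NIC name `enp1s0` is assumed, so adjust it for your hardware:

```shell
# On each swarm node (and on the legacy server), create the VLAN 49
# subinterface that the macvlan driver will later use as its parent.
ip link add link enp1s0 name enp1s0.49 type vlan id 49
ip link set enp1s0.49 up
```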
Let's prepare a config-only network for our MACVLAN network, where the subnets don't collide with each other or with the legacy system:
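A sketch of the per-node config-only networks. Only the parent interface `enp1s0.49` comes from the thread; the network name `macvlan49_config` and the `10.49.0.0/24` addressing are illustrative. One common way to avoid address collisions is to share the subnet but give each node a disjoint `--ip-range`:

```shell
# On apex1 (illustrative subnet and range):
docker network create --config-only \
  --subnet 10.49.0.0/24 --gateway 10.49.0.1 \
  --ip-range 10.49.0.64/26 \
  -o parent=enp1s0.49 macvlan49_config

# On apex2: same subnet, but a non-overlapping --ip-range:
docker network create --config-only \
  --subnet 10.49.0.0/24 --gateway 10.49.0.1 \
  --ip-range 10.49.0.128/26 \
  -o parent=enp1s0.49 macvlan49_config
```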
Create MACVLAN network with config-from parameter:
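Assuming a config-only network named `macvlan49_config` already exists on every node (an illustrative name), the swarm-scoped MACVLAN network might be created like this:

```shell
# Run once, on a manager node; the config-only network on each node
# supplies the subnet, ip-range and parent interface.
docker network create -d macvlan --scope swarm --internal --attachable \
  --config-from macvlan49_config macvlan49
```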
Inspect network:
Run a test container:
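One way to start such a test container; the attachable overlay network and all names here are hypothetical stand-ins for whatever the original setup used:

```shell
# An attachable overlay so the container can be started with plain
# `docker run` and later connected to additional networks.
docker network create -d overlay --attachable test_overlay
docker run -d --name client --network test_overlay alpine sleep infinity
```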
Inspect the container:
Connect the client to our internal MACVLAN network:
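Attaching the running container (here named `client`, an illustrative name) to the MACVLAN network:

```shell
docker network connect macvlan49 client
```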
Inspect network again. It should have "parent": "enp1s0.49":
Inspect the container again. The default gateway should not be changed by the new interface:
Test connectivity from container to legacy system over MACVLAN network:
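A minimal connectivity check, assuming the legacy PostgreSQL server sits at the illustrative address `10.49.0.10` and the test container is named `client`:

```shell
docker exec client ping -c 3 10.49.0.10
```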
Run and test a container on the 2nd node:
Test connectivity over overlay network:
Test connectivity between nodes over MACVLAN network:
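The second-node steps and the cross-node check might look like this; the container names and addresses are illustrative, and assume an attachable overlay `test_overlay` and the MACVLAN network `macvlan49` exist:

```shell
# On apex2: start a second test container and attach it the same way.
docker run -d --name client2 --network test_overlay alpine sleep infinity
docker network connect macvlan49 client2

# From apex1's container, ping client2's MACVLAN address (illustrative).
docker exec client ping -c 3 10.49.0.130
```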
When I create a macvlan internal network it uses a dummy interface despite the -o parent setting.
This means containers connected to such a network can communicate with each other only if they are running on the same node.
I think an internal macvlan network should be able to use the interface defined in the --config-only network.
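A reproduction sketch of the reported behaviour, with illustrative names, assuming a config-only network `macvlan49_config` with `-o parent=enp1s0.49` exists on each node. On an affected version, inspecting the network and the host links shows a dummy device in place of the configured parent:

```shell
# Create the swarm-scoped macvlan network with --internal set.
docker network create -d macvlan --scope swarm --internal \
  --config-from macvlan49_config macvlan49_internal

# If the bug is present, the network is not bound to enp1s0.49 ...
docker network inspect macvlan49_internal

# ... and a dummy interface appears on the host instead.
ip -d link show type dummy
```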