
Support hairpinning routing with TURN server #82

Open
enobufs opened this issue Jul 18, 2019 · 3 comments

Comments

@enobufs
Member

enobufs commented Jul 18, 2019

Summary

I deployed a coturn server on AWS EC2, using the -X option to assign a public (elastic) IP address so that the allocated relayed transport address is routable.
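
For reference, a rough sketch of the kind of coturn invocation meant here (the addresses, realm, and credentials are placeholders, and flag spellings may vary between coturn versions):

# advertise the public (elastic) IP for relayed addresses; the value is
# <public-ip>/<private-ip> when the host itself only has the private IP configured
turnserver --external-ip=198.51.100.27/192.0.2.5 \
  --listening-port=3478 --realm=example.org \
  --lt-cred-mech --user=alice:secret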

If I use ICETransportPolicyRelay (relay candidates only), two pion nodes cannot connect to each other. I am fairly sure the reason is that the 1:1 private/public address mapping AWS provides (configured via security group) does not route packets between ports on the same public IP address; in other words, hairpin routing is not supported.

It would be great if the pion/turn server supported, in addition to #56, this hairpin routing (the green line below), which coturn does not even offer. (I reviewed its config 100 times...)

[Screenshot: hairpin routing diagram, hairpin path shown as the green line]

Motivation

I believe that as long as UDP gets through your local NAT/firewall, a relay-to-relay candidate pair wouldn't be necessary in most cases. Also, if many TURN servers are deployed, the chances of two endpoints using the same TURN server instance would be low.

But if:

  • UDP is blocked by a firewall (or only the TURN server is reachable from the endpoint)
  • only a small number of TURN servers is used (or the deployment is temporarily scaled in due to low traffic)

then support for the hairpinning behavior would be crucial.
(Also, supporting it is not expensive.)

Describe alternatives you've considered

Add two relay candidates, maybe?

As we only support UDP right now, this is low priority, I'd say.

@agowa

agowa commented Dec 21, 2019

You could simply add this loopback NAT capability using iptables: just DNAT your own public IP to your internal IP.
Or, even simpler, just run ip addr add dev lo 198.51.100.27/32 on the EC2 server (where 198.51.100.27 is the IPv4 address of your EC2 instance). That way, when the server tries to establish a connection to that IP, the Linux kernel recognizes it as its own, the connection never leaves the instance, and the application can answer the request.

And for two different servers behind the same public IP, you could just DNAT the public IP to the private internal IP on each instance using iptables.
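
A rough sketch of what that could look like (with 198.51.100.27 as the shared public IP and 192.0.2.5 as this instance's private IP; adjust to your setup):

# deliver inbound packets addressed to the shared public IP to this instance's private IP
iptables -t nat -A PREROUTING -d 198.51.100.27 -j DNAT --to-destination 192.0.2.5
# do the same for locally generated traffic to the public IP
iptables -t nat -A OUTPUT -d 198.51.100.27 -j DNAT --to-destination 192.0.2.5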

@enobufs
Member Author

enobufs commented Jan 4, 2020

@agowa338 Using iptables sounds like a good idea. But the ip command you showed, would it alter the destination IP? I guess your assumption is that the server would also be listening on the added IP, right? Also, could you show an example of the iptables commands I can try? Thanks!

@agowa

agowa commented Jan 4, 2020

But the ip command you showed, would it alter the destination IP?

No, but the application would need to listen on all interfaces (0.0.0.0). The destination IP will not change as long as you don't change it in the iptables rule; I don't know why you would want to change it in this case, though.

I guess your assumption is that the server would also be listening on the added IP, right?

Yes

Also, could you show an example of the iptables commands I can try?

You could do a dual NAT. Assuming your EC2 instance has the public IP 198.51.100.27, the internal IP 192.0.2.5, and the interface eth0:

# make the kernel treat the elastic IP as a local address on eth0
ip addr add dev eth0 198.51.100.27/32
# rewrite the source of outgoing packets to the private IP, which is what AWS expects from this instance
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 192.0.2.5

And as Amazon already provides the DNAT for the return packets, we don't need to take care of those ourselves.
The network flow will look like this: [network flow diagram]

This, in addition to the former suggestion, will also work if the application queries the interface for the IP address (as you have both the public and the private one configured on it). To applications it will look like the public IP is routed directly to your interface; it is indistinguishable from, e.g., a floating IP routed via BGP (as one could do for anycast scenarios).
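
A quick sanity check for the setup above (same placeholder addresses): the kernel should now classify the public IP as a local destination, so traffic to it never leaves the instance.

# the elastic IP should now be listed on the interface
ip addr show dev eth0
# and the kernel should report it as local, e.g. "local 198.51.100.27 dev lo ..."
ip route get 198.51.100.27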
