IPv6 may be the future of P2P and thus Direct Connect

Hi, as you may have noticed, Campus Party Europe had subnet isolation and a symmetric NAT, which made direct communication between campuseros impossible and ruled out NAT traversal, respectively (a symmetric NAT maps a different external port per destination, so hole punching cannot work).

To work around these limitations we created one server where users on the same subnet could talk to each other directly, and decided to go with public IPv6 for the secondary server instead of trying solutions like n2n or freelan, with which we would have created our own private IPv4 network where all the participants could talk to each other.

In this article I’m going to cover the reasons for this decision, the effects it has had, and some thoughts about IPv6 that this experience, together with my prior knowledge, has left me with. All in all the experience has been good, and I feel compelled to keep developing and promoting IPv6 on ADC.

First we should start with the reasons for choosing a public tunnelled IPv6 system over a P2P VPN system.

The first is, of course, politics. Up until this year Campus Party had been promoting IPv6, to the point of making some of their services available over it. While shutting down a P2P VPN with the excuse that it clogs the uplinks (or any other excuse) would be something to expect, doing the same to IPv6 tunnels when they haven’t provided IPv6 connectivity themselves would be plainly hypocritical and would make it harder for them to defend IPv6 in the future.

The second is variety. There are several IPv6 tunnel brokers offering IPv6 over UDP over IPv4, which gives multiple servers to connect to, since the brokers take care of the routing between them. A P2P VPN, on the other hand, would depend on the number of supernodes available to forward traffic between clients in the likely case that direct connections could not be established (either because the system would not try to contact participants on the same subnet, being unable to tell whether their private IPs belong to the same network, or simply because the participants ended up in different subnets).

The third is that having separate IP spaces could let a user prefer one way of connecting over the other (at least in theory, since this is not implemented in DC software), favouring the faster local IPv4 transfer and falling back to IPv6 if it failed.
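As a minimal sketch of that idea (in no way part of any DC client today), the routine below tries the local IPv4 address first and falls back to the public IPv6 address; the addresses and port in the usage comment are hypothetical placeholders.

```python
# Minimal sketch of "prefer local IPv4, fall back to public IPv6".
# No DC software implements this today; addresses and port below are
# hypothetical placeholders.
import socket

def connect_preferring_ipv4(ipv4_addr, ipv6_addr, port, timeout=5.0):
    """Try the (presumably faster) local IPv4 address first, then the
    public IPv6 address, returning the first socket that connects."""
    last_error = None
    for addr in (ipv4_addr, ipv6_addr):
        try:
            return socket.create_connection((addr, port), timeout=timeout)
        except OSError as exc:
            last_error = exc          # remember why it failed, try the next one
    raise OSError(f"peer unreachable over IPv4 and IPv6: {last_error}")

# Hypothetical usage:
# sock = connect_preferring_ipv4("10.0.3.17", "2001:db8::17", 4111)
```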

Also, by using IPv6 we were showing a way to rebel against the NAT and connect with a public IP address, while supporting this new technology.

But the final and most important reason was the ease of use of freenet6’s client, gogonet, which in most cases required just an installation and a single click to work.

Looking at the results, we had around 10 permanent users on the IPv6 server and peaks of 40 on the other one. Those are not big numbers, but given the lack of publicity and the network issues I’m not surprised, since our only competitive edge was the internal chat. The good news is that most of the IPv6 users were power users who will consider IPv6 in the future.

The main problem was the expected drop in speed when connecting through the tunnels, most likely due to saturation of the tunnel server’s or the internet uplink, since in this mode the link effectively works at half capacity (the data has to travel out to the tunnel server and back again). Sadly there was little we could do to overcome this.
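As a back-of-the-envelope illustration (the uplink figure is an assumption, not a measurement): every byte exchanged between two tunnelled clients crosses the tunnel server’s uplink twice, once inbound and once outbound, so the aggregate throughput the clients share is roughly half of that uplink.

```python
# Back-of-the-envelope: traffic between two tunnelled clients crosses
# the tunnel server's uplink twice (in from the sender, out to the
# receiver), so usable aggregate throughput is roughly half the uplink.
# The 100 Mbit/s uplink is an assumed figure for illustration only.
uplink_mbit_s = 100
usable_between_clients = uplink_mbit_s / 2
print(f"~{usable_between_clients:.0f} Mbit/s shared by all tunnelled transfers")
```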

So now come the good things about IPv6. The first is that I’m considering expanding p2plibre to use it, maybe after the move to uhub. This also means I’m strongly committed to preparing a system that allows hubs to support dual-stack clients. The current problem is that the hub has no way of knowing whether a client’s other-protocol address is valid or not (which is bad, since it can be abused for DoS attacks). I have some ideas to solve this limitation, involving the CTM and RCM commands in the hub-client context, which I may implement in the future.
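To make the general idea concrete, here is a rough sketch of one possible scheme: the hub hands the client a one-time token and asks it to connect back from the unverified address, and only advertises that address once a connection arrives from it carrying the right token. Nothing in it is part of the ADC specification; the VERIFY command, the port and the whole flow are assumptions made up for illustration.

```python
# Hypothetical connect-back verification of a client's "other protocol"
# address. This is NOT part of the ADC spec: the VERIFY command, port
# and flow are made up to illustrate the idea.
import asyncio
import ipaddress
import secrets

pending = {}  # one-time token -> claimed IPv6 address awaiting proof

async def request_verification(client_writer, claimed_ipv6, callback_port):
    """Ask an already-connected client to prove it owns claimed_ipv6 by
    connecting back to the hub from that address with a one-time token."""
    token = secrets.token_hex(8)
    pending[token] = claimed_ipv6
    client_writer.write(f"VERIFY {callback_port} {token}\n".encode())
    await client_writer.drain()

async def handle_callback(reader, writer):
    """Accept a connect-back; the address is verified only if the source
    matches the claim and the token is one we handed out."""
    peer_ip = writer.get_extra_info("peername")[0]
    line = (await reader.readline()).decode(errors="replace").strip()
    token = line.split()[-1] if line else ""
    claimed = pending.pop(token, None)
    if claimed and ipaddress.ip_address(peer_ip) == ipaddress.ip_address(claimed):
        print(f"{claimed} verified; safe to advertise to other clients")
    writer.close()
    await writer.wait_closed()

async def main():
    # IPv6 listener for the connect-backs; 41100 is an arbitrary example.
    server = await asyncio.start_server(handle_callback, "::", 41100)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

The point is simply that the hub never relays an address it has not seen traffic from, which closes the door on using the hub to aim connection requests at a victim’s address.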

The second is the ease of use. When making my decision I remembered IPv6 as hard to use, but when showtime came the alternatives were quite good; my only complaint is that brokers like SixXS may take up to two days to register a tunnel for you.

Finally there is one drawback: IPv4 headers are 20 bytes long and IPv6 headers are 40, but since we were sending them over UDP over IPv4 the headers actually added up to 68 bytes per packet. That’s some overhead, but since we were sending large packets it was unnoticeable.
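The arithmetic, using a hypothetical 1400-byte payload as an example:

```python
# Header overhead of IPv6 tunnelled in UDP over IPv4 versus plain IPv4.
# The 1400-byte payload is a hypothetical figure for illustration.
IPV4_HDR, UDP_HDR, IPV6_HDR = 20, 8, 40

tunnelled_headers = IPV4_HDR + UDP_HDR + IPV6_HDR   # 68 bytes per packet
plain_headers = IPV4_HDR                            # 20 bytes per packet

payload = 1400
extra = (tunnelled_headers - plain_headers) / (plain_headers + payload)
print(f"{tunnelled_headers} vs {plain_headers} header bytes "
      f"(~{extra:.1%} extra on a {payload}-byte packet)")
```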

There are more thoughts I may share at a later time; right now I’m too tired to keep thinking.