
IPv6: It's time to get on board - el_duderino
https://code.facebook.com/posts/1192894270727351/ipv6-it-s-time-to-get-on-board/
======
api
The big bottleneck is this: the vast majority of top-tier cloud providers
(AWS, Azure, etc.) do _not_ support IPv6 and seem completely uninterested in
doing so. Until this changes I don't foresee a tipping point.

I really don't understand why this is. None of these are old systems. IPv6
existed when AWS was built. Were they really designed from the ground up
without IPv6 support?

Of course I kind of know the answer. While most of hackerdom is pretty open to
trying new things, most networking engineers are (in my experience) ultra-
conservative and terrified of change. The majority of the network engineering
community has dug in its heels at IPv6 like a stubborn mule and must be
dragged into it, grumbling.

~~~
pixl97
>Were they really designed from the ground up without IPv6 support?

Yup. It is simply amazing how many things I see devs push out that hard-code
badness (PHP's ip2long, for example). Intrusion detection that will only
correctly resolve and send IPv4 addresses to automatic firewall rules.
Interfaces that truncate IPv6 addresses so that information is lost. Who knows
how many security problems are introduced by code that expects one thing and
can get something else.
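The ip2long-style failure mode can be sketched in C: a 32-bit integer is
exactly the wrong abstraction once IPv6 is in play. (The helper names here are
my own, not from any library.)

```c
#include <arpa/inet.h>
#include <stdint.h>

/* PHP's ip2long encodes an address as a 32-bit integer. That
 * assumption holds for IPv4 but not for IPv6, which needs a
 * 16-byte struct in6_addr. */
int fits_in_uint32(const char *text)
{
    uint32_t v4;                       /* 4 bytes: enough for IPv4 only */
    return inet_pton(AF_INET, text, &v4) == 1;
}

/* Accepts an address of either family. */
int is_valid_ip(const char *text)
{
    uint32_t v4;
    struct in6_addr v6;                /* 16 bytes: no native C type */
    return inet_pton(AF_INET, text, &v4) == 1 ||
           inet_pton(AF_INET6, text, &v6) == 1;
}
```

Any interface, log format, or firewall rule built around the first function
silently rejects or truncates everything the second one accepts.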

~~~
api
A lot of it's just laziness. Perhaps we'd see more support if C included a
built-in 128-bit type so IPv6 addresses could be shoved into a variable.

~~~
mprovost
You've actually hit upon (part of) the answer right there. While IPv4 was
designed around the (then) new 32-bit processors of the day, the committee
that came up with v6 decided to ignore the underlying hardware reality.
There's no corresponding type in C (or the underlying hardware, without going
into SSE) that can store a v6 address natively. It just acts as a handbrake
and makes it that much harder to implement. The question should be turned
around: why didn't they design a protocol that could fit into a built-in type
in C? After all, we've only just "run out" of v4 addresses, about 20 years
after it was first predicted to happen. Going to 64-bit addresses would have
kicked the can another few decades down the road, and could have led to a much
faster implementation. You can't blame the "designers" of C (there aren't any,
really; it's just the CPU manufacturers), it's the IETF that ignored the
reality of hardware and programming languages.

~~~
api
IP64 would have worked, and would have kicked the can much further than
another few decades. If I were asked to invent IP 2.0 I'd have suggested just
extending the V4 address space by another 32 bits and calling it a day.

But I do see the advantage of big addresses. They give you room for meaningful
cryptographic semantic information and stateless auto configuration, both of
which are very valuable in big distributed systems and for security.

So yeah, could have been simpler but not a big deal. Things like struct
sockaddr_storage make programming it pretty easy too, but most older devs
aren't familiar with them.
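The struct sockaddr_storage pattern mentioned above can be sketched as a
minimal protocol-agnostic connect via getaddrinfo(); the function name is my
own and error handling is pared down:

```c
#include <netdb.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* getaddrinfo() hands back sockaddr_in or sockaddr_in6 as
 * appropriate; the caller never has to care which. struct
 * sockaddr_storage exists so you can hold either one when you
 * need your own buffer (e.g. for accept() or recvfrom()). */
int open_connection(const char *host, const char *port)
{
    struct addrinfo hints, *res, *rp;
    int fd = -1;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;       /* v4 or v6, whichever resolves */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    for (rp = res; rp != NULL; rp = rp->ai_next) {
        fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
        if (fd == -1)
            continue;
        if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
            break;                     /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}
```

Code written this way in the first place needs no changes at all to gain IPv6
support.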

~~~
zzalpha
_IP64 would have worked, and would have kicked the can much further than
another few decades. If I were asked to invent IP 2.0 I'd have suggested just
extending the V4 address space by another 32 bits and calling it a day._

And you'd still have to rewrite all software to take advantage of it.

Everyone who thinks they could've come up with a better solution seems to
conveniently forget that _no solution_ could have both extended the address
space and provided free backward compatibility with v4. Those are, quite
literally, incompatible goals.

Any hacky solution you do come up with (like, say, NAT64) applies to IPv6, as
well.

Meanwhile, IPv6 goes on to address a whole host of other issues, like the mess
that is CIDR, simplified autoconfiguration, built-in address
randomization/privacy extensions, etc.

~~~
mprovost
I don't think that's true: there were proposals that extended the address
space in a backwards-compatible way, but they were rejected in favour of a
clean-slate approach that required rewriting everything and breaking existing
networks.

~~~
zzalpha
Specifics, please.

How do you extend the address space such that old devices with legacy
addresses can communicate with new devices in an extended address space,
without NAT64-like hacks?

~~~
mprovost
This is probably the best historical document looking at the original
discussions around IPng (which eventually became v6). Interestingly, these
debates were happening right around the time of the invention of the WWW, so
there isn't a lot online today.

[https://tools.ietf.org/html/rfc1454](https://tools.ietf.org/html/rfc1454)

Section 6 (Transition Plans) talks about plans to use new headers or IP
Options to carry the extra information for the new protocol in an IPv4 packet.
In the end they chose the alternative, which was to have no transition plan at
all: they went with the dual stack approach instead.

The SIP proposal is interesting: it's basically the same as v4 but with 64-bit
addresses. They were worried that it might not be enough; of course, their
predictions about exhausting the 32-bit address space were off by decades.

~~~
zzalpha
That section makes my point for me. From the text:

(a) that IPng hosts can also use IPv4, or
(b) there is translation by an intermediate system

and that "The transition plans espoused by the various proposals are simply
different combinations of the above."

Those options are precisely equivalent to dual stack and NAT64, respectively.
No method of extending the address space can avoid doing one of the two.

Amusingly, it even predicts the situation today, noting that "Experience would
tend to show that all these things will in fact happen, regardless of which
protocol is chosen."

The mistake the committee made was dismissing NAT immediately instead of
standardizing an approach alongside dual stack, to provide operators with
options.

------
jewel
> knowledge of where IPv4 would fall short and its 2.0 version, IPv6

This is a comical example of the now common use of "2.0" to mean "new".

~~~
Zikes
It's a bit ironic since the "v6" literally stands for version 6. IPv4's 2.0
version would be IPv2, a step in the opposite direction.

~~~
devit
No, IPv4 + 2.0 = IPv6.

Easy math.

~~~
Zikes
But if IPv4 was the first version then wouldn't the 2.0 be double that?

IPv4 * 2.0 = IPv8

Why are we bothering with IPv6 when clearly IPv8 is the future?

~~~
faitswulff
Google released V8 years ago, now it's all about VP9

~~~
screaminghawk
We are skipping v9 and going straight to v10 in an effort to show that this is
a new product

~~~
infogulch
I thought it was because legacy software detected the much older versions,
VP95 and VP98, by doing a prefix search for "VP9.*" and they didn't want to
break them.

------
michaelmior
Is there any fundamental reason why IPv6 is faster than IPv4? My intuition is
that the only reason things appear faster is because IPv6 communication is
likely to occur using newer network devices (with newer strongly correlating
with faster).

Edit: The only technical difference I can see is that clients are expected to
do MTU discovery, which would probably result in better link utilization.

~~~
jewel
I don't think it's the fundamental reason, but the checksum doesn't need to be
recomputed at every hop, which used to be touted as a major advantage. (I
imagine processing power at the router level is cheap enough now that this is
no longer a concern, even for ten-gigabit links.)

[http://ipv6.com/articles/general/Top-10-Features-that-make-IPv6-greater-than-IPv4-Part4.htm](http://ipv6.com/articles/general/Top-10-Features-that-make-IPv6-greater-than-IPv4-Part4.htm)

~~~
rjsw
With IPv6, you also don't have to compute the checksum twice at each end of
the connection; with IPv4 you have one in the IP header as well as one for UDP
or TCP.
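The per-hop cost being discussed here is the RFC 1071 ones'-complement
checksum over the IPv4 header, which every router must recompute after
decrementing the TTL; IPv6 has no header checksum at all. A minimal sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum: ones'-complement sum of 16-bit
 * big-endian words, with carries folded back in, complemented. */
uint16_t ip_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len)                           /* odd trailing byte */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)                  /* fold carries back in */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}
```

A handy property of the ones'-complement sum: running it over a header whose
checksum field is already filled in yields 0, which is how routers verify it.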

------
porjo
The author, Paul Saab, also presented on IPv6 at @Scale conf:
[https://www.youtube.com/watch?v=_7rcAIbvzVY](https://www.youtube.com/watch?v=_7rcAIbvzVY)

------
dasil003
Not looking forward to having to support geolocation under IPv6.

~~~
sadgit
I'm not sure why it would be that different.

~~~
dasil003
Size of the database is my concern.

~~~
sadgit
The database will get bigger but I'm sure it will accommodate ranges rather
than individual IPS.

------
pixl97
Hey Suddenlink, Facebook is talking to you. Your neglected beta time is up,
time for results.

------
exabrial
Funny, the companies screaming the loudest about IPv6 are the ones that want
to track users the most.

I will be dragging my feet as much as possible on this.

~~~
andrewpe
Explain how these companies will be able to track you better with IPv6. I
would think it's the other way around, since each device normally has a /64 of
possible addresses to use and routinely moves to a new address or holds
multiple addresses.

~~~
devit
With the default IPv6 addressing scheme, the last 64 bits are derived from the
MAC address of your Ethernet card (modified EUI-64), which makes tracking
trivial.

Also, there is usually no NAT, meaning IP addresses are more likely to
uniquely identify the user even without the MAC-based addressing.

But of course setting a cookie allows better tracking, and Tor defeats
tracking anyway for those wishing to intentionally defeat it, so in practice
it probably doesn't make much of a difference.
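The MAC-derived addressing in question is the modified EUI-64 scheme from RFC
4291: flip the universal/local bit of the first MAC byte and splice 0xFFFE
into the middle. A sketch (the MAC in the example is made up):

```c
#include <stdint.h>

/* Modified EUI-64 (RFC 4291, Appendix A): expand a 48-bit MAC
 * into the 64-bit interface identifier used by SLAAC. Because
 * the mapping is deterministic, the low half of the address
 * follows the hardware wherever it goes -- hence the tracking
 * concern. Privacy extensions (RFC 4941) exist to avoid this. */
void mac_to_eui64(const uint8_t mac[6], uint8_t eui[8])
{
    eui[0] = mac[0] ^ 0x02;            /* flip universal/local bit */
    eui[1] = mac[1];
    eui[2] = mac[2];
    eui[3] = 0xff;                     /* fixed filler bytes */
    eui[4] = 0xfe;
    eui[5] = mac[3];
    eui[6] = mac[4];
    eui[7] = mac[5];
}
```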

~~~
pdkl95
For the billionth time, NAT does not provide any security as it only does
address conversion. The other fields in the IP and TCP headers (such as the
connect(2) side's port number, the TCP timestamp[1], badly implemented initial
sequence numbers[2], and anything else that is useful for OS fingerprinting)
can be used to distinguish between users[3] that share a single NATed IP
address.

As you say, HTTP cookies and other higher-level protocol techniques are
usually more than enough to enable tracking. Worrying about your MAC or IP
address is like worrying about your street address. If you are going to be on
the net and ask people to send you data, they need to know where to send it.
It will always be possible for the person sending the data to log the return
addresses. Use Tor (or similar) for privacy, as your IP is by definition
public.

The most powerful feature of the internet was how it allowed anybody to
publish on their own, unrestricted by any central authority, so please stop
trying to create the _digital imprimatur_ [4] with NAT.

[1]
[http://phrack.org/issues/63/3.html#article](http://phrack.org/issues/63/3.html#article)
(section 0x03-2, "TCP Timestamp To count Hosts behind NAT")

[2]
[http://lcamtuf.coredump.cx/oldtcp/tcpseq.html](http://lcamtuf.coredump.cx/oldtcp/tcpseq.html)

[3] [http://memeover.arkem.org/2012/02/identifying-computers-behind-nat-with.html](http://memeover.arkem.org/2012/02/identifying-computers-behind-nat-with.html)

[4] [https://www.fourmilab.ch/documents/digital-imprimatur/](https://www.fourmilab.ch/documents/digital-imprimatur/)

~~~
jgalt212
> For the billionth time, NAT does not provide any security as it only does
> address conversion. The other fields in the IP and TCP headers (such as the
> connect(2) side's port number, the TCP timestamp[1], badly implemented
> initial sequence numbers[2], and anything else that is useful for OS
> fingerprinting) can be used to distinguish between users[3] that share a
> single NATed IP address.

The above are all good points. However, using IPv6 for tracking is trivial;
getting behind the NAT is not, and tracking should not be trivial. From a
behavioral economics standpoint, the more steps a bad actor has to take to be
"bad", the less likely he is to do so. Conversely, the easier it is for people
to behave well, the more likely they will.

~~~
mentat
You don't have to "get behind the NAT" to track these things. Also browser
plugin versions are worth a bunch of entropy.

