
The world in which IPv6 was a good design - dbenamy
http://apenwarr.ca/log/?m=201708#10
======
hueving
>They have to be special, because an IP node has to be able to transmit them
before it has an IP address, which is of course impossible, so it just fills
the IP headers with essentially nonsense

Not nonsense! The limited broadcast address is specified as 255.255.255.255
and is used by other protocols. The source IP address for the initial discovery is
indeed 0.0.0.0, which is not intuitive, but the rest of the DHCP exchange is
handled with real IP addresses like normal IP traffic. DHCP is very much an IP
protocol (see DHCP relay for how it transits IP networks).

>Actually, RARP worked quite fine and did the same thing as bootp and DHCP
while being much simpler, but we don't talk about that.

Ugh, come on! RARP doesn't provide you with a route to get out of the network
or other extremely useful things like a DNS server.

>and DHCP, which is an IP packet but is really an ethernet protocol, and so
on.

No, it's not an ethernet protocol. It's a layer-3 address assignment protocol
that runs inside of IP, which is normally encapsulated in ethernet frames. You
can have a remote DHCP server running any arbitrary L2 non-ethernet protocol
and if it receives a relayed DHCP request it will reply with IP unicast
perfectly fine with no ethernet involved.

~~~
bogomipz
>"The source IP address for the initial discovery is indeed 0.0.0.0, which is
not intuitive, but the rest of the DHCP exchange is handled with real IP
addresses like normal IP traffic."

No, it's not. The source host putting a DHCP discover request on the wire
doesn't have a real IP until the complete Discover, Offer, Request, Ack
sequence is completed, which is two round trips, during which time the source
IP of the client is still 0.0.0.0. This is why DHCP uses raw sockets.

See:
[https://en.wikiversity.org/wiki/Wireshark/DHCP](https://en.wikiversity.org/wiki/Wireshark/DHCP)
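For the curious, here's a rough sketch in Python of what that initial DISCOVER payload looks like (field values are illustrative, not a complete client); it has to be built by hand precisely because there's no real source IP yet:

```python
import struct

def build_dhcp_discover(mac):
    """Build a minimal DHCPDISCOVER payload (fixed BOOTP header + options)."""
    header = struct.pack(
        "!BBBBIHH4s4s4s4s",
        1,             # op: BOOTREQUEST
        1,             # htype: ethernet
        6,             # hlen: MAC length
        0,             # hops
        0x12345678,    # xid: transaction ID picked by the client
        0,             # secs
        0x8000,        # flags: broadcast bit
        b"\x00" * 4,   # ciaddr: the client has no IP yet, so 0.0.0.0
        b"\x00" * 4,   # yiaddr
        b"\x00" * 4,   # siaddr
        b"\x00" * 4,   # giaddr (filled in by a DHCP relay, if any)
    )
    chaddr = mac + b"\x00" * 10        # client MAC, padded to 16 bytes
    sname_file = b"\x00" * 192         # sname + file fields, unused
    cookie = b"\x63\x82\x53\x63"       # DHCP magic cookie
    options = b"\x35\x01\x01\xff"      # option 53 = DISCOVER, then end
    return header + chaddr + sname_file + cookie + options

# This payload is sent as UDP from 0.0.0.0:68 to 255.255.255.255:67.
discover = build_dhcp_discover(b"\x11\x22\x33\x44\x55\x66")
```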

>"DHCP is very much an IP protocol (see DHCP relay for how it transits IP
networks)."

I would say that DHCP is very much a layer 7 protocol, as it deals with leases
and renewals etc. It uses IP, yes, because it uses UDP and UDP must run over
IP, but I don't think that makes it an IP protocol.

See:
[http://www.tcpipguide.com/free/t_ApplicationLayerLayer7.htm](http://www.tcpipguide.com/free/t_ApplicationLayerLayer7.htm)

~~~
hueving
Sorry, you're right about the initial handshake. I was thinking of the renewal
process, during which the request is sourced from the client's leased IP.

------
hueving
>In truth, that really is just complicating things. Now your operating system
has to first look up the ethernet address of 192.168.1.1, find out it's
11:22:33:44:55:66, and finally generate a packet with destination ethernet
address 11:22:33:44:55:66 and destination IP address 10.1.1.1. 192.168.1.1 is
just a pointless intermediate step.

This is completely wrong, it's not pointless.

First, this can be used to easily swap out routers in a network without
reconfiguring any clients or even incurring downtime. Without the intermediary
gateway IP representation, this would mean you would either have to spoof the
MAC on the second router or reconfigure all of the clients to point to the new
gateway.

Second, ethernet addresses are a layer-2 construct and IP routes are a layer 3
construct. Your default gateway is a layer-3 route to 0.0.0.0/0. There are
protocols for exchanging layer-3 routes like BGP/RIP/etc that should not have
to know anything about the layer-2 addressing scheme to provide the next-hop
address.

Third, routers still need to have an IP address on the subnet anyway to
originate ICMP messages (e.g. TTL expired, MTU exceeded, etc).

Fourth, ARP is still necessary even for the router itself to know how to take
incoming IP traffic from the outside and actually forward it to the
appropriate device on the local network. Otherwise you would have to
statically configure a mapping of local IP addresses to MAC addresses on the
router.

So ARP is critical for separation of concerns between L2 and L3. We don't live
in an ethernet-only world.
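The two-step lookup being defended here can be sketched in a few lines of Python (toy tables, addresses from the example above):

```python
import ipaddress

# Toy config mirroring the thread's example addresses.
subnet = ipaddress.ip_network("192.168.1.0/24")    # our local subnet (L3)
gateway = "192.168.1.1"                            # default gateway (L3 config)
arp_cache = {"192.168.1.1": "11:22:33:44:55:66"}   # learned via ARP (L2)

def frame_destination(dst_ip):
    """Return (dst MAC, dst IP) for the outgoing ethernet frame."""
    on_link = ipaddress.ip_address(dst_ip) in subnet
    next_hop = dst_ip if on_link else gateway      # L3 routing decision
    return arp_cache[next_hop], dst_ip             # L2 resolution via ARP

# Off-link traffic is framed to the *gateway's* MAC, keeping the final IP:
print(frame_destination("10.1.1.1"))   # ('11:22:33:44:55:66', '10.1.1.1')
```

Swap the router for new hardware and only the ARP cache entry relearns; neither the gateway IP nor any client configuration changes.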

>excessive ARP starts becoming one of your biggest nightmares. It's especially
bad on wifi.

Broadcast can become a nightmare. Excessive ARP is a drop in the bucket
compared to other discovery crap that computers spew onto networks.

The pattern of most computers now is to communicate with the external world
(from the LAN perspective) and not much else. So on a network of 1000
computers (an already excessively large broadcast domain), your ARP traffic is
going to be a couple of thousand ARP messages every few hours. If this is
taking down your WiFi network, you have much bigger problems, considering all
of those combined amount to roughly one modern webpage load of traffic.
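Back-of-the-envelope on that claim, assuming minimum-size ethernet frames:

```python
arp_frame_bytes = 64      # an ARP request/reply fits in a minimum ethernet frame
messages = 2000           # "a couple of thousand ARP messages every few hours"
total_bytes = messages * arp_frame_bytes
print(total_bytes)        # 128000 -- about 0.13 MB, a fraction of one page load
```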

~~~
lwansbrough
> This is completely wrong, it's not pointless.

A pretty reliable rule of software: if you think it's pointless, you probably
don't understand it well enough.

~~~
pjmorris
Reliable rule, and not just for software. Seems like a good place for
mentioning 'Chesterton's Fence'....

In the matter of reforming things, as distinct from deforming them, there is
one plain and simple principle; a principle which will probably be called a
paradox. There exists in such a case a certain institution or law; let us say,
for the sake of simplicity, a fence or gate erected across a road. The more
modern type of reformer goes gaily up to it and says, “I don’t see the use of
this; let us clear it away.” To which the more intelligent type of reformer
will do well to answer: “If you don’t see the use of it, I certainly won’t let
you clear it away. Go away and think. Then, when you can come back and tell me
that you do see the use of it, I may allow you to destroy it.”

~~~
msla
Sometimes, a thing is there because it's there, and nobody knows the original
purpose, so by Chesterton, nobody would be able to tear it down, because
nobody can see the use of it.

Chesterton's Fence neatly saves the utterly pointless fences in our lives,
regardless of the damage they can cause.

~~~
Cyranix
I think your conclusion is a result of oversimplifying the concept of
Chesterton's Fence. The idea can be restated as "Don't remove what you don't
understand", but that doesn't imply "This thing is impossible to understand
and may never be removed". A lack of understanding at a point in time doesn't
imply a permanent inability to understand.

Applied properly, the principle of Chesterton's Fence should provide you with
the impetus to observe and learn about the subject. In software, that could
involve creating/improving tests, diagramming method/API invocations,
monitoring network traffic, etc. As a result of your observations, you should
understand the subject deeply enough to determine whether it can be removed
safely. If it can't be safely removed, you now have documentation justifying
its existence (which may, in some cases, form the basis for a plan to migrate,
deprecate, and remove).

------
Animats
What he's really arguing for is a circuit-switched network, so that
connections can be persistent over moves. He just needs a unique connection
ID.

One amusing possibility would be to do this at the HTTPS layer. With HTTPS
Everywhere, most HTTP connections now have a unique connection ID at the
crypto layer - the session key. If you could move an HTTP connection from one
IP address to another on the fly, it could be kept alive over moves. HTTPS
already protects against MITM attacks, and if the transfer is botched or
intercepted, that will break the connection.

I'm not recommending this, but it meets many of his criteria.

The trouble with low-level connection IDs that don't force routing is forgery.
You can fake a source IP address, but that won't get you the reply traffic, so
this is useful only for denial of service attacks. If you have connection IDs,
you need to secure them somehow against replication, playback, etc.

~~~
runeks
> One amusing possibility would be to do this at the HTTPS layer. With HTTPS
> Everywhere, most HTTP connections now have a unique connection ID at the
> crypto layer - the session key.

As a network-ignoramus, who likes cryptography, I’ve long dreamt of a
networking protocol where endpoints are defined, primarily, by a public key.
All messages would be encrypted with the destination public key, and signed by
the source private key.

When a destination node receives a packet from a neighboring node, an ACK
would constitute the destination node’s signature over the received packet,
thus making ACKs provable and portable (“this node has already received that
packet, here’s the proof”).

Packet source addresses would no longer be fake-able, as that would
constitute breaking asymmetric cryptography.

The protocol would have no concept of a “connection”.

Routing would be left out of this protocol completely, and networks would use
whichever routing protocols they find most efficient.

I wouldn’t be surprised if there were countless issues with a protocol like
this, but something about it just seems so elegant to me that I haven’t
stopped considering it yet.
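A toy sketch of the signed-ACK idea above, using HMAC from the standard library as a stand-in for real asymmetric signatures (with, say, Ed25519, anyone holding the public key could verify the ACK; with HMAC only key holders can):

```python
import hashlib
import hmac
import os

# HMAC stands in for real asymmetric signatures so this sketch runs on the
# standard library alone; the structure of the proof is the same.
node_key = os.urandom(32)

def sign_ack(key, packet):
    # The receiver "signs" the exact bytes it received; the signature is the ACK.
    return hmac.new(key, packet, hashlib.sha256).digest()

def verify_ack(key, packet, ack):
    return hmac.compare_digest(sign_ack(key, packet), ack)

packet = b"payload: hello"
ack = sign_ack(node_key, packet)
assert verify_ack(node_key, packet, ack)            # provable receipt
assert not verify_ack(node_key, b"tampered", ack)   # binds to the exact packet
```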

~~~
Gibbon1
Interesting idea. One thought would be to add encryption and signing to the
routing, meaning unless you have the right permissions, your packets won't
even get to the destination.

~~~
mjevans
I'd much rather routing be about getting data from one known point to another.

A /session/ should be able to be serviced by multiple routes, maybe with a
preference (use the cheaper ones first, the faster ones first, etc) or maybe
over time (in the case of mobile).

Having connectivity based at the session level and having a single server be
'multi-homed' (many addresses, each conforming to a different outbound link)
would peel complexity back from the lower layers and allow them to focus on
being simple, robust, and easy to diagnose.

It would also move control and management back up to higher layers, and as
recently shown with a description of Google's core network devices, back to
the end points where a larger and more complete view can be used to determine
the best overall solution.

~~~
Gibbon1
I think my thought comes from the use case where you have thing1 and thing2
that you want to be able to communicate via the internet, but you would rather
they not be accessible from other devices.

------
ChuckMcM
It would be interesting to put that post into Genius and annotate its errors.
At some level the premise is both true and false.

I lived the IPV6 debate, I went to IETF meetings, I worked on network services
that would be affected one way or the other, I debated with others the various
ways to "improve" or "replace" V4 to get a better system. And all through that
time, while everyone felt there would be billions and billions of IP
addresses, I was not aware of any discussion of dynamic routing such that a
network endpoint could be found anywhere in the world without configuration.
Everyone at the time felt network infrastructure was fixed, and network
clients moved.

In that way a network client would move from one network to another, and then
in that new network it would have to establish itself and then advertise
somehow its new status. Everyone agreed that there would be some disruption
during this change of status but things like TCP were designed to tolerate
lossy networks. The network would adapt.

That presupposes a lot of little networks, with their own sets of rules.
Except that isn't the way cellular carriers think: they have one network, and
your relationship to it rarely changes. If you aren't on their network you are
'roaming' and there are fixed rules in place for that. So they trade a lot of
tracking and management for ease of use for the customer. And it enables some
annoying things like 'header injection' in Verizon's case.

Dumb networks versus smart networks. AT&T's original vision of a switched
network spanning the world versus Bob Metcalfe's self-organizing collection of
independent nodes following a small set of rules. Architecturally, it's a
debate that has been going on for a long, long time.

------
hueving
>And nowadays big data centers are basically just SDNed, and you might as well
not be using IP in the data center at all, because nobody's routing the
packets. It's all just one big virtual bus network.

The opposite trend is true in large data centers. L3 fabrics where everything
is routed have become extremely popular because BGP (or custom SDN setups) can
be used to migrate IPs and you get to utilize multiple paths (rather than the
single path offered by STP convergence).
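The multipath part is typically ECMP: hash each flow's 5-tuple to pick among equal-cost paths, so a single flow stays on one path (no reordering) while flows spread across all links. A sketch (the hash function choice here is illustrative):

```python
import hashlib

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, n_paths):
    # Hash the flow's 5-tuple: one flow always takes one path (no reordering),
    # while different flows spread across all equal-cost paths.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_paths

path = ecmp_path("10.0.0.1", "10.0.1.1", 40000, 443, "tcp", 4)
assert 0 <= path < 4
# Same flow, same answer -- the property that keeps TCP happy:
assert path == ecmp_path("10.0.0.1", "10.0.1.1", 40000, 443, "tcp", 4)
```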

~~~
Natales
Exactly. Virtually all new DCs are being built using leaf-spine architectures
that leverage pure IP routing and optimize for east-west traffic internally.
I'm also starting to see the rapid proliferation of L3-only SDN systems like
Project Calico [1] in fairly large Fortune 100 companies, for any sort of
endpoint, particularly containers these days.

[1]
[https://www.projectcalico.org/learn/](https://www.projectcalico.org/learn/)

------
Hikikomori
Interesting article, but it contains some weird statements.

>It is literally and has always been the software-defined network you use for
interconnecting networks that have gotten too big. But the problem is, it was
always too hard to hardware accelerate, and anyway, it didn't get hardware
accelerated, and configuring DHCP really is a huge pain, so network operators
just learned how to bridge bigger and bigger things.

IP forwarding (longest-prefix match) is more complicated than MAC forwarding,
yes, but it has been done in hardware (ASICs, typically NPUs today) for a long
time now. Operators (I assume ISPs) do not build large bridged networks, as
they need their networks to scale as they grow, or they will hit a breaking
point where their network collapses. ISPs typically use centralised DHCP
servers (as opposed to configuring their access routers) and configure their
routers to use DHCP relay. DHCP server configuration is easily automated by
just reading your IPAM data; it's a non-issue.
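Longest-prefix match in a few lines of Python, just for the idea (real gear does this lookup in ASIC/NPU tables at line rate; the interface names here are made up):

```python
import ipaddress

# A tiny FIB mapping prefixes to (hypothetical) next-hop interfaces.
fib = {
    "10.0.0.0/8": "core-uplink",
    "10.1.0.0/16": "region-1",
    "10.1.2.0/24": "rack-42",
}

def lookup(dst):
    """Longest-prefix match: the most specific route containing dst wins."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [ipaddress.ip_network(p) for p in fib
               if dst_ip in ipaddress.ip_network(p)]
    best = max(matches, key=lambda n: n.prefixlen)
    return fib[str(best)]

print(lookup("10.1.2.7"))    # rack-42 (the /24 beats the /16 and the /8)
print(lookup("10.9.9.9"))    # core-uplink
```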

------
djrogers
> In truth, that really is just complicating things. Now your operating system
> has to first look up the ethernet address of 192.168.1.1, find out it's
> 11:22:33:44:55:66, and finally generate a packet with destination ethernet
> address 11:22:33:44:55:66 and destination IP address 10.1.1.1. 192.168.1.1
> is just a pointless intermediate step.

Bollocks. The abstraction allowed by using an IP address instead of a MAC
address is essential, considering that IP addresses are dynamic (even when
statically configured, devices can and do get replaced) and MAC addresses are
set at the factory. Can you imagine updating the routing table of every device
in your network because you had to replace a core router and the MAC address
was different? It’s the equivalent of publishing your website on an IP address
instead of a DNS hostname...

* yes, I know MAC addresses can be configured in software on many devices, but that's even more of a hack than using ARP to determine a MAC address.

~~~
mritun
What he is saying is that in a routed network (instead of a bridged one), you
don't need a MAC address. At least not at layer 2.

------
tyingq
>One person at work put it best: "layers are only ever added, never removed."

You find this in the software world as well. Something about the java culture
seems especially fascinated with multiple layers of abstraction.

Edit: Ok, some factions of the culture. "_Convenient proxy factory bean
superclass for proxy factory beans that create only singletons_"

~~~
aidenn0
Some infamous examples of such java classes (For some reason Spring always
seemed to have more verbose names than other DI frameworks, which is saying
something):

    SimpleBeanFactoryAwareAspectInstanceFactory
    SerializedEntityManagerFactoryBeanReference

And this is totally not related to that sort of naming scheme, but in trying
to remember those spring names, I stumbled upon this gem:

[http://git.eclipse.org/c/aspectj/org.aspectj.git/tree/org.as...](http://git.eclipse.org/c/aspectj/org.aspectj.git/tree/org.aspectj.matcher/src/org/aspectj/weaver/patterns/HasThisTypePatternTriedToSneakInSomeGenericOrParameterizedTypePatternMatchingStuffAnywhereVisitor.java)

~~~
floatboth
[http://java.metagno.me/](http://java.metagno.me/)

------
okket
> Actually, RARP worked quite fine and did the same thing as bootp and DHCP
> while being much simpler, but we don't talk about that.

Actually, no. RARP can only set an IP address; it can't provide a netmask
(RARP comes from the pre-CIDR age) or other important stuff like a default
gateway, DNS server, etc., like you can get with DHCP.

~~~
bluGill
Right, but those are all things we are not talking about here, so RARP worked
just fine.

------
akshayn
"If, instead, we had identified sessions using only layer 4 data, then mobile
IP would have worked perfectly."

Mobile IP can still work with the current infrastructure --
[https://en.wikipedia.org/wiki/Mobile_IP](https://en.wikipedia.org/wiki/Mobile_IP)

This proposal was basically a service which would host a static IP for you
(similar to the LTE structure but with IP underneath instead of L2), and
forward to whatever your "real" IP was using IP-in-IP encapsulation.
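The encapsulation itself is just one extra 20-byte header glued on the front. A sketch (checksum deliberately left zero, field values illustrative):

```python
import struct

def ipip_encapsulate(inner, home_ip, care_of_ip):
    """Prepend an outer IPv4 header with protocol 4 (IP-in-IP)."""
    outer = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,        # version 4, IHL 5 (20-byte header)
        0,                   # TOS
        20 + len(inner),     # total length
        0,                   # identification
        0,                   # flags / fragment offset
        64,                  # TTL
        4,                   # protocol 4: the payload is itself an IP packet
        0,                   # checksum (omitted in this sketch)
        home_ip,             # outer src: the home/static-address service
        care_of_ip,          # outer dst: the node's current real address
    )
    return outer + inner

inner_packet = b"\x45" + b"\x00" * 19 + b"data"   # placeholder inner packet
wrapped = ipip_encapsulate(inner_packet, b"\x01\x02\x03\x04", b"\x05\x06\x07\x08")
assert wrapped[9] == 4                 # outer protocol field: IP-in-IP
assert wrapped[20:] == inner_packet    # the inner packet rides unchanged
```

The latency cost comes from the detour through the forwarding host, not from the extra 20 bytes.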

As the author states, layers are only ever added :)

~~~
apenwarr
That's basically just an IPIP tunnel. It works, but adds tons of latency. A
"good" mobile IP (which would be possible with, e.g., QUIC) would add no per-
packet latency.

~~~
Dagger2
Correspondent node functionality in Mobile IPv6 does exactly that, by sending
the packets directly between the two nodes involved. Basically it works by
associating the connection with a 128-bit GUID instead of with the node's
current network address, which allows the network address to change without
breaking connections.

So... basically exactly what the suggested solution would look like, if it was
modified to work with all protocols and not just TCP/UDP.

------
ktRolster
We'll switch to IPv6, and every service will still go through port 80.

~~~
ploxiln
minor correction: port 443

------
hueving
>Network operators basically choose bridging vs routing based on how fast they
want it to go and how much they hate configuring DHCP servers, which they
really hate very much, which means they use bridging as much as possible and
routing when they have to.

Very rarely does a network operator use bridging to avoid configuring DHCP.
All modern protocols are built on IP, so you still need an addressing scheme,
and most people want the Internet, so 169.254 link-local auto-addressing is
out. So even in big bridged networks, you still have a DHCP server. In fact,
you configure less DHCP for one big bridged network than for a ton of tiny
routed networks.

The advantage of big bridged networks is that you have to set up very little
routing (just the router to get in and out). If you routed between every port
on the network, there would be an excessive amount of configuration involved
to set up prefixes on every single interface.

------
mjevans
Ok, so QUIC or some other common layer 4/4+5 'modern TCP over UDP for network
compatibility' solution.

Let's just throw away the concept of 'addresses' for authentication and
actually use a cryptographic authentication identifier of some kind, combined
with some mux iteration ID.

~~~
__s
Drop the idea of ports too. Every program gets its own IP.

Mentioning ideas like that at work gets queer looks about how it'd be
impossible to configure a firewall at that point.

But keep going further. End up with a 128-bit CPU where every byte is IP-
addressable. Necessary security to block random outsiders from reading your
memory, but capable of potentially running various parts remotely,
transparently.

~~~
apenwarr
With NAT, this has essentially already happened. You could say IPv4 is
actually 48-bit addressing, at least on the client side. For all useful
purposes, NAT expanded every /24 or even /32 subnet by an extra 16 bits, which
is the real reason we still haven't run out of IPv4 addresses.

That could be extended to the server side if we used something like SRV
records instead of defaulting to port 80/443.
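The arithmetic behind "48-bit addressing":

```python
public_v4 = 2 ** 32   # the whole IPv4 space, one host per /32
ports = 2 ** 16       # NAT multiplexes outbound flows across source ports
print(public_v4 * ports == 2 ** 48)   # True
# Per public IP, that's ~65k concurrent flows per (protocol, destination) pair.
```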

~~~
jsjohnst
Maybe it's early and I've not drunk enough coffee yet, but what benefit would
this provide? You'd in effect be doubling the DNS lookups needed to connect
to a website, adding latency to time to first byte, with seemingly zero
benefit.

~~~
JdeBP
As I pointed out at
[https://news.ycombinator.com/item?id=14721416](https://news.ycombinator.com/item?id=14721416)
, the claim of "doubling the DNS lookups" simply does not hold water if
actually analysed.

------
luckydude
This is an awesome read and hilarious if you have any historical knowledge of
networking.

+1 highly recommend even if all you want is a few chuckles.

Well done.

~~~
luckydude
BTW, the best read in networking is "The Elements of Networking Style" by Mike
Padlipsky.

[https://en.wikipedia.org/wiki/Michael_A._Padlipsky](https://en.wikipedia.org/wiki/Michael_A._Padlipsky)

It's hard to find a copy of that book but oh, man, if you know the stack and
you lived through the ISO/OSI proposals, it's so so good.

I got lucky and read that book after I had ported Lachman's streams based
networking stack into the ETA-10 and SCO's Unix. I didn't know what I was
doing but I had to get the job done, I was just dealing with shorts that
weren't 16 bits, stuff like that. So I was a grunt thrashing around. At some
point I wanted to know more about what I was doing. I still have a notebook where I
wrote down every packet format, all the IP stuff, TCP stuff, UDP stuff, ARP
stuff, etc. Shout out to Masscomp because I was a sys admin on their machines
in college and they had a great intro to networking that formed the basis of
my limited understanding of networking.

It was Padlipsky's book that brought the whole thing into focus. I dunno if
you guys have had that clarity problem, I had it again when I went to Sun and
was working on the kernel, had no idea what I was doing, thrashed about, and
slowly, slowly, the architecture of what Sun had done came into focus. It was
amazing to me when I got it. It took a lot of time just looking, reading the
code, trying to see the picture. Padlipsky's book made me get networking long
before I came to Sun, I was a n00b at everything and he made me get it. And it
was funny as heck.

"Do you want protocols that look nice or work nice?"

"If you know what you are doing 3 layers are enough, if you don't 7 aren't"

His book was full of that stuff and it made you get the stack. If you are into
the network stack, and especially if you are trying to figure it out, get that
book.

In fact, if you contact me and say "that's me but I'm broke", I'll see if I
can find a copy on ebay or some place and send it to you.

That book rocks.

~~~
luckydude
If HN doesn't upvote this to the stars, you suck. It is the essence of what HN
likes, it's great tech presented well. Go find that book, go read about Mike,
come on people, dig a little.

I don't care about the upvotes, I care that you go read. And read something
that will help you see.

Maybe this thread is dead and I need to do a top level post about Mike.

~~~
luckydude
Sigh, I think I screwed this up by being stupid. I still think this is in
the HN wheelhouse, but I didn't help with my comments. My bad.

Please go read the book.

------
noahl
This was a very informative article for me, but there was one thing I didn't
understand. At the end he made the case that mobile routing needed essentially
two layers: a fixed per-device (or per session) identifier, and then a
_separate_ routing-layer address that could change as a device moved. QUIC has
session identifiers, and that's great and could solve the problem.

But earlier in that very article, he already pointed out that every device
already has a globally unique identifier used in layer 2 routing ... the
ethernet MAC address.

Would someone please explain to me why we can't use MAC addresses as globally
unique device IDs?

(Is MAC spoofing the issue?)

~~~
nayuki
In theory we can use MAC addresses, but there are problems:

0) Privacy: you don't want all your traffic to be labeled with your hardware ID.

1) Flatness: MACs are essentially random, and a router would need a huge table
to keep track of who's where. IP (v4/v6) assigns addresses hierarchically,
making routing tables manageable.

~~~
noahl
The idea would be to use IP addresses for all levels of routing, and MAC
addresses only on the endpoints to identify the connection. So routing
actually becomes simpler. However, you have a good point about privacy. One of
the other commenters also mentioned non-unique MACs.

Another point against using MACs which I want to point out is that they don't
make much sense if you have a service running on multiple hosts. I mean, you
could introduce "virtual MACs", but it seems better to keep the idea of
"service ID" separate from "device ID". Session IDs solve the multiple hosts
problem too, by completely avoiding it.

------
femto
The "Internet Mobile Host Protocol" (IMHP) was written as a draft RFC in 1994.
As far as I know it was never adopted, but is it still relevant, even as an
inspiration for IPv6?

[1] [https://www.cs.rice.edu/~dbj/pubs/draft-johnson-imhp-00.txt](https://www.cs.rice.edu/~dbj/pubs/draft-johnson-imhp-00.txt)

Edit: Its official entry at the IETF: [https://datatracker.ietf.org/doc/draft-johnson-imhp/](https://datatracker.ietf.org/doc/draft-johnson-imhp/)

------
collinmanderson
A little off topic, but the TCP BBR Congestion Control they mention looks
promising. I've been annoyed by "Bufferbloat" for over a decade and find
different solutions to the problem pretty fascinating.

The nice part about this solution is that it doesn't require making changes to
the individual nodes on the network (e.g. cable modem) in the way that other
solutions have required (small and fair queues).

It also appears to be able to avoid the usual packet-drops of regular TCP
congestion control.

------
Aloha
Part of the difficulty here is that you're not just upgrading the whole stack;
you're instead layering on whatever stack is already there. That's a needed
part of deploying any new technology without replacing everything from the
basement up. I'm not sure what this guy would do instead, however; as someone
with a decent networking background, I got completely lost at the end.

~~~
loeg
The author has a verbose writing style, and may not be a genius, but they are
clearly familiar with networking protocols and do a good job explaining the
general scene and history.

~~~
Aloha
I just disagree with some of his premise. I don't think it's feasible to get
away from any and all layer-2 broadcast messages. Even limiting yourself to
IPv6 multicast still leaves you with some pseudo-broadcast messages for either
housekeeping or auto-discovery purposes; it's part of what makes IP-based
networking in general work, and I'm talking about services that run at layer
four that rely on that primitive to exist to make certain functionality
happen.

Routing still adds complexity, and I see no way for it not to. He talks about
each switch being a router, for example. You then need a way to determine
where a particular subnet you're trying to get to lies, which means a routing
table (akin to a MAC address table, but for prefixes), and some way to ferret
out behind which interface a particular device lies (akin to ARP, but instead
searching for a prefix to route to). In the end, you end up with the same
primitives and complexity, but you've just moved it up a layer in the stack,
which does nothing to get rid of complexity; it just adds to it. IP networks
are often a tree in organizations; ethernet is often deployed with different
topologies, relying on spanning-tree to give you more redundancy without extra
configuration (at the expense of some delay on re-convergence).

I think also that so long as we're dragging along the legacies of the
disparate layer one technologies (which in any case, would not and could not
go away) - we're kinda stuck with what we have.

~~~
apenwarr
The point is that actually IPv6 already includes all the complexity you're
talking about: complicated multicast to replace complicated broadcast,
complicated routing to replace complicated bridging. The underlying problem
with IPv6 is that it includes all this complexity _because_ they expected to
have to replace layer 2 bridging. But this never happened, so now we have all
those features twice, which is worse.

~~~
Aloha
So IPv6 in essence is the future that was stillborn - we still plan and
develop our networks for an IPv4 world, and then run IPv6 on top of them,
correct?

~~~
apenwarr
It seems so to me. But someday, years from now, the investment might still pay
off. In the meantime it's just pain.

------
therealwardo
I gave a talk about a lot of the same concepts this piece covers -
[https://www.youtube.com/watch?v=g2czluHsmog](https://www.youtube.com/watch?v=g2czluHsmog)

it has a more visual explanation of the OSI model and how it relates to
routing and different kinds of hardware. I also tried to explain some of the
interesting problems in actually building out a network in the second half of
my talk.

if anyone is just trying to learn the basics of networking I'd also strongly
recommend the Juniper Networking Fundamentals online class, it's free at
[https://learningportal.juniper.net/juniper/user_activity_inf...](https://learningportal.juniper.net/juniper/user_activity_info.aspx?id=769)
or you can find videos of it on YouTube.

------
anilgulecha
One big UX mistake of IPv6: it was not made backward compatible with IPv4,
e.g. ::192.168.1.10 (v6) == 192.168.1.10 (v4).

This simple design when planning and rolling it out would have meant
incrementally updating the networking stack to also support v6. Now it turns
out v4 and v6 are completely different, and no one has a big enough reason to
make the change until everyone else makes the change. Hard chicken-egg
problem.
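For what it's worth, the standard does define an IPv4-mapped form, ::ffff:a.b.c.d, visible in Python's stdlib, but it only helps inside one host's stack; the two wire formats remain incompatible:

```python
import ipaddress

# An IPv4-mapped IPv6 address embeds the v4 address in the low 32 bits.
mapped = ipaddress.IPv6Address("::ffff:192.168.1.10")
print(mapped.ipv4_mapped)   # 192.168.1.10
assert mapped.ipv4_mapped == ipaddress.IPv4Address("192.168.1.10")
```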

~~~
okket
Backwards compatibility cannot work. You cannot answer an IPv6 packet with
IPv4: there is no room in the header for the much bigger source/return
address.

You can try to do hacks like NAT (like you probably do in your home IPv4
network, which breaks/stops any peer-2-peer protocol). The IPv6 version is
called DNS64/NAT64, and it breaks even more things, e.g. DNSSEC, because it
not only requires network address translation (lying about your IP) but also
rewriting DNS records (lying about signed DNS records).

The only sane way forward is to get rid of IPv4 as fast as possible (even if
'fast' means a decade or two).

~~~
djrogers
> You can try to do hacks like NAT (like you probably do in your home IPv4
> network, which breaks/stops any peer-2-peer protocol).

That’s demonstrably untrue. Sure p2p protocols and NAT devices need to account
for NAT, but to imply that they’re impossible to use with NAT is just silly.
Many p2p protocols work via NAT all the time...

~~~
okket
Sure, you can add another layer of hacks like UPnP or NAT-PMP to work around
the breakage a bit. Or try to exploit side effects, like NAT boxes awaiting
UDP responses after they forward such a packet. And you can spend the rest of
the day telling yourself this is totally fine, because it works most of the
time.

Or you could step back and see that this is a terminally ill protocol on life
support.

------
undoware
Easily the best technical document I've ever read. Holy heck. "Now I see with
pulse serene, the heart of the machine"

~~~
mulle_nat
Somehow it's always old network guys who write the best articles.

------
bullen
I would like the extension of IPv4 to be that you have the option to also
specify the internal IP you are trying to reach, so x.x.x.x:192.168.0.100 for
example. That would be backwards compatible and give 8 bytes, which is more
than enough for all futures.

------
teh_klev
There's quite a good (as usual) BBC Radio 4 "In Our Time" episode about Robert
Hooke:

[http://www.bbc.co.uk/programmes/b070h6ww](http://www.bbc.co.uk/programmes/b070h6ww)

------
_pmf_
> To save on electronics, people wanted to have a "bus" network

It was also to save sanity and avoid having to rip apart every office
building to install hundreds of cables.

------
mirimir
I found the piece informative and entertaining. But I'm not technical enough
to comment much. I would have liked to see what he thought of MPTCP as a
replacement for TCP.

------
betaby
We have to stop the IPv6 debate; it's already reality. Even if you don't like
it, even if you think it's ugly - it doesn't matter. US IPv6 mobile traffic
passed 50% some time ago:
[https://engineering.linkedin.com/blog/2017/07/linkedin-passes-ipv6-milestone](https://engineering.linkedin.com/blog/2017/07/linkedin-passes-ipv6-milestone)
IPv6, at least on mobile, is real, and many of us didn't even notice it's
there.

------
Hnrobert42
The best post I've ever read on HN.

------
peterburkimsher
That is a beautifully-written article.

The IEEE hardware and IETF software guys have been busy adding complexity to
the networks, with so many legacy protocols (when everyone just uses TCP/IP)
and extra ports (when everything happens on port 80 - seriously, even email is
now on cloud services).

I can't get LTE because of political problems. So I just gave up trying to be
online, and started caching everything possible.

Meanwhile, storage is getting larger capacity, smaller size, and cheaper. I've
got a 512GB SD card in my pocket all the time, with a backup of my laptop in
case my bag gets stolen.

My phone does everything offline if possible. Offline MP3 music. Offline maps.
Wikipedia. StackOverflow. Hacker News. FML. UrbanDictionary. XKCD. The few
YouTube videos I actually want to see again.

The only thing I need Internet for is communication. To send a message, I walk
around looking for open WiFi and type my message to them on Facebook
Messenger. If they need to reach me urgently, they can just use my phone
number (which keeps changing every 6 months for the same political problems).

What if access points had large caches with mirrors of the content people
want? Instead of asking Google's server in the US to send me a map tile, what
if I could just get it from the local WiFi AP's web server? It would be much
faster, and save so much trouble with networking.

Sure, there are some things that people need the network for (e.g. new
content, copyrighted material). But so much else is free of licenses, and
would be possible to mirror locally everywhere.

~~~
isostatic
One look at ntop tells me very little of our traffic is port 80 or 443

------
marasal
This was a great read.

------
feelin_googley
Surprising to see a recommendation for QUIC by someone who seems to
acknowledge djb's contributions and incredible attention to detail:
[http://apenwarr.ca/log/?m=201103#28](http://apenwarr.ca/log/?m=201103#28)

Correct me if I'm wrong, but QUIC was inspired by djb's CurveCP?

Would you rather have djb implement your trusted UDP congestion-controlled
overlay or a company with 70,000+ employees who are paid from the sale of
online ads?

@hashbreaker Apr 15 CurveCP's zero-padding (curvecp.org/messages.html) was
designed years before ringroadbug.com, explicitly to stop that type of attack.

[http://ringroadbug.com](http://ringroadbug.com)

Ring-Road

Leaking Sensitive Data in Security Protocols

What is Ring-Road?

The Ring-Road Bug is a serious vulnerability in security protocols (e.g. QUIC,
but not CurveCP) that leaks the length of passwords, allowing attackers to
bypass user authentication. The Internet Engineering Task Force for HTTP/2,
led by Google, is working to create a patch to protect security protocols
vulnerable to Ring-Road.

Researchers at Purdue University identified a major security issue with
Google's QUIC protocol (Quick UDP Internet Connections, pronounced quick).

------
davidreiss
Is anyone else shocked at the low level of adoption of IPv6? I remember how in
the late 90s people were saying we were going to run out of addresses and
everyone needed to migrate to IPv6 ASAP. Now, it seems that IPv4 is going to be
around for a long while.

~~~
marcosdumay
Keep in mind that we are currently running out of IPv6 addresses.

The slow adoption is pushed by entities that control a big number of IPv4
addresses. I have no idea how long this situation can sustain itself.

~~~
mrunkel
Citation needed for both wild statements.

------
fundabulousrIII
This article was some of the most egregious nonsense I've read in a while.

------
gridscomputing
"QUEER"?!

~~~
sctb
We detached this subthread from
[https://news.ycombinator.com/item?id=14986630](https://news.ycombinator.com/item?id=14986630)
and marked it off-topic.

------
killjoywashere
Bookmark

------
tardo99
What if the server needs to send you a packet while you're mobile but you
haven't sent it a packet yet so it can update its cache? That packet will be
lost in his scheme. Nice try.

~~~
dcposch
IP is best effort. Packets get lost all the time. Higher-level protocols, like
TCP and QUIC, all handle packet loss - typically by trying again. Losing a
packet is better than losing all open connections.

~~~
elihu
I think it may be worse than a lost packet, though. If the destination host is
truly gone from that network, the nearest router might reply that no such
computer exists, causing the socket to be disconnected. (It's been a while
since I've studied TCP/IP; I'm not sure what the exact correct reply would
be...)

Perhaps this could be mitigated by adding a timeout that keeps sockets alive
for a while in case the destination shows up somewhere else.
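For what it's worth, one way to approximate that mitigation today is to loosen the kernel's give-up timers on a TCP socket. A hedged sketch in Python - the keepalive tuning and TCP_USER_TIMEOUT constants are Linux-specific, hence the guards:

```python
import socket

# Sketch: make a TCP connection survive a temporarily unreachable peer
# for longer, instead of aborting at the first sign of trouble.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only tuning knobs
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # first probe after 60s idle
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # then every 10s
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 6)     # give up after 6 failed probes

if hasattr(socket, "TCP_USER_TIMEOUT"):
    # Tolerate ~5 minutes of unacknowledged data before the kernel aborts.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 5 * 60 * 1000)
```

This doesn't help with an actual ICMP error being delivered mid-connection, but it does cover the "destination silently moved and will come back" case the comment describes.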

------
beagle3
I am very glad IPv6 didn't catch on. The world in which it was designed was
not a world in which everyone (NSA, Google, Facebook) was trying to document
and correlate every tiny thing you do, whether it is related to them or not.

If IPv6 eventually becomes widespread, I hope it comes with ISPs that will let
you replace your prefix, and phones/hardware that will randomize your suffix -
otherwise, the internet becomes completely pseudonymous.

~~~
vertex-four
IPv6 did catch on. Every consumer of the UK's largest broadband services (BT,
Sky) now has access to the IPv6 internet. Many, many people across the world
have access to it, with clients that prefer connecting to IPv6. And much of
the world, especially on mobile but now even on broadband, doesn't even have
an IPv4 address of their own - they're NATed along with their ISP's other
subscribers through a handful of IPs for a whole ISP. An IPv6 address is the
only address they actually have.

IPv6 is the only way we're ever going to create working peer-to-peer
infrastructure. If you intend to keep anonymous, integrate Tor or HORNET into
your protocols.

~~~
beagle3
I am not familiar with the UK situation these days, but in every country I've
visited in the last year (the US, quite a few European, and a couple of Asian
ones), IPv6 wasn't more than a small irrelevant thing.

> And much of the world, especially on mobile, doesn't even have an IPv4
> address of their own - they're NATed along with their ISP's other
> subscribers through a handful of IPs for a whole ISP. An IPv6 address is the
> only address they actually have.

And that's a great thing, if you care about privacy (and I do). And yet,
peer-to-peer on these things works reasonably well using ICE, STUN, TURN, and
friends, and if you want a public IPv4 address, the going rate wherever I look
is about $1/month.
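For the curious, the core of a STUN exchange is small enough to sketch by hand. This is a hedged, offline illustration of the RFC 5389 Binding Request header and XOR-MAPPED-ADDRESS decoding - no server is contacted, and the sample address/port are made up:

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def binding_request() -> bytes:
    # STUN header: type=0x0001 (Binding Request), length=0,
    # magic cookie, then a random 96-bit transaction ID.
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + os.urandom(12)

def parse_xor_mapped(attr_value: bytes) -> tuple:
    # XOR-MAPPED-ADDRESS (IPv4): zero byte, family, XOR'd port, XOR'd address.
    _, family, xport = struct.unpack("!BBH", attr_value[:4])
    assert family == 0x01  # IPv4
    port = xport ^ (MAGIC_COOKIE >> 16)
    addr = struct.unpack("!I", attr_value[4:8])[0] ^ MAGIC_COOKIE
    return socket.inet_ntoa(struct.pack("!I", addr)), port

# Offline round-trip check: encode a known mapping, then decode it.
ip, port = "203.0.113.7", 54321
xport = port ^ (MAGIC_COOKIE >> 16)
xaddr = struct.unpack("!I", socket.inet_aton(ip))[0] ^ MAGIC_COOKIE
value = struct.pack("!BBHI", 0, 0x01, xport, xaddr)
print(parse_xor_mapped(value))  # ('203.0.113.7', 54321)
```

A real client would send `binding_request()` over UDP to a STUN server and learn its public address from the reply; that's the "somebody has to run a server" dependency discussed below.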

> IPv6 is the only way we're ever going to create working peer-to-peer
> infrastructure

We practically already have working (if imperfect) peer-to-peer; the lack of
an immediately direct connection is not, I believe, what's stopping "working
peer-to-peer" from happening -- the vast majority of ISPs in the US, for
example, block incoming port 80 and outgoing port 25, and for good reason:
most users cannot be trusted to run an addressable peer. So with IPv6, p2p
would be technically easier, but practically the same, as it will be
firewalled by the ISPs.

And the price for this improvement will be complete traceability of your
actions across every website -- right now, Google and Facebook can only
(easily) exchange info about you if you gave enough of it to them, or if they
decide to share cookies (which you can see and stop). On IPv6, it would be
enough for them (and Wikipedia, and ISPs, and everyone else) to just trade
access logs.

~~~
vertex-four
> every country I've visited in the last year (US, quite a few european and a
> couple of asian) IPV6 wasn't more than a small irrelevant thing

Well, yes. It works so long as you're connecting to someone else who has an
IPv6 address, you don't really care about it unless it's broken.

> And that's a great thing, if you care about privacy (and I do).

It's really not. Your ISP can quickly deanonymize you, and there's regular
"misconfigurations" which do. Facebook et al have no problem tracking you
between sites pretty much no matter what you do - your browser cache can be
used for that without even touching javascript.

Again, if you want to be anonymous on the internet, use Tor. It accomplishes
what you're looking for in a NAT to a _much_ better degree. If you want to
keep other users' privacy, encourage the use of onion routing in new
protocols, and encourage the use of Tor to access the legacy internet.

> And yet, peer to peer on these things works reasonably well using ICE, STUN,
> TURN and friends

Which require, of course, somebody running a centralised server and willing to
pay for the bandwidth of TURN. This outright prevents proper peer-to-peer
infrastructure from happening - the people running these services need to pay
for them somehow. Even working around it via e.g. Skype's "supernodes" is
expensive in terms of developer cost and the amount of expertise needed to
create such a system.

> the vast majority of the ISPs in the US, for example, block incoming port 80
> and outgoing port 25

And allow all other ports, hopefully? Peer-to-peer infrastructure is not going
to run over HTTP and email. It's going to run over brand new protocols and
ecosystems, many of which are sitting in a variety of research papers waiting
to be implemented.

FTR, they block incoming port 80 because they want to maintain an artificial
differential between "consumer" and "business", not any security rationale -
most of the rest of the world doesn't do that, they just have a firewall
blocking everything incoming on the ISP-provided router by default, and you
can unfirewall port 80 if you want to. Blocking outgoing port 25, otoh, is
done because SMTP is a terrible protocol that by default assumes every node on
the internet is trustworthy, and ISPs were roped in to ensure nobody ever had
to change it.

~~~
beagle3
> It's really not. Your ISP can quickly deanonymize you, and there's regular
> "misconfigurations" which do. Facebook et al have no problem tracking you
> between sites pretty much no matter what you do - your browser cache can be
> used for that without even touching javascript.

Actually, Facebook has great problems tracking me between sites, because I
make sure of it (by using different VMs for different aspects of my work and
life, none with access to hardware acceleration, and by using proper web
filtering at both the browser and gateway levels). They have it easy with the
vast majority of the population, no doubt, but for now my actions get mixed
with everyone else's in such a way that Facebook would actually have to assign
a person to deanonymize me. Similarly for Google.

My ISP can quickly deanonymize me, but at this point in time they don't unless
they get a government request (I'd be surprised if they actually demand a
warrant). Switching to IPv6 would effectively deanonymize me constantly.

> And allow all other ports, hopefully? Peer-to-peer infrastructure is not
> going to run over HTTP and email. It's going to run over brand new protocols
> and ecosystems, many of which are sitting in a variety of research papers
> waiting to be implemented.

That's a great ideal. No, they don't allow all other ports, but what they
allow or block varies a lot by service class, area, and ISP, and you'd know
for sure only after you tried. (It used to change often, too, but I hear it's
converged; I don't live in the US anymore.)

> Which require, of course, somebody running a centralised server and willing
> to pay for the bandwidth of TURN. This outright prevents proper peer-to-peer
> infrastructure from happening - the people running these services need to
> pay for them somehow. Even working around it via e.g. Skype's "supernodes"
> is expensive in terms of developer cost and the amount of expertise needed
> to create such a system.

Supernodes were retired because they have not worked well in a few years. I
do not find "pay $1/month to provide service" too onerous; there are also
public ICE/STUN/TURN servers.

I find it disingenuous that you completely dismiss the societal cost
(privacy) and the engineering cost (the reason IPv6 is still not dominant
despite being "in the works" for 20 years now) in favor of some future
protocol that has not been shown useful over those 20 years ("research papers
waiting to be implemented"). There is enough IPv6 deployed to make the case
for the need, and the ONLY case that has been made is "we're running out of
IPv4", which is not wrong, but far from dire, as I can _still_ get 100 IPv4
addresses for $50, which is the same price I paid 10 years ago.

~~~
vertex-four
> My ISP can quickly deanonymize me, but at this point in time they don't
> unless they get a government request

[http://www.bbc.co.uk/news/technology-16721338](http://www.bbc.co.uk/news/technology-16721338)
- something I remember from recent-ish history. That data is, of course,
still passed to O2's partner organisations (which don't seem to actually be
listed anywhere), and you have no control over it.

> I find it disingenuous that you completely dismiss the societal cost
> (privacy)

I don't. I think there's other, significantly better solutions for it. I don't
think NAT provides reasonable privacy in and of itself.

> the engineering costs

In practice, the fact that it's been spread out over 20 years so far is
because that's how long it takes to get round to replacing an entire nation-
wide deployment of carrier-grade infrastructure at all unless there's other
reasons to do so. Smaller/regional ISPs have been on IPv6 for _years_ now,
partially because buying enough IPv4 space would be prohibitively expensive
and partially because there's no reason not to. The technical details of IPv6
support were resolved in pretty much all networking kit a long, long time ago
- it's a marginal cost at this point. The rest of it is primarily planning,
testing, and replacing ancient consumer routers.

> the ONLY case that has been made is "we're running out of IPv4" which is not
> wrong, but far from dire as I can still get 100 IPv4 addresses for $50,
> which is the same price I've paid for it 10 years ago

And yet I can't get a real IP address for most of the things I'd like to. My
ISP tries its hardest not to sell IPv4 addresses to anyone (it can't buy them
quickly enough, and buying them is a _huge_ resource drain - they lose money
on every address sold, which is then made back up in subscription costs), let
alone "home" users. On the other hand, it literally gives out static IPv6
ranges if you ask nicely.

~~~
beagle3
> That data is, of course, still passed to O2's partner organisations (which
> don't seem to actually be listed anywhere), and you have no control over it.

Verizon was also doing this for mobile customers in the US; perhaps they
still do. I vote with my wallet against these ISPs. You did have some control
over it, for example by using HTTPS. But IPv6 prefixes are so plentiful that
they are assigned one per customer, which makes correlating logs trivial.
Schemes like O2's or Verizon's still required some per-ISP effort; with IPv6
there is no such effort, and no need to inject headers. The prefix is your
undeletable cookie.
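To make the "undeletable cookie" point concrete: correlating two logs by /64 prefix is a few lines with Python's standard ipaddress module. The addresses below are made-up documentation addresses:

```python
import ipaddress

def prefix64(addr: str) -> str:
    """Reduce an IPv6 address to its /64 prefix -- the stable part an ISP
    typically assigns per customer, even when the suffix is randomized."""
    net = ipaddress.IPv6Network(addr + "/64", strict=False)
    return str(net)

# Two visits with different (privacy-extension) suffixes, same prefix:
site_a_log = "2001:db8:1234:5678:a1b2:c3d4:e5f6:1"
site_b_log = "2001:db8:1234:5678:9f8e:7d6c:5b4a:3920"
print(prefix64(site_a_log) == prefix64(site_b_log))  # True
```

No header injection, no cookie sharing: two sites that each see only an IP address can still agree they saw the same customer.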

> I don't. I think there's other, significantly better solutions for it. I
> don't think NAT provides reasonable privacy in and of itself.

It's not the NAT that affords privacy - it's the size of the IPv4 address
space: it has enough addresses to go around, but not so many that an ISP can
avoid reassigning them.

The NAT only affords as much privacy as suffix randomization (as has been
noted in this thread), which is "very little" to "none at all".

What are those other "significantly better" solutions you are aware of? I've
been looking for them, and found none.

> And yet I can't get a real IP address for most of the things I'd like to.

Likely because you are on a residential ISP and it's not their business model
(my ISP will gladly sell me one if I switch to the "business class" service,
which is exactly the same except that it costs about twice as much; I'd pay
more NOT to have a fixed IP address).

Get an Amazon free tier and tunnel through it. Or pay $2 for a lowly VPS to
tunnel through.

I don't think your wish to experiment is somehow more important than my wish
for privacy. Neither of us gets to actually vote (except with our wallets),
though.

~~~
vertex-four
> It's not the NAT that affords privacy - it's the size of the address space
> which does have enough IP addresses, but not so many that an ISP can avoid
> reassigning them.

Again, we live in a world where CGNAT is a thing. My own ISP puts all IPv4
connections through CGNAT by default unless you explicitly opt out. Many
smaller ISPs do the same - one of the new gigabit broadband services in my
country will not allocate IPv4 addresses to customers, instead going for CGNAT
and requiring an additional payment of £5 a month for an IPv4 address.

Mobile ISPs _all_ implement CGNAT on IPv4 at this point - if they attempted to
buy enough address space for every active mobile phone to have an IP, there'd
be a serious problem.

Every single user on each of these networks does not have a routable IPv4
address. You cannot make a direct connection to these devices. IPv6 solves
that problem.

> What are those other "significantly better" solutions you are aware of?

Tor. Future protocols should integrate HORNET or similar. If you really want a
NAT without onion routing, use a VPN that'll do it.

> Likely because you are on a residential ISP

That's literally the point here. There's a differentiation between a
"residential ISP" which can only ever consume and never participate as an
equal part of the network, and a "business ISP" which is _significantly_ more
expensive because it comes with an SLA that I don't need or want.

IPv6 allows me to be an equal part of the network at the same cost as my
current broadband service. I can run a website off my raspberry pi without
paying anyone a penny. I can SSH/remote desktop into my home machine without
having to create a "jump server". I can participate in peer-to-peer networks
without depending on the hope that some other people on the network have
machines that I can directly connect to, so that nobody else has to directly
connect to me.

~~~
beagle3
Ok, just to clear up the confusion (because not all posts in this thread use
the same terminology):

Home NAT, which is equivalent to suffix randomization, does NOT afford any
privacy.

Carrier Grade NAT, which would be equivalent to prefix randomization (if such
a thing existed) DOES afford some privacy, provided that care is taken not to
leak other data (through cookies, browser fingerprinting, stylometrics, etc).
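The suffix randomization mentioned above (roughly, RFC 4941 privacy extensions) can be sketched in a few lines; the /64 prefix here is a made-up documentation prefix standing in for an ISP assignment:

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """RFC 4941-style sketch: keep the ISP-assigned /64 prefix,
    randomize the 64-bit interface identifier on every call."""
    net = ipaddress.IPv6Network(prefix)
    return net.network_address + secrets.randbits(64)

# Prefix stays fixed; suffix changes every call.
addr = temporary_address("2001:db8:1234:5678::/64")
print(addr)
```

Which is exactly why suffix randomization alone doesn't help: the prefix, the part the ISP controls, never changes.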

I am not currently at home behind a CGNAT, because my ISP is apparently IPv4
rich, but they are planning to switch at some point. I am behind a CGNAT on my
mobile. I have no problem doing peer to peer on either using a STUN server I
run on a $2 VPS that comes with an IPV4 address. I also tunnel ssh to my home
through it when I want to.

The same ISP, if I request an IPv6 address, will give me the prefix it assigned me
the day I signed up. That's how they roll (They actually play it as a feature
- "you pay for a fixed IPv4, but you get a fixed IPv6 for free! without even
asking!")

IPv6 allows you to play "equal part" - it's routable, yes, but if everyone
were equal we would have mob rule by DDoS attacks far worse than we do now
(perhaps everyone is equal and we will have them... if that's the case, it
will stop being the case after a few high-profile attacks).

Also, 99.9% of people do not know how to secure their networks or devices.
If everything were routable, as you seem to desire, I think we'd be worse off.
As it is, local home NATs provide a bit of security (which no one designed -
we got lucky they were there because of address scarcity), and CGNATs/dynamic
IPv4 assignment provide a bit of privacy (which got lip service, but would not
have been as effective if not for address scarcity).

My threat model includes "$company can track my whereabouts online regardless
of what I do about it". Your threat model seems to be "I can't route to my
server without another hop". It's not that one is valid and one is invalid -
it's just that they are incompatible with each other.

~~~
vertex-four
> Also, 99.9% of the people do not know how to secure their networks or
> devices.

I take it that you've never heard of a firewall on your router. Mine ships
default deny. I assume yours does too.

~~~
zimpenfish
Doesn't help if the router is easily hackable -
[http://www.bbc.co.uk/news/technology-40382877](http://www.bbc.co.uk/news/technology-40382877)

> "Because the default wi-fi password formats are known, it's not difficult to
> crack them," said Mr Munro.
>
> Once an attacker has access to your wi-fi network, they can seek out further
> vulnerabilities.

