Not nonsense! The global IP broadcast is specified as 255.255.255.255 and is used by other protocols. The source IP address for the initial discovery is indeed 0.0.0.0, which is not intuitive, but the rest of the DHCP exchange is handled with real IP addresses like normal IP traffic. DHCP is very much an IP protocol (see DHCP relay for how it transits IP networks).
>Actually, RARP worked quite fine and did the same thing as bootp and DHCP while being much simpler, but we don't talk about that.
Ugh, come on! RARP doesn't provide you with a route to get out of the network or other extremely useful things like a DNS server.
>and DHCP, which is an IP packet but is really an ethernet protocol, and so on.
No, it's not an ethernet protocol. It's a layer-3 address assignment protocol that runs inside of IP, which is normally encapsulated in ethernet frames. You can have a remote DHCP server running any arbitrary L2 non-ethernet protocol and if it receives a relayed DHCP request it will reply with IP unicast perfectly fine with no ethernet involved.
> No, it's not an ethernet protocol
I mean, obviously it's not, by definition, but let me ask -- why does it have a hardware address in the protocol? Is it maybe because this protocol, like ARP, is a bridge between the layers, and thus this protocol shares more in common with ARP than with IP?
RARP did not allow a lot of the things that DHCP did, but DHCP can be done in an address-aware mode as well. (BOOTP was much more bare-bones than DHCP, and it's closer to contemporaneous with RARP; all BOOTP really adds is the ability to cross broadcast domains, which are themselves a fiction anyway, as the author points out.) If we let RARP do the IP assignment, then DHCP could very easily be used just to transmit configuration information to the newly assigned host, and that would let us cut out the hardware-addressing aspect of DHCP.
No, it's not. The source host putting a DHCP discover request on the wire doesn't have a real IP until the complete Discover, Offer, Request, Ack sequence finishes, which takes two round trips, and during that time the client's source IP is still 0.0.0.0. This is why DHCP clients use raw sockets.
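To make that bootstrap concrete, here's a minimal sketch of the DHCPDISCOVER payload a client broadcasts before it has an address (field values illustrative; real clients fill in more options). Note the ciaddr field stuck at 0.0.0.0 and the chaddr hardware-address field:

```python
import struct

def build_dhcp_discover(mac: bytes, xid: int = 0x12345678) -> bytes:
    """Build a minimal DHCPDISCOVER payload (the UDP payload sent
    from 0.0.0.0:68 to 255.255.255.255:67 before the client has an IP)."""
    pkt = struct.pack(
        "!BBBBIHH4s4s4s4s16s64s128s",
        1,          # op: BOOTREQUEST
        1,          # htype: ethernet
        6,          # hlen: MAC address length
        0,          # hops
        xid,        # transaction id
        0,          # secs
        0x8000,     # flags: broadcast bit set
        bytes(4),   # ciaddr: 0.0.0.0 -- client has no IP yet
        bytes(4),   # yiaddr
        bytes(4),   # siaddr
        bytes(4),   # giaddr (filled in by a DHCP relay, if any)
        mac.ljust(16, b"\x00"),  # chaddr: the hardware-address field
        bytes(64),  # sname
        bytes(128), # file
    )
    pkt += bytes([0x63, 0x82, 0x53, 0x63])  # DHCP magic cookie
    pkt += bytes([53, 1, 1])                # option 53: message type = DISCOVER
    pkt += bytes([255])                     # end option
    return pkt

frame = build_dhcp_discover(bytes.fromhex("aabbccddeeff"))
```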
>"DHCP is very much an IP protocol (see DHCP relay for how it transits IP networks)."
I would say that DHCP is very much a layer-7 protocol, as it deals with leases and renewals etc. It uses IP, yes, because it runs over UDP and UDP runs over IP, but I don't think that makes it an IP protocol.
This reminds me fondly of "frottle", a project from the Perth WAFreeNet, a city-wide wireless network back in the early 2000s. The "hidden node" problem with WiFi over these long distances is that each node can't carrier-sense (CSMA/CA) to avoid talking over the other nodes, because it can't hear them, only the central access point.
There were costly commercial solutions (these days there are less costly ones; for example, many Ubiquiti products implement 'AirMax'), so instead they built Frottle, which would hold packets in a user-space iptables QUEUE handler and transmit them only when it received its "token", its turn, from the central AP over a TCP connection. The quote doesn't seem to be on the webpage anymore, but it was something like "a layer 3 & 4 solution to a layer-2 problem". A great and free hack that worked well :)
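As a toy model (not Frottle's actual code), the hold-and-release scheme looks roughly like this: nodes queue packets instead of transmitting, and only drain the queue when the central AP hands them the token:

```python
from collections import deque

class Node:
    def __init__(self, name):
        self.name = name
        self.queue = deque()       # packets held until we get the token

    def send(self, packet):
        self.queue.append(packet)  # queue instead of transmitting immediately

    def on_token(self, budget=2):
        """Transmit up to `budget` queued packets, then yield the token."""
        sent = []
        while self.queue and len(sent) < budget:
            sent.append(self.queue.popleft())
        return sent

# The central AP polls nodes round-robin, so hidden nodes never talk over
# each other even though they cannot hear one another.
nodes = [Node("a"), Node("b")]
nodes[0].send("a1"); nodes[0].send("a2"); nodes[1].send("b1")
airtime = []
for node in nodes:                 # one token rotation
    airtime.extend(node.on_token())
# airtime is now ["a1", "a2", "b1"]: a collision-free ordering
```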
When modem and router are separate, the modem only provides an MPoA tunnel to provide Ethernet access to the DSL link, while the router connects to the AC via PPPoE over said tunnel.
But to the original point there is no reason you could not run DHCP over a T1 directly..... no Ethernet at all involved (HDLC or something at the data-link).
This is completely wrong, it's not pointless.
First, this can be used to easily swap out routers in a network without reconfiguring any clients or even incurring downtime. Without the intermediary gateway IP representation, this would mean you would either have to spoof the MAC on the second router or reconfigure all of the clients to point to the new gateway.
Second, ethernet addresses are a layer-2 construct and IP routes are a layer 3 construct. Your default gateway is a layer-3 route to 0.0.0.0/0. There are protocols for exchanging layer-3 routes like BGP/RIP/etc that should not have to know anything about the layer-2 addressing scheme to provide the next-hop address.
Third, routers still need to have an IP address on the subnet anyway to originate ICMP messages (e.g. TTL expired, MTU exceeded, etc).
Fourth, ARP is still necessary even for the router itself to know how to take incoming IP traffic from the outside and actually forward it to the appropriate device on the local network. Otherwise you would have to statically configure a mapping of local IP addresses to MAC addresses on the router.
So ARP is critical for separation of concerns between L2 and L3. We don't live in an ethernet-only world.
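A tiny illustration of the first point, with made-up addresses: the clients' layer-3 configuration points at the gateway IP, and only the ARP mapping has to change when the router is swapped:

```python
# Toy model: clients are configured with the gateway's IP, and a separate
# ARP table maps that IP to whatever MAC currently answers for it.
arp_table = {"192.168.1.1": "aa:aa:aa:aa:aa:01"}  # old router's MAC
client_gateway = "192.168.1.1"                    # layer-3 config, never changes

def next_hop_mac(ip):
    """Resolve the layer-2 address for a layer-3 next hop (what ARP does)."""
    return arp_table[ip]

old_mac = next_hop_mac(client_gateway)
# Swap the router: only the L2 mapping changes; every client config survives.
arp_table["192.168.1.1"] = "bb:bb:bb:bb:bb:02"
new_mac = next_hop_mac(client_gateway)
```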
>excessive ARP starts becoming one of your biggest nightmares. It's especially bad on wifi.
Broadcast can become a nightmare. Excessive ARP is a drop in the bucket compared to other discovery crap that computers spew onto networks.
The pattern of most computers now is to communicate with the external world (from the LAN perspective) and not much else. So on a network of 1000 computers (an already excessively large broadcast domain), your ARP traffic is going to be a couple of thousand ARP messages every few hours. If this is taking down your WiFi network, you have much bigger problems, considering all of those together amount to about one modern webpage load of traffic.
Pretty reliable rule of software is: if you think it's pointless you probably don't understand it well enough.
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
Chesterton's Fence neatly saves the utterly pointless fences in our lives, regardless of the damage they can cause.
Applied properly, the principle of Chesterton's Fence should provide you with the impetus to observe and learn about the subject. In software, that could involve creating/improving tests, diagramming method/API invocations, monitoring network traffic, etc. As a result of your observations, you should understand the subject deeply enough to determine whether it can be removed safely. If it can't be safely removed, you now have documentation justifying its existence (which may, in some cases, form the basis for a plan to migrate, deprecate, and remove).
If you don't immediately see its use, you should do some more thinking/archeology.
IPv6 designs are only similar to the first half of that statement. If it's in IPv6, it's because someone wanted it there. The people designing IPv6 had almost 2 decades of operational experience running IPv4 to draw from. Things were intentional.
That being said: in this specific case you're right.
The hard part is differentiating the specific cases where the seemingly pointless solution is actually the best way to solve a problem, from the cases where the problem could be solved a different way that eliminates the need for the seemingly pointless solution, from the cases where the solution is actually just a pointless flourish.
But isn't this what CARP and HSRP essentially do anyway? Seems like they already found a way around that.
One amusing possibility would be to do this at the HTTPS layer. With HTTPS Everywhere, most HTTP connections now have a unique connection ID at the crypto layer - the session key. If you could move an HTTP connection from one IP address to another on the fly, it could be kept alive over moves. HTTPS already protects against MITM attacks, and if the transfer is botched or intercepted, that will break the connection.
I'm not recommending this, but it meets many of his criteria.
The trouble with low-level connection IDs that don't force routing is forgery. You can fake a source IP address, but that won't get you the reply traffic, so this is useful only for denial of service attacks. If you have connection IDs, you need to secure them somehow against replication, playback, etc.
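One common way to secure such IDs against forgery, sketched here with illustrative parameters, is to bind each connection ID to a server-side secret with an HMAC, so an off-path attacker who guesses a session number still can't mint a valid ID:

```python
import hmac, hashlib, os

SECRET = os.urandom(32)  # per-server key; in practice rotated and managed

def issue_connection_id(session: bytes) -> bytes:
    """Connection ID = session || MAC tag over the session bytes."""
    tag = hmac.new(SECRET, session, hashlib.sha256).digest()[:8]
    return session + tag

def verify_connection_id(cid: bytes) -> bool:
    """Accept only IDs whose tag matches; constant-time compare vs. timing leaks."""
    session, tag = cid[:-8], cid[-8:]
    expect = hmac.new(SECRET, session, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expect)

cid = issue_connection_id(b"sess-0001")
tampered = cid[:-1] + bytes([cid[-1] ^ 1])  # flip one bit of the tag
```

This handles forgery but not replay; a real design would also fold in sequence numbers or timestamps.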
As a network-ignoramus, who likes cryptography, I’ve long dreamt of a networking protocol where endpoints are defined, primarily, by a public key. All messages would be encrypted with the destination public key, and signed by the source private key.
When a destination node receives a packet from a neighboring node, an ACK would constitute the destination node’s signature over the received packet, thus making ACKs provable and portable (“this node has already received that packet, here’s the proof”).
Packet source addresses would no longer be fake-able, as faking one would constitute breaking asymmetric cryptography.
The protocol would have no concept of a “connection”.
Routing would be left out of this protocol completely, and networks would use whichever routing protocols they find most efficient.
I wouldn’t be surprised if there were countless issues with a protocol like this, but something about it just seems so elegant to me that I haven’t stopped considering it yet.
Again, I’m fairly network-ignorant, but as far as I can see this constitutes an inversion of the current architecture: routers need to be aware of signatures, such that they don’t deliver an invalid packet (bad signature) to a destination. So the logical layer would be the lowest one, and a router that delivers a packet with a bad signature would be considered defunct.
The funny thing about the current architecture, in my view, is that the correct destination of an HTTP TLS packet is hidden (encrypted) inside the application data (“Host: google.com”). So routers rely on IP addresses to figure out where the packet needs to go, while the logical destination is only visible to the receiver once it decrypts the packet.
The idea would be moving this information out of the application data, making it cryptographically sound (public key is destination, not domain name), and making routers aware of it such that they know whether a packet was delivered to the correct destination by whether it responds with a valid signature.
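A minimal sketch of the address-derivation half of this idea (the truncation length and key bytes are made up, and real designs like CGA or cjdns differ in detail): the address is a hash of the public key, so claiming it requires holding the matching private key:

```python
import hashlib

def node_address(pubkey: bytes) -> str:
    """Derive a network address from a public key. An attacker can't pick a
    key that hashes to someone else's address without breaking
    second-preimage resistance of SHA-256."""
    digest = hashlib.sha256(pubkey).digest()
    return digest[:8].hex()  # 64-bit address, purely illustrative

alice_pub = b"alice-public-key-bytes"  # stand-in for real key material
addr = node_address(alice_pub)
```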
IP addresses form a hierarchy, which makes it doable.
I fundamentally believe you should run your encryption over the network, and not try to bake it in.
RPKI and BGPSEC do need to happen but we're an eon away from it being a reality.
Incorrect; you might want to read about TLS SNI. (Thought exercise: how does the server pass your packets to the correct vhost before decryption?)
You might want to look up DTLS (TLS over datagram transports like UDP) and then read some of the dialogue about why it's impractical on the public internet.
Consider further that by moving your presentation layer logic into the network layer, every time you want to introduce a new cipher you'll need to upgrade every network device on the internet. Think how bad the export-grade crypto problem has been, then multiply by the momentum of Tier-1 ISP install base. Instead of making the network less important, you're handcuffing yourself to Verizon.
A /session/ should be able to be serviced by multiple routes, maybe with a preference (use the cheaper ones first, the faster ones first, etc) or maybe over time (in the case of mobile).
Having connectivity based at the session level and having a single server be 'multi-homed' (many addresses, each conforming to a different outbound link) would peel complexity back from the lower layers and allow them to focus on being simple, robust, and easy to diagnose.
It would also move control and management back up to higher layers, and as recently shown with a description of Google's core network devices, back to the end points where a larger and more complete view can be used to determine the best overall solution.
Mostly you're right though.
I lived the IPV6 debate, I went to IETF meetings, I worked on network services that would be affected one way or the other, I debated with others the various ways to "improve" or "replace" V4 to get a better system. And all through that time, while everyone felt there would be billions and billions of IP addresses, I was not aware of any discussion of dynamic routing such that a network endpoint could be found anywhere in the world without configuration. For everyone at the time felt network infrastructure was fixed, and network clients moved.
In that way a network client would move from one network to another, and then in that new network it would have to establish itself and then advertise somehow its new status. Everyone agreed that there would be some disruption during this change of status but things like TCP were designed to tolerate lossy networks. The network would adapt.
That presupposes a lot of little networks, with their own sets of rules. Except that isn't the way cellular carriers think: they have one network, and your relationship to it rarely changes. If you aren't on their network you are 'roaming', and there are fixed rules in place for that. So they take on a lot of tracking and management in exchange for ease of use for the customer. And it enables some annoying things like 'header injection' in Verizon's case.
Dumb networks versus smart networks. AT&T's original switched-network-around-the-world vision versus Bob Metcalfe's self-organizing collection of independent nodes following a small set of rules. Architecturally it's a debate that has been going on for a long, long time.
The opposite trend is true in large data centers. L3 fabrics where everything is routed have become extremely popular because BGP (or custom SDN setups) can be used to migrate IPs and you get to utilize multiple paths (rather than the single path offered by STP convergence).
>It is literally and has always been the software-defined network you use for interconnecting networks that have gotten too big. But the problem is, it was always too hard to hardware accelerate, and anyway, it didn't get hardware accelerated, and configuring DHCP really is a huge pain, so network operators just learned how to bridge bigger and bigger things.
IP forwarding (longest prefix match) is more complicated than mac forwarding yes, but it has been done in hardware (ASICs, typically NPUs today) for a long time now.
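For illustration, longest-prefix match boils down to this (toy routing table, made-up interface names); the hardware does the same lookup in TCAM or tries at line rate:

```python
import ipaddress

# Toy FIB: prefix -> outgoing interface (names are made up)
routes = {
    "0.0.0.0/0":   "ge-0/0/0",  # default route
    "10.0.0.0/8":  "ge-0/0/1",
    "10.1.0.0/16": "ge-0/0/2",
}

def lookup(dst: str) -> str:
    """Pick the most specific matching prefix for a destination address."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (net for net in map(ipaddress.ip_network, routes) if addr in net),
        key=lambda net: net.prefixlen,
    )
    return routes[str(best)]
```

A MAC forwarding table, by contrast, is a flat exact-match lookup; there is nothing to aggregate.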
Operators (I assume ISPs) do not build large bridged networks as they need their networks to scale as they grow, or they will hit a breaking point where their network collapses. ISPs typically use centralised DHCP servers (as opposed to configuring their access routers) and configure their routers to use DHCP relay. DHCP server configuration is easily automated by just reading your IPAM data; it's a non-issue.
You find this in the software world as well. Something about the Java culture seems especially fascinated with multiple layers of abstraction.
Edit: Ok, some factions of the culture. "Convenient proxy factory bean superclass for proxy factory beans that create only singletons"
When the requirements appeared to change, I was all like, "Are you sure about that? That's great news because that means we can remove these layers from my solution and simplify the code a lot."
Apparently they had never heard about someone wanting to get rid of their own layers, because they just sat there in silence trying to come up with reasons to keep the (now) unnecessary layers.
In the end, I think they agreed that the new requirements must be incorrect, and the old ones still cover both cases. Imagine that! Throwing away new requirements solely to keep layers.
Phil Freeman has said that there are entire libraries on npm that just implement traverse.
Actually, no. You can only set an IP address with RARP, not even a netmask (RARP comes from pre-CIDR age) or other important stuff like default gateway, DNS server, etc like you can with DHCP.
Bollocks. The abstraction allowed by using an IP address instead of a MAC address is essential, considering that IP addresses are dynamic (even when statically configured, devices can and do get replaced) and MAC addresses are set at the factory. Can you imagine updating the routing table of every device in your network because you had to replace a core router and the MAC address was different? It’s the equivalent of publishing your website on an IP address instead of a DNS hostname...
* yes, I know MAC addresses can be configured by software in many devices, but that’s even more of a hack than using arp to determine a MAC address.
Correct me if wrong, but QUIC was inspired by djb's CurveCP?
Would you rather have djb implement your trusted UDP congestion-controlled overlay or a company with 70,000+ employees who are paid from the sale of online ads?
CurveCP's zero-padding (curvecp.org/messages.html) was designed years before ringroadbug.com, explicitly to stop that type of attack.
Leaking Sensitive Data in Security Protocols
What is Ring-Road?
The Ring-Road Bug is a serious vulnerability in security protocols [e.g., QUIC but not CurveCP] that leaks the length of passwords, allowing attackers to bypass user authentication. The Internet Engineering Task Force working group for HTTP/2, led by Google, is working to create a patch to protect security protocols vulnerable to Ring-Road.
Researchers at Purdue University identified a major security issue with Google's QUIC protocol (Quick UDP Internet Connections, pronounced "quick").
Mobile IP can still work with the current infrastructure -- https://en.wikipedia.org/wiki/Mobile_IP
This proposal was basically a service which would host a static IP for you (similar to the LTE structure but with IP underneath instead of L2), and forward to whatever your "real" IP was using IP-in-IP encapsulation.
As the author states, layers are only ever added :)
So... basically exactly what the suggested solution would look like, if it was modified to work with all protocols and not just TCP/UDP.
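The IP-in-IP encapsulation mentioned above is just an outer IPv4 header prepended to the original packet. A rough sketch in the style of RFC 2003 (checksum left zero, addresses made up):

```python
import struct
import socket

def ipip_encapsulate(inner: bytes, agent_ip: str, care_of_ip: str) -> bytes:
    """Wrap an IP packet destined for the mobile node's permanent ("home")
    address in an outer IP header aimed at its current ("care-of") address."""
    total_len = 20 + len(inner)
    outer = struct.pack(
        "!BBHHHBBH4s4s",
        0x45,       # version 4, IHL 5 (20-byte header)
        0,          # TOS
        total_len,  # total length: outer header + inner packet
        0, 0,       # identification, flags/fragment offset
        64,         # TTL
        4,          # protocol 4 = IP-in-IP (RFC 2003)
        0,          # checksum left 0 in this sketch
        socket.inet_aton(agent_ip),    # src: home agent (illustrative)
        socket.inet_aton(care_of_ip),  # dst: where the node actually is now
    )
    return outer + inner

inner = bytes(20)  # stand-in for the original IPv4 packet
pkt = ipip_encapsulate(inner, "198.51.100.1", "203.0.113.7")
```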
Very rarely does a network operator use bridging to avoid configuring DHCP. All modern protocols are built on IP, so you still need an addressing scheme, and most people want the Internet, so 169.254 link-local auto-addressing is out. So even in big bridged networks, you still have a DHCP server. In fact, you configure less DHCP in one big bridged network than you would for a ton of tiny networks.
The advantage to big bridging networks is that you have to setup very little routing (just the router to get in and out). If you routed between every port on the network, there would be an excessive amount of configuration involved to setup prefixes on every single interface.
Lets just throw away the concept of 'addresses' for authentication and actually use a cryptographic authentication identifier of somekind, combined with some mux iteration ID.
It implements a virtual ethernet layer using cryptographic identities underneath.
Here's the relevant section on the address computation from the manual: https://www.zerotier.com/manual.shtml#2_1_2
ZT is designed to just be an end to end encrypted virtual LAN for anything you want to dump across it.
There's also a library implementation which effectively gives every app its own cryptographically-derived address (if that's what you're into).
What specifically do you mean by that, and are you sure that it still applies to the IETF version?
I've not read about the IETF version; I'll look into it.
Mentioning ideas like that at work gets you odd looks about how it'd be impossible to configure a firewall at that point.
But keep going further: you end up with a 128-bit CPU where every byte is IP-addressable. You'd need security to block random outsiders from reading your memory, but it would be capable of transparently running various parts remotely.
That could be extended to the server side if we used something like SRV records instead of defaulting to port 80/443.
Not for any technical reasons, though--I'm just lazy, and specifying port numbers is annoying.
JdeBP's post provides a good answer about the SRV records. The short version is a single DNS packet can contain both A/AAAA and SRV responses.
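For reference, SRV selection per RFC 2782 is roughly: lowest priority wins, then weight breaks ties among equals. A simplified, deterministic sketch (real implementations pick randomly in proportion to weight; record values here are made up):

```python
def pick_srv(records):
    """Choose a target from SRV records: lowest priority wins; among equals,
    this sketch deterministically prefers the largest weight (RFC 2782 says
    to pick randomly in proportion to weight)."""
    lowest = min(prio for prio, _, _, _ in records)
    candidates = [r for r in records if r[0] == lowest]
    _, _, port, target = max(candidates, key=lambda r: r[1])
    return target, port

records = [
    # (priority, weight, port, target), e.g. for _http._tcp.example.com
    (10, 60, 8080, "a.example.com"),
    (10, 20, 8080, "b.example.com"),
    (20,  0,   80, "backup.example.com"),
]
target, port = pick_srv(records)
```

This is what lets the port live in DNS instead of being hard-coded to 80/443.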
This is already a thing: https://en.wikipedia.org/wiki/Remote_direct_memory_access
The folks at IETF meetings are doing a wonderful job trying to keep existing tech working. That's why you always extend old standards and rarely deprecate anything. Just look at BGP, for example.
We, the application software programmers on layers 5 through 7 with the "move fast and break things" attitude, could not ever have designed anything like the internet and keep it running as long as our current one has been.
Might be an interesting weekend project sometime.
That does imply that there should be a way (possibly host to host, possibly built into some kind of service-resolution system) of looking up a 'name entry' and 'service type'.
Why is that? Wouldn't those addresses be hierarchical?
Mobile addresses for applications would make the usual case of "hey, you migrate service X to that other switch, and nothing works anymore" basically disappear.
Anyone know what's going on with those protocols?
Didn't Apple use something called Multipath TCP for mobile connections? Is that vaguely related to mobile IP?
+1 highly recommend even if all you want is a few chuckles.
It's hard to find a copy of that book but oh, man, if you know the stack and you lived through the ISO/OSI proposals, it's so so good.
I got lucky and read that book after I had ported Lachman's streams-based networking stack onto the ETA-10 and SCO's Unix. I didn't know what I was doing but I had to get the job done, I was just dealing with shorts that weren't 16 bits, stuff like that. So I was a grunt thrashing around. At some point I wanted to know more about what I was doing; I still have a notebook where I wrote down every packet format, all the IP stuff, TCP stuff, UDP stuff, ARP stuff, etc. Shout out to Masscomp because I was a sys admin on their machines in college and they had a great intro to networking that formed the basis of my limited understanding of networking.
It was Padlipsky's book that brought the whole thing into focus. I dunno if you guys have had that clarity problem, I had it again when I went to Sun and was working on the kernel, had no idea what I was doing, thrashed about, and slowly, slowly, the architecture of what Sun had done came into focus. It was amazing to me when I got it. It took a lot of time just looking, reading the code, trying to see the picture. Padlipsky's book made me get networking long before I came to Sun, I was a n00b at everything and he made me get it. And it was funny as heck.
"Do you want protocols that look nice or work nice?"
"If you know what you are doing 3 layers are enough, if you don't 7 aren't"
His book was full of that stuff and it made you get the stack. If you are into the network stack, and especially if you are trying to figure it out, get that book.
In fact, if you contact me and say "that's me but I'm broke", I'll see if I can find a copy on ebay or some place and send it to you.
That book rocks.
I don't care about the upvotes, I care that you go read. And read something that will help you see.
Maybe this thread is dead and I need to do a top level post about Mike.
Please go read the book.
But earlier in that very article, he already pointed out that every device already has a globally unique identifier used in layer 2 routing ... the ethernet MAC address.
Would someone please explain to me why we can't use MAC addresses as globally unique device IDs?
(Is MAC spoofing the issue?)
Another point against using MACs which I want to point out is that they don't make much sense if you have a service running on multiple hosts. I mean, you could introduce "virtual MACs", but it seems better to keep the idea of "service ID" separate from "device ID". Session IDs solve the multiple hosts problem too, by completely avoiding it.
"The problem with ethernet addresses is they're assigned sequentially at the factory, so they can't be hierarchical. That means the "bridging table" is not as nice as a modern IP routing table, which can talk about the route for a whole subnet at a time."
They are supposed to be unique, but in the real world they are not.
Edit: Its official entry at the IETF: https://datatracker.ietf.org/doc/draft-johnson-imhp/
The nice part about this solution is that it doesn't require making changes to the individual nodes on the network (e.g. cable modem) in the way that other solutions have required (small and fair queues).
It also appears to be able to avoid the usual packet-drops of regular TCP congestion control.
Routing still adds complexity, and I see no way for it not to. He talks about each switch being a router, for example; you then need a way to determine where a particular subnet you're trying to reach lies, which means a routing table (akin to a MAC address table, but for prefixes), and some way to ferret out which interface a particular device lies behind (akin to ARP, but searching for a prefix to route to). In the end, you end up with the same primitives and complexity, but you've just moved it up a layer in the stack, which does nothing to get rid of complexity; it just adds to it. IP networks are often a tree in organizations; ethernet is often deployed with different topologies, relying on spanning tree to give you more redundancy without extra configuration (at the expense of some delay on re-convergence).
I think also that so long as we're dragging along the legacies of the disparate layer one technologies (which in any case, would not and could not go away) - we're kinda stuck with what we have.
Mobile QUIC will give Google even more insight about how users move from network to network..
Mobile IP didn't fail because the latency was too bad. It failed because there was no financially viable use case. The mobile providers wanted a purely network-based roaming solution to control the billing model. WiFi vendors couldn't count on OS support of Mobile IP, so they went for L2 solutions. Because there were L2 solutions, the OS vendors didn't bother to implement Mobile IP.
It has a more visual explanation of the OSI model and how it relates to routing and different kinds of hardware. I also tried to explain some of the interesting problems in actually building out a network in the second half of my talk.
If anyone is just trying to learn the basics of networking, I'd also strongly recommend the Juniper Networking Fundamentals online class; it's free at https://learningportal.juniper.net/juniper/user_activity_inf... or you can find videos of it on YouTube.
This simple design when planning and rolling it out would have meant incrementally updating the networking stack to also support v6. Now it turns out v4 and v6 are completely different, and no one has a big enough reason to make the change until everyone else makes the change. Hard chicken-egg problem.
You can try hacks like NAT (like you probably do in your home IPv4 network), which breaks/stops any peer-to-peer protocol. The IPv6 version is called DNS64/NAT64, and it breaks even more things, e.g. DNSSEC, because it not only requires network address translation (lying about your IP) but also rewriting DNS records (lying about signed DNS records).
The only sane way forward is get rid of IPv4 as fast as possible (even if 'fast' means a decade or two).
That’s demonstrably untrue. Sure p2p protocols and NAT devices need to account for NAT, but to imply that they’re impossible to use with NAT is just silly. Many p2p protocols work via NAT all the time...
Or could step back and see that this a terminally ill protocol on life support.
To be fair, I don't think the authors of v6 at the time realized how much friction an alternative IP stack would cause.
(Because it makes things even more complicated, now you'll have not two incompatible but about three things.)
And of course the problem is/was of economics. No incentive to change, v4 is good enough for now, for the incumbents. And even if large companies with new users were running out of v4, they can afford trading netblocks and/or carrier grade NAT devices, both are cheaper than telling others to upgrade (and you'd lose customers/subscribers otherwise).
The sin was committed when IPv4 was made and not initially designed to allow for variable / expanded address space -- it is not IPv6's fault.
Adding an IP Option to IPv4 packets that could carry extra address bits was not an option either -- IP options aren't preserved much at all on the Internet. Furthermore, even if most routers didn't drop IP options, adding "v6" address space via IP option in a packet that old/v4-only devices would nevertheless attempt to parse would have been hell operationally.
granted, IPv6 has lots of complexity/flaws/idiosyncrasies/weirdnesses (multicast, mobility, slaac, ndp, prettyprinting / the colons, extension headers, etc.) that mostly only look good through the rose-tinted glasses of the 90s and significantly slowed down deployment -- and in the end mostly ended up as "difference for difference's sake", but that the transition is difficult is also IPv4's fault for not having a robust address space expansion mechanism.
There is no mapping back from IPv6 to IPv4 (so IPv6 is backwards compatible, but not forwards compatible), but it can't be because the whole point is to have a larger address space.
Hence IPv6 is definitely not backward compatible.
Then there are switches and routers that make assumptions about incoming packets, so we can't do strange bit hacks with IPv4 packets. http://seclists.org/nanog/2016/Dec/29
Switching to a new protocol is the only real choice at this point.
It was also to save sanity and avoiding having to rip apart every office building for installing hundreds of cables.
The IEEE hardware and IETF software guys have been busy adding complexity to the networks, with so many legacy protocols (when everyone just uses TCP/IP) and extra ports (when everything happens on port 80 - seriously, even email is now on cloud services).
I can't get LTE because of political problems. So I just gave up trying to be online, and started caching everything possible.
Meanwhile, storage is getting larger capacity, smaller size, and cheaper. I've got a 512GB SD card in my pocket all the time, with a backup of my laptop in case my bag gets stolen.
My phone does everything offline if possible. Offline MP3 music. Offline maps. Wikipedia. StackOverflow. Hacker News. FML. UrbanDictionary. XKCD. The few YouTube videos I actually want to see again.
The only thing I need Internet for is communication. To send a message, I walk around looking for open WiFi and type my message to them on Facebook Messenger. If they need to reach me urgently, they can just use my phone number (which keeps changing every 6 months for the same political problems).
What if access points had large caches with mirrors of the content people want? Instead of asking Google's server in the US to send me a map tile, what if I could just get it from the local WiFi AP's web server? It would be much faster, and save so much trouble with networking.
Sure, there are some things that people need the network for (e.g. new content, copyrighted material). But so much else is free of licenses, and would be possible to mirror locally everywhere.
The slow adoption is pushed by entities that control a big number of IPv4 addresses. I have no idea how long this situation can sustain itself.
Perhaps this could be mitigated by adding a timeout that keeps sockets alive for a while in case the destination shows up somewhere else.
If IPv6 eventually becomes widespread, I hope it comes with ISPs that will let you replace your prefix, and phones/hardware that will randomize your suffix - otherwise, the internet becomes completely pseudonymous.
IPv6 is the only way we're ever going to create working peer-to-peer infrastructure. If you intend to keep anonymous, integrate Tor or HORNET into your protocols.
> And much of the world, especially on mobile, doesn't even have an IPv4 address of their own - they're NATed along with their ISP's other subscribers through a handful of IPs for a whole ISP. An IPv6 address is the only address they actually have.
And that's a great thing, if you care about privacy (and I do). And yet, peer to peer on these things works reasonably well using ICE, STUN, TURN and friends, and if you want a public IPV4 address, the going rate wherever I look is about $1/month.
> IPv6 is the only way we're ever going to create working peer-to-peer infrastructure
We already have less-than-perfect but still working peer-to-peer; the lack of immediately direct connections is not, I believe, what's stopping "working peer to peer" from happening. The vast majority of ISPs in the US, for example, block incoming port 80 and outgoing port 25, and for good reason: most users cannot be trusted to run an addressable peer. So with IPv6 it would be technically easier to do p2p, but practically the same, as it will be firewalled by the ISPs.
And the price for this improvement will be utter, complete traceability of your actions across every website. Right now, Google and Facebook can only (easily) exchange info about you if you gave enough of it to them, or if they decide to share cookies (which you can see and stop). On IPv6, it would be enough for them (and Wikipedia, and ISPs, and everyone else) to just trade access logs.
Well, yes. It works so long as you're connecting to someone else who has an IPv6 address; you don't really care about it unless it's broken.
> And that's a great thing, if you care about privacy (and I do).
Again, if you want to be anonymous on the internet, use Tor. It accomplishes what you're looking for in a NAT to a much better degree. If you want to keep other users' privacy, encourage the use of onion routing in new protocols, and encourage the use of Tor to access the legacy internet.
> And yet, peer to peer on these things works reasonably well using ICE, STUN, TURN and friends
Which require, of course, somebody running a centralised server and willing to pay for the bandwidth of TURN. This outright prevents proper peer-to-peer infrastructure from happening - the people running these services need to pay for them somehow. Even working around it via e.g. Skype's "supernodes" is expensive in terms of developer cost and the amount of expertise needed to create such a system.
> the vast majority of the ISPs in the US, for example, block incoming port 80 and outgoing port 25
And allow all other ports, hopefully? Peer-to-peer infrastructure is not going to run over HTTP and email. It's going to run over brand new protocols and ecosystems, many of which are sitting in a variety of research papers waiting to be implemented.
FTR, they block incoming port 80 because they want to maintain an artificial differential between "consumer" and "business", not any security rationale - most of the rest of the world doesn't do that, they just have a firewall blocking everything incoming on the ISP-provided router by default, and you can unfirewall port 80 if you want to. Blocking outgoing port 25, otoh, is done because SMTP is a terrible protocol that by default assumes every node on the internet is trustworthy, and ISPs were roped in to ensure nobody ever had to change it.
Actually, Facebook has great trouble tracking me between sites, because I make sure that they have that trouble (by using different VMs for different aspects of my work and life, none with access to hardware acceleration, and by using proper web filtering at both the browser and gateway level). They have it easy with the vast majority of the population, no doubt, but for now my actions get mixed with everyone else's in such a way that Facebook would actually have to assign a person to deanonymize me. Similarly Google.
My ISP can quickly deanonymize me, but at this point in time they don't unless they get a government request (I'd be surprised if they actually demand a warrant). Switching to IPv6 would effectively deanonymize me constantly.
> And allow all other ports, hopefully? Peer-to-peer infrastructure is not going to run over HTTP and email. It's going to run over brand new protocols and ecosystems, many of which are sitting in a variety of research papers waiting to be implemented.
That's a great ideal. No, they don't allow all other ports, but what they allow or block varies a lot by service class, area, and ISP, and you'd know for sure only after you tried (it also used to change often, but I hear it's converged; I'm not living in the US anymore).
> Which require, of course, somebody running a centralised server and willing to pay for the bandwidth of TURN. This outright prevents proper peer-to-peer infrastructure from happening - the people running these services need to pay for them somehow. Even working around it via e.g. Skype's "supernodes" is expensive in terms of developer cost and the amount of expertise needed to create such a system.
Supernodes were retired because they do not work well anymore (haven't in a few years). I do not find "pay $1/month to provide service" too onerous; there are also public ICE/STUN/TURN.
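For a sense of how small the STUN half of this actually is, here's a hedged sketch of an RFC 5389 Binding Request and the XOR-MAPPED-ADDRESS decoding used on the reply. It runs entirely offline against a fabricated server response; the 203.0.113.7:54321 NAT mapping is invented for the demo (203.0.113.0/24 is a documentation range).

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def build_binding_request() -> bytes:
    """STUN Binding Request: type 0x0001, zero-length body, random txn id."""
    txn_id = os.urandom(12)
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

def parse_xor_mapped_address(attr_value: bytes) -> tuple[str, int]:
    """Decode an IPv4 XOR-MAPPED-ADDRESS attribute value (family 0x01)."""
    _, family, xport = struct.unpack("!BBH", attr_value[:4])
    assert family == 0x01, "IPv4 only in this sketch"
    port = xport ^ (MAGIC_COOKIE >> 16)
    xaddr = struct.unpack("!I", attr_value[4:8])[0]
    return socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE)), port

# Offline demo: fabricate the attribute a server would send back for a
# client whose NAT mapped it to 203.0.113.7:54321.
ip_int = struct.unpack("!I", socket.inet_aton("203.0.113.7"))[0]
attr = struct.pack("!BBH", 0, 0x01, 54321 ^ (MAGIC_COOKIE >> 16))
attr += struct.pack("!I", ip_int ^ MAGIC_COOKIE)

req = build_binding_request()
print(len(req))                        # 20
print(parse_xor_mapped_address(attr))  # ('203.0.113.7', 54321)
```

The request fits in one UDP datagram, which is why a $2 VPS can serve STUN for many clients; it's TURN (relaying the actual media) that costs real bandwidth.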
I find it disingenuous that you completely dismiss the societal cost (privacy) and the engineering cost (the reason IPv6 is still not dominant despite being "in the works" for 20 years now) in favor of some future protocol that has not been shown useful over those 20 years ("research papers waiting to be implemented"). There is enough IPv6 deployed to make the case if there were a need, and the ONLY case that has been made is "we're running out of IPv4" - which is not wrong, but far from dire, as I can still get 100 IPv4 addresses for $50, the same price I paid 10 years ago.
http://www.bbc.co.uk/news/technology-16721338 - something I remember from recent-ish history. That data is, of course, still passed to O2's partner organisations (which don't seem to actually be listed anywhere), and you have no control over it.
> I find it disingenuous that you completely dismiss the societal cost (privacy)
I don't. I think there's other, significantly better solutions for it. I don't think NAT provides reasonable privacy in and of itself.
> the engineering costs
In practice, the fact that it's been spread out over 20 years so far is because that's how long it takes to get round to replacing an entire nation-wide deployment of carrier-grade infrastructure at all unless there's other reasons to do so. Smaller/regional ISPs have been on IPv6 for years now, partially because buying enough IPv4 space would be prohibitively expensive and partially because there's no reason not to. The technical details of IPv6 support were resolved in pretty much all networking kit a long, long time ago - it's a marginal cost at this point. The rest of it is primarily planning, testing, and replacing ancient consumer routers.
> the ONLY case that has been made is "we're running out of IPv4" which is not wrong, but far from dire as I can still get 100 IPv4 addresses for $50, which is the same price I've paid for it 10 years ago
And yet I can't get a real IP address for most of the things I'd like to. My ISP tries its hardest not to sell IPv4 addresses to anyone (it can't buy them quickly enough, and buying them is a huge resource drain - they lose money on every address sold, which is then made back up in subscription costs), let alone "home" users. On the other hand, it literally gives out static IPv6 ranges if you ask nicely.
Verizon was also doing this for mobile customers in the US, and perhaps still does. I vote with my wallet against these ISPs. You did have some control over it - for example, by using HTTPS. But IPv6 prefixes are so plentiful that they are assigned one per customer, which makes correlating logs incredibly trivial. Even schemes like O2's or Verizon's required some per-ISP effort; with IPv6 there is no such effort and no need to inject headers. The prefix is your undeletable cookie.
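To make the "undeletable cookie" point concrete, here's a sketch using only Python's standard `ipaddress` module. The two log entries are made-up documentation addresses with different (randomized) suffixes; masking to the delegated /64 is the only step two sites need to link them:

```python
import ipaddress

def prefix64(addr: str) -> str:
    """The /64 an ISP typically delegates; survives RFC 4941 suffix churn."""
    return str(ipaddress.ip_network(addr + "/64", strict=False))

# Hypothetical access-log entries from two unrelated sites.
site_a_client = "2001:db8:42:1:a1b2:c3d4:e5f6:1"
site_b_client = "2001:db8:42:1:9f8e:7d6c:5b4a:2"

print(prefix64(site_a_client))  # 2001:db8:42:1::/64
print(prefix64(site_a_client) == prefix64(site_b_client))  # True
```

No cookies, no injected headers - just two access logs and a bitmask.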
> I don't. I think there's other, significantly better solutions for it. I don't think NAT provides reasonable privacy in and of itself.
It's not the NAT that affords privacy - it's the size of the address space: IPv4 has enough addresses to function, but not so many that an ISP can avoid reassigning them.
A home NAT only affords as much privacy as suffix randomization (as has been noted in this thread), which is "very little" to "none at all".
What are those other "significantly better" solutions you are aware of ? I've been looking for them, and found none.
> And yet I can't get a real IP address for most of the things I'd like to.
Likely because you are on a residential ISP and it's not their business (my ISP will gladly sell me one if I switch to the "business class" service, which is exactly the same except it costs about twice as much; I'd pay more to NOT have a fixed IP address).
Get an Amazon free tier and tunnel through it. Or pay $2 for a lowly VPS to tunnel through.
I don't think your wish to experiment is somehow more important than my wish for privacy. Neither of us get to actually vote (except with our wallet), though.
Again, we live in a world where CGNAT is a thing. My own ISP puts all IPv4 connections through CGNAT by default unless you explicitly opt out. Many smaller ISPs do the same - one of the new gigabit broadband services in my country will not allocate IPv4 addresses to customers, instead going for CGNAT and requiring an additional payment of £5 a month for an IPv4 address.
Mobile ISPs all implement CGNAT on IPv4 at this point - if they attempted to buy enough address space for every active mobile phone to have an IP, there'd be a serious problem.
Every single user on each of these networks does not have a routable IPv4 address. You cannot make a direct connection to these devices. IPv6 solves that problem.
> What are those other "significantly better" solutions you are aware of?
Tor. Future protocols should integrate HORNET or similar. If you really want a NAT without onion routing, use a VPN that'll do it.
> Likely because you are on a residential ISP
That's literally the point here. There's a differentiation between a "residential ISP" which can only ever consume and never participate as an equal part of the network, and a "business ISP" which is significantly more expensive because it comes with an SLA that I don't need or want.
IPv6 allows me to be an equal part of the network at the same cost as my current broadband service. I can run a website off my raspberry pi without paying anyone a penny. I can SSH/remote desktop into my home machine without having to create a "jump server". I can participate in peer-to-peer networks without depending on the hope that some other people on the network have machines that I can directly connect to, so that nobody else has to directly connect to me.
Home NAT, which is equivalent to suffix randomization, does NOT afford any privacy.
Carrier Grade NAT, which would be equivalent to prefix randomization (if such a thing existed) DOES afford some privacy, provided that care is taken not to leak other data (through cookies, browser fingerprinting, stylometrics, etc).
I am not currently at home behind a CGNAT, because my ISP is apparently IPv4 rich, but they are planning to switch at some point. I am behind a CGNAT on my mobile. I have no problem doing peer to peer on either using a STUN server I run on a $2 VPS that comes with an IPV4 address. I also tunnel ssh to my home through it when I want to.
The same ISP, if I request an IPv6, will give me the prefix it assigned to me the day I signed up. That's how they roll (They actually play it as a feature - "you pay for a fixed IPv4, but you get a fixed IPv6 for free! without even asking!")
IPv6 does allow you to play "equal part" - it's routable, yes - but if everyone were equal we would have mob rule by DDoS attacks far worse than we do now (perhaps everyone is equal and we will have them; if so, it will stop being the case after a few high-profile attacks).
Also, 99.9% of people do not know how to secure their networks or devices. If everything were routable, as you seem to desire, I think we'd be worse off. As it is, home NATs provide a bit of security (which no one designed - we got lucky they were there because of address scarcity), and CGNATs/random v4 assignment provide a bit of privacy (which got lip service, but would not have been as effective if not for address scarcity).
My threat model includes "$company can track my whereabouts online regardless of what I do about it". Your threat model seems to be "I can't route to my server without another hop". It's not that one is valid and one is invalid - it's just that they are incompatible with each other.
I take it that you've never heard of a firewall on your router. Mine ships default deny. I assume yours does too.
> "Because the default wi-fi password formats are known, it's not difficult to crack them," said Mr Munro.
> Once an attacker has access to your wi-fi network, they can seek out further vulnerabilities.
Out of curiosity... without cheating, what do you reckon v6 deployment is at for clients in the US -- that is, what percentage of clients do you think use v6 to connect to v6-enabled sites?
Are you familiar with an ISP that will give you a new v6 prefix on demand (say, once every hour or day or week)? Or one that mixes all customers? Otherwise, the NAT you do yourself behind that prefix is of very little (though not strictly zero) practical use; it just means that if someone gets access logs from two websites, they don't know whether two requests were made from my laptop, or one from mine and the other from my kid's.
I am not living in the US these days and have no knowledge on which to base an estimate... but I haven't received an AAAA DNS record for any request I've made from several countries.
edit: added: My specific browser setup, described somewhere else around here, makes it hard to track or fingerprint me. IPv6 takes that ability away from me (and everyone else).
Wait, whether you receive an AAAA DNS record has nothing to do with whether you're on IPv6 - it's to do with whether you're requesting AAAA records. How exactly are you testing this? What does `dig google.com AAAA @8.8.8.8` get you?
$ dig www.amazon.com AAAA @8.8.8.8
; <<>> DiG 9.7.1-P2 <<>> www.amazon.com AAAA @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: xxxxx
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;www.amazon.com. IN AAAA
;; ANSWER SECTION:
www.amazon.com. 1208 IN CNAME www.cdn.amazon.com.
www.cdn.amazon.com. 55 IN CNAME opf-www.amazon.com.
;; AUTHORITY SECTION:
opf-www.amazon.com. 895 IN SOA ns-191.awsdns-23.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 60
;; Query time: 70 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; MSG SIZE rcvd: 147
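Worth noting what dig actually puts on the wire here. The sketch below (not dig's real code, just the RFC 1035/3596 wire format built by hand) shows that AAAA is simply QTYPE 28 in the question section - a client or stack that never sends QTYPE 28 will never see AAAA answers, regardless of what the zone publishes:

```python
import struct

def build_dns_query(name: str, qtype: int, qid: int = 0x1234) -> bytes:
    """Wire-format DNS query; QTYPE 28 == AAAA, 1 == A."""
    # Header: id, flags (RD set), QDCOUNT=1, AN/NS/AR counts = 0.
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)  # QCLASS IN

query = build_dns_query("www.amazon.com", qtype=28)
print(len(query))  # 32
```

So an empty ANSWER section for AAAA is a statement about the zone only if the query really went out with QTYPE 28, as the dig invocation above does.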
We provide IPv6 for all our customers by default. Some customers choose to disable IPv6 (the most typical reason appears to be they have anti-abuse systems that require the client IP to be v4).
I'm even more surprised at Amazon lacking an AAAA record, though. They surely have the data to tell, and IPv6 won't improve their retail business (or they have IPv6 fraud problems that would negate whatever improvement).
Can you share what percentage of customers have IPv6 turned off explicitly?
Can you share what percentage of hits to cloudflared sites can be IPv6 (even if they happen through IPv4)?
CloudFront now supports IPv6, so the reasoning for not enabling it on Amazon.com is likely similar.
No, we don't. We operate a v6 to v4 gateway for you.
IPv6 has caught on (I'm commenting from an IPv6-only connection right now, on a residential US ISP).
Most clients do perform RFC4941 suffix randomization.
Replacing the prefix destroys one of the most useful features of IP addresses and in particular the larger IPv6 address space - routability (the property that getting to two addresses that share a prefix usually uses the same next-hop router).
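These two points - suffix randomization and prefix routability - can be sketched with Python's `ipaddress` module (the delegated /64 below is a made-up documentation prefix): randomizing the RFC 4941-style interface ID changes nothing about the routing prefix, which is exactly why routers like it and why it doesn't hide the subscriber.

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """RFC 4941-style sketch: pick a random 64-bit interface ID in a /64."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64
    return net[secrets.randbits(64)]

# Hypothetical ISP-delegated prefix (documentation range).
delegated = "2001:db8:1234:5678::/64"
a = temporary_address(delegated)
b = temporary_address(delegated)

# The suffixes (almost certainly) differ, but routing only ever looks at
# the shared prefix, so both addresses reach you via the same next hop.
same_net = (ipaddress.ip_network(f"{a}/64", strict=False) ==
            ipaddress.ip_network(f"{b}/64", strict=False))
print(same_net)  # True
```

Rotating the prefix itself would break this next-hop aggregation, which is the trade-off the comment above is pointing at.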
What is the percentage of US homes who are on an IPv6?
What is the percentage of websites on IPv6?
What are the number of web site hits that are IPv6 to IPv6? (in the US? in the world?)
The highest estimate I've ever seen for any of these is less than 20%, which - 20 years into IPv6 - is in my opinion "not caught on". When 3G arrived, the mobile world preferred carrier-grade NAT to IPv6 (which was technically the better solution), which is in my opinion "not caught on".
Suffix randomization has been implemented, but it is not universal in my experience, and it is essentially useless for privacy in one's home: it slightly blurs the distinction between my laptop and my son's iPad, and that's ALL it does.
Right now I enjoy getting an address from a pool of 16K addresses every time I reset my cable modem; And it is likely to transition soon to a carrier-grade NAT which would give me even more privacy.
Perhaps I'm the one who should be taking crazy pills - I seem to remember the Snowdens of yesteryear and Facebook shadow profiles, which either I'm hallucinating or no one else seems to care about.
On the other hand, I have a startup idea to profit from the impending v6-complete-lack-of-privacy that I should probably start working on. If you can't beat them, profit off them.
The server-side IPv6 adoption is not so great; see http://www.delong.com/ipv6_alexa500.html for deployment numbers. Luckily, most ISPs providing IPv6 (or at least, my personal one) provide a carrier-grade NAT64 gateway to allow access to IPv4 services from IPv6-only clients.
By web site hits, I have no idea - I don't know where to find those numbers.
IPv6 suffix randomization is enabled by default on Windows, OSX, and iOS. For Android, it probably varies (like everything else) by vendor, but my personal Android phone is using a random suffix. What are the machines you're using that aren't doing this?
Yes, suffix randomization doesn't hide which home connection you're on - but neither did old-school IPv4 + NAT. Sure, IPv6 didn't add that feature, but hiding your origin is enough of a performance killer that it should be relegated to a separate system like Tor. The IPv6 prefix you are assigned by your carrier is a feature of whatever DHCPv6 setup they have; if they're assigning you the same prefix every time you power-cycle your modem on IPv6, and they were not doing so with DHCPv4, that's super weird.
I think it was Win7 last I tested it, probably an early service pack; according to https://superuser.com/questions/243669/how-to-avoid-exposing... it should already have had privacy addressing, but perhaps it was somehow turned off on the machine I tested (or perhaps my expectation that it would change on reboot was wrong?).
> The IPv6 prefix you are assigned by your carrier is a feature of whatever DHCPv6 setup they have; if they're assigning you the same prefix for every time you power-cycle your modem on IPv6 and they were not doing so with DHCPv4, that's super weird.
They were allocating from a pool on DHCPv4, where reservations were for a few hours (so immediate power cycle would get same address, but if you wait a couple of hours or release and request, you'd get a new one). They are not using DHCPv6 in the same way - they assign a prefix-per-customer. That was the case with all the local IPv6 carriers I inquired with. I guess it means that the prefix is /56 or even /60 - I didn't even ask.
All local ISPs I've asked, don't give out the same IPv4 (they all charge for a fixed IP, so no guarantee you'll get the same one unless you pay; at least 3 out of the 6 actively change your IP whenever they can, to force you to pay even if you need fixed IPs for short times).
All local ISPs I've asked provide the same IPv6 prefix to a customer.
I assumed that was common practice - at the very least, more common than the other way around (fixed IPv4 when you didn't ask for it, random IPv6)
And some run the connection as IPv6 only and then CG-NAT IPv4, which of course gives you a random IP again, but is even worse for P2P applications and means you can't use DynDNS etc anymore.
My personal connection has had the same IP for the past 5 years, and I think changing it would mean asking my ISP and coming up with an answer for why I need that.