Coming up with a solution that looks like a huge technological advancement, with no real respect for the motivations or incentives of those who'll need to implement and use it, is a fairly common occurrence in tech and something engineers should be trained to guard against.
- There is a lot in IPv6 that is different from IPv4. Ignoring if those changes are good or bad, it does make the transition harder.
- IPv6 was promoted way before there was demand. To some extent it is good to prepare people (and vendors), but it does create the impression that IPv6 is a failure.
- Demand for IPv6 is highly asymmetrical. The party that is out of IPv4 addresses needs IPv6. But everyone who has enough IPv4 space has no reason to care.
When IPv6 was first promoted, there basically was no IPv4 market. You would just go and get more IPv4 space when needed. For the last couple of years, though, there has been a mature market for IPv4 addresses.
It is possible to buy IPv4 addresses, but prices go up. At some point it becomes interesting to try to move traffic to IPv6.
- Memorising an IPv4 address is about as easy as memorising a phone number, which is to say, fairly easy. I remember the IPv4 addresses of both my rental servers, every device on my home LAN, a bunch of public DNS servers if things go wrong, ...; there's no way I'm going to be able to do that for IPv6.
- At least last time I tested it (more than 10 years ago now), the greater length of IPv6 headers had a quite measurable adverse impact on transmission latency of small packets (online gaming, remote shell...)
- Why do people keep treating "you get your own unique IP address when browsing" as if it were an advantage? The way I see it, NAT and IP address reuse (especially together with some European countries' laws stipulating the address->identity mapping must be deleted within some time period) are the currently most widely rolled out privacy technology. Somewhere downthread, they talk about how Belgian police is trying to prevent ISPs from putting more than 16 customers behind the same internet address. Since I can hardly say everything I do on the internet is perfectly legal, what's bad for Belgian police is probably good for me.
I've maintained for 2 decades that simply adding another 2 slots for 0-255 would have opened up a greatly usable amount. Every current v4 address - say 1.2.3.4 - would also be 0.0.1.2.3.4, but we'd have another 65,000-odd groupings of 4 billion addresses to allocate as needed, and a transition would have been far easier (smaller space, less processing, easier to think about, etc).
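A minimal sketch of the scheme described above (names and the `(0, 0)` default are just for illustration): every existing 4-octet address maps onto the same address with two zero octets prepended, and the two new octets open up 2^16 = 65,536 copies of the current 32-bit space.

```python
# Hypothetical 6-octet "extended IPv4": prepend two octets to every
# existing address; (0, 0) designates the legacy internet.

def to_extended(v4: str, prefix=(0, 0)) -> str:
    """Embed a dotted-quad IPv4 address into the 6-octet scheme."""
    octets = [int(o) for o in v4.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return ".".join(str(o) for o in (*prefix, *octets))

print(to_extended("192.0.2.1"))   # -> 0.0.192.0.2.1
print(2 ** 16)                    # 65,536 groupings...
print(2 ** 32)                    # ...each of ~4.3 billion addresses
```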
I understand where you are coming from by adding a few more bytes to the address scheme, but one of the things IPv6 was designed for was massive address aggregation, which means really short routing tables. Your 0.0.1.2.3.4 (say) scheme does not go far enough. Also, your scheme needs to be efficient in the world of bits and bytes and I don't think it is. Your scheme would probably need to be 64-bit to start with and not 48-bit, because silicon, etc. doesn't work like that.
However that simple routing scheme was blown out of the water by provider-independent addressing - i.e. getting your own ISP-independent address range. When you change ISP you end up with another prefix and hence all your addresses change. All addresses. So you buy your own range (about £3,000 setup and £3,000 per year, from memory). You also need an ISP (or several) to route it, and if more than one then a BGP peering arrangement.
Another wrong in the name of IPv6: 64 bit IPv6 prefixes from an ISP means you can only have one subnet. No way to put your dodgy IoT stuff on its own VLAN.
/56 == 256 /64s
Each /64 == 18,446,744,073,709,551,616 addresses
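The arithmetic above is easy to check with Python's stdlib `ipaddress` module (2001:db8::/56 is the documentation prefix, used here as a stand-in for an ISP delegation):

```python
import ipaddress

# A /56 delegation contains 2^(64-56) = 256 distinct /64 subnets,
# and each /64 holds 2^64 interface addresses.
prefix = ipaddress.ip_network("2001:db8::/56")
subnets = list(prefix.subnets(new_prefix=64))

print(len(subnets))              # 256
print(subnets[0].num_addresses)  # 18446744073709551616 (2^64)
```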
(I'm looking at you UPC/Virgin Media/Liberty Global)
> Another wrong in the name of IPv6: 64 bit IPv6 prefixes from an ISP means you can only have one subnet. No way to put your dodgy IoT stuff on its own VLAN.
Can't your router handle carving up a subnet just fine?
Of course, some people aren't only using the Internet for browsing.
The Internet of Things will end up needing unique endpoints, and NAT does not play well with those either.
India has a lot of people, and quite a few of them will be IPv6 only (or behind a very degraded carrier grade NAT). If you are talking to customers/clients/vendors there, assume you need IPv6.
Memorising IP addresses isn't done in any larger scale network, you use DNS.
NATs work well at about the scale of a single household, beyond that, they keep making things worse.
I was always curious about this argument -- do you mean that NAT elimination will allow any two arbitrary devices to communicate with each other?
I would think that even in an IPv6 world, firewalls would still be a necessity. Most ISPs would continue shipping routers with a stateful firewall enabled by default (to prevent internet exploits), so any peer-to-peer software would still have to deal with UPnP/STUN/TURN. Sure, the STUN protocol will be simplified a bit because it would not need to worry about the IP changing, but it would still be way more complex than just a simple connect() call.
Note that the situation may be better in some cases -- like for India or for cell phone networks -- but there would still be enough people behind the firewall to make arbitrary incoming connections unreliable.
Related: the privacy extensions seem to be a pretty bad idea. I have no idea how I would set up a firewall to say "allow incoming traffic to my main laptop, port 22222" if its IP address is always changing. Ideas like "disable privacy extensions" and "filter by MAC" have their own significant downsides.
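For context, the stable alternative that privacy extensions replace is a modified EUI-64 interface ID derived from the MAC address (RFC 4291): easy to firewall precisely because it never changes, and trackable for the same reason. A rough sketch of the derivation:

```python
# Modified EUI-64 (RFC 4291): split the 48-bit MAC in half, insert
# ff:fe in the middle, and flip the universal/local bit of the first
# octet. The result is the stable 64-bit interface ID.

def eui64_interface_id(mac: str) -> str:
    b = bytes(int(x, 16) for x in mac.split(":"))
    iid = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    # Render as four 16-bit hextets, lowercase, leading zeros dropped.
    return ":".join(f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("52:54:00:12:34:56"))  # -> 5054:ff:fe12:3456
```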
Stateful packet filters will still be necessary, but those would be end user configurable.
Keep in mind that you are thinking about a NAT you control. If your ISP is NATing you as well, you would have a very different impression.
- If you go around memorising addresses then you are doing IT wrong in general. So many things depend on DNS (not just A records) that punching in IPs by default is a bad habit. Browsers will keep on enforcing SSL/TLS more and more until the point where typing in an IP address into the URL bar will be as painful as using the web GUI for say an elderly HP switch is right now.
- In general latency is not affected by header lengths these days. In some cases, networks are prioritising IPv6 for the opposite effect. In other cases ISPs have dropped their entire IPv6 support without noticing for quite some time. sigh
- The addressing scheme in use should have nothing to do with your privacy. Yes, NAT does accidentally hide you a little bit. However I can fingerprint your browser instead, for example. I'll trade it for easy SIP and RTP without NAT any day.
Now, for my gripes:
- Try doing multi WAN effectively over IPv6 without PI and a routing algorithm, or NAT
- Try changing ISP (new addressing everywhere)
The second gripe I currently work around with RFC 4193 (Unique Local IPv6 Unicast Addresses); the first one I whine about, and will probably use NPT (wholesale NAT for IPv6).
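A sketch of how an RFC 4193 ULA prefix is built: fd00::/8 (fc00::/7 with the L bit set) followed by a 40-bit pseudo-random Global ID, yielding a /48 that is overwhelmingly unlikely to collide with anyone else's, unlike 192.168.0.0/16. The RFC derives the Global ID from a timestamp/EUI-64 hash; plain randomness here is a common simplification.

```python
import secrets
import ipaddress

def random_ula_prefix() -> ipaddress.IPv6Network:
    """Generate a ULA /48: fd00::/8 plus a random 40-bit Global ID."""
    global_id = secrets.randbits(40)
    prefix_int = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

p = random_ula_prefix()
print(p)  # e.g. fd3c:1a2b:4d5e::/48 (random each run)
print(p.subnet_of(ipaddress.ip_network("fd00::/8")))  # True
```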
Most of the things I do with memorised addresses have nothing to do with the browser or HTTP. (Mind you, though, the moment a browser won't let me access a bare IP, I'm switching away from that browser.)
> - In general latency is not affected by header lengths these days. In some cases, networks are prioritising IPv6 for the opposite effect. In other cases ISPs have dropped their entire IPv6 support without noticing for quite some time. sigh
I wondered if this may be the case; I may need to rerun some tests.
> - The addressing scheme in use should have nothing to do with your privacy. Yes, NAT does accidentally hide you a little bit. However I can fingerprint your browser instead, for example. I'll trade it for easy SIP and RTP without NAT any day.
A website I navigate to may, but how will the carrier fingerprint my browser?
All in all, it seems like we are talking about very different threat models to privacy/security. You are worried about the likes of Google and Facebook profiling you, whereas I am worried about the likes of $intellectualpropertymonopolist sending me a $20k bill for identifying me in a torrent, or experiencing a nasty surprise at $nationalborder (or at home!) for something I said on an internet forum.
(This does not seem like an abstract or overblown threat to me; I've seen 2 of the 3 things above happen to friends and even a schoolmate, more than twice.)
Me too. Wireshark, nmap and co are in regular use in my job. All browsers are playing nanny more and more, apart from the likes of Links (which I also use quite often). A lack of https is already flagged and I suspect that things will get worse in this regard. Links2 has saved my bacon many times in the past so you may enjoy it 8)
You are worried about the likes of Google and Facebook profiling you
No mate. I'm CREST accredited: I'm not worried about G and F profiling me - I know they do. However I also know that my choice of addressing scheme does not affect my privacy whatsoever. A carrier can use metadata to derive loads of facts about your usage even if you are connecting to the other end over, say, https. NAT will save you from some silly firewall screw ups but not much else. A VPN can help but is no silver bullet either. If I really put my mind to it I could probably make myself near enough anonymous with enough use of proxies, VPNs and Tor, but I'm not too sure about that!
I too am from the UK and have seen silly overreactions such as your link. However, this is the world we have nowadays and in our case we have a horrific level of CCTV pointed at us as well as some pretty impressive levels of IP traffic mining. We also have the rather unpleasant RIP Act and a few others to belie our supposed liberal way of life in the UK. You do have to be careful what you say nowadays, within reason.
But surely you can visit a McDonald's car park, run tor-browser and download whatever that Sony thing was without being traced.
This heavily reminded me of the good old Night Watch essay. You can punch DNS names into the address bar on a browser, because someone out there traded normal sleep schedules for the tremendous opportunity to think about BGP trees, netmasks, and other exciting arcana.
If you are debugging network connectivity, then I have a tough time believing it’s possible without typing in a few network addresses.
Because I'll surely have a TLD for my local nginx and for my home router...
It's also fairly complete, used in production by some pretty big ISPs.
For a lightweight approach instead, Dnsmasq is good:
It's what most home routers (and lots of other stuff) embed. :)
It’s just a bit of a pain to maintain if the only thing you care about is connecting to server no. 192.168.0.x
I could certainly do it, but there are 5 services in my house I’d care to connect to, and remembering 1-5 is just as easy as giving them all names. The marginal gains are very low.
Personally I tend to throw the more stable IP addresses in /etc/hosts, as that's simple too. :)
If you go around exposing services on the internet then you should know how to do it properly. If you can get a name on it then you can put an SSL certificate on it (cheers, Let's Encrypt).
If you have an SSL cert on it then you can be fairly sure you are talking to your gear and not a MitM, if you take other precautions.
I absolutely do have an LE SSL cert for my home router and all my home web sites. pfSense has an ACME and dynamic DNS client for many services and HAProxy built in. What more could you want?
I'm in the group that you're speaking of—I have a personal server set up that I can log into remotely, but the prospect of setting up a domain and TLS is very daunting. But, I don't think people like me are all that common.
The original tenet of the internet pretty much hinged on the idea that every person with a computer can host/self-publish information as well as consume other people's information in a distributed way. They will not have to depend on a centralized publishing authority.
I think v6 could have enabled this.
Although IPv6 doesn't have NAT, per se, it does have Network Prefix Translation (NPTv6). And some VPN services (FrootVPN and Perfect Privacy, for example) have already implemented that.
With ipv6, an IP address can be completely disposable. You could scrape Google search results all day long and use a different address for each call.
Heck, you could completely proxy search results in real time and create your own search engine, secretly using Google as your backend while injecting your own ads.
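A sketch of the "disposable address" idea: with a routed /64 you can mint a fresh source address per request simply by randomising the 64-bit interface ID, which is essentially what RFC 8981 temporary addresses do automatically. The prefix below is from the documentation range and purely illustrative.

```python
import secrets
import ipaddress

# Pick a random interface ID inside a routed /64; every resulting
# address is valid and reachable under the same prefix.
net = ipaddress.ip_network("2001:db8:1:2301::/64")

def disposable_address(net: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    return net[secrets.randbits(64)]

a, b = disposable_address(net), disposable_address(net)
print(a)
print(b)  # different address, same routed prefix
```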
It works the same way on v4 and v6.
I know next to nothing about ipv6, I'm afraid.
Note that there's still no way to map a subnet to a person, just like there's no way to map a public v4 address to a person.
On the other hand, if the ISP ultimately does hand out the addresses in the subnet (and the end user can merely ask it for a new one), the ISP can retain a record of this. Together with the server-side data, that can be used to unambiguously deduce who accessed what, whereas the equivalent information in v4+NAT is insufficient without also logging everyone's connection metadata. It would therefore be more appropriate to say the privacy, tracking and banning implications are the same as dynamic IPv4 without NAT, where you can likewise request a new address from your provider at any time.
The prefix might be 2001:db8:1:2300::/56, the first network 2001:db8:1:2301::/64, and the machines on that network 2001:db8:1:2301:random:numbers:go:here.
The ISP knows who has which prefix, because they handed them out, but the allocation of IPs inside that prefix is handled entirely by the end-user network. The ISP isn't involved in it, so they have no idea which IP is which computer.
With NAT the LAN-side IPs are hidden from the ISP. In v6 the ISP can see the LAN part of the address, but without any way to identify which machine is using which IP that doesn't give them any extra information. All they get is the prefix and a random number. Computers typically change the random number on a regular basis too so you can't even do any long-term analysis on it.
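The split described above can be made concrete with `ipaddress` (all prefixes below are the same documentation-range examples used in this thread): the ISP can attribute the /56 it delegated, the customer router knows the /64 it assigned, and the 64-bit interface ID is a host-chosen random number.

```python
import ipaddress

# Who knows what about this address:
addr = ipaddress.ip_address("2001:db8:1:2301:9f3a:11c2:7e44:b1d")
isp_prefix = ipaddress.ip_network("2001:db8:1:2300::/56")  # ISP-delegated
lan_subnet = ipaddress.ip_network("2001:db8:1:2301::/64")  # router-chosen

print(addr in isp_prefix)  # True - the ISP can name the customer
print(addr in lan_subnet)  # True - the router can name the VLAN
iid = int(addr) & (2 ** 64 - 1)
print(hex(iid))            # random 64-bit suffix, rotated by the host
```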
 https://www.e-recht24.de/news/datenschutz/10387-datenschutz-... (in German)
Limiting the timeframe does improve the costs and the privacy impact, but the ISP will still need to build and run all of the logging infrastructure, and the end result is that you'll still be identifiable.
This results in a stalemate of sorts:
As a server operator, as long as you have IPv4-only clients, you need an IPv4 address. There are no IPv6-only clients, so implementing IPv6 at all is a lot of work with no tangible benefits for the next x years.
As a client, you can't go IPv6-only without losing access to the IPv4-only services. There are no IPv6-only servers, so implementing IPv6 at all is a bunch of work with no real benefits for the next x years.
("no" meaning a tiny fraction that rounds to 0%)
("x" being the number of years before there's a significant number of IPv6-only servers or users)
You could design something where the IPv4 could be left unchanged in the core of the Internet. But the core of the Internet has supported IPv6 for a long time, it is the edges where the problems are.
Just something as simple as writing an address to a log file would fail if addresses suddenly became bigger.
I disagree somewhat. You could imagine a hierarchical routing structure where IPv4 NAT servers double as IPv4ext routers and gateways. All current servers maintain an IPv4 address which, if the server is unaware of IPv4ext, will not be able to decode things like the full client IP address for logging, giving an incentive for server operators to upgrade.
Meanwhile, unaware client end-points connect to IPv4ext servers just as before, via NAT, but of course don't have end to end connectivity to other IPv4ext clients, giving them a (weak) incentive to upgrade.
This would go on for a decade or two just like it did with IPv6, until the moment where virtually all firmware and software is upgraded. At this point carrier grade NATs would stop mangling, the IPv4 only packets are dropped from the core and full non-hierarchical routing can commence on the full IPv4ext address field, finally solving the IPv4 exhaustion problem, in addition to the already solved e2e problem.
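To make the hand-waving concrete, here is a purely hypothetical sketch of how such an "IPv4ext" extension might ride in an IPv4 option: two extra address octets per endpoint, so upgraded NATs and routers can reconstruct 48-bit addresses while legacy gear just sees (and may well drop) an ordinary IPv4 packet with an unknown option. The option type 0x9E and the layout are invented for this example; nothing like it is standardised.

```python
import struct

EXT_OPT_TYPE = 0x9E  # invented option type for illustration only

def pack_ext_option(src_hi: tuple, dst_hi: tuple) -> bytes:
    """Pack the two extra octets of source and destination addresses
    as an IPv4 option: type, total length (6), then 2+2 octets."""
    return struct.pack("!BB2s2s", EXT_OPT_TYPE, 6,
                       bytes(src_hi), bytes(dst_hi))

opt = pack_ext_option((0, 7), (0, 42))
print(opt.hex())  # 9e060007002a
```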
I agree this backwards compatible deployment scenario is much more convoluted than a clean slate redesign, and it's hard to imagine it could have seemed preferable in the 90s, before widespread carrier NAT was a thing.
1) As far as I know, nobody ever wrote down a protocol description, let alone made an implementation. The devil is in the details. It is very hard to estimate whether such a design would actually work.
2) At the moment, the core of the internet and just about all host operating systems support IPv6. The hard part is getting the edge networks to upgrade. My guess is that your proposal would run into similar issues.
The fundamental difference compared to dual stack is that there is no technical cost to deploy it beyond software updates, and very little risk - aside from losing the dubious security benefits of NAT. Not only the core, but virtually all hardware and software on the Internet today supports IPv6, yet there is an enormous cost to configure it to work.
This flexibility and drop-in upgradability of things like EnhancedIP comes with the significant cost of breaking the internet into 2^32 independent routing domains that prevent you from getting the full benefits of the larger address format. This is an acceptable trade off only in retrospect, after the v6 "failure".
There is very little consensus within the operator community on how to deploy IPv6. The net effect is that there is an endless series of configuration options.
Which leads to having to select equipment very carefully to make sure you get an overlapping feature set.
The problem with something like encoding extra address bits in IPv4 options is that you would have to get a large group of operators on board for it to get any traction.
For example, the ISP that I use for my home internet connection gives customers a static (officially 'stable') IPv4 address. It may not be beneficial for them if a next generation internet protocol would force the deployment of NAT boxes.
The downside of dual stack is that you have to manage 2 networks. The benefit is that you manage two completely separate networks. IPv4 routing has very little effect on IPv6 routing.
Merging the two, as is done for example with NAT64/DNS64, leads to network issues that many people don't understand. I think you would get the same if you mix legacy IPv4 stacks with stacks that encode extra bits in an IPv4 header option.
This is of course completely ignoring the fact that many firewalls just drop anything that has IPv4 options. So deployment may be just as an uphill battle as IPv6.
Sure, that's why you have things like the IETF: to coordinate such changes. If that were to happen, the EnhancedIP option would quickly become transparent, as the core of the internet upgrades firmware in a few years. I'm not in any way proposing this is a good idea today, let alone that it could be deployed in an ad-hoc, uncoordinated fashion.
> my home internet connection give customers a static (officially 'stable') IPv4 address. It may not be beneficial for them if a next generation internet protocol would force the deployment of NAT boxes.
In that case you become the owner of your own routing domain and you have the option to provide your "customers" with an extended IP address that is e2e reachable from the outside world, while NATing the rest of the legacy devices. Your ISP does not have to do anything aside from not filtering options, that's the beauty of a backwards compatible solution.
> Merging the two, as is done for example with NAT64/DNS64, leads to network issues that many people don't understand.
The complexity and fragility of something like NAT64 comes exactly from the fact that it tries to bridge two separate internets. An EnhancedIP NAT is simply a transparent router for EnhancedIP aware endpoints within the domain it controls. You can freely mix legacy and upgraded devices in your network with guaranteed interoperability and you get e2e connectivity if both end points and any NATs in the route are upgraded. The rest of the network only "sees" IPv4 traffic as far as they are concerned.
Sure, we can simply stay on v4, encapsulate everything into UDP, and use vhosts or other names - treating v4 address + port as the solution. Economically both are costs of growing the internet. (Any solution requires a lot of application changes anyhow; v6 is simpler, but requires ISP buy-in, so what can app devs do?)
As the article argues, you can't run IPv6-only. You need some strategy to reach IPv4 services on the internet because the internet is IPv4. That answer is going to be either publicly routable IPv4, IPv4-to-IPv4 NAT, or IPv4-to-IPv6 NAT. If you do the latter (or if you do dual stack) you can route directly to other IPv6 hosts without NAT - but what's the benefit? Are there systems of communication between parties on the public internet that can guarantee native IPv6 on both ends, don't want to use IPv4 NAT, and don't want to set up a point-to-point VPN?
(I am actually okay with IPv6 ULAs for private addressing on private VPNs to avoid RFC 1918 collisions/exhaustion, but that also saves you a lot of the complexity of IPv6 deployment because you don't need any network device support, you generally get address assignments from your VPN layer and don't need to think about SLAAC or DHCPv6 or anything, etc. And it's unrelated to public IPv4 exhaustion.)
When I was on an ISP with CGNAT I regularly saturated the connection tracking tables, which resulted in all new connections failing until some slots were freed up.
NAT also makes it difficult to run any kind of server at home (PCP support varies) or use any kind of p2p protocol.
NAT traversal techniques don't work all the time and even when they work they may only help for coordinated connections but not for unsolicited contacts.
I care about this greatly, but I'm not clear that ISPs do. On the contrary, my ISP's TOS states that I'm technically not allowed to run my own server on my home internet plan.
WebRTC is one example of something that can greatly benefit from having publicly routable addresses on clients (in practice this tends to mean IPv6). If both clients are behind IPv4 NAT that doesn't allow hole punching, they will need a TURN (or media) server in order to communicate. While the clients might be close to each other (e.g. same apartment building), that TURN server can have horrible routing for this set of clients. For example, it could be in Europe while both of the clients are on the west coast of the USA (and that's not even nearly the worst case).
- There are limits to how many devices you can put behind a single IPv4 address. There is the case of Belgium where law enforcement asked ISPs to limit CGNAT to 16 customers per IPv4 address. Obviously for law enforcement, if an address is shared between multiple customers it makes investigations harder.
- A second problem is that you may lose geographical resolution if customers for a wide area share a pool of addresses. For some ad placement you really want to know where addresses are.
- But the bigger problem is that network speeds keep growing. Compared to an IPv6 router, boxes that can do NAT at a large scale and high speed are quite expensive. So an ISP has an incentive to move traffic volume to IPv6. Relatively low volume oddball sites can go over the NAT box.
If enough of the Internet is running IPv6 that you save significantly on performance by bypassing NAT for IPv6 sites only, that seems worthwhile, sure. But also I'd intuitively find that surprising, at least at present - maybe my intuitions are just wrong about how much IPv6 there is.
Another advantage is that v6 is easy to hand off early, but if you're CGNATing v4 then your v4 traffic has to go via your CGNAT routers. For cost reasons you probably want as few of those as possible, which means v4 traffic may need to go further to reach them. T-Mobile in the US is like this; v6 traffic is passed off as soon as possible and gets a relatively direct network path, but v4 traffic has to go all the way to one of their datacenters to get NATed. That can add a lot of latency.
I have no clue what law enforcement plans to do.
Video services use most of the bandwidth. So with youtube and netflix on IPv6, you can easily have most the traffic go over IPv6.
Then the performance problems are limited to sites without IPv6.
Do you mean ~65k devices behind a single public IPv4 address? 
>> There is the case of Belgium where law enforcement asked ISPs to limit CGNAT to 16 customers per IPv4 address.
Well if law enforcement in Belgium asked, of course we all need to immediately work to redesign the global IPv4 internet to comply. </sarc>
That's an absolute limit of 2^16 (65k) - the practical limit is much lower.
If you only allow one connection per client, then yes, you can get to 65k with TCP/UDP.
If you want more than one connection per client (e.g. because the user wants to download content from Facebook while also downloading a YouTube advert), you need to allocate multiple ports on the NAT device.
I'd imagine that most clients need at least 2^4, and possibly up to 2^8, simultaneous connections to ensure that you don't introduce problems. At that level, you have a limit of 2^8 - 2^12 clients (i.e. 256 - 4096).
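The back-of-envelope CGNAT arithmetic above: roughly 64k usable ports per public IPv4 address, divided by the port budget you reserve per customer.

```python
# Ports per public IPv4 address, skipping the well-known range.
usable_ports = 65536 - 1024  # 64512

# Clients per address for various per-client port budgets.
for per_client in (16, 64, 256):
    print(per_client, usable_ports // per_client)
# 16  -> 4032 clients per address
# 64  -> 1008
# 256 -> 252
```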
For a typical provider, a large proportion of their traffic is going to go to a limited number of properties (e.g. facebook, google, youtube).
Each of those properties is only going to return a limited number of IP addresses, and all of the traffic is going to go to a very small number of ports (i.e. 443/80). I can well believe that clients behind a single ISP connect to a single remote address and port a large number of times simultaneously.
If you think you understand IPv4 well enough, this book will explain the differences and why certain decisions were made when IPv6 was designed: https://sites.google.com/site/yartikhiy/home/ipv6book
Now I am aggressively against IPv4, because over and over I see problems resulting from NAT, or collisions in private address ranges on two LANs, or broken path MTU discovery, etc. The only arguments for IPv4 are economic or political; at this point the technical debate is over and IPv6 is a clear winner (the majority of technical problems in IPv6 deployment are related to compatibility with IPv4).
For example, at my home I get a /48 prefix from my ISP over DHCPv6 (over PPPoE). Then I assign /64 prefixes to my subnets. After that, hosts pick up addresses using SLAAC.
Obviously, VPNs are more complex, but that's not the fault of IPv6.
push "route-ipv6 <subnet-you-want-to-route>" (either your real subnet, or 2000::/3 for all traffic).
You need the same for the IPv4 side of things too.
Clients need no change.
Which highlights the utter failure of v6 - people are willing to pay for access to technology that v6 was supposed to make obsolete over a decade ago.
Given that there is always a cost to switching, people will consider switching when continuing on the old path will become more costly than switching.
> Even if IPv6 was completely perfect...
More specifically: "completely technologically perfect". Which is the point being made in this thread: the technical aspect is of only partial relevance. If IPv6 fails because of political problems, or "contextual ones" (like “we could make it technologically inferior but more readily backwards compatible; it would make it less awesome but easier for people to migrate”), then that is still "failure of IPv6".
If IPv6 had been perfect, it would be fully backwards compatible, and there would be no market for IPv4 right now.
Just because that isn't possible doesn't mean there isn't a middle ground.
Every major OS, network device maker, service and program supports it. I can't imagine it "will fail" in any way where everyone goes "ok, forget IPv6, let's move on to IPv7 it's the new thing" and the world says "phew, at last!".
> According to market researcher Gartner, over 1.5 billion smartphones were sold last year
That's an IPv4 internet's worth of address-needing devices every couple of years, just in smartphones. World IPv6 Day was 8 years ago in 2011, when Vint Cerf said there were no plans for an IPv7, and 8 years later there still isn't an IPv7 coming from the IETF. https://www.networkworld.com/article/2200118/router/cerf--fu...
IPv6 can't "fail" for the same kind of reasons huge financial companies can't fail - there isn't an alternative.
But as far as I know, nobody came up with a credible protocol that is fully backward compatible with IPv4.
So, you can ask the IETF to come up with a magical protocol that has longer addresses and is still backward compatible with IPv4. But they are only human. So that kind of magic is not going to happen.
- you would need to restrict yourself to an extremely tiny fraction of the v6 space that's the same size as the v4 space
- you couldn't use any IPs that correspond to in-use v4 IPs because they would overlap
- you couldn't use any IPs that correspond to unusable v4 IPs, for the same reasons you can't use them in v4
- you'd have to talk a protocol that looks the same as v4 on the wire, because otherwise v4 hosts won't be able to handle it
You know what we call that? We call it IPv4.
We might need some way to force them... Google has done a lot of good with SEO + Chrome by warning users, or giving a ranking boost, if your site has a certain feature.
They should consider prioritizing sites that have both IPv4 and IPv6 and also warn users in the browser if the site does not have IPv6.
It's a legit warning too:
"Warning! This site may not work on some networks!"
No? My recollection is the original primary objective of IP Next was to prevent the net's flat address space from collapsing into NATed fragments. The need was immediate and pressing. We failed. Now we all live in what was feared - a post-collapse wasteland of centralized systems.
- No real competition in ISPs
If you had a national ISP in the US who ran IPv6 with end-to-end connectivity without NATs, a killer app would move to it and then drag all the other ISPs onto IPv6.
That party is not an ISP though. An ISP is interested to provide IPv4 internet no matter whether there is IPv6 or there isn't.
There are absolutely motivations and incentives for the advancement of IPv6, the most obvious of which is the exhaustion of IPv4 address space. Notice the inflection point of IPv6 adoption in 2013 - it occurs in tandem with the real exhaustion of the IPv4 space. As IPv4 address space becomes increasingly expensive the financial incentives will grow stronger - recent block sales have the price approaching $18 PER IP: http://ipv4marketgroup.com/ipv4-pricing/
Cellular providers have particular incentives to adopt IPv6, given that by just about any account there are more mobile devices in use globally than the entire IPv4 address space. Odds are good that if you’re using a major cellular provider, you’re actually accessing the internet over IPv6 directly. T-Mobile is apparently already in the process of decommissioning IPv4 entirely.
Work on IPv6 began nearly in tandem with work on NAT, which lasted as a stopgap longer than most expected.
Recognizing major risks to a global system and developing a complete solution a few years in advance of necessary adoption with the only miss being the estimate of how long the stopgap solution would remain viable doesn’t sound like a miss to me. The IETF engineers deserve a little more credit.
There's every reason to believe reaching a point where IPv6 is widely enough supported to allow you to turn off your IPv4 address, say over 99%, will take the same number of decades as has taken to get to 50%.
You can already see this reverse inflexion in countries that are further along like US and Germany.
Which is exactly my point: a system could have been decided that only focused on increasing the address space (in as backwards-compatible a way as possible), but instead v6 tried to roll "everything and the kitchen sink" into it, while making the address space transition about as difficult as possible.
The fact that IP v4 addresses are so expensive just highlights the failure of v6. If v6 had been successful, v4 addresses should have a value of $0.
In the mid/late 90's, even as a relatively low-level person working in networking at BT, I could see the problems with IPv6
Look at the deployment of TLS as a success story. Everyone kept on supporting both old and new, watched the percentages, and then dropped the old when the new had enough penetration.
The whole thing is also a bit of a shell game. Nobody wants to invest in it until they feel like they're "behind" if they don't. So you have a big player or two in order to make it feel like that's the way the wind is blowing.
Eventually some protocol is the bottom of what a group agrees to speak rather than individuals and that protocol has a completely different set of deployment issues than abstraction layers that can be built between end stations at higher levels.
The biggest problem with IPv6 is that it is a different protocol that requires special support at all levels, from applications down to managed switches.
Has WiFi been upgraded? I believe new devices still have support back to 802.11b.
IPv6 is a great example of how not to.
There are a handful of browsers that pushed the adoption. There's nobody doing the same in the IPv6 space.
There are areas that the browsers purposefully ignored (DNSSEC, DNS-SD) and thus their adoption is low.
And that brings out the problem with IPv6 - there is no sector pushing IPv6, because no sector sees the benefits.
The question is: why does no sector see it? My theory is that it adds a lot of complexity and planning for little gain. So no sector is going to push for it.
For ISPs, it means investments with no benefits. They do it only once their CGNAT is too overloaded or there is some other sign of running out of IPv4. When they do it, they do it in the simplest and cheapest way possible, which informed customers see as worse service, so naturally they want to avoid it.
IPv6 would help network application developers, so they could do more or do what they do now with less, but they have no impact on investments ISPs would have to do.
Then there are parties that have working IPv4 infrastructure, feel no pressure caused by lack of IPv4, and for whom migration would mean just expenses with zero benefits. Exactly like companies that ran login forms over http, up until browsers started to warn users. That was the incentive that caused them to switch to https.
ASCII -> UTF8
VHS -> DVD
not as successful:
DVD -> BluRay : timing was rather close to the rise of streaming services like Netflix, Hulu, etc.
1. DVD video quality is good enough for most.
2. The Combination of DRM, unskippable portions, and similar is enough of a barrier to discourage the upgrade. Personally, I will only watch a movie on disk when I don’t have time to rip it and remove that crap.
I'd be more interested in how the HD DVD as a very visible competitor blocked BluRay adoption because of consumers wanting to wait for a winner in that war.
This assumes markets with little to no competition. But most ISPs are in competitive markets with no trend setting big players and they can't waste money investing into IPv6 or anything else that has no demand if they want to keep the business alive and healthy.
From the client perspective there is the same level of compatibility between IPv4 and IPv6: client can connect to both types of addresses.
Sure is: it weren't broke and they fixed it!
I mean, even today if you take a median python user (a devops person in a big shop, maybe) and ask them to name four major advantages that Python 3 has over Python 2... I doubt they could get past one. It's not that it isn't a better language, but for 90% of its user base who don't do library design it's almost indistinguishable. And it's incompatible!
IPv6 was similar for most of its life. IPv4 wasn't broke.
Now... it's getting toward broke. And in fact lots and lots of client ISPs (mostly mobile ones) are moving rapidly to IPv6, where their systems hit IPv6 backends of all the big content providers.
I honestly don't know that I agree with the thesis of this article. Network operators who want to deploy IPv6 in the modern world certainly can, and are. That's pretty much the definition of a smooth transition, even if it's taking a few decades longer than expected.
For new projects I use ipv6 where easy and python3.6 where possible.
Maybe just a decade is not all that long a period, and the tech community is too impatient due to the historically rapid pace of changes.
If the answers are "No", "No" and "No", I would have to say that indeed, IPv6 really is a failure, after two decades and monstrous investment thrown at it, it has failed to solve any of the practical issues it was supposed to.
It's a gigantic failure, both in a costs vs. benefits approach, but exponentially so when considering the opportunity costs of being still stuck with IPv4.
Infrastructure & Software costs money and time to switch with no real personal benefit and requires a change in knowledge. Both your examples require a hard switch and don't have a nice easy transition plan. Add taking away things that are familiar to the people in the field and you have a bad, long transition.
They're about as compatible as they can possibly be, given the design of v4.
But maybe I am mistaken. Does the IPv6 specification propose a way to map an IPv4 network inside an IPv6 one? Or are all these later hacks?
You can map the entire v4 space into a v6 /96 with NAT64. That works fine, giving the same sort of outbound-only connectivity that NAT gives in v4. Does that do the job?
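For what it's worth, the arithmetic of that mapping is trivial - a sketch in Python using the stdlib `ipaddress` module, assuming the RFC 6052 well-known prefix `64:ff9b::/96`:

```python
import ipaddress

def nat64_map(v4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    """Embed an IPv4 address in a NAT64 /96 prefix (RFC 6052 well-known prefix)."""
    net = ipaddress.ip_network(prefix)
    assert net.prefixlen == 96  # the whole 32-bit v4 space fits in the low bits
    return ipaddress.IPv6Address(int(net.network_address) | int(ipaddress.IPv4Address(v4)))

print(nat64_map("192.0.2.1"))  # → 64:ff9b::c000:201
```

A NAT64 gateway does this embedding on the wire; DNS64 plays the same trick when synthesizing AAAA records for v4-only hosts.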
What do you find as searing pain?
I work on P2P applications, so NAT is the bane of my existence.
At this point I no longer believe you. Having worked in the ISP industry for quite some time, only a very small portion of the more technically advanced users can do this successfully. Most users don't remember their passwords to even get into their device.
Just because Windows firewall security is balls doesn't justify writing a NAT helper or proxy for every protocol there is. You have just accepted the abuse that NAT doles out as the norm.
But do you really want to stake the adoption of IPv6 on convincing the world of that? Or should that be a separate battle?
Peer-to-peer is largely solved with UPnP port forwarding, but most of these apps can also at least be configured to use a specific port range, which the router can then be configured to forward.
Then there are the badly designed protocols. That's perhaps a bit harsh, they have their reasons, but using multiple ports is a bit lazy vs using encapsulated packets (eg where you can send either a control or data packet over a single connection, or even send multiple data streams concurrently, in both directions).
FTP is one example, where a separate port is used for data transfers. This is "simple" from the perspective that the data port just contains the raw file contents, no extra encoding, but very painful from a network point of view. I used to support web hosting customers around 2000, when ftp was commonly used for maintaining site content, and people didn't understand when to use PASV mode, server operators didn't always setup the firewall to allow inbound data ports, and occasionally you'd run into the worst case of both client and server behind NAT.
SIP is another example. It uses multiple streams, but is built assuming everything has a public IP. The message it uses to setup this stream includes the system's IP address, which is the private non-routable IP when behind NAT.
Contrast these to a multiplexing protocol like HTTP/2, where it's basically transparent to network operators (still just a single TCP connection on 80 or 443). There's many other examples of multiplex use in more proprietary systems: online gaming, for example, sends different packet types over a single connection -- it doesn't require a separate "movement" and "shoot" port for each connected player. Most sites/apps using web sockets also work like this, with each piece of data encapsulated inside a message wrapper (usually JSON).
I'm being a bit unfair, because FTP predates NAT, and SIP was being created around the same time -- though I'd argue SIP's assumptions about the network environment (and seeming complete ignorance of NAT) were both poor and unnecessary.
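The encapsulation approach described above is simple enough to sketch. A toy framing layer in Python (the names and the JSON wrapper are illustrative, not any particular protocol):

```python
import json
import struct

def pack_message(msg_type: str, payload: dict) -> bytes:
    """Wrap a typed message in a length-prefixed frame for a single TCP stream."""
    body = json.dumps({"type": msg_type, "data": payload}).encode()
    return struct.pack("!I", len(body)) + body  # 4-byte big-endian length prefix

def unpack_message(frame: bytes) -> tuple[str, dict]:
    """Inverse of pack_message: read the length prefix, then decode the body."""
    (length,) = struct.unpack_from("!I", frame)
    msg = json.loads(frame[4:4 + length])
    return msg["type"], msg["data"]
```

With framing like this, "movement" and "shoot" are just message types on one socket instead of separate ports, so NAT and firewalls only ever see a single connection.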
Until Comcast helpfully replaces the cable modem in a working setup with a router/WiFi/hotspot abomination, so the network ends up double-NATted, and anything that tries to punch through, UPnP- or TURN-style, stops working.
If the server has a firewall that blocks all inbound but 21/tcp, or is behind NAT with just 21/tcp forwarded, no data transfers are possible.
These are totally different setups from a network operations point of view, but look identical from application and user point of view.
I think this is what the parent was getting at: both setups effectively prevent a user from accepting arbitrary inbound connections to their machine.
Ingress blocking being a side effect of NAT is the whole point of equating it to a firewall. It's roughly similar for all intents and purposes except that it also provides socket translation, and socket translation is really useful so every firewall worth mentioning also provides NAT.
That IPv6 initially did not address the use cases for socket translation outside of multiplexing addresses is probably one of the reasons it hasn't gained much ground in the decades it has been around.
Anyway, the point is that most of the headache that NAT causes application developers is also true of firewalls that block inbound connections by default. Removing NAT from the equation doesn't save you.
So if NAT is a problem for your application, so are the stateful firewalls implicitly envisioned by those who advocate for assigning unNATted public addresses to devices on home networks.
For one thing lack of NAT makes it much easier to deal with multi-homed systems. For various reasons, multi-home is much more common with IPv6 than IPv4. Without NAT I can discover thing like what an address's scope is without querying the network. Having to ask a remote server to discover a global address turns what should be an atomic operation into a potentially troublesome state machine.
NAT also create annoying corner cases when there are local peers reachable via an address which also has a NATed global address. You may not be able to tell that the peer's address is not globally reachable, which is a problem if you want to advertise that peer to others.
It’s the opposite imo. Lack of NAT makes it impossible to do policy based routing enforced at a router level, eg route VoIP over ISP 1, and Web over ISP 2. Without NAT, each IPv6 PC is issued one or more IP addresses per WAN, but has no idea when it’s appropriate to use one over the other. (SLAAC router advertisements aren’t sophisticated enough)
Then they wonder why people aren’t adopting it.
(Its the same complaint I have against Let’s Encrypt. They shoved down a policy which is antithetical to helping their mission.)
The latest spec requires encryption. In a cable. Which might be ok for some applications, but certainly not necessary for all items. Now you have so many versions, which may or may not implement a laundry list of features. And people just want simple, no fuss cables.
Actually, it's still simple. For devices and hosts using USB 2.0, the only real change with the USB-C connector is a single extra resistor. For devices and hosts using USB 3.x, they only have to detect which of two pins has the resistor on the other side to select which of the two high-bandwidth channels should be used. And there are two main types of cable: USB 2.0 cables and full-featured cables, like the old non-USB-C USB 2.0 and USB 3.x cables.
The extra complexity only appears when you want the extra features which are new to USB-C: higher voltage and/or current, alternate modes, using both high-bandwidth channels at the same time (USB 3.2), and so on.
I think this ship has sailed. If the alt modes fail, that's fine, but the core spec is fine.
Can you expand what you mean by this? I'm drawing a blank.
Still, LE seems to be wildly more successful than IPv6. I suspect in part that's because they were more technically right - or at least it's more feasible to add the automation than to rearchitect an IPv4 network - and in part because you can do manual certificate updates. (For complicated and entirely uninteresting reasons, I do manual certificate updates on my personal website every three months using certbot certonly --manual and scp, and it works.)
I'm also way more personally sympathetic to LE because they're pushing the change for security reasons (revocation doesn't work, so we need to move to very short-lived certs) and not mere elegance ones. Rolling out IPv6 as designed brings no security benefits either to the user or to the ecosystem, and carries quite a few potential security risks.
Here is a chart showing the trend for LE marketshare (under the IdenTrust root):
I really support LE's mission, and celebrate their success, but would feel more comfortable if a separate organisation tried replicating what they had done, running the same service but with distinct personnel and assets.
For reference, here is another chart showing the run down of the remaining IPv4 supply:
You can implement IPv6 by ditching all your network hardware, installing commodity Linux boxes, writing some patches to iptables, and insisting that nobody bring Android devices onto your network unless they run your in-house fork of AOSP. I just think most people would not consider that an option in scope.
Sometimes you have to say "yes" to one powerful faction, regardless of whether their request is technically a great idea, in order to get enough political capital to fend off everyone else.
Pay attention developers. This attitude is all too common in our industry.
The other thing that V6 does that it should never have done is the extension header nonsense. That makes it possible to layer protocol inside protocol essentially forever. Hardware designers just love this feature. In practice, a lot of hardware vendors do not support it and just punt to exception cases when they hit an extension header. I'm a little surprised that that there isn't some widespread DOS that involves extension header handling botches.
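For illustration, here's roughly what a parser has to do to find the transport protocol behind a chain of extension headers - a hedged Python sketch covering only the header types that share the RFC 8200 `8 * (length + 1)` size rule (AH, which counts length in 4-octet units, is deliberately omitted):

```python
import struct

# Extension header types with the common layout: first byte = Next Header,
# second byte = header length in 8-octet units, minus one.
EXT_HEADERS = {0, 43, 44, 60}  # Hop-by-Hop, Routing, Fragment, Destination Options

def header_chain(first_next_header: int, payload: bytes) -> list[int]:
    """Walk the Next Header chain and return every protocol number seen."""
    chain, nh, off = [first_next_header], first_next_header, 0
    while nh in EXT_HEADERS:
        nh, ext_len = struct.unpack_from("!BB", payload, off)
        off += 8 * (ext_len + 1)  # skip over this extension header
        chain.append(nh)
    return chain
```

The unbounded loop is exactly what a fixed-pipeline forwarding ASIC can't do at line rate, which is why so much hardware punts packets with extension headers to the slow path or drops them outright.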
When I connect to machines on my home network in any way involving avahi/zeroconf, the machines talk to each other via IPv6 by default.
At work, it's a different story. I have drifted from a sysadmin/helpdesk role into a programmer position, so that is no longer my concern. When it was, however, there was little incentive to use IPv6 - everything worked and continues to work just fine with IPv4, and sometimes there were even some rather esoteric problems with Windows' "Network Location Awareness" when IPv6 was enabled.
Meanwhile Danish ISPs refuse to implement IPv6 because: There's no demand.
That's completely missing the point, and their responsibility, in my opinion. There's never going to be any significant IPv6 demand from private users. At work, however, we have customers that have started to request IPv6-only devices and networks, because there's no need for IPv4 specifically, and in some ways IPv6 is just easier (for example, there's no need to do NAT).
For IPv6 to be successful the ISPs need to roll it out, regardless of demand. The issue isn't necessarily at the consumer end, but the ISPs are part of the Internet and they need to help develop it, regardless of profitability in the next fiscal year.
For just consuming the web, it is fine. For switching from public IPv4, it is insufficient.
TBH, I've never tested whether it works with CGNAT or not. It would be an additional hoop to jump through if I gave up my IPv4 address, which obviously I'm not going to.
Of course it's a lot messier than getting a static private address allocated.
The only good thing about UPC/Liberty global's implementation of DS lite (now "Ziggo" where I live) is that they will switch it back to IPv4 with a single phone call to the help desk. I can live without IPv6, I cannot live without being able to reach my home server and IoT things.
Though of course it'd be nice to get a private static address.
I guess the ISP space isn't competitive enough that they will ever go "looks like our ipv4 users get shitty latency and more congestion on facebook than the competitor's ipv6 users, so we're gonna upgrade next year!", but their IPv4 setups will probably eventually succumb to attrition too and be replaced by IPv6 gear.
Responsibility to whom?
If there's no significant demand from end-users for something, then we're relying on there being a benefit for access providers.
>> For IPv6 to be successful the ISPs need to roll it out, regardless of demand.
>> regardless of profitability
Can anyone give examples of a successful technology roll-out where there was no demand, and no profit to be made?
Seat belts, airbags and other safety systems in cars (like ABS, VSC, etc).
Emission control systems in cars.
Some (most?) operating systems rotate the v6 privacy address daily or more often. The benefit of this is a) address not transparently based on permanent ethernet hardware MAC address and b) changes over time. Both are meant to hamper tracking.
If you're on a Mac and using IPv6, you can see these temporary privacy addresses stacking up over time if you type `ifconfig en0`. Old ones don't disappear immediately when rotated out since you need to be able to receive packets for a while. They are marked "deprecated" for some time before they disappear.
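The generation mechanism is simpler than it sounds: keep the advertised /64 prefix, randomize the 64-bit interface identifier. A toy sketch (not the actual RFC 4941 algorithm, which also persists state and avoids duplicate addresses):

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with a random 64-bit interface identifier."""
    net = ipaddress.ip_network(prefix)
    assert net.prefixlen == 64  # SLAAC-style networks use /64s
    return ipaddress.IPv6Address(int(net.network_address) | secrets.randbits(64))

print(temporary_address("2001:db8:1:2::/64"))  # random host part on every call
```

The network part still identifies your household, but the host part no longer encodes your hardware MAC or stays stable over time.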
Which kinda defeats the purpose of having a globally reachable unique address in a lot of respects. How am I supposed to allow connections to this device in my firewall if the address is always changing?
Personally I'd prefer ISPs to take an approach like this by default, but allow the option for the consumer to have a statically assigned IPv6 prefix for free if they want it, who understand its implications.
This also prevents creating products that need a public address. I think it is a real brake on innovation; who knows what could be invented if everyone had a public IP address?
Comcast is guilty of this. Verizon doesn’t even offer IPv6 on FIOS.
For both, I just set up an IPv6 tunnel to Hurricane Electric using pfSense.
Why would ISPs not deliberately(!) change address(es) for cheap/residential plans, to provide a reason for those customers to care about this to pay more for a pro/business plan with a static IP allocation?
Looks like the wet dream of adtech, to have everybody use a static IP address. No thanks.
You don't. That's what ESP/AH with PKI is for.
It is also interesting to consider the incentives (or lack thereof) for IPv6 peering. The fact that HE and Cogent haven't resolved their peering dispute from 2009 suggests to me that there is insufficient incentive (particularly from their customers) to do so, even when there are obvious practical effects. (I can't reach openstreetmap.org via an HE IPv6 tunnel even now.)
Perhaps the technical effectiveness of Happy Eyeballs and other backwards-compatibility mechanisms necessarily reduces incentives for improving IPv6?
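For reference, the fallback idea behind Happy Eyeballs is roughly this - a crude sequential sketch (the real algorithm in RFC 8305 races v6 and v4 connection attempts concurrently, with a short head start for v6):

```python
import socket

def connect_prefer_v6(host: str, port: int, timeout: float = 2.0) -> socket.socket:
    """Try resolved addresses with IPv6 first; fall back to IPv4 on failure."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    infos.sort(key=lambda info: info[0] != socket.AF_INET6)  # v6 entries first
    last_err = None
    for family, _, _, _, sockaddr in infos:
        try:
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as err:
            last_err = err  # remember the failure, try the next address
    raise last_err
```

Because broken v6 paths fail over to v4 silently, users never notice when IPv6 connectivity is poor, which arguably removes the pressure to fix it.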
... reports a steadily increasing adoption rate for IPv6. Is that rate somehow too slow? It currently stands at 25% of Google users.
My cable provider uses IPv6.
But yes. When I did an internship in 2001 my co-workers told me that I had to learn IPv6 because it will replace v4 in the next years, hehe.
Right now the typically default behavior for switches/routers that encounter the exhaustion is to summarize prefixes with a shortened prefix and (possibly) punt the evaluation to the general purpose CPU (example here) - which suffice it to say, introduces a host of security concerns. This means, as a security engineer, in situations where complex/large ACLs exist, I need to be aware of and control how IPv6 TCAM exhaustion failure modes work and plan that eventually my hardware TCAM may be exhausted and fail in a spectacularly bad way.
Or, I just ignore IPv6 almost entirely and just don't have the problem (cleverheadtap.jpg)
My biggest fear is such an application not emerging quickly enough. Without an imperative from users for end-to-end connectivity there's a risk that IPv6 networks which somehow break it become entrenched. If that happens we are back to the old chicken-and-egg situation: Users don't care because there's no app and there's no app because the network is broken and operators don't care.
Any potential killer app is going to have to decide whether they want to lose such users, because I cannot imagine an app that is so compelling that I'd switch ISPs for it. And I'm a person who actually wants IPv6 for its own sake. I certainly cannot imagine, say, my parents switching ISPs over an app (and I'm not sure how much choice they have either).
And if such an app arises, it's going to be easy enough for users to use VPNs - it's already common for people to use VPNs to get to region-locked content or (at least as of a few years ago) play LAN videogames over the internet.
And my estimate is it probably takes years to roll out IPv6 on a network that isn't ready for it - how would the app remain a killer app until then and not be disrupted by someone willing to run a proxy server?
And that works. Does it take blood and tears to make it work? Certainly. But not so many to make it infeasible.
If it does connect to the internet, I would rather have the device phone to one server only, and that server handling public access. It's much easier to keep one server secure than millions of devices that are in the hands of customers.
Of course my preferred option is IoT devices talking to one hub I control in my network, and me deciding how I want to expose that hub. But in that case, peer-to-peer communication is a non-issue, just like in the first case.
Uhm, 'users' should not ever have to know what 'IP' even is!
If they do, we have failed.
That seems unlikely to emerge. Anything you can do with end-to-end connectivity you can do with a server in the middle forwarding packets. Servers are cheap and reliable, so there's very little incentive to get rid of them.
The problem is that if you want to host something, usually you want it to be up and reachable around the clock. And that has costs, and P2P can help with that, but the freeloader problem is not trivial. (You need reputation or some other kind of accounting, that requires solving sybil attacks, and possibly global sync/enumeration; both are hard, etc.)
So this means two things. The first is that the desire to move to IPv6 on the part of the article's authors has nothing to do with public IPv4 address space exhaustion, it's based on other alleged inherent benefits of IPv6. The second is that IPv4 NAT already solves the exhaustion problem - you're either doing NAT to IPv4 or NAT to IPv6 (using your favorite 6preposition4 encoding/tunneling scheme), but as far as the public internet is concerned, it looks like you're doing plain old IPv4 NAT.
I don't know who the ultimate beneficiary would be - would it be ICANN?
- IPv6-to-IPv6 NAT has only been accepted very recently and very begrudgingly. Whatever your views are on NAT, the fact is that lots of people have network designs that rely on it, and if you want them to stop, you're now asking them to couple two major transitions, which is a significant economic cost. (Option 3 in this article is IPv6-to-IPv4 NAT, assuming that the public internet will indefinitely be IPv4; it's noteworthy that none of their options ever envision the public internet becoming IPv6.)
- IPv6 recommends the use of its own scheme, SLAAC, for address assignment, with DHCPv6 being also very recent and poorly implemented - for instance, Android has no DHCPv6 support and plans to never implement it https://code.google.com/p/android/issues/detail?id=32621 . There's also a "stateless DHCPv6" for communicating DNS servers but using SLAAC for addressing; without it, SLAAC expects you to use a scheme called RDNSS to communicate your DNS servers, which is also not 100% supported. So you now need to spend engineering time supporting all of these options because some devices only support one and some only support the other, and you need to come up with network designs that work with both SLAAC (which has strong opinions on how you use /64s) and DHCPv6 (which doesn't).
- IPv6 doesn't use ARP, on the grounds that it's a layering violation, a separate layer-3 protocol that runs directly on top of Ethernet but talks about IP addresses. Instead, IPv6 has a clever scheme for using multicast to transfer the information that ARP would convey, by having machines join multicast groups derived from the low bits of their own addresses. This works very, very poorly with networks that aren't designed to support significant multicast load - for instance an attempted deployment of IPv6 caused packet storms in the MIT Computer Science and AI Lab's network for about a week because their switches were falling back from multicast to broadcast: https://blog.bimajority.org/2014/09/05/the-network-nightmare... So a working IPv6 deployment involves upgrading all of your hardware to hardware that has good support for multicast, which is also a significant economic cost.
- Various protocols like Teredo and ISATAP attempt to set up tunneled IPv6 routing in preference to IPv4 routing, making it hard to do a staged deployment, especially if you have BYOD on your network. For bonus points, because they're tunneled, you get different and possibly worse routes over IPv6, making debugging harder. So that's a cost in additional L1 and L2 support.
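Concretely, the multicast scheme from the neighbor-discovery point above replaces ARP broadcasts with "solicited-node" multicast groups: each host joins a group derived from the low 24 bits of its own address (RFC 4291), so an address lookup only wakes the hosts that collide in those bits. A small sketch:

```python
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Solicited-node multicast group for a unicast address:
    ff02::1:ff00:0/104 plus the address's low 24 bits (RFC 4291)."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

print(solicited_node("2001:db8::2aa:ff:fe28:9c5a"))  # → ff02::1:ff28:9c5a
```

A Neighbor Solicitation (the ARP-request equivalent) is sent to this group, which is where the trouble starts on switches that can't handle multicast efficiently and fall back to flooding.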
If someone had come up with an IPv7 that's just "We extended IPv4 to 128-bit addresses and we left ARP and DHCP and NAT and everything alone," people would have switched to it already. But the powers that be are drowning in the second-system effect and nobody wants all the features they added.
I was (as a young engineer) monitoring the mailing lists during the IPV6 proposal discussions and the hubris was palpable.
It was very much a case of "they will have to implement this so we will get to force all of these other improvements on them too "
They were relying on Spanning Tree in a who-knows-how-big broadcast domain. Firstly, that's just begging for things to hit the fan. A single device having a meltdown will cause exactly this: a broadcast storm that is able to take down the entire campus, because it was a single broadcast domain.
Secondly, it is a security nightmare. No amount of links or switch capacity will suffice in a single broadcast domain campus, relying on STP, if port isolation and proxy-arp is enabled, along with DHCP snooping, arp inspection etc. So, port isolation is not turned on. Isolation also creates a requirement for a pyramid-shaped network so nobody wants to do that anyway. But back on point, MITM-heaven, anyone can do what ever they want because the L2-infrastructure is not limiting anything. Ethernet does not care about security and Internet Protocol only implements or allows to implement security in gateways that interconnect subnets that reside on separate broadcast domains.
Routing is the answer, and this is why I route at the access layer, as well as at the aggregation and core layers. Routing loops are very rare with OSPF, and broadcast storms are limited to single switches if you route at the access layer. Also, the possible issues with untested code paths are minimized this way, since no switch sees more hosts than it has ports.
I do not agree with SLAAC because I do not believe in broadcast domains the size of a /64, so I'd deploy DHCPv6 in every possible broadcast domain that does not have Android devices in it. Luckily there is no place for Android in wired networks and especially datacenters, so I can happily deploy DHCPv6 in those. And if I ever need to service Android devices, I can dual-stack and let the devices know of the DNS service with DHCPv4! Take that, Lorenzo! Hah! Outsmarted you there!
And it sounds like turning the intelligent feature off and treating the packets as pure broadcast, just like ARP packets, would have fixed the problem. If the switch can't do that in the right way, it's not the protocol's fault.
I've seen this happen a few times in my life in production systems, designed by someone else. Overload a switch somehow and it goes straight into ludicrous mode because of the topology. Properly configured networks suffer minor outages only in case of single device meltdowns.
And no matter how smart the people on campus are, there will always be someone who cannot accept that someone else is more right. I have first-hand experience of this, since I have tunneled IPv6 at home and know pretty quickly if someone does not listen to ICMPv6 Packet Too Big messages. It's not the first time I have contacted people about it, but so far the only one I have not been able to convince is exactly someone really smart on some campus somewhere. I wrote many emails and tried to explain that IPv6 allows MTUs as low as 1280 bytes and that ICMPv6 is a must-allow protocol, but nope.
IPv5 could’ve been the extending IPv4 address space, and IPv6 could have all the other changes.
But they didn’t want that, because reasons.
Here is a copy: