"Please just try to fit more than 4 billion numbers into 4 bytes" -- this is mathematically impossible.
"Just extend the address size" -- this is an entirely new protocol by the definition of IPv4, which uses fixed-size addresses.
The reason for the slow IPv6 adoption is that there was no financial or business pressure. While IPv4 is ubiquitous, nobody individually feels a need to migrate to IPv6.
E.g.: How many customers would you gain by supporting IPv6? Generally zero. That doesn't sell well internally when the network team is asking for a budget.
The IPv6 transition will be like a bankruptcy: slowly at first, then all of a sudden.
The sudden bit will happen when an IPv4 address costs $1K to $10K annually. At that point, customers will be reaching those IPv4 endpoints via three layers of proxies or NAT gateways, and IPv6 will be noticeably faster, more reliable, and free.
There was a third option: make the existing IPv4 space a hierarchically routed island of the new IPv4.1 space, with backwards compatible packet format, then upgrade just the endpoints in the first phase.
So every owner of an IPv4 address would get, say, an entire 32-bit space that routes over existing IPv4 infrastructure. So, if the endpoints are upgraded, you have guaranteed end-to-end deliverability without silly hacks such as NAT or STUN.
This doesn't solve the "backwards compatibility" problem itself, because you still have two logically different IP networks running on top of each other, requiring separate name resolution, etc. But what it does solve is the "incentive problem": endpoints are incentivized to upgrade because it gives them an immediate benefit, end-to-end connectivity to other upgraded users with non-routable addresses sitting behind dumb, non-upgraded IPv4 routers.
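A minimal sketch of how such a hypothetical extended address could be laid out (the 64-bit width, the function names, and the layout are invented here purely for illustration; no such "IPv4.1" was ever specified):

```python
import ipaddress

# Hypothetical "IPv4.1": the legacy 32-bit IPv4 address acts as a routing
# prefix, and an extra 32 bits (invisible to legacy routers, which would
# treat them as payload/encapsulation) select the endpoint behind it.
def make_v41(legacy_v4: str, inner: int) -> int:
    """Pack a 64-bit extended address; the high 32 bits route over IPv4."""
    return (int(ipaddress.IPv4Address(legacy_v4)) << 32) | (inner & 0xFFFFFFFF)

def legacy_part(addr41: int) -> str:
    """What a non-upgraded router looks at: just the top 32 bits."""
    return str(ipaddress.IPv4Address(addr41 >> 32))

addr = make_v41("198.51.100.7", 42)
assert legacy_part(addr) == "198.51.100.7"  # routable on today's internet
assert addr & 0xFFFFFFFF == 42              # meaningful only to upgraded endpoints
```

The point the sketch makes: routing decisions for the legacy internet depend only on the top 32 bits, so no middle hop has to change.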
For example, VoIP or P2P software would immediately benefit and it would drive adoption for an immediate use-case. In the later stage, when the entire infrastructure can understand the extended packet format, you would start to publish extended routes that don't fall into the hierarchical range, similar to IPv6 today.
IPv6 lacks any such incentive, because my upgrading and enabling it has zero benefit until all hops separating me from the internet also enable and correctly configure it. On the contrary, by requiring a completely new, complex configuration with no "default, just works" mode, IPv6 introduces a disincentive: by enabling it, not only do I gain nothing, but I risk breaking my internet due to a misconfigured upstream. So the conservative setting for IPv6 has, for the last three decades, remained "off". This only recently began to change.
No. The NAT64 hack involves intercepting DNS requests and rewriting the IPv6 packets in flight so that IPv6-only clients see outside IPv4 hosts as IPv6. Among many issues, it breaks protocols that embed IP literals in the payload, such as FTP and SIP, and with end-to-end encryption those literals can't even be rewritten in flight.
Also, NAT64 presents no immediate benefit to a client upgrading in an IPv4-only environment, since it still doesn't allow two clients behind NAT64 gateways to connect to each other if there isn't an IPv6 connection between them. So it's the same old IPv6 self-fulfilling tragedy: everybody must upgrade before anybody can see any benefit, therefore nobody upgrades (or uses NAT64).
The main benefit of a backwards compatible packet format is that IPv4.1 islands see each other from day one in the legacy IPv4 internet and get the full benefits of the new protocol, without any configuration or tunnels. The "encapsulation" seen by legacy hops is in fact the canonical, definitive packet structure, there is no temporary transition technology that can break or needs to be configured.
> the problem of IPv4 clients trying to connect to servers with only IPv6 addresses
This is not a problem that can or should be solved, and it's not the problem significantly preventing IPv6 adoption. A non-upgraded client will just see an IPv4 internet, just like a USB 1.0 device won't be able to use USB 2.0 speeds.
The difference is that, while a software and hardware upgrade to IPv6 won't bring you any new connectivity without extra configuration from your upstream provider, an IPv4.1 upgrade would instantly let you see (and connect end-to-end to) all existing IPv4.1 islands and hosts, using only your legacy IPv4 connection. The hierarchical extended address space (IPv4 subdivisions) is immediately available, incentivizing adoption without risking connection issues, while the upward extended space becomes available once you have a native IPv4.1 connection, just like with IPv6.
1. If you want the solution of IPv4 packets being a subset of the address space of IPv6, your options are rather limited with regard to tampering with the original packet. IPv4 hosts can't deal with the larger addresses of any IPv6 hosts they're talking to.
2. Of course there's no immediate benefit for IPv4 clients - there isn't in your proposal either! It's a compatibility measure, allowing IPv6-only clients to talk to IPv4 servers. (I make that distinction because under no scheme can an IPv4 host initiate a connection to a host with no IPv4 address.)
3. I have no idea what you're talking about with the idea that NAT64 requires IPv4 clients to talk to each other over an IPv6 subsegment. I suspect you're thinking of a different transition technology solving a different problem. NAT64 provides a virtual IPv6 subnet containing the entire IPv4 address space. IPv6 clients can send packets to this subnet, at which point they are routed to the nearest NATting gateway and passed into the v4 internet after packet rewriting.
4. FTP and SIP containing IP literals is an irreducible incompatibility, which cannot accommodate an address size change without intrusive packet rewriting anyway.
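For concreteness, the NAT64 mapping described in point 3 is a fixed arithmetic embedding. A sketch using the well-known prefix 64:ff9b::/96 from RFC 6052:

```python
import ipaddress

# NAT64/DNS64 map the entire IPv4 internet into one IPv6 /96: the IPv4
# address is embedded in the low 32 bits of the well-known prefix, and
# packets sent there are routed to the nearest NAT64 gateway.
NAT64 = ipaddress.IPv6Network("64:ff9b::/96")

def embed(v4: str) -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address(
        int(NAT64.network_address) | int(ipaddress.IPv4Address(v4)))

mapped = embed("192.0.2.1")
assert mapped == ipaddress.IPv6Address("64:ff9b::c000:201")
assert mapped in NAT64  # an IPv6-only client routes this like any v6 destination
```

DNS64 is the other half: it synthesizes AAAA records pointing into this /96 for names that only have A records.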
What hardware, especially ASICs, does not support wire-speed IPv6, and has not for a decade or two?
T-Mobile gave a presentation on going IPv6-only in 2017:
> For the past 10 years T-Mobile has worked towards creating an IPv6 environment and we are now getting very close to our goal. Stephan presents learnings on how to successfully enable IPv6-only using DNS64 with or without 464XLAT. He will do a live demo of the different IP interfaces on an Android handset. Finally, he will discuss and give some best practices on how to handle DNS, applications, and websites that are having issues with DNS64.
Any hardware router introduced in the last two decades has IPv6 support; in my somewhat outdated experience the barrier to implementation is implementing dual-stack logic in software built for single stack.
ISPs in Belgium implemented IPv6 a decade ago (some even two decades). ISPs elsewhere could have done it too by now (nobody is still using 10+yo hardware, I hope...).
> There was a third option: make the existing IPv4 space a hierarchically routed island of the new IPv4.1 space, with backwards compatible packet format, then upgrade just the endpoints in the first phase.
That is called DS-Lite, and we have it.
Still doesn't solve the problem of old clients not being able to access new servers.
No. DS-Lite is a technology that allows ISPs to upgrade to IPv6 while their clients do not. This is not the problem preventing IPv6 adoption; quite the contrary, clients have supported IPv6 since the Windows 2000 and Linux 2.1 era. What held back adoption is precisely that most ISPs don't bother with IPv6, which is an extra headache in configuration and man-hours as long as it provides no benefit to the end user.
The beauty of a backwards compatible packet format is that it takes the last-mile ISP completely out of the equation: clients upgrade when they see benefits, and instantly get end-to-end connectivity. This is important for VoIP, push notifications to mobiles, etc.
> Still doesn't solve the problem of old clients not being able to access new servers
See sibling comment on why this is not the right problem to solve either.
No. There would be no NAT box holding IP-port mappings in its internal memory, with the related timeouts, flakiness, port clobbering, etc., and no packet rewriting. All routing decisions would be static, based on information in the IP header: the legacy outside routers would just examine the legacy part of the IP address and packet, while the internal IPv4.1 routers would use the extended bits. So just like any packet routing, and without translation.
Critically, this solves the cold-start and connectability problem of NAT: if you get a packet addressed to your outside IP, on a port that has no memorized mapping, to what internal IP do you send it? Lacking a static or UPnP port assignment, it can only be dropped. The extended packet format would provide this information in every packet: the upgraded outside host would tell you what upgraded internal host it wants to talk to.
This is solved by statefulness: the router/firewall can be told to drop by default any unsolicited connections.
It's how things work with IPv6, which doesn't have NAT (by default): just because a host has a globally routable address does not mean it is reachable by default.
You won't have the "NAT as a firewall" dilemma because there would be no NAT - this whole thought experiment would take place in the 1996 era, before the explosion of NATs. Expecting your /32 gateway to do any firewalling wouldn't be too different from expecting your ISP to do the same for the entire city at the /18 level.
You can tunnel IPv6 over IPv4 (which is how the very first deployments worked).
And I think 6to4/6rd worked pretty close to what you suggest: each IPv4 address gets assigned a block of IPv6 space which gets tunneled over IPv4.
> I think 6to4/6rd worked pretty close to what you suggest: each IPv4 address gets assigned a block of IPv6 space which gets tunneled over IPv4
No. Tunnels encapsulate IPv6 traffic, but they need to be set up: I need to know a gateway that is willing to take my traffic, decapsulate it, and place it on the IPv6 internet. There are many reasons this is a bad idea; it won't ever scale, it's fragile, etc. 6rd doesn't improve things significantly.
A backwards compatible packet format will allow you to connect directly to IPv4.1 islands, without any configuration or choke point: the packets you place on the wire have the final extended structure, even if they appear as encapsulation to the legacy IPv4 hosts in the path.
It's as if any IPv6 host on the internet is guaranteed to have its own 6to4 gateway, and the encapsulation just happens to be the exact IPv6 packet / IPv4 compatible structure that you use to connect to any other non-gatewayed host.
Windows actually used to have 6to4 set up by default in the past.
There is no need to manually configure anything, since 6to4 relays have a globally unique anycast address and the IPv6 space is just derived from your own IPv4 address.
Of course NAT breaks that if used on some box behind the router (it would also break your proposal, I think?), and there were a few issues with 6to4 (random people running gateways? who deals with abuse?) that led to the development of 6rd (which was used by a few fairly large ISPs).
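The 6to4 derivation really is that mechanical; per RFC 3056, IPv4 address A.B.C.D implicitly owns the prefix 2002:AABB:CCDD::/48, with relays reachable at the anycast address 192.88.99.1. A sketch of the computation:

```python
import ipaddress

# 6to4: your public IPv4 address implicitly owns an IPv6 /48 under 2002::/16,
# so no registration or manual tunnel configuration is needed.
def sixtofour_prefix(v4: str) -> ipaddress.IPv6Network:
    v4int = int(ipaddress.IPv4Address(v4))
    # 0x2002 fills the top 16 bits; the IPv4 address fills bits 16..47.
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4int << 80), 48))

assert str(sixtofour_prefix("192.0.2.1")) == "2002:c000:201::/48"
```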
> There was a third option: make the existing IPv4 space a hierarchically routed island of the new IPv4.1 space, with backwards compatible packet format, then upgrade just the endpoints in the first phase.
Do you mean applying some kind of network address translation?
> So every owner of an IPv4 address would get, say, an entire 32-bit space that routes over existing IPv4 infrastructure. So, if the endpoints are upgraded, you have guaranteed end-to-end deliverability without silly hacks such as NAT or STUN.
The problem isn't ipv6 (even with the tons of extra features that ipv6 forces upon you).
One major problem is dual stack. It doubles the workload for very limited benefit. You have all the downsides of making IPv4 work in the first place, plus all sorts of messes like NAT66 (IPv6 was supposed to get rid of NAT), a lack of clarity on which approach to use (NPTv6 and NAT66 are two different options for the same problem, a problem which was built into IPv6 in the first place), and messy hacks like DNS64.
Instead, had the approach been IPv6-only from the start, with no dual stack, having the OS transparently deal with sockets to IPv4 devices by converting to the IPv6-mapped address (::ffff:x.x.x.x), and thus eliminating the need for dual stack from the start, things would have moved far, far faster. You'd be able to communicate with IPv4 by using stateful NAT at the edge of your IPv6 network (as you do now at the edge of your RFC 1918 network), and you could expose services on your IPv6-only devices with NATting (as you do now).
You'd still have A and AAAA records, and your client, having an IPv6 stack, could prefer AAAA over A; but if it needed to use an A record (or someone just tried to connect to 12.34.56.78) the stack would have gone "ok, I'm IPv6-only, I'll connect to ::ffff:12.34.56.78" and relied on the network to make it happen.
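The mapped-address fallback described here is pure arithmetic, e.g.:

```python
import ipaddress

# An IPv4 address dropped into the low 32 bits of ::ffff:0:0/96 becomes an
# "IPv4-mapped IPv6 address", addressable through a v6-only socket API.
a = ipaddress.IPv6Address("::ffff:12.34.56.78")
assert a.ipv4_mapped == ipaddress.IPv4Address("12.34.56.78")
assert a in ipaddress.IPv6Network("::ffff:0:0/96")
```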
Throw in things like NPTv6 and 464XLAT from the start (rather than 16+ years in) -- the add-ons which were created to address the fundamental architectural flaws in IPv6 -- and you'd have had a far smoother transition.
Any solution to the 4->6 transition that assumes that all devices of some class (be it clients, servers, or middleboxes) moved to IPv6 at once is deluded and would not work.
There was no way to make the transition to IPv6 without dual stack. The problem was much more that the precise dual-stack approach was not well thought out, when it should have been a fundamental part of the IPv6 RFC itself.
Any ISP who wishes to move to IPv6 still to this day has to consider how it will handle clients that don't speak IPv6, servers that don't speak IPv6, routers they own that don't speak IPv6, and peers who don't speak IPv6. There is no way to make all of this work without having devices that translate between the two (losing most of the benefits of IPv6 when going through this translation, of course).
When you've spent 10 million dollars or more on a router that doesn't speak IPv6, you don't change it one year later just because a new protocol has come up. That thing is there to stay for 5-10 years, and you just work around it as best you can.
> Any solution to the 4->6 transition that assumes that all devices of some class (be it clients, servers, or middleboxes) moved to IPv6 at once is deluded and would not work.
464XLAT allows communication from IPv6-only clients to legacy IPv4 ones without the need for a separate stack on your end device.
NAT46 allows communication from a legacy v4 device to a modern v6 device without the need for a separate stack on your end device.
Had the IPv6 transition been thought through better back in the 90s, you could have deployed your new subnets as IPv6-only back in 2005 and still communicated with all your older kit.
> That thing is there to stay for 5-10 years, and you just work around it as best you can.
IPv6 is 30 years old. I sweat assets like there's no tomorrow, but the oldest kit I've got active today is less than half that. Even for IPv4-only devices, a single legacy subnet would be reachable from my v6-only management devices over my IPv6 backbone via 464XLAT.
> 464XLAT allows communication from IPv6-only clients to legacy IPv4 ones without the need for a separate stack on your end device
> NAT46 allows communication from a legacy v4 device to a modern v6 device without the need for a separate stack on your end device
How does that work if your ISP doesn't support IPv6? Can an OS developer deliver IPv6-only OSs to any end user? How about v4-only VPNs? Ultimately the answer is that devices must support both IPv4 and v6 until the day only one remains. Keeping both active at the same time may be more optional, but there is plenty of software which assumes IPv4 at other layers than simple connectivity. So running IPv6-only is generally a bad idea even today.
This whole discussion was about what should have been done differently at the start of the IPv6 rollout to help it complete in less than a lifetime, not about the situation some decades in.
But that's what I mean: lots of ISPs have supported IPv6 for 1-2 decades. Most hardware & software already supported it a decade ago, and nobody should be using anything that old without updates. The only reason for an ISP today to not provide it is incompetence.
Two decades ago I was a member of an ISP consumer group, and we discussed it with a couple of ISPs back then. They were all working on plans for it (one smaller ISP had even already implemented it back then!). Apparently in other countries ISPs were allowed to behave irresponsibly.
Really, the only way to force such incompetent ISPs out is if governments get involved, or if all/most backbone providers and IX operators set a date where IPv4 will become very expensive, and then one where it will be switched off...
> Instead, had the approach been IPv6-only from the start, with no dual stack, having the OS transparently deal with sockets to IPv4 devices by converting to the IPv6-mapped address (::ffff:x.x.x.x), and thus eliminating the need for dual stack from the start, things would have moved far, far faster.
This is, indeed, how dual-stack works; you open a PF_INET6 socket and use sockaddr_in6 addresses for everything, including IPv4 (which gets mapped to ::ffff:0:0/96 addresses). Been like that essentially forever. The "dual" in dual stack refers to the OS's stacks, not userspace.
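A minimal sketch of that behavior (assumes a platform where IPV6_V6ONLY can be cleared, e.g. Linux, and working v4/v6 loopback):

```python
import ipaddress
import socket
import threading

# One AF_INET6 listening socket, with IPV6_V6ONLY off, accepts IPv4 clients
# too; their addresses appear in the ::ffff:a.b.c.d mapped form.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))                 # all interfaces, ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(
    target=lambda: socket.create_connection(("127.0.0.1", port)).close())
t.start()
conn, peer = srv.accept()           # a plain IPv4 client connected
t.join(); conn.close(); srv.close()

mapped = ipaddress.IPv6Address(peer[0])
assert mapped.ipv4_mapped == ipaddress.IPv4Address("127.0.0.1")
```

Userspace sees one address family; the kernel does the v4/v6 split underneath.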
Don't forget you may need to opt in to get mapping to work. It's not available by default on all platforms.
If you're unlucky you'll also have to sacrifice a goat to appease the JVM gods. JVM behaviours vary hugely across implementation, version and underlying platform. Not to mention the short sighted decision made by many sysadmins to disable IPv6 completely...
> The problem isn't ipv6 (even with the tons of extra features that ipv6 forces upon you)
The year is 2023. The Chromium engine is a full-blown operating system: it has notifications, background task management, and GPU acceleration for general compute; it's larger than Windows XP, and can in fact run Windows XP in the browser. Teams consumes 500 MB of RAM to do the same job ICQ did in 2002 with 5 MB. Cars have 4G, lightbulbs need updates and security patches.
But IPv6 features take a few extra bytes and are a problem.
> But IPv6 features take a few extra bytes and are a problem.
Some people pay for each extra byte they have to send through a network, and design whole systems around the goal of minimizing the amount of data they ship around.
No one pays for the extra free gigabyte that Chrome takes over.
> No one pays for the extra free gigabyte that Chrome takes over.
Sure they do. It just doesn’t show up in Chrome’s metrics so Chrome doesn’t care about it.
* Start-up time of other applications. If a program needs 1 GB of RAM, and Chrome is holding all but 512 MB, then the program must perform multiple allocations, waiting for Chrome to release its cache after each one.
* Smaller cache in other programs. Consider a program that can run with 4 MB of RAM, but could use up to 1 GB of RAM to cache intermediate results and improve performance. Such a program would check the amount of RAM available and scale its own cache size accordingly.
* Competing caches in multiple Chrome instances. Multiple independent Chrome instances, such as from Electron shells, each try to cache as much as possible until RAM is exhausted.
In fact, some of the earliest adopters of IPv6 were Google, Microsoft, and Netflix: companies who, when you're considering the problem of (N * a few bytes), have a very large N, and so are the most likely to see material costs from it. Yet even to them, it's a rounding error.
For Netflix the cost is actually especially low. Cost as a percent of bandwidth is (a few bytes / packet size), and when you're streaming enormous media files packets are almost always max size.
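To put numbers on that ratio (assuming a typical 1500-byte Ethernet MTU): IPv6's fixed header is 40 bytes versus IPv4's minimum 20, so even on full-size packets the extra cost is on the order of one percent:

```python
# Worst-case extra header cost of IPv6 vs IPv4 on full-size packets.
MTU = 1500            # typical Ethernet payload size
extra = 40 - 20       # IPv6 fixed header minus IPv4 minimum header
overhead_pct = extra / MTU * 100
assert round(overhead_pct, 2) == 1.33   # about 1.3% of bandwidth
```

On small packets the percentage is larger, but for bulk streaming traffic the full-size case dominates.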
The user pays for it or suffers the consequences. Anyone relying on such bloated browsers has externalised the cost for coming up with a resource efficient alternative to their users.
You say ipv6 was supposed to get rid of NAT. Can you explain why it doesn't? You then say the problem was built into ipv6 from the start.
From looking it up, it looks like it's mostly required when IPs change (e.g. when you change ISP), which for me is more of an argument to use DNS if you want fixed addresses.
One reason could be the proliferation of prefix delegation, meaning an ISP now controls the numbering of your internal network, and it keeps constantly changing the net block, so you can never get stable local addressing... so you just keep running dual stack, or try to use NAT66 with a ULA network as a kludge, or something.
You can have multiple IPv6 addresses on your interface, so you can have both the ULA address for internal use as well as the global address - no need for NAT
However, the priority order on your OS for address selection, even for things like choosing which DNS results to use, is IPv6 global address > IPv4 > ULAs. So on a dual-stack network, ULAs will not be used unless the only address is a ULA.
And even if you run your internal services with only an AAAA record pointing to the ULA, the client's source address will likely be the device's global address unless you tweak the policy tables on each client. That means you'll need your global prefix in all the firewall rules protecting the internal services on ULAs, so you're not saved from having your ISP-provided global addresses in your configuration, which is exactly what you were trying to avoid by using ULAs.
The problems this causes seem to have been an unintended, unforeseen consequence that became more apparent as people gained experience. There's a draft being worked on to officially change the priority:
> The behavior of ULA addressing as defined by [RFC6724] is preferred below legacy IPv4 addressing, thus rendering ULA IPv6 deployment functionally unusable in IPv4 / IPv6 dual-stacked environments. The lack of a consistent and supportable way to manipulate this behavior, across all platforms and at scale is counter to the operational behavior of GUA IPv6 addressing on nearly all modern operating systems that leverage a preference model based on [RFC6724].
Leaving aside that the draft is badly worded (there are long descriptions of the problem, but no short paragraph saying what must be changed) and the long time until this is deployed, if ever, the new order will still leave ULAs below IPv6 global addresses - which means it doesn't solve stock_toaster's problem at all.
The only theoretically clean alternative to the kludges you mentioned is to let DHCPv6 renumber your network automatically, which is a major change in how you have to think about your network.
The existence of NAT66 and NPTv6 is proof that there is still a need for NAT in an IPv6 environment. Maybe not in your environment, but people wouldn't build these solutions if there weren't a need.
Would you rather have a bunch of routers sending out advertisements which every client needs to sort out, or have one consistent multi wan load balancing/failover policy that is transparent to clients?
IPv6 purists think that you can simply have multiple IP addresses on each client, all configured magically and reconfigured transparently when your ISP changes, with every device somehow knowing, via some form of policy deployment (perhaps via DHCPv6), which network to use for a given flow.
That's so much simpler than simply src-NATting your clients at the edge of your control and routing your outgoing traffic based on a policy at your NATting device /s
There's certainly a requirement for them, but it would be good to know what the justification of that requirement was before we know if there's a need. E.g. the justification could be "our SOPs say all network traffic must go through NAT", and if you dig deeper you might find that the SOP was written to save money on IPv4 addresses. That would not indicate a fundamental need.
NAT works well as an ultra simple firewall. All those ancient IoT devices don’t need to accept traffic from arbitrary addresses, but they may need to communicate with the outside world.
Using a firewall is obviously an option, but why give an IP to something you don’t want accessible by the outside world?
There's something that works even better as an ultra simple firewall: An ultra simple firewall!
> why give an IP to something you don’t want accessible by the outside world?
- You might change your mind about it needing to be accessible by the outside world, and if it already has a global address you don't need to renumber everything.
- Addressing and routing aren't the same thing; it can be useful to have globally unique addressing even without global reachability.
Ultra-simple firewalls that expose your internal network architecture are less secure than NAT. You simply cannot expose information about your internal network without risk, and there's zero direct benefit in doing so.
I don't think this is a useful thing to say. If I keep trying to fill up my electric car with petrol, and get so annoyed that it keeps spilling on the floor that I pay someone to install a petrol cap and fuel tank, it would be equally silly to say that electric cars failed in their mission to get rid of petrol use.
Fitting more than 4B numbers into 4 bytes is mathematically impossible, but building a backwards compatible and easier to integrate standard may not be.
Take USB, for example. The capabilities of USB 3.1, 3.0, and 2.0 are impossible for USB 1.0 to achieve. So is high-speed charging.
However, the end-user experience is generally pleasant, nitpicks around some of USB-IF's specific choices aside.
The USB protocols over the wire are generally not compatible between versions, especially at the lowest levels (signalling). That's how more bandwidth gets squeezed into the same wires: the signalling layer changed between versions.
The "end-user experience" IPv6 equivalent of the USB version transition is that a person browsing to "www.google.com" has no clue whatsoever that it actually went via IPv6 instead of IPv4.
Just like with USB 1 to 4, IPv6 goes down the same cables and works the same at the application layer. Some changes occurred, but some change is unavoidable if anything is to improve.
You're asking for USB 4 to be magically "the same" as USB 1.0 while sending tens of gigabits over the wires -- not for the end users -- but for the lazy electrical engineers that can't be bothered to update their designs!
Extending the address is entirely possible, and if we drop the requirement that the extended part be individually routable, even simple.
But no, someone said we must redo the whole stack and that every grain of sand needs a publicly routable address... so now we are stuck between a rock (old fossilized IPv4) and a hard place (completely incompatible IPv6).
> Extending the address is entirely possible, and if we drop the requirement that the extended part be individually routable, even simple.
No it is not:
* You also have to deploy new DNS code to handle a new record type to handle longer "IPv4+" addresses.
* You also have to deploy new OS and library code with new socket, etc, APIs because all in_addr_t definitions and data structures are 32-bit-only.
If a public service has an "IPv4+" address, how does a non-IPv4+ host, or non-IPv4+-compliant code, handle it? If you want >4B addresses, you have to tweak all the code that touches address structures. You have to (re)deploy code on all the network elements that touch the packet bits: all the end-user applications (browsers, chat clients, etc.), all the end-user operating systems, all the middle-boxes, all the routers. If you have network devices and segments between the public service and the client that are not IPv4+ compliant, you have to configure the clients to send the IPv4+ traffic to translation boxes that are IPv4+ compliant.
Basically all the stuff that is happening with IPv6.
You're inventing a new addressing scheme, and proposing that we put a bunch of middleboxen in to mediate connecting the old world to hosts on this new addressing scheme.
No, I'm describing existing practice with NAT, where you in fact have IP addresses extended by TCP/UDP port numbers. You could instead move this "port" directly into the IP header in a backward-compatible way and fall back to stateful NAT only if the counterparty does not support it.
A device that doesn't understand your newly invented addressing scheme would need to rely on some other intermediate device in order to get traffic to an endpoint that does support your new scheme.
You're using different words, but you've got a separate addressing scheme, and dependence on proxies to enable everyone to talk to each other. This is exactly where we are with IPv6.
> You could instead move this "port" directly into the IP header in a backward-compatible way
If you're changing the meaning of the headers, it is by definition not backwards compatible.
No, you can't: an IPv4-only device that receives that packet would interpret your extra address bytes as part of the TCP/UDP header. There simply isn't any room in the IPv4 header to squeeze extra bytes in. That is exactly why NAT baked knowledge of TCP & UDP into routers, breaking the layering design in the process. If there were any way to add extra headers to IPv4 without relying on middle boxes to support it, it would have been done a long time ago.
Not true. There is an Options field to extend the IPv4 header. And only the server and the router closest to the user would need to support such an extension, keeping layering violations to a minimum.
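A sketch of what such an option could look like on the wire; the option type and the idea of carrying an extra 32-bit address suffix are entirely hypothetical here (RFC 791 only fixes the type/length/data framing and the 32-bit header alignment):

```python
import ipaddress
import struct

# Hypothetical IPv4 option carrying an extra 32-bit destination suffix.
# RFC 791 framing: one type byte, one length byte, then data; the header
# must stay 32-bit aligned, so the option is padded with zeros.
OPT_TYPE = 0x9E  # made-up option type, purely illustrative

def ext_addr_option(suffix: str) -> bytes:
    data = ipaddress.IPv4Address(suffix).packed           # the extra 32 bits
    opt = struct.pack("!BB", OPT_TYPE, 2 + len(data)) + data
    return opt + b"\x00" * (-len(opt) % 4)                # pad to 32-bit boundary

opt = ext_addr_option("10.0.0.5")
assert len(opt) % 4 == 0
assert opt[1] == 6 and opt[2:6] == bytes([10, 0, 0, 5])
```

In practice, options-bearing packets are often slow-pathed or dropped by middleboxes, which is part of the counterargument above.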
USB 3 the protocol isn't backwards compatible with USB 2; USB 3 ports just include both USB 2 and USB 3 pins (what one might call dual stack). You can easily connect two different devices, one to the USB 2 pins and one to the USB 3 pins. If you only want to connect USB 3 devices, there is no need for USB 2 pins, as on the PS4.
There is also no specified way to convert USB 3 to USB 2, but some have tried, with mixed results.
Option 5: ::ffff:0:0/96 (yes, I get it, this only works if the host has both IPv4 and IPv6; on the plus side: no need for the network to support it. Mostly for applications.)
The issue is most of these require ISPs to deploy new hardware, or deploy new network services. The problem is that network hardware is single-purpose, because only single purpose hardware can sustain the speeds we demand of internet networks. This means a lot of hardware needs to be replaced in order to make the global IPv6 transition and, short of redesigning IPv4, which is 43 years old now, there's no other way to make the transition. All these solutions require either work by your ISP, or work by you yourself on all your hosts.
NAT64+DNS64 is the best transition method as it eliminates the need for dual-stack.
Clients can be IPv6-only, ideally with a CLAT installed to handle the edge case of IPv4 literals in apps that don't use DNS. The ISP's internal network can be IPv6-only. Only the NAT64 translator needs to speak both IPv6 and IPv4, and only for non-IPv6 traffic.
On Hacker News you're going to find a big contingent of people who are getting things like VPS/colo/dedicated/cloud hosting, only getting an IPv6 address on it (or finding that an IPv4 address costs extra)... and occasionally finding that some customers can't reach their sites unless every host has an IPv4 address, or they pay for something like Cloudflare.
So there is a bit of a demand, especially here, for forward compatibility.
The reason for slow IPv6 adoption is that they decided to break all backwards compatibility.
You need a new protocol, sure. But do you _have_ to switch from "one almost-fixed address per interface" to "tons of addresses per interface, dynamically changing"? Did you have to present it as a separate protocol to apps? Did you have to use ':' in the representation, breaking most ad-hoc text processing code? Etc.
If the goal was "here is a new version of IPv4 with the same semantics", then we'd just need to wait for new kernels and libraries to come out, and it would all have been done years ago.
"Just extend the address size" was certainly one of the options. Sure, it's still a change, but the point is: After this change, both protocols could have worked side-by-side. Devices that only supported IPv4, no problem, they send 32-bits. Devices that supported IPv6-as-it-could-have been would simply have zero-padded those 32-bits to match the new protocol. Talking to old devices, the zero-padding gets dropped.
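The zero-padding scheme described here is hypothetical (no such protocol was ever deployed), but it can be sketched in a few lines: legacy 32-bit addresses map into the wider space by zero-extension, and an extended address maps back to a legacy one only when the padding really is zero:

```python
# Hypothetical sketch of the "zero-pad and drop" scheme above.

def widen(v4_addr: int) -> int:
    """Legacy 32-bit address -> 128-bit address (high bits zero-padded)."""
    assert 0 <= v4_addr < 2**32
    return v4_addr  # the high 96 bits are implicitly zero

def narrow(ext_addr: int):
    """128-bit address -> legacy 32-bit address, if representable."""
    if ext_addr < 2**32:
        return ext_addr  # the zero padding drops cleanly
    return None          # unreachable from legacy devices

assert narrow(widen(0xC0000201)) == 0xC0000201  # 192.0.2.1 round-trips
assert narrow(2**32 + 1) is None                # extended-only address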
Then any network address beyond ipv4’s 32 bit range would have been completely inaccessible to any legacy devices. That would have essentially been the same situation that we have now - where ipv6 only services are inaccessible to anyone on an ipv4 network. So service operators need to keep their ipv4 addresses and networks don’t update.
How would that be an improvement over the existing situation?
Everyone is always quick to complain that we're going through N layers of NAT gateways or N layers of proxies, but this is virtually never a problem for most of the Internet. Even despite this rat's maze of proxies and NAT gateways, we're still supporting virtually all the applications that consumers use and love, such as VoIP, WebRTC, HTTP(S), DNS, gaming, streaming video, mobile apps, etc.
NAT always seems to get a bad rap because it inconveniences the very few who want an end-to-end experience, but there has to be some sacrifice to keep the Internet running for billions of users.
NAT and by extension CGNAT are the unsung heroes of the Internet.
>Even despite this rat's maze of proxies and NAT gateways we're still supporting virtually all the applications that consumers use
That's a tautology: "Despite the limitations of IPv4, we're still supporting all the applications that can work within the limitations of IPv4".
Lots of potential P2P applications (which might solve a lot of the problems we have with the current centralised model of the internet) either don't make it past the drawing board because of NAT, or have to be encumbered with complex, expensive-to-develop, best-effort NAT-punching behaviour that burdens everyone involved (and can stop an application from being truly P2P by forcing it to run things like STUN servers).
>NAT seems to always get a bad rep because it inconveniences the very few that want to have an end to end experience
I think there would be many more that wanted this if it were trivially easy to do
>but there has to be some sacrifice to keep the Internet running for the billions of users.
> I think there would be many more that wanted this if it were trivially easy to do
I've seen figures from proponents of Future Internet Architectures such as Named Data Networking claiming that consumption is about 80% of Internet traffic. The truth is that not everyone needs an Internet-addressable host; mobile phones, for example, don't. And well, we're living in this situation today with CGNAT, and you don't hear complaints from customers about not having Internet-addressable IPs.
> What's the sacrifice in using IPv6?
Support. Enabling IPv6 on consumer broadband networks, small and medium businesses, etc. means that you have to support the various devices' v6 stacks and applications and ensure they continue to work just as well as they did with IPv4. IPv6 can still cause damage, and supporting and fixing these issues throws out virtually all incumbent knowledge.
If it were really just a "flip of a switch", everyone would've done it by now.
In some countries almost everybody already did it. The way they did it is by starting 15-20 years ago, and making sure every new replaced device supports it.
Any kind of application that acts as a server needs a direct IP connection.
Gamers get errors about "strict NAT." Traditionally the solution to this NAT-caused problem was to forward the ports. If their ISP has chosen CGNAT, port forwarding is impossible.
One-way audio on VoIP calls is a symptom of reachability issues caused by a firewall or address-translation problems. VoIP services have adapted to IPv4 NAT by relying on proxying instead of STUN, but CGNAT really degrades reliability.
Video chat uses the kludge of TURN when peer-to-peer connectivity does not work. This increases costs for the video chat service, which in turn requires a paid subscription, as it will not relay traffic for free.
BitTorrent and file-transfer services need direct IP connectivity. If P2P file transfers worked on any network, we would not need to mind Gmail's 25 MB attachment limit or pay for intermediary cloud storage.
Most of the applications that could communicate peer-to-peer instead use relay servers, which make delay and scalability worse. Some combinations of NATs may sometimes work without a relay server, but figuring this out is complex and increases connection-setup time. Every early SIP/VoIP user had the "the connection only works in one direction" experience, usually caused by NAT.
A CGNAT is a stateful component, which makes it expensive to operate. Failover to a backup is hard, as is scaling this kind of component. And then there are legal requirements: you have to know which user had which IP address at a given time. I'd rather invest in dual stack instead.
Peer-to-peer still wouldn't work even in a fully IPv6 world without something like STUN or TURN, since endpoints would still be protected by stateful firewalls preventing external connections to them.
With a firewall, the application knows its public-facing IPv6 address and source port without STUN: a stateful firewall does not alter packets.
It can open the firewall simply by sending something. If it can communicate its public-facing IPv6 address and source port to the remote side via a SIP proxy (OK, for this signalling you need a relay), it can receive traffic. No TURN needed either.
If the two peers are each behind its own stateful firewall (as would be common with something like BitTorrent), then you still need some 3rd party server accessible to both of them that either relays traffic between them, or at least allows them to negotiate the port pair that they'll communicate on (so that the one which will accept the TCP connection can send a TCP SYN with the other's source port as destination).
The second option may not even work with more paranoid firewalls, which might not allow TCP SYN packets on existing connections.
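The "open the firewall simply by sending something" technique described above is essentially UDP hole punching: both peers send first, and that outbound traffic is what creates the stateful-firewall/NAT mapping that lets the other side's packets in. A minimal sketch, run here on loopback, with the signalling step (learning the peer's address, normally done via something like a SIP proxy) replaced by a direct exchange of socket addresses:

```python
import socket

def make_peer() -> socket.socket:
    """A UDP endpoint; the OS picks a free port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))
    s.settimeout(2.0)
    return s

a, b = make_peer(), make_peer()
# In reality each side would learn the other's address via signalling.
addr_a, addr_b = a.getsockname(), b.getsockname()

# Step 1: both sides send first. On a real stateful firewall, this
# outbound packet is what opens the hole for the reply.
a.sendto(b"punch", addr_b)
b.sendto(b"punch", addr_a)

# Step 2: each side can now receive the other's traffic directly.
msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
print(msg_at_a, msg_at_b)  # b'punch' b'punch'

a.close(); b.close()
```

On loopback this always succeeds; across real NATs the success rate depends on the mapping behaviour of the middleboxes involved, which is exactly why fallbacks like TURN exist.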
At this stage it shouldn't require much money to integrate IPv6. Your network equipment needs to be really old not to support v6 natively.
Though granted, there is support and then there is support. I use Hyperoptic in the UK as an ISP. I replaced the stock router and I still can't figure out a way to get an IPv6 address.
> The IPv6 transition will be like a bankruptcy: very slowly, slowly, then all of a sudden.
I don't think that's true.
Some services on the internet are already made available through IPv6. Doesn't that mean their migration to IPv6 is done?
There are, however, some ISPs that seem to be dragging their feet. I recall trying to deprecate IPv4 access to a personal project of mine: it was no longer reachable when I tried to access it from home. Lookups from other points of the world could resolve the IP, but not my little home network. I felt forced to keep paying the 2€ for an IPv4 address just because of that.
Edit: to make it abundantly clear, I'm looking at you, Vodafone. You suck.
> No, the migration is only done when you're exclusively running IPv6.
I don't think that's an informed, thought-out take. The internet works just fine for all intents and purposes if you have some services reachable through IPv4. There is no obligation to shut down IPv4 in order to work with IPv6.
If you're able to go through your daily work seamlessly hitting services over IPv6, that's a successful migration. It matters not at all if an unrelated service hasn't gone through its migration yet.
What I meant was, almost all server operators on the internet have to support either IPv4-only or dual stack. It is still economic suicide to run an IPv6-only server, at least in most areas of the world. As long as you have to run IPv4 software as well and lease an IPv4 address, I would say you are not fully migrated to IPv6.
"this is an entirely new protocol by the definition "
NO!!!
This is what the parent comment meant about IPv6's design. Add an octet and that's it: the same protocol with the same rules, just a bigger address.
It may be a different version of IP, but the protocol and supporting protocols like ARP and DHCP would just need to support the new address size.
The reason IPv6 failed is the same reason why, when new devs join a team, they declare everything wrong, try to fix it all, and leave a bigger mess than what they started with. You solve problems one step at a time. Overhauls are only justified when your objective is specifically to improve the whole system.
"The reason for the slow IPv6 adoption is that there was no financial or business pressure."
That is only part of the reason. The other part is that it is a pain to use: there is no way to use it without also supporting v4, and on top of that you have to learn and adapt to other new protocols, addressing schemes, gotchas, and much more.
We could presumably have done something like: use the IPv4 packet format, treat the 32 bit src/dst address in the header as the first 32 bits of the address and put the remaining 96 bits (+ checksums/etc.) as the first few bytes of the payload. Then create TCPv6, UDPv6, IGMPv6 etc. protocol identifiers for the protocol field to distinguish traffic that's encoding an IPv6 address in the first few bytes of the payload.
Then, if you own an IPv4 address, you effectively own an IPv6 subset. Then we reserve a whole bunch of IPv4 addresses for IPv6-only allocations.
I obviously haven't thought it through in detail, but wouldn't something like that effectively work transparently via IPv4 core infrastructure, provided the networks at either end support IPv6 if they're using it? We'd still need NAT for IPv6-only endpoints that need to talk to IPv4-only endpoints. It also wouldn't be anywhere near as clean as IPv6 and would lack a few of the nice features, but... an awesome protocol I can't actually use isn't much use to me.
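A rough sketch of that encoding (everything here is hypothetical: the "TCPv6" protocol number, the layout of the extended part): a standard IPv4 header carries the first 32 bits of each 128-bit address, so legacy routers route on it as usual, while the remaining 96 bits of src and dst ride at the front of the payload for upgraded endpoints to read:

```python
import struct

PROTO_TCPV6 = 0xFD  # hypothetical protocol number (experimental range)

def build_packet(src128: int, dst128: int, payload: bytes) -> bytes:
    src_hi, src_lo = src128 >> 96, src128 & ((1 << 96) - 1)
    dst_hi, dst_lo = dst128 >> 96, dst128 & ((1 << 96) - 1)
    # Legacy routers see a normal IPv4 header and route on these 32 bits.
    header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 20 + 24 + len(payload),  # version/IHL, TOS, total length
        0, 0, 64, PROTO_TCPV6, 0,         # id, flags/frag, TTL, proto, cksum
        src_hi.to_bytes(4, "big"), dst_hi.to_bytes(4, "big"),
    )
    # Upgraded endpoints find the remaining 96 bits of each address here.
    return header + src_lo.to_bytes(12, "big") + dst_lo.to_bytes(12, "big") + payload

def parse_packet(pkt: bytes):
    fields = struct.unpack("!BBHHHBBH4s4s", pkt[:20])
    assert fields[6] == PROTO_TCPV6  # otherwise it's a plain IPv4 packet
    src = (int.from_bytes(fields[8], "big") << 96) | int.from_bytes(pkt[20:32], "big")
    dst = (int.from_bytes(fields[9], "big") << 96) | int.from_bytes(pkt[32:44], "big")
    return src, dst, pkt[44:]
```

The point of the sketch is only to show the mechanism: routing state in the legacy core is untouched, and only endpoints that recognize the new protocol number look past the first 32 bits.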
> We could presumably have done something like: use the IPv4 packet format, treat the 32 bit src/dst address in the header as the first 32 bits of the address and put the remaining 96 bits (+ checksums/etc.) as the first few bytes of the payload. Then create TCPv6, UDPv6, IGMPv6 etc. protocol identifiers for the protocol field to distinguish traffic that's encoding an IPv6 address in the first few bytes of the payload.
So no router can route it sensibly and no existing client works? How would that help?
It's probably been mentioned before, but there are customers that require IPv6 (like some US government agencies and others), so for a lot of B2B/enterprise software companies it actually makes sense to support IPv6. And it's technically interesting, so why not! (I've been there, and it was fun.)
Really, IPv6 has failed for human reasons. I know because almost everyone demonstrably hates it, as evidenced by their behavior.
The big issue is that the router vendors hated it, the OS vendors hated it, the programming language people hated it, and the software writers hated it. How do I know? NOBODY WANTS TO ADOPT IT except by force, even now.
Worryingly, pro-IPv6 people are consistently arrogant and dismissive. Essentially their argument always boiled down to "ha, you'll be forced to use it eventually and then I'll be RIGHT!!!", which is why IPv6 people hate NAT with a vehement, irrational passion: it floated IPv4 for, what, two decades at least?
I'm guessing it is because IPV6 was a tossed-over-the-wall protocol that didn't get reference implementations from the biggest router vendors first. Here's a very very very very very very troubling link:
That is Cisco bragging about its IPv6 website in a PDF from 2011! 2011! Fifteen years after the birth of the protocol. If Cisco did not have an IPv6 site up until FIFTEEN YEARS after the protocol definition ... oh god.
Comcast routers weren't IPv6-functional back in 2015, at least not on my cable modem. If an ISP that makes bank on renting and turning over its consumer routing hardware can't roadmap IPv6 adoption within 22 years... ugh.
And my biggest complaint about IPv6 is that they didn't increase the number of ports. Really. We have to keep shoehorning apps into 64K ports rather than a sensible 4 billion. Maybe there's some OS mapping concern with that; doesn't matter, the ship has sailed.
IPv4 has an options field in its header (up to 40 bytes). Why that didn't provide the necessary space for some degree of backwards compatibility is beyond me.
What should have happened is that the big router vendors got together and agreed on a standard protocol. Then the major OS vendors and language standards bodies got together and made reference implementations for basic networking.
Once that was working / adopted by next gen hardware and software releases, then things might have gotten rolling.
I mean, how much work would that have been relative to the mind-boggling amount of work done to implement NAT and firewall traversal/busting code in, say, Skype? Ever seen those whitepapers? Wow, are they doozies. Holy crap, are people willing to write code.
> NOBODY WANTS TO ADOPT IT except by force, even now.
Hi, I've adopted it for many reasons and I'm happy, so you'll need to update your count. Seriously though, there are lots of people who have adopted it; you'll need some more data for a generalisation like this.
> We have to keep shoehorning apps into 64k ports
With IPv6 you typically get a whole range assigned to your machine rather than a single address. Why expand ports when you can assign millions of apps to different addresses, with the same port that correctly identifies the service type? As a bonus, this already works with DNS AAAA records, so you don't have to mess with SRV to find the right port.
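A sketch of that "different addresses, same port" idea, using the RFC 3849 documentation prefix as a stand-in for a delegated /64:

```python
import ipaddress

# With a whole /64 delegated to one machine, each service can get its
# own address and keep the standard port, instead of sharing one
# address and multiplexing by port number.
prefix = ipaddress.IPv6Network("2001:db8:1234:5678::/64")

def service_addr(index: int) -> ipaddress.IPv6Address:
    """The index-th address inside our delegated /64."""
    return prefix[index]

# Publish one AAAA record per service; both can listen on port 443.
web, api = service_addr(1), service_addr(2)
print(web)  # 2001:db8:1234:5678::1
print(api)  # 2001:db8:1234:5678::2
```

Clients then find the right endpoint purely via the AAAA lookup, with no SRV record needed to discover a non-standard port.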