Also somewhat exasperating that the "running out of IPv4 addresses" saga has played out across almost my entire career. I remember attending the CIDR meetings at the IETF in around 1992, for example. So one of the first technical problems I encountered in my career remains unsolved nearly 30 years later.
It's not a fast process, but I don't see any evidence it is stalled.
Facebook is a counterexample.
> Over the past few years, Facebook has been transitioning its data center infrastructure from IPv4 to IPv6. We began by dual-stacking our internal network — adding IPv6 to all IPv4 infrastructure — and decided that all new data center clusters would be brought online as IPv6-only. We then worked on moving all applications and services running in our data centers to use and support IPv6. Today, 99 percent of our internal traffic is IPv6 and half of our clusters are IPv6-only. We anticipate moving our entire fleet to IPv6 and retiring the remaining IPv4 clusters over the next few years.
If they assigned my phone an IPv6 address as well, that traffic could go directly to Facebook through cheaper switches, which is faster for me and cheaper for them.
It serves its purpose if it allows end user devices to directly communicate with each other even if cloud servers with real IPv4 addresses continue to use IPv4 until the end of time.
And UPnP seems to get around this right now anyways. At least, every NAT'd connection I'm on, when I run a Bittorrent client, I have no trouble getting inbound connections.
There are known solutions for this.
For host firewalls, the application can open a port for itself during installation.
For network firewalls, the firewall can implement Port Control Protocol (RFC6887), which supports opening even IPv6 ports.
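As a rough illustration of how simple that protocol is on the wire, here's a sketch of a PCP MAP request per RFC 6887 (the function name and default values are my own, not from the RFC; real clients also need to handle responses, retransmission, and nonce matching):

```python
import os
import socket
import struct

PCP_VERSION = 2          # RFC 6887
OPCODE_MAP = 1           # MAP opcode; high bit 0 marks a request
PCP_SERVER_PORT = 5351   # PCP servers listen on UDP port 5351

def build_map_request(client_ip, internal_port, protocol=6, lifetime=3600):
    """Build a PCP MAP request asking the gateway to open internal_port.

    client_ip may be IPv4 or IPv6; PCP always carries it as a 16-byte
    field, with IPv4 encoded as an IPv4-mapped address (::ffff:a.b.c.d).
    """
    try:
        addr = socket.inet_pton(socket.AF_INET6, client_ip)
    except OSError:
        addr = b"\x00" * 10 + b"\xff\xff" + socket.inet_pton(socket.AF_INET, client_ip)

    # 24-byte common header: version, opcode, reserved, lifetime, client IP
    header = struct.pack("!BBHI", PCP_VERSION, OPCODE_MAP, 0, lifetime) + addr
    nonce = os.urandom(12)  # echoed back by the server to match responses
    # MAP payload: protocol, 3 reserved bytes, internal port,
    # suggested external port and address (all zeros = "no preference")
    payload = nonce + struct.pack("!B3xHH", protocol, internal_port, 0) + b"\x00" * 16
    return header + payload
```

The resulting 60-byte datagram would be sent over UDP to port 5351 on the default gateway.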
> And UPnP seems to get around this right now anyways. At least, every NAT'd connection I'm on, when I run a Bittorrent client, I have no trouble getting inbound connections.
UPnP is a rubbish fire. The protocol itself is badly designed and unnecessarily complicated and many of the implementations are broken. Section 9 of RFC6886 is informative.
One of the common failure modes is that a client will create a port mapping with a random UPnP device that isn't the real gateway. Many applications will then falsely indicate that incoming connections are working but none ever come through.
And it's still sharing an IP address. Only one device can have the SSH port, or the SMTP port, or any other port.
IPv6 + Port Control Protocol fixes all of that.
Verizon on the other hand hasn't done shit.
Forward-thinking network operators (like Verizon Wireless or Comcast or T-Mobile USA) have taken advantage of the LTE and DOCSIS 3 transitions (which inherently involve installing and provisioning new hardware on a massive scale) to properly provision and deploy IPv6 to their customers.
There are plenty of large eyeball networks (https://www.akamai.com/uk/en/about/our-thinking/state-of-the... , click on "Networks" to sort by traffic volume; the second column is the percentage of requests from that network via IPv6) that have working IPv6 deployed and used by a significant percentage of their subscribers -- some around 80%.
On the upside, I have a /60 to play with...
This is also why ISPs prefer to rent you CPE and (preferably) manage it remotely.
This would in turn push more content networks to support IPv6 - once even 1% of eyeballs can't hit revenue-producing websites, those customers will demand v6 or move to other IaaS/SaaS/etc. providers.
My mum's boring broadband connection in the UK, with a free router supplied by the ISP, has this functionality right next to the IPv4 port forwarding settings.
That's typical. Look up IPv6 pinhole to see how ISPs document it.
But I wonder, is it a fair assumption that the router that you get will either 1) not route IPv6 at all, or 2) route IPv6, and by default deny incoming traffic? Problematic would be ones that routed IPv6, and by default accepted incoming traffic.
Second, you get the same security benefits from a firewall that denies inbound connections that aren't part of an already-established flow.
I'll take an insecure device isolated at the bottom of the ocean in a titanium block over a probably-secure device that is publicly addressable any day.
EDIT: Dan has many excellent points, but I'd like to quote my favorite:
The IPv6 designers made a fundamental conceptual mistake: they designed the IPv6 address space as an alternative to the IPv4 address space, rather than an extension to the IPv4 address space.
Indeed, what were they thinking!
It's undeniable that IPv6 adoption has been a disaster, taking much longer than hoped for. Frankly, I expect IPv4 to coexist with IPv6 for the next hundred years - not ideal.
I respect djb and his contributions to cryptography, but he is off base here. The sin was committed when IPv4 was made and not initially designed to allow for variable / expanded address space.
Adding an IP Option to IPv4 packets that could carry extra address bits was not an option either -- IP options aren't preserved much at all on the Internet. Furthermore, even if most routers didn't drop IP options, adding "v6" address space via IP option in a packet that old/v4-only devices would nevertheless attempt to parse would have been hell operationally.
IPv6 has lots of flaws/idiosyncrasies/weirdnesses (multicast, mobility, SLAAC, NDP, etc.) that only looked good in the 90s, that definitely made IPv6 adoption a lot more painful than it needed to be, and that ended up as difference for difference's sake; but the one thing that was unambiguously done right with v6 was the completely clean separation of its address space from IPv4's.
Even though it didn't change the address length at all, the idea was abandoned because of widespread compatibility problems across different OSes.
Sure, you could hand-wave another layer of NAT to "fix" it, just like the extended address proposals.
Hell, even using MAC addresses that begin with a value devices haven't seen before is enough to cause issues, despite following the existing standards.
And all the existing NAT / firewall / middleware devices - they're supposed to handle the extensions transparently? Having seen the many ways that ASA protocol / application fixup can mangle packets - I find it very hard to believe.
All these proposals I've seen appear to assume that extended addresses start being distributed only after every or almost every host and router in the whole world had all of its software upgraded to understand extended addresses. But that's not realistic, since without being able to actually use it, there would be no incentive to modify every single piece of network-facing software and hardware to be able to use extended addresses. It's a Catch-22.
At least that's my understanding of it.
Via a NAT router that talks v4 on one side and v6 on the other.
The router would also need to act as a DNS server and translate v6 responses into dynamically assigned v4 addresses. Routers routinely do this sort of thing today for captive portals.
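A sketch of the bookkeeping such a router would need (the class name and the address pool are my own invented placeholders, not from any real product): each IPv6-only DNS answer is assigned a temporary IPv4 address from a pool, and the pairing is remembered so that traffic sent to the fake v4 address can later be NATted onto the real v6 one:

```python
import ipaddress

class Dns46Mapper:
    """Toy version of the router-side mapping: each IPv6-only DNS answer
    gets a temporary IPv4 address from a private pool; the router later
    translates traffic sent to that IPv4 address onto the real IPv6 one."""

    def __init__(self, pool="198.18.0.0/16"):  # benchmarking range, unused locally
        self._free = (str(ip) for ip in ipaddress.ip_network(pool).hosts())
        self._v4_to_v6 = {}
        self._v6_to_v4 = {}

    def map(self, v6_addr):
        """Return the v4 stand-in for v6_addr, allocating one if needed."""
        if v6_addr in self._v6_to_v4:      # reuse an existing mapping
            return self._v6_to_v4[v6_addr]
        v4 = next(self._free)              # allocate a fresh pool address
        self._v4_to_v6[v4] = v6_addr
        self._v6_to_v4[v6_addr] = v4
        return v4

    def resolve(self, v4_addr):
        """Look up the real v6 destination behind a stand-in v4 address."""
        return self._v4_to_v6[v4_addr]
```

A real implementation would also need to expire mappings when the DNS TTLs run out, which is one reason this trick is fiddlier than it looks.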
If there's a system that has only v4, it can tunnel v6 packets out to a tunnel gateway that will unwrap and forward them.
There are other protocol-specific tricks, like v4 DNS records that resolve to an HTTP reverse proxy, which forwards based on hostname and path to the real v6 server, while the v6 DNS records point directly to the v6 server.
That depends on what you mean. A v4 packet can only be delivered unaltered to a v4 stack. But one can build a proxy server that tunnels TCP and UDP from a v4 network to a v6 network, so data can be passed from a v4 system to a v6 system without any software changes at either end.
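A minimal sketch of that kind of proxy (the names and structure are my own; a real relay would need error handling, timeouts, and connection limits): it accepts clients on an IPv4 socket and splices each one onto an outbound connection whose hostname can resolve to an IPv6 address:

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes one way until EOF, then shut down the write side."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def relay(listen_port, upstream_host, upstream_port):
    """Accept v4 clients and splice each onto an upstream connection.

    create_connection() resolves AAAA records too, so the upstream leg
    can be IPv6 even though the listening socket is IPv4-only.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen()
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((upstream_host, upstream_port))
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```

Neither end sees anything but ordinary TCP, which is the point: no software changes on either side.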
Now that you have to buy IPv4 addresses at around $10 for a single address, the game has changed.
With carrier grade NAT, content has to slowly migrate to IPv6 for geo-location reasons. Any content that sees a high amount of abuse also has to avoid CGN.
Once big parties like Google, Facebook, etc. have a lot of traffic on IPv6, it stops making sense for them to invest in IPv4, so they will try to pressure ISPs into offering IPv6. Likewise, if an ISP sees that big content providers are on IPv6, then it stops making sense for it to invest in IPv4, which in turn pressures smaller content providers to offer IPv6.
And then IPv4 joins the ranks of IPX, NetBIOS, and AppleTalk. Some pockets may continue to exist, but the rest of the world has moved on.
There is no "rough consensus" here (beyond agreement that the author should have their posting privileges on the IETF lists revoked), and certainly no "running code". They've been trying to get IETF people to care about their harebrained "IPv10" scheme since November 2016; that they have yet to realise that the scheme is useless and that nobody seriously cares is about as depressing as seeing that internet-draft getting cited.
The term "crank" (https://en.wikipedia.org/wiki/Crank_(person)) is applicable here.
Some parts of it are laughable, such as:

> IPv10 support on "all" Internet connected hosts can be deployed in a very short time by technology companies developing OSs (for hosts and networking devices, and there will be no dependence on enterprise users and it is just a software development process in the NIC cards of all hosts to allow encapsulating both IPv4 and IPv6 in the same IP packet header.
For instance, if an IPv4-only host wanted to communicate to an IPv6-only host, it could send packets to a well-known NAT46 anycast address with an IP option of the destination host. The NAT46 host could then create the IPv6 packet with the correct destination and IPv4-mapped source.
He suggested using the IPv4 routing table for IPv4-mapped IPv6 addresses, which wouldn't be loop-free unless every router was dual stack and did the same thing. However, with what I described, it seems like any dual-stack host (or router) could perform the translation in a loop-free manner.
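For the source-address half of that translation, IPv4-mapped IPv6 addresses (::ffff:a.b.c.d, defined in RFC 4291) are the standard tool. A quick illustration with Python's ipaddress module (the helper name is mine):

```python
import ipaddress

def mapped_source(v4_src):
    """Embed an IPv4 source as an IPv4-mapped IPv6 address (::ffff:a.b.c.d)
    so the NAT46 box can recover it and translate replies back to v4."""
    return ipaddress.IPv6Address("::ffff:" + v4_src)

def original_v4(v6_addr):
    """Recover the embedded IPv4 address, or None if there isn't one."""
    return v6_addr.ipv4_mapped
```

The last 4 bytes of the 16-byte address are the original v4 address verbatim, so the reverse translation is a simple slice.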
- Understands that the proposal requires that "anything [that] will process a L3 packet" be upgraded to understand the new packet format, but seems to believe it's a simple matter of making the OS developers (which are "few") "push new updates globally".
- Seems to believe having the proposal ratified by the IETF is both necessary and sufficient for the proposal to be adopted everywhere.
Edit: the author is now requesting that the internet-draft be removed from the database: https://www.ietf.org/mail-archive/web/ietf/current/msg103018...
Funnily, having worked on adding dual-stack support to several pieces of software and protocols, my main gripe with IPv6 remains how addresses are represented to humans. Despite compression and all, I realize there was no obvious way to make addresses easier to memorize; that's basically the price to pay for having a staggering 1,500 addresses per square meter of earth.
However, and it may sound a bit silly, I still consider the choice of ':' as a separator an unfortunate one; I'd have a hard time listing all the nasty implications it has in terms of parsing :)
Such a missed opportunity.
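The classic casualty of that choice is the host:port convention, which is why RFC 3986 requires IPv6 literals in URLs to be wrapped in brackets. A small illustration:

```python
from urllib.parse import urlsplit

# With IPv4, "host:port" splits unambiguously on the last colon:
host, port = "192.0.2.1:8080".rsplit(":", 1)

# With IPv6 the same convention is ambiguous: is "2001:db8::1:8080"
# host 2001:db8::1 on port 8080, or host 2001:db8::1:8080 with no port?

# RFC 3986's fix is to bracket the literal; URL parsers then strip the
# brackets and pull the port out for you:
parts = urlsplit("http://[2001:db8::1]:8080/path")
```

Every program that ever stored "host:port" as a single string had to grow bracket handling, which is exactly the kind of nasty implication the parent is alluding to.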
With an imaginary phone running with an imaginary v4 extended address, you'd still need something in the network at the carrier side to handle communications with legacy devices.
There's not much difference between that and having a v6-only device.
Still relies on TCP and UDP and thus still needs IP addresses.
All the big ISPs have been testing IPv6 for years now, and I'm sure it's just a matter of flipping a switch. Now that a new contender is in town (a 4th ISP) and having huge problems because it has no IPv4 addresses left to assign to its customers, I suspect the rest of the ISPs will stall their transition to IPv6 even more, to try to "suffocate" the new ISP.
Just my two cents...
I feel like one big hurdle is IPv6 usability. You can write down and easily remember IPv4 addresses. IPv6 netmasks can get really confusing. They make sense if you expand out every block, but in reality, IPv6 requires a lot of tooling to chop up and work with address spaces in an intuitive way.
The only thing that needed fixing was the address space (imo, as far as I can see), but if I need to turn it on, I now have to worry about weird routing, special addresses (IPv4 has some, but IPv6 seems to have taken it to a new level), understanding hexadecimal addresses, translation layers etc. What a mess, just extend the address space.
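To be fair, the prefix arithmetic becomes mechanical once tooling does it for you, e.g. with Python's ipaddress module (the addresses are documentation prefixes and the /48 → /56 → /64 layout is just a common example, not a rule):

```python
import ipaddress

# A provider /48, carved into per-customer /56 delegations:
site = ipaddress.ip_network("2001:db8::/48")
customers = list(site.subnets(new_prefix=56))   # 2**(56-48) = 256 of them

# Each /56 in turn holds 256 /64 LAN segments:
lans = list(customers[0].subnets(new_prefix=64))
```

The nibble-aligned prefix lengths (/48, /52, /56, /60, /64) are the readable ones, since each step of 4 bits is exactly one hex digit in the address.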
Life is much easier when you can speak of a machine as "2.dev.awesome-service" (.your-company.com)
Besides, the machines should be ephemeral, and the new ones will likely get new IPs. (Unless you're doing something like EIPs, but just stop that and use DNS. ;-) )
It's still relatively easy to get IPv4 blocks for a buck or two per IP through auction houses.
In 1997, I remember running 2 T1's w/BGP on a router with a whopping 32 megs of RAM. I think there were 50,000 routes! How things have changed...
I was thinking more along the lines of a flipped bit meaning the next 8 bytes are an address rather than the next four, for example.
From my limited understanding, I don't think they reserved a bit in the address space?
I think it's theoretical of course, I don't believe there is a reserved bit in the IPv4 range.
PS IPv6 sucks.
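For what it's worth, the flag-bit idea upthread would look something like this purely hypothetical parser (no real IP header works this way; the flag value and layout are invented for illustration):

```python
EXTENDED_FLAG = 0x80  # hypothetical: high bit of a leading header byte

def parse_address(buf):
    """Parse a (purely hypothetical) header where one flag bit says
    whether the address that follows is 4 bytes or 8 bytes long."""
    flags = buf[0]
    size = 8 if flags & EXTENDED_FLAG else 4
    addr = buf[1:1 + size]
    return addr, buf[1 + size:]
```

The catch, as others note, is that every legacy parser that doesn't know about the flag reads the wrong number of bytes and misinterprets everything after the address.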
As of right now, ARIN appears to have exactly 0 IPv4 addresses--they can't even give out a block of 256 IPv4 addresses to someone who asks for it. All they can do is put you on a waiting list until someone else agrees to give up their IPv4 address space. Some people have been waiting since July 2015.