It seems that we in Belgium are somewhat stuck at this rate though. Consumer routers behind ISP modems that only provide IPv4, enterprises not adopting IPv6, mobile networks not adopting IPv6 (all IPv4 only with CGNAT if I'm not mistaken), ...
Probably because the Cloudflare data is percentage of traffic (e.g. total requests), and the Google data is percentage of users. Mobile traffic is more likely to be IPv6, and that would skew the numbers.
There are some large countries out there. Note that according to my sibling comment, Belgium isn't #1, or above 50%, right now.
I (Cloudflare) am talking about traffic %; Google is talking about % of users. Cloudflare doesn't have a concept of 'user'.
That sounds like percentage of traffic to me?
And yet, even in this position, I think having IPv6 native inside of a production network isn't absolutely necessary (our production env runs in the RFC1918 space too even though v6 would be available at both ends):
Access to the private production network should all happen over an encrypted and authenticated VPN, and all access from the production network to the public network (if needed at all) should go through a filtering proxy server.
Which means that the addressing used in your internal network becomes completely irrelevant and can easily be IPv4.
This only becomes an issue once you need more hosts than the 10.0.0.0/8 RFC 1918 network offers, but the days when you need (and can afford) more than its 16.7M (2^24) nodes on a cloud computing provider are probably still a ways off.
If you want instance level IPv6 in order to access an instance directly or in order to access a resource directly, you're probably doing something wrong.
Then again, as services become v6 only (which will still be a few years off) and you need to access them from your production network, then your frontend proxy will of course need access to the v6 network. At that point, yes, instance level v6 becomes necessary too.
In contrast, even if you use "private" IPv6 addresses (ULA), the address space is large enough that everyone can have their own prefix. Or better, just use public IPv6 addresses and firewall them.
For one, you don't need NAT to use ULA internally. You just use ULA for internal stuff and global addresses for everything else.
But really, if renumbering is a problem in the first place, you are doing something fundamentally very wrong. Renumbering should be a matter of adding a new prefix of the same length to your addressing plan, replicating in it the exact same subnet structure you had before, deprecating the old prefix (so it's not used for new outbound connections), waiting a bit, and then removing the old prefix.

If your problem is hard-coded addresses everywhere, that is also a problem with your system design. Almost all of your configuration should contain only host names, and your DNS should either use dynamic updates, so that hosts register their current primary addresses with the DNS server, or some other mechanism that generates records from an addressing plan of sorts and a prefix. There should be very few places where you need to change addresses in order to switch prefixes.
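On a single Linux host, that deprecate-then-remove dance looks roughly like this (addresses invented; with SLAAC, the router advertisement lifetimes do the same thing for you):

    # add an address from the new prefix alongside the old one
    ip -6 addr add 2001:db8:bbbb:1::10/64 dev eth0

    # deprecate the old address: still reachable, but no longer
    # preferred as the source for new outbound connections
    ip -6 addr change 2001:db8:aaaa:1::10/64 dev eth0 preferred_lft 0

    # once traffic has drained, remove it
    ip -6 addr del 2001:db8:aaaa:1::10/64 dev eth0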
NAT66 does exist and over time more software and hardware will support it. But you don't need NAT if you are using ULA; instead you would configure each system with both its ULA address and the global address from your provider (SLAAC can advertise multiple prefixes). Traffic to other internal systems would use the ULA addresses, while traffic to the Internet would use the global address.
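For example, the router can advertise both prefixes side by side; a minimal radvd.conf sketch (interface name and prefixes invented):

    interface eth0 {
        AdvSendAdvert on;
        # global prefix delegated by the provider
        prefix 2001:db8:1:1::/64 {
            AdvOnLink on;
            AdvAutonomous on;
        };
        # internal ULA prefix
        prefix fd12:3456:789a:1::/64 {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };

Hosts then autoconfigure one address from each prefix, and source address selection (RFC 6724) picks the ULA source for ULA destinations.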
For example, I've been working on contributing IPv6 support to GitLab.com, which is hosted on GCP.
Turns out it's impossible for the main site right now, because none of the global load balancer types allow port 22, which is used for Git traffic. I've no idea why there are even such restrictions, but there's of course a "feature" request for allowing port 22. I was able to repurpose the work for other parts of GitLab.com, but I've yet to get even those to work, due to the load balancers gobbling up HTTPS/SSL requests.
"Access to the private production network should all happen over an encrypted and authenticated VPN"
IPsec, likely the same thing that'd be encrypting your VPN, can just be used directly between hosts without complication if you have all public addresses.
"and all access form the production network to the public network (if needed at all) should go through a filtering proxy server."
Why does it have to be a proxy? Why not just a firewall sitting in the route path? Again, the way you're used to working around a limited address space is leading to complicated solutions for problems easily solved by having more addresses.
In computing very little is "necessary", but that doesn't mean it makes much sense to do things that way, just as programming your cloud application with NAND logic doesn't make sense.
Because while there is a 10/8, once you start subnetting it, advertising it from multiple sites, and running labs that look and act like production, that /8 goes quickly.
Give me IPv6 any day.
Not necessary, but it could be quite useful. The 64-bit suffix of ULA addresses lets you group your containers into separate blocks with static prefixes. E.g. you could always have the ::0 - ::ffff block be your reverse proxies, the next block the storage shards, and so on.
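A hypothetical layout along those lines, carving one /64 into 64K-address blocks (ULA prefix invented):

    fd00:1234:5678:1::0    - ::ffff      reverse proxies
    fd00:1234:5678:1::1:0  - ::1:ffff    storage shards
    fd00:1234:5678:1::2:0  - ::2:ffff    application containers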
It's not that you require more than a /8 can provide, it's more a function of convenience. I have 10 customers, each one of those has 4 environments (Dev, QA, Stage, Prod), each of those environments needs to live in 3 regions, each region has 3 AZs, and each AZ has 6 subnets. Splitting that up with a 10.0.0.0/8 gets complicated. I would much rather say 2006:abcd:0001::/48 is customer A, 2006:abcd:0002::/48 is customer B, etc. IPv6 gives me plenty of room to maneuver and eliminates the need to worry about IP exhaustion.
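One hypothetical way to pack that whole hierarchy into the prefix, one hex digit (or byte) per level (the field layout is invented):

    2006:abcd:CCER:ASSS::/64
      CC  = customer (00-09)
      E   = environment (0=Dev, 1=QA, 2=Stage, 3=Prod)
      R   = region (0-2)
      A   = AZ (0-2)
      SSS = subnet (000-005, with room to grow)

    e.g. 2006:abcd:0531:2004::/64 is customer 05, Prod, region 1, AZ 2, subnet 4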
And the days when you need 16M nodes all of which need to talk to each other directly are probably even further off.
(In fact, I would go even further and say that if you find yourself ever facing that situation you've done something badly wrong in your architectural design.)
Worse, Kubernetes as a model doesn't use IPv6 internally when it should. No reason not to use e.g. ULAs.
dig +short aaaa news.ycombinator.com
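(This returns nothing: news.ycombinator.com publishes no AAAA record, i.e. no IPv6.)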
Most probably, yes. IPv6 is deployed more on the consumer side than in enterprises, so it's not surprising that IPv6 traffic peaks over the weekend.
By default, most ISP routers firewall all inbound IPv6 traffic (and IPv4 is implicitly firewalled via NAT). In an enterprise environment, dual-stack support can get a lot more tricky, and you need to ensure firewalls are properly configured, or else regular workstations can be accessible via the public network.
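As a sketch of what "properly configured" means on the v6 side of a Linux gateway (a minimal stateful ruleset; the interface name and policy choices are assumptions):

    # default-deny forwarded IPv6 traffic
    ip6tables -P FORWARD DROP
    # allow replies to connections initiated from inside
    ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # ICMPv6 is needed for path MTU discovery and neighbor discovery
    ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT
    # let the internal LAN (eth1 here) initiate outbound connections
    ip6tables -A FORWARD -i eth1 -j ACCEPT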
Home connections have more IPv6 integration than businesses that have to configure their firewalls, etc.
There is little to no incentive to change a well-working network; it makes (business) sense to introduce v6 in the upcoming 5G rollouts but not in the running setups.
Server-side adoption is honestly not that bad. Quite a few major datacenters implement IPv6, but people don't set their AAAA records or make the small config change needed for an IPv6 interface. Cloudflare provides a temporary proxy solution in that it offers IPv6 at its edge, but it isn't really true end-to-end IPv6.
There may well be a barrier in the future, but so far there isn't any sign of one.
So I am basically NATed to ipv6.
Is Google including this kind of mobile traffic? I would guess that they are.
T-Mobile is IPv6-only; you will have no IPv4 address assigned to your phone.
"So I am basically NATed to ipv6."
Other way around: your IPv6 connection is NATed to IPv4 via DNS64 + NAT64.
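You can see the DNS64 half in action from such a network (assuming the resolver uses the well-known 64:ff9b::/96 prefix; the name ipv4only.arpa has only A records, per RFC 7050, so any AAAA answer must be synthesized):

    dig +short aaaa ipv4only.arpa
    # 64:ff9b::c000:aa  <- 192.0.0.170 embedded in the NAT64 prefix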
"Is Google including this kind of mobile traffic? I would guess that they are."
Yes, but maybe not in the way you're thinking, if I'm reading your comment correctly. The IPv6-only portion is just INSIDE T-Mobile; when you need to leave T-Mobile to access services, you will be using v6 if the services support v6 and v4 if they only support v4. Since Google is measuring this by how you connect to Google, and most Google services are v6-enabled, you'll properly be counted as v6. For the ones that aren't v6-enabled, you'll properly be counted as v4, as they are measuring how you go over the general internet, not how you traverse a private network.
464xlat works well for letting legacy v4-only programs access v4 via NAT64, but it doesn't help them reach native v6 hosts.
I'm not looking forward to IPv6 enabling/escalating yet another factor in tracking everything and everyone, online.
Maybe this is a grossly (as in, large) wrong and ill-informed perspective, on my part. If so, please disabuse me of it. I remain concerned.
It's trivial to fix once it becomes a problem (at least in our case it was), but I wouldn't expect it to be an uncommon mistake.
That was 3 years ago =)
* I've been running two IPv6 hosts without any issues since then.
(Some places definitely were seeing initial rollouts around that time, but it was a far cry from "all".)
Lack of IPv6 on corporate networks isn't very surprising, considering how hard/annoying enabling it is for business connections.
As for Reliance Jio, they have over 300 million IPv6 users, with a total IPv6 adoption of 88%.
I wonder if China's ISPs are doing the same thing?
Probably because they ran into the IPv4 shortage relatively early (address allocations were based on the distribution of nodes in the 1980s or so), and because they didn't get massive internet adoption early enough to need to fall back on carrier-grade NAT like Russia or China.
Looks like they've been proven wrong. ISPs are happily using CG-NAT, all services are being adapted to that (moving away from p2p like Skype did) and end users don't care too much. Ouch!
I wonder how "happily" they are using CG-NAT. NAT is inherently stateful and keeping track of that state becomes more expensive as the amount of hosts behind the NAT is increasing.
Running CG-NAT at scale is complicated and resource-intensive, so I'm totally willing to imagine that it's better long-term value for a carrier to switch to IPv6 than to expand CG-NAT.
When most of your core network is IPv4-only, getting some CG-NAT equipment is indeed cheaper than buying a completely new core network.
Right now, my laptop has 4 open TCP sockets, and my phone has 3.
Even assuming the average user has 50 sockets open, and assuming you are an ISP with 300 million users, that's still only 175 gigabytes of RAM. The total cost is under $1000, or peanuts for what would be the world's largest ISP.
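For what it's worth, the arithmetic implied by that figure (the per-entry size is an assumption it leaves unstated):

    300,000,000 users x 50 sockets = 15,000,000,000 entries
    175 GB / 15e9 entries ~= 12 bytes per entry

Twelve bytes per entry is optimistic: a real connection-tracking entry carries addresses, ports, timers, and state, and is typically an order of magnitude larger.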
Guess what: having 175 gigabytes of (T)CAM is drastically more expensive.
You could make a reasonable argument that it has too many of them, even. Where did you get the idea that we didn't think it needed any?
Similar for HN. It also does not support IPv6. From my experience it is in the 99th percentile of websites when it comes to reliability, aka it works pretty much all the time. Better than almost every other website.
> How would the user notice that?
Initial connections taking multiple TCP SYN retries, leading to an initial stall, or random assets from third-party sites or subdomains not being loaded, which can break sites in many ways.
It was somewhat broken for a while on Windows 10, but they fixed that in March.
If a user browses with a given address and gets identified by one specific agent that shares that information with third parties (e.g. a social network that collects and shares users' IP addresses), then every time that happens, the newly generated browsing data gains a solid reference to the user's past activity, thanks to that one correct identification.
Friendly reminder that IPv6 SLAAC by default embeds your MAC address in your address (EUI-64), and hence any server in the world can recognize you and track you around.
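For what it's worth, RFC 4941 privacy extensions address exactly this, and most desktop and mobile OSes now enable them; on Linux you can check and force them via sysctl (the knob names are real, defaults vary by distro):

    # 0 = off, 1 = generate temporary addresses but prefer EUI-64,
    # 2 = generate and prefer temporary addresses
    sysctl net.ipv6.conf.all.use_tempaddr

    # prefer temporary (privacy) addresses for outbound connections
    sysctl -w net.ipv6.conf.all.use_tempaddr=2
    sysctl -w net.ipv6.conf.default.use_tempaddr=2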