All the other providers I use already support IPv6 and have for years. Amazon should have supported this years ago, and even now they only have tentative, incomplete support for a few of their services.
It's embarrassing, frankly. I'm impressed they didn't try to charge for it, but then again, when you're marking up bandwidth egress 18x over market rate, I guess you have some room to work with.
Some other providers support it, sort of. Scaleway is a fun new thing, yet their initial offering wasn't developed with IPv6 support, and it can't be baked into their old servers after the fact.
DigitalOcean "supports" IPv6. In some regions. And only gives you 16 addresses per VPS to play with instead of a /64.
There are many examples of this, but you see my point. People still don't think IPv6 first, so it's always a separate project to tack it back on. That's a shame, because if you support just IPv6, your OS will do the IPv4-to-IPv6 translation for you (IPv4 clients simply show up as IPv4-mapped IPv6 addresses), so you don't have to support two stacks. (I am talking about server-side applications, not places where you need to make connections to IPv4 servers.)
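To make that concrete, here's a minimal sketch (plain Python sockets, port number arbitrary): a single IPv6 listener with IPV6_V6ONLY disabled serves both protocols, and IPv4 clients just show up as mapped addresses.

```python
import socket

# One IPv6 socket serves both stacks: with IPV6_V6ONLY disabled, the OS
# presents IPv4 clients as IPv4-mapped IPv6 addresses (::ffff:a.b.c.d).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 8080))  # wildcard: listens on all v6 *and* v4 addresses
srv.listen(5)

conn, (addr, port, *_) = srv.accept()
# addr is e.g. "2001:db8::1" for an IPv6 peer, "::ffff:192.0.2.7" for IPv4
print("client:", addr)
conn.close()
```

(Linux does this by default; on other systems you need the explicit setsockopt, as above.)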
> People still don't think IPv6 first, so it's always a separate project to tack it back on.
This is a pretty big problem too, because many services don't take firewalls into account. IPv4 and IPv6 are two totally different protocols with two separate firewall stacks (iptables/ip6tables on Linux). I don't know if Windows Server is any better at either abstracting both firewall sets or explicitly warning you when you leave IPv6 in a different firewall state than IPv4.
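As a rough illustration of the failure mode, a parity check like this (hypothetical host/port, TCP-only) will catch a service that answers on one address family but is firewalled on the other:

```python
import socket

def reachable(host: str, port: int, family: int, timeout: float = 3.0) -> bool:
    """Try a TCP connect over one specific address family."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no addresses of this family published in DNS
    for *_, sockaddr in infos:
        try:
            with socket.socket(family, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue
    return False

host, port = "example.com", 443  # placeholders
v4 = reachable(host, port, socket.AF_INET)
v6 = reachable(host, port, socket.AF_INET6)
if v4 != v6:
    print(f"firewall parity mismatch on {host}:{port}: IPv4={v4}, IPv6={v6}")
```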
Amazon's IPv6 support is pretty horrible, and that's terrible considering just how much of the world runs on Amazon and our total exhaustion of IPv4 addresses.
Without broad scale IPv6 deployment, we're going to get into a split-Internet situation where people will not be able to reach an audience without an expensive legacy IPv4 address.
> I've found vultr.com to have solid IPv6. They give you a true /64
How is that solid IPv6? You can't even trivially subnet it! Sure, it's marginally better than the 16 address madness of other hosters, but it's still terrible. Especially so, given that you need an NDP proxy hack ... WTF?
Yes, good providers give you a real /48, routed to a link-local address, so it's all under your very own control, trivial to create subnets, tunnels, whatever, without any need for any hacks whatsoever.
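Subnetting a routed /48 really is trivial; a quick sketch with Python's ipaddress module (2001:db8::/48 is the documentation prefix, standing in for whatever your provider routes to you):

```python
import ipaddress

# A routed /48 yields 65,536 /64s, each a full SLAAC-capable segment.
block = ipaddress.ip_network("2001:db8::/48")
print(block.num_addresses)              # 2**80 addresses

subnets = block.subnets(new_prefix=64)  # generator over all 65,536 /64s
for name, net in zip(["hosts", "containers", "vpn"], subnets):
    print(f"{name:10} {net}")
# hosts      2001:db8::/64
# containers 2001:db8:0:1::/64
# vpn        2001:db8:0:2::/64
```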
As for the NDP proxy hack being easy: well, sure, there are more difficult things (and yes, I once wrote an NDP proxy daemon myself, which wasn't particularly difficult either). But it still adds complexity, for no real reason. The question isn't whether you can work around this deficiency. The question is: what is the point of this deficiency in the first place?
Plus, you still only have one /64, which is just braindead.
And, with a proxy NDP setup, you are vulnerable to an NDP cache DoS: your upstream router has to do NDP resolution for each individual address (instead of just the one router address that your prefix is routed to). An enterprising attacker can exploit that by simply sending packets to a few thousand (or million) of your addresses, overloading that router's NDP cache and potentially making your machine(s) unreachable.
Yes, Linode will route /64 and /48 subnets to you. They'll even give you several of each if you ask, and the subnets can be failed over to other nodes for high availability.
Linode are the only ones I have seen do IPv6 correctly. I wish they also knew how to do security correctly; unfortunately I can no longer recommend them in good conscience.
At vultr you have the option to "bring your own IP", as they call it. This really just allows you to use BGP to announce whatever prefixes you own. You could have a /40 behind each of your servers if you wanted to.
Which is great to have as an option. As the general mechanism, though, it's a stupid idea. The whole point of having such a large address space with IPv6 is to enable aggregation of routes: every vultr datacenter should have at most one route in the global IPv6 routing table, for a huge prefix that all their customers are aggregated under. Having customers announce their own provider-independent address space has its legitimate uses, but every such announcement adds an entry to the global routing table, which is why it's a terrible idea to do this instead of just assigning customers /48s from provider-aggregated space.
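The aggregation math is easy to see (illustrative prefixes from documentation space; a real hoster allocation would differ):

```python
import ipaddress

provider = ipaddress.ip_network("2001:db8::/32")  # stand-in for a hoster's allocation

# Thousands of customer /48s carved out of provider-aggregated space...
customers = [ipaddress.ip_network(f"2001:db8:{i:x}::/48") for i in range(4096)]
assert all(c.subnet_of(provider) for c in customers)

# ...still collapse to a single aggregate route, not 4096 entries.
print(list(ipaddress.collapse_addresses(customers)))
# [IPv6Network('2001:db8::/36')]
```

With provider-independent space, each of those customers would instead be a separate entry in every backbone router's table.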
What are the popular use cases where you need more than 16 IPs (use cases applicable to the small-ish cloud servers DigitalOcean offers)? I'm genuinely curious.
You could run a VPN, you could give each service you have its own address, you could expose every docker container to the world as its own address, etc.
However much space you personally need, it's beneficial to be isolated on a separate /64 from other clients. A lot of services block IPv6 abuse on a /64 basis. It sucks when one jerk gets everyone else in the data center blocked by something.
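For illustration, this is roughly what such abuse filters do (made-up addresses): widen the offending address to its covering /64 and ban that, which takes out every neighbor in the same prefix.

```python
import ipaddress

def covering_64(addr: str) -> ipaddress.IPv6Network:
    """The /64 an abuse filter would typically ban for one bad address."""
    return ipaddress.ip_interface(f"{addr}/64").network

abuser   = "2001:db8:0:7::bad"
neighbor = "2001:db8:0:7::cafe"  # innocent customer sharing the /64

banned = covering_64(abuser)
print(banned)                                    # 2001:db8:0:7::/64
print(ipaddress.ip_address(neighbor) in banned)  # True -- collateral damage
```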
One somewhat popular use case for a large address space is simply tunneling it to your office or servers located elsewhere, given that many on-premises providers also suck at address assignment.
However, your mistake is probably in thinking in terms of "number of addresses", as you would with IPv4. IPv6 is meant to be easy to deploy. Part of the specs are mechanisms for automatic address allocation, and one big reason the IPv6 address space is so huge in the first place is to enable those mechanisms, and thus avoid all the bureaucratic overhead that address exhaustion causes.
As such, it is important that address allocations are standardized, so that software implementing those mechanisms just works. The best-known mechanism is probably SLAAC (stateless address autoconfiguration). SLAAC requires a /64 per network segment to work. If you don't have that, you have to configure things manually, for no reason at all.
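To make the /64 requirement concrete, here's roughly how classic EUI-64 SLAAC derives an address (MAC and prefix below are examples; modern hosts often use privacy addresses instead, but the interface ID is 64 bits either way): the host expands its 48-bit MAC into a 64-bit interface identifier, which is exactly why the prefix must leave 64 bits free.

```python
import ipaddress

def slaac_eui64(prefix: ipaddress.IPv6Network, mac: str) -> ipaddress.IPv6Address:
    """Classic SLAAC: 48-bit MAC -> 64-bit EUI-64 interface ID, appended to a /64."""
    assert prefix.prefixlen == 64, "SLAAC needs exactly a /64"
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                       # flip the universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert ff:fe in the middle
    iid = int.from_bytes(eui64, "big")
    return prefix[iid]                                 # prefix bits + interface ID

net = ipaddress.ip_network("2001:db8:0:1::/64")
print(slaac_eui64(net, "52:54:00:12:34:56"))
# 2001:db8:0:1:5054:ff:fe12:3456
```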
Imagine a world where you would never have to manually assign addresses at all. That's kind of the goal with IPv6. Any ethernet segment always has a /64, so there is never any situation where you could connect a new machine and there wouldn't be any addresses available for it. You just plug it in, it performs router discovery, assigns itself addresses, and it just works. If it's supposed to have a DNS name, it might fire off a DNS update to the DNS server to make its addresses known to the world. And all of that without any need for some kind of DHCP server keeping track of assignments.
This is also applicable to virtual machines and containers. A common setup is to bridge virtual machines and containers to the host machine's ethernet interface. In that case, a container isn't really any different from a physical machine plugged into the ethernet, as far as address configuration is concerned. So, if the VM hoster's (virtual) ethernet interface has a /64 with router advertisements, you can simply start stuff in containers bridged to that interface, and they'll automatically have addresses assigned without you having to do anything. So, even if you only have three containers, some fixed allocation of 16 addresses prevents that automatic configuration from working. But also, it's completely reasonable to have more than 16 containers running in a VM, in which case you'd be out of luck with 16 addresses, and would have to start using workarounds like NAT and non-public addresses and stuff that terribly complicate things--again, for no reason whatsoever.
But also, just bridging stuff isn't the only reasonable thing to do. You might want a bunch of containers isolated a bit, behind a packet filter. So you set up a virtual bridge, add all those containers to it, and set up routing between that bridge and your upstream ethernet interface. You obviously still want autoconfiguration to work for the containers on that isolated virtual ethernet, and you need a /64 for that. But you also need a /64 for your host's segment, and maybe another for a bridge with completely unfiltered containers. Now you already need two or three /64s. Unless you prefer configuring everything manually, that is.
Also, the long-term perspective should be to automate even the assignment of /64 prefixes to any such subnets that you might configure. In principle, such a mechanism already exists, in the form of DHCPv6-PD (prefix delegation), but it isn't really used much in data center setups (but, well, IPv6 isn't used much either ...). With that in place, you should be able to simply rent some virtual machine in the cloud, install some "private cloud" software stack, and then have a user interface that lets you create virtual network segments, filters between them, and containers for various applications connected to those segments--without ever having to think about addresses at all, without ever entering any address, without even seeing any addresses. Addresses are simply always available, as many as you need, and standardized protocols take care of assigning unique addresses to everything that needs an address.
Also, as a result of that, it should be outright trivial to migrate everything to a different hoster. Everything should be able to renumber fully automatically to the prefix allocated by the new hoster. You simply shut down your VM, transfer an image to the new hoster, and boot it back up, and it should just work.
Addresses should never be a factor to consider when designing your network/server/hosting setup. If you think something should be in a container, you should be able to launch a container, and it will have an address.
Great to finally see AWS start moving to IPv6! I'm sure just getting S3 to support it was a massive effort on the backend infrastructure (in all their data centers around the world, except for China apparently), and they're going to spend a lot of time looking at the new traffic and tweaking their network.
I don't foresee just S3 access having a huge impact outside of Amazon, though, since IPv6 has the biggest impact on end-users, and I presume most of those are hitting CloudFront, ELB, or EC2 endpoints.
It's kind of sucky, but there are a couple of good reasons for it:
1. Some people have broken IPv6 connectivity. Their OS will prefer the IPv6 address and won't fall back to the IPv4 address until after a lengthy timeout.
2. People might have IP address restrictions in place that currently only have IPv4 addresses whitelisted. If suddenly they started using IPv6 addresses they'd be blocked and their applications would break.
(1) is the user's OS's responsibility. All modern OSes already have a complex system in place for this (ICMP preflighting, DNS resolution timing checks, and simultaneous v4/v6 connection attempts, among other things). We're already in a world where networks have to be IPv6-first or IPv6-only (mobile networks are IPv6-first and soon likely to be IPv6-only; several large US cable ISPs have moved or are moving to IPv6-first/IPv6-only; etc.), at least in the consumer world.
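For reference, that fallback machinery is standardized as Happy Eyeballs (RFC 8305), and some stacks expose it directly. A minimal sketch with Python's asyncio (3.8+; the host is a placeholder): instead of hanging for a full TCP timeout on broken v6 connectivity, a parallel attempt on the next address starts after a short delay.

```python
import asyncio

async def probe(host: str, port: int = 443) -> None:
    # happy_eyeballs_delay: kick off a parallel attempt on the next address
    # (typically IPv4 after IPv6) if the first hasn't connected within 250 ms.
    reader, writer = await asyncio.open_connection(
        host, port, happy_eyeballs_delay=0.25
    )
    print("connected via", writer.get_extra_info("peername"))
    writer.close()
    await writer.wait_closed()

asyncio.run(probe("example.com"))  # placeholder host
```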
I don't care whose "responsibility" it is: if it costs my company a customer, and it is something that can be fixed on my end with a seemingly simple workaround, then it is something that I should fix. I'm glad that Amazon is thinking this through, particularly since rolling out the upgrade could break existing users. I mean: imagine if 0.1% of Netflix customers woke up today to find it broken, and the only immediate recourse Netflix had, as the complaints flooded in, was "you will need to upgrade your OS: we can no longer support whatever slightly older version you have."
Might be to avoid problems with client libraries, firewalls, ... that don't react properly to it? E.g. if something tries to use IPv6 behind a firewall that blocks it and doesn't fall back to IPv4?
Backward compatibility. If your access rules whitelist certain IPv4 addresses, then adding IPv6 support to the same endpoint may produce 403s for clients that try IPv6 first.
I'll note that Azure fully supports IPv6 because it was obvious 10 years ago that building a cloud host without it was a dumb idea. Why it is taking AWS and Google so long I have no idea.
"The foundational work to enable IPv6 in the Azure environment is well underway. However, we are unable to share a date when IPv6 support will be generally available at this time."
If your application requires a specific protocol version, Google also provides IPv4-only, IPv6-only and dual-stack targets unaffected by the list of DNS resolvers above. The available CNAME targets are:
ghs.googlehosted.com: Automatic (default)
ghs4.googlehosted.com: IPv4 only
ghs6.googlehosted.com: IPv6 only
ghs46.googlehosted.com: IPv4 and IPv6
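A quick way to check what those targets actually resolve to (a getaddrinfo sketch; results depend on your resolver):

```python
import socket

for target in ("ghs4.googlehosted.com", "ghs6.googlehosted.com", "ghs46.googlehosted.com"):
    try:
        infos = socket.getaddrinfo(target, 443, proto=socket.IPPROTO_TCP)
    except socket.gaierror as e:
        print(f"{target}: {e}")
        continue
    families = {"v4" if fam == socket.AF_INET else "v6" for fam, *_ in infos}
    print(f"{target}: {sorted(families)}")
# expected: ghs4 -> ['v4'], ghs6 -> ['v6'], ghs46 -> ['v4', 'v6']
```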
"Before EC2 (VPC) supports IPv6, we recommend that you continue using the standard IPv4-only endpoints from EC2"
In other words, IPv6 in VPC is coming.