Here in the UK, for example, we know we have an energy crisis looming. We know we need to build more power stations. We have known this for over 20 years. We have the technology, money and skills, but only this year has one single new power station been signed off on.
Motor sport knew neck injury was a common way to die or be crippled. They had the HANS device, which would have saved the vast majority of those lives, but people refused to use it. Not so much money as attitude. Then a few drivers died who could have been saved, and there was a mass push to adopt the HANS device.
Same sort of thing with Islamic terrorists. We knew of the threat, but did little to stop it until the catastrophe of 9/11.
Look at AIDS. No one did much until the numbers got scary.
What we do is wait and wait until there is a catastrophic failure, then panic the known solution into place while blaming everyone in sight. Of course mistakes then happen, because it's a rush job.
In short, this is perfectly normal.
I suspect that we won't get IPv6 properly until the internet feels and looks broken. When the average punter is inconvenienced en masse is when IPv6 will properly roll out. In this case, it must be very expensive to blanket upgrade, and prices will have to rise to cover it. But the people paying those higher prices won't see immediate benefit; in effect they will be expected to invest up front while IPv6 is rolled out. Or, for once, will it be different?
I have a suggestion. ISPs could offer a deal where the customer pays more now, but later gets the IPv6 upgrade for free, or something else beneficial in return for the investment. Let that increased cost actually be a small private investment in one's ISP.
IMO, the main issue with IPv6 is its overall complexity compared to IPv4. While its end goal is laudable, I agree with another comment that there may have been a better halfway solution (IPv4.1) which could have eased the transition.
I say this knowing full well that the problem is very complex and the people working on it brilliant, which makes me believe there may have been no other way.
And what complexity would that be, if I dare challenge you? I say it's the contrary: IPv6 is dead simple. It's IPv4, with NATs and other networking hacks like them, that makes everything which should be simple complex.
Really: IPv6 is a no-brainer, and the only thing remotely "complex" about it would be that addresses are larger, and that would be because IPv6 was designed to accommodate a larger address-space. But since end-users will be using DNS anyway, that's not going to bother anyone except techies.
Techies who should be able to read a few simple FAQs on the subject and be done.
TLDR: If you find IPv6 "complex", that's because you've been too lazy to look into it. I recommend you go do something about that.
A few FAQs? I think you're being a bit disingenuous. While the protocol itself is relatively simple to understand, the complexity related to its implementation is immense which is why we're still talking about it and not fully doing it.
Why it's complex:
1. The recommended approach is dual-stack. This means that network admins/system administrators will essentially have to manage two separate networks. Of course, this is assuming you have core-to-edge IPv6 support. If not, then you're upgrading all of your hardware.
2. Cost. Predominantly why we're still here. Often easy for us to push aside in our cosy developed world, but that ignores the enormous amount of legacy hardware deployed all over the world that will simply not work, and, quite frankly, solutions like tunnelling read a lot like the supposed pitfalls of IPv4.
3. Training. It's not a matter of a few FAQs to learn; for a lot of technicians it is a matter of re-certification, as all hardware has to be replaced, which likely means new operating systems and firmware, plus proxying/tunnelling servers and devices à la Teredo/ISATAP.
For anyone that says it's a "no-brainer", I say that you have little to no experience in managing large heterogeneous networks in cost-strapped organizations which, up until now, have had little to no reason to upgrade, as the IPv4 address exhaustion issue does not really affect them (yet).
With IPv6, most issues you have in IPv4 space would be solved the instant you enable IPv6. All the hacks go away. It's a nice, clean network again.
The cost of getting IPv6-capable routers should be negligible, as anything worth the money bought in the last half decade already supports it. Chances are, if you haven't replaced it yet, you will anyway, due to the stupendously increased bandwidth requirements we've seen in recent years.
I'm not saying it's a free ride, but I am saying it's a no-brainer, because continuing to patch the rugged IPv4 landscape we have now is not going to stay viable. And then you might as well invest in the proper solution right away instead of wasting money, time and resources on a stop-gap solution.
Personally, I don't.
I guess I'm a glass half-full person on this one.
One possibility that might have worked would be an option field to store the internal IP address from NAT. This would allow software that knew about the option to work around NAT. The expanded address would basically become a list of IPv4 addresses. Unfortunately, this would require support from NAT routers, and consumer routers are probably the slowest equipment to upgrade. This would also require support from operating systems at least, but might be possible for most applications to be unchanged. Protocols, like FTP or SIP, that include addresses would need to be extended, but would get the most benefit from easier NAT traversal.
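A minimal sketch of what the wire format of such an option might look like, assuming the standard IPv4 type/length option layout. The option number and the whole layout here are entirely hypothetical; nothing like this was ever assigned:

```python
import socket
import struct

# Hypothetical option: one type byte, one length byte, then one 4-byte
# internal address per NAT layer traversed. The option number 0x9e is
# made up for illustration; it is not a real assigned IP option.
NAT_CHAIN_OPT = 0x9e

def build_nat_chain_option(internal_addrs):
    """Encode a chain of NAT-internal IPv4 addresses as an IP option blob."""
    payload = b"".join(socket.inet_aton(a) for a in internal_addrs)
    # Length field counts the whole option, including the two header bytes.
    return struct.pack("!BB", NAT_CHAIN_OPT, 2 + len(payload)) + payload
```

Each NAT hop would append its internal-side address, so NAT-aware software could reconstruct the full path; this is exactly the kind of cooperation from routers and OSes the paragraph above points out would be required.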
All of the changes have already been done for IPv6 (or not done like with Skype).
People keep suggesting we could've somehow gone half-way, but it's an illogical proposition.
Router ASICs are keyed for IPv4 - that is the only thing they will ever do. Change anything about IPv4, and you have to replace all the routers (heavy duty ones, which do the heavy lifting of the internet).
There is no half-way solution to this problem that is somehow "easier" than existing transition mechanisms.
That's a broad oversimplification. Use the Cisco 6500 line as an example: it's a platform that's very long in the tooth, and through modularity is still functioning quite well, even for IPv6.
Also: let's not pretend that similar issues weren't hit with v4, eg, the routing table explosion.
You can change plenty of things without fork-lifting entire platforms, and serious long-term challenges exist even without considering v6.
You can't "just change the address length". They're hardware ICs. If the address is not 32 bits long, then it's a malformed packet that should be ignored. They cannot handle it differently. They can't be reprogrammed.
There's been some consternation over Cisco stuff in this capacity, since they use an ASIC for IPv4 and a software-mechanism for IPv6 which is not nearly as fast.
The cost and complexity of changing hardware is enormous compared to every other part of the process. IPv6 in software has been solved, and where it can't be solved you can shim-it trivially by comparison. Software is not what's holding it back.
The internet goes to some lengths to avoid routing loops as well, so once a packet is past one router, we're not going to be able to easily spin it around and send it back to get to the right place. The net effect would be to randomly DDoS a bunch of devices which happen to sit behind non-IPv4.5 hardware and so receive the 50% of the internet's traffic which doesn't get routed properly by an upstream router.
Let's see what the situation with Germany's Telekom is like:
2010: http://heise.de/-1102458 - At the end of 2011 all DSL connections will be switched to ipv4/ipv6 dual stack
Mid 2012: http://heise.de/-1605061 - At the end of 2012 they will provide ipv4/ipv6 dual stack
End of 2012: http://heise.de/-1763557 - Only customers with VOIP or ISDN are getting ipv6.
It's not "me" who loves it, it's the ISPs.
I think you got that the wrong way around. Old analog and ISDN connections are specifically mentioned to never get v6 (which makes sense, I guess). Only new contracts with a fixed IP will definitely get it.
But... Why does it make sense?
Alternatively, ISDN/dialup gateways just don't support IPv6 because they were all designed in 1995.
Here you are describing the network-effect problem. If no ISPs support IPv6, it makes no sense for websites to support it. If no websites support it, it makes no sense for ISPs to support it.
You're right that everyone with an address needs no new address to be globally reachable, but in order to actually fix the issue we need everyone to switch. The protocol versions just don't inter-operate.
From the total internet IP scans I've seen recently, there is also a significant percentage of allocated IPv4 space which is dark. Out of addresses we are not.
For example: http://www.nsa.gov/ia/_files/factsheets/rhel5-pamphlet-i731....
Yeah, I've never understood that. I get that ping and traceroute have their own IPv6 versions, but that's sort of understandable (maybe).
A firewall, though, is a firewall; it shouldn't matter which IP protocol version is in play. TCP and UDP don't have separate tools. The maintainers should clean it all up and put it into one iptables tool. If you want only IPv4, use a -4 flag, and ditto for IPv6 with a -6 flag. Heck, for most rules you could just infer it from the nature of the src and dst addresses.
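The "infer it from the addresses" idea is easy to sketch. This is a toy dispatcher, not any real iptables interface; the function name is mine:

```python
import ipaddress

def rule_family(src, dst):
    """Infer whether a firewall rule is v4 or v6 from its addresses,
    as a hypothetical unified tool could do."""
    fams = {ipaddress.ip_network(a).version for a in (src, dst)}
    if len(fams) != 1:
        raise ValueError("source and destination must be the same family")
    return fams.pop()  # 4 or 6: which rule table this rule belongs in
```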
And since layer 3 addresses are different between IP versions, we need different rules. (There may also be additional concerns if you enable V4 mapped addresses for IPv6)
As with the original adoption of NAT, it will be slightly bumpy at first. There'll surely need to evolve some sort of port forwarding protocol (hopefully something more like NAT-PMP and not like uPnP) that will allow incoming connections through what will now be double NATs (one at the ISP and one in your home), since we can assume people won't remove their original NAT just because a new one has been added. People like their personal NAT, it makes them feel safe. And I think those people are right to feel that way.
And then if you want to run a public web server, for now you'll have to pay extra for a static IP, just like you have had to do for years now. That's not a big deal, there are way less than 4 billion public web servers right now. The market won't run out right away. As we get even tighter on addresses, the price of a static IP will rise.
And when that becomes a problem (prices too high), someone will find a way to use something like DNS SRV records to have multiple web servers on a single IP address, using different port numbers, and DNS will tell your web browser which port to use. This will require browsers to support whatever new standard, but I figure we have 10+ years before this level of workaround is absolutely critical.
Remember, if you allow for identifying servers by ip:port instead of just IP, you have roughly 65536 times as many "addresses" available. That's enough to last pretty much forever.
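If browsers did grow SRV support, the target-selection logic is already specified by RFC 2782: lowest priority wins, with weighted-random tie-breaking among records at that priority. A sketch, with record tuples as my own shorthand for parsed SRV data:

```python
import random

def pick_srv(records):
    """Pick one SRV record per RFC 2782.
    records: list of (priority, weight, port, target) tuples."""
    best = min(r[0] for r in records)            # lowest priority wins
    pool = [r for r in records if r[0] == best]
    total = sum(r[1] for r in pool)
    if total == 0:
        return random.choice(pool)               # all weights zero: any
    point = random.randint(1, total)             # weighted random choice
    for rec in pool:
        point -= rec[1]
        if point <= 0:
            return rec
```

A browser would then connect to `(target, port)` of the chosen record, which is how one IP address could front many servers on different ports.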
We don't need IPv6. We will need new routers to do the carrier grade NATting (and it's a very hard problem to NAT at that speed/scale, but hey, that's what money is for. And ISPs can deploy it incrementally, as they run out of addresses).
You're also underestimating the difficulty of deploying some of your proposed "solutions." DNS SRV has seen no adoption outside of a few services like SIP and XMPP. Getting web browsers, and every other last client out there that speaks HTTP, to support SRV won't be easy. Just look at the glacial pace other web standards move at.
And factoring in the port number doesn't give you 65536 times as many "addresses." Things like routing and ARP happen at the address level. Under your "solution" you wouldn't be able to migrate/fail over services just by moving their IP addresses between hosts. DNS-based solutions have never worked well for this.
It's still the internet even though you need DNS to turn names into IP addresses, right? It's just a little more complicated. That's what the new world will be: the Internet, but a little more complicated. Which is exactly what happened when DNS, then CIDR, then NAT were introduced.
That's the point of direct addressing: removing ambiguity and allowing direct connections.
Edit: The cost of patching IPv4 is just another reason to move deliberately to IPv6 (or something that allows direct addressing again, but for the sake of argument, IPv6 is the leading candidate).
Edit: Deleted confused nonsense about NAT-PMP.
There's no doubt that direct addressing is simpler and more appealing. Yes. But it requires worldwide 100% deployment of a replacement to IPv4, which is not simple at all.
The cost of patching IPv4 and working around the quirks seems similar to, and less desirable than, the cost of simply running IPv6 in parallel. I wouldn't describe it as all-or-nothing, but I would say ISPs need to help by providing low-latency tunnels/advertised routes. (Hurricane Electric can't handle everything, going forward.)
"All-or-nothing" itself creates an obstacle.
By "strictly worse" I mean there isn't, currently, any server anybody cares about that you can connect to solely with ipv6; there are, however, numerous sites that are reachable only over ipv4.
The people at the edge who don't care are, sadly, the ones who would most benefit. I agree that that is a problem. These social problems (the consumer apathy and the willingness of ISPs to exploit that to make a peer network a broadcast tree) are admittedly overwhelming but, as you also note, the costs to ISPs are minor: IPv6 can be provided to the edge, even if it's ultimately tunneled over IPv4-only hardware.
This is more a matter of technology-leaders, IMHO, pushing/expecting ISPs to do the right thing for once (if we can spare a few minutes from selling censorship to dictators). History has shown they aren't going to do it without public pressure. Their preferred distribution medium (cable TV) already existed: it was people who understood that it was a peer network that drove the adoption of the Internet. If IPv6 spends the first 10 years or more being used exclusively by that group of people, fine, but it's still worth promoting. ISPs only started taking it seriously in the last few years, so there's a long way to go, but I think it's a reasonable goal to get an upstream IPv6 router advertisement, eliminating the need for tunnels, to every IPv4-connected home in the next 5 years (it is really just a matter of installing Linux, or your preferred OS, on a spare box until the load dictates an upgrade; there is no chicken-or-egg problem).
If a consumer-facing network with millions and millions of devices uses IPv6 exclusively for their management network to keep the service running effectively and efficiently, do I care? Do they?
The key difference here is that the NAT is now outside the user's control, and if they want to fool around with anything funky and put it on the public internet (like a new Tim Berners-Lee making a new World Wide Web), they won't be able to do so anymore. Oops.
The old internet let people invent things and publish it as they saw fit. And that's why the internet we have now is awesome.
This new internet you are describing makes people apply for permission to publish things. That sounds like the exact opposite of what the internet was designed to do, and of what was required for the internet we have today to evolve.
It is a very short-sighted strategy pushed forward by people too lazy to read up on IPv6 and see how simple it really is. It's not earth-shatteringly different. The addresses are bigger, by design, and that's about it.
I see you're using historical precedent:
640k of RAM ought to be enough
32-bit memory addresses will always be large enough
2 digits is enough to encode a year since it will be the 20th century forever
All the characters anyone would want to use fit within 8 bits
Separately, I think addresses are slightly different than those other measurements. I'd think the number of needed public server ip:port addresses is roughly on the same order of magnitude as the number of humans, or perhaps less. By that measurement, 4 billion is almost enough, but clearly not enough.
I wouldn't want to have to bet on that, but I don't have to. There's always another layer of indirection possible. And that layer of indirection will always be infinitely easier to deploy than a replacement to IPv4.
It is a big deal. The price is a signal of scarcity, and this means some webservers won't be deployed because of the scarcity. The only difference is that instead of plain running out of new addresses we will be gradually running out, while screwing those who can't afford public IP addresses.
The price is a signal of a scarce resource, when scarcity can be avoided by switching to larger addresses.
If you try to ssh user@fe80:0123::1, for example, you'll get an invalid argument error from connect(). In this case, you'd do an ssh user@fe80:0123::1%eth0, assuming that address is the link local address of eth0 on that host. Alternatively, don't use the link local address.
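The `%eth0` suffix is a zone (scope) ID: link-local addresses are only unique per interface, so the kernel needs to know which interface you mean. The resolver turns the zone into the numeric `sin6_scope_id`; a small illustration (the interface name is just the example from above):

```python
import socket

host = "fe80:0123::1%eth0"        # 'eth0' is illustrative
addr, _, zone = host.partition("%")

# On a host where that interface exists, getaddrinfo accepts the %zone
# form and returns a 4-tuple sockaddr whose last element is the numeric
# scope id, e.g.:
#   socket.getaddrinfo(host, 22, socket.AF_INET6)
#     -> [(..., ('fe80:123::1', 22, 0, <scope_id>)), ...]
print(addr, zone)
```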
If the routing table grows too fast (eg, due to fragmentation of the available address space) there is a very real chance that the internet as it currently exists will break suddenly for some users.
Not that IPv6 fixes this automatically -- it grows the routing table as well -- but v4 fragmentation does cause very real problems. You can't just pick addresses up off the floor and put them to use. Each routed block carries a very real marginal cost for everyone globally who services that route.
Beej's Guide shows how to use the C socket APIs in a dual-stack manner:
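The same dual-stack pattern Beej describes for C maps directly onto Python's socket module: ask getaddrinfo for both families and try each candidate in order (a plain sketch, without Happy Eyeballs racing; not Beej's code):

```python
import socket

def connect_dual_stack(host, port):
    """Try every address getaddrinfo returns, IPv6 and IPv4, until one connects."""
    last_err = None
    for af, kind, proto, _, sa in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        s = socket.socket(af, kind, proto)
        try:
            s.connect(sa)
            return s                 # first family that works wins
        except OSError as e:
            s.close()
            last_err = e
    raise last_err or OSError("getaddrinfo returned nothing")
```

The application never hard-codes an address family, which is the whole point of writing dual-stack code.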
For testing your own network and browser, http://test-ipv6.com/ is quite good.
If that were true, then it wouldn't be able to provide any better end-to-end connectivity than IPv4, and would thus be a completely wasted effort.
As The Register commented: "IPv6 was neither designed for small biz nor consumers. IPv6 was designed by big-ticket network engineers bearing global infrastructure and enormous enterprise networks in mind. Learned gentlemen who live in a world where buying IBM and connecting it with Cisco never got anyone fired."
They have reinvented the OSI stack, and we know how well that worked in practice (I was third line for the UK's X.400, so I know what OSI is like).
There is just no reason to use NAT if you have enough addresses. It's a hack to solve address scarcity, and doesn't add any security or any other benefits (unless you don't have a firewall, but you've got much bigger problems in that case!).
The RFCs for IPv6 address allocation say that every end site should get a /56 allocation - that is, 256 subnets of /64 addresses. A business site should be able to get a /48 (65,536 /64 networks) at no extra cost. Perhaps a mobile device with a cellular modem would get a single /64, but that is the smallest allocation.
1. A /64 network has 2^64 addresses.
Edit: To respond to your other comment, no, there's nothing stopping ISPs from inflicting NAT on IPv6 too, other than the consumer asking "why am I behind NAT when there is no shortage of addresses?".
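The allocation sizes above are easy to sanity-check with Python's ipaddress module (the prefixes are documentation examples):

```python
import ipaddress

site = ipaddress.ip_network("2001:db8:ab00::/56")   # a /56 end-site
# Number of /64 subnets a /56 can carve out: 2^(64-56)
print(sum(1 for _ in site.subnets(new_prefix=64)))  # → 256

# A business /48 yields 2^(64-48) /64 networks:
print(2 ** (64 - 48))                               # → 65536
```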
If they are truly draconian cabals of evil, I don't expect "well, you don't need it as much" would stop them.
But only 1% of traffic is IPv6, mostly because customer equipment doesn't support it. Cable modems need to be upgraded to DOCSIS 3.0. Most consumer routers also don't support IPv6, including many models currently for sale, and the manufacturers don't seem to have plans to release upgrades for IPv6 support. The third-party firmwares like DD-WRT and OpenWRT have some support, but it still isn't great, and most customers wouldn't be willing or able to install new firmware.
Mobile seems to be the area with the most IPv6 adoption. Verizon Wireless has 25% of traffic on IPv6. Verizon required IPv6 for its LTE network, and many of the popular mobile sites like Google and Facebook support IPv6.
First, the supply side: if you read this article through, you saw the demand curve inflections at the classful-classless change and at the NAT change. Downthread someone already mentioned carrier-NAT, which is one more potential inflection. But there are others; the biggest could be a liquid market for routable blocks. There are large companies pointlessly squatting on huge allocations; some of them assign routable IP addresses for every desktop in their network, despite (sanely) firewalling them off entirely from the Internet. A market for IP addresses would make that waste expensive and return potentially large numbers of addresses back to general use.
Second, the demand side: It will no doubt enrage HN nerds to hear this, but most Internet users do not need a first-class address. In fact, it's possible that most Internet users are poorly-served by first-class addresses! They never take advantage of them, but holding them exposes them to attacks. Because mass-market application developers in 201x have to assume large portions of their users don't have direct Internet connectivity, technology seems to be trending away from things that require it, and towards tunneling, HTTP-driven solutions, and third-party rendezvous.
Finally: Who says IPv6 needs to be the future? Bernstein's summary of the problems with transitioning is getting old, but the points in it seem valid. If we're going to forklift a whole new collection of libraries and programs onto everyone's computer, why don't we just deprecate the whole IP layer and build something better? I don't think I see the reason why IP can't just be to 2020 what Ethernet was to 1990: a common connectivity layer we use to build better, more interesting network layers on top of.
The core functionality of the IP protocol has served beautifully over the last 20 years, but the frills and features have not. IP multicast is a failure. IPSEC is a failure. QOS is still a tool limited to network engineers. We barely have anycast.
These are all features that would be valuable if they work, but that don't work because the end to end argument militates against them --- their service models evolve too fast for the infrastructure to keep up, and they're pulled in different directions by different users anyways.
We can get new features and unbounded direct connectivity with overlay networks. We have only the most basic level of experience with overlays --- BitTorrent, Skype --- but the experience we've had seems to indicate that if you have a problem users care about, overlays tend to solve them nicely. We should start generalizing our experience with successful P2P systems like Skype and pull in some of the academic work (like the MIT PDOS RON project) and come up with a general-purpose connectivity layer that treats IPv4 like IPv4 treats Ethernet.
Special bonus to that strategy: Verizon and AT&T don't really get a say in how those overlays work, and nobody needs to wait for a standards committee to agree on whether things are going to be big endian or use ASN.1 or XML.
> A market for IP addresses would make that waste expensive and return potentially large numbers of addresses back to general use.
The concept of legal ownership of IP addresses as property is explicitly denied by ARIN and RIPE, and for good reason. If they were property, the addresses would be held and hoarded as investments (which can be seen in the subset which is already owned in this manner). They would also fragment, which leads to worse performance and higher costs. Given those two reasons, the first point fails: it's a really bad idea.
> it's possible that most Internet users are poorly-served by first-class addresses!
Internet users, or let's call them end users, want to use software that works, is effective, and is cheap. Software with those three attributes serves them. However, with NAT, those attributes are directly harmed. Some software will never work with NAT. Of the software that does work, some is much less effective, with harmed latency and privacy. And thirdly, NAT adds cost to software development in the form of complexity, meaning that the end cost for users increases. Thus, because of NAT, software is less useful and more costly, which in turn harms Internet users.
> Who says IPv6 needs to be the future?
IPv6 was the smallest possible change to fix the problem, while still maintaining a form of performance requirement that overlay networks don't. Performance here being 1) latency, 2) router capacity, 3) privacy/security. If a new protocol would fulfill the performance requirements, then there would be a reason to discuss replacing IPv6 with something better, but until that time is here, IPv6 is the upgrade that is necessary for the Internet, end-users, and suppliers.
Really? Going from a 32 bit addressing scheme to 33 bits would double the amount of addresses, pushing the problem a way down the track. Sticking to octets for simplicity, going to 40 bit addressing (five octets) would provide as many addresses as we need for the near future. But they went for 128 bit addressing with IPv6 - I fail to see how that was the smallest change that they could have made.
Making it backwards compatible with v4 would have been an even smaller change and we'd probably be using it by now if they hadn't broken compatibility.
And how was picking an address size that didn't match up to the native integer size of any common CPU good for performance? Maybe they expected us all to be running 128 bit CPUs by now.
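For a sense of scale of the widths being argued about:

```python
# Each extra bit doubles the space; the widths discussed in this thread:
for bits in (32, 33, 40, 64, 128):
    print(f"{bits:>3}-bit addresses: {2 ** bits:.3e}")
```

One bit doubles IPv4's space; 40 bits is 256 times it; 64 bits is already beyond any plausible demand, which is the crux of the "smallest change" complaint.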
This can be the difference between a 1-line patch to a C program and a 30 line patch.
Of course, the standards committees don't care about that cost (it is an externality to them), because "rough consensus and working code" stopped being the code of the IETF more than a decade ago.
I do not know if such attempts have been made or considered, or whether they would be easier than the current approach.
They could have defined a 64 bit address space and an escape hatch/upgrade path in the extraordinarily unlikely event that we ran out of addresses --- you could allocate a static IP address to every email sent in 2012, spam included, and still consume only 0.0003% of a 64 bit address space. You could address every page in Google's index in 0.00000005% of that address space.
In a 64 bit addressing scheme, IP addresses would remain scalar integers in the vast majority of programming environments used on the Internet (and where they aren't scalars, 128 bit addresses are even worse!). Instead, we have to forklift out not just the code that bakes in 32 bits as the width of an address, but also all the code built on the assumption that addresses are numbers you can compute with.
And IPv6 is really a 64 bit addressing scheme already: 64 bits for the network, and another 64 bits for the host within that network. The latter part can be ignored by routers outside the target network.
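That split is easy to see by treating the address as an integer (the address is from the documentation range):

```python
import ipaddress

addr = int(ipaddress.ip_address("2001:db8::1"))
prefix = addr >> 64            # routing half: what core routers examine
iid = addr & ((1 << 64) - 1)   # interface identifier within the network
print(hex(prefix), hex(iid))   # → 0x20010db800000000 0x1
```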
40 bit address is also still possible to memorize (unlike 128 bit address).
First: IPv6 is not the smallest possible change. IPv6 is a very large change, involving an infrastructure upgrade and software upgrades across the Internet. Overlay networks necessitate neither. Routers on the Internet can at the beginning remain ignorant of new overlays. Endpoints add software as and when they decide to participate in a specific overlay. Meanwhile, the existing IPv4/HTTP service model, which works just fine under NAT, continues to operate. This is a more incremental approach than IPv6 and thus by definition a smaller set of changes.
Second: NAT demonstrably doesn't harm the interests of most Internet users, because a huge fraction of satisfied Internet users are already NAT'd. But if you believe Internet user interests are harmed by NAT, you also must believe they're harmed by IPv4, which has no functioning multicast or workable group messaging and has a security model designed in the '70s. At this point, arguing for a NAT-less Internet is de facto an argument for a massive software upgrade. If we're going to upgrade, let's upgrade to something better than IPv6.
Finally: we already have hoarding. It's just hoarding of fiat allocations.
As for NAT, you are making an argumentum ad populum fallacy to counter evidential claims. NAT adds complexity to software design when it operates over the network; proof of this fact exists in RFCs and large design-consideration documents. There is also evidential proof that such complexity adds to the cost of developing software, with equal proof that increased development costs mean increased prices. NAT traversal for many services also adds bandwidth costs and increases latency. I have a hard time understanding the argument that increased development costs, higher latency, and more bandwidth would be neutral for the user. The fact that a large number of users, mostly limited to a single ISP, are content with higher costs and higher latency doesn't seem to me to be a good argument in favor of NAT.
I wish you'd stop trying to make me defend the NAT service model, because that argument is extremely boring. My point, which I think sees overwhelming evidence from just a cursory look at the modern Internet, is that most users are not harmed by NAT. Innovation continues despite its pervasiveness. We should use the time NAT has bought us to come up with something better than IPv6, which continues to bake critical policy decisions into $60,000-$200,000 Cisco router and switch chassis.
Really, CG-NAT is such an ugly hack that I think the only acceptable use for it is as a stop-gap: a way for somebody who has IPv6 to keep IPv4 connectivity to services that are behind the times.
Overlay networks are interesting, but that's no reason to not have the ability to have end-to-end IP routeability when pretty much the entire core of the Internet and many internet services support IPv6 already.
So what? DJB's linked criticism correctly predicted that would happen, and also correctly predicted that would not cause any significant uptake in ipv6.
The issue is that 98% of the ISP's customers will be happier with CG-NAT than with an ipv6 address, so the ISPs are going to spend money on the former and not the latter. This will be true as long as a majority of their customers connect to even one server without deployed ipv6.
The vast majority of people connected to the internet consider themselves a client, not a peer. CG-NAT is better for you than ipv6 if you are a client that wants to talk to even a single ipv4-only server.
I will point out, though, that Dual Stack Lite may end up being cheaper for ISPs than NAT444 because CGNs are relatively expensive and native IPv6 traffic (including Google and Netflix) doesn't have to go through a CGN.
Yes. Let's gamble the indefinite future health of the internet on what works for you, a single point of reference, right now, at the very beginning of the IPv4 shortage, without a single thought spared for use-cases not concerning you.
That sounds like a very good and not at all short-sighted strategy.
I do know that every place I have been to, the PS3/Xbox has been behind local NAT, so I would be surprised if CG-NAT broke these; my understanding also is that both of them have a central service for game discovery, which means there is no reason they couldn't implement NAT traversal there.
I also never said that CG-NAT wasn't more short-sighted than ipv6; rather that the ISPs have no motivation to deploy ipv6 and much motivation to deploy CG-NAT.
They would if they could use them; that is, if developers didn't "have to assume large portions of their users don't have direct Internet connectivity". How about we solve that instead?
> We have only the most basic level of experience with overlays --- BitTorrent, Skype --- but the experience we've had seems to indicate that if you have a problem users care about, overlays tend to solve them nicely.
How well would BitTorrent and Skype work if everyone's behind carrier-grade NAT? How could supernodes function?
I don't see why IP can't just be to 2020 what Ethernet was to 1990: a common connectivity layer we use to build better, more interesting network layers on top of.
Because it imposes stupid restrictions on the connections it's supposed to be serving, like not being able to connect any two arbitrary endpoints.
The rest of your comment presumes that the only connectivity on the Internet is via IP packets. But that's not true; it's an assumption based on historical patterns of access. Instead, assume the emergence of a routed message relay substrate built out of TCP connections (or even best effort SCTP or some other TCP-friendly datagram service). You'd "connect" to that next-generation Internet by making the same kind of connection your browser does, and having done so would be off to the races.
This stuff makes me want to blog again.
Ptacek, I'm a bit surprised to hear this from you of all people. Globally-unique addresses are very useful for things other than direct end-to-end routability.
Financial institution question: "What was the last device to hit that mission-critical system?" Answer: "Well, the access logs show 10.100.1.4, but that's the Internal --> DMZ NAT address, so I'll need to check the firewall logs to see."
Without NAT in play, network configurations become much easier to conceptualize. ACLs become easier to deal with. Things become more clear.
It's a bit of a slur to call even the legacy class-A allocations squatting; being able to do sparse allocation is a godsend. I'm looking forward to IPv6 for this alone.
Finally, I cringe at anything that dictates or predicts what an end-user needs: I don't think it's unreasonable to expect a global communications network to provide globally unique addressing.
This, a million times. People growing up these days, behind NATs, have probably never experienced the internet as it was in the early days.
Back when anyone online could offer anything to anyone else, without having to resort to "hosting providers" and being confined to what they "supported". You didn't need anyone to "provide" you with "hosting" because, hey, you were already online!
That sort of openness allowed the internet as we now know it to flourish and develop. Who are we to deny the future the same possibilities?
Unless routing is de facto illegal (in Canada, a prior bill would have made routing come with certain data-retention and interception obligations, with non-compliance being a crime punishable by $50K to $250K per day), or carries a prohibitive liability (e.g., Tor exits). Skype got a foothold in a different political environment (and its proxying isn't well publicized), and BitTorrent isn't a general proxy (and politicians would love to ban it, even though that's a technical non sequitur). If I could afford to run an IPv6 tunnel plus open router, I would (I would also love to run an open WiFi router), but I don't want my door kicked in at 2 AM either, so it would be nice if my ISP helped too. These are pretty big obstacles to experimenting with large-scale network alternatives. (Not disagreeing, just making an observation.)
(I've been running IPv6 on my home network since 2008 using a tunnel provider. My ISP does not support native IPv6 yet.)
Programmers do not yet know enough about IPv6, nor do they have the drive to learn about it.
In the consumer router space, the level and quality of IPv6 support is far from perfect. Even the third-party firmware is spotty. IPv6 works fairly well on OpenWRT trunk, thanks to 6relayd and odhcp6c, but the best you'll get from DD-WRT is a kernel module and a pile of scripts in the wiki.
Which is why this should be solved inside existing frameworks, so programmers don't need to meddle with it.
Want a socket to "www.google.com"? Sure, have it. Is that IPv4 or IPv6? You don't know, and you shouldn't have to care.
Just like Unicode, this should already be a solved problem in mature frameworks.
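This is, in fact, roughly how mature standard libraries already behave. A minimal Python sketch (hostnames and helper names here are illustrative, not from the original comment): `socket.create_connection()` resolves the name via `getaddrinfo()` and tries each returned address, IPv6 or IPv4, until one connects, so the calling code is identical in either case.

```python
import socket

def open_connection(host, port, timeout=10):
    """Connect to host:port without caring about the address family.

    socket.create_connection() calls getaddrinfo() under the hood and
    tries each returned address in order (typically IPv6 before IPv4,
    per the resolver's address-selection policy) until one succeeds.
    """
    return socket.create_connection((host, port), timeout=timeout)

def resolved_families(host, port):
    """Show the resolver's view: AF_UNSPEC asks for both families at once."""
    infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)
    return {info[0] for info in infos}
```

For example, `open_connection("www.google.com", 80)` returns a connected socket whether the machine has IPv6, IPv4, or both; only `resolved_families()` would reveal which families the resolver offered.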
Maybe because the software ISPs use is not ready yet?
If it weren't for the latencies and the liability, I would consider a P2P IPv6 overlay an acceptable transition plan.
Of the 3 VPS providers I use, two had IPv6 available. None of my local ISPs assign IPv6 addresses, even though nearly all hardware built in the last 5 years supports it.