There is no Plan B: why the IPv4-to-IPv6 transition will be ugly (arstechnica.com)
188 points by soundsop on Sept 30, 2010 | 191 comments



Consider carefully: to really deploy IPv6, we'll need to change most of our networking software (clients and servers) to handle v6 addresses. This would be hard enough if the v6 migration were a one-liner. But it isn't. Beyond the fact that you have to handle both sockaddr_in and sockaddr_in6, note that v4 addresses are scalar values for C programs. C does not have a scalar 128-bit type.
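To make the point concrete: the change typically means letting getaddrinfo() hand back opaque sockaddrs instead of treating the address as a number. A minimal sketch of a protocol-agnostic connect (the function name and error handling are just placeholders), which is the kind of rewrite every v4-assuming client needs:

    #include <string.h>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Connect to host:port over whichever family resolves (v4 or v6). */
    int connect_any(const char *host, const char *port) {
        struct addrinfo hints, *res, *rp;
        int fd = -1;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* accept both AF_INET and AF_INET6 */
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;
        for (rp = res; rp != NULL; rp = rp->ai_next) {
            fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd < 0)
                continue;
            /* ai_addr is an opaque sockaddr; no 32-bit (or 128-bit) scalar needed */
            if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                break;
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }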

Our v6 future is NAT'd to the horizon. If that's the case, what's the major win for v6 to this generation?

The notion that we're imminently going to run out of v4 addresses is at least somewhat artificial, since we run fiat allocation schemes right now and could migrate to a market-based allocation to replace it. There are companies addressing desktop machines with routable /16 components because they don't believe in NAT; there are companies grandfathered in to /8(!) allocations. A lot of this waste might stop if people had to pay for it.


A market is an excellent way to manage scarcity. However, IPv6 is preferable to well-managed scarcity because it effectively eliminates the scarcity: addresses under IPv6 are practically unlimited.

A typical IPv6 allocation for an end user is a /48. This allocation consists of 65,536 subnets, each of which has 18,446,744,073,709,551,616 individual addresses. This is just for the end user! Your grandma's cellphone will have 18,446,744,073,709,551,616 * 65,536 addresses available to it. No one will ever have to worry about running out again.
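(For anyone who wants to sanity-check the arithmetic: a /48 leaves 80 bits, split into 16 bits of subnet ID and 64 bits of interface ID. A throwaway C snippet, noting that 2^64 does not even fit in an unsigned 64-bit integer:)

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* A /48 leaves 128 - 48 = 80 bits for the site. */
        unsigned long long subnets = 1ULL << (64 - 48);   /* /64 subnets per /48 */
        printf("/64 subnets in a /48: %llu\n", subnets);  /* 65536 */
        /* 2^64 is one more than ULLONG_MAX, so approximate in floating point. */
        printf("addresses per /64: ~%.0f (2^64)\n", pow(2.0, 64));
        printf("addresses per /48: ~%.0f (2^80)\n", pow(2.0, 80));
        return 0;
    }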

The market is a great way to allocate wheat, but imagine if we could make a machine that generated unlimited wheat. Why not use it?

Look at the population of Asia and look at the current IP allocations for that region. It's insanely small! Think about the next 30 years. All of those people are going to want internet access. Demand for IPs is only going to grow, and the difficulty associated with any sort of fundamental change is only going to grow.

IPv6 isn't perfect but it's needed and needed soon.


So, I agree with you about all of this, but it's simply orthogonal to the discussion. We are not charting a course in a vacuum, based on pure competing principles.

People have been arguing for the urgency of migrating to IPv6 for over a decade now. Not in the abstract. A decade ago was 2000, and in 2000, people were also seriously concerned about IP address depletion, and certain that we needed to imminently migrate to IPv6.

So the first thing you want to address in this argument is whether or not the Internet --- or, more precisely, the value propositions and service model of the current Internet --- is going to collapse if we don't migrate. And, no, it turns out, it's not going to collapse. It will surely be more expensive for a Fortune 100 company to hold on to a vanity /8, but most of us aren't going to notice any change in a market-driven allocation world.

To me, it follows from this that if we're going to have a usable IPv4 Internet for the foreseeable future, we have a choice:

* We can pursue IPv6 with all vigor and alacrity, forklifting out every piece of software that treats an IPv4 address as a number instead of an opaque bundle of an arbitrary number of bits, or

* We can reconsider whether a constituency in a persistent entry in the global BGP4 routing table is what it means to be a "first class citizen" of the Internet, and instead build a new city on top of the ruins of the old one, relegating IPv4 addresses (over time) to the same trash heap as MAC addresses. This isn't a pie-in-the-sky idea; CDNs and cloud providers are an immediate half-step in that direction, and overlay networks are a reasonable architectural goal for the next 5 years.

In our overlay future, we get more than just expandable addressing; we get multicast service models (which IPv6 isn't going to provide), we get true multihoming (which IPv6 isn't going to provide), we get first-class peer-to-peer routing (which IPv6 isn't going to provide), and we get ISP portability without any intervention or agreement by a nationwide telco or MSP.


> Your grandma's cellphone will have 18,446,744,073,709,551,616 * 65,536 addresses available to it.

Good to know they're not being wastefully allocated this time around.

"Man said, "Carefully husbanded, as directed by the Cosmic AC, the IPv6 addresses that are even yet left in all the Universe will last for billions of years."

"But even so," said Man, "eventually it will all come to an end. However it may be husbanded, however stretched out, IP addresses once allocated are gone and cannot be restored. Entropy must increase to the maximum."

Man said, "Can wasteful IP allocation not be reversed? Let us ask the Cosmic AC.


Not so sure about that. Think about how much energy would be necessary to operate that many devices: http://blogs.sun.com/bonwick/entry/128_bit_storage_are_you


I wasn't suggesting they would be used, I was suggesting they would be wasted.

Unless you want to run your new company as a process on my grandma's cellphone, giving her more IP addresses than I can make a witty comparison about is merely stupid and wasteful allocation.

The IPv6 address space may be so big we could never use it all, but it's not so big that we could never wastefully allocate it all.


You're not understanding the scales involved. You're complaining that the smallest package of salt at the store is half a kilo when all you need is a dash. That's crazy. There are oceans full of salt. Humanity will never want for salt.


Hate to break the analogy, but, actually: not true. There's a simple numeric scaling bottleneck with IPv6 already: routing table size. There will, for the foreseeable future, be scarcity in portable routable addresses.


What you are proposing is likely to trigger (or at least prolong) a dark age of the internet. And I'm not exaggerating in the slightest.

If people have to pay for IP addresses, then the internet will be divided in two: those who can pay, and those who can't run a server. And I predict that the crushing majority of people who use the internet will be of the second category. This would obviously be a catastrophe for freedom of speech and even privacy.

The major win for v6 is enormous: an IP for everyone, and the possibility for a server at every home, which will at last give us the decentralized internet we should have had in the first place.


If it costs $5/yr for a single IP address, I'm not too concerned about the cost.

Meanwhile, most of the applications YC-types build don't necessarily need routable IP addresses. In a better world, most of our content would be delivered via overlay networks that abstract away that detail anyways. In the immediacy, what matters is the DNS.

We are in a dark age of the Internet, and it's our own doing. We've tethered ourselves to an archaic protocol that already indentures us to massive telco and cable operators, and would even if we had portable /32's assigned directly and personally from ARIN.

I don't see why we should be focusing all our engineering efforts on this one technological detail, tethering us forever to the idea that a license to speak on the Internet entails an IP address. There's 15+ years of research behind better networking topologies that care no more about IP addresses than a home router cares about Ethernet MAC addresses. Let's go that direction. It's easier, and probably better.


> I don't see why we should be focusing all our engineering efforts on this one technological detail, tethering us forever to the idea that a license to speak on the Internet entails an IP address. There's 15+ years of research behind better networking topologies that care no more about IP addresses than a home router cares about Ethernet MAC addresses. Let's go that direction. It's easier, and probably better.

Can you please elaborate on the above a bit? A bunch of pointers to papers would be most appreciated. Thanks!


A good place to start is RON, an MIT PDOS project that Frans Kaashoek and Robert Morris (that one) were involved in:

http://nms.csail.mit.edu/ron/

I had a funded startup in '99 doing this stuff (in our last year, we adapted the MIT Click Modular Router to the same idea).


Check out "A Layered Naming Architecture for the Internet". This is the paper we focused on in an Advanced Networking class when we talked about naming. It aggregates several key principles from the literature into one system. I haven't read the RON paper Thomas linked to, but based on a quick glance, I'd say both papers are definitely worth your time. The Naming paper is actually written by several of the same people who wrote or collaborated on RON.

Feel free to drop me an email if you want to discuss the Naming paper.

http://nms.csail.mit.edu/papers/index.php?detail=4


FYI: a public IP can cost anywhere between $0 and $25/year in Eastern Europe, depending on the ISP.


Virtually any number between $1 and $100/yr seems reasonable to me. Consider: for the rare (and hopefully rarer and rarer) cases where you need a "native" IP address, 1 will almost always do. Meanwhile, all these prices are feasible (in a first-world kind of way) for individuals, but untenable for wasteful use by company IT departments.

$6.5MM/yr may not sound like a dealbreaker for an F-500's /16 (that's 65,536 addresses at $100 apiece), but that's actually quite a lot of headcount.


Unfortunately $25k is quite a lot for a /24, and you need a /24 if you want a network with independent BGP routing (because that's the smallest block most BGP routers will accept).

I manage the networks for a small company and we only need a few public IPs, but because we want the reliability we have a dual-homed connection via two ISPs, and our own /24.


Most small companies are not in a position to get a BGP-advertised default-free connection to multiple ISPs for an allocation as small as a /24. Can it be done? I'm sure, but only after a sizeable investment of time and effort.

My point being, it's not as if this is a capability IPv4 currently provides that is imminently going to be snuffed out by address exhaustion. The ship sailed on free/easy multihoming when Sprint started filtering anything smaller than a /19, back in '97.


You obviously have more experience with this than me, but in the UK we had no trouble getting BGP advertising with two ISPs for incoming connections (though we're not default-free, just using HSRP between ISPs for outbound).


There are plenty of people who have done this more recently than 1998, which is the last time I had to register an ARIN allocation. That was for a fairly large regional ISP in Chicago, for the sole purposes of multihoming, and it was a nightmare; I ended up having to call in a favor from a friend close to ARIN.

Surely someone who runs a multihomed hosted app (maybe a YC company) can chime in on how easy it is for a startup to get a BGP advertisement in 2010.


Most IP allocations to organisations these days are not PI (provider-independent) blocks from RIRs; they are simply aggregated from an upstream ISP's larger announcement. That doesn't mean you can't punch holes in their aggregate by announcing that same block (if it's a /24 or larger) through ISP #2, but it does not result in any new numbering allocations from ARIN, net.

It's pretty rare to get a PI block these days unless you're pretty dang large and aiming for at least a /20.


Can you advertise a /24 from Level3 at, say, Verizon? Don't the backbones filter advertisements by ASN?

(Back in the dark ages when I was actually doing this stuff, you couldn't advertise anything smaller than a /19 unless you were grandfathered in.)


Well, you have to have your own ASN to announce it, yes. :) And certainly, you have to get the other ISP to allow this announcement from you through their filters, but that's a normal part of establishing BGP routing with another provider.

But the point is that if Level3 announces 4.0.0.0/8, yes, you can announce 4.16.73.0/24 through them, and through Global Crossing as well. Level3 does have to forward your announcement to its peers de-aggregated, though; that is, if they just fold it into their 4.0.0.0/10 announcement, incoming traffic will prefer your /24 announcement through GX because the prefix is longer/more specific, plus that gives Level3 no way to withdraw the announcement if the link goes down. Not everybody will happily let you punch holes in their nice, clean aggregate like that, potentially hosing their AS as a whole with flap dampening penalties and such if the link is flaky. However, it is normal order of business with Tier 1s.
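(If the longest-match behavior isn't obvious, a toy C lookup makes it so, using the hypothetical 4.16.73.0/24 from above; both prefixes cover the destination, but forwarding picks the most specific one:)

    #include <stdio.h>
    #include <stdint.h>

    /* Does addr fall inside prefix/len? (IPv4, host byte order) */
    static int matches(uint32_t addr, uint32_t prefix, int len) {
        uint32_t mask = len ? ~0u << (32 - len) : 0;
        return (addr & mask) == (prefix & mask);
    }

    int main(void) {
        uint32_t dst   = (4u<<24) | (16u<<16) | (73u<<8) | 9;   /* 4.16.73.9 */
        uint32_t agg   = 4u<<24;                                /* 4.0.0.0/8 via Level3 */
        uint32_t small = (4u<<24) | (16u<<16) | (73u<<8);       /* 4.16.73.0/24 via GX */
        /* Both prefixes match, but forwarding picks the longest (most specific)
           one, so traffic follows the /24. */
        printf("matches /8: %d, matches /24: %d\n",
               matches(dst, agg, 8), matches(dst, small, 24));
        return 0;
    }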

But yes, if you have an ASN, you can announce a /24 or bigger directly. You can't do anything smaller, though I have the sneaking suspicion that may change. There are some cranky network operators out there who have not upgraded to equipment with the horsepower and RAM (most importantly RAM) to hold a full BGP view consisting of prefixes down to the /24 granularity, and will indeed filter higher, but they're technically misrouting -- their problem. In any case, that's not the norm, no.

If everybody filtered prefixes smaller than /19 or /20, then multihoming would be the province of a relatively small plutocracy of institutions. I haven't run the numbers any time remotely recently, but most announcements, by volume, are of prefixes smaller than /19 for sure.

Take a look at this:

http://bgp.potaroo.net/as2.0/bgp-active.html

If I'm reading "Prefix Length Distributions" right, 52% of all announcements are /24s, and average prefix length is 22.33.

Now, technically, what's being announced != what's being filtered by influential backbones. But it has to be pretty dang close, or ~52% of all announced networks would not be routable from Sprint. :-)


I agree that the internet would be better off if it were more decoupled from monolithic backbones. Even more, the commercialization arguments underway sound a lot like arguments to deregulate housing loans in 1995... I do not think that IP addresses should be controlled by cable and datacenter companies, who have every interest in inflating the prices. And that is what market regulation without any standards or rational planning will result in.

An IP address is a human right on the same order as education and cultural freedom. But I bet that even with v4 people could figure out creative ways to get servers running for a lot more people. Let's not be sensationalist.


Why is an IP address a human right? Isn't "first-class access to the Internet" the right, and IP addresses just an implementation detail?


Why is first-class access to the Internet a human right? Isn't "ability to communicate freely" the right, and the Internet just an implementation detail?


Well, it's the implementation detail. Right now, without a first class internet connection, you can't completely exercise your ability to communicate (and publish) freely. Likewise, I think that right now, without a public IP, you don't have a first class internet connection.

So, by the contraposition, the transitivity of the implication, and my mathematical powers, having a public IP should be treated as the fundamental right it enables.

Of course, if there are several efficient "implementation details" which enable the layer above, then you just need one of those.


Your public IP address is also not a first-class Internet connection. You think it is[1], because you have a crappy Internet connection. But if you were an established company trying to use the address for real connectivity, you'd quickly realize that your IP address is nothing more than a stamp your ISP is putting on a connection it's lending you. You cannot, for instance, advertise your /32 on two different networks through BGP. Other people can, because they have better, more meaningful addresses.

So this notion that we all have a fundamental right to full and fair access to the Internet is already subverted by the fact that we're all default-routed second class citizens of the Internet.

In an overlay network world --- a world that is easier for us to travel to than an all-IPv6 world --- this wouldn't be a problem. Addresses would be inherently portable, multihomed access (both through ISPs and through our friends and neighbors) would be the norm, the notion of "server" addresses and "home" addresses (where you can't take port 25 connections) would be extinct.

Unfortunately --- but very importantly --- this is not equally true of an all-IPv6 world. There is nothing magical that IPv6 does to give us all access to the BGP RIB's of top-tier providers. You're still going to be some ISP's b+tch whether your IP address is 32 bits long or 128 bits wrong.

I'm not arguing that people don't have this fundamental right you want them to[2]. I'm saying that the IP address doesn't actually give it to them. Static IP addresses are just something for geeks to argue about while the telcos continue locking down the Internet.

[1] I'm being presumptive because it simplifies the point I'm making; sorry.

[2] Although I don't think they in fact have that right; things that compromise the "right to IP addresses" are invariably first-world kinds of problems.


Well, being your own ISP[1] is indeed important. Too bad it's (1) such a hassle, and (2) the big players don't want to peer with the small ones any more. (In France, the http://fdn.fr non-profit is taking steps towards solving (1))

Overlay networks sound fantastic. Maybe Eben Moglen's FreedomBox (or something similar) could make it a reality?

[1]: For instance by being a member of a non-profit which itself is the actual ISP.


So why don't overlay networks, if they're scaled to the size of the global Internet, have exactly the same routing problems internally that BGP has?


Sure, if that's what you think. I don't have a problem with that. What I don't understand is why a personal 32-bit (or 128-bit) identifier is a human right. If it is, most of you, including the ones with static addresses, are getting shafted. Your address means less than you probably think it does.


"if people had to pay for it" is the key. Once it is more expensive to support IPv4 than to adopt IPv6 for any given entity, that entity will adopt IPv6. That point is coming soon, it just happens to coincide with us running out of IPv4 addresses the way they are being allocated today.


This sounds completely reasonable to me. If IPv6 is truly the right solution, then the market will gradually dial up the pain for native IPv4 and we'll switch naturally.


What really needs to happen is the various RIRs need to expedite this process by making v6 extremely cheap or free, and making v4 prohibitively expensive. They will cop flak for it, but this is the bitter pill that needs swallowing.


IPv6 addresses are already free or nearly free. I think there's an important difference between scarcity naturally driving up IPv4 prices which encourages IPv6 adoption and artificially raising IPv4 prices to strong-arm people to switch to IPv6 because it's "good for them".


The problem with IPv4 addresses TODAY is that they're artificially priced. People deliberately do stupid things with their addresses (like, assigning permanent routable /32's to every computer in a dorm resnet) so they can avoid having to justify and possibly lose their vanity /16's.


"market based allocation" - this means further deaggregation of the existing prefixes.

Which translates into eventual serious capital expenditures for the carriers to hold the large tables. Worldwide.

In addition to the money required to pay for carrier grade translators. Though my employer produces them (amongst other stuff) - I do not think that architecturally they are a good thing.

All this money will need to come from somewhere. As the profit from a "prefix sold" goes into one pocket, but the expenditures are spread around the world, it means this profit would not come from IP addresses. It will come from the subscribers' monthly payments and quotas.

Translating: bye bye unmetered access on the cheap. Not so long ago in .be the ISPs were charging once you went over 10GB on the DSL line.

About squeezing out the /8s from their current owners: that would help, but not for long. It's like getting an additional credit line from the bank on the credit card.

For some info on the exhaustion and the models used you can take a look here: http://www.potaroo.net/tools/ipv4/

Of course, NATs are "convenient", so I would not necessarily exclude that some will decide to go with the scenario you describe - cascaded NATs.


Help me understand how RIB explosion is a major cost factor in IPv4, where there is a fundamental engineering incentive to dynamically allocate, but isn't in IPv6, which has as part of its value proposition the notion of everyone keeping a persistent block of addresses?


everyone is not going to keep the persistent block of addresses.

everyone with an AS# and BGP might - same as now for IPv4.

everyone who is single-homed to the same ISP - just might as well, since they will be using provider-assigned space that will be aggregated by their ISP.

I suspect that you interpret the last item as an advertisement that every Joe and his dog will get a PI prefix - I think they won't.

PA vs. PI space is an interesting topic, there is currently a very lively discussion going on in IETF on this:

http://www.ietf.org/mail-archive/web/v6ops/current/msg05398....

I will let you take a look at it, and read the data directly from real-world operators.


Help me understand how making it technically simpler for people to have persistent portable allocations is going to decrease RIB sizes. It's been over a decade since I had to write a BGP regex, but to my uneducated eyes, it looks like the exact opposite is true.

It really feels to me like nobody wants to acknowledge that IP addresses are just the license plate numbers that ISPs issue; that, or they hope that IPv6 is going to magically change that.


I have reread my message above and do not see myself stating the assertion you are arguing against. Mind clarifying your logic?


What I see you saying is "yeah, well, that's not going to be a problem because very few people are going to bother to ask for portable addresses". But clearly more people are going to ask in a world where it's numerically possible for them to have one than in a world where it isn't.


It depends on the economic (dis)incentives around such an action, and they will be the main determining factor, as opposed to technicalities. Whether it will be much easier with IPv6 than with IPv4 - the time will show.

Much like the migration itself - which is a balance between the cost of hanging on to IPv4 only (investing in NATs and support) and investing in IPv6 infrastructure as a strategic way forward. I have no material interest in either (gee, FWIW the NAT mess could provide much more billable work:), but having helped hundreds of people un-shoot themselves with NATs over the past 10 years, I think IPv6 is cleaner architecturally in the long run - because it is simpler.

But I think we both agree it's a matter of economics. What is simpler and cheaper to use. And a working p2p substrate in a heavily NATted environment is a dead-on competitive advantage - so no one is going to play charity and release their code for the benefit of the community.

But maybe the time will show differently.


What is the benefit of having an address that is a scalar type in some programming language? Except that you get endian issues for free.

And exactly this "stop wasting address space for insignificant devices that are not mine" attitude is the problem that IPv6 tries to solve. I don't see any reason why there should be two classes of internet-connected and almost-internet-connected devices.


There is zero benefit to having addresses be scalar integers. What's your point? IPv4 addresses happen to be scalar integers, and are often treated that way, particularly in C code. Tsk tsk all you want, we still have to fix it.

I don't see any reason either, but engineering purity doesn't trump reality; reality suggests that freeing us from the distinction might not be worth the cost, at least not soon.

Ok, there are benefits to not having to use bignum routines to do netmask calculations in those few applications that address whole networks at a time.
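(For what it's worth, v6 netmask math doesn't need bignum routines either; a byte-wise compare is enough. A rough sketch of both, assuming addresses are already in network byte order:)

    #include <stdint.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* IPv4: one integer op, which is what a lot of existing C code leans on. */
    int v4_in_prefix(uint32_t addr_be, uint32_t net_be, int len) {
        uint32_t mask = len ? htonl(~0u << (32 - len)) : 0;
        return (addr_be & mask) == (net_be & mask);
    }

    /* IPv6: no 128-bit scalar, but a byte loop suffices -- no bignum library. */
    int v6_in_prefix(const struct in6_addr *addr, const struct in6_addr *net, int len) {
        int whole = len / 8, rem = len % 8;
        if (memcmp(addr->s6_addr, net->s6_addr, whole) != 0)
            return 0;
        if (rem == 0)
            return 1;
        uint8_t mask = (uint8_t)(0xff << (8 - rem));
        return (addr->s6_addr[whole] & mask) == (net->s6_addr[whole] & mask);
    }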


The problem is not in the low-level applications - porting them is doable (http://gsyc.escet.urjc.es/~eva/IPv6-web/ipv6.html)

The problem is in the library frameworks that provide no means at all of making an IPv6 socket.

Once that problem is solved - porting the properly written apps is not going to be too difficult.

As for IPv4 address literals being scalars - well, properly written programs would have used DNS names in the first place, and the use of addresses would be appropriately contained. So I would treat this much like any other bug.

Pointers may also fit into an int, but that is no reason to put them there.
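(To be concrete about "making an IPv6 socket": on most current stacks a single AF_INET6 listener with IPV6_V6ONLY switched off will also accept IPv4 clients as v4-mapped addresses. A minimal sketch; the function name, port, and backlog are arbitrary:)

    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int listen_dual_stack(unsigned short port) {
        int fd = socket(AF_INET6, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        int off = 0;
        /* 0 = also accept IPv4 clients as ::ffff:a.b.c.d mapped addresses.
           Some OSes default this the other way, so set it explicitly. */
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));
        struct sockaddr_in6 sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin6_family = AF_INET6;
        sa.sin6_addr   = in6addr_any;   /* :: */
        sa.sin6_port   = htons(port);
        if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 || listen(fd, 16) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }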


A couple of random points:

I just bought a new consumer router last year (actually, come to think of it, two). Neither has support in the admin screens for IPv6 - it's the standard four-box dotted-quad input only. Yes, a firmware upgrade can solve this, but this was 2009 - how much more time do hardware vendors need to put this in consumer products, where we'll see arguably the most pain (in terms of hand-holding service requirements)?

Also, relating to screens, a huge number (the large majority?) of software is coded specifically to the dotted-quad format. I think we'd have seen more adoption if the plan we're migrating to had simply added a couple more dots to it. It's easier for people to think about, and it would have given us a moderately larger pool and an easier way of thinking about the new address space (visually, I mean).

64.27.78.45.134.240, where you tell people that the first two (or last two) octets now correspond to new addresses, and 'old' addresses simply have two leading 0s, would have made the transition easier to swallow (IMHO).

2001:0f68:0000:0000:0000:0000:1986:69af, in comparison, even if shortened down, looks pretty alien.

Adding two more octets would have given us 65k x 4 billion addresses. Yes, it's not IPv6-sized, I know. But in terms of the 'let's measure everything by how many IPs each person on the planet can have!' metric it would have been sufficient for a while - I think most people would be fine with fewer than 20k addresses each.

I know this is wholly naive of me, and real hackers everywhere can do this stuff in their sleep. However, 'real hackers' aren't going to be dealing with the majority of this transition - it's going to be average joes trying to help their family get back on the internet over the phone after they mistype something on their new DLINK-950v6 router from Best Buy.

We're well past the phase where we can back out, or I think even consider alternatives, but the impact to existing software screens is not to be underestimated.

IPv6 may turn out to be the real Y2k.


> how much more time do hardware vendors need to put this in consumer products, where we'll see arguably the most pain (in terms of hand-holding service requirements)?

Under IPv6, won't customers just need a hub or a switch? Every IPv6-aware device in the home will just request a unique address directly from the ISP, completely eliminating the need for home routers. Of course, devices still needing IPv4 would need some other solution.

Of course, that doesn't come without a tradeoff (as mentioned in the submission). Those IPv6 devices will need to be aware that they are connected to the Internet at large, and security will need to be addressed accordingly.


> Every IPv6-aware device in the home will just request a unique address directly from the ISP

No thanks, because unscrupulous ISPs will then start to charge people per device connected. It isn't (or shouldn't be) any of my ISP's business how many devices I have connected, any more than the high-level meaning of the bits I transfer is their business. Their business is to move bits.


Well yeah, you'd have to buy IPs in blocks of, say 100. If you don't buy them from an ISP, you'd have to buy them from your local RIR in /48 blocks!


Some parts of the world have competition. So this issue shouldn't stop us globally from getting rid of routers at home.


The plan is to still have home routers in IPv6 because the ISP will assign a prefix to each customer, and the customer's router will route between their prefix and the ISP. Here's the spec: http://tools.ietf.org/html/draft-ietf-v6ops-ipv6-cpe-router-... It's going to become an RFC soon and then people can start complaining that vendors don't support it.


I think that's specifically why people will still be using routers. Nowadays, most single computers still are sitting behind routers, despite the fact that they don't actually "need" to, specifically for the firewall protection. It's a lot easier to plug into a router and forget it than it is to have to deal with setting up a firewall every time you reinstall your OS.


If we're talking about a box that primarily performs firewalling, let's call it a "firewall" instead of a "router". (Although as I said above, routing is still necessary in IPv6.)


It's still a router. It just doesn't do NAT.

FWIW, NAT is typically done by the firewalling code, whereas routing is done in separate code.

So "router" is always correct, but "firewall" does not always apply


I'd much prefer that no one calls any piece of software a 'firewall' ever again. It's been over-applied to the point of meaninglessness.


Your vision for a migration to IPv6 involves everyone getting a new home access point, so that devices can autoconfigure from their ISP instead of from the Linksys box in the living room?


"Security will need to be addressed accordingly." You say that as if it were a small thing. Indeed, as if it were a thing with a known solution. In reality this "security" issue of consumer pcs directly on the internet is a bigger problem than v4 address exhaustion. Today it is perhaps a multi-billion dollar a year problem due to worms, viruses, botnets, ddos, etc. Intentionally making the problem worse is a recipe for catastrophe.


People will still need routers to serve as wireless access points, since the vast majority of web connected devices are connected via WiFi.


I am one of the authors of a draft that describes how to make the transition, at least for the "client" apps (like HTTP), more straightforward:

http://tools.ietf.org/html/draft-wing-http-new-tech

Also, for the rogue RAs, there is a solution too: http://tools.ietf.org/html/draft-ietf-v6ops-ra-guard

(currently shipping in some products already).

For security, there are better approaches that do not compromise the applications transparency as much as the filters do:

http://tools.ietf.org/html/draft-vyncke-advanced-ipv6-securi...

To summarize, knowing Iljitch personally, I hoped for a more sober article. The v4->v6 transition is not going to be easy, sure. But the article does come across as having a slightly hysterical tone, which is unfortunate.

As for "no customers" stance that I hear every now and then: Just today I had a chat with a large-ish service provider. They have quite a lot of enterprise customers for IPv6 today. XS4ALL provides IPv6 in .nl today. free.fr has been doing it for years already.

Finally, to end on a cheerful note - some folks have found this video funny, hopefully you might as well: http://www.xtranormal.com/watch/7011357/ - though I do not claim to have the best sense of humor.


What's the value proposition of an IPv6 address to any of (a) my mom, (b) a YC web app founder, or (c) a Fortune 500 company in the industry standard best-practices configuration of private non-routable addresses for internal hosts sitting behind a series of firewalls and proxies set up for policy reasons?

A customer use case for any one of these scenarios would be enlightening.


(a) - When she would have to choose whether to pay $10 or $30 per month for the facebook.

(b) - Experience, because a year or two down the road it will mean some real eyeballs. If you do not care about serving mobile users, no need to bother. Maybe there are startups that are not just about webapps.

(c) - they already have IPv6. On Vista and Windows7. Just that it travels over Teredo. Reason: visibility into the end-station traffic.

I am not trying to convince you that IPv6 is the panacea. If you show me the compelling argument towards the standard p2p substrate that would not abuse the links between the ISPs too much, is standardized, has implementations in all the major OSes as part of standard distribution with similar API - or freely available in the source code form with the BSD or MIT license - I'd be happy to see it. Oh yes - and you can show the business cases for it for the above three participants, too, if you like. Oh, and of course it has to traverse multiple layers of NATs too.

I'd seriously use it in some of the pet projects.


I'm not seeing the scenario where IPv4 costs my mom $20/mo for commodity web access.

And even my mom has IPv6 software. Mysteriously enough, none of my clients --- large F-500's with network groups mature enough to have security teams sponsoring security reviews of applications --- none of them use IPv6 in production. Pretend I'm one of them and sell me an IPv6 deployment project.


I am not a sales guy. Surely you will be fine with NATs for some time. But this time is not infinite. And when it ends, you will be in for a harsh awakening.

So, plan which of the scenarios is more appealing to you. If you plan to parachute out in a year, do not bother. Otherwise discounted cash flow may help. Maybe it does not make sense to you now; fine.

And let's chat in a year. No, I am not selling IPv6. It is there. Facebook and YouTube and tpb are on IPv6. Probably the only remaining part of the internet not migrated yet is Craigslist.


I still fail to see an issue in any of this if the rollout moves from backbone-to-users, and not the other way around. Which is the only order which makes much sense anyway - if the backbones can't route your packet, what's the point in speaking that language?

ISP-level NATs don't make sense, except in connecting external-IPv6 to internal-IPv4, in which case: so? Only serve up the IPv4 connections which currently exist, creating no new ones and dynamically mapping no ports, and pass through all IPv6. And if I recall correctly, IPv4 addresses are reserved in a range of IPv6 addresses already, so translating is a non-issue if the in-between is all IPv6. Any attempt to access IPv6-only addresses from IPv4 get fake IPv4 addresses, probably as some hashed value so repeat connections yield the same result.

External-IPv4 to internal-IPv6: yeah, full of issues. Who gets port 80 once you start sharing external IP addresses? SSL ports? How about all those port-specific email servers? But not the reverse.

I could be missing something obvious, though. IANAIETF expert by any stretch of the imagination. Anyone care to correct me?

(edited for a bit more, and less block-of-text-iness)


Backbones seem to be mostly fine. Well, I've got my /48 from Hurricane Electric and had no serious problems except for a minor disturbance with my Nokia N900, which needed a custom kernel (weirdly, the official one has no IPv6 support).

Except that the IPv6 space is too empty. There's not much out there — Google, Debian & Ubuntu mirrors, Python docs... that's about all I've found to use over IPv6. This is the reason neither ISPs nor users go v6, even those who can. And, on the other side, almost no hosting providers give v6 addresses to their customers, so almost no websites appear on the v6 Internet. I believe this is the most important problem: there's really almost nothing to do with IPv6 nowadays.

IANA should really encourage LIRs to get v6 blocks, but they're not doing it at all.


Worse than nothing to do is the pile of misconfigured and abandoned IPv6 endpoints/experiments. Trying to get work done with an IPv6-before-IPv4 DNS preference makes the internet _work less well_, for the time being.

(This is 6mos of playing with IPv6 on Softlayer systems [well done, guys]; there's breakage on the internets.)


Is there anything I as a consumer (and private website admin) can do to help IPv6 along?


As a website admin, you can request an IPv6 address, configure your webserver to listen on it, and configure your DNS with an AAAA record for it. ie. you can make your website accessible over IPv6.


However, depending on your business, you might want to be careful. At a minimum, clamp the MTU on the server to be less than 1500. There is still a fraction of clients that will be broken by this process.

I run a small service which you can try on your website "right now" and see what happens, without affecting your users too much: http://testv6.stdio.be/


I could, except my hosting provider still isn't ready - apparently their data center itself isn't totally ready. I was told 3 months ago to check back in 6-9 months.


Yes, it does depend on whether your upstream provider, i.e. your ISP, supports IPv6. The DNS root servers support IPv6, and most domain name services should support IPv6 by now.

There is no NAT in IPv6. The address that you get should be globally routable, unless of course your ISP has IP filters in place. In the end it will depend on how your ISP allocates IP addresses to you.

Initially, the main issue was non-IPv6-capable CPE devices like your cheapo ADSL modem. The way around this was Teredo tunneling: tunneling IPv6 packets within IPv4 UDP packets.

Windows Vista and above support IPv6 pretty well out of the box, including Teredo tunnelling. Most BSD flavours are alright via the KAME project... not too sure about Darwin though.

Anyway, it's been awhile since I've worked in the IPv6 area so god knows what else they've come up with.


There is a sort of IPv4-in-IPv6. It exists, but it is not universally supported. See http://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresses for more.


The real problem with IPv6 is that it focuses too much on the technical issues and mostly ignores the human issues. This is why it hasn't gained significant traction and, IMHO, won't even when the IANA runs out of unallocated IPv4 blocks.

There are a few main issues with IPv4:

1. Everybody wants to have a public IP address, and that is making the available IP addresses run out really fast;

2. There are huge chunks of allocated addresses that are in fact unused. And there are also a number of publicly addressable /8's being used for internal networks for no good reason;

3. Splitting up wasted /8's into (geographically disperse) small chunks isn't feasible because routing tables would get huge.

No. 1 is easy to solve: not everybody needs a public IP address, no matter what people say. Mobile phones can survive just fine with private addressing behind a NAT. Public addresses for mobile users could be handed out on a case-by-case basis, for an additional fee.

No. 2 will come into play once we run out of address space. Scarcity will trigger a market for IPv4 addresses, and as any scarce resource, its use will be rationalized and made more efficient.

No. 3 is the most tricky. But that's where IPv6 really appears as the problem solver. Internet backbones will have a different structure than the IPv4 addressing implies, and that structure will be made with IPv6. IPv4 will be tunneled over IPv6 networks with its own structure. In fact, this has been done for a long while inside the ISP's networks using MPLS (IP packets enter the ISP network and are then routed using MPLS until they exit into another ISP/carrier's network). Those multiple MPLS domains will become a single IPv6 domain.

The structure of the Internet will change when IPv4 addresses become scarce, that's for sure, but I seriously doubt IPv6 will gain acceptance at the network endpoints. There is just too much legacy for that to happen in the next 30 years at least.

I believe in scarcity dynamics, so I'm not too worried. I also believe NAT to be a good solution for private networks and don't agree with the view that NAT is evil.


1. "not everybody needs a public IP address": sure, and there is no need for more than 6 computers on the entire world, ever. In my opinion, stalling the transition hurts innovation; while it's true that as the things are at present public addresses are not an imperative, new technologies like P2P, which would greatly benefit from public addresses, open a whole new realm of possibilities.

2. Again correct, assuming that there is no cheaper alternative to the "addresses market". But it exists, and is (mostly) free. Once the providers realize that it's less expensive (long-term) to switch to IPv6 than to keep buying new addresses, they'll do it, even if such a solution might prolong things even more.

In general, most of the "solutions" to the IPv4 crisis represent a sizable effort, which will only increase as the addresses become more scarce; and the sum of it will be much larger than the effort of simply switching to v6. We'll see how it turns out, though.


Stalling the transition hurts innovation, yes, and avoiding an address market would be good too. But this is ignoring the human problem, which was my original point.

From a technological standpoint it would be nice to switch, but you have to convince everyone to switch, which isn't easy because:

Most OSes deployed today either don't support IPv6 or the support is problematic. Remember that most of the world still uses Windows XP and are in no hurry to change.

Most applications don't support IPv6. Many of those could be easily changed but the people responsible have better things to do, especially in the enterprise space (which most people here seem to ignore, always thinking that the consumer Internet is the only thing in existence).

IPv4 is elegant in its simplicity, whereas IPv6 is complex and different for the sake of being different in some cases. This means extra cost to switch internal networks, and without switching internal networks there's no pressure to port most applications.

Network administrators know IPv4 well, what works and what doesn't, where the faults are. Switching to IPv6 means extensive training which adds to the cost. Given that right now people that actually have a working knowledge of IPv6 are few and far between and seem to be concentrated in ISPs, this will be hard to change.

Despite the benefits of everyone having public addresses, there are also security downsides, and these outweigh the benefits in most people's eyes.

So, the main problem is legacy. IPv6 benefits are irrelevant when it doesn't interoperate seamlessly with IPv4.

Again, IPv6 adoption is a human problem, just like convincing people to abandon IE6 is.


> Despite the benefits of everyone having public addresses, there are also security downsides,

No there isn't. Not a single one. Just replace your old NAT by a clean firewall, and you're set.


It makes device specific protocol attacks possible.

Sniffing traffic on the outside of a NAT hides which internal device(s) the traffic is coming from, doesn't it? So if you wanted to attack Alice's traffic then you can't easily tell which it is.

If you can sniff traffic from Alice's IPv6 address, then you have a much smaller amount of traffic to brute force, and you can try a MITM attack without risk of anyone else behind the NAT being affected accidentally.


Nah, unfortunately NAT doesn't really do a good job of hiding what the source machine is. People have used this to analyse how many machines are behind a NAT with very clear patterns emerging based on TCP/IP timestamps as well as the sequence numbers (which are sequential in most OS's).

The bigger concern is the fact that you apparently have access to the data being sent/to from an IP address and can thus sniff the data in the first place. Whether that be a NAT'ed or IPv6 IP address.


Letting people see each individual device, even firewalled, is a security downside.


Obfuscation isn't security. If exposing the existence of a device is compromising to you, then this doesn't solve the problem, it makes it less likely.

To which you might say "it makes us less likely to be compromised", which is probably your goal. So obfuscating network access probably makes sense to you. But I think it's dishonest to market this as "security". The door is still exactly as open as it would be if they were exposed. Security is fixing the problems that would be exploited.


http://www.out-law.com/page-11052

It is marginally more difficult than IP addresses.

So, unless you are truly using a multiuser machine from which the users use the Internet, it's just a convenient feeling of security. Which in itself does have some value, though, even if merely to save from some stress.


I don't believe you. I'm willing to listen though. Do you have a link to a detailed, technical explanation of this?


I need a reference that exposing more information about systems that are nearly guaranteed to have security flaws is bad? I'll give you a simple scenario and then go look for something to make you happy.

I have a computer running services A and B and several computers running service B. Service A exposes information about the computer's configuration that helps attack service B, but only if the attacker can figure out which one.

Edit: I haven't really been able to find a comparison between firewalling and firewalling+NAT, just comparisons between nothing and NAT.


By the way, I wasn't completely explicit. I supposed that in both cases all incoming ports are closed, except for the ones you explicitly open. That way, the only difference between the NAT and the firewall is the address translation.

My scenario is narrow, but I expect it to be a common one: IPv6 internet boxes will likely include such a firewall by default.


I have to agree with Dylan on this one. By default, a system with ANY public addressability is dramatically less secure than one that can't be seen except on outbound requests.

If we can believe the old adage that "The only secure server is one encased in concrete", then the average NATted device is more akin to one encased in concrete than to one with public addressability.


http://news.ycombinator.com/item?id=1744736 (I supposed that going from NAT to clean firewall would open any incoming ports.)


Errata: wouldn't open any incoming port.


It's been a while since I used either IPv6 or Windows XP, but if my memory serves, IPv6 on XP worked just fine. It did require a trip through the install-a-protocol dialog box, which I agree is rather clunky; however, it did work, and once it's installed, there's no further configuration to be done on the client.


1. I've just run into a couple of cases on certain types of public wi-fi network where having a public IP on my phone would have been very useful in just the last couple of days.


> don't agree with the view that NAT is evil

NAT is a nightmare for p2p applications. For algorithms that send short messages to many different peers the overhead of using STUN for every connection is enormous. See eg http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.81....


P2P is a nightmare because every application is expected to provide its own connection fabric. In a content-addressed overlay network --- and there are many huge deployed examples of these --- you wouldn't need to do the STUN chicken dance every time you wanted to add a peer.

If this all sounds hand-wavy, well, it's already the mechanism used by the exchanges and ECNs to route stock information, and it's already a mechanism used to build CDNs. It's a considerably more straightforward evolutionary path for the Internet than forklifting out every IPv4-aware device or software package.


> P2P is a nightmare because every application is expected to provide its own connection fabric

Well someone has to write the connection fabric, even in a content-addressed overlay. Writing a DHT that performs NAT traversal is just as difficult as for any other P2P application. See http://www.slideshare.net/vschiavoni/dht-and-nat for some examples.

CDNs have it easy because they control the network the servers are on. Writing p2p applications for the internet at large is much harder than it should be because of NAT.


I think you're missing my point. It's hard to write P2P applications for "the Internet at large" for the same reason that it's hard to write a web browser in assembly language: you're targeting the wrong layer of abstraction.

What I'm suggesting is that if instead of a futile effort to convert the Internet to a new, Cisco-sponsored network protocol we invested in adoption of a few standard overlay networks, people could write P2P applications that targeted overlays instead of the Internet, and IP addresses would matter only as much as Ethernet addresses matter today.


You mean something like this http://libswift.org/ ? I think I understand your point now. What we need is a standard mechanism for addressing devices. I'm not sure that building another layer on top of IPv4 would be any cheaper in the long run than converting to IPv6, given the amount of overhead caused by NAT traversal. This overhead also constrains the design of the overlay - you end up having static connections which are kept open as long as possible to avoid the connection overhead. If we could send UDP packets directly between every pair of peers without paying for the overhead of NAT traversal it would open up a lot of possibilities.


I think anything built on top of IPv4 must be less expensive than IPv6, simply because the cost of converting to IPv6 is deceptively high and because we'll inevitably need to build an overlay layer anyhow.

The overhead of NAT is irrelevant on overlay networks; it scales linearly with the number of direct connections established (OSPF would call them "adjacencies"), but probably scales logarithmically with the number of relationships on the network.


> it scales linearly with the number of direct connections established

Yes, assuming that the overlay algorithm actually establishes direct connections with a small number of peers. That's the way most overlays are designed, because they need to reduce the cost of establishing a connection. By way of contrast, I recently worked on a gossip algorithm for producing uniformly distributed peer samples with no warmup time. This algorithm doesn't maintain an adjacency graph at all. Instead it sends a single UDP datagram to a different peer at every tick (100 ms for the test system). It's a really simple algorithm that provides stronger guarantees than any existing peer sampling service, but replace each datagram with a NAT traversal and the overhead kills it.
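(Roughly, the send loop is nothing more than the sketch below, with a hypothetical in-memory peer table and one fire-and-forget sendto() per tick; bolting a NAT traversal handshake onto every one of those datagrams would dwarf the actual work.)

    #include <stdlib.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Hypothetical peer table; in the real algorithm the sample evolves over time. */
    struct peer { struct sockaddr_in6 addr; };

    void gossip_loop(int fd, struct peer *peers, size_t npeers,
                     const void *msg, size_t len) {
        for (;;) {
            struct peer *p = &peers[rand() % npeers];   /* pick a peer at random */
            sendto(fd, msg, len, 0,
                   (struct sockaddr *)&p->addr, sizeof(p->addr));
            usleep(100 * 1000);                         /* one datagram per 100 ms tick */
        }
    }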


Not all P2P revolves around "content". The most well-known examples would be Skype and SIP.


Content-addressed networking doesn't imply that content is being transmitted in discrete units; it implies (what Tibco would call) subject-based addressing, and the notion of publish/subscribe.

It works just fine for unicast, and even in strictly unicast situations, there are plenty of cases (conference calls, for instance) where "subscribe" turns out to be useful.


Nice analysis. And the locator-ID split idea in (3) is fun. However, (3) is tricky in itself - you still need the routing info to come from somewhere. And it would not matter too much whether it is IPv6 or another layer of IPv4 that is used for the locators - or did I miss something in the argument? Peering for IPv6 should be roughly the same as it is now for IPv4.

The scarcity dynamics and the cost of v4 vs. v6 are something I totally agree with. And with the fact that NATs are not evil per se. They're a tool just like IPv* is.


> NATs are not evil per se. They're a tool just like IPv* is.

"Nukes are not evil per se. They're a tool just like knives are."

Please avoid this kind of sentence. They're void of content, and therefore can't be contradicted, but at the same time they convey wrong ideas. Here for instance, I can't think of any direct usage of nukes that isn't evil. Because nuke explosions automatically kill people and ruin the environment.

Likewise, I can't think of any usage of NATs on behalf of other people which isn't evil (though much less so). Because you automatically end up closing all receiving ports, and they can't re-open them.


"They're void of content" - no, they are just a restatement that any technology is only as evil as its use. Think P2P. One can argue it is evil because its predominant use is to distribute pirated content. But it is not. Similarly, nukes were seriously considered for plugging the oil leak - to name one recent example.

If you can't think of any usage of NAT that is not harmful - it does not mean there isn't one. To start, what you mention - the stateful nature of the NATs (one of the reasons why people fight so hard to have them) - is a useful security property and is very frequently sought after by the people who want to put the NATs in IPv6. Check my vid on xtranormal that I mentioned in another message.

What is harmful in NATs is that they destroy the end-to-end referrals (and when they remap the numeric ports to different ones, too).

Another example of NAT that is useful and widely used is the load balancer. It is exactly NAT, but turned inside-out. Sometimes even doing SSL decapsulation. And you can use round-robin destination NAT as a poor man's load-balancing technique (how useful that is today is another story).

To summarize: NATs are like recreational drugs. Pleasant in small doses, lethal when uncontrolled. And people cannot control themselves in using them.


> "They're void of content" - no, they are just a restatement that any technology is only as evil as its use

Which is exactly what I meant by "void of content". This sentence is self evident, almost a tautology.

About NATs themselves, I wasn't talking about using them for yourself, but about using them for your customers. But that's probably what you mean by "uncontrolled use of NAT".

About the load balancer, wouldn't the problem be solved with SRV records?

Anyway, it appears from your other comments that we basically agree on everything. So let's not fight.


"This sentence is self evident, almost a tautology." - Ok, probably I misinterpreted your comment about the NATs being evil, apologies.

"About the load balancer, wouldn't the problem be solved with SRV records?" - the SRVs would certainly help a lot - however, the predominant user of the load-balancers, the web browsers, are not using SRV and not planning to. The http://tools.ietf.org/html/draft-jennings-http-srv-00 which proposed it (and which possibly could have been used by the clients too) - now got pushed into something that would never be possible to use by the browsers.

Re. agreement - yes, I think we do agree.


> The http://tools.ietf.org/html/draft-jennings-http-srv-00 which proposed it […] - now got pushed into something that would never be possible to use by the browsers.

That would really suck. Could you elaborate, or send me a link, please?


You can remove the -00 and look through the history of the doc - it's now at a later revision.


As far as I can tell, IPv6 is already here. My home network has IPv6, and all my servers have IPv6. Doing this amounted to one line of config on the servers, two lines of config on my router (one to tell it the ipv6 address, one to start rtadvd), and no configuration on my workstations.

And, I have better IPv6 connectivity from home than IPv4. Traceroute to Google:

    $ traceroute google.com
    traceroute to google.com (209.85.225.104), 30 hops max, 60 byte packets
     1  blinky.internal (10.0.0.2)  0.267 ms  0.249 ms  0.230 ms
     2  dsl253-036-001.chi1.dsl.speakeasy.net (66.253.36.1)  15.046 ms  19.018 ms  23.002 ms
     3  220.ge-0-1-0.cr2.chi1.speakeasy.net (69.17.83.153)  16.991 ms  20.965 ms  24.950 ms
     4  core1-2-2-0.ord.net.google.com (206.223.119.21)  26.932 ms  28.911 ms  30.893 ms
     5  72.14.236.178 (72.14.236.178)  32.875 ms 72.14.236.176 (72.14.236.176)  87.863 ms 72.14.236.178 (72.14.236.178)  34.822 ms
     6  209.85.241.22 (209.85.241.22)  37.802 ms 72.14.232.141 (72.14.232.141)  39.334 ms  41.312 ms
     7  209.85.241.35 (209.85.241.35)  43.294 ms 209.85.241.29 (209.85.241.29)  23.635 ms  23.689 ms
     8  66.249.95.138 (66.249.95.138)  27.670 ms 72.14.239.18 (72.14.239.18)  29.652 ms 209.85.248.102 (209.85.248.102)  31.626 ms
     9  iy-in-f104.1e100.net (209.85.225.104)  33.608 ms  35.591 ms  37.573 ms
Traceroute6 to IPv6 Google:

    $ traceroute6 ipv6.google.com
    traceroute to ipv6.google.com (2001:4860:b007::68), 30 hops max, 80 byte packets
     1  blinky.jrock.us (2001:470:1f11:488::1)  0.273 ms  0.246 ms  0.228 ms
     2  jrockway-1.tunnel.tserv9.chi1.ipv6.he.net (2001:470:1f10:488::1)  17.464 ms  19.492 ms  21.437 ms
     3  gige-g3-4.core1.chi1.ipv6.he.net (2001:470:0:6e::1)  31.410 ms  33.398 ms  35.383 ms
     4  * * *
     5  2001:4860::1:0:3f7 (2001:4860::1:0:3f7)  29.284 ms 2001:4860::1:0:92e (2001:4860::1:0:92e)  37.278 ms 2001:4860::1:0:3f7 (2001:4860::1:0:3f7)  39.263 ms
     6  2001:4860::1:0:1d1 (2001:4860::1:0:1d1)  46.248 ms 2001:4860::1:0:2776 (2001:4860::1:0:2776)  47.653 ms  49.593 ms
     7  2001:4860::38 (2001:4860::38)  51.579 ms  36.400 ms  36.944 ms
     8  2001:4860:0:1::f (2001:4860:0:1::f)  35.849 ms 2001:4860:0:1::d (2001:4860:0:1::d)  33.996 ms  33.369 ms
     9  iy-in-x68.1e100.net (2001:4860:b007::68)  27.974 ms  31.954 ms  31.951 ms

Same number of hops, but less latency!


Nice to see the specific data.

Another data point: last year at FOSDEM we had more than just a couple of v6-connected hosts, here're some stats I kept for fun:

http://stdio.be/onsite.fosdem.net/

Needless to say, no one ran after the conference participants to upgrade the software on their devices.


I'm not discounting the article's assertions, but we're doing fine with a ~7,000 node IPv4/IPv6 network.

Each node is running a reasonably modern client OS (80/20 Mac OS X/Windows 7, mostly). Most of our servers are Windows Server 2008 and some 2003. We have several Linux servers as well. Pretty much all the services they run translate seamlessly to IPv6 (Exchange, IIS, SMB, AFP, Apache [and its modules]).

Each node and server gets an IPv4 address and an IPv6 address. Mac OS X and Windows prefer IPv6 transport over IPv4, and most users have no clue that when they go to Google, they're doing so over IPv6. Nearly all of our internal web, email, and file-sharing traffic is over IPv6. Nobody knows any different. There have been small, interesting issues, but we've been able to resolve them pretty quickly and haven't run into anything even marginally major.

From the looks of things, the transition will be like the fader control on an audio mixer. Nodes will have dual stacks, IPv4 will fade out gradually while IPv6 fades in, and nobody will be the wiser (other than the poor network programmers who have to get their network code IPv6-ready, which frankly they should have begun years ago).

Implementing a large-ish scale IPv6 network (dual stacked, of course) has been relatively pain-free (in our experience).


I think the subtext of this article is that schemes like this appear to work because you're really relying on IPv4 for guaranteed network access; the true migration to IPv6 (the "magic moment" as DJB would call it) hasn't happened until people can feasibly not depend on IPv4 for a production network.

If everyone has to be dual-stacked, then we're committed indefinitely to making IPv4 workable.

Which is fine! But it rather lessens the urgency of converting people to IPv6.


But are we really relying on IPv4 when at least 90% of our network traffic has seamlessly transitioned to IPv6? It's almost funny to say this, but it seems like IPv4 is already legacy for us.

Sure, some external websites haven't yet converted to IPv6, so the stack falls back to IPv4, but for the most part (and 90% is probably a very conservative estimate) IPv6 is king across the WAN.

The downside to IPv6 is the steep learning curve, and virtually nobody else in the org is even marginally familiar with it. If you're comfortable with bit math, IPv6 is a bit easier, but then you get into routing and DHCPv6, which tend to differ significantly in some areas from their v4 counterparts. Also, I'm finding that certain vendors' (cough Cisco) IPv6 implementations aren't nearly as tight as they claim. It wasn't until July 2010 that Cisco really implemented DHCPv6 in a usable manner in IOS.

Now if only Apple iOS devices supported IPv6 ;)


IPv6 on iPhone 4: http://www.fix6.net/archives/2010/06/22/ipv6-on-iphone/

For the learning: take a look at http://www.6deploy.eu/index.php?page=tutorials - they seem pretty nicely readable, maybe you can use those materials.

Curious about the 90% part - care to tell more over mail? If yes, then aim it in the general direction of your cough ;-)


djb's description here: http://cr.yp.to/djbdns/ipv6mess.html is pretty good too...


Aye. djb is pretty opinionated, but in this case he is spot on.


I disagree.

His main objection seems to be that for IPv6 to work, administrators would (shock) have to administer IPv6. As he says, they would have to 1. acquire IPv6 address space and 2. add it to their DNS.

That's it. Once that is done, 99% of things out there just work. If you run Windows and Active Directory, all you have to do is acquire address space, as AD does DNS for you automagically.
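
The DNS part really is one line per host; an AAAA record (hypothetical name and address) looks like:

    www.example.com.  IN AAAA  2001:db8::80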

If we can't expect administrators to put in this minimum of effort to make the internet work in the future, but instead expect them to be able to set up huge layered NAT infrastructures with all the extra effort and problems that leads to, something is horribly, horribly wrong.

Really. IPv6 opponents seem to live under the illusion that IPv6 will create more work, which in itself is true, but they ignore the extra work required to keep IPv4 working at all. And that's just today; it's not going to get better in the future.


I believe his objection is that, being realistic about these things, administrators won't bother to separately administer IPv6 when it doesn't confer any advantage on the first movers.

It's not about what should happen, it's about what will, taking into account the reality of human behaviour. And even if 95% of sites out there did enable IPv6, everyone would still want IPv4 addresses so that they can reach that last 1-in-20; the transition simply can't happen until all existing sites are reachable over v6.

I don't think he's an "IPv6 opponent", I think he just makes a good case that there isn't a workable transition plan.


Does anybody know how the 128 bits in the addresses are really going to be allocated? If everybody gets the lower 64 bits for a subnet, and every device simply assumes the subnet is whatever sits below it, we end up effectively using only the upper half. And if the upper 64 bits are again handed out in big chunks to different entities, as happened with IPv4, there is a real chance that even a space this big gets used inefficiently, especially if the upper 64 bits are carved up to encode special assumptions about routing or whatever. Is there any text on this whole subject that doesn't just brag that 128 bits is "a lot"?


This Wikipedia section seems relevant: http://en.wikipedia.org/wiki/IPv6_address#General_allocation. A choice quote:

IPv6 addresses are assigned to organizations in much larger blocks as compared to IPv4 address assignments: the recommended allocation is a /48 block, which contains 2^80 addresses, a number that is 2^48 (roughly 2.8×10^14) times larger than the entire IPv4 address space.


How much larger a /48 is, again, is not what I care about; I can calculate that easily, and it says nothing about how the space is to be used. The relevant piece is:

"5 RIRs, /23 to /12, smallest gets 512 /32 blocks, one per ISP, ISP divides into 64K /48 blocks, typically one for each customer"

It's obvious that the "customer" here is not "every man on Earth" or "every phone" but probably should be something like the smaller ISP or the big company. The lower 64 bits are it seems effectively "unique device id." So the idea is to use upper 64 bits to fix the location of the device and lower 64 to address the device. And yes, I think I've understood that the "device id" (lower 64-bits) doesn't have to be fixed and encoded in hardware.


The smallest RIR is probably Africa, and if it has a /23 and if magamiako is right:

http://news.ycombinator.com/item?id=1743268

and every /48 goes to a home user, that means you can't ever have more than about 33 million home users in Africa (512 * 64 * 1024 = 33,554,432). That 33 million limit is why it seemed "obvious" to me that a /48 doesn't go to "every man on Earth." Does anybody know something more exact than what's written in the Wikipedia article? Some real example of how the space is going to be distributed? That's why I started this thread, and sadly most of the answers are still hand-waving "it's big, don't care," which doesn't carry any real information.


You can get a free tunneled /48 right now from Hurricane Electric.

http://tunnelbroker.net/


Those are some pretty big assumptions. But, even if only 64 bits were used, that's still enough space for every human on earth to have 2.75 billion IPv6 addresses.


Only if there are no distribution rules in the upper 64 bits, which is not going to happen. That's why I'd like to know the planned allocation process. Do you know anything about it? I posted my question to get exactly that info.


Right now they are only allocating out of one /8, so if they are being too broad in their allocation policies now, they can start cleaning up the mess when they need the next /8.



As the Wikipedia article states, and as the IANA assignments are coming through, you, as your personal self, get a /48 block. Or rather, your "premises" gets a /48 block. If you're a small business owner with 10 computers, you get a /48 block. If you're a home user, you get a /48 block. If you're a large enterprise business, you get a /48 block.

It's a rather significant waste of addressing and I really wish they had moved it over a few blocks, but that's the current plan. That said, you have to figure:

Right now, for the internet alone, we're using only 1/8th of the IPv6 address space: the plan is 2000::/3, which essentially means 2000:: through 3FFF::. That's a rather ridiculous number of hosts; it comes out to 2^125, or about 4.25 × 10^37, addresses on the internet. It's unfathomably large, so it doesn't really matter if huge chunks are wasted. You won't ever really use all of it anyway.

I agree that the IPv6 space is ridiculously large, but that's kind of the point. They realized that the internet infrastructure will only continue to grow--and they want to, at all costs, avoid any problem with hitting the limit of addressing at any point in the future. The larger it grows, the more of a problem it will be to change it later on.


Even with this generous distribution it's still about a /32 per ISP. Imagine an IPv4 Internet where each existing ISP had just a single v4 address.

Maybe when we'll have a colony at Alpha Centauri that won't be enough anymore, but this should be perfectly sufficient for quite a while.


Even then it will be sufficient for the Centaurians to have their own Internet and reuse the whole address space. Until someone invents the ansible, any communication between us and them will be too slow for any real-time purposes.


So do you actually know that every ISP gets exactly a /32? Can you point to some material about that? I'm still asking for some exact information.


For IANA -> RIR allocations, http://lacnic.net/documentos/lacnicvii/POLITICA-IPV6-IANA-RI... and for an example of RIR allocation to ISP/end user: https://www.arin.net/policy/nrpm.html

Each RIR has its own policy though, so check with the one you're concerned with (ARIN vs APNIC, etc.).


I could just be misinformed/sleepy, but why can't we all just multihome? Upgrade or replace (tech refresh, anyone?) all gear which doesn't natively support v6 and start running both networks at the same time.

You hardly need to support anything but the routable address space of v6 if the device is configured for a v4 network, and applications can get patches to prefer a v6 connection (apparently some applications claim to try a v6 connection first, but it doesn't usually work for me in practice). This at least buys you the time and flexibility of transitioning the chicken while the egg finishes gestating/maturing.

We all realize that every v4 frontend server has to support v6 before clients can be transitioned. However, I don't see a problem with giving everyone a v4 and a v6 in the meantime while the servers are upgraded. This means middle-man upgrades first, and basically every link between internet hosts has to natively support both IPv4 and IPv6. If you just used RAs you wouldn't have to support DHCPv6 yet, and DHCPv4 handles the stuff RAs don't for the clients. Perhaps I'm over-simplifying.
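
Concretely, the v6 half of that is just router advertisements; a minimal radvd.conf sketch (Linux, with a hypothetical interface and a documentation prefix) would be:

    interface eth0
    {
        AdvSendAdvert on;           # send RAs so clients autoconfigure
        prefix 2001:db8:1::/64
        {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };

DHCPv4 keeps handing out the v4 addresses exactly as before.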

(edit) Also, completely separate thought: why the hell isn't Obama offering a discount to help transition like they did with TV? That was a pointless upgrade while this is a real looming problem. Can somebody get Google and Microsoft in a room and make them form an IPv6 lobby?


I can't edit this post anymore so another random comment: ssh doesn't work with link-local addresses. I just set up an ad-hoc wireless network between my two laptops and tried to ssh from one to the other. The addressing works, because if I ping the all-nodes multicast address both machines answer:

  psypete@pinhead:~$ ping6 -Iwlan0 ff02::1
  PING ff02::1(ff02::1) from fe80::216:eaff:fe9b:56e2 wlan0: 56 data bytes
  64 bytes from fe80::216:eaff:fe9b:56e2: icmp_seq=1 ttl=64 time=0.055 ms
  64 bytes from fe80::20e:8eff:fe13:5741: icmp_seq=1 ttl=64 time=2.23 ms (DUP!)
But unfortunately it seems ssh doesn't know to set its sin6_scope_id for the interface of the bind address I tell it.

  psypete@pinhead:~$ strace -e trace=network ssh -6 -b fe80::216:eaff:fe9b:56e2 fe80::20e:8eff:fe13:5741 2>&1 | grep AF_INET6
  bind(3, {sa_family=AF_INET6, sin6_port=htons(0), inet_pton(AF_INET6, "fe80::216:eaff:fe9b:56e2", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EINVAL (Invalid argument)
(edit) Wait, never mind, I'm a noob. I have to specify the scope ID in the address with the suffix '%wlan0'. Add that to the list of new crap to memorize for the IPv6 transition.
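
In other words, this works:

    psypete@pinhead:~$ ssh -6 fe80::20e:8eff:fe13:5741%wlan0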


> I don't see a problem with giving everyone a v4 and a v6 in the meantime while the servers are upgraded

The problem is the total absence of short-term benefit, while the cost is not zero (in money, time, or cognitive load). So even if it's better for everyone to go IPv6, it's worse for each individual to move first. So everyone wants to move last.


The short-term benefit in my proposal is being completely backwards-compatible and avoiding the problems of tunneling and other proposed workarounds while providing IPv6 routing where necessary. It doesn't "fix" the problem of running out of IPv4 address space (unless a given ISP decides to switch to IPv6-only, require its customers to upgrade, and turn on a multihomed IPv4 address for extreme cases). What it does do is allow the transition to take place. Everyone should be shouting at the top of their lungs at ISPs to support this so we can get this chicken-and-egg scenario over with.

People are basically in denial. The cost is never going to be null and everyone has to move eventually. Server admins actually have the bulk of the responsibility here for getting the whole internet migrated to IPv6. It's their services we want to use and network addresses are just a way of getting to them.

The problem is they have seemingly no accountability. I tried to hold the admin of xkcd.com's feet to the fire and I got rebuffed. Unless there's a substantive potential for their customers to abandon them they don't really give a crap (hopefully because they have other things to take care of). And in the meantime they can claim they don't need to move because the rest of the network isn't ready for them yet. So tell ISPs to multihome already and we can then start telling server admins to get their shit together because the rest of the network is ready for them.


> […] while providing IPv6 routing where necessary.

This is the key point. Right now, and in the short term, IPv6 is not necessary at all. Or at least it isn't perceived to be. Reason: everyone is still compatible with IPv4. I know it's as stupid as racing towards a concrete wall, telling yourself that you can always slam the brakes later, but we seem to race towards that wall anyway.

> People are basically in denial. […] they have seemingly no accountability […]

I completely agree. But I can't think of a way to solve this.


> I completely agree. But I can't think of a way to solve this.

Charge people more to use IPv4-only networks and make it slower. Oh, wait......


No plan B? Hah! Let's face it already: there will be no IPv6 transition.

Eight years ago, I was in the "more effort needed" school of thought about IPv6. I figured we'd slowly update our programs and toolkits and libraries and reference material and college textbooks and switches, etc., and by 2005 it'd be about the right time to switch over. It will be painful, I thought, but necessary and ultimately a very good thing. So I waited and watched, much as I'm sure many of the HN readers have been doing. I figured that those people who are smarter and more connected than me would form the leadership of the IPv6 migration and show the rest of us the way to 128-bit Internet addressing bliss.

Realistically, though, we're no closer to a workable migration to IPv6 today than we were in 2002. Since then, we've managed to switch from analog to digital TV (at least in the US), migrate most software from ISO-8859-1 to some flavor of Unicode, settle the Blu-ray/HD-DVD format war, increment the USB protocol twice, formalize HTML into HTML 4.0 and XHTML and HTML 5, and virtualize or emulate every major operating system onto every major hardware platform. But nothing of any substance has happened on the IPv6 front.

The reasons for the lack of progress have been clearly laid out by others, but the unspoken sad reality is that we're wasting our time with IPv6. Even to say "IPv6 migration" implies that it will happen and thus IPv6 is the only solution to IPv4's woes. We're torturing ourselves with IPv6 by going on like this and it's not bringing us any closer to a solution! At this point, there's no reason not to start looking for workable extensions to IPv4 (and TCP/IP in general) to address the issues we're having.

I'd bet that a small group of dedicated engineers could come up with a compatible extension to IPv4 (one that expands the address space) and could develop a workable migration strategy, example code, socket library modifications, and a compatible version of Ubuntu all within a year. Imagine the excitement if IPv6 could be sent to the circular file and a clear alternative was not only proposed, but downloadable.

So, in short, that's my solution: cast off IPv6 as the death march project that it is and get excited about a minimum viable product (see what I did there?) alternative to IPv4.

Oh, and since this is HN: I'm sure that there's just no way to make money from an extension to IPv4. No consumer is going to need a converter box to sit between their modem and LAN, right? I mean, the government would have to offer rebates. Madness! No businesses are going to need consulting services or contracted coding work. Nope. It certainly won't be like Y2K, but even if it were, I'm sure no one made any money from that transition...


_I'd bet that a small group of dedicated engineers could come up with a compatible extension to IPv4_

I'm sure they could -- but why didn't they? I think that the end of the IPv4 era is something that will happen pretty quickly, as most people are operating on the "don't mess, it's working" principle; once it stops working there will be a need for a quick solution, and the only one that has any groundwork laid out -- apart from NATs -- is IPv6, which has been supported by most modern operating systems.

I don't think the migration will be a concerted effort -- it will more likely unroll as a domino-effect: once a few key players, probably backbone providers, decide to make the switch, their users will follow suit, and then their users and so forth.


The cost of upgrading to IPv6 is dominated by the cost of making any change, so I don't see why upgrading to IPv4++ would be any cheaper than upgrading to IPv6. In both cases you have a lot of pain and people will still put it off until after the last minute.

I also disagree about IPv6 readiness; according to the ISPs, they are very close to being ready (if they don't already have it deployed). Unfortunately, they're using a strategy of doing most of the preparation behind the scenes and — voila! — turning it on at the last minute; from the outside this plan is mostly indistinguishable from doing nothing.


For the "I'd bet that a small group of dedicated engineers could come up with an compatible extension to IPv4" I quote the [1]:

"There is no such thing as an IP protocol with extended addressing that is on-the-wire compatible with IPv4."

[1]: http://www.gossamer-threads.com/lists/nsp/ipv6/25293


One thing that doesn't make sense to me is that people are comparing the number of internet-connected devices to the size of the IPv4 address space, implying each device needs its own IPv4 address.

This is obviously not the case. NAT (Network Address Translation) techniques can hide thousands of devices behind one IP address. In fact, for security reasons, it's advisable not to have every device directly on the internet. I have an ADSL modem/router at home with NAT, which makes substandard firewall solutions like the one provided by Windows pretty much unnecessary.

So perhaps we simply need to reclaim large blocks (particularly from those universities and companies that hold class A blocks of ~16 million addresses) and reserve individual IP addresses for ISPs, servers, and the machines that actually need them.

Of course that's hard to mandate but like many things you can solve it with market forces: charge people who have more than, say, 128 IP addresses on a scale such that the most economic thing to do is transition away from that and give up their addresses.


> implying each device needs its own IPv4 address.

This implication is basically correct, actually. Sure, not every machine needs to be a server, but everyone should have at least one. Everyone should have his own mail server. Every blogger should have his own web server. And so on. It's just a matter of basic civil liberties, like privacy and free speech, which currently aren't fully enabled, because most people don't have the amount of control they should have.

If we let giant NAT routers spread further, such control will be effectively impossible. Even for computer nerds. Even for owners of a freedom box[1].

As a side note, the correct response for security is not using giant NAT routers. It's getting rid of Windows.

[1]: http://wiki.debian.org/FreedomBox


Why is a native IPv4 address a "basic civil liberty"? It's a technical detail. Surely the "liberty" is "first class citizen of the Internet", right? "The ability to publish content, the ability to build applications, the ability to pass traffic"?

If IP addresses were truly and in principle a right, then surely everyone would also be entitled to an ASN and the right to publish their IP address on the provider of their choosing. But they aren't; it costs hundreds of thousands of dollars to go from a standing start to default-less full peer.

Without peering, the IP address is just a totem. You think it matters, but it doesn't; you've simply been licensed to speak on the Internet by your ISP.

That doesn't really bother me, but it should bother you, if you take your principles seriously. IPv6 isn't going to solve that problem (IPv6 does not magically end BGP RIB bloat). But overlay networks, which don't care whether they're running on top of IPv4, CLNP, IPX, or IPv6, do solve it.


Well, actually, I do think that being your own ISP is very important, maybe even crucial. Alas, it's currently difficult, as more and more network providers don't want to peer with the small players.

I know nothing about overlay networks. I need to dig further before I update my beliefs.


Realize that even in our bright and shiny IPv6 future, no top-tier provider is going to let you peer with them to publish your addresses. So long as routing is controlled by the majors, addresses are just totems.


Everyone should have his own mail server. Every blogger should have his own web server. And so on.

I want that about as much as I want my own personal DVD press, oil well, glass factory, operating theatre, coffee bean plantation and lawyer. (Although if my own mail server was as profitable as my own oil well, maybe...).


You can run multiple web and mail servers behind a single IP address. I personally am not doing this though, because I have 5 IP addresses allocated to me, so I take advantage of them.
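
For the web side the standard trick is name-based virtual hosting; a minimal Apache sketch (hypothetical hostnames and paths):

    NameVirtualHost *:80
    <VirtualHost *:80>
        ServerName   blog.example.com
        DocumentRoot /var/www/blog
    </VirtualHost>
    <VirtualHost *:80>
        ServerName   shop.example.com
        DocumentRoot /var/www/shop
    </VirtualHost>

Mail is even easier: a single SMTP daemon on one IP can accept mail for as many domains as you point at it.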


Yes you can, but if you don't have a public IP yourself, you have to find someone willing to share theirs (supposing IPs have already run out). Most likely, you'd have to trust them. So for maximum independence, you need to have your own IP.

I reckon relying on a friend is very close to ideal. But not strictly so.


As a side note, the correct response for security is not using giant NAT routers. It's getting rid of Windows.

For an otherwise reasonable comment, I have to say: What a load of bull.

The only machine I have ever gotten hacked was a Linux machine, since practically all attacks flowing on the net are Linux attacks.

I run a mixed environment of Windows and Linux and have zero issues, despite my entire LAN being exposed to the internet via IPv6 (obviously firewalled though).

Windows itself isn't insecure. It's a very nice and secure platform. Locked down, there are no problems at all. Users are insecure. They are the problem.

If Linux had nearly the same market share as Windows, users would demand access to install whatever new and fancy thing came along, it would have to be locked down, and voila, Linux would be riddled with all the same problems Windows has now.


I think the article made it clear that IPv4 is running out of addresses DESPITE using NAT heavily.

As far as reclaiming large A blocks is concerned, I read somewhere that we are exhausting addresses equivalent to a class A block every 5 weeks. Easy to see that we are going to need a long-term solution anyhow.


I linked this elsewhere, but you can see here: http://twitter.com/ipv4countdown from the history how fast they are being used up - a million every couple of days, or thereabouts.


Actually, we still need external-to-internal connections (e.g. p2p file transfers, not only Bittorrent, but also IM file transfer, VoIP, game servers and so on) so the NAT gets punched through routinely.

The security isn't really provided by NAT, it's provided by the firewall; yes, a stateful firewall protecting the entire end-user network is desirable, but NAT isn't required for that to function effectively.

Even the security-by-obscurity in the case of a NAT router is doubtful, unless it's fronting a relatively big network. For home networks, the address of the network is identifying enough. By monitoring traffic you can even guess how big the network is.


Actually, that is the reason why the conversion to IPv6 has stalled for so long. Which is a bad thing, since, for one, it's much simpler to set up networks if you don't have to worry about NATs, and second, NAT makes many useful technologies (like some P2P applications) much more difficult. And we shouldn't forget that IPv6 is not only about a larger address space; it brings many useful standards like mandatory network layer security, jumbograms etc.


> it brings many useful standards like mandatory network layer security

Wikipedia and Google seem to confirm that IPsec is mandatory for IPv6 hosts, but how many hosts are really going to support it? I'm pretty sure I can turn on IPv6 on my Linux boxes a lot easier than I can turn on IPsec (step 1: figure out which implementation to install, step 2: figure out byzantine configurations, ...). I think my OS X box has IPv6 turned on out of the box, but I wasn't aware it'd do IPsec without being lovingly configured.

Like multicast and a better solution for portable IP space, I feel like mandatory IPsec will be just another purely theoretical benefit of IPv6. (Which is not to say that I'm not extremely interested in its highest profile promise: more address space.)


But my point is that the safest thing for most end-devices is NOT to be directly addressable from the wider internet. Any network admin will tell you that so it'll be interesting to see how that unfolds in the IPv6 world (if it ever unfolds).


The safest thing for most end-devices is not being servers in the first place.

Now, if the device is cracked through one of its client programs (NATs don't prevent that), then it could start up a rogue server, while if it were behind a NAT it couldn't. That's no worse for the machine itself (it's hosed anyway), but you could argue it's worse for the rest of the network.

I think it's not. Botnets are annoying and dangerous when they act as clients. Spam, DDoS, and automated attacks are all client behaviour. Even if you want server behaviour, connections don't have to be initiated from the outside. The compromised device just has to know the relevant IP and initiate the connection itself.

Finally, if you want to block incoming connections anyway, a plain firewall is cleaner. At least FTP will work.


The point about NAT blocking unforwarded servers is not necessarily true: http://samy.pl/pwnat/

The goal of some botnets is to set up servers for software distribution or to host phishing sites. You can't characterize botnet activity as simply client behavior.

Thanks to UPnP being enabled on most modern consumer routers, you can't even count on the NAT protecting you from having an unauthorized server.

Having end-devices NOT be servers ignores the fact that the UI for many home devices is delivered over an embedded web server. This is becoming more commonplace over time.

Ultimately you need a firewall that controls inbound and outbound traffic.


Why do firewalls suddenly go away when IPv6 is introduced? If NAT is the only thing stopping packets from coming into your network then that is bad. Set up appropriate firewall rules.


They don't. And I'd still like to have one. I was just saying that the root of the problem is insecure services.


Coming back after the fact and saying "I know you got a sweet deal on those IP addresses when they were going cheap, but we've run out and want them back, so we're going to charge you extra" isn't really going to fly. You could argue the same case with domain squatters, or real estate, but it just can't work that way. You could buy them back -- at a premium of course.


You could hypothetically charge a small yearly fee for each IP Address owned. Of course there would be lots of financial hurdles with this approach.


ARIN already charges fees for IP addresses; I suppose they take 'em back if you don't pay. https://www.arin.net/fees/fee_schedule.html

Note that legacy holders don't pay fees on their massive /8s.


I would think it would be easy to provide the security side-effect of NAT through firewalls in routers. The concept of "you can't talk to me unless I initiated a connection" does not seem to require NAT.


It is easy; the one iptables rule that does it is in fact slightly longer to write than the rule for NAT, but at least it makes sense and does not involve words like "MASQUERADE" :)
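
Something like this, as a rough sketch (eth0 as the hypothetical WAN interface, eth1 the LAN):

    # IPv6: stateful filtering only, no address rewriting
    ip6tables -P FORWARD DROP
    ip6tables -A FORWARD -i eth1 -o eth0 -j ACCEPT                               # LAN -> world
    ip6tables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT  # replies only

    # the IPv4 rule it stands in for
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE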


which makes substandard firewall solutions like the one provided by Windows pretty much unnecessary.

Ahem. Are you, perhaps, basing this on Windows XP from several years ago instead of Windows 7? Because 7 (and 2008 R2 Server) have pretty comprehensive looking firewalling.


There is clearly a debate going on in this thread regarding whether or not we are running out of address space and whether or not we need truly globally routable IPs, etc. Assuming for a second that we do need to go the IPv6 route...

I wouldn't really expect any businesses to go the way of IPv6 unless there was some kind of 1) Financial benefit, or 2) A law that required them to do so. Otherwise, there is really no reason to spend any time/money on it (Even if it is negligible). I can't find the source, but I believe at one point (maybe still) the Japanese govt. was providing cash incentives (tax benefits, etc.) for businesses to go the IPv6 route.


IPv6 always reminds me of this http://www.youtube.com/watch?v=_y36fG2Oba0


It might be ugly but cellphones around the world rejoice. Your battery is going to love you for it.


Could you explain what you mean? What do cellphones have to do with this?


When your cellphone gets a static IP instead of a dynamic one, it should be able to save up to 50% of its energy.

http://www.usipv6.com/what_is_ipv6.php

"IPv6 is compatible with 3G wireless (near) broadband and has other features that support greater mobility. There will be two billion mobile phones by 2006 and (at least) two addresses are required per mobile phone, so just enabling every mobile phone will require more IP addresses than are left with IPv4. Static addresses can also double battery life by not wasting power by checking whether a call is completed so the carrier can grab back the dynamic IP address, which wastes a great deal of power."


I'll have to call [citation needed] on "double battery life". I have 3 Jabber accounts with keepalives, Skype with its own keepalives, SIP with re-registration and pinging, and additional status messages transmitted every couple of seconds on all of them. If I turn all of them off, my battery life does not double; it improves by ~5-10%, the last time I checked. I really doubt disabling the DHCP equivalent on 3G can double battery life.


That seems quite optimistic. Do you have a source from someone who knows something about electronics engineering instead of networking?


I heard a Nokia engineer talk about it in some Podcast. Let me see if I can dig it up.

Edit: He (the Nokia engineer) didn't mention the 50% figure; that's only in the article. He only talked about better battery life in general. As for whether the 50% is true, I only have that article to point to.


Countdown of IPv4 address allocation: http://twitter.com/ipv4countdown


One thing that could be done when the transition is made: switch over all IPv6-capable machines for a short initial period, which would reveal all the machines that aren't capable, then switch back to IPv4 for long enough to fix the ones whose owners are now very aware of their problem. We could do that for several weekends prior to the real switch.

Such an approach would require a lot of software to be changed, to be sure. But it might be worth it.


Yup - that's exactly what customers will allow you to do. Especially if you're running an ISP / ITSP, your customers will be glad you disconnected most of their devices while testing this new thing they know nothing about and care about even less.


The class E block of IPv4 (240.0.0.0/4) has 268 million addresses available but no existing version of Windows will see/talk to them.

So just sue Microsoft into making a patch available, since they obviously aren't going to fix that on their own, and you've bought IPv4 a few more years.


61 million smartphones sold in Q2. Your solution will last little longer than a year, assuming perfect allocation of that block.


Can we NAT verizon and at&t? lol


We can't, but the carriers already do. At least here in France, the main carriers put every single smartphone behind a NAT router, even though IPv4 hasn't run out yet. They don't want their customers to run servers on their smartphones.


People who do multicast may not be too happy.


It looks like the article ignores an important fact: IPv4 addresses are used to determine location and whether the sender is a spammer or not. When IP addresses are easily available, it will be harder to maintain such location and spam databases. It looks to me like IPv6 is a waste of time and will eventually be replaced by some other technology.


I find it hard to believe that the transition from IPv4 to IPv6, which has been anticipated for nearly two decades and is supported by all major hardware and software vendors (albeit with the issues mentioned in the article), will simply be discarded because software engineers will have to think a bit more carefully than just banning people based on IP (which is already a bad way to detect spammers, since it is very common for many people to share the same IP if they are behind a NAT).

Furthermore, location would be no harder than with IPv4, since location databases, just like routing, rely on prefixes, which don't go away with IPv6. Also, because the IPv6 address space is so big, there are even proposals to allocate blocks of the address space to each country, which could then further distribute its block among its various cities/regions, greatly simplifying determining location based on IP.


Naah.. The government will just require your unique citizen identifier in the bottom 64 bits. You won't be able to sign up for an ISP or a cell phone or any other connected device without it. Why do you think they made the address space so absurdly huge?


Who is this "the government?" In fact, I heard somewhere that the government of any arbitrary spammer is just as likely as not to be a whole different government!


Clearly the narrator is referring to that meta-governing-body, The Government. You know, the one all the conspiracy theorists never see, because it's OMG REAL, and therefore not a conspiracy. Narrators, however, have outside knowledge of the system they narrate, so they would be in a position to know of such a system.


Ok, it's a paranoid theory for now, but I couldn't think of any other possible reason for such a large address space besides permanent unique identification of clients, like a MAC address. The bandwidth wasted on sending those addresses around, multiplied by every packet on the internet, seems ridiculously wasteful otherwise. For instance, there will probably never be 2^128 packets created and sent on the internet between now and the heat death of the universe.


Wasn't meaning to shoot down the paranoia, just having fun with / at your username, as I saw the potential for fun :) It's certainly excessive, but more layers = more efficient routing, and I'd assume they don't want to go through all this again in a mere 1000 years when we're populating 5 planets, everyone has 100+ network connected devices, and data centers have billions. And everyone tweets.


You don't understand how large 2^128 is do you? Let's quit all this handwavy crap about how that's a nice big number and look at how absurdly big it is.

The number of seconds since the universe started is 13.75 billion years * 86400 * 365 = 4.3 x 10^17 seconds.

U.S. Internet traffic was 18 exabytes (1 exabyte = 10^18 bytes) a year, so let's just say that we have been sending 18 exabytes a second since the beginning of the universe and we are counting each byte. Yes, each set of 8 1s and 0s. How many is that?

4.3 x 10^17 * 18 x 10^18 = 7.74 x 10^36.

2^128 is about 3.4 x 10^38, and 3.4 x 10^38 / (7.74 x 10^36) =~ 44. So you could have a different IP address for every byte sent over the Internet, assuming the whole year's traffic was sent every second, for every second in 44 times the current age of the universe.


I understand it perfectly well, yes. I also understand that people like to buy ranges of IP addresses. This much space makes it not only easy, they can sell large ranges nearly indefinitely. Sell + indefinitely = yes, from their viewpoint.


Blacklisting IPs is typically based on the behavior of the address. More importantly, networks tend to be listed if a certain percentage of the addresses on them send spam, act like bots, etc.

Location databases are based on the owner of the netblock.

There are still networks in IPv6. Typically, with IPv6, the 48-bit MAC address is expanded into the rightmost 64 bits of the 128-bit address (the interface identifier); the other 64 bits are used for subnetting/routing. The new spam and location databases will focus on those upper 64 bits, just like today's location databases usually focus on the /24.
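
You can see the MAC-to-interface-ID mapping (modified EUI-64) in the link-local addresses quoted elsewhere in this thread; schematically:

    MAC address        00:16:ea:9b:56:e2
    insert ff:fe       00:16:ea:ff:fe:9b:56:e2
    flip the U/L bit   02:16:ea:ff:fe:9b:56:e2  ->  interface ID ::216:eaff:fe9b:56e2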


Like? And when? And on what hardware? That address space issue still looms - putting it off another 20 years won't help anything. IPv6 stuff is available now, the problem is in using two different formats - introducing a new format is unlikely to solve that issue.



