A Primer on IPv4, IPv6 and Transition (potaroo.net)
134 points by p1mrx on Apr 21, 2013 | 123 comments

This is not an internet thing. We humans are very bad at pre-empting issues like this and doing something about them ahead of time, especially when it requires spending money.

Here in the UK, for example, we know we have an energy crisis looming. We know we need to build more power stations. We have known this for over 20 years. We have the technology, money and skills, but only this year has one single new power station been signed off on.

Motor sport... knew neck injury was a common way to die or be crippled. They had the HANS device, which would have saved the vast majority of those lives, but people refused to use it. Not so much a money problem as attitude. Then a few drivers died who could have been saved, and there was a mass push to adopt the HANS device.

Same sort of thing with Islamic terrorists. We knew of the threat, but did little to stop it until the catastrophe of 9/11.

Look at AIDS. No one did much until the numbers got scary.

What we do is wait and wait until there is a catastrophic failure, then panic the known solution into place while blaming everyone in sight. Of course mistakes then happen, because it's a rush job.

In short, this is perfectly normal.

I suspect that we won't get IPv6 properly until the internet feels and looks broken. When the average punter is inconvenienced en masse is when IPv6 will properly roll out. In this case, it must be very expensive to blanket-upgrade, and prices will have to rise to cover it. But the people paying those higher prices won't see an immediate benefit; in effect they will be expected to invest up front while IPv6 is rolled out. Or, for once, will it be different?

I have a suggestion: ISPs could offer a deal where the customer pays more now but later gets the IPv6 upgrade for free, or something else beneficial in return for the investment. Let that increased cost be a small private investment in one's ISP.

While certainly concerning, I think the OP is proving the opposite point. The fact that solutions keep evolving globally to work around the IPv4 address space issue, and pretty seamlessly too, speaks volumes about how amazing the internet and its associated technology stack truly are.

IMO, the main issue with IPv6 is its overall complexity compared to IPv4. While its end goal is laudable, I agree with another comment that there may have been a better halfway solution (an "IPv4.1") which could have eased the transition.

I say this knowing full well that the problem is very complex and the people working on it brilliant, which makes me believe there may have been no other way.

IMO, the main issue with IPv6 is its overall complexity compared to IPv4.

And what complexity would that be, if I dare challenge you? I say it's the contrary: IPv6 is dead simple. IPv4, having to deal with NATs and other networking hacks like them, makes everything that should be simple complex.

Really: IPv6 is a no-brainer, and the only thing remotely "complex" about it would be that addresses are larger, and that would be because IPv6 was designed to accommodate a larger address-space. But since end-users will be using DNS anyway, that's not going to bother anyone except techies.

Techies should be able to read a few simple FAQs on the subject and be done.

TLDR: If you find IPv6 "complex", that's because you've been too lazy to look into it. I recommend you go do something about that.

Since you called me lazy (thanks for that), I'll respond. Keep in mind, I'm not remotely an expert nor do I claim to be.

A few FAQs? I think you're being a bit disingenuous. While the protocol itself is relatively simple to understand, the complexity of implementing it is immense, which is why we're still talking about it and not fully doing it.

Why it's complex:

1. The recommended approach is dual-stack. This means that network admins/system administrators will essentially have to manage two separate networks. Of course, this is assuming you have core-to-edge IPv6 support. If not, then you're upgrading all of your hardware.

2. Cost. Predominantly why we're still here. It's easy for us to push this aside in our cosy developed world, but that ignores the enormous amount of legacy hardware deployed all over the world that will simply not work, and, quite frankly, solutions like tunnelling read a lot like the supposed pitfalls of IPv4.

3. Training. It's not a matter of a few FAQs to learn; for a lot of technicians it is a matter of re-certification, as hardware has to be replaced, which likely means new operating systems and firmware, plus proxying/tunnelling servers and devices à la Teredo/ISATAP.

For anyone that says it's a "no-brainer", I say that you have little to no experience managing large heterogeneous networks in cost-strapped organizations which, up until now, have had little to no reason to upgrade, as the IPv4 address exhaustion issue does not really affect them (yet).

While you bring up quite a few valid points, I strongly dispute that "the complexity related to its implementation is immense". These so-called complexities have to be weighed against the cost of continually patching IPv4 networks to keep them functioning. And that job gets more complex and costlier with every passing day.

With IPv6, most issues you have in IPv4 space would be solved the instant you enable IPv6. All the hacks go away. It's a nice, clean network again.

The cost of getting IPv6-capable routers should be negligible, as anything worth the money bought in the last half-decade already supports it. Chances are that if you haven't replaced it yet, you will anyway, due to the stupendously increased bandwidth requirements we've seen in recent years.

I'm not saying it's a free ride, but I am saying it's a no-brainer. Continuing to patch the rugged IPv4 landscape we have now is not going to stay viable, and then you might as well invest in the proper solution right away instead of wasting money, time and resources on a stop-gap.

Depends on whether you consider wide-scale NAT a solution or not, since that is what ISPs seem to be considering.

Personally, I don't.

I don't either. But you have to admit a grudging admiration for our ability to procrastinate and not have the whole thing collapse around us in the meantime. We've been talking about IPv6 for years in alarming tones about address exhaustion and yet, we're still OK.

I guess I'm a glass half-full person on this one.

At this point, IPv4.1 wouldn't solve the problem since it would require upgrading lots of software and hardware that already supports IPv6. If it had been developed ten years ago, then a slight modification to IPv4 might have flown.

One possibility that might have worked would be an option field to store the internal IP address from NAT. This would allow software that knew about the option to work around NAT; the expanded address would basically become a list of IPv4 addresses. Unfortunately, this would require support from NAT routers, and consumer routers are probably the slowest equipment to upgrade. It would also require support from operating systems at least, though most applications could possibly remain unchanged. Protocols that embed addresses, like FTP or SIP, would need to be extended but would get the most benefit from easier NAT traversal.
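Sketching that option-field idea (the option number and layout here are made up purely for illustration; nothing like this was ever standardized):

```python
import struct

# Hypothetical IPv4 option carrying the NAT-internal address chain.
# Each NAT on the path would append its inside address, so the full
# "expanded address" becomes a list of IPv4 addresses.
OPT_NAT_CHAIN = 0x9e  # invented option type, for illustration only

def encode_chain(addrs):
    """Pack dotted-quad internal addresses into one IPv4-style option."""
    data = b"".join(bytes(int(o) for o in a.split(".")) for a in addrs)
    # option type, total length (type + len + data), then the addresses
    return struct.pack("!BB", OPT_NAT_CHAIN, 2 + len(data)) + data

def decode_chain(opt):
    """Recover the address list from the option bytes."""
    typ, length = struct.unpack("!BB", opt[:2])
    data = opt[2:length]
    return [".".join(str(b) for b in data[i:i + 4])
            for i in range(0, len(data), 4)]
```

As the comment notes, though, every NAT router and OS on the path would have to understand this option for it to help, which is exactly the upgrade problem it was meant to avoid.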

All of the changes have already been done for IPv6 (or not done like with Skype).

Like what?

People keep suggesting we could've somehow gone half-way, but it's an illogical proposition.

Router ASICs are keyed for IPv4 - that is the only thing they will ever do. Change anything about IPv4, and you have to replace all the routers (heavy duty ones, which do the heavy lifting of the internet).

There is no half-way solution to this problem that is somehow "easier" than the existing transition mechanisms.

Change anything about IPv4, and you have to replace all the routers

That's a broad oversimplification. Use the Cisco 6500 line as an example: it's a platform that's very long in the tooth, and through modularity is still functioning quite well, even for IPv6.

Also: let's not pretend that similar issues weren't hit with v4, eg, the routing table explosion.

You can change plenty of things without fork-lifting entire platforms, and serious long-term challenges exist even without considering v6.

IPv4 and IPv6 are both complex protocols. Changing anything does mean that you have to replace all the routers, but if you only change the address length, the new routers are almost the same as the old ones: nothing needs to be re-engineered, nobody needs to be retrained, and most software just needs a struct in a header file modified. You do need to replace everything, but you don't need to re-engineer everything. (Not necessarily my opinion, but my interpretation of GP's opinion.)
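As a byte-level illustration of that claim (not the actual kernel structs), the only field that grows is the address itself:

```python
import socket

# An IPv4 address packs into 4 octets, an IPv6 address into 16; from a
# "struct in a header file" point of view, that length is the change.
v4 = socket.inet_pton(socket.AF_INET, "198.51.100.1")
v6 = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
print(len(v4), len(v6))  # 4 16
```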

If you change the address length, no old router can route packets to a new, longer address. Nor can they route packets from long addresses to short ones. Game over.

You can't "just change the address length". They're hardware ICs. If the address is not 32 bits long, then it's a malformed packet that should be ignored. They cannot handle it differently; they can't be reprogrammed.

There's been some consternation over Cisco stuff in this capacity, since they use an ASIC for IPv4 and a software-mechanism for IPv6 which is not nearly as fast.

The cost and complexity of changing hardware is enormous compared to every other part of the process. IPv6 in software has been solved, and where it can't be solved you can shim it, trivially by comparison. Software is not what's holding it back.

They could have made the change backwards compatible. Basically by designating a part of IPv4 space for use with the new protocol, and then stuffing the extra address bits into the optional header field. Then if you're running an older device that doesn't understand the new header, it will just forward the packet along until it gets to a device that does.

Which puts you in the same problem: to any such device (i.e. one with a hard-wired ASIC, which is generally the largest and fastest hardware) the new address space does not exist.

The internet goes to some lengths to avoid routing loops as well, so once a packet is passed one router, we're not going to be able to easily spin it around and send it back to get to the right place. The net effect would be to randomly DDoS a bunch of devices which happen to be behind non-"IPv4.5" hardware, and so receive the 50% of the internet's traffic which doesn't get routed properly by an upstream router.

That is sort of what I meant. I stick by my caveat at the end though and recognize that it is likely far more complex than that which is why we're in the situation we're currently in.

So instead of an update to everything that handles addresses, we have an update to everything that handles addresses and no change other than that 4 octets are now 5 octets.

What overall complexity of IPv6 are you talking about?

> We are hopelessly addicted to using a network protocol that has now run out of addresses.

We... "We"...

Let's see what the situation with germany's Telekom is like:

2010: http://heise.de/-1102458 - At the end of 2011 all DSL connections will be switched to ipv4/ipv6 dual stack

Mid 2012: http://heise.de/-1605061 - At the end of 2012 they will provide ipv4/ipv6 dual stack

End of 2012: http://heise.de/-1763557 - Only customers with VOIP or ISDN are getting ipv6.

It's not "me" who loves it, it's the ISPs.

> Only customers with VOIP or ISDN are getting ipv6.

I think you got that the wrong way around. Old analog and ISDN connections are specifically mentioned to never get v6 (which makes sense, I guess). Only new contracts with a fixed IP will definitely get it.

Oh right, I read that wrong.

But... Why does it make sense?

IPv4 works just fine, the problem is growing it. If you already have service, you've already got an address. No problems.

Alternatively, ISDN/dialup gateways just don't support IPv6 because they were all designed in 1995.

> IPv4 works just fine

Here you are illustrating the network-effect problem. If no ISPs support IPv6, it makes no sense for websites to support it. If no websites support it, it makes no sense for ISPs to support it.

You're right that everyone with an address needs no new address to be globally reachable, but in order to actually fix the issue we need everyone to switch. The protocol versions just don't inter-operate.

Yeah, here we had an "IPv6 Task Force" in 2004, showing how ready the Portuguese ISPs were. The site hasn't been updated since, and my ISP tells me there's no planned schedule for introducing it :|

I think in Germany the subscribers get whatever "Deutsche Bundespest" decides is good for them.

And yet on many guides to hardening Linux, one of the checklist items is to turn off IPv6! If the IETF wanted to fix the address space problem, it could have been done a lot more simply than IPv6 -- perhaps what the market actually wants is IPv4.1.

From the total internet IP scans I've seen recently, there is also a significant percentage of allocated IPv4 space which is dark. Out of addresses we are not.

For example: http://www.nsa.gov/ia/_files/factsheets/rhel5-pamphlet-i731....


I am not an authority on hardening Linux, but I would bet that disabling IPv6 is recommended simply because in most deployments it is unnecessary extra complexity. All your careful IPv4 firewall rules are meaningless if you forget to also configure ip6tables.

> if you forget to also configure ip6tables

Yeah, I've never understood that. I get that ping and traceroute have their own IPv6 versions, but that's sort of understandable (maybe).

A firewall, though, is a firewall; it shouldn't matter which of the IP protocols is in use. TCP and UDP don't have separate tools. The maintainers should clean it all up and put it into one iptables tool. If you want only IPv4, use a -4 flag, and ditto for IPv6 with a -6 flag. Heck, for most rules you could just imply it from the nature of the src and dst addresses.

It's because most firewall rules we deploy match on a combination of layer 3 addresses and layer 4 ports, not just layer 4 (port numbers).

And since layer 3 addresses are different between IP versions, we need different rules. (There may also be additional concerns if you enable v4-mapped addresses for IPv6.)

It's primarily due to router discovery and automatic address assignment, combined with stacks built to prefer v6 if available, making it trivial to hijack/reroute/MITM your traffic. It's a significant concern on local networks that haven't intentionally deployed v6: things like laptops can inadvertently take over traffic on a network and push it through their v6 tunnel, generally evading network boundary controls.

What the market wants, or what ISPs want? IPv6 works perfectly for me; the only problem is having to use a tunnel to get to it, meaning that until at least one of the ISPs in my area delivers the goods, it will continue to be an extra step. "IPv4.1" turns out to be NAT, which makes direct connections a privilege. Unless ISPs take the initiative, IPv6 will continue to be relegated to an overlay network on IPv4 for the foreseeable future.

People are again panicky about this, just as they have been through every major IPv4 transition. But the solution is already known: it's called carrier-grade NAT. ISPs will be NATting on their side, so multiple customers will appear to have the same IP address.

As with the original adoption of NAT, it will be slightly bumpy at first. Some sort of port-forwarding protocol will surely have to evolve (hopefully something more like NAT-PMP than uPnP) to allow incoming connections through what will now be double NATs (one at the ISP and one in your home), since we can assume people won't remove their original NAT just because a new one has been added. People like their personal NAT; it makes them feel safe. And I think those people are right to feel that way.

And then if you want to run a public web server, for now you'll have to pay extra for a static IP, just as you have had to for years now. That's not a big deal; there are far fewer than 4 billion public web servers right now. The market won't run out right away. As we get even tighter on addresses, the price of a static IP will rise.

And when that becomes a problem (prices too high), someone will find a way to use something like DNS SRV records to put multiple web servers on a single IP address, using different port numbers, with DNS telling your web browser which port to use. This will require browsers to support whatever new standard emerges, but I figure we have 10+ years before that level of workaround is absolutely critical.

Remember, if you allow for identifying servers by ip:port instead of just IP, you have roughly 65536 times as many "addresses" available. That's enough to last pretty much forever.
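For what it's worth, the ip:port-via-SRV scheme described above would look roughly like this in a zone file (all names hypothetical; browsers don't actually consult SRV records for HTTP today):

```
; SRV format: priority weight port target
; Two sites sharing one IPv4 address on different ports
_http._tcp.example.com. 3600 IN SRV 10 5 8080 shared-host.example.com.
_http._tcp.other.org.   3600 IN SRV 10 5 8081 shared-host.example.com.
shared-host.example.com. 3600 IN A  203.0.113.10
```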

We don't need IPv6. We will need new routers to do the carrier grade NATting (and it's a very hard problem to NAT at that speed/scale, but hey, that's what money is for. And ISPs can deploy it incrementally, as they run out of addresses).

What you've described is not a "solution" at all - it's just a series of hacks. I'm not a purist and I accept the need for some hacks (e.g. I recognize some people will still want to use NAT with IPv6) but the future you're describing, where end users won't have routable addresses, and where it will be difficult/costly to get IP addresses for new services, is totally unacceptable. Transitioning to IPv6 is well within what humans are capable of, and frankly I find comments like yours extremely regrettable because this sort of sentiment is partly why we can't have nice things.

You're also underestimating the difficulty of deploying some of your proposed "solutions." DNS SRV has seen no adoption outside of a few services like SIP and XMPP. Getting web browsers, and every other last client out there that speaks HTTP, to support SRV won't be easy. Just look at the glacial pace other web standards move at.

And factoring in the port number doesn't give you 65536 times as many "addresses." Things like routing and ARP happen at the address level. Under your "solution" you wouldn't be able to migrate/fail over services just by moving their IP addresses between hosts. DNS-based solutions have never worked well for this.

If carrier-grade NAT is the solution, then the future is non-routable. If you are behind NAT and I am not, you can make an outgoing connection to me and we can still talk. If we are both behind NAT, we need a 3rd party's help. That is not an Internet IMHO.

It's also the end of anything but UDP or TCP on the Internet. I wouldn't trust my ISP to have a packet filter and NAT implementation for anything else.

All you need is a proper dynamic port opening scheme like NAT-PMP. You may also want to use a third-party STUN server to exchange routing info, but that's no more complex (actually easier) than DNS.

It's still the internet even though you need DNS to turn names into IP addresses, right? It's just a little more complicated. That's what the new world will be: the Internet, but a little more complicated. Which is exactly what happened when DNS, then CIDR, then NAT were introduced.

As important as DNS is for the Web, the Internet doesn't currently (and shouldn't) need DNS (or a DNS-like coordinator) any more than cupcakes need candles. If I know your telephone number, I shouldn't have to dial the operator and ask for their help (and implicit permission), I should be able to help myself and dial direct. We are re-imposing an unnecessary middle layer that has all sorts of social equality/neutrality implications. (People with global addresses have more power than those who don't.)

That's the point of direct addressing: removing ambiguity and allowing direct connections.

Edit: The cost of patching IPv4 is just another reason to move deliberately to IPv6 (or something that allows direct addressing again, but for the sake of argument, IPv6 is the leading candidate).

Edit: Deleted confused nonsense about NAT-PMP.

I think you may be misunderstanding NAT-PMP. Done correctly, that protocol can open a port through multiple layers of NAT without having to know how many layers there are. And then you'll have a well-defined public ip:port that other people can connect to you on. (You could, for example, advertise that ip:port in dynamic DNS or a bittorrent peer discovery protocol, just like you do today for dynamically-assigned IP addresses.)
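For the curious, the NAT-PMP mapping request really is tiny; a rough sketch of the packet from RFC 6886 (the helper name is mine):

```python
import struct

# NAT-PMP mapping request (RFC 6886, section 3.3): 12 bytes sent over
# UDP to the gateway on port 5351. The gateway replies with the
# external port it actually granted, which the client can then
# advertise (e.g. in dynamic DNS or a peer-discovery protocol).
def pmp_map_request(internal_port, external_port, lifetime=3600, tcp=True):
    return struct.pack("!BBHHHI",
                       0,                 # version, always 0 for NAT-PMP
                       2 if tcp else 1,   # opcode: 2 = map TCP, 1 = map UDP
                       0,                 # reserved
                       internal_port,     # port the client listens on
                       external_port,     # suggested external port
                       lifetime)          # requested lifetime, seconds
```

Traversing multiple NAT layers with this is where it gets harder, since each layer has to be asked in turn (or has to relay requests upward on the client's behalf).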

There's no doubt that direct addressing is simpler and more appealing. Yes. But it requires worldwide 100% deployment of a replacement to IPv4, which is not simple at all.

There's still an upper limit of 64K listening ports at the top level of NAT for both TCP and UDP. The UDP side is probably worse. A shortage of UDP ports would pin down DNS servers to a single IP:port and make spoofing responses easier. You'd also need a DNS cache at each level of NAT to avoid burning through top-level UDP ports for DNS. That would mean that any successful DNS poisoning would hang around until the bad responses get flushed out of the resolvers. (If you're very lucky and the TTL is followed correctly, that would be after the top-level TTL expires.)

Yes, I had the wrong idea about NAT-PMP.

The cost of patching IPv4 and working around the quirks seems comparable to, and less desirable than, the cost of simply running IPv6 in parallel. I wouldn't describe it as all-or-nothing, but I would say ISPs need to help by providing low-latency tunnels/advertised routes. (Hurricane Electric can't handle everything, going forward.)

"All-or-nothing" itself creates an obstacle.

The problem isn't that it's expensive for ISPs to deploy ipv6; the problem is that an ipv6 address is strictly worse than an ipv4 address[1]. Therefore deploying ipv6 would be spending money to give customers something they don't want. Nobody is going to do that! 98%+ of customers will be happier with ipv4 behind CG NAT than an ipv6 address, as they don't see the internet as a network of peers, but rather see it as "I'm a client, I want to connect to servers"

[1] By "strictly worse" I mean there isn't, currently, any server that anybody cares about that you can connect to solely over ipv6; there are, however, numerous sites that are only reachable over ipv4.

It's not really a case against IPv6 that existing websites don't need it since that's a privileged position that makes IPv4 seem fine. IPv4 exhaustion is only a problem for individuals at the edge who want incoming calls (to act as servers). Fortunately, today, new edge-homed servers can already use IPv6 through a tunnel, with the immediate advantage that they have a "real" globally-routable address that they can be reached at. In that sense, IPv6 has already arrived. It would just be nice if I didn't have to set up that extra tunnel to connect to something that only I and my clients connect to.

The people at the edge who don't care are, sadly, the ones who would most benefit. I agree that that is a problem. These social problems (consumer apathy, and the willingness of ISPs to exploit it to turn a peer network into a broadcast tree) are admittedly overwhelming but, as you also note, the costs to ISPs are minor: IPv6 can be provided to the edge, even if it's ultimately tunneled over IPv4-only hardware.

This is more a matter of technology-leaders, IMHO, pushing/expecting ISPs to do the right thing for once (if we can spare a few minutes from selling censorship to dictators). History has shown they aren't going to do it without public pressure. Their preferred distribution medium (cable TV) already existed: it was people who understood that it was a peer network that drove the adoption of the Internet. If IPv6 spends the first 10 years or more being used exclusively by that group of people, fine, but it's still worth promoting. ISPs only started taking it seriously in the last few years, so there's a long way to go, but I think it's a reasonable goal to get an upstream IPv6 router advertisement, eliminating the need for tunnels, to every IPv4-connected home in the next 5 years (it is really just a matter of installing Linux, or your preferred OS, on a spare box until the load dictates an upgrade; there is no chicken-or-egg problem).

there isn't, currently, any server that anybody cares about that you can connect to solely with ipv6

If a consumer-facing network with millions and millions of devices uses IPv6 exclusively for their management network to keep the service running effectively and efficiently, do I care? Do they?

As with the original adoption of NAT, it will be slightly bumpy at first.

The key difference here is that the NAT is now outside the user's control, and if someone wants to fool around with anything funky and put it on the public internet (like a new Tim Berners-Lee making a new World Wide Web) they won't be in a position to do so anymore. Oops.

The old internet let people invent things and publish it as they saw fit. And that's why the internet we have now is awesome.

This new internet you are describing makes people apply for permission to publish things. That sounds like the exact opposite of what the internet was designed to do, and of what was required for the internet we have today to evolve.

It is a very short-sighted strategy pushed forward by people too lazy to read up on IPv6 and see how simple it really is. It's not earth-shatteringly different. The addresses are bigger, by design, and that's about it.

> Remember, if you allow for identifying servers by ip:port instead of just IP, you have roughly 65536 times as many "addresses" available. That's enough to last pretty much forever.

I see you're using historical precedent:

640k of RAM ought to be enough

32-bit memory addresses will always be large enough

2 digits is enough to encode a year since it will be the 20th century forever

All the characters anyone would want to use fit within 8 bits

You may be right. At that time we will end up introducing or requiring some kind of horrible port multiplexing scheme, like maybe port knocking (using a pattern advertised in DNS) or maybe even... HTTP/1.1's Host: header :)

Separately, I think addresses are slightly different than those other measurements. I'd think the number of needed public server ip:port addresses is roughly on the same order of magnitude as the number of humans, or perhaps less. By that measurement, 4 billion is almost enough, but clearly not enough.

I wouldn't want to have to bet on that, but I don't have to. There's always another layer of indirection possible. And that layer of indirection will always be infinitely easier to deploy than a replacement to IPv4.

> And then if you want to run a public web server, for now you'll have to pay extra for a static IP, just like you have had to do for years now. That's not a big deal,

It is a big deal. The price is a signal of scarcity, and this means some webservers won't be deployed because of the scarcity. The only difference is that instead of plain running out of new addresses we will be gradually running out, while screwing those who can't afford public IP addresses.

The price is a signal of a scarce resource, when scarcity can be avoided by switching to larger addresses.

Gah, that'd be terrible for games. It's already annoying enough to enable port forwarding on /my/ router, I don't want to also have to do it upstream just to get a Terraria server for my 4 friends up.

Quick tip: link-local addresses in IPv6 are mandatory and all start with "fe80:". If you've enabled IPv6 on your machine but don't use IPv6 on your network, this is probably what you will see from ifconfig.

If you try ssh user@fe80:0123::1, for example, you'll get an invalid argument error from connect(). In that case, you'd use ssh user@fe80:0123::1%eth0, assuming that address is reachable via your eth0 interface. Alternatively, don't use the link-local address.
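A quick sketch of how that "%zone" suffix is handled (Python here purely for illustration; the point is that the zone index ends up in the sockaddr that connect() needs):

```python
import socket

# getaddrinfo understands the "%zone" suffix on a link-local address and
# returns it as the sockaddr's scope_id. "%1" is the numeric form; on
# Linux an interface name like "%eth0" works too.
info = socket.getaddrinfo("fe80::1%1", 22, socket.AF_INET6,
                          socket.SOCK_STREAM, 0, socket.AI_NUMERICHOST)
host, port, flowinfo, scope_id = info[0][4]
print(scope_id)  # the zone index that tells the kernel which link to use
```

Without a non-zero scope_id, connect() has no way to know which interface "fe80::..." refers to, hence the invalid argument error.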

The tone is unnecessarily alarmist. IPv4 address exhaustion might limit internet growth, but it will not cause the internet as it already exists to break suddenly. Adjustments have been made and will continue to be made, such as corporate use of non-routable subnets for internal machines and a market for unused addresses. While address exhaustion is a serious issue that still must be dealt with in the next few years, the sky is not falling.

Or rather, the sky is falling, but it's still high enough up, and descending slowly enough, that we have plenty of time for a gradual and orderly transition to .. uh, I guess this is where the metaphor breaks down.

Underground bunkers?

In some ways this is worse because it will give time to implement half-measures like ISP-level NAT rather than forcing the adoption of a proper solution like v6. Over time, the price of an IP address will rise and end users will effectively be paying an artificial scarcity tax just to get connected without realizing that there is absolutely no reason for a shortage to exist.

IPv4 address exhaustion might limit internet growth, but it will not cause the internet as it already exists to break suddenly.

If the routing table grows too fast (eg, due to fragmentation of the available address space) there is a very real chance that the internet as it currently exists will break suddenly for some users.

Not that IPv6 fixes this automatically -- it grows the routing table as well -- but v4 fragmentation does cause very real problems. You can't just pick addresses up off the floor and put them to use. Each routed block carries a real marginal cost for everyone globally who services that route.

Until you can't create a machine on EC2 because Amazon has run out of addresses. Obviously they will have been buying companies to asset-strip their IPs first.

And as a follow-up to the article, note that IPv4 depletion is not an abstract scenario. If you host a website, or manage a network, or write software that ever touches an IP address, it is quite literally your responsibility to pull out a keyboard and help fix this problem. If you're blocked on a system that isn't ready for IPv6, then you should find out why, or seek alternatives. The Internet is far too useful for us to stand around and watch it rot.

Microsoft has a tool called Checkv4 that scans code for things that have to be changed for IPv6 compatibility. Is there any equivalent for Linux?

I'm not aware of any automatic code scanners for Linux; usually the easiest thing to do is try it and see what breaks. You can use a HE.net or SixXS tunnel to get connectivity if your ISP isn't ready.

Beej's Guide shows how to use the C socket APIs in a dual-stack manner: http://beej.us/guide/bgnet/output/html/singlepage/bgnet.html...
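In the same spirit, a minimal protocol-agnostic client loop (a sketch of the pattern Beej describes, with a hypothetical helper name):

```python
import socket

# Dual-stack client pattern: ask getaddrinfo for all families and try
# each result until one connects, so the same code works whether the
# name resolves to v4, v6, or both.
def connect_any(host, port):
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        try:
            s.connect(sockaddr)
            return s  # first address that accepts wins
        except OSError as e:
            last_err = e
            s.close()
    raise last_err or OSError("no addresses for %s" % host)
```

The key point is AF_UNSPEC: the code never hard-codes an address family, so it needs no changes when v6 connectivity appears.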

For testing your own network and browser, http://test-ipv6.com/ is quite good.

IPv6 demonstrates a real flaw in the IETF: it has little ability to throw away a standard that suffers from flaws of a non-technical nature and start over. It's been clear for a long time that IPv6 has faced huge adoption barriers. Countless IETF transition standards, like CGNAT, have been proposed, adopted and deployed as stopgaps while v6 continues to languish. Carriers, an obvious critical stakeholder in protocol adoption, didn't have an opportunity to participate in the standards process at the time. Yet we continue to cry that the problem is the people who won't adopt it, not the protocol that people won't adopt. DNSSEC has suffered a similar, if less dramatic, market failure.

Why should we expect carriers to adopt IPv6+1 any quicker than IPv6? Are there specific elements of IPv6 that are so carrier-unfriendly that correcting them would be worth throwing away all the progress that's been made?

Because we'd be talking about something closer to ipv4+1, something that allowed core equipment to not have to process packets in a different way and maintain completely separate routing tables for granularity of traffic that only matters at the edge.

Because we'd be talking about something closer to ipv4+1, something that allowed core equipment to not have to process packets in a different way

If that were true, then it wouldn't be able to provide any better end-to-end connectivity than IPv4, and thus would be a completely wasted effort.

The problem is that the IPv6 designers were all ivory tower types who forgot that the first priority should have been migration and interoperability.

Migration and interoperability, IMHO, are very minor problems. Software needs very minor changes to support IPv6 addresses (the socket interfaces are not affected, only the address input string, which is handled by an OS library...). The problem is 1/3 hardware/software (a lot of firewalls and NATs) and 2/3 political: most ISPs began as telephone or cable providers with a vested interest in creating or maintaining media distribution monopolies. NAT is the best thing since sliced bread to them since it largely breaks end-to-end routing/global addressability. (VoIP, P2P, etc, are all much harder through NAT)

You miss my point: they should have extended v4, not tried to create a totally new standard.

As The Register commented: "IPv6 was neither designed for small biz nor consumers. IPv6 was designed by big-ticket network engineers bearing global infrastructure and enormous enterprise networks in mind. Learned gentlemen who live in a world where buying IBM and connecting it with Cisco never got anyone fired"

They have reinvented the OSI stack, and we know how well that worked in practice (I was third-line support for the UK's X.400, so I know what OSI is like).

I like the idea of something simpler than IPv6 that wouldn't simply amount to encapsulation; I just don't know what that would look like.

Can you not use NAT with IPv6?

There were some NAT proposals, but I don't think any made it into the standard.

There is just no reason to use NAT if you have enough addresses. It's a hack to solve address scarcity, and doesn't add any security or any other benefits (unless you don't have a firewall, but you've got much bigger problems in that case!).

The RFCs for IP allocation say that every end site should get a /56 allocation - that is 256 subnets of /64 addresses [1]. A business site should be able to get a /48 (65,536 /64 networks) for no extra cost. Perhaps a mobile device with a cellular modem would get a /64 but that is the smallest allocation.

1. A /64 network has 2^64 addresses.
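The allocation sizes above follow from simple powers of two: the number of /64 subnets inside a given prefix is 2^(64 - prefix length). A quick sanity check of the arithmetic (Python, just for illustration):

```python
# Number of /64 subnets that fit inside a prefix of the given length.
def subnets_of_64(prefix_len):
    return 2 ** (64 - prefix_len)

print(subnets_of_64(56))  # a /56 end-site allocation
print(subnets_of_64(48))  # a /48 business-site allocation
print(2 ** 64)            # addresses inside a single /64
```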

Sure, but there's not really much reason to (IPv6 still has private/non-routable addresses, so you could if you wanted to). NAT on IPv4 is used somewhat like a firewall - because there's nothing to configure - whereas with IPv6 the address space is large enough that there's (almost) no reason to use NAT (that I find convincing), and if a firewall is still desired, it can be run independently (for example, ip6tables).

Edit: To respond to your other comment, no, there's nothing stopping ISPs from inflicting NAT on IPv6 too, other than the consumer asking "why am I behind NAT when there is no shortage of addresses?".

Every device can have its own address, so you don't really need it as much.

Sure, but if the ISPs number-one goal is putting a stranglehold on your personal freedoms, is there anything actually stopping them from using NAT on IPv6 if they wanted to?

If they are truly draconian cabals of evil, I don't expect "well, you don't need it as much" would stop them.

What's stopping them is their own greed; NAT costs more than not having it.

IMHO, the only thing blocking IPv6 adoption is the ISPs, like Comcast. Many of the datacenters support it, the devices support it; we're ready to go, just waiting on the pipes. T-Mobile is already using IPv6 for mobile, so our old ISPs are dropping the ball as usual. Carrier-grade NAT will be rolled out alongside IPv6, and IPv6 adoption will accelerate exponentially as everyone will want to avoid the performance issues that NAT has; all the devices already prefer IPv6. Markets are demand driven: as soon as the clients support it, the servers will fall in line surprisingly fast. I'm not worried, IPv6 is happening, we just need to give the ISPs a kick.

Comcast is a bad example since they are furthest along deploying IPv6 of any of the major US ISPs. They have coverage for 50% of customers.

But only 1% of traffic is IPv6, mostly because customer equipment doesn't support it. Cable modems need to be upgraded to DOCSIS 3.0. Most consumer routers also don't support IPv6, including many models currently for sale, and the manufacturers don't seem to have plans to release upgrades for IPv6 support. Third-party firmwares like DD-WRT and OpenWRT have some support, but it still isn't great, and most customers wouldn't be willing or able to install new firmware.

Thanks for the interesting facts, I didn't know that about Comcast. I still think that widespread ISP support will lead to rapid adoption, with 50% of traffic being switched over in a decade once that occurs. How old is your cable modem and router?

My cable modem and router are brand new since I deliberately bought IPv6 capable ones so I can use IPv6 when it becomes available.

Mobile seems to be the area with the most IPv6 adoption. Verizon Wireless has 25% of traffic on IPv6. Verizon required IPv6 for its LTE network, and many of the popular mobile sites like Google and Facebook support IPv6.

Mobile definitely seems to be leading the way. I know the pace of obsolescence may be slowing a bit, but I still suspect that most consumer cable modems and routers will be replaced within a decade, similar to the pace of 802.11n adoption. It is troubling that IPv4-only ones are still being sold. So many negative comments here, but I still believe IPv6 will be rapidly adopted in the coming years and that all this concern about carrier NAT and the depleted IPv4 address space will become irrelevant in short order.

Three broad points.

First, the supply side: if you read this article through, you saw the demand curve inflections at the classful-classless change and at the NAT change. Downthread someone already mentioned carrier-NAT, which is one more potential inflection. But there are others; the biggest could be a liquid market for routable blocks. There are large companies pointlessly squatting on huge allocations; some of them assign routable IP addresses for every desktop in their network, despite (sanely) firewalling them off entirely from the Internet. A market for IP addresses would make that waste expensive and return potentially large numbers of addresses back to general use.

Second, the demand side: It will no doubt enrage HN nerds to hear this, but most Internet users do not need a first-class address. In fact, it's possible that most Internet users are poorly-served by first-class addresses! They never take advantage of them, but holding them exposes them to attacks. Because mass-market application developers in 201x have to assume large portions of their users don't have direct Internet connectivity, technology seems to be trending away from things that require it, and towards tunneling, HTTP-driven solutions, and third-party rendezvous.

Finally: Who says IPv6 needs to be the future? Bernstein's summary[1] of the problems with transitioning is getting old, but the points in it seem valid. If we're going to forklift a whole new collection of libraries and programs onto everyone's computer, why don't we just deprecate the whole IP layer and build something better? I don't think I see the reason why IP can't just be to 2020 what Ethernet was to 1990: a common connectivity layer we use to build better, more interesting network layers on top of.

The core functionality of the IP protocol has served beautifully over the last 20 years, but the frills and features have not. IP multicast is a failure. IPSEC is a failure. QOS is still a tool limited to network engineers. We barely have anycast.

These are all features that would be valuable if they work, but that don't work because the end to end argument militates against them --- their service models evolve too fast for the infrastructure to keep up, and they're pulled in different directions by different users anyways.

We can get new features and unbounded direct connectivity with overlay networks. We have only the most basic level of experience with overlays --- BitTorrent, Skype --- but the experience we've had seems to indicate that if you have a problem users care about, overlays tend to solve them nicely. We should start generalizing our experience with successful P2P systems like Skype and pull in some of the academic work (like the MIT PDOS RON project) and come up with a general-purpose connectivity layer that treats IPv4 like IPv4 treats Ethernet.

Special bonus to that strategy: Verizon and AT&T don't really get a say in how those overlays work, and nobody needs to wait for a standards committee to agree on whether things are going to be big endian or use ASN.1 or XML.

[1] http://cr.yp.to/djbdns/ipv6mess.html

Each of those three points has strong counter arguments.

> A market for IP addresses would make that waste expensive and return potentially large numbers of addresses back to general use.

The concept of legal ownership of IP addresses as property is explicitly denied by ARIN and RIPE[1], and for good reason. If they were property, addresses would be held and hoarded as investments (as can be seen in the subset which is already owned in this manner). Allocations would also fragment, which leads to worse performance and higher costs. Given those two reasons, the first point fails: it's a really bad idea.

> it's possible that most Internet users are poorly-served by first-class addresses!

Internet users, or let's call them end users, want software that works, is effective, and thirdly is cheap. Software with those three attributes serves them. With NAT, however, those attributes are directly harmed. Some software will never work through NAT. Of the software that does work, some is much less effective, with worse latency and privacy. And thirdly, NAT adds costs to software development in the form of complexity, meaning that the end cost for users increases. Thus, because of NAT, software is less useful and more costly, which in turn harms Internet users.

> Who says IPv6 needs to be the future?

IPv6 was the smallest possible change to fix the problem, while still maintaining a form of performance requirement that overlay networks don't. Performance here being 1) latency, 2) router capacity, 3) privacy/security. If a new protocol would fulfill the performance requirements, then there would be a reason to discuss replacing IPv6 with something better, but until that time is here, IPv6 is the upgrade that is necessary for the Internet, end-users, and suppliers.

[1] https://www.arin.net/policy/nrpm.html

> IPv6 was the smallest possible change to fix the problem

Really? Going from a 32 bit addressing scheme to 33 bits would double the amount of addresses, pushing the problem a way down the track. Sticking to octets for simplicity, going to 40 bit addressing (five octets) would provide as many addresses as we need for the near future. But they went for 128 bit addressing with IPv6 - I fail to see how that was the smallest change that they could have made.

Making it backwards compatible with v4 would have been an even smaller change and we'd probably be using it by now if they hadn't broken compatibility.

And how was picking an address size that didn't match up to the native integer size of any common CPU good for performance? Maybe they expected us all to be running 128 bit CPUs by now.

If you're going to break compatibility by changing the fundamental address size anyway, what does it matter if you future-proof it by a factor of 256 or something ginormous, so that we won't have to go down this road again ten years from now?

Because the difference between a 64 bit integer (which would also have been future-proofed for the current IP service model) and a 128 bit integer is not simply 8 more bytes, but also the fact that all modern non-MCU computers can treat a 64 bit integer as a scalar, but are effectively forced to handle a 128 bit integer as a string or a structure of some sort.

This can be the difference between a 1-line patch to a C program and a 30 line patch.

Of course, the standards committees don't care about that cost (it is an externality to them), because "rough consensus and working code" stopped being the code of the IETF more than a decade ago.
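The scalar-versus-structure point can be illustrated with a small sketch (Python, using addresses from the documentation ranges, chosen arbitrarily): a v4 address packs into a single 32-bit machine word you can compare and mask directly, while a v6 address is 16 bytes that C code typically has to carry around as a struct or byte array:

```python
# A v4 address fits in one 32-bit word; a v6 address does not fit in
# any common machine word, so C code handles it as a 16-byte object.
import socket
import struct

v4 = socket.inet_pton(socket.AF_INET, "192.0.2.1")
v6 = socket.inet_pton(socket.AF_INET6, "2001:db8::1")

print(len(v4))  # 4 bytes: fits a uint32_t, compare with ==, mask with &
print(len(v6))  # 16 bytes: needs memcmp()-style comparison in C

(as_word,) = struct.unpack("!I", v4)  # the whole v4 address as one integer
print(hex(as_word))
```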

AFAIK, IPv6 originally became an RFC back in 1995.

One could always try rewriting IPv6 as an IPv4 extension, as RFC 1726 hints some IP versions could be, and then use the original IPv4 part as backward-compatibility data pointing towards an IPv4-IPv6 gateway. On the upside, it would cooperate with the current IPv6 work.

I do not know if such attempts have been made or considered, or whether it would be easier than the current approach.

Yeah, IPv6 is a great example of Second System Syndrome.

But, the only real change is the larger address space. And some rarely used features of IPv4 were removed. How is that 2nd system syndrome?

They increased the address space in the most disruptive possible way.

They could have defined a 64 bit address space and an escape hatch/upgrade path in the extraordinarily unlikely event that we ran out of addresses --- you could allocate a static IP address to every email sent in 2012, spam included, and still consume only 0.0003% of a 64 bit address space. You could address every page in Google's index in 0.00000005% of that address space.

In a 64 bit addressing scheme, IP addresses would remain scalar integers in the vast majority of programming environments used on the Internet (and where they aren't scalars, 128 bit addresses are even worse!). Instead, we have to forklift out not just the code that bakes in 32 bits as the width of an address, but also all the code built on the assumption that addresses are numbers you can compute with.

How would you define an "escape hatch" without replacing every device on the planet?

And IPv6 is really a 64-bit addressing scheme already: 64 bits for the network, and another 64 bits for the host within that network. The latter part can be ignored by routers outside of the target network.
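That 64/64 split is easy to see by unpacking an address into two 64-bit halves (a Python sketch using a documentation-range address):

```python
# Split an IPv6 address into its routing prefix (top 64 bits) and
# interface identifier (bottom 64 bits); only the top half matters
# to routers outside the destination network.
import socket
import struct

addr = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
network, host = struct.unpack("!QQ", addr)  # two big-endian 64-bit halves
print(hex(network), hex(host))
```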

From an IP-address spam-prevention perspective, 40-bit addressing would be optimal: it would give enough IP addresses for everyone for comfortable use, but at the same time would make IP addresses more expensive than zero (so it would be hard to simply use a new IP address for every spam request).

A 40-bit address is also still possible to memorize (unlike a 128-bit address).

Responses in reverse order.

First: IPv6 is not the smallest possible change. IPv6 is a very large change, involving an infrastructure upgrade and software upgrades across the Internet. Overlay networks necessitate neither. Routers on the Internet can at the beginning remain ignorant of new overlays. Endpoints add software as and when they decide to participate in a specific overlay. Meanwhile, the existing IPv4/HTTP service model, which works just fine under NAT, continues to operate. This is a more incremental approach than IPv6 and thus by definition a smaller set of changes.

Second: NAT demonstrably doesn't harm the interests of most Internet users, because a huge fraction of satisfied Internet users are already NAT'd. But if you believe Internet user interests are harmed by NAT, you also must believe they're harmed by IPv4, which has no functioning multicast or workable group messaging and has a security model designed in the '70s. At this point, arguing for a NAT-less Internet is de facto an argument for a massive software upgrade. If we're going to upgrade, let's upgrade to something better than IPv6.

Finally: we already have hoarding. It's just hoarding of fiat allocations.

Overlay networks do not fulfill the performance requirement, and your comment completely ignores that aspect. Overlay networks would be perfect if we could build them without that cost. Who here would not use Tor if latency were as good as outside Tor? Who here would complain if Skype-like networks were decentralized? The problem with overlay networks is that you have to sacrifice either latency or decentralization to get them, and IPv6 does not - ergo, overlay networks cannot solve the problem, because they can't incrementally improve without adding critically negative aspects along the way. Invent an overlay network that is decentralized and adds zero latency, and such a network would easily beat IPv6.

As for NAT, you are making an argumentum ad populum fallacy to counter evidential claims. NAT adds complexity to software that operates over the network; proof of this fact exists in RFCs and lengthy design-consideration documents. There is also evidence that such complexity adds to the cost of developing software, and equally that increased development costs mean increased prices. NAT traversal for many services also adds bandwidth costs and increases latency. I have a hard time understanding the argument that increased development costs, higher latency, and more bandwidth would be neutral for the user. The fact that a large number of users, mostly limited to a single ISP, are content with higher costs and higher latency doesn't seem to me to be a good argument in favor of NAT.

Vast amounts of content are delivered today on overlay networks (again: we call them CDNs). Overlay networks have enabled the current scale of content delivery on the Internet. Your performance concern --- about an overlay design you haven't even sketched --- is worse than handwaving: it can be falsified even without asking you to clarify.

I wish you'd stop trying to make me defend the NAT service model, because that argument is extremely boring. My point, which I think sees overwhelming evidence from just a cursory look at the modern Internet, is that most users are not harmed by NAT. Innovation continues despite its pervasiveness. We should use the time NAT has bought us to come up with something better than IPv6, which continues to bake critical policy decisions into $60,000-$200,000 Cisco router and switch chassis.

Legacy addresses, meaning those registered before ARIN's creation in 1997, are essentially property. Otherwise, they'd force you to pay for them.

No third party can call memory in some other third party's router's RIB their property. All it takes to overcome that hurdle is filtering.

I agree, but for "legitimate" use of address space ("resold" legacy or otherwise), why would this happen?

IPv6 support is pretty much standard in routers, applications, libraries, etc. these days, so we're not "going to forklift a whole new collection of libraries and programs onto everyone's computer" - all these libraries have been quietly ported over the last decade or so...

Really, CG-NAT is such an ugly hack that I think the only acceptable use for it is really just as a solution used when somebody has IPv6 to provide IPv4 connectivity to services that are behind the times.

Overlay networks are interesting, but that's no reason not to have the ability to have end-to-end IP routability when pretty much the entire core of the Internet and many Internet services support IPv6 already.

"IPv6 support is pretty much standard in routers, applications, libraries, etc. these days, so we're not "going to forklift a whole new collection of libraries and programs onto everyone's computer" - all these libraries have been quietly ported over the last decade or so..."

So what? DJB's linked criticism correctly predicted that would happen, and also correctly predicted that would not cause any significant uptake in ipv6.

The issue is that 98% of an ISP's customers will be happier with CG-NAT than with an IPv6 address, so the ISPs are going to spend money on the former and not the latter. This will be true as long as a majority of their customers connect to even one server without deployed IPv6.

The vast majority of people connected to the internet consider themselves a client, not a peer. CG-NAT is better for you than ipv6 if you are a client that wants to talk to even a single ipv4-only server.

So 98% of ISP users do not have an Xbox or PS3 or Wii or Skype?

CG-NAT breaks console multiplayer for most games (those without dedicated hosting). It will break VoIP systems such as Skype and SIP, and it will break in-game voice comms.

Guess what has no IPv6 support? Xbox, PS3, and Skype.

I will point out, though, that Dual Stack Lite may end up being cheaper for ISPs than NAT444 because CGNs are relatively expensive and native IPv6 traffic (including Google and Netflix) doesn't have to go through a CGN.

I actually have none of these; how does it dispute my argument? I bet all of these will work over CG-NAT.

I actually have none of these; ... I bet all of these will work over CG-NAT.

Yes. Let's gamble the indefinite future health of the internet on what works for you, a single point of reference, right now, at the very beginning of the IPv4 shortage, without a single thought spared for use-cases not concerning you.

That sounds like a very good and not at all short-sighted strategy.

That's not what I intended; I don't know what the issues are with any of these w.r.t. CG-NAT, since I don't have or use any of them. It was a request for clarification (you know, the part of my quote you turned into an ellipsis).

I do know that every place I have been to, the PS3/Xbox has been behind local NAT, so I would be surprised if CG-NAT broke these; my understanding also is that both of them have a central service for game discovery, which means there is no reason they couldn't implement NAT traversal there.

I also never said that CG-NAT wasn't more short-sighted than IPv6; rather that the ISPs have no motivation to deploy IPv6 and much motivation to deploy CG-NAT.

most Internet users do not need a first-class address

They would if they could use them, that is, if developers didn't "have to assume large portions of their users don't have direct Internet connectivity". What about we solve that instead?

We have only the most basic level of experience with overlays --- BitTorrent, Skype --- but the experience we've had seems to indicate that if you have a problem users care about, overlays tend to solve them nicely.

How well do BitTorrent and Skype work if everyone's behind carrier-grade NAT? How can supernodes function?

I don't think I see the reason why IP can't just be to 2020 what Ethernet was to 1990: a common connectivity layer we use to build better, more interesting network layers on top of.

Because it imposes stupid restrictions on those connections that it's supposed to be serving. Like not being able to connect any two arbitrary endpoints.

Your second question is the only one I care about. The answer is, "just fine", especially so if not everyone is behind CG-NAT. Surely you're creative enough to devise ways for two parties on the Internet to rendezvous through a third party server. We already have overlay networks that are NAT-compatible; they're called CDNs.

Isn't the issue here that pretty much everyone would be behind Carrier Grade NATs? For every 1000 or so servers (or CGNs) an ISP adds to a network, 1000 home users must be NATted. I think you can rule out vhosts and the like for NAT traversal use, so that means you need to supply a VPS or dedicated server for this and pay extra for the bandwidth/IP address. Alternatively, you can use 6to4/Teredo/native IPv6 right now as your "overlay network" to address clients directly and avoid all that at the expense of using up CGN mappings. Bearing in mind that Windows 7 already has decent IPv6 support along with Teredo on by default anyway, why bother trying to work around the NAT?

No, everyone would not be behind carrier grade NATs. Presumably, in a dystopic future where CG-NAT became the new norm, the outcome for people like us is that we'd pay extra every month for our Internet service.

The rest of your comment presumes that the only connectivity on the Internet is via IP packets. But that's not true; it's an assumption based on historical patterns of access. Instead, assume the emergence of a routed message relay substrate built out of TCP connections (or even best effort SCTP or some other TCP-friendly datagram service). You'd "connect" to that next-generation Internet by making the same kind of connection your browser does, and having done so would be off to the races.

This stuff makes me want to blog again.

There are large companies pointlessly squatting on huge allocations; some of them assign routable IP addresses for every desktop in their network, despite (sanely) firewalling them off entirely from the Internet.

Ptacek, I'm a bit surprised to hear this from you of all people. Globally-unique addresses are very useful for things other than direct end-to-end routability.

Financial institution question: "What was the last device to hit that mission-critical system?" Answer: "Well, the access logs show, but that's the Internal --> DMZ NAT address, so I'll need to check the firewall logs to see."

Without NAT in play, network configurations become much easier to conceptualize. ACLs become easier to deal with. Things become more clear.

It's a bit of a slur to call even the legacy class-A allocation squatting, being able to do sparse allocation is a godsend. I'm looking forward to IPv6 for this alone.

Finally, I cringe at anything that dictates or predicts what an end-user needs: I don't think it's unreasonable to expect a global communications network to provide globally unique addressing.

I don't think it's unreasonable to expect a global communications network to provide globally unique addressing.

This a million times. People growing up these days, behind NATs, probably never experienced the internet like it was in the early days.

Back when anyone online could offer anything to anyone else without having to resort to "hosting providers" and confined to what they "supported". Because you didn't need anyone to "provide" you with "hosting", because, hey, you were already online!

That sort of openness allowed the internet as we now know it to flourish and develop. Who are we to deny the future the same possibilities?

You've totally missed my point. You read my comment as arguing that we shouldn't have globally unique addresses. My argument is that we shouldn't wait for IPv6 to provide unbounded unique addresses, and in multiple addressing domains and with different service models. There is no reason that we need to be held hostage to Cisco and the IETF.

"We can get new features and unbounded direct connectivity with overlay networks."

Unless routing is de facto illegal (in Canada, a prior bill would have made routing come with certain data-retention and interception obligations, with non-conformance being a crime punishable by $50K to $250K per day), or carries a prohibitive liability (e.g., Tor exits). Skype got a foothold in a different political environment (and its proxying isn't well publicized), and BitTorrent isn't a general proxy (and politicians would love to ban it, even though that's a technical non sequitur). If I could afford to run an IPv6 tunnel + open router, I would (I would also love to run an open WiFi router), but I also don't want my door kicked in at 2 AM, so it would be nice if my ISP helped too. These are pretty big obstacles to experimenting with large-scale network alternatives. (Not disagreeing, just making an observation.)

NB: Skype no longer does the Supernode thing since Microsoft took it over. The work of the supernodes is now done by Linux boxes in Microsoft datacentres http://arstechnica.com/business/2012/05/skype-replaces-p2p-s...

It's not necessarily a waste to assign globally unique addresses to internal networks that will never see the unfirewalled Internet, because the uniqueness means that when that company is merged with some other company and you want to route between their internal networks, you can do it without renumbering or NAT.

The short term solution is carrier grade NAT. This will extend the life time of IPv4 for another 5 to 10 years.

(I've been running IPv6 on my home network since 2008 using a tunnel provider. My ISP does not support native IPv6 yet.)

The problem is not in the routers or operating systems, it's in the applications.

Programmers do not yet know enough about IPv6, nor do they have the drive to learn about it.

The problem is essentially everywhere. Router/OS programmers are programmers too, and some are more on the ball than others.

In the consumer router space, the level and quality of IPv6 support is far from perfect. Even the third-party firmware is spotty. IPv6 works fairly well on OpenWRT trunk, thanks to 6relayd and odhcp6c, but the best you'll get from DD-WRT is a kernel module and a pile of scripts in the wiki.

Programmers do not yet know enough about IPv6, nor do they have the drive to learn about it.

Which is why this is solved inside existing frameworks so programmers don't need to meddle with it.

Want a socket to "www.google.com"? Sure have it. Is that IPv4 or IPv6? You don't know, and you shouldn't have to care.

Just like Unicode, this should already be a solved problem in mature frameworks.

Yeah, one thing that is definitely broken is justified text alignment on the web.

Sorry, but I don't understand why this can't happen, and who is blocking it, if the hardware is there.

Maybe because the software ISPs use is not ready yet?

With the proliferation of NATting, what will happen to consumer geolocation? How will NAT statefulness affect reliability? NAT over NATs? Are push notifications on mobiles linked/affected? What about P2P networks?

Latencies will go through the roof, and P2P will be harder because of the difficulty in establishing connections without the help of willing 3rd parties with routable addresses (to act as proxies, or provide STUN-like functionality, etc). (Geolocation will probably stay about the same, but could go down depending on the depth of the NAT'ing... For example, if you are behind 2, you will only be seen as the outermost address...)

If it weren't for the latencies and the liability, I would consider a P2P IPv6 overlay an acceptable transition plan.

OK - I'm willing to transfer one of my servers from IPv4 to IPv6. How do I tell if my hardware/software/isp/etc. is compatible? Where should I get started?

Bug your ISP first. See if you can get assigned IPv6 addresses. Alternatively you can use tunnelbroker.

Of the 3 VPS providers I use, two had them available. None of my local ISPs assign IPv6 addresses, even though nearly all hardware built in the last 5 years supports it.

One super-lazy approach for HTTP is to put Cloudflare in front of your server. That kinda feels like cheating though.
