Let's Encrypt now fully supports IPv6 (letsencrypt.org)
503 points by el_duderino on July 26, 2016 | 187 comments



So unfair! Comodo once, a while ago, also thought about using IPv6!

But seriously: letsencrypt is doing excellent work. It's a great case study in how inefficient a mostly-free market can be: SSL adoption doubled within a year. All that was previously deadweight loss.


There is no such thing as a free market when certificates can only be created and issued by those who have the political clout to get their authority into the Microsoft / Google / Mozilla trusted keyring or into the trust network of another trusted provider.

A free market is one where you can compete independently.

If anything, it is a demonstration of why faux-markets with monopoly control (or in this case, the collusion between established and trusted authorities to prevent competition) are dangerous.


What? Anyone can become a certificate authority, so long as you take the time to document your procedures, store the private key securely, and be externally audited. You probably want an insurance policy as well.

The only program that charges money to be included is/was Oracle, as far as I know. Everyone else is free.


Everyone else charges indirectly. Inclusion is free, you "only" need to go through someone like webtrust who will charge you $100k+ for an audit.


So? Relying on commercial auditors to verify compliance with published industry standards is a pretty common free-market solution to problems of trust.


I'm not disagreeing with it. I just think the parent post was weird: anyone can do it, just spend some time to obtain things, only Oracle charges for the program.


What's the premium look like for an insurance policy for a private key?


What does it cover, exactly? Every loss from misuse of the key? Sounds expensive.


> So unfair! Comodo once, a while ago, also thought about using IPv6!

In case anyone doesn't get it:

http://arstechnica.com/tech-policy/2016/06/800-pound-comodo-...


> It's a great case study in how inefficient a mostly-free market can be: SSL adoption doubled within a year. All that was previously deadweight loss.

And also in how a market fixes itself :)


Let's Encrypt is a non-profit organization run as a public service by the Internet Security Research Group. Not exactly the same thing as a competing company. Market forces are not compelling these people to donate money, they're doing it because they believe in the goals of the organization.


Nicely enough, this is made possible by the freedoms in the current market (economic system). One should not forget that the freedom to donate one's work is also part of a free market. I like this interview with Linus Torvalds [0, 1] where he admits it was for his pleasure, and thus for his personal gain, that he made Linux. Rational self-interest does not require financial rewards to be satisfactory. It did make him a millionaire in the end though (because Red Hat gave him stock options). This also shows how he can be a dick sometimes while still being of immeasurable value to the global economy. He does it because he likes it, not because he likes you or wants you to like him.

[0] https://www.theobjectivestandard.com/2012/06/linux-creator-l... [1] http://www.bbc.com/news/technology-18419231


While true, the economic environment is also geared towards an assumption that you'll provide the vast majority of your work to a company that takes ownership of the results and directly pays you for it. In my country, the majority of the current generation will likely never own a house, let alone be financially comfortable enough to work on a social project full-time.

The result is that the only way for someone who isn't wealthy to do charitable work is to get either Government or companies to pay you to do it. Companies will generally only pay you to do work if they're getting something out of it, and Government has its own interests at heart.

Getting people to pay you to do something is very, very difficult - it's hard enough if you're selling them something directly and right now, significantly harder if you're still working on a project, and harder still if there might not be anything concretely benefiting the person donating at the end of it. Charities largely manage by not paying many of the people who perform work for them, along with begging money from Government and companies, and their volunteers primarily depend on Government welfare.

It's not a perfect system in the slightest - it's incredibly hard work to be able to perform any sort of charitable work, at least as hard as running a business, and is heavily subsidized. A system which ensured that everyone's needs were met by default, rather than requiring you to prove that you deserve to have your needs met, would allow significantly more charitable work - specifically, it would allow for work that rewards the worker in ways other than monetary pay.


Was it the same market though? Or is the shift largely driven by Google's ranking changes in that period?


This also reminds me of SGC certificates where browsers only trusted a limited number of roots.


SGC certificates were entirely a result of bizarre US export restrictions.


According to the conversation on https://github.com/letsencrypt/boulder/issues/593 they couldn't support it because one of their datacenters didn't support IPv6 traffic.


Just to clarify re: the DCs - they did/do support IPv6, but actually getting the addresses we needed provisioned was kind of a pain. That was a blocker for a bit.

There were some other factors on our side that contributed to the delay as well. It comes down to the fact that our priority has to be reliable and secure operation of the services we're already offering.

I'm glad it's finally here though, and now Let's Encrypt won't be a blocker for someone else's IPv6 deployment!


Which sucks because a data center that doesn't support IPv6 is the 2016 version of a data center that only does Token Ring networking.


AWS does not support IPv6, except on load balancers.


Even then, only on Classic-class accounts; new(er) VPCs don't support it at all.


They are unfortunately and shockingly common.


(Cough) AWS? (cough)

What's up with that anyway?


As someone who is responsible for protecting networks for large data centers, the biggest issue I see with adopting IPv6 is the overhead required. I took a job with another company this last year, and while I thought the asset tracking was bad at my previous company, I found it can get much worse. I have clients for whom we host servers, and I have no clear method of determining what their IP space is.

System admins routinely make mistakes with complicated host names, and trying to acquire an accurate inventory is an absolute nightmare. This ties into IPv6 because why would anyone take that dysfunctional system, which barely works with 'easy' IPv4 addresses, and make it even more complex? We would have to support both IPv4 and IPv6 simultaneously, and firewall rules would get much more complex initially - and they are already a huge issue for me to get changes made.

At my old job this was similar. Even though it was in the financial industry and that particular company was rolling in profits, it couldn't keep enough network engineers around to save its life. The turnover was high, documentation was horrible, and projects to make things better languished on in the ether.

No no no, forget the whole IPv6 thing, just run IPv4 for all things internal and gratuitously support IPv6 outside if you really have to. I jest of course, but that is the reality of my corporate life in the last 6 years in two fortune 500 companies.


The way I think about IPv6 is that it's an entire second network which happens to frequently coexist on the same layer 1 and layer 2 equipment.

That said, I think there's often a strong argument for only using IPv6 for the internal parts of a network. IPv6 actually simplifies things, and where IPv4 remains needed, it can be encapsulated and routed over a v6 network.

But it takes a team which understands the v6 world and is able to take advantage of the benefits for this to become a reality.


Amazon built all their own networking hardware and routing protocols and didn't include support for V6.


And I am saying that it's a very bad thing.


For any Go users out there, I'd recommend Russ Cox's package: https://godoc.org/rsc.io/letsencrypt. It automatically acquires certificates and keeps them up to date.
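For reference, the package's documented minimal usage looks roughly like this (a sketch from memory; the handler and cache-file path are placeholder choices, not part of the linked docs):

  package main

  import (
      "fmt"
      "log"
      "net/http"

      "rsc.io/letsencrypt"
  )

  func main() {
      http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
          fmt.Fprintf(w, "Hello, TLS!\n")
      })
      // Manager fetches and renews certificates on demand; the cache file
      // persists keys and certs across restarts, which also keeps you well
      // under the rate limits.
      var m letsencrypt.Manager
      if err := m.CacheFile("letsencrypt.cache"); err != nil {
          log.Fatal(err)
      }
      log.Fatal(m.Serve()) // listens on :443, plus :80 for redirects/challenges
  }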


Someone play devil's advocate and tell me reasons I might not want to use Let's Encrypt? (aside from potential issues from short-lived certs)


If you need a certificate that Let's Encrypt can't/won't provide. Some examples:

- Wildcard certificates

- OV/EV certificates

- S/MIME or code signing certificates

- Certificates with non-DNS SANs (e.g. id-on-xmppAddr or id-on-dnsSRV for XMPP servers)


There is a conceptual workaround for wildcard certs, which is essentially to provision on demand, since the process is automated and free.

Issues with this approach: complexity of implementation, and throttling (only so many certs are allowed per week, I think, per IP address).


It's actually worse. The current rate limits are:

> The current rate limits are 20 certificate issuances per domain per 7 days, 5 certificates per unique set of FQDNs per 7 days, 500 registrations per IP per 3 hours, 300 pending authorizations per account per week and 100 subject alternative names per certificate. See https://community.letsencrypt.org/t/rate-limits-for-lets-enc... for more.

Which means that you can add at most 20 subdomains a week. Any more than that, and you are SOL.


> Which means that you can add at most 20 subdomains a week. Any more than that, and you are SOL.

Though the use cases for needing 21+ new subdomains in a week are few and far between I expect, and probably all cases where a wildcard cert would be a better choice (which LE doesn't yet support).

Note that it is 20 certificates though, not 20 sub-domains, and LE lets you include more than one sub-domain per certificate. So if you can group the sub-domains together you can get many more than 20 in the period.


You're right, I glossed over that distinction. My use case is provisioning domain names for servers/containers the moment they come up, and it's not really feasible to batch that.


And if other people have control over the resulting containers in that circumstance, a wildcard wouldn't be suitable either, unless it is only used for HTTPS (and the local LAN/VLAN can be trusted), in which case you can put a proxy in front of the containers to handle it and avoid each container needing a copy of the private key.

I can see why limits are in place though; it protects them from abuse by badly written integration code and actions that are less accidental. Perhaps they'll lift the limits a bit as the service grows and stabilises. Or introduce a cheap-but-not-free service for people requiring something beyond the standard submission rate limits.


Yes, I know, I've followed the IETF ACME WG mailing list on the subject, as well as the ACME GitHub issue tracker.

It's being worked on and is probably coming someday, but not here yet. Which makes LE infeasible for some use cases. For now.


OV certificates are literally useless (or, rather, they have zero added value over DV certs) and EV certs are only valuable for the UI that browsers use when they're in use.


> EV certs are only valuable for the UI that browsers use when they're in use

Not exactly, you can pin your CA's EV root cert in your mobile app (or website using HPKP). This allows you to roll your cert at will while presenting a very high bar to an attacker to get a cert that will verify.


Or you pin your own CA root cert and keep the keys off the net.


I feel it's important to point out that no one "needs" an EV certificate.

Of course there is the "increased conversion" argument, which some users find worthwhile. But certain groups have long pushed the myth that certain types of websites "need" EV, which only serves to help their profits.

More than one consultant has argued you won't pass PCI compliance without an EV cert (false).


I feel a lot more comfortable logging on to a bank when they have an EV certificate. So for things like financial institutions, I think there is a huge value in the extra feeling of trustworthiness.

For a random online store or something I don't care and I think there is almost no value...


Hijacking...

BTW I'm trying to get in touch about ct_advisor but don't see an email listed...


Well it would appear you've identified a valid bug.

This has been fixed, and I've placed my email address on the site.


Thanks. Ideally, I would like to be able to update the email address used. I put the wrong email address in and there's no mechanism for correcting to the desired one.


Contrary to popular belief, it might be quite hard to configure it. I spent a few days trying to make it work, but I haven't succeeded yet. Though my requirements might be a bit atypical. I'm not going to run their software, which does too much for my taste, so I'm using letsencrypt.sh, which I briefly inspected and feel comfortable with, since I have full control over the process. I don't want to perform domain validation using HTTP, I want to use DNS validation, so I have to write an additional software layer to integrate letsencrypt.sh with my DNS provider's API (Vultr). And that turns out to be not so easy.

For me StartSSL was the best offering, but now I need a few third-level domains, so Let's Encrypt seems to be the only free choice.


To be completely honest, if you're not using the tool Let's Encrypt is creating with the specific intent of making configuration easier, it sounds like a moot point to state that it "might be quite hard to configure it". I'm not saying that your use case is invalid, but starting your comment off with stating that it is hard to configure might throw people off.


I don't think it's too much to expect people to read a single paragraph before jumping to conclusions.


Take a look at lego, it appears to support Vultr's DNS API[1] (and many others).

[1]: https://github.com/xenolf/lego/tree/master/providers/dns/vul...


+1 for lego.

It made things a breeze to configure. I'm now just hoping that ACMESharp will incorporate ACME DNS challenge support soon so that I can automate getting certs for individual machines right on the same box. Imagine: no more certificate complaints when RDPing to a machine.


> it might be quite hard to configure it.

I want something completely custom and that's hard therefore Lets Encrypt is too hard.


I was in a similar situation a few months ago, I wanted to update my Route53 DNS and didn't want to trust the magic tool.

I wrote my own simple script in Ruby using the letsencrypt gem and the AWS SDK. It might be a good starting place, feel free to fork and modify it for your own DNS provider.

https://github.com/paul/letsencrypt-route53


If you need a wildcard cert, or need an EV cert.


HTTPS is a good idea for many (most) scenarios, especially when your traffic goes over uncontrolled/public networks, but depending on where you stand, it may not be necessary for _all_ use cases.

For example, if the boss gives you an hour to e.g. stand up an internal company-facing blog on a controlled network environment where you don't necessarily have hostile actors in your threat model, configuring HTTPS might not be your first priority.

Another counterexample you hopefully won't have to experience: if you have to support end users whose device's date and time may be incorrect, e.g. their laptop battery died and NTP hasn't kicked in yet, then on page loads users would see a scary 'certificate not trusted' warning[1]. Fixing the laptop's date is a non-obvious solution for end users.

1: https://support.google.com/chrome/answer/98884?hl=en


Regarding internal sites, Let's Encrypt requires an externally accessible domain, which you might not have (either because you use a local domain, or because it's firewalled from the net).

On the other hand, the device date is less problematic nowadays, since Chrome recognizes this case and shows a specific message: http://4.bp.blogspot.com/-xOOCv0xLMxo/Vdu_Y8XlHeI/AAAAAAAADq...


Domains don't have to be externally accessible, they just can't be internal names (i.e. "made-up" domain names that you do not actually own), which is true for all public CAs. The DNS-01 challenge type does not require that you open any port, you just need the ability to create TXT records.


Unfortunately, there are still idiots (sorry, I feel strongly about this) that somehow thought it was a good idea to use "internal" names.

Now I face an uphill battle to get things corrected. ARGH.


When I last looked at the DNS-01 challenge type, I couldn't work out how to make it work with my DNS provider, Gandi.

How does the DNS challenge work with the delay in propagation of new records?


lego seems to have a plugin for Gandi's DNS API[1].

Let's Encrypt always sends DNS queries to the domain's authoritative DNS server and doesn't cache any results, so as soon as your authoritative DNS server has the record, you're good.

[1]: https://github.com/xenolf/lego/tree/master/providers/dns/gan...
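On the propagation point, one way to be sure the authoritative server is already answering before you trigger validation is to query it directly. A rough sketch using the third-party miekg/dns package (the domain and name server below are placeholders):

  package main

  import (
      "fmt"
      "log"

      "github.com/miekg/dns"
  )

  func main() {
      // Ask the authoritative server (not your recursive resolver) for the
      // _acme-challenge TXT record, mimicking what the CA's validator will see.
      m := new(dns.Msg)
      m.SetQuestion(dns.Fqdn("_acme-challenge.example.com"), dns.TypeTXT)

      c := new(dns.Client)
      resp, _, err := c.Exchange(m, "ns1.example.com:53")
      if err != nil {
          log.Fatal(err)
      }
      for _, rr := range resp.Answer {
          if txt, ok := rr.(*dns.TXT); ok {
              fmt.Println("TXT:", txt.Txt)
          }
      }
  }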


Aha! The project that was missing Gandi API support was https://github.com/AnalogJ/lexicon - but it now has support.

I'll look into lego, thanks.


Well, all domains are internal names, just in the root space.

Not supporting internal domains is understandable, but requiring https for internal things is an issue.


I'm not sure what your last sentence is referring to. Who or what is requiring https for internal things?


For example, Chrome disables several JS features and APIs on non-https pages.


Oh, I thought you were referring to something on Let's Encrypt's end. For "real" internal names, internal CAs sound like the best option, and would work just fine with what browsers call "powerful features." For anything else, Let's Encrypt would work.


Except, you can't use internal CAs anymore on Android devices.

Adding CAs to those has been impossible since Nougat.


AFAIK that is only relevant for apps. It is still possible to import CAs for browser traffic, for example, and it's still possible to opt-in to trusting custom CAs as an app. So this is only really a problem for non-browser apps that a) need to communicate with internal domains and b) are not within the control of the organization the internal domain belongs to. I'm sure use-cases like that exist, but they ought to be exceedingly rare.


Well, such use cases are internal email via IMAP with STARTTLS, they are browser traffic with default browsers (those don't allow custom CAs either; you'll likely have to compile a custom build yourself), etc.

Basically, if you want to use any app - be it IRCCloud, Slack, Locally-Hosted Google Apps for big businesses as a box, etc locally, with a custom CA, you have to customly modify every single one of those apps, or you have to buy a CA.

That's a great fucking piece of shit.


> Well, such use cases are internal email via IMAP with STARTTLS, they are browser traffic with default browsers (those don't allow custom CAs either; you'll likely have to compile a custom build yourself), etc.

Do you have a source for this? Whether non-standard CAs are accepted is up to the individual apps. Android N still has the ability to install custom root certificates. I haven't seen an announcement regarding, for example, the standard mail client or Chrome for Android.

> Basically, if you want to use any app - be it IRCCloud, Slack, Locally-Hosted Google Apps for big businesses as a box, etc locally, with a custom CA, you have to customly modify every single one of those apps, or you have to buy a CA.

"Buy a CA"? There's no publicly-trusted CA that will issue certificates for internal domains, period. Just stop using internal names for this purpose and you're fine. You can get domains and DNS hosting for a total of $ 0.00, so that's not a valid argument in my book.

> That's a great fucking piece of shit.

That's a security trade-off that's meant to help protect regular users while inconveniencing a small number of organizations that chose to still use internal names while ignoring many warnings that this is not a best practice, and who are now unable to get publicly-trusted certificates for these domains.


Ah, right, forgot about DNS validation.


Those aren't reasons not to use Let's Encrypt, those are reasons not to use HTTPS. In fact, in the former case the growing ease of using Let's Encrypt is a reason to look at it if you are in such a hurry.

The latter seems like a pretty big reach. Oh, the user might see a cert-related error, so let's just toss out all security.


Their rate limits + short cert lifetimes make it relatively easy to hit the limit if you are using it for a substantial number of certs.

(i.e. if you have a bunch of web services exposed to the internet on separate IP addresses)

> https://letsencrypt.org/docs/rate-limits/

Although they've increased it enough that I'm not sure it's still an issue.


Short certificate lifetimes should have no implications on rate limiting, as renewals do not count against the rate limits, so whether you need to renew your certificate every 90 days or every year won't matter.


The available software to do the automatic 90-day cert renewals doesn't work on anything but the latest operating systems. Go back a handful of years and it just craps out.


Certbot supports Ubuntu 12.04+ (14.04+ for the apache plugin), CentOS/RHEL 6+, and Debian 7+. With the exception of CentOS/RHEL 5 (which was released in 2007!), all releases of those distributions that still receive security updates are supported.

What OS are you referring to?


Short-term certs are an issue, but LE does automatic renewals and sends expiration notices. However, I also use letsmonitor.org for monitoring/expiration alerts.


Short term certs are only an issue if you have inadequate tooling.


Or if you want to use certificate pinning in an app. You'd have to force-upgrade everyone every two to three months. Old versions would just stop working.


Just keep using the same key for renewals, or pin to something other than the end-entity certificate (like Let's Encrypt's intermediate certificate, or IdenTrust's root, plus some backup pins).
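To illustrate pinning above the end-entity certificate, here is a rough Go sketch of SPKI pinning in an HTTP client; the pin values and URL are placeholders, and this is a generic technique rather than anything Let's Encrypt-specific:

  package main

  import (
      "crypto/sha256"
      "crypto/tls"
      "crypto/x509"
      "encoding/base64"
      "errors"
      "log"
      "net/http"
  )

  // Placeholder pins: base64(SHA-256(SubjectPublicKeyInfo)) of the intermediate
  // or root you choose to pin, plus at least one backup pin.
  var pinnedSPKI = map[string]bool{
      "REPLACE_WITH_PRIMARY_PIN=": true,
      "REPLACE_WITH_BACKUP_PIN=":  true,
  }

  // verifyPins accepts the connection if any certificate in a verified chain
  // (leaf, intermediate, or root) matches one of the pinned SPKI hashes.
  func verifyPins(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
      for _, chain := range verifiedChains {
          for _, cert := range chain {
              sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
              if pinnedSPKI[base64.StdEncoding.EncodeToString(sum[:])] {
                  return nil
              }
          }
      }
      return errors.New("no pinned public key found in any verified chain")
  }

  func main() {
      client := &http.Client{
          Transport: &http.Transport{
              TLSClientConfig: &tls.Config{
                  // Runs after normal chain verification, so verifiedChains is populated.
                  VerifyPeerCertificate: verifyPins,
              },
          },
      }
      resp, err := client.Get("https://example.com/")
      if err != nil {
          log.Fatal(err)
      }
      resp.Body.Close()
      log.Println("status:", resp.Status)
  }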


Sure. And the tooling only exists pre-made for modern operating systems. If you run anything even a handful of years old there's no support and you have to do it by hand.


So write your own tool. There are loads you can fork.

Not buying this excuse.


True, but if it fails and you don't know it, that could be a major problem. Monitoring with alert escalation helps me sleep at night.


That's not a problem specific to lets encrypt, or anything else really.

If you don't want to introduce complexity, then your problem is that you don't have tooling in place when you run into a situation where you have to rotate certs.


Famous question - intranet.

We can do dns-01 verification on intranets (with a valid domain). But the downside is that our domain would be logged in the Certificate Transparency log. What is the downside of being in the log?


Whether you like it or not, all certs in the near future from all providers will be logged anyway.

Most sysadmins don't like their intranet addresses being in the log, so as not to provide intel to intruders.


Ah. I didn't realize we would be eventually. So if I get a cert for *.dev.example.com I am exposing just dev.example.com, but not foo.dev.example.com?


Yes, the log is static. It only contains the subject name of the certificate.

But there's little to fear from exposing internal domain names. DNS names are more or less public knowledge - they are transmitted unencrypted, end up in plenty of caches, etc. Attackers can probably brute force them or the PTR records anyway.


If you don't expose your intranet resources to the outside world, you can probably set up your own CA.

With proper measures (non-exportable key on a HSM, stored in a safe, requires a PIN that only senior security staff knows, etc etc), it may even improve security for some attack scenarios - if someone gains control of your DNS servers for a short while, they won't be able to issue anything.


Can you elaborate on where the HSM comes into play with the DNS verification? I am not following. Thanks.


I was comparing two possible scenarios:

a) you have set up your own internal CA, whose key is safely stored on an HSM, with all security measures.

b) you use Let's Encrypt, and they issue you certificates based on DNS validation.

With (b), if a malicious party gains control over your DNS server, they can issue themselves a bunch of valid certificates that you may not even know about (unless you watch CT records). With (a), a remote attacker barely has a chance. Thus, a self-hosted CA may be beneficial in terms of security.


Awesome! This was the second of only two steps to remain until I can fully turn on HTTPS. Now they just need support for IDNs (which they've also announced) and LetsEncrypt will be functionally complete from my point of view.


What's this mean? If a site only has an AAAA record it can now get a cert?


I think you could always get a cert via the ACME DNS challenge for an IPv6-only/AAAA-only domain. But you could not talk to the ACME/API endpoint from an IPv6-only system, so actually requesting and retrieving the cert would have to happen on another, IPv4, system.

(Just as a sidenote: you _never_ need to request and retrieve the cert on the system that the domain name points to. That is just the easiest way and the workflow most clients suggest, since it also makes a lot of sense.)


Would it be possible to set up a somewhat isolated VM that has the sole purpose of requesting certs? I get stuck on needing either the webroot or standalone method of certbot-auto.


What's your specific problem with the webroot method? I'm using it on my systems, and contrary to popular belief, the certbot can easily run as a non-root user when using the webroot method. (My configuration is at https://github.com/majewsky/system-configuration/blob/master... .)


Yup.


Does anyone know if Let's Encrypt supports DNSSEC validation? I mean, do their data center recursive DNS servers do DNSSEC validation?

I'm wondering how easy it would be to forge DNS responses to their servers checking that I control a domain name.


DNSSEC is enforced at the resolvers.


Thanks. Since my zones are secured with DNSSEC that makes me feel a bit safer.


How would you MITM their connection to the DNS servers, though?


You don't need to MITM, if you can predict the request and get your spoofed response in faster than the real server.


If the server randomizes both the Query-ID and source port, the attacker has less than 1/100000000 chance of sending a valid reply: http://unixwiz.net/techtips/iguide-kaminsky-dns-vuln.html#fi...


An attacker can send more than one packet, and likely make multiple attempts to 'verify' the domain under attack. If the attack window is 1 second, and an attacker can source 1 million (spoofed) packets per second, and the server is using the full space for source port and query ids, and there's a single resolver, the attacker can get a cert fraudulently issued about 1 out of 4000 times.
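A quick back-of-the-envelope check of those numbers (window, packet rate and search space all taken as assumptions from the scenario above):

  package main

  import "fmt"

  func main() {
      // ~2^16 query IDs x ~2^16 source ports to guess, a 1-second window before
      // the real answer arrives, and 1 million spoofed packets per second.
      const space = 1 << 32 // possible (query ID, source port) pairs
      const window = 1.0    // seconds
      const rate = 1e6      // spoofed packets per second

      p := rate * window / float64(space)
      fmt.Printf("per-attempt success: %.2e (about 1 in %.0f)\n", p, 1/p)
      // Prints ~2.33e-04, i.e. roughly 1 in 4300 -- in line with "1 out of 4000".
  }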

DNSSEC makes that much harder, so it's nice that their resolvers are using it.


> We’re looking forward to the day when both TLS and IPv6 are ubiquitous.

Kudos to Lets Encrypt for their great work on the former.

A single sad tear for the state of the latter.


Expected, but when is Tor support coming? I read a forum thread indicating it would be nigh on impossible due to the .onion TLD's status.


I'm asking out of ignorance because it'd never occurred to me before just now: would you need HTTPS on a Tor site? I thought Tor itself would handle trusted encryption for you. Or is that as a layer of defense against malicious nodes?


It does for hidden services, but HTTPS allows the browser to know the connection is secure, which lets it apply different rules, like mixed content blocking (if, say, you're browsing a .onion forum and someone links to an image hosted on a non-onion address).


Then perhaps browsers should whitelist .onion as "secure" regardless of protocol?

I'd also like to see whitelists for the reserved-for-private-use IPv4 ranges and a .local or .home TLD, since those are circumstances where HTTPS doesn't give you much either, and where getting a certificate is unreasonably difficult.


This is a terrible idea, since nothing stops the onion TLD from being spoofed. The way the browser requests resolution of .onion names is no different than any other; to visit them you need to be communicating through some sort of proxy (possibly on your own computer) that intercepts both your DNS requests and the HTTP requests to the returned address. HTTP does not provide any way to validate that you are connecting to the intended proxy instead of a malicious one.

The same thing applies to .local, .home, and private IPv4 ranges, which can all be spoofed depending on where an attacker is in your network.


> This is a terrible idea, since nothing stops the onion TLD from being spoofed. The way the browser requests resolution of .onion names is no different than any other; to visit them you need to be communicating through some sort of proxy (possibly on your own computer) that intercepts both your DNS requests and the HTTP requests to the returned address. HTTP does not provide any way to validate that you are connecting to the intended proxy instead of a malicious one.

Presumably you wouldn't be visiting a .onion address if you're not already connected through a Tor instance you know about.

> The same thing applies to .local, .home, and private IPv4 ranges, which can all be spoofed depending on where an attacker is in your network.

Which would be exactly the point, those are OK to spoof, since you'd only be visiting them through a trusted network, where nothing can be externally verified anyway.


> Presumably you wouldn't be visiting a .onion address if you're not already connected through a Tor instance you know about.

How about a link? Or even more problematically, using its now trusted status to load inside an HTTPS page!

> Which would be exactly the point, those are OK to spoof, since you'd only be visiting them through a trusted network, where nothing can be externally verified anyway.

What? How is it okay for safe-place.home to be trusted when an attacker can spoof the DNS resolution upstream (like ISPs already routinely do to point you to ads)?

The whole point of distinguishing HTTPS connections is that they provide some way of guarding against spoofing of name resolution/packets and snooping. Nothing about how .local, .home, .onion, or local-reserved IP ranges are handled by browsers prevents these from being attacked, in many cases even from outside your network. If you curl 192.168.80.1 (assuming that's not within your subnet), your router will happily shoot some packets at your ISP. The situation for the others is even worse.


> What? How is it okay for safe-place.home to be trusted when an attacker can spoof the DNS resolution upstream (like ISPs already routinely do to point you to ads)?

I guess I was unclear, my point was that I think some TLD should be dedicated for home networks, with ICANN and especially browsers recognizing that.

ISP spoofing wouldn't be an issue because if you used these TLDs then legitimate requests would never reach that far anyway. If not, well, you wouldn't be visiting such domains anyway and there would be nothing to spoof.

> If you curl 192.168.80.1 (assuming that's not within your subnet), your router will happily shoot some packets at your ISP.

But that's not an issue, because if it's not on your subnet then you wouldn't be visiting it in the first place. Any snooping ISP could just as easily make you visit some other address instead that actually did have a TLS certificate, as they could make you visit that. In the worst case, you could make the browser check your subnet mask. But since the contents on those IPs will be unique from local network to local network anyway, I really don't see the point in bothering.


> I guess I was unclear, my point was that I think some TLD should be dedicated for home networks, with ICANN and especially browsers recognizing that.

If it's for home networks, .home resolution would usually occur at the DNS server on your router. How does the browser know that your router follows the new rules and won't route that DNS request up to your ISP, and therefore should trust the request?

> But that's not an issue, because if it's not on your subnet then you wouldn't be visiting it in the first place.

Unless your attacker can get you to click a link? That's a pretty easy thing to get users (especially the inexperienced) to do. Or they can sneak it into a secure page and monitor requests/serve malicious assets.

> In the worst case, you could make the browser check your subnet mask. But since the contents on those IPs will be unique from local network to local network anyway, I really don't see the point in bothering.

This ignores the case where your local network is either (a) infiltrated (b) a coffeeshop. The second being super common, and would need to be guarded against by the browser having some sort of Windows-style public/private network distinction, which users would remember to configure correctly.

> But since the contents on those IPs will be unique from local network to local network anyway, I really don't see the point in bothering.

I'm not seeing the connection. If someone with control of your public internet connection (i.e. what HTTPS is designed to guard against) sends a response when your browser requests something from that address, what does it matter what that address does in another local network?

Everything I've described here has been an element of a real attack where something somewhere was more trusted than it was supposed to be. This would add a massive array of attack vectors, and at best would indicate to the user trust in something that has no reason to be trusted.

If you're doing something on your local network, it makes a lot more sense to just create a self-signed CA and put the root on your devices. In the onion case, you should use HTTPS between you and your proxy (e.g. with a *.onion wildcard cert) to make sure you actually connect to your proxy.


EDIT: Sorry, you're right, the browser would need a secure way to check this, which it currently doesn't have.

This is not true; a .onion address is also a fingerprint of the public key of the node, so even if the connection is hijacked, the other node won't be able to authenticate itself.
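For anyone wondering how the address doubles as a key fingerprint: a (v2) .onion name is the first 80 bits of the SHA-1 hash of the service's DER-encoded public key, base32-encoded. A small sketch (the key bytes here are a placeholder, not a real key):

  package main

  import (
      "crypto/sha1"
      "encoding/base32"
      "fmt"
      "strings"
  )

  // onionAddress derives a v2 .onion name from a hidden service's public key:
  // base32 of the first 10 bytes (80 bits) of SHA-1 over the DER-encoded key.
  func onionAddress(pubKeyDER []byte) string {
      sum := sha1.Sum(pubKeyDER)
      return strings.ToLower(base32.StdEncoding.EncodeToString(sum[:10])) + ".onion"
  }

  func main() {
      fmt.Println(onionAddress([]byte("placeholder DER-encoded public key")))
  }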


Which the browser can't validate if it just talks HTTP to a proxy that connects to Tor (or a malicious proxy instead), as the parent post described. Doesn't apply to all Tor setups, but it's a risk.


Tor has end-to-end encryption for onion addresses, but TLS provides identity validation as well as encryption. So you can be sure that the .onion address hasn't been spoofed (or that you typo'd it). In addition, if you have both the .onion and .com on the same certificate, you get the additional benefit of binding both addresses together as being part of the same logical website.


Unless it's an EV certificate or the same cert as .com, I don't see how https helps against spoofing or typoing.


The same cert is how this should be handled. EV is /fine/ but is not practical for people that don't want to verify their legal identity (they just want to verify that they are the authorised source for x.com and y.onion).


That's not up to Let's Encrypt, Baseline Requirements created by the CA/B Forum only permit EV for *.onion.


Since ISRG is a CA/B Forum member, we can also propose to change this in the future.


Why do you need HTTPS for Tor? It's already encrypted. EV makes sense to prevent spoofing.


Is it authenticated, though?


Yes. A Tor address is (effectively a fingerprint of) the public key, so nobody but its private key owner can decrypt traffic (or impersonate messages from it). It's actually superior to TLS, because you don't have to trust anyone (but you have to check that the address in the URL is correct).


It is, however, only 1024 bit RSA for now.


Can we safely assume that anything 1024 bit RSA is now compromisable given the resources of a federal government?


Anything less than 2048 bit is probably a poor choice these days. NIST recommended that RSA-1024 be considered deprecated for use after 2010.

The trouble with RSA is that you end up needing to increase to pretty large key lengths to have significant increases in security after somewhere around 2048 bits. For example, a 4096 bit key is not really as great as it might first appear.

I might be wrong, but I have a vague recollection that Google went with a 16384 bit RSA key for their root key on Chrome devices. It's not a frequently used key (it's used to sign the signing key that can be updated - the signing key uses weaker and faster algorithms which can be changed in new firmware releases), but it's stored in read-only firmware that can only be updated by physically opening a machine. Given that you probably want the key to be good for somewhere between 5 and 30 years, the uncertainty which exists around quantum computers right now, and the tremendous problems which could occur if this key were to be factored, I can understand why they would choose such an obnoxiously large key length.


Because TLS provides identity validation and can be used to logically bind the .onion and .com methods of accessing the service.


This is in no way criticism of LE; where I work, _nothing_ is IPv6 and we do not even have it on any agenda.

But when a "we are going to change the future of the internet" project makes IPv6 a Prio-2 feature (to be added later, not native from the start), it just shows that we are really not there yet.


AWS still has no native IPv6 support. It's sad really. It's pointless for a project like LE to make IPv6 a first-priority feature when massive providers running the backbone of the internet don't support IPv6 without a ton of fiddling-around.


It's not in AWS business interest to support IPv6. As long as they only support IPv4, they need to own an address for every VM in their cloud. As they grow, they purchase more addresses, shrinking the pool of available IPv4 on the secondary market, increasing the price for their competitors. This makes it more expensive to compete with AWS on a large scale if you want to support IPv4.


Sorry, but I don't see it. While they are busy buying up IPv4 addresses, I'll just be over here moving everything to IPv6 only and not have to care about any of that.


Until almost the entire Internet has IPv6 at the endpoint, you're making your stuff unreachable to most people. You will still need an IPv4 gateway.

I have IPv6 at home now but still don't have it at our tech-oriented co-working space, and I've never seen an IPv6 address get handed out at a coffee shop, airport, restaurant, etc.


An IPv4 endpoint is fine, and for some situations way cheaper than having every single host be assigned a public IPv4 address. I am not saying that we are there yet, but I can't see a situation where Amazon putting pressure on the market by buying every available IPv4 address results in a response of "Let's give all our money to AWS" rather than "let's finally make IPv6 happen."


Good point, but the problem is that the companies responsible for "making IPv6 happen" are not necessarily the same companies who want to compete with Amazon. There's very little incentive for ISPs to roll out IPv6, especially when the biggest cloud providers don't support it. It's a bit of a catch 22.


Yeah, what about the hundreds of carriers that don't yet support IPv6?


Don't we have, for example, a Teredo client baked into every relatively recent Microsoft OS?

My ISP doesn't support IPv6, but I've just checked and a Windows 10 machine was able to do `curl -6 https://ipv6.google.com/` just fine.

And I think I've heard OS X has something as well... not sure, don't have OS X or iOS devices at hand to try it out.

(On GNU/Linux and BSD machines there's a lot of choices, Miredo, 6to4, AICCU from SixXS - whatever one fancies. As usual on *nix systems, setup is manual, though.)


And it works out of the box? Doesn't Teredo necessitate using a relay (someone to translate to native IPv6) – does MS just have one set up somewhere, and preconfigured?


Yes, out of the box. I haven't ever done anything IPv6-related myself on that machine.


That curl command doesn't connect on my OSX El Capitan.


> Yeah, what about the hundreds of carriers that don't yet support IPv6?

That's like focusing on supporting Internet Explorer 8. At some point IPv4 will become the legacy protocol and supporting it will be annoying and expensive.

That day may not be today, but switching protocols is not always trivial, and you should start yesterday.


Nearly all the "second tier" clouds support IPv6: Digital Ocean, Vultr, Linode, SoftLayer, etc.

AWS's services are great but many people just need compute. I don't understand why people who just need compute use AWS, since for compute only Digital Ocean and Vultr destroy it in virtually every way: ease of use, performance, cost, IPv6, etc.

Digital Ocean now has block storage too, which plugs one hole. Of course they don't have things like RedShift or S3, but like I said not everyone needs that.


DO's v6 needs work. They block port 25, and they don't even assign an entire /64 to a machine, which breaks SLAAC and whole bunch of useful ways of using v6.


My ISP doesn't even seem to give me one, which is kind of annoying since I would love to be able to have direct access (through something secure) to all my computers, especially my NAS, no matter where I am. Without a lot of customers on ipv6, why would they support ipv6 on the servers?


My ISP offers it, but you have to buy a special $100 IPv6 PPPoE device and stick it between your IPv4 NAT router and your WiFi router...


This is 100% why my site is not ipv6


We slowly edged IPv6 into ${COMPANY} by a concerted effort to refer to IPv4 as a 'deprecated protocol', unilaterally adding IPv6 support questions to our RFP questionnaires, and boldly printing DOES NOT SUPPORT CURRENT IP PROTOCOL across design documents. All basically long-term psychological conditioning to raise awareness at the Director+ level.

Still took nearly a decade before the first internal servers began supporting IPv6 but what a cheer went up when that first IPv6-only e-mail arrived.

The amazing thing was that most network kit supported IPv6 out of the box, thanks mainly to Government purchasing requirements that had hammered the big vendors into submission.


I'm glad their Prio-1 feature was free, automated TLS certificates, not IPv6.


Of course, so am I. But pretty much the same reasons we can bring up why IPv6 was not super important for LE are the reasons everybody else has to procrastinate on that. It's just a statement of overall sadness that accompanies my personal ~15 year wait for IPv6 adoption.


We would have had it a long time ago if it had been backwards compatible, so that a server with an IPv6 address could talk with a server with an IPv4 address and vice versa. Then it would only have required software upgrades, and since IPv6 is more than twenty years old, most software that used it would have supported it by now, if only because it would have been written a long time after IPv6.


I've heard this argument before … but how could that have possibly worked? An IPv4 client is only going to be able to address 2 ^ 32 addresses, and it seems like the pigeonhole principle implies that the client can't possibly address the entire IPv6 space with that address format.

How would an IPv4 client, with this hypothetical backwards-compatible IPv6, connect to an IPv6 server?


A special flag in the IP packet that indicates which type of address it should be interpreted as - really, that should be all; the rest of the work is drivers for software and routers.


If you need to change software on all machines and routers, what have you gained vs IPv6 as it is? Being backwards-compatible would mean that it would work with unchanged old machines, which doesn't work since that wasn't planned for in IPv4. IPv6 could be simpler, yes, and has accumulated quite a bit of well-meant baggage, but you don't get out of updating everything if you want to completely switch.


It is fairly easy to introduce a proxy for IPv4 only networks. Like a NAT gateway, but even easier since you can do this at the application level (and get real firewalling).


I have several back end servers that are IPv6 only. Since they are just used by our front-end servers it doesn't matter that much.

It is nice to see 0 random port scans and SSH brute-force attacks. IPv6 space is so big it helps give some security through obscurity.


Eventually people might be able to infer things that help them reduce the space to scan a lot: https://arxiv.org/abs/1606.04327

(may not apply to every allocation strategy)


It was more of an ops issue (lack of IPv6 connectivity at the DC) rather than a software issue:

https://github.com/letsencrypt/boulder/issues/593


Why should they have waited if they got it to work over IPv4, which is where the majority of their requests were going to come from right now? Release early, release often and all that.

They probably have other features in mind that haven't been released yet; does that mean that whatever those features are, they are obviously not important on the internet?


Just a note on IPv6 adoption: a couple of weeks ago people made fun of the axle counter limitations of the Swiss railway SBB; however, they have already been running IPv6 for some months…

  ;; ANSWER SECTION: www.sbb.ch.	14400	IN	AAAA	2a00:4bc0:ffff:ffff::c296:f58e


And yet LE added IPv6 support a mere 2.5 months after it left Beta. It's not that bad!


As soon as an IPv4 address becomes expensive and IPv6 is not, all of that will change.


It's getting to be time for us to admit what the holdup is—IPv6 hasn't been deployed because IPv6 NAT ("NAT66") isn't a thing.

There are a million reasons NATs are terrible for the internet. But they're used on IPv4, and IPv6's technical goal of increasing the address space is tied up into the technical goal of killing NAT, immediately, and changing the way a lot of people think about networking. For instance, end-user ISPs are expected to give you a /64 or more instead of a single IPv6 address so that you don't need to NAT, but many of them don't, because that's not how people think about addressing. If you have a NAT-using site and you want to switch to IPv6, you have to pursue the political goal of convincing your ISP to think differently about addressing.

Meanwhile, IPv4 and IPv4 NAT works. I'm typing this from behind a NAT, you're probably reading it from behind a NAT. It's not ideal, but, rough consensus and running code.

As soon as we all put our collective feet down and insist on IPv6 NAT implementations, such that IPv4 sites can move without rearchitecting their environment (whether or not that rearchitecting would be a good thing), IPv6 will get deployed quickly.


> For instance, end-user ISPs are expected to give you a /64 or more instead of a single IPv6 address so that you don't need to NAT, but many of them don't

Name one? I've been on Comcast and Sonic, and both natively provide /64 networks. I've never heard of an ISP providing a /128.

> Meanwhile, IPv4 and IPv4 NAT works.

No, it doesn't. It breaks a million things more than it solves and it makes the Internet worse (and vastly more asymmetric, but that's repeating myself). NAT needs to die in a fire, and there is zero political or technical motivation to inflict its brokenness on a new protocol that absolutely does not need it. Evidence: that many ISPs are providing native, un-NATed IPv6 to their customers. Perhaps some don't, but someone will manage to screw up any given feature. They need to fix their shit, not coerce the rest of the Internet to break itself for their convenience.


OVH's cheap dedicated servers, Kimsufi, only provides a single /126 IPv6 address, instead of the recommended /64 block :(.


I have heard rumours that, although this is what they state, they actually in practice do assign the entire /64 to the machine. Not sure if this is true and I have not tested it myself.

I see it largely as an attempt to do market segmentation and limit the usefulness of Kimsufi to push people towards their other brands. Unfortunate, but...


Well, it does work (just have to allocate the IPs statically) but since you're kinda not supposed to do that, I assume it will either stop working one day, or get my machine voided as a TOS violation or whatnot.

It really is unfortunate. Not having to use a proxy for the sole purpose of sharing port 80 would be nice...


> But they're used on IPv4

and they're not needed on IPv6, just the same as we don't require cars to have horseshoes, even though horses needed them before.

Anyway, I'm on HN right now over NAT because HN doesn't have an IPv6 endpoint; otherwise I would be here over IPv6. THAT is what is holding IPv6 back: there aren't the services on IPv6, so there is no user demand for it.


HN uses Cloudflare (a YC company) and could turn IPv6 on for free with a single click.

But that may not represent the full story for them. Internal moderation and anti-spam may need updating to be compatible with running on two different networks that each have different approaches to numbering.

Even companies that 'get' v6 sometimes have rough edges. E.g. Cloudflare - which has been a great supporter of v6 for a long time now - has been known to send 'log in from a new IP address!' notifications because my machine automatically rotated its IPv6 privacy suffix.


> Cloudflare - which has been a great supporter of v6 for a long time now - has been known to send 'log in from a new IP address!' notifications because my machine automatically rotated its IPv6 privacy suffix.

Either you get that message or the IPv6 privacy mechanism is not working well enough.


Well, all the IPv6 privacy mechanism does is rotate the /64 suffix. My /64 prefix hadn't changed. From a reputation/risk modelling standpoint, it's usually correct to view a /64 of v6 as a /32 of v4 (i.e. a single address).


I talk to a lot of people who think lack of NAT "exposes everything" and is a big security problem. I try to explain that firewall is not NAT and NAT is not firewall but people do not seem to be willing to hear or understand this. "NAT equals firewall equals security" is tattooed on the inner eyelids of an entire generation of developers and IT people. It borders on religion.

On a deeper level this is because almost nobody actually understands how networks work. In my experience even really top developers often have absolutely no idea what happens on the wire. As a result they basically cargo cult netsec. Since they don't understand it, anything that deviates from "standard practice" gives them security FUD willies because they don't see the implications.


Yeah, I'm not disagreeing with any of that. But the people who don't actually understand how networks work are the people who are reluctant to deploy IPv6. Do you want to teach them all how networks work before IPv6 gets deployed?


What I've seen is that the vast majority of even top developers don't understand much about how networks work. It's a big barrier.


Why? They're top developers, they've written networked apps that work.

It's a barrier to deploying IPv6, yes, but I'm arguing that that's an entirely artificial barrier. You shouldn't need to understand why IPv6 folks dislike NAT in order to write software that uses the network.


Even Apple got bit by this issue. They sold an Airport home router which by default provided full v6 connectivity. They were promptly lambasted in the press for "putting their users at risk by not having a proper firewall!". :(

There was a time when perimeter security was seen as an adequate and acceptable technique.


Yeah, that's where the discussion gets confusing. I will totally agree that a world without NAT would be better, but I will not agree that a world without home routers that drop all inbound connections would be better. If the ISP gives out /64s, the right behavior for the home router is to assign addresses in that /64, but keep doing stateful tracking of outbound connections just like a NAT would, and drop everything else on the floor.

Of course, at this point you've broken end-to-end connectivity (P2P apps don't work, active-mode FTP doesn't work, etc.) so this may not actually resolve the reason people wanted to get rid of NAT. Maybe a good portion of the no-NAT-in-IPv6 crowd wants inbound routes to people's homes so apps work, and the "but you don't need NAT for security" crowd is misunderstanding them.


"but keep doing stateful tracking of outbound connections just like a NAT would, and drop everything else on the floor"

Here are two important behaviours which come with "just like a NAT would".

1) UPNP - a protocol which allows an application to request it be exposed to the internet

2) Hole punching. E.g. two hosts send packets with matching ip/port values to cause a direct connection to be established between them

Applications don't really do those things on v6...

And, FWIW, ISPs should be giving residential users at least a /56 - if not a /48. Sites should be able to have enough address space to route within the site and still use the features of IPv6 which require a /64. Route aggregation and routing table size is the constraint of the v6 world, not address space.


Hole punching still exists in the V6 world if you want an app to do P2P from behind stateful V6 firewalls. It requires a three party handshake just like V4 NAT traversal. But unlike NAT traversal it nearly always works since there are no symmetric NAT nightmares.

V6 should make all this cruft go away but it doesn't.


Did it have a proper firewall with sensible defaults? (I wouldn't be surprised if some vendors shipped IPv6-enabled routers without a firewall for IPv6)


There's a reasonable argument to be made that router-based firewalls shouldn't be necessary for a home user to have a secure configuration.

If it's not safe to expose a service to the internet, it's also not safe to have it exposed within your LAN without access controls.

NAT-based 'firewalls' can have holes punched in them in a variety of ways - they are specifically designed to allow holes to be punched, because it's necessary for many applications to work.

It's also possible to take advantage of a user's browser as a relay. Combine that with the ability to use ad networks to target HTML to be served to the IP of an attacker's choosing, and the illusion that a perimeter firewall will prevent an attacker from initiating connections to your network starts to shatter.

I agree that in practice, in many cases today the stateful firewall-like functionality provided by a NAT device will provide a net security improvement. But that's not a situation we should continue to allow.

It's unrealistic to expect users to manually create firewall holes. That's why default configurations tend to include UPnP (which, naturally, introduces a new set of security and DoS concerns) - which will automatically open holes in the 'firewall'.

Google has published some of their thinking on this topic under the 'beyondcorp' moniker. The summary is that there is no "safe" and "unsafe" - you need to do a risk evaluation of each attempt to access a service, and "this is coming from inside our network" is inadequate.


De-facto they are necessary, because people expect their networks to be safe, run all kinds of not-great things in there and sort-of got used to configuring port-forwarding. Attacks that can go from inside the network are in practice quite rare (AFAIK), and are a reason to add more security between network devices, not to expose them to the public net.

I'd love if we could trust most devices to be publicly exposed, but IMHO we can not. If router manufacturers could be trusted one could add all kinds of clever things there, but ...


> Attacks that can go from inside the network are in practice quite rare (AFAIK)

This is not true. Most real-world attacks I've seen begin by infiltrating malware via the web, e-mail, social media, or phishing. Once inside, existing connections between internal systems are exploited to crawl around the network.

Remote attacks against non-DMZ things are fairly rare in practice.

The only way to stop this is to implement even more firewalling inside the network, which basically breaks LAN.

I very much agree with the parent and have been talking about Google's beyondcorp and deperimeterization for years. A device that can't be safely connected to a network is broken, and we should stop degrading our networks to support broken junk. If broken junk gets hacked, it is the fault of the makers of that broken junk.

It is not hopeless. I've been into this stuff since the mid-1990s and things have improved a lot since then. I would not be too afraid to hook up a Mac or a fully patched Windows 10 machine to the public Internet. In the 90s or early 2000s I would not even consider this. You'd get owned by a bot within an hour. I remember in 2000 hooking a virgin Windows machine up to a campus network and being able to watch it get infected within 5 minutes.

The trends for the future are positive. Safer languages like Go, Rust, Swift, etc. are getting more popular everywhere. Advances in OS security like W^X, ASLR, etc. are getting ubiquitous. Local app sandboxing and containerization is a thing almost everywhere. Device security postures are improving.


I really passionately hate clueless security FUD.


Nearly 30% of the US traffic is now IPv6: https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...


IPv6 NAT exists on Linux, OpenBSD, Cisco, perhaps others. However you would better not use it.


NAT66 implementations exist, and you don't need anyone's permission to use them. But I think it would be more productive to pursue smarter ways to share a /64, than to modify packet headers in flight.

For most purposes, ND proxy is the new NAT.


And there are actually some use cases where NAT66 may make sense (various multi-homing and re-numbering avoidance scenarios). I think the jury is still out. I think we'll see lots of NAT66, but for completely different reasons than we've seen v4 NAT.

I do think those uses will tend to involve 1-to-1 prefix translation rather than many-to-1 mapping.


Great news.


You mean it didn't?


Does that mean that you are not using IPv6? I know I am not...



