But seriously: letsencrypt is doing excellent work. It's a great case study in how inefficient a mostly-free market can be: SSL adoption doubled within a year. All that was previously deadweight loss.
A free market is one where you can compete independently.
If anything, it is a demonstration of why faux-markets with monopoly control (or in this case, the collusion between established and trusted authorities to prevent competition) are dangerous.
The only program that charges money to be included is/was Oracle, as far as I know. Everyone else is free.
In case anyone doesn't get it:
And also in how a market fixes itself :)
The result is that the only way for someone who isn't wealthy to do charitable work is to get either Government or companies to pay you to do it. Companies will generally only pay you to do work if they're getting something out of it, and Government has its own interests at heart.
Getting people to pay you to do something is very, very difficult - it's hard enough if you're selling them something directly, right now; significantly harder if you're still working on a project; and harder still if there might not be anything concretely benefiting the person donating at the end of it. Charities largely manage by not paying many of the people who perform work for them, along with begging money from Government and companies, and their volunteers primarily depend on Government welfare.
It's not a perfect system in the slightest - it's incredibly hard work to be able to perform any sort of charitable work, at least as hard as running a business, and is heavily subsidized. A system which ensured that everyone's needs were met by default, rather than requiring you to prove that you deserve to have your needs met, would allow significantly more charitable work - specifically, it would allow for work that rewards the worker in ways other than monetary pay.
There were some other factors on our side that contributed to the delay as well. It comes down to the fact that our priority has to be reliable and secure operation of the services we're already offering.
I'm glad it's finally here though, and now Let's Encrypt won't be a blocker for someone else's IPv6 deployment!
What's up with that anyway?
System admins routinely make mistakes with complicated host names, and trying to acquire an accurate inventory is an absolute nightmare. This ties into IPv6 because why would anyone take that dysfunctional system, which barely works with 'easy' IPv4 addresses, and make it even more complex? We would have to support both IPv4 and IPv6 simultaneously, and firewall rules would initially get much more complex - and it's already a huge struggle for me to get changes made to them.
At my old job this was similar. Even though it was in the financial industry and that particular company was rolling in profits, it couldn't keep enough network engineers around to save its life. The turnover was high, documentation was horrible, and projects to make things better languished in the ether.
No no no, forget the whole IPv6 thing, just run IPv4 for all things internal and gratuitously support IPv6 outside if you really have to. I jest of course, but that is the reality of my corporate life in the last 6 years in two fortune 500 companies.
That said, I think there's often a strong argument for only using IPv6 for the internal parts of a network. IPv6 actually simplifies things, and where IPv4 remains needed, it can be encapsulated and routed over a v6 network.
But it takes a team which understands the v6 world and is able to take advantage of the benefits for this to become a reality.
- Wildcard certificates
- OV/EV certificates
- S/MIME or code signing certificates
- Certificates with non-DNS SANs (e.g. id-on-xmppAddr or id-on-dnsSRV for XMPP servers)
Issues with this approach: complexity of implementation, and throttling (only so many certs are allowed per IP address per week, I think).
> The current rate limits are 20 certificate issuances per domain per 7 days, 5 certificates per unique set of FQDNs per 7 days, 500 registrations per IP per 3 hours, 300 pending authorizations per account per week and 100 subject alternative names per certificate. See https://community.letsencrypt.org/t/rate-limits-for-lets-enc... for more.
Which means that you can add at most 20 subdomains a week. Any more than that, and you are SOL.
Though the use cases for needing 21+ new subdomains in a week are few and far between I expect, and probably all cases where a wildcard cert would be a better choice (which LE doesn't yet support).
Note that it is 20 certificates though, not 20 sub-domains, and LE lets you include more than one sub-domain per certificate. So if you can group the sub-domains together you can get many more than 20 in the period.
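As a rough illustration of that math (just a sketch, not anyone's actual tooling - the host names are made up), batching names under the quoted limits of 100 SANs per certificate and 20 certificates per domain per week:

    # Sketch: packing subdomains into SAN batches under the quoted
    # limits (100 names per cert, 20 certs per domain per week).
    subdomains = ["host%d.example.com" % i for i in range(250)]

    SANS_PER_CERT = 100
    batches = [subdomains[i:i + SANS_PER_CERT]
               for i in range(0, len(subdomains), SANS_PER_CERT)]

    print(len(batches))  # 3 certs cover 250 names; the weekly ceiling
                         # works out to 20 * 100 = 2000 names per domain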
I can see why limits are in place though - they protect them from abuse by badly written integration code and from actions that are less accidental. Perhaps they'll lift the limits a bit as the service grows and stabilises. Or introduce a cheap-but-not-free option for people requiring something beyond the standard submission rate limits.
It's being worked on and is probably coming someday, but not here yet. Which makes LE infeasible for some use cases. For now.
Not exactly, you can pin your CA's EV root cert in your mobile app (or website using HPKP). This allows you to roll your cert at will while presenting a very high bar to an attacker to get a cert that will verify.
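For anyone wondering what generating such a pin involves, here's a hedged sketch using Python's 'cryptography' package - the file name is hypothetical, and a real HPKP deployment also needs a backup pin:

    # Compute an HPKP pin-sha256: base64 of the SHA-256 over the
    # certificate's DER-encoded SubjectPublicKeyInfo.
    import base64, hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    pem = open("ev_root_ca.pem", "rb").read()   # hypothetical CA cert
    cert = x509.load_pem_x509_certificate(pem)  # older versions need a backend arg
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
    print('Public-Key-Pins: pin-sha256="%s"; max-age=5184000' % pin)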
Of course there is the "increased conversion" argument, on the basis of which some users find it worthwhile. But certain groups have long pushed a myth that certain types of websites "need" EV, which only serves to help their profits.
More than one consultant has argued you won't pass PCI compliance without an EV cert (false).
For a random online store or something I don't care and I think there is almost no value...
BTW I'm trying to get in touch about ct_advisor but don't see an email listed...
This has been fixed, and I've placed my email address on the site.
For me StartSSL was the best offering, but now I need a few third-level domains, so Let's Encrypt seems to be the only free choice.
It made things a breeze to configure. I'm now just hoping that ACMESharp will incorporate ACME DNS challenge support soon so that I can automate getting certs for individual machines right on the same box. Imagine: no more certificate warnings when RDPing to a machine.
I want something completely custom, and that's hard; therefore Let's Encrypt is too hard.
I wrote my own simple script in Ruby using the letsencrypt gem and the AWS SDK. It might be a good starting place, feel free to fork and modify it for your own DNS provider.
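For anyone writing their own, the core of the dns-01 challenge is tiny. Per the ACME spec, the TXT value is the base64url-encoded SHA-256 of token "." account-key thumbprint - a sketch with placeholder inputs:

    import base64, hashlib

    def b64url(data):
        # base64url without padding, as ACME requires
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    token = "<token-from-the-acme-server>"          # placeholder
    thumbprint = "<rfc7638-account-key-thumbprint>"  # placeholder
    key_auth = token + "." + thumbprint
    txt_value = b64url(hashlib.sha256(key_auth.encode()).digest())
    # Publish txt_value as a TXT record at _acme-challenge.<your domain>,
    # then ask the ACME server to validate.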
For example, if the boss gives you an hour to stand up an internal company-facing blog on a controlled network environment where you don't necessarily have hostile actors in your threat model, configuring HTTPS might not be your first priority.
Another counterexample you hopefully won't have to experience: if you have to support end users whose device's date and time may be incorrect - e.g. their laptop battery died and NTP hasn't kicked in yet - users would see a scary 'certificate not trusted' warning on page loads. Fixing the laptop's date is a non-obvious solution for end users.
On the other hand, the device date is less problematic nowadays, since Chrome recognizes this case and shows a specific message: http://4.bp.blogspot.com/-xOOCv0xLMxo/Vdu_Y8XlHeI/AAAAAAAADq...
Now I face an uphill battle to get things corrected. ARGH.
How does the DNS challenge work with the delay in propagation of new records?
Let's Encrypt always sends DNS queries to the domain's authoritative DNS server and doesn't cache any results, so as soon as your authoritative DNS server has the record, you're good.
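If you want to double-check before triggering validation, you can query your authoritative server directly. A sketch with dnspython (nameserver IP and domain are hypothetical):

    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["198.51.100.53"]  # your authoritative NS, not a cache

    # dnspython >= 2.0; older versions use resolver.query() instead
    answer = resolver.resolve("_acme-challenge.example.com", "TXT")
    for record in answer:
        print(record)  # once the TXT value appears here, you're good to go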
I'll look into lego, thanks.
Not supporting internal domains is understandable, but requiring https for internal things is an issue.
Adding CAs to those has been impossible since Nougat.
Basically, if you want to use any app locally with a custom CA - be it IRCCloud, Slack, locally-hosted Google Apps for big businesses as a box, etc. - you have to individually modify every single one of those apps, or you have to buy a CA.
That's a great fucking piece of shit.
Do you have a source for this? Whether non-standard CAs are accepted is up to the individual apps. Android N still has the ability to install custom root certificates. I haven't seen an announcement regarding, for example, the standard mail client or Chrome for Android.
> Basically, if you want to use any app locally with a custom CA - be it IRCCloud, Slack, locally-hosted Google Apps for big businesses as a box, etc. - you have to individually modify every single one of those apps, or you have to buy a CA.
"Buy a CA"? There's no publicly-trusted CA that will issue certificates for internal domains, period. Just stop using internal names for this purpose and you're fine. You can get domains and DNS hosting for a total of $ 0.00, so that's not a valid argument in my book.
> That's a great fucking piece of shit.
That's a security trade-off that's meant to help protect regular users while inconveniencing a small number of organizations that chose to still use internal names while ignoring many warnings that this is not a best practice, and who are now unable to get publicly-trusted certificates for these domains.
The latter seems like a pretty big reach. Oh, the user might see a cert-related error, so let's just toss out all security.
(i.e. if you have a bunch of web services exposed to the internet on separate IP addresses)
They've increased it enough that I'm not sure it's still an issue, though.
What OS are you referring to?
Not buying this excuse.
If you don't want to introduce complexity, then your problem is that you don't have tooling in place when you run into a situation where you have to rotate certs.
We can do dns-01 verification on intranets (with a valid domain). But the downside is that our domain would be logged in the Certificate Transparency log. What is the downside of being in the log?
Most sysadmins don't like their intranet addresses being in the log, so as not to provide intel to intruders.
But there's little to fear from exposing internal domain names. DNS names are more or less public knowledge - they are transmitted unencrypted, end up in plenty of caches, etc. Attackers can probably brute-force them or the PTR records anyway.
With proper measures (non-exportable key on an HSM, stored in a safe, requiring a PIN that only senior security staff know, etc.), it may even improve security for some attack scenarios - if someone gains control of your DNS servers for a short while, they won't be able to issue anything.
a) you have set up your own internal CA, whose key is safely stored on an HSM, with all security measures.
b) you use Let's Encrypt, and they issue you certificates based on DNS validation.
With (b), if a malicious party gains control over your DNS server, they can issue themselves a bunch of valid certificates that you may not even know about (unless you watch CT records). With (a), a remote attacker barely has a chance. Thus, a self-hosted CA may be beneficial in terms of security.
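On the "watch CT records" point: a quick-and-dirty sketch against crt.sh's unofficial JSON endpoint (the field names are assumptions based on its current output, so treat this as illustrative):

    import requests

    resp = requests.get("https://crt.sh/",
                        params={"q": "%.example.com", "output": "json"},
                        timeout=30)
    for entry in resp.json():
        # Flag issuances you didn't expect, e.g. anything not from your CA.
        print(entry.get("issuer_name"), entry.get("name_value"))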
(Just as a sidenote: you _never_ need to request and retrieve the cert on the system that the domain name points to. That is just the easiest way and the workflow most clients suggest, since it also makes a lot of sense.)
I'm wondering how easy it would be to forge DNS responses to their servers checking that I control a domain name.
DNSSEC makes that much harder, so it's nice that their resolvers are using it.
Kudos to Lets Encrypt for their great work on the former.
A single sad tear for the state of the latter.
I'd also like to see whitelists for the reserved-for-private-use IPv4 ranges and a .local or .home TLD, since those are circumstances where HTTPS doesn't give you much either, and where getting a certificate is unreasonably difficult.
The same thing applies to .local, .home, and private IPv4 ranges, which can all be spoofed depending on where an attacker is in your network.
Presumably you wouldn't be visiting a .onion address if you're not already connected through a Tor instance you know about.
> The same thing applies to .local, .home, and private IPv4 ranges, which can all be spoofed depending on where an attacker is in your network.
Which would be exactly the point: those are OK to spoof, since you'd only be visiting them through a trusted network, where nothing can be externally verified anyway.
What about following a link? Or, even more problematically, using its now-trusted status to load it inside an HTTPS page!
> Which would be exactly the point: those are OK to spoof, since you'd only be visiting them through a trusted network, where nothing can be externally verified anyway.
What? How is it okay for safe-place.home to be trusted when an attacker can spoof the DNS resolution upstream (like ISPs already routinely do to point you to ads)?
The whole point of distinguishing HTTPS connections is that they provide some way of guarding against spoofing of name resolution/packets and snooping. Nothing about how .local, .home, .onion, or local-reserved IP ranges are handled by browsers prevents these from being attacked, in many cases even from outside your network. If you curl 192.168.80.1 (assuming that's not within your subnet), your router will happily shoot some packets at your ISP. The situation for the others is even worse.
I guess I was unclear; my point was that I think some TLD should be dedicated to home networks, with ICANN and especially browsers recognizing that.
ISP spoofing wouldn't be an issue because if you used these TLDs then legitimate requests would never reach that far anyway. If not, well, you wouldn't be visiting such domains anyway and there would be nothing to spoof.
> If you curl 192.168.80.1 (assuming that's not within your subnet), your router will happily shoot some packets at your ISP.
But that's not an issue, because if it's not on your subnet then you wouldn't be visiting it in the first place. Any snooping ISP could just as easily make you visit some other address that actually did have a TLS certificate. In the worst case, you could make the browser check your subnet mask. But since the contents on those IPs will be unique from local network to local network anyway, I really don't see the point in bothering.
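For what it's worth, the subnet check itself is trivial - a sketch with Python's stdlib, reusing the 192.168.80.1 example (the local subnet here is hypothetical):

    import ipaddress

    local = ipaddress.ip_network("192.168.1.0/24")  # what DHCP actually handed us
    addr = ipaddress.ip_address("192.168.80.1")

    print(addr in local)    # False: off-subnet, packets head for the gateway
    print(addr.is_private)  # True: RFC 1918, but that alone proves nothing local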
If it's for home networks, .home resolution would usually occur at the DNS server on your router. How does the browser know that your router follows the new rules and won't route that DNS request up to your ISP, and therefore should trust the request?
> But that's not an issue, because if it's not on your subnet then you wouldn't be visiting it in the first place.
Unless your attacker can get you to click a link? That's a pretty easy thing to get users (especially the inexperienced) to do. Or they can sneak it into a secure page and monitor requests/serve malicious assets.
> In the worst case, you could make the browser check your subnet mask. But since the contents on those IPs will be unique from local network to local network anyway, I really don't see the point in bothering.
This ignores the case where your local network is either (a) infiltrated or (b) a coffee shop. The second is super common, and would need to be guarded against by the browser having some sort of Windows-style public/private network distinction, which users would remember to configure correctly.
> But since the contents on those IPs will be unique from local network to local network anyway, I really don't see the point in bothering.
I'm not seeing the connection. If someone with control of your public internet connection (i.e. what HTTPS is designed to guard against) sends a response when your browser requests something from that address, what does it matter what that address does in another local network?
Everything I've described here has been an element of a real attack where something somewhere was more trusted than it was supposed to be. This would add a massive array of attack vectors, and at best would indicate to the user trust in something that has no reason to be trusted.
If you're doing something on your local network, it makes a lot more sense to just create a self-signed CA and put the root on your devices. In the onion case, you should use HTTPS between you and your proxy (e.g. with a *.onion wildcard cert) to make sure you actually connect to your proxy.
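To make the self-signed CA suggestion concrete, a minimal sketch with Python's 'cryptography' package (names and lifetime are arbitrary; a real setup would also lock down the key and issue leaf certificates from this root):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Lab Root CA")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        .sign(key, hashes.SHA256())
    )
    open("home-lab-root.pem", "wb").write(
        cert.public_bytes(serialization.Encoding.PEM))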
This is not true; a .onion address is also a fingerprint of the public key of the node, so even if the connection is hijacked, the other node won't be able to authenticate itself.
The trouble with RSA is that, beyond somewhere around 2048 bits, you have to move to pretty large key lengths to get significant increases in security. For example, a 4096-bit key is not really as great as it might first appear.
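For a rough sense of the diminishing returns, NIST SP 800-57 puts the symmetric-equivalent strengths of RSA moduli at roughly:

    RSA-2048   ~ 112-bit security
    RSA-3072   ~ 128-bit security
    RSA-7680   ~ 192-bit security
    RSA-15360  ~ 256-bit security

So doubling 2048 to 4096 buys comparatively little (commonly estimated somewhere around 140 bits) while key generation and signing costs grow steeply.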
I might be wrong, but I have a vague recollection that Google went with a 16384 bit RSA key for their root key on Chrome devices. It's not a frequently used key (it's used to sign the signing key that can be updated - the signing key uses weaker and faster algorithms which can be changed in new firmware releases), but it's stored in read-only firmware that can only be updated by physically opening a machine. Given that you probably want the key to be good for somewhere between 5 and 30 years, the uncertainty which exists around quantum computers right now, and the tremendous problems which could occur if this key were to be factored, I can understand why they would choose such an obnoxiously large key length.
But when an "we are going to change the future of the internet"-project makes IPv6 a Prio-2 feature (to be added later, not native from the start) it just shows that we are really not there yet.
I have IPv6 at home now but still don't have it at our tech-oriented co-working space, and I've never seen an IPv6 address get handed out at a coffee shop, airport, restaurant, etc.
My ISP doesn't support IPv6, but I've just checked and a Windows 10 machine was able to do `curl -6 https://ipv6.google.com/` just fine.
And I think I've heard OS X has something as well... not sure, don't have OS X or iOS devices at hand to try it out.
(On GNU/Linux and BSD machines there's a lot of choices, Miredo, 6to4, AICCU from SixXS - whatever one fancies. As usual on *nix systems, setup is manual, though.)
That's like focusing on supporting Internet Explorer 8. At some point IPv4 will become the legacy protocol and supporting it will be annoying and expensive.
That day may not be today, but switching protocols is not always trivial, and you should start yesterday.
AWS's services are great, but many people just need compute. I don't understand why people who just need compute use AWS, since for compute alone Digital Ocean and Vultr destroy it in virtually every way: ease of use, performance, cost, IPv6, etc.
Digital Ocean now has block storage too, which plugs one hole. Of course they don't have things like RedShift or S3, but like I said not everyone needs that.
It still took nearly a decade before the first internal servers began supporting IPv6, but what a cheer went up when that first IPv6-only e-mail arrived.
The amazing thing was that most network kit supported IPv6 out of the box, thanks mainly to Government purchasing requirements that had hammered the big vendors into submission.
How would an IPv4 client, with this hypothetical backwards-compatible IPv6, connect to an IPv6 server?
It is nice to see 0 random port scans and SSH brute-force attacks. IPv6 space is so big it helps give some security through obscurity.
(may not apply to every allocation strategy)
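Back-of-envelope, for anyone who wants the number:

    # Exhaustively scanning a single /64 at one million probes/second
    # (ignoring shortcuts from predictable allocation strategies).
    hosts = 2 ** 64
    years = hosts / 1_000_000 / (3600 * 24 * 365)
    print("%.0f years" % years)  # roughly 585,000 years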
They probably have other features in mind that haven't been released yet; does that mean that whatever those features are, they're obviously not important on the internet?
;; ANSWER SECTION: www.sbb.ch. 14400 IN AAAA 2a00:4bc0:ffff:ffff::c296:f58e
There are a million reasons NATs are terrible for the internet. But they're used on IPv4, and IPv6's technical goal of increasing the address space is tied up into the technical goal of killing NAT, immediately, and changing the way a lot of people think about networking. For instance, end-user ISPs are expected to give you a /64 or more instead of a single IPv6 address so that you don't need to NAT, but many of them don't, because that's not how people think about addressing. If you have a NAT-using site and you want to switch to IPv6, you have to pursue the political goal of convincing your ISP to think differently about addressing.
Meanwhile, IPv4 and IPv4 NAT works. I'm typing this from behind a NAT, you're probably reading it from behind a NAT. It's not ideal, but, rough consensus and running code.
As soon as we all put our collective feet down and insist on IPv6 NAT implementations, such that IPv4 sites can move without rearchitecting their environment (whether or not that rearchitecting would be a good thing), IPv6 will get deployed quickly.
Name one? I've been on Comcast and Sonic, and both natively provide /64 networks. I've never heard of an ISP providing a /128.
> Meanwhile, IPv4 and IPv4 NAT works.
No, it doesn't. It breaks a million things more than it solves and it makes the Internet worse (and vastly more asymmetric, but that's repeating myself). NAT needs to die in a fire, and there is zero political or technical motivation to inflict its brokenness on a new protocol that absolutely does not need it. Evidence: that many ISPs are providing native, un-NATed IPv6 to their customers. Perhaps some don't, but someone will manage to screw up any given feature. They need to fix their shit, not coerce the rest of the Internet to break itself for their convenience.
I see it largely as an attempt to do market segmentation and limit the usefulness of Kimsufi to push people towards their other brands. Unfortunate, but...
It really is unfortunate. Not having to use a proxy for the sole purpose of sharing port 80 would be nice...
And they're not needed on IPv6 - just the same as we don't require cars to have horseshoes, even though horses needed them before.
Anyway, I'm on HN right now over NAT because HN doesn't have an IPv6 endpoint; otherwise I would be here over IPv6. THAT is what is holding IPv6 back: there aren't the services on IPv6, so there is no user demand for it.
But that may not represent the full story for them. Internal moderation and anti-spam may need updating to be compatible with running on two different networks that each have different approaches to numbering.
Even companies that 'get' v6 sometimes have rough edges. E.g. Cloudflare - which has been a great supporter of v6 for a long time now - has been known to send 'log in from a new IP address!' notifications because my machine automatically rotated its IPv6 privacy suffix.
Either you get that message or the IPv6 privacy mechanism is not working well enough.
On a deeper level this is because almost nobody actually understands how networks work. In my experience even really top developers often have absolutely no idea what happens on the wire. As a result they basically cargo cult netsec. Since they don't understand it, anything that deviates from "standard practice" gives them security FUD willies because they don't see the implications.
It's a barrier to deploying IPv6, yes, but I'm arguing that that's an entirely artificial barrier. You shouldn't need to understand why IPv6 folks dislike NAT in order to write software that uses the network.
There was a time when perimeter security was seen as an adequate and acceptable technique.
Of course, at this point you've broken end-to-end connectivity (P2P apps don't work, active-mode FTP doesn't work, etc.), so this may not actually resolve the reason people wanted to get rid of NAT. Maybe a good portion of the no-NAT-in-IPv6 crowd wants inbound routes to people's homes so apps work, and the "but you don't need NAT for security" crowd is misunderstanding them.
Here are two important behaviours which come with "just like a NAT would".
1) UPnP - a protocol which allows an application to request that it be exposed to the internet
2) Hole punching - e.g. two hosts send packets with matching ip/port values to cause a direct connection to be established between them (see the sketch after this list)
Applications don't really do those things on v6...
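For the curious, point 2 boils down to something like this minimal UDP sketch (addresses are hypothetical, and the rendezvous step that tells each peer the other's public ip:port is omitted):

    import socket

    PEER = ("203.0.113.7", 40000)  # the other side's public mapping

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 40000))
    sock.sendto(b"punch", PEER)       # outbound packet opens our NAT mapping
    data, addr = sock.recvfrom(1024)  # blocks until the peer's packet arrives
    print(addr, data)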
And, FWIW, ISPs should be giving residential users at least a /56 - if not a /48. Sites should have enough address space to route within the site and still use the features of IPv6 which require a full /64 (a /56 yields 256 /64 subnets; a /48 yields 65,536). Route aggregation and routing table size is the constraint of the v6 world, not address space.
V6 should make all this cruft go away but it doesn't.
If it's not safe to expose a service to the internet, it's also not safe to have it exposed within your LAN without access controls.
NAT-based 'firewalls' can have holes punched in them in a variety of ways - they are specifically designed to allow holes to be punched, because that's necessary for many applications to work.
It's also possible to take advantage of a user's browser as a relay. Combine that with the ability to use ad networks to target HTML to be served to the IP of an attacker's choosing, and the illusion that a perimeter firewall will prevent an attacker from initiating connections to your network starts to shatter.
I agree that in practice, in many cases today the stateful firewall-like functionality provided by a NAT device will provide a net security improvement. But that's not a situation we should continue to allow.
It's unrealistic to expect users to manually create firewall holes. That's why default configurations tend to include UPnP (which, naturally, introduces a new set of security and DoS concerns) - it will automatically open holes in the 'firewall'.
Google has published some of their thinking on this topic under the 'beyondcorp' moniker. The summary is that there is no "safe" and "unsafe" - you need to do a risk evaluation of each attempt to access a service, and "this is coming from inside our network" is inadequate.
I'd love if we could trust most devices to be publicly exposed, but IMHO we can not. If router manufacturers could be trusted one could add all kinds of clever things there, but ...
This is not true. Most real-world attacks I've seen begin by infiltrating malware via the web, e-mail, social media, or phishing. Once inside, connections between existing internal systems are exploited to crawl around the network.
Remote attacks against non-DMZ things are fairly rare in practice.
The only way to stop this is to implement even more firewalling inside the network, which basically breaks LAN.
I very much agree with the parent and have been talking about Google's beyondcorp and deperimeterization for years. A device that can't be safely connected to a network is broken, and we should stop degrading our networks to support broken junk. If broken junk gets hacked, it is the fault of the makers of that broken junk.
It is not hopeless. I've been into this stuff since the mid-1990s and things have improved a lot since then. I would not be too afraid to hook up a Mac or a fully patched Windows 10 machine to the public Internet. In the 90s or early 2000s I would not even consider this. You'd get owned by a bot within an hour. I remember in 2000 hooking a virgin Windows machine up to a campus network and being able to watch it get infected within 5 minutes.
The trends for the future are positive. Safer languages like Go, Rust, Swift, etc. are getting more popular everywhere. Advances in OS security like W^X, ASLR, etc. are becoming ubiquitous. Local app sandboxing and containerization is a thing almost everywhere. Device security postures are improving.
For most purposes, ND proxy is the new NAT.
I do think those uses will tend to involve 1-to-1 prefix translation rather than many-to-1 mapping.
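That 1-to-1 idea is roughly what NPTv6 (RFC 6296) does: swap the internal prefix for the external one, address for address. A toy sketch that ignores RFC 6296's checksum-neutrality adjustment (prefixes are illustrative):

    import ipaddress

    internal = ipaddress.ip_network("fd00:aaaa::/64")
    external = ipaddress.ip_network("2001:db8:1::/64")

    def translate(addr):
        # Keep the low 64 host bits, swap in the external /64 prefix.
        ip = ipaddress.ip_address(addr)
        assert ip in internal  # only translate our own internal prefix
        host = int(ip) & ((1 << 64) - 1)
        return ipaddress.ip_address(int(external.network_address) | host)

    print(translate("fd00:aaaa::1234"))  # -> 2001:db8:1::1234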