
Progress Towards 100% HTTPS, June 2016 - dankohn1
https://letsencrypt.org/2016/06/22/https-progress-june-2016.html
======
rogerbinns
I keep hoping they will help address non-Internet TLS. For example, if you run
an HTPC, fridge, printer, device controller or anything similar on your LAN
and want to talk to it over the same LAN using TLS, getting a workable cert is
currently not possible: for one thing, LAN names aren't going to be unique.

Plex did solve this in conjunction with a certificate authority, but that
solution only works for them. The general approach could work for others if
someone like letsencrypt led the effort.
https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/

~~~
Retr0spectrum
For local traffic, why do you need a public certificate authority?

~~~
maratd
Because some devices and browsers have difficulty determining whether they're
talking to something on the local network or not. And they don't try to guess.
So if your router requires you to connect via HTTPS, which is a good idea,
have fun clicking past a nasty warning and then seeing nasty icons everywhere
telling you that you're not secure.

And before you tell me to set up my own local authority and add it to the
chain on every device ... come on, really? Nobody wants to do that.

~~~
Franciscouzo
That's not a problem of browsers having difficulty determining whether they're
talking to something on a local network or not. Just because you're on a local
network doesn't mean you can't be the victim of a MITM attack.

------
criddell
Just as the entire world is going HTTPS, my faith in the system is seriously
waning. When Symantec bought Blue Coat, it made me start to think about how
fragile this is. How long before Symantec gets an NSL demanding an appliance
that can mint bogus certs on the fly for dropbox.com, facebook.com,
twitter.com, etc...?

How effective is something like certificate pinning against fraudulent certs?
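
Here's a minimal sketch of what a pin check involves, using only Python's
standard library; the fingerprint value is hypothetical, and real deployments
(like HPKP) pin the public key rather than the whole certificate:

    import hashlib
    import socket
    import ssl

    # Hypothetical pin, recorded on a previous connection you trusted.
    EXPECTED_FINGERPRINT = "d4f9...known-good-sha256-hex..."

    def leaf_fingerprint(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)  # leaf cert, DER bytes
        return hashlib.sha256(der).hexdigest()

    if leaf_fingerprint("twitter.com") != EXPECTED_FINGERPRINT:
        raise SystemExit("pin mismatch: possibly a fraudulent certificate")

A mis-issued cert would still validate against the root store, but it can't
match a previously recorded pin, which is the protection pinning offers (at
the cost of false alarms when a site legitimately rotates keys).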

~~~
agwa
> How long before Symantec gets an NSL demanding an appliance that can mint
> bogus certs on the fly for dropbox.com, facebook.com, twitter.com, etc...?

If the bogus certs are not logged in Certificate Transparency, they will be
rejected by Chrome:
https://security.googleblog.com/2015/10/sustaining-digital-certificate-security.html

If they are logged in Certificate Transparency, then the world will know, the
offending certificates will be immediately blacklisted, and Symantec will be
booted from root programs.

With the ongoing advancements in Certificate Transparency, your faith in the
Internet PKI should be growing, not waning.

~~~
criddell
From the link you posted:

> However, we were still able to find several more questionable certificates
> using only the Certificate Transparency logs and a few minutes of work. We
> shared these results with other root store operators on October 6th, to
> allow them to independently assess and verify our research.

So finding questionable certificates is trivially easy, but nobody ever
bothers to look? What good is that?

~~~
agwa
"Nobody"? Google monitors for their domains. So does Facebook. I'll bet a lot
of other high value sites are monitoring too but haven't said so publicly.

As for everyone else, give it some time. The ecosystem is still very young and
we're still developing tooling.
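
For a taste of what DIY monitoring can look like today, here's a minimal
sketch against crt.sh's JSON interface; the endpoint, its field names, and
the issuer string are assumptions and may change:

    import json
    import urllib.request

    def certs_for(domain):
        # %25 is a URL-encoded "%", crt.sh's wildcard.
        url = "https://crt.sh/?q=%25." + domain + "&output=json"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # Hypothetical allow-list of issuers you expect for your own domain.
    KNOWN_ISSUERS = {"C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3"}

    for entry in certs_for("example.com"):
        if entry["issuer_name"] not in KNOWN_ISSUERS:
            print("unexpected cert:", entry["name_value"], entry["issuer_name"])

Anything a CA logs for your domain shows up in searches like this whether or
not you were the one who requested it, which is the whole point of CT.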

~~~
criddell
I'm happy that Google and Facebook are discovering fraudulent certs. When they
pop up, hopefully those companies aren't prevented from going public with the
information.

Are there any end-user tools? When I open twitter.com, I would love for my
browser (or my phone if I'm using an app) to tell me that the certificate
fingerprint has changed unexpectedly since the last time I visited.

~~~
schoen
If you don't mind the risk of false positives (detecting changes that are
legitimate), you can get that information from

https://addons.mozilla.org/en-US/firefox/addon/certificate-patrol/

------
5ilv3r
I'm still bitter about this chain of trust model. The fact that I have to get
some other party to tell my users that they can trust me just seems wrong.
They trust me because of personal history, not because some banner says they
should.

Browsers and OS vendors shipping CAs seems to be the root of the problem, in
my mind. Those should be distributed by the service providers, who are the
actual trustworthy entities in users' minds.

~~~
toomanythings2
And when we visit your site for the first time, having never heard of you
before, why should we trust you?

That's the point: having some authority who did anywhere from minimal to
extensive checking, and who will verify you really are who you purport to be.
"Trust but verify" probably plays a part in this.

But, remember, you don't have to go to HTTPS. There is no requirement for you
to do so.

~~~
5ilv3r
Why should you trust me if you have never met me? If you like what I do, trust
me, and please give me money. :)

Cert companies only do a phone-call check for the very expensive EV certs.
There is no "minimal to extensive" checking. That is a scam.

Web tech is all https now. I can't even browse a lot of https sites with some
of my older devices. There is a requirement and I dislike it.

~~~
ademarre
> _Why should you trust me if you have never met me? If you like what I do,
> trust me, and please give me money._

What if a customer who trusts you returns to your site, but ends up on an
impostor's site instead? He has no way to discern the difference.

~~~
5ilv3r
I would argue strongly that such users do not have those abilities even with
https. A valid cert is a valid cert. My supporting point would be the major
browser vendors' recent backpedal on throwing mixed-content errors, which
demonstrates that a smooth ride for the user is far more important to them
than safety.

Actually, I called shutterfly.com on the phone about that mixed-content issue.
I emailed them screenshots of the error from 6 different operating system and
browser combinations, from 3 other users even. They claimed nothing was wrong.
They were serving javascript via http on an https page, yet for weeks, on the
phone, in chat, and in email, they told me I was wrong and needed to update
Java, and declined to send the report to their webmaster. Even those wanting
to be trusted are incapable of using these tools, from what I have seen. The
whole thing is broken.

------
Abundnce10
_Let’s Encrypt has issued more than 5 million certificates in total since we
launched to the general public on December 3, 2015. Approximately 3.8 million
of those are active, meaning unexpired and unrevoked. Our active certificates
cover more than 7 million unique domains._

How can you cover 7 million unique domains if you've only issued 5 million
certificates?

~~~
waterphone
One certificate can be for more than one domain.

~~~
altano
For example, a single cert can serve www.example.com as well as example.com

~~~
gummiruessel
That is true, but in this case I think Let's Encrypt, and also the parent to
your comment, mean different domains, as in one certificate covering all
three of example.com, example.net and example.org.

~~~
5ilv3r
The same mechanism in cert generation provides that functionality. Hostnames
are hostnames. SAN certs just take a list of them.
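
To make that concrete, here's a minimal sketch of a CSR that requests one
certificate covering several hostnames, assuming a recent version of the
third-party Python "cryptography" library; the domain names are placeholders:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Subject Alternative Names: whether they're subdomains of one zone or
    # entirely unrelated domains makes no difference to the mechanism.
    san = x509.SubjectAlternativeName([
        x509.DNSName("example.com"),
        x509.DNSName("www.example.com"),
        x509.DNSName("example.net"),
        x509.DNSName("example.org"),
    ])

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
        .add_extension(san, critical=False)
        .sign(key, hashes.SHA256())
    )

    print(csr.public_bytes(serialization.Encoding.PEM).decode())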

------
cdolan92
This is great, I use LetsEncrypt for my company. _However_, the graph is a
little misleading. Let's look closer:

LetsEncrypt is practically built upon the idea of frequently (and
automatically) re-issuing your certificate(s). The graph's line shows what
appears to be an accumulated sum of certificates issued by day.

If most certificates expire every 90 days and get re-issued, of course the
graph will look like that!

What's most interesting to me is the steps up in the graph. It appears that
the steps in the graph _roughly_ occur on 70-90 day intervals.

Impressive growth for a great mission/service, but I wanted to point out the
mechanics behind the graph. Hopefully others can offer some alternative
perspectives!

 _edit_ : Grammar, illogical sentence structure.
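
A back-of-the-envelope sketch of the renewal effect described above, with
made-up numbers:

    NEW_PER_DAY = 1000   # hypothetical new adopters per day
    LIFETIME = 90        # Let's Encrypt cert lifetime in days

    active = 0
    issued_total = 0
    for day in range(365):
        renewals = active // LIFETIME      # ~1/90th of active certs renew daily
        issued_total += NEW_PER_DAY + renewals
        active += NEW_PER_DAY              # renewals don't add active certs

    print("active:", active, "cumulative issued:", issued_total)

Cumulative issuance ends up a multiple of the active count here; the
real-world gap (5 million issued vs. 3.8 million active) is smaller, but the
direction is the same.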

------
zzzcpan
Is it still problematic to issue lots of certs for lots of subdomains? I
mean, there are still no wildcard certs, and crazy rate limits that disallow
issuing thousands of certs per day for user-generated subdomains?

~~~
tracker1
If you're generating that many subdomains (and you control the subdomains),
it's probably worth investing in a traditional wildcard cert.

Though, it would be nice if the likes of dyndns names were given an
exception, since they are effectively second-level TLDs.

~~~
mieko
LE uses the Public Suffix List to decide what's a "domain". Their really-low
rate limits have caused a flood of applications which are overwhelming the
PSL's maintainers.

https://community.letsencrypt.org/t/dyndns-no-ip-managed-dns-support/883/16
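
Here's a minimal sketch of the distinction in play, using the third-party
tldextract library (an assumption; Let's Encrypt's own PSL handling differs)
to compute the registered domain that rate limits key on:

    import tldextract

    for host in ["foo.example.com", "alice.dyndns.org", "bob.dyndns.org"]:
        parts = tldextract.extract(host)
        print(host, "->", parts.registered_domain)

If dyndns.org is on the Public Suffix List, alice.dyndns.org and
bob.dyndns.org count as separate registered domains with separate quotas; if
not, every user's cert draws from dyndns.org's single quota, which is what
pushes DNS providers to apply to the PSL.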

------
yeukhon
My understanding is that for an intranet, you could use Let's Encrypt. For
example, if I own *.foo.com, and I want my intranet to be *.internal.foo.com,
I need to put *.internal.foo.com in the DNS in order to verify I own
*.internal.foo.com, correct? But then doesn't that expose my 'internal'
network? Hope there is a different way to solve this problem.

~~~
pfg
You don't need to "open up" your internal network (the ownership validation
can happen via DNS), but the hostname would be public through Certificate
Transparency.
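
As a minimal sketch of what that DNS validation involves: with the ACME
dns-01 challenge you only publish a TXT record, computed from a token
supplied by the CA and your account key's thumbprint (both values below are
hypothetical), so nothing internal needs to be reachable from outside:

    import base64
    import hashlib

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"       # from the CA
    thumbprint = "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"  # of your key

    key_authorization = token + "." + thumbprint
    txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())

    # Publish this record, then ask the CA to validate. The host itself
    # stays unreachable; only its name becomes public via CT.
    print('_acme-challenge.internal.foo.com. IN TXT "%s"' % txt_value)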

Generally, if you're relying on your internal hostnames being secret (which is
a terrible idea anyway), you should consider using an internal CA, because
there's a good chance _all_ public CAs will start logging every single
certificate they issue to public logs, and that would include all the domains
the certificate is valid for¹. Better yet, don't treat your hostnames as
secrets.

¹ I _think_ there have been some discussions about allowing CAs to censor DNS
labels after the TLD+1 level for Certificate Transparency. Not sure if that's
going to happen; I'm not a fan. This would still require that your CA support
this mechanism, something I don't think Let's Encrypt would do.

------
simbalion
This is extremely exciting. I've been supporting these folks since the beta.
It's great for offering free SSL to clients.

------
rsync
Am I the only person who is wary of 100% https?

Remember, once you encrypt a web resource in SSL, you add a ton of baggage on
top of any methods that might be used to access it.

I like a world in which I can 'nc' a web resource and manipulate it with unix
primitives _without_ a truckload of software dependencies.

If sensitive information is involved, then certainly - use SSL. I understand
that we must give up conveniences for that functionality.

But there are a _lot_ of web resources that have existed, do exist, and
potentially exist that are completely benign ... I think we're shackling
ourselves by chasing after this perfection.

Or, put another way, we're chaining ourselves to a world where web resources
are only accessed by web browsers, and only by those web browsers that are
_chaining themselves_ to a fairly dubious security scheme...

~~~
icebraining
Just as you can use "nc" for an HTTP resource, you can use "openssl s_client"
or "ncat --ssl" (from the nmap project) or "socat" to manipulate an HTTPS
resource using the same unix primitives. Which truckload of dependencies does
this require? The Debian package for OpenSSL only depends on libc.

I do fully agree that the web is getting more tied to browsers, and to me
that's worrying, but TLS is mostly a transparent tunnel over which you can use
the same protocols; it's not part of that trend, in my opinion.
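
To illustrate: the same raw HTTP you would type into nc works unchanged once
the socket is wrapped in TLS, here with nothing beyond Python's standard
library:

    import socket
    import ssl

    host = "example.com"
    ctx = ssl.create_default_context()

    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # The exact bytes you'd send over plain nc to port 80.
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                        b"Connection: close\r\n\r\n")
            while True:
                chunk = tls.recv(4096)
                if not chunk:
                    break
                print(chunk.decode("utf-8", "replace"), end="")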

------
arca_vorago
Is there any alternative to ssl and tls out there? Sshttp anyone?

~~~
nisa
Tor onion services are technically an alternative. You access the hash of the
public key, and you can only put up that URI if you have control over the
private key. I guess something similar based on hashing and public-key
cryptography might be possible outside of Tor, but it's not exactly
user-friendly to begin with.

------
g8oz
Blackberry 10 browser refuses to recognize Lets Encrypt certs :(

------
projectramo
Is there any work being done on being able to easily switch out standards?

That way when https is found to lack some feature, we can easily upgrade to
httpz almost immediately?

~~~
vtlynch
This will likely never be the case due to how HTTPS actually works. As someone
else stated, HTTPS is HTTP + TLS.

The "s" in HTTPS is for "secure", and TLS provides that security.

TLS is an evolving standard which is updated over time to add new features
when necessary. When an HTTPS connection is negotiated, the two ends can
seamlessly choose which version of TLS to use, based on what the client and
server both support.

So, HTTPS will never die due to lack of features. A new version of TLS will
just be approved and deployed, and newer devices can use that while older
devices can get by on an older version of TLS.

TLS is the successor to SSL, and implementations are backwards compatible, so
devices that support TLS typically also support SSL. The full version
history, from newest to oldest, is: TLS 1.2, TLS 1.1, TLS 1.0, SSL 3, SSL 2.
In reality, very few servers still use SSL 3 or SSL 2, due to known
weaknesses, but colloquially, all the versions are just called "SSL".

TLS 1.3 is underway and will shortly be ready for primetime. Firefox and
Cloudflare have already written implementations based on the draft spec
(sort of how routers implement the newest 802.11 standards before they are
100% official).
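
A minimal sketch of that negotiation, assuming Python 3.7+'s ssl module; the
version bounds are a hypothetical policy:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    # Accept TLS 1.0 through 1.2, refuse the old SSL versions outright.
    ctx.minimum_version = ssl.TLSVersion.TLSv1
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            # The handshake settles on the highest mutually supported
            # version in that range, e.g. "TLSv1.2".
            print("negotiated:", tls.version())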

~~~
profmonocle
Plus, even if we did decide to fully replace TLS, nothing would necessarily
need to happen with certificates. We call them "SSL certificates", but the
certificate standard - X.509 - actually predates SSLv1 by several years. A TLS
alternative/replacement could adopt the X.509 standard as its certificate
format and automatically work with the existing CA system.
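
A minimal illustration of that independence, assuming the third-party Python
"cryptography" library and a PEM file at a hypothetical path; no TLS
handshake is involved anywhere:

    from cryptography import x509

    with open("example.com.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # Any future protocol could consume these same fields and chain to
    # the same CAs.
    print(cert.subject.rfc4514_string())
    print(cert.issuer.rfc4514_string())
    print(cert.not_valid_after)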

------
Animats
I see this as security theater. Most web pages don't need to be encrypted.
Anything with a form should be, but if you're just viewing static content,
there's little point. Yes, it obscures what content you're viewing, slightly.
An observer often could figure that out from the file length.

Encrypting everything increases the demand for low-rent SSL certs. Anything
below OV (Organization Validated) is junk, and if money is involved, an EV
(Extended Validation) cert should be used. Trying to encrypt everything leads
to messes such as Cloudflare's MITM certs which name hundreds of unrelated
domains. This is a step backwards.

~~~
theandrewbailey
> Most web pages don't need to be encrypted. Anything with a form should be,
> but if you're just viewing static content, there's little point.

Some really cool HTML and JS functionality will only work over HTTPS.

> Yes, it obscures what content you're viewing, slightly. An observer often
> could figure that out from the file length.

If you have an attacker that can identify content solely from its length, you
have bigger problems than an SSL cert can solve.

> Trying to encrypt everything leads to messes such as Cloudflare's MITM certs
> which name hundreds of unrelated domains. This is a step backwards.

I do not see the problem. All those domain owners consciously choose to have
Cloudflare host their stuff. The cert might be a few KB bigger, but who cares?

~~~
Animats
_> Most web pages don't need to be encrypted. Anything with a form should be,
but if you're just viewing static content, there's little point._

 _Some really cool HTML and JS functionality will only work over HTTPS._

What "really cool" HTML feature requires HTTPS? There can be problems with
mixed secure/insecure content, but that's more of an offsite content issue.

 _> Yes, it obscures what content you're viewing, slightly. An observer often
could figure that out from the file length._

 _If you have an attacker that can identify content solely from its length,
you have bigger problems than an SSL cert can solve._

An eavesdropper knows the IP address and the length of the content, even if
it's encrypted.

 _> Trying to encrypt everything leads to messes such as Cloudflare's MITM
certs which name hundreds of unrelated domains. This is a step backwards._

 _I do not see the problem. All those domain owners consciously choose to have
Cloudflare host their stuff. The cert might be a few KB bigger, but who
cares?_

When sites share an SSL cert, and you can break into one of the sharing
sites, there's a way to impersonate the others. Cloudflare customers on its
lower tiers of "security" often don't realize this. The customer doesn't pick
which sites share certs; that's up to Cloudflare.[1]

[1] http://john-nagle.github.io/certscan/whoamitalkingto04.pdf

~~~
pfg
> What "really cool" HTML feature requires HTTPS? There can be problems with
> mixed secure/insecure content, but that's more of an offsite content issue.

One example would be the Geolocation API, with more to come[1]. Another
example (specifically for HTML) would be Mozilla showing a user-visible
warning when it encounters a type="password" field in a form served via HTTP
(or with an HTTP target - I'm not certain). This is currently only enabled in
the Developer Edition, but will eventually land in stable.

> When sites share an SSL cert, and you can break into one of the sharing
> sites, there's a way to impersonate others. Cloudflare customers for their
> lower tiers of "security" often don't realize this. The customer doesn't
> pick which sites share certs; that's up to Cloudflare.

This is a non-issue for services such as CloudFlare. Site owners do not have
access to the private key, only CloudFlare does. Breaking into one of the
other sites won't give you access to the private key, only breaking into
CloudFlare would, and such a vulnerability would have nothing to do with the
fact that you're sharing a SAN certificate with other sites. I'm not aware of
any other cross-site vulnerabilities that stem from shared certificates in an
environment where every site on that certificate is served by the same
frontend.

[1]: https://www.chromium.org/Home/chromium-security/deprecating-powerful-features-on-insecure-origins

~~~
rsync
"One example would be the the Geolocation API, with more to come[1]."

Ugh. Why would they do that ?

I can understand that geolocation could be _tremendously sensitive_ and you
absolutely would want to offer the option of SSL ... but why limit it to SSL ?

geolocation is _also_ something that you'd want to hack into and build into
things ... and maybe even things with limited processing power and memory.

Wouldn't it be nice to have the option to interact with a geolocation API
(over http) with stdio and not include a giant truckload of dependencies and
libraries and megabytes of packages ?

~~~
pfg
> I can understand that geolocation could be tremendously sensitive and you
> absolutely would want to offer the option of SSL ... but why limit it to
> SSL?

I think you answered your own question. ;-)

> geolocation is also something that you'd want to hack into and build into
> things ... and maybe even things with limited processing power and memory.

Presumably, once your device is capable of running a modern browser such as
Chrome or Firefox (which is what we're talking about here), TLS is a drop in
the bucket in terms of resource usage. Or were you talking about the server?

