Digicert Withdraws from the CA Security Council (digicert.com)
166 points by dokov 7 months ago | 72 comments

Note that this is not the CA/Browser Forum, the usual industry association you hear about. The CA Security Council is a group of a small number of old-guard CAs, which has been notable recently for pushing a distinction between (automatic) domain validation, organizational validation, and Extended Validation, mostly as a response to Let's Encrypt. The "London Protocol" appears to be a proposal for formalizing the distinction between DV and OV + EV: https://casecurity.org/2018/06/27/the-london-protocol/

If you look at this previous paper they (Entrust + Comodo) published, they're proposing that OV and EV certificates get padlocks and DV ones don't: https://casecurity.org/wp-content/uploads/2017/09/Incidence-... Or, in other words: you can't get a padlock from Let's Encrypt; you'll have to pay them for one.

Compare with Google's plan at https://blog.chromium.org/2018/05/evolving-chromes-security-... which just drops the padlock for everyone, treats plaintext HTTP as affirmatively insecure, and stops pretending that paying money to a CA means that you're a morally upright website.

A CA leaving the CASC seems like a good thing for the world.

Also - CASC member Entrust Datacard told Troy Hunt (the Have I Been Pwned guy) that the London Protocol meant that they were going to "phase out insecure DV certificates": https://twitter.com/troyhunt/status/1012190910254235649

CASC member GoDaddy cold-called a random musician with an HTTP site and told her the only way to avoid an SSL warning from Google would be to pay them $200+: https://www.facebook.com/rebecca.ann.925/posts/1740244236062...

I can see why Digicert would want to run from the CASC and keep their reputation safe.

(from Ryan Sleevi of Chrome's crypto team https://twitter.com/sleevi_/status/1012338888088719360 - searching for his other interactions with the CA Security Council is pretty enlightening too)

Was it confirmed to be GoDaddy? That is dishonest marketing to say the least.

I assume it was edited in. But to be clear to future readers, the bracketed link in grandparent links to a tweet proving they did.

A more direct link to where they admit it (and publicly try to fix the mistake) is this: https://twitter.com/perezbox/status/1011708133960536064

They don't admit to a general pattern of mistakes like this there, but they do say they will use this "as a learning opportunity for all our agents." Whether that is because this was genuinely a one-off or because this is the only publicized example is unclear.

It's an interesting question: is there value to certificate identity verification?

Apple's App Store, for example, demands that companies wanting to publish apps undergo a verification process using a DUNS number. And this has been useful, for example, in resolving trademark and DMCA disputes. But I'm not really sure it's improved security in any way.

> It's an interesting question: is there value to certificate identity verification?

This is interesting to me, in part because I've argued again and again against the idea that HTTPS must involve verification of the legal identity of the operator of a website, and for many years I always had people push back and insist that the encryption part was only a tiny, almost worthless portion of HTTPS -- identity verification was the real benefit. I even had people claim that encryption without identity verification was actually worse than no encryption at all! It was as if they lived in a completely different world than I did, where eavesdropping/recording of unencrypted transmissions was incredibly rare and not worth worrying about, but ten trillion googolplexes of HTTPS spoofers were lurking around every corner.

Now, of course, people have done a complete 180 and realized that the encryption is the important part, and the identity verification is at best a distant secondary or even tertiary concern. Identity verification doesn't significantly add anything to end-user security; the avenues for phishing and other malicious uses of the web rely on a general public so technologically illiterate that prominently-displayed identity verification is probably several hundred steps down the list of the top thousand things you can do to protect average users, if it even cracks the list at all. Which is probably why all the prominent identity display stuff seems to be phasing out in browsers; the vendors have recognized that it doesn't contribute useful additional security (and some notable cases have shown that it's easy enough, if you want, to spoof even "verified" identities).

It's true that unauthenticated, encrypted communication is an improvement over unencrypted communication, because it rules out pure eavesdropping attacks. The question is, what's the risk of eavesdropping compared to MITM?

Once upon a time, "using the Internet" meant shared-medium Ethernet, and the only choke-points were things like routers (too stupid to be nefarious) and ISPs (who were too busy).

These days, nearly everything is star topology, and one of the biggest potential bad actors (the NSA) gets Internet backbone ISPs to silently do their bidding.

Sure, phishing is still a thing and identity verification doesn't solve that, but at least pulling it off requires continual, repeated effort. You can't just add a black box to a network closet somewhere and come back a week later to pick up all the intercepted data.

There are two things in this thread under the name of "identity verification": automatically verifying the hostname (origin) of a site, and manually inspecting the legal name of the entity who requested a certificate. The first is incredibly important, and encryption is useless without it; the second is not.

My company pays for an EV certificate, and we pin our clients to expect EV certificates in the corresponding domain names.

This gets us most of the benefits of standard certificate pinning without coupling us to any specific private key or certificate vendor.

The theory is that it’s hard(er) to fraudulently get an EV certificate issued, although—having gone through the process—it does not strike me as super secure against a determined adversary.

I should clarify that we are not pinning the organization information in the certificate or relying on it; just the fact that it’s an EV certificate.

Apple's case is a little different because it lets you publish native code on the phone. (You can argue that websites effectively also publish code, but the web sandbox is probably quite a bit stronger than the iOS one.)

I don't think there's much advantage in knowing that you're some registered organization. It's pretty easy to register an organization - see e.g. https://stripe.ian.sh , to which the CASC responded by proposing the rule "Applicants that have been in existence for less than 18 months are not eligible for EV certificates," as well as a requirement that the CA inspect your tax returns to make sure you're a bona-fide business. This is basically a losing game (losing for the CAs, because it won't work, and losing for the world, because startups can't get EV certificates and everyone else loses their privacy), hence my comment about validating that you're a morally upright website. It's the evil bit, except as a TLS extension.

One thing that does have value, though, is making sure a website is the right website. It's perfectly fine for https://stripe.ian.sh to have a cert as long as they don't get my https://stripe.com credentials or cookies. On the cookie side, the browsers already solve this with the same-origin policy, so the hard part is just making sure that you don't enter usable login credentials on the wrong website (i.e., preventing phishing, which is the stated goal of CASC). There are pretty good solutions for this like FIDO/FIDO2 - in the future I'd like to see places like banks either giving you U2F security keys or letting you bring your security key in when you enroll for an account and activate it on their machine, because the FIDO protocol lets the security key participate in the same-origin policy. If I've enrolled my key with bankofamerica.com, and my login requires using a security key, I can't divulge usable credentials to bankofamerica-secure-login-trust-me.com.
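As a rough sketch of that origin binding, here is a simplified version of the WebAuthn RP ID rule (real browsers also consult the Public Suffix List; the function name and domains are illustrative, not from any spec text):

```python
def rp_id_matches_origin(origin_host, rp_id):
    """Simplified WebAuthn RP ID check: the RP ID must equal the
    origin's host, or be a suffix of it at a dot boundary.
    (Real browsers additionally consult the Public Suffix List.)"""
    return origin_host == rp_id or origin_host.endswith("." + rp_id)

# A key enrolled for bankofamerica.com also works on its subdomains:
print(rp_id_matches_origin("login.bankofamerica.com", "bankofamerica.com"))  # True

# A lookalike phishing domain cannot claim that RP ID:
print(rp_id_matches_origin("bankofamerica-secure-login-trust-me.com",
                           "bankofamerica.com"))  # False
```

Because the browser, not the user, performs this check, the credentials a security key produces are simply unusable on the wrong domain.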

Relatedly, the same-origin policy doesn't (and can't) distinguish between the type of HTTPS you have or whether there's a verified organization name on the cert. It treats any HTTPS for the same domain as the same origin. So the browser makers have been pushing back against meaningful distinctions between DV, OV, and EV (and also against EV even having any cosmetic distinctions), which is completely at odds with the CASC's strategy. Sucks for the CASC because it means a $0 Let's Encrypt certificate is as meaningful as a $300+ EV certificate, but I think the browser makers are right, here.
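To illustrate the point: an origin is just the (scheme, host, port) triple, so nothing about which certificate secured the connection survives into it. A small sketch (illustrative code, not any browser's actual implementation):

```python
from urllib.parse import urlsplit

def origin(url):
    """Compute a (scheme, host, port) origin tuple. Note that no
    certificate detail (DV vs. OV vs. EV) plays any part in it."""
    parts = urlsplit(url)
    default = {"http": 80, "https": 443}
    return (parts.scheme, parts.hostname, parts.port or default.get(parts.scheme))

# The same domain is the same origin regardless of who signed the cert:
print(origin("https://stripe.com/login") == origin("https://stripe.com:443/"))  # True

# ...and a lookalike domain is a different origin:
print(origin("https://stripe.com/") == origin("https://stripe.ian.sh/"))  # False
```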

Same-origin policy does not help you on first visit, and trust on first visit is really the hard problem that PKI was invented to solve.

If you can count on a prior relationship in encryption, then you don't even really need PKI. Just mutually decide on an encryption key and then use that to encrypt/decrypt your traffic.
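A minimal stdlib sketch of that prior-relationship model: two parties who already share a secret can authenticate messages with an HMAC, no certificates or PKI involved. (A real design would also encrypt, e.g. with an AEAD cipher; the function names here are illustrative.)

```python
import hashlib
import hmac
import secrets

# Agreed out of band - no CA ever sees or vouches for this key.
shared_key = secrets.token_bytes(32)

def seal(key, message):
    """Append an HMAC-SHA256 tag so the receiver can check the message
    really came from someone holding the same key."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def unseal(key, sealed):
    """Verify the tag in constant time before trusting the message."""
    message, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad tag: forged message or wrong key")
    return message

assert unseal(shared_key, seal(shared_key, b"meet at noon")) == b"meet at noon"
```

The whole point of PKI is to get this kind of keyed trust going between parties who have never met.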

Not every website is Twitter or Facebook or Google or a major bank, where the vast majority of visits are repeat visitors. I run dozens of websites, and every single one has more than 50% new visitors, year in and year out. That is representative of the vast majority of websites out there.

Browser makers have dramatically shifted their security focus toward repeat visitors. Two-factor authentication, same-origin policy, HSTS, and other new security features are only useful for repeat visitors.

EV certificate notices (the "green banner" in the address bar) are one of the very few browser features that attempt to help first-time site visitors. Now, obviously EV certs have problems as you cite with Stripe... but instead of trying to solve this problem, browser makers are simply copping out and falling back on a posture that HTTPS just means encrypted over the wire, not identity. Essentially: establishing security is the user's problem. Once it's established, then we've got all sorts of tools to maintain it.

This is a mistake. It does not help most websites, and it privileges incumbent services (which have lots of repeat visitors) over new websites (who are trying to win new customers).

CAs and browser makers must continue to attack the problem of trust on first visit. It's not an easy problem but it is the essence of the promise of the web. Visitors should be able to load a new site and know if it is trustworthy, just like people trust that they can physically walk into a new store and not get robbed.

To be fair to Google, they do have a pretty-well-staffed portion of the company that works on giving you trustworthy URLs to the first visit to a website. :-)

But seriously, I think that's the actual answer. The only way you find websites is by receiving a URL from somewhere - whether that's from a search engine, or from a link on some other website, or from non-web means like a paper advertisement or a billboard. And the PKI does solve the problem of bootstrapping trust from a textual URL. (And to be actually fair to Google, they've HSTS-preloaded the top-level domain .app, so any website inside .app gets HSTS from first use onwards - which seems like the right way to solve the problem of URLs communicated aurally.)
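For reference, HSTS itself is just a response header: the site (or, via preloading, a whole TLD like .app) commits browsers to HTTPS for a stated period. A sketch of what a server might send (the max-age value is a common choice, not a mandate):

```python
# One year is the conventional minimum for preload eligibility.
max_age = 60 * 60 * 24 * 365  # seconds

hsts_header = f"Strict-Transport-Security: max-age={max_age}; includeSubDomains; preload"
print(hsts_header)
# Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

On an ordinary domain this only protects visits after the first one; a preloaded domain is protected from the very first request, which is what the .app preloading buys.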

To solve the problem of "Am I on the right website," the concept of "right website" has to be well-defined - which generally means you have to already know of the existence of some right website. The UX problem in security is how to take advantage of that knowledge, e.g., not let a Bank of America customer click a link in a phishing email that doesn't actually go to the website they previously signed up with.

The alternative is that some committee like the CASC determines which sites are morally upright and worth anyone visiting, and which ones aren't. That solution works okay for physical sites, more or less, but applying it to the internet generally doesn't produce good outcomes.

If you want to know if a site is trustworthy, you want a certificate from their insurance company, not their CA. Someone who is promising to pay you real money if the site contains malware or a scam or whatever. Unfortunately this sort of insurance would probably be more, not less, expensive than EV certs.

That type of insurance is often called cyber insurance, and is pretty common among businesses. I work for a nonprofit--not even a tech company--and we carry cyber insurance and require all our technology vendors to carry it too.

Personally, I would not be opposed to CAs requiring proof of cyber insurance in order to issue an EV cert.

"Cyber insurance" may reimburse the site operator for the liability they have to you for (say) being infested with malware. And that model seems to work OK for car accidents. But I think that only helps you (the site visitor) if you already have a practical ability to sue the site operator (particularly difficult if the site operator is on the other side of the globe, in a jurisdiction you know nothing about). What I'm proposing is that the insurance company offers to accept direct liability to site visitors, with well defined liquidated damages and arbitration processes so that it actually means something to the user.

If someone is offering that today, they aren't marketing it well. And again, actually signing the certificates would be a negligible part of the business (or the cost).

Actually, if I were in the insurance business I'd actually think about pushing this direction. If you could get the browsers to sign up for giving it special treatment it would probably greatly increase the size of the market. (Right now operators seem to mostly just escape significant liability for compromises)

There are two different parties effectively involved, and it's worth distinguishing them for very practical reasons:

The human, who may be sat at a PC using, say, a web browser, can contemplate anything they like about a certificate and in principle has some very sophisticated tools at their disposal. For example, a human _could_ read the name "Fun Co. Jurassic Toys Inc." in the Subject O field of a certificate and think about a TV documentary they saw last week which said this company was run by a convicted fraudster from Spain, and that might colour their opinion of the web site they thought was a legitimate discount airfare company named "Cheaper Flights" they'd seen recommended on Facebook. They might ask their IT-literate nephew Steve, "Hey, Steve, one moment, does this site look dodgy to you?" and end up not giving a criminal $850 for tickets to San Francisco that never existed.

The software, a web browser or similar User Agent, is not capable of such sophisticated insight. But unlike a human, its mindless checks are done for every single resource in every single page: the GIF in the top left corner, the CSS stylesheet, the JavaScript auto-scroll. Since it isn't thinking, it doesn't get bored and skip things or take shortcuts.

All trusted certs, including DV certs, allow the software to do all its checks, in real time, as it proceeds. When you submit a form by pressing the "reply" button on Hacker News to insist I'm wrong, your browser will insist on verifying that it is posting that reply to a server which has the appropriate credentials for news.ycombinator.com before it transmits the reply, not a minute afterwards when it's too late. This simple, entirely automatic, verification is the only way to make it painless enough to actually get used. Any security strategy that says "And then obviously the human operator does X" is from a dreamworld unrelated to ours, a world which also has no drink driving, nothing is ever left in the back of a cab, a world where the pencil eraser was never invented.
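The same automatic discipline exists outside browsers too. For instance, Python's default TLS context refuses to send any application data until both the chain and the hostname check out (a small illustration of the defaults, not a claim about any browser's internals):

```python
import ssl

# The default context is configured the way the comment above describes:
# the peer must present a valid, trusted chain AND its certificate must
# match the hostname we dialed - all before any application data flows.
ctx = ssl.create_default_context()

print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```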

If the browser vendors had wanted EV to have a practical impact on the actual security, rather than just the cosmetics desired by the CA industry's sales people, they'd have instigated a more complicated origin policy, but they intentionally didn't do that.

If your situation really is that your sites mostly attract visitors who genuinely had no previous connection, nothing we can do fixes the actual problem. You are asking merely for theatre, which will cost you extra money, in the foolish belief that conmen aren't going to also put on a show - and probably a better one than yours, if it makes them money.

Plenty of people walk into stores and get robbed. Sometimes it's just $1, sometimes it's more, sometimes it's misleading advertising - the physical world has exactly the same problems.

I hadn't actually even _heard_ of "OV" before. Googling... I still can't tell what the difference between it and EV is supposed to be, other than how much someone paid a CA.

So, some years ago now, two groups had orthogonal concerns about the Web PKI, they met and reached an agreement.

The two groups are the major public Certificate Authorities and the browser vendors. The browser vendors wanted better quality certificates, to improve the trustworthiness of HTTPS (and thus of their products), the CAs wanted to push the higher-priced certs they were offering which included Subject fields like Organisation name by augmenting the browser UI.

The immediate result was EV (Extended Validation) certs, the CAs would promise to do a better job checking the identity of their subscribers before issuing these certificates (and so would charge a premium for them), and the browser vendors would mutate their UI to show the new "green bar" with the organisation name highlighted if the certificate was made to EV standards.

The side effect, and larger long-term consequence, was the formation of a standing meeting between the two sides: the CA/Browser Forum, or CA/B Forum, which now sets baseline rules for all "SSL certificates".

Over time browsers have used the CA/B Forum to improve those rules, the Baseline Requirements, so that the cheapest certificates are more fit for purpose - and so in effect they got their original goals achieved for all certs. Practices like issuing for non-Internet names, or issuing without SANs or without EKUs, have been driven out, and today the DV cert you buy for $5 from a reseller (or get at zero cost to you from Let's Encrypt) has a more solid basis behind it than the certs people were paying, say, $100 for fifteen years ago. They also got the CA/B Forum to behave transparently: you can read most of what it discusses, as well as all votes and documents, on a public website - no more meetings in (metaphorical) smoke-filled rooms.

Google's certs are OV (verified organisation but no visible sign), GitHub's and Apple's are EV (verified organisation and visible), your blog is probably DV (only proves you have a domain).

OV is EV where you also get the business name in the URL bar.

OV does not put the name in the address bar.

OV requires users to actually open the certificate (assuming browser support), hit "details", then scroll down to the Subject section. Then, if there is an O= field there, you can read the Organisation name. That field is missing in a DV cert.

The CA industry wants us to believe the average user does this regularly, in order to determine trust in a website. It's absurd.
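For the curious, here is roughly what that manual inspection amounts to, using the subject format Python's ssl module returns from getpeercert() - a tuple of RDNs, each a tuple of (field, value) pairs. (The subjects below are made up for illustration, not real certificate data.)

```python
def organisation(subject):
    """Walk a getpeercert()-style subject and return the O= value, if any."""
    for rdn in subject:
        for field, value in rdn:
            if field == "organizationName":
                return value
    return None  # DV certificates simply omit the field

# Illustrative subjects, not taken from real certificates:
ov_subject = ((("countryName", "US"),),
              (("organizationName", "Example Corp"),),
              (("commonName", "example.com"),))
dv_subject = ((("commonName", "example.com"),),)

print(organisation(ov_subject))  # Example Corp
print(organisation(dv_subject))  # None
```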

Uh... this is crazy.

Are you suggesting that they both verify the same thing (that a particular legal entity is the owner of the cert, verified by a certain CA), so convey the same security-related information to a client... but with one of them, you pay extra to a CA to get a name in the URL bar?

If the browsers aren't getting a cut of this fee to have _their_ software put someone's name in the URL bar... they're really missing out! It's browser software that's providing the "value" to the person paying the fee, but it's the CA that's collecting the fee, and none of it seems of any value to the user... what am I missing?

I hadn't realized the cert market had gotten _this_ screwy.

Roughly, yes.

I believe that the CA/Browser Forum has extensive documents on what qualifies for EV (notably, you have to be a registered business/organization for EV), while the only rule for OV is that the CA has to "implement a process" that makes sure the information is "verified", with no requirement on how stringent that process is - so in theory the "extended" part actually means something.

What the Commercial public Certificate Authorities wanted, more than ten years ago, was a change to browser UI to make it easier to sell the better (more expensive) certificates to their clients ("subscribers" in technical speak). They wanted the Subject Organisation shown prominently.

What the browsers wanted was for all the Public Certificate Authorities to get their shit together and do a better job.

They met and discussed at length how both sides could get most of what they wanted. The immediate results were twofold:

1. New versions of popular browsers (all for desktop operating systems because this is over a decade ago) added the "green bar" showing the Subject Organisation for certificates which met some agreed criteria.

2. The Commercial CAs all agreed to obey these "EV SSL Certificate Guidelines". https://cabforum.org/extended-validation/

This is a pretty good deal for the CAs: they get a new product they can sell for a premium price, and the browsers do a bunch of extra engineering work. Many feared this was creating something like a treadmill - they predicted that soon "EV certificates" would be cheapened and a new "Even More Extended Validation" would be needed, with correspondingly higher prices just to get back to reasonable trust, and the cycle would repeat forever.

But there were two unexpected consequences. Humans like socialising, so the meetings continued; the CA/Browser Forum standing meeting is now important across the industry, and it set not just these "EV SSL" guidelines but eventually the Baseline Requirements for all "SSL certificates": https://cabforum.org/baseline-requirements-documents/ Also, desktop browsers ceased to be as important because everybody now owns a mobile phone, and the quite different UI in a phone browser let the vendors reconsider what is important. An Android phone doesn't show Organisation info prominently, just the domain name.

So in the end mostly the browsers got what they wanted more than the commercial CAs. The CAB BRs have allowed them to gradually tighten things up, the treadmill runs in reverse - so that today your $0 Let's Encrypt cert is produced under more stringent conditions than were needed for the $$$ Extended Validation certificates from 2007, and the EV UI is less important though it still exists in popular desktop browsers.

Fixing the problem is arguably better for at least two of the big browser vendors than just getting a cut of the money: Mozilla and Google. Both need a trustworthy web - Mozilla as part of their charitable purpose, and Google as a direct need of their business - so for them, improving DV over ten years was much better than making a few grand off the higher-priced certificates, AND it avoided the inevitable conflict-of-interest taint.

Thanks for the background, very helpful.

Did "OV" exist at the point you are talking about, more than ten years ago, that "EV" was solidified?

If not (I literally hadn't heard of it until now), it seems like the "treadmill" has worked in a different way, filling in the market underneath with a cheaper "OV" whose security assertions are unclear in how they differ from EV's - it's just cheaper (and it doesn't get a name in the location bar). I'm not sure this is helping the security landscape.

More history, I'm afraid.

In the mid-1990s, the Netscape Corporation invents SSL so that their web browser ("Netscape Navigator") can offer secure encrypted web pages, a huge innovation that will make it possible to do things like sell stuff on the Internet.

They find out that basically there's a problem here about who the hell you're communicating with securely, and one obvious option is that Netscape sets itself up to make this decision. And the problem is that obviously competitors won't accept that, and so either the Web gets balkanized or nothing ends up secured. Also Netscape would need a whole new division to bootstrap this whole identifying web sites and proving their identity problem.

However, the X.500 directory system already has an associated system of certificates X.509, and some fairly serious-sounding companies are minting such certificates, so Netscape picks some of those (names you're familiar with today like Verisign and Thawte) to offer certificates for their new Secure Sockets Layer encrypted web.

The X.500 system envisions a single global directory hierarchy, countries are at the top (e.g. C=GB - the United Kingdom of Great Britain and Northern Ireland) and under those are States, then Localities, and Organizations, and Organizations have Organizational Units under them, one vast all-encompassing tree, envisioned by technologists but never actually realised.

So, SSL uses X.509 certificates, and those have X.500 directory names for their subjects, but no such directory really exists, and so the Certificate Authorities write whatever seems roughly correct into the certificate, or they check what an applicant wrote, and if it seems OK they just sign that.

At first then, the certificates are what we'd now call OV because there is no other kind of certificate.

But there's pressure on the Certificate Authorities to drive more revenue, and making more certs at a lower price seems easier than persuading everybody to buy a $500 cert. So they invent "Domain Validated" certificates. You write the Fully Qualified Domain Name into the X.509 "Common Name" human-readable field, and then you write "Domain Validated" in fields like Organizational Unit, and it looks pretty much OK in Netscape Navigator or the shiny new Microsoft Internet Explorer. But instead of needing to make international phone calls and own paper business directories for foreign countries, now you can just send an email to something like "certificate-officer@example.com" and call it a job done.

And it kinda, sorta works, and there are other more pressing problems like making dynamic HTML work, so the web browsers mostly just let the Certificate Authorities carry on doing whatever seems best to them... for a while.

Other way around.

> but the web sandbox is probably quite a bit stronger than the iOS one.

I really don't think that's true, purely going by the number of iOS jailbreaks that have been published recently vs. the number of browser exploits.

Security is part of the reason Apple charges like it does, but it is only part and I'm not sure the distinction between the Chrome and iOS sandbox is much of a factor.

Nearly every platform has something like EV - anyone can make a username or domain name that sounds official, so some mechanism is needed to tie it to a real-world identity.

Email/GPG has services like keybase, Instagram has verified accounts, WhatsApp does too. Even bloggers who dislike checking organisations for certs (EV) do so from verified accounts - though Twitter mistakenly tried to tie it to both noteworthiness and ethics, which they're now trying to fix.

Of course there is. If you don't care who you're talking to, it doesn't matter if your message is encrypted. The anonymous mystery person who decrypts your HTTPS connection might be the same person who would have MITM'd your HTTP connection.

The entire concept of encryption presupposes that you care who you're talking to. If you don't, then why not just broadcast your message in the clear?

It's FUD to say encryption doesn't matter without authentication - unencrypted traffic is eavesdroppable by default and exposes you much more broadly than being exposed to just one potential bad actor.

Of course using authentication is vastly superior, but lacking it doesn't render encryption useless.

Without authentication, a man in the middle is indistinguishable from your intended target, so you are exposed to anyone on the network path.

Let's be clear: we're not talking about self-signed certs here. An attacker would still need to prove ownership of the domain to get a non-EV CA-signed certificate, which makes a MITM attack highly challenging, to the point of being downright impractical to accomplish successfully. It's simply not a large enough threat level* to worry about, since it would require either DNS spoofing (in which case why bother with a MITM anyway - since you now own the traffic, you can just steal people's cookies and log in directly), or access to the domain owner's email (in which case gaining access to their infrastructure becomes significantly easier, depending on where it is hosted), or attacking the client's PC to install your own CA certificate (in which case you might as well just install a RAT or keylogger and capture inputs for all websites, saving yourself the trouble of MITMing one specific domain).

* I mean, if your business is online banking then it's a little different. But for 99.9% of websites out there, having EV isn't necessary.

Anyone in the network path and willing to expend the computational power to decrypt and re-encrypt your traffic. Unauthenticated encryption still dramatically increases the cost of mass surveillance.

Browser makers must not agree with you, because self-signed certificates get huge warnings, but HTTP connections get at most a little "Not Secure" notice in the address bar.

If people are on unencrypted wifi at a coffee shop anybody can passively get your credentials or see what you post.

A MITM attack requires more work to pull off.

You might not care who you're talking to but you do probably care that you're still talking to the same/single anonymous person.

> stops pretending that paying money to a CA means that you're a morally upright website.

The rest of this is correct, but nobody is asserting EV means you're good - just that it means you are whoever you're verified to be.

I.e. your cert has an Organisation field of 'Verified Org Inc' rather than a blank item like a DV cert has.

DigiCert still very much supports EV, they just don't like how CASC is planning to do silly things like require you to be in business 3 years before getting verified.

How can they decide what certificate gets the padlock? It's really a browser decision, isn't it?

Yes - three of the browsers are commercial products (albeit at a zero-dollar price): Google's Chrome, Apple's Safari, and Microsoft's IE/Edge - so it's ultimately a commercial decision for those companies.

The Mozilla Foundation on the other hand takes these decisions in public, m.d.s.policy is the newsgroup for security policy decisions, which ultimately mostly means stuff about SSL/TLS and these certificates.

I'd be curious the real reasons for this, because the reasons listed in this press release sound like marketing-speak.

Exactly what I was thinking as I read it.

Let's Encrypt is destroying their business model; they are trying to find new sources of income.

I didn't look in the last few months, but most definitely when I last looked, the reality was that Let's Encrypt and the strongly related "HTTPS Everywhere" movement actually drove growth for the _entire CA industry_.

This is undoubtedly at least in part due to a halo effect. If Alice and Bob have $0 Let's Encrypt certs for their blogs about, respectively, an obscure species of tree frog and restoring muscle cars, when Carol asks them if her new business "needs" an SSL certificate they are more likely to say "Yes" rather than "Not really" or "I don't know what that is". Even if most Carols use Let's Encrypt, any who don't are new customers for a commercial CA.

I think a lot of small customers moved from commercial CAs to Let's Encrypt. I don't know a single reason not to use Let's Encrypt now. Sure, there will be new customers, but I'm not sure about old ones, and it'll get worse as existing customers slowly learn about free Let's Encrypt and migrate to it.

I have mixed feelings about all of this. The big philosophical divide is really around what you think a cert/https should mean.

The one side (I'm summarizing here) says that HTTPS only means the data transferred over the wire is secure, and has nothing to do with authenticating that you're actually talking to the correct website. And that's correct: there are all sorts of MITM attacks, etc. that could be done, and people shouldn't automatically trust that seeing a green lock means it's safe to put their password into a website.

The other side says: nothing is 100% secure and security is all about defense in depth and these EV certs can be really helpful. Consider the Washington Post:

There is a whole cottage industry of unscrupulous advertisers who make their living off of scraping the look and feel of websites like the WashingtonPost, setting up a URL like washingtonpostnews.com and then making up a fake story about how the Rock was arrested because he had some special muscle growing formula on him that oh look they have a link to buy for only $15/mo.

It's quite unlikely that an EV cert would be issued for washingtonpostnews.com and it doesn't seem crazy to think that there is some value in having the EV cert on the proper www.washingtonpost.com website.

They'll also point out that even DV cert issuance pre-Let's Encrypt was a mixed bag of good and bad. Some CAs wouldn't issue you a cert if it contained certain habitually abused trademark names; you'd have to do more than just have an email address or a DNS entry for it. This applied to terms like BMW, Tesla, Facebook, etc., and the practice legitimately tripped up (but likely didn't stop) a metric ton of phishing sites.

I think Let's Encrypt is awesome and makes a ton of sense and probably has made the internet safer in general. But the general response to things like Let's Encrypt issuing upwards of 15,000 different certs for domains containing "paypal" [1] is that you should rely on sending your browsing information to Google, Microsoft, etc. so they can tell you "Warning: this is a possible phishing site" [2], which worryingly means we've replaced multiple CAs (which we thought were too centralized) with an even smaller number of browser vendors (which are even more centralized).

1 - https://www.bleepingcomputer.com/news/security/14-766-lets-e...

2 - https://www.google.com/safebrowsing/static/faq.html

> It's quite unlikely that an EV cert would be issued for washingtonpostnews.com

I think the https://stripe.ian.sh story argues otherwise - it would be pretty easy to start an LLC named "Washington Post News" in some other state and have the government agencies be fine with that. (In fact, for the specific case of the Washington Post, half of all US states have a city or town named "Washington.")

Also the more practical problem with the Washington Post is the existence of the Washington Times, established in 1982 by Sun Myung Moon well before the web or SSL.

That's a flaw (specific to the US, where the same name can be registered in different states) but DigiCert (the subject of the article) is proposing remedies to fix it.

This is one example, sure, but EV does, in general, work. And the reason is not that it's an ironclad, undefeatable process for proving you are legitimately a major trustworthy party: it's a frustrating pile of hoops to jump through. The arcane and archaic nature of getting an EV cert gates out malicious actors pretty effectively.

If you go through the whole process of getting an EV cert (often involving scanned/faxed documents and phone calls) and then do something malicious with it, you're going to end up burning it: getting it revoked, generally (though the nightmare of revocation still being effectively broken comes into play), or at least getting the domain blocked by web filters and marked unsafe on the Safe Browsing list, setting you back to square one. All an EV vendor has to do for this to work is remember not to issue an EV cert again to anyone who uses the same credentials or proofs, whether or not they verified them with other authorities.

Whereas a domain-validated cert can, of course, be automated (as can domain purchasing), meaning DV certs can scale. As with spam, malware, etc., you just publish it in volume, keep changing addresses, content, etc. to evade automated filtering, and press on.

Meanwhile, if someone gets a Washington Post News EV, and starts impersonating WaPo, they only have one target to shut down a malicious actor.

I could give non-CA-related examples, but I've come to the belief recently that security and trust is often a matter of creating solutions which don't scale.

https://stripe.ian.sh addresses this objection:

> First, from incorporation to issuance of the EV certificate, I spent less than an hour of my time and about $177. $100 of this was to incorporate the company, and $77 was for the certificate. It took about 48 hours from incorporation to the issuance of the certificate.

> The primary point raised by advocates of extended validation is that obtaining EV certificates would leave behind a significant paper trail of the bad actor's identity. However, there is minimal individual identity verification in the process. Dun & Bradstreet is the only entity who attempted to verify my identity, and did so with a few trivial identity verification questions. Purchasing identities with answers to common verification questions is neither hard nor expensive.

> Otherwise, there was no attempt at identity verification from the state of Kentucky or the registered agent I used in the process. This is typical of company formation in the United States.

Given the use of a registered agent, and given that registered agents are a pretty common thing, I don't think that anything is actually being burned here, unless you want to say that any registered agent who incorporates a phishing company gets all their other companies banned. Which, like the CASC's "You can't have EV if you haven't existed for 18 months" suggestion, will technically work at the cost of making EV useless.

Also, a lot of attacks don't care that they're burning credentials anyway. You don't need to spoof MyEtherWallet more than once to walk off with hundreds of thousands in Ethereum. You don't need to spoof the Washington Post more than once to spread fake news 48 hours before a presidential election.

Your first quote proves my point: $177 times... how many malicious attempts? That isn't a cost that scales, because for a multitude of malicious activities online, the number of attempts, at $177 a try, would not be worth it. You are using examples of singular high-profile attempts, such as "I incorporated a company for this demo" or "I am a state actor trying to derail a presidential election", while failing to recognize that EV would likely protect users from 99% of attacks by malicious actors spoofing domains that could trivially be domain-validated otherwise.

I find this a particular trend in security discussions, particularly here on HN: focus on the most sophisticated and capable attack possible, while ignoring that tens of millions of users are afflicted daily by pretty rudimentary, more or less automated attacks. If EV were to succeed solely in ensuring an actual human had to go through the registration process, it would almost certainly be worth it.

Additionally, as a point of note, EV certificates could be made vastly more effective by addressing your concerns directly: raise the price of EV certs significantly (which CAs would be more than happy to do), require going to one of the upper tiers of the CA system for them (say, Comodo itself rather than some Trustico-style reseller) to reduce the number of parties that can issue them as much as possible, and raise the mutual requirements between those parties for issuance.

Our senses of economic scale must be totally off. I'd argue that an attack that costs $177 a try is entirely scalable - presumably you gain more than $177 in value from a single spoofed domain, right?

I do agree that if $177 is too high a cost, then the problem is solvable by just setting a price floor on certificates (modulo CA/Browser Forum antitrust concerns). I do not think that solution is likely to work, except at a price point at which EV becomes unusably expensive for legitimate users.

EV only works if we are reasonably confident that users will refuse to enter credentials into a site without EV. As EV stands today - with EV costing less than $177 for a legitimate user (because they've already incorporated) - that isn't happening. Making EV harder to acquire doesn't actually solve phishing. That's my objection. Sure, you can make it harder to get an EV cert for a phishing site, but, so what?

(A better solution that actually works against rudimentary attacks is to just give everyone a U2F security key.)

I don't agree EV should be the bar for "entering credentials". I don't currently consider it a requirement that I only enter credentials on HTTPS sites either, insofar as "credentials" is a very broad term. (I find it annoying that simple password boxes on low-security sites now pester me with a dropdown warning in Firefox. I know they're HTTP sites, but I don't care.)

As an example I do not care about: My HN account. I would not care if it was intercepted and "hacked", because the damage it can do to me personally is minimal. It would probably annoy the mods somewhat, but the handful of spam posts would get cleaned up, I'd have to reach out to the mods to get unbanned, and life would move on. I don't really care that HN has a cert at all, I certainly don't think HN needs an EV cert, I have no real concern about either my password being intercepted, or me being phished into believing a fake site is HN.

If we move DV certificates to be the bare minimum a site is expected to have, then EV should be the bar for important credentials. That is, your Gmail account, your Bank of America account, etc.: Fortune 500-level companies which could afford, say, a $5,000-or-more annual investment. The higher cost would also justify higher requirements to get one; at that price, they could literally fly someone out to your office to check you out directly.

Whereas people have mistakenly equated the lock icon with "legitimate website", an EV cert should ideally be the sort of thing household-name companies have, where someone trustworthy has directly verified who actually holds the certificate.

I don't think placing the burden on my grandma to have a U2F security key is reasonable, but we should have a process so that google.com is clearly marked as being owned by Google, LLC, and that it is, in fact, the real google.com.

I support DV as the bare minimum for sites, and agree that sites with specific concerns should have something stronger.

I don't think that EV for "important credentials" is likely to be a workable option or a good one for society for the following reasons:

- Simply displaying EV, rather than using it as part of some automated should-the-browser-pass-credentials flow, is, I believe, unlikely to deter phishing. (If you disagree, let's find studies, which probably exist, and I'll concede the point if they show it works. But given that Verified by Visa uses this site, with this EV cert, https://www.mycardsecure.com/ and expects you to enter personal info there, I'm not optimistic.)

- I do not see how a web browser can determine whose credentials are important to whom. I would expect that there are people on HN who do care deeply about their account not being compromised (e.g., founders and VCs).

- If we simply state that the only people with EV certs are current Fortune 500 companies and current household names, that seems like a bad political decision for the web - first, it is political, and second, it seems like a position that's pretty strongly at odds with the web's political values, insofar as they exist: the web has long been a place for upstarts to compete on an equal footing with companies that have been around for years.

- There are solutions other than EV for these problems, like U2F and also password managers. The fact that the MyEtherWallet BGP attackers were able to steal user login credentials without even getting a valid SSL cert makes me not optimistic about "make certs harder to get" being the answer - we should stop the mechanism by which those credentials were sent in the first place.

I do think placing the burden on your grandma to have a U2F security key is entirely reasonable - she has a key to get into her house, right? A fob to get into her car? A credit card with an EMV chip? I think your grandma is perfectly capable of plugging something into a USB port.
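For what it's worth, the reason U2F resists phishing even when the user is fooled is that the browser, not the user, supplies the origin that gets signed. A conceptual sketch of that origin binding (using HMAC as a stand-in for the real per-credential signatures; this is not the actual U2F wire protocol, and the domains are made up):

```python
import hmac
import hashlib

def sign(key, challenge, origin):
    # The authenticator signs over data that includes the browser-reported
    # origin, so the user never gets a chance to "type the wrong site".
    return hmac.new(key, challenge + origin.encode(), hashlib.sha256).hexdigest()

key = b"per-credential secret"            # shared with the real relying party
challenge = b"random-server-challenge"

legit = sign(key, challenge, "https://bank.example")        # user on the real site
phished = sign(key, challenge, "https://bank-example.com")  # user on a lookalike

# The relying party recomputes the expected assertion for its own origin;
# the phished assertion fails verification even though the user "logged in".
expected = sign(key, challenge, "https://bank.example")
print(hmac.compare_digest(legit, expected))    # True
print(hmac.compare_digest(phished, expected))  # False
```

The point is that the lookalike site cannot produce a valid assertion for the real origin, no matter what the user does.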

How does a U2F key even solve the problem of knowing that a website associates with a real identity? It may stop a phishing attack, but where do you get the initial trust to begin with?

I think the issue is that we need to know that google.com can be shown to clearly be owned by Google, LLC, and not GoogleCo in a small African nation. (And also, presumably, that google.com is actually google.com, and not some weird substitution like https://www.xn--80ak6aa92e.com/, which shows as apple.com on Firefox.) That's going to be the EV system or something like it, and it's inherently impossible to keep that in line with "the web's political values", if you describe those values as requiring that "anyone can get one". There will always be some justification that someone, somewhere, can find to call themselves something.
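You can see what that homograph URL really is by decoding its punycode label; a quick Python sketch using the stdlib `punycode` codec:

```python
# The label from https://www.xn--80ak6aa92e.com/ above. Stripping the "xn--"
# ACE prefix and decoding reveals it is not the ASCII string "apple" at all,
# but a string of visually confusable non-ASCII (Cyrillic) letters.
label = "xn--80ak6aa92e"
decoded = label[len("xn--"):].encode("ascii").decode("punycode")

print(decoded == "apple")                  # False
print(any(ord(c) > 127 for c in decoded))  # True: non-ASCII lookalikes
```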

> I think the issue is that we need to know that google.com can be shown to clearly be owned by Google, LLC, and not GoogleCo in a small African nation.

Google, LLC? Why not Google, Inc.? Wait, is it Alphabet, Inc.? Why do I trust companies from Palo Alto (or Delaware?) more than I trust companies from small African nations?

I don't trust google.com because they're run by some Delaware corporation named "Alphabet." I trust them because I have come to trust G O O G L E dot C O M, generally by being told by other people that G O O G L E dot C O M is a good site (or perhaps in the case of Google, because it's baked into my web browser, which I've figured out how to bootstrap trust for). I trust - and want to enter credentials into - gmail.com because they're the owners of the website gmail.com, and when I give out my email address to people, I say "gmail.com", not "an email service run by a subsidiary of an American corporation named Alphabet."

Anyone can call themselves anything - that is and has long been part of the web's values. The other one is that, once you get a name, you get to keep that name as long as you want. Dropbox didn't originally have dropbox.com, but they still have getdropbox.com, the name I saw on marketing and on referrals from friends about 10 years ago. I don't care if getdropbox.com was started by some of my college classmates or by people from a small African nation, as long as I can type the letters "getdropbox.com" into my browser, I've successfully bootstrapped trust.

If you personally trust companies because they're from the United States and not from small African nations, you do you, but I see no reason that the web should design itself around your politics, when the web already has a perfectly good system for bootstrapping trust - typeable domain names.

It is perhaps ironic that your primary objection to my line of thinking is that it potentially gates out smaller players and startups, but then you mention trusting google.com because it's baked into your browser, which of course gates out everyone but one centralized monopoly deeming itself trustworthy. (Note: I have removed a litany of malicious Chromium-based browser installs from people's PCs in the last year; I am not sure convincing the average user to trust their browser is a good plan.)

And what about https://www.xn--80ak6aa92e.com/? How is the average user supposed to keep up with the latest and greatest ways to pretend to be someone else's URL? I mean, goog1e.com is likely to still be a reasonably effective deception to the average user. Should nobody ever click a link from anywhere, and solely find websites by memorizing the URLs and typing them in?

As Troy Hunt has been at pains to point out, some of the largest, most important and most spoofed/phished sites on the internet use only DV certs.

Google, Amazon, Facebook, and eBay are all DV. If they don't see value in EV/OV, then who on earth is it meant for?

Google, Amazon, Facebook and eBay all use OV certs, but you're probably right in that they don't see value in EV.

Like any security/UX thing, there are issues all the way down. One of them is that you can't get an EV wildcard (because they want EV certs to be "more secure" in certain respects), so they're harder to manage, in particular in big heterogeneous environments.

OTOH: your bank, Apple, GitHub, npm, most fintech companies do.

And yes your bank account is a higher value target than your gmail.

It's all very well that my bank does, but it displays "National Savings and Invest..." which isn't its full name, and the same truncation would be shown for any number of businesses whose names start the same way.

My mother's bank says "Lloyd's Banking Group PLC" which isn't its name, you'd have to know that it's owned by another group of banks to decode that.

Another of my accounts says "first direct (HSBC Bank plc)". It so happens I know that First Direct is owned by HSBC, but if I didn't shouldn't that further concern me?

This all ends up as extra cognitive load for humans, and it barely contributes to helping with the problem, because this information is only (can only be) displayed after all the backend stuff has finished happening, so it's too late to tell me that wasn't really my bank after I tell them my password...

It's a separate topic, but yes, there's definitely a need to improve how names map to human understanding. Most people know 'Coke' or 'Coca-Cola' better than 'The Coca-Cola Company' or 'CCA Amatil' or cocacola.com.im. There's discussion (mainly led by DigiCert) now about making EV use trademarks.

Getting access to someone's personal email account is often sufficient to gain access to everything else they use, including banks and other financial services, at least for long enough to clean out some money.

In some cases it wouldn't be: most banks use 2FA, so compromising email alone wouldn't be sufficient. But a phishing site can do a pass-through attack and get both from the user.

But the point still stands: getting access to someone's bank account is a far more direct way of getting at their money than their Gmail.

What percentage of users do you think routinely check for the presence of an EV or OV certificate on the sites they're browsing?

I think it's much, much higher in the aging population.

The only place where the presence of an OV certificate is exposed is the certificate details dialog -- the same dialog that displays arcane technical details like the certificate's public key and signature. Non-technical users never look at this.

The certificate system is broken. Technical people should not be promoting centralization, raising barriers to publishing, and empowering middlemen. That is not a technical solution, but behavior that favours business and vested interests.

There is now a clear pattern of attempts to raise barriers and cede more control to actors in the middle.

First it was de-legitimizing HTTP, now it's standard certs, then it will be the 3-year requirement. What's stopping this from degenerating into an elaborate 'approval process' from a central authority, which you pay dearly for, all in the name of security?

This kind of control gives some actors the power, and the fig leaf of an 'arbitrary process', to delegitimize and silence others.
