Chrome's Plan to Distrust Symantec Certificates (googleblog.com)
460 points by JoshTriplett 72 days ago | hide | past | web | 198 comments | favorite



I wish browser vendors would let me choose a trusted entity and make it simple for me to trust only CAs that my trusted entity supports, or the intersection of what multiple trusted entities endorse.

The incentive for a mass-market browser is to trust pretty much everything, but I'd prefer to use a browser that is a bit more paranoid.

If a website can't load properly because I don't trust one or more of the CAs, I might want to temporarily "live dangerously" but would be a bit more cautious about typing data into a form, etc.

Browser vendors should not try to create a one-size-fits-all list of trusted CAs, since there is obviously a very different level of trust deserved by various CAs based on the track record of each one.

If I were a state actor intelligence agency, compromising CAs would be toward the top of my list because of the amazing opportunity for man-in-the-middle attacks.

Distrusting Symantec certificates is a great step in the right direction.


> I wish browser vendors would let me choose a trusted entity and make it simple for me to trust only CAs that my trusted entity supports, or the intersection of what multiple trusted entities endorse.

This is an idea that I hear in variations from time to time, yet I think it's utterly wrong and goes against everything we know about IT security UI.

The reason why HTTPS works at scale and is - with all its weaknesses - a crypto success story is that it works automatically, unlike things like PGP that ultimately expect the user to understand complex concepts like public-key cryptography and webs of trust.

What you're trying to do is move HTTPS away from "it just works" to "user has to understand what a CA and a PKI is".

Fortunately, none of the relevant actors or browser vendors is moving in that direction; the opposite is happening. HTTPS is becoming the default, it works automatically, and the security improvements being deployed (mainly CT) are systemic improvements that strengthen the whole system.


The parent is not suggesting a deviation from the "it just works" model. They are suggesting a convenient way for expert or "paranoid" users to be able to make changes to whom they trust as they see fit - this does not affect average users at all, but may provide significant benefits to an important minority of users.


We can add or remove root certificates at the OS level; it's just deliberately not easy.


This makes me think that the package containing the trusted certificate list ought to be separated into a standalone repository, so that it is easy to change (like how you subscribe to an adblock list).


The implication being that if you don't find it easy enough to do, then you probably aren't expert enough to be doing it.


> it's just deliberately not easy

And thus crippled and useless against state attackers.

I fail to see the reason why my browser refuses to let me "pin" a CA only for certain sites for example.


> crypto success story

Suppose 5% of CAs are compromised. Is that a success? It is if you compare it to everyone using self-signed certs, but it is not if you consider that there are likely broad vulnerabilities that can be silently exploited by some groups/nations.

We don't hear much about man in the middle attacks because we have no reason to be aware of them.

> it just works

The point of my remark is not to suggest that a list of trusted CAs compiled by someone like Bruce Schneier would result in a broken web. If it would, then it's hard to argue that the system is not already broken.

The point is to allow experts to establish authority on the basis of careful (possibly paranoid) stewardship of a list of trusted CAs. Then when an attack is revealed experts who whitelisted that CA lose a bit of credibility, and those who blacklisted it gain some.

As it stands, firms that ship a default list of trusted CAs have an incentive to err on the side of whitelisting, and then to claim "oops, we had no idea that CA x was compromised..."

There are clues about the relative trustworthiness of CAs, some of which are simply the governments that have jurisdiction to demand private keys, etc.


I certainly agree that something that just works is by far the most important priority, and I do worry about anything that could muddy that message. On the other hand, what the parent proposes could be useful for users with heightened security concerns even if they are only a tiny minority, similarly to how Tor is useful. (In fact, using this setting would be less conspicuous than using Tor, and also less suspicious, since there is no way of using it for criminal activity.)


I agree with Hanno here. There are great tools to make the CAs behave, or at least make their mistakes easy to spot: Name Constraints, DNS CAA records, Certificate Transparency and, if you must, HPKP and much more.


Not really. To be honest, the existing UX for trust relationships is far from optimal. Nobody needs to understand what CAs and PKIs are. I actually think the proper UI could make it pretty smooth and understandable by the layperson. All a user needs to be asked is "Whom would you trust to certify that the websites you visit are genuinely those they claim to be?" and be presented with a few reasonable options, such as perhaps (1) Default (recommended), (2) Certification companies (Verisign, Thawte, etc.), (3) Government agencies (e.g. CIA, NSA, etc.), (4) Other/Custom (advanced).


I think that implies there's a choice, that when you access, say, ycombinator.com, you can choose to have that site certified by one or many agencies. In reality that choice only exists for the site owner, so if you choose a subset of CAs, you'll just have HTTPS errors on a bunch of sites.

If Convergence[1] ever becomes a thing, then I think your proposal makes sense, otherwise not really.

[1] https://en.wikipedia.org/wiki/Convergence_(SSL)


(Edit: I edited this example here to make it even simpler.)

Again -- HTTPS errors are not exactly the epitome of the best UX. You could pop up a dialog saying "We can't seem to verify this site is really x.com. It might be someone else impersonating x.com. What would you like to do? [Ask Google/Microsoft] [Continue] [Cancel]" or something like that. And asking Google/Microsoft would be implemented by checking that the certificate is what they expect (so that we know it's not a MITM), whose public key they hopefully already know and have pinned.

Note: I just came up with these ideas on the spot. I believe they're much better than the current system but in no way am I suggesting they have no room for further improvement.
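A sketch of the pin-comparison part of that idea, in Python. This is hypothetical, not an existing browser API -- and note that HPKP-style pins actually hash the SPKI (the public key structure) rather than the whole certificate, which I've simplified here:

```python
import base64
import hashlib


def pin_of(cert_der: bytes) -> str:
    """Base64 SHA-256 of the certificate bytes.

    Simplification: real HPKP pins hash only the SubjectPublicKeyInfo,
    not the whole cert; hashing the full DER is enough for a sketch.
    """
    return base64.b64encode(hashlib.sha256(cert_der).digest()).decode()


def matches_pin(cert_der: bytes, known_pins: set) -> bool:
    """True if the cert a site presented matches one we already trust."""
    return pin_of(cert_der) in known_pins
```

The "ask Google/Microsoft" step would then amount to comparing the pin you computed locally against the pin the third party reports for the same site.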


Have you ever seen this picture? https://www.reddit.com/r/funny/comments/42xxdh/what_computer...

Because that's exactly what you're proposing.


Not that particular one, though I have seen similar, but no, that is not what I was proposing. Where is the "technical crap" in what I just suggested? Which part of it was hard to understand or make a decision on?


Sorry, I know it was well-meaning, but it's riddled with "technical crap" and assumed knowledge which make it hard to decide upon. Not least the pretty foreign uses of "trust" and "certification" for non-technical people. E.g.:

1. "Certified by" — what does this mean? Is like a rating thing? A tripadvisor thing for the web?

2. "X" who are they? Do they want my money? Do I need a new one?

3. "Current trusted certifier" oh god, did I choose them? Are they bad now? Did I not pay them money? Has it been hacked? Didn't it do a review of BigWebsite yet maybe? I could just wait I guess.

4. There are three buttons that say "yes"! If I hit "always" what if it was wrong? And "if Google verifies" well that's obviously a scam because my web is Google so it would already know. This is all some kind of scam. I'm out of here.


I edited it. Is this better?

I honestly don't see why the concept of "This site claims to be WorldBank.com, but we can't verify this. Whom do you trust enough to ask?" requires technical knowledge of any kind. People already understand this is why they have ID cards and background checks -- "I don't know if I can trust this guy; do you trust X to run a background check (or ask him to show an ID card issued by Y)?" is the same exact concept. Why do you think ordinary people fundamentally cannot understand this on the computer when they already understand it (and much more difficult things) just fine in real life?

Also, don't forget that none of this applies to when you choose the default settings. This is only if you choose to distrust some organizations.


Most people, funnily enough, don't run background checks on other people, and don't have a well-formed mental model of why they're necessary and what the security risks in using them are.


> Most people, funnily enough, don't run background checks on other people

They don't do it because they're not employers/landlords/etc. and hence the need never arises, not because they somehow have a complete lack of understanding of the concept.

> and don't have a well-formed mental model of why they're necessary and what the security risks in using them are

They understand why they need to keep their bank passwords safe, and hence they can understand that they shouldn't enter it on someone else's website. They understand what a fake ID or passport is, and they understand why they shouldn't give their ID numbers to random strangers.

The specific examples I used are not the point. I explicitly said they're not perfect; they're just there to get a bigger point across. Look at the larger point. The point is people are already dealing with much more complicated stuff on a daily basis. They understand the concept of lying. They understand that identity theft can wreak havoc on their lives. They understand how to prevent it. So they can handle the concept of what it means to enter their credentials on the wrong site. It's not technical and it's not complicated, and it's not necessary for them to understand how exactly doing the wrong thing might lead to an undesirable result.

If you'd like to convince me that the average person can't understand this concept, at least support your claim by providing a link to a study or something that isn't based on a cherry-picked strawman version of what I've suggested? Surely someone's already looked into this kind of an approach and what you're saying isn't just pure speculation?


> They understand why they need to keep their bank passwords safe, and hence they can understand that they shouldn't enter it on someone else's website.

Not in my experience - people reuse passwords regularly, and are exceptionally vulnerable to phishing scams, as anybody working in corporate IT can tell you. This is why people actually working in security are very excited about U2F - a U2F-generated token is tied to a specific domain, so password reuse and phishing are both no longer as serious a problem, since a login to scammer.com no longer works as a login to mybank.com.

On top of that, people give their bank passwords to e.g. mint.com on a regular basis, when it's specifically asking for their bank password and they have no reason to trust that website.

> they understand why they shouldn't give their ID numbers to random strangers

Also not in my experience, and the basis of many scams.

People at large are terrible at security. We've been moving towards "automatic security" for a while for a reason. And that's before you get to the phenomena where critical thinking goes out the window as soon as something is on a computer instead of "in real life".

> Surely someone's already looked into this kind of an approach and what you're saying isn't just pure speculation?

Moxie Marlinspike's Convergence would be the closest thing, iirc. Whenever you access example.com, you ask a handful of user-selected notary servers to access example.com and tell you its TLS cert. If all of them match, you're not being MITMed. You don't have to trust any individual notary server, because it's all of them put together that provide the trust.

Of course, trying to figure out a good UX for "my Government and Google say it's fine, but Estonia and Facebook say it's not" in such a way that users aren't going to be severely inconvenienced by a malfunctioning/MITMed notary server but also aren't going to click on "I have no idea what this is just let me see the website" on a MITMed page is hard - probably impossible. Moxie has since given up on it.
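For illustration, the core notary check -- "did the notaries see the same cert I did?" -- can be sketched with the Python standard library. This is a hand-rolled toy, not Convergence's actual protocol; the function names and the quorum rule are made up:

```python
import hashlib
import socket
import ssl


def cert_fingerprint_der(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate in DER form."""
    return hashlib.sha256(der_bytes).hexdigest()


def fetch_leaf_der(host: str, port: int = 443, timeout: float = 5.0) -> bytes:
    """Fetch the leaf certificate a host presents, as DER bytes."""
    ctx = ssl.create_default_context()
    # We only want to *see* the cert, not judge it yet, so skip validation.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)


def notaries_agree(local_fp: str, notary_fps: list, quorum: int) -> bool:
    """Convergence-style check: enough notaries saw the same cert we did."""
    return sum(fp == local_fp for fp in notary_fps) >= quorum
```

The hard part Moxie ran into isn't this comparison; it's exactly the UX question above -- what to tell the user when the notaries disagree.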


The issue is not whether they are capable of understanding it, it's that they don't currently, and browsers have to be designed with that fact in mind. Showing them a message like you suggested only works if they already understood it.


> The issue is not whether they are capable of understanding it, it's that they don't currently, and browsers have to be designed with that fact in mind. Showing them a message like you suggested only works if they already understood it.

But I'm only saying the current UX could become much better, not that they have to switch to my suggested UI by tomorrow. Of course spending some time on user education and transitioning gradually would be a great idea. I was never against that; I'm all for it. So why implicitly rule this out as an impossibility?


> I wish browser vendors would let me choose a trusted entity and make it simple for me to trust only CAs that my trusted entity supports, or the intersection of what multiple trusted entities endorse.

But you do; the trusted entity is your browser manufacturer. Given the massive amount of attack surface in the browser, I'm already trusting my browser to make sound security decisions all the time. And I've seen really good work from both the Firefox and Chrome teams on asking hard questions of CAs; I can't think of a group I'd trust to make better decisions. (Arguably there's the EFF, but I definitely trust the EFF less as a technical body than Mozilla or Google, for the simple reason that the EFF is an advocacy body, not a technical body.)

> The incentive for a mass-market browser ... If I were a state actor intelligence agency

But protecting against intelligence agencies is most effective and important in a mass-market browser, not in a nerd browser. There isn't any sense in a society where nerds as a group have more protection against government dragnet surveillance than everyone else.

If you're a specific nerd working to save the world in a specific way and you're worried about targeted attacks against you personally, you have a threat model that can't be adequately responded to by trusting a subset of CAs. You need to do your important work on a machine that's locked down far more deeply than just by distrusting a few CAs (e.g., you probably want to disable 80% of the web platform to reduce attack surface), and ideally you'd do your less-sensitive work, e.g., reading the news and chatting with friends and listening to music, on a browser that looks as normal as possible to avoid standing out.


1. Uninstall the CA certificates the browser has pre-installed

2. Create and install your own CA certificate

3. Download or create the desired server certificates, sign them with your own CA, and install them

I have not tried 1, but I regularly do 2 and 3. (Usually for monitoring outgoing encrypted traffic.)
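Steps 2 and 3 can be done with stock openssl. A rough sketch -- file names and subjects here are made up, and a real setup would also set extensions like basicConstraints:

```shell
# 2. Create your own CA: a private key and a self-signed root cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=My Private CA" \
    -keyout ca.key -out ca.crt

# 3. Create a server key + CSR, then sign the CSR with your CA.
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=example.internal" \
    -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 90 -out server.crt

# Sanity check: the server cert should verify against your CA.
openssl verify -CAfile ca.crt server.crt
```

Installing ca.crt into the browser or OS trust store is the platform-specific part.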

Anyway, the idea of your comment is spot on, I think. The process whereby users blindly trust browser authors has serious flaws.

The user should be the one controlling the list of trusted CAs and servers, not a third party such as ad-supported company or organization distributing a web browser.

The more the user (cf. third parties) actively controls what endpoints she will trust, the better.

The system should encourage users to be active in managing what certificates they will trust, versus passively letting ad-supported web companies do this for them.

It is the user who has the greatest incentive to make things right in terms of her own security and privacy. Third parties may help, but they ultimately have their own interests to worry about.


If the system by default without users doing anything said common popular websites were insecure, most users would just learn to click through (no matter how many clicks it took) and generally ignore all security warnings. It's crying wolf. You would have actually decreased overall security.

(Or they'd just switch to a different browser without these problems).

I understand the intent of your suggestion, and you're not wrong in an ideal world, but mass-market security needs to take actual human psychology of non-technical users into account too.


I think if Chrome started showing alerts that the CA Reddit got its cert from was sketchy, Reddit would go looking for a new CA. (And maybe given time the world would all end up using the Alphabet CA, which would presumably always have a five star rating....)


> The user should be the one controlling the list of trusted CAs and servers, not a third party such as ad-supported company or organization distributing a web browser.

No, they really shouldn't. Security is a ridiculously complex arena, and the amount of knowledge you need to make an intelligent setup on this is considerable. For tech-heads, fine, but the vast majority of people are not tech-heads.

If you switched over to this model, we'd end up in the bad old days of Windows XP, where untrained people would advise untrained people to 'just install this package and everything works!' and half the time they'd be installing malware. The right way to do this is 'sensible defaults, that the user can adjust'.


The funniest part of this ongoing debate I see -- to me, at least -- is that the end result is, effectively, one of two options. Given the way we as a society use technology, today:

1) Piss off the like, 10,000 technical nerds who care about this, by having sensible defaults. "We should really teach people to fully manage their CA chain! It sucks I have to spend 20 minutes doing this once a year after spending 10,000 hours learning how computers work."

2) Don't piss them off, but in return, get ready for user-blaming whenever someone messes up, and for your users to be 'responsible' for themselves. "If users were EDUCATED and not so stupid, they'd be able to spend 100 hours understanding the CA system and keep themselves secure!!! Not our fault if you install certificates from Brazilian hackers"

I dunno, it's mostly an observation, even if it's something of a false dichotomy, but this always seems to be the way it goes. Technical users simply have vastly different value systems. But honestly, my go-to strategy these days is mostly "whenever most technical people try to give security advice and speak on behalf of most users, almost always: do the exact opposite thing, in order to keep the users secure".

Most of us have some variant of Stockholm Syndrome to some extent.


>The funniest part of this ongoing debate I see -- to me, at least -- is that the end result is, effectively, one of two options. Given the way we as a society use technology, today

Speaking more generally, rather than specifically about CA chains, I respectfully disagree with you there.

Imho, what usually happens in a sane world, is that we spend most of the time between these two extremes, in a pendulum as we balance one way, get burned, go back the other way, get burned again and repeat.

I'm fine with it being that way until someone releases a true groundbreaker and redefines what we are balancing.

>But honestly, my go-to strategy these days is mostly "whenever most technical people try to give security advice and speak on behalf of most users, almost always: do the exact opposite thing, in order to keep the users secure".

I think I understand what you are saying, but that comes across as you cherry-picking the bits of a technical person's advice that a layman wouldn't understand. The "exact opposite" is sorta hard to define.


It seems to me that if someone is truly an expert, they should already be able to manually manage their certificate authorities if they want to. It's just a list stored somewhere on their computer, after all, and at least 2 major browsers are open source and can be forked into a private repo for personalized development.

What I see in this thread looks more like an insidious middle: people who think they are experts, but are asking browser providers to do the development to change the entire product and provide a nice friendly UI.

This seems like an area where the old maxim applies that if you have to ask for help, maybe you're not as expert as you think you are.


And how would one know which certs to include? Seriously, I've been trying to solve that issue for years.

Discussion on Super User (Stack Exchange): https://superuser.com/questions/818065/how-to-know-which-cer...


Does SSLKEYLOGFILE include the server certs or only the session keys? Apparently, "Chrome stores SSL certificate state per host in browser history" if you can figure out how to access that. https://serverfault.com/questions/279984

In any case: scan the hosts in your web history, and follow the cert chains to the various roots.

Not sure if you're asking how, or whether someone has already implemented something like this that is easy to use.


> scan the hosts in your web history

That's a simple SQL query against the Firefox profile sqlite database. No problem.

> and follow the cert chains to the various roots.

Doesn't scale. If it can't be scripted, then it can't be done for tens of sites that I regularly visit, and hundreds more that I come across.
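For what it's worth, both halves can be scripted with the Python standard library. A rough sketch -- moz_places is Firefox's real history table, but the function names and the leaf-only issuer lookup are my own simplification; a server need not send its full chain, so walking all the way to the root takes more work than this:

```python
import socket
import sqlite3
import ssl
from urllib.parse import urlparse


def hosts_from_places(places_db: str) -> set:
    """Pull distinct hostnames out of a Firefox places.sqlite history DB."""
    con = sqlite3.connect(places_db)
    try:
        rows = con.execute("SELECT url FROM moz_places").fetchall()
    finally:
        con.close()
    return {h for (url,) in rows if (h := urlparse(url).hostname)}


def issuer_of(host: str, timeout: float = 5.0) -> str:
    """Connect to a host and report the issuer CN of the cert it serves."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            issuer = dict(rdn[0] for rdn in tls.getpeercert()["issuer"])
            return issuer.get("commonName", "?")
```

Looping issuer_of over hosts_from_places(...) gets you a first-order answer to "which CAs do the sites I visit actually use", even if it stops short of the full chain.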


Makes sense. What operating systems do you need this for?


While I do think blind trust has serious flaws, I do believe that having "trustworthy enough defaults" is good enough for the average user while making it relatively simple to overwrite those defaults would be nice.


Right now there is a big gap between "trustworthy enough defaults" and "security expert who has the judgment to determine which CAs deserve to be trusted with the most sensitive data".

It's easy enough to remove CAs for one's self, the issue is understanding the possible entanglements that might make some of them less trustworthy or more easily compromised. That requires quite a bit of expertise and judgment.


How do you know you're not being MitM'd during step 3?


For all reasonable adversary examples shy of panopticon, downloading from multiple physical sites and global proxies and comparing the same roots across downloads should be sufficient, no?
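The comparing step is trivially scriptable; a minimal sketch (function names are mine):

```python
import hashlib


def bundle_digest(data: bytes) -> str:
    """SHA-256 digest of one downloaded copy of the root bundle."""
    return hashlib.sha256(data).hexdigest()


def all_match(bundles: list) -> bool:
    """True if every downloaded copy of the root bundle is byte-identical."""
    return len({bundle_digest(b) for b in bundles}) == 1
```

You'd feed it the raw bytes fetched from each physical site or proxy; any mismatch means at least one download path is suspect.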


Ohhhh gotcha, seems legit. I thought the certs in step 3 were individual site certs, not the roots. Thought he was going for some site-level pinning or something.


Browsers are primarily in the user-fingerprinting business, and I would not be surprised if there were no incentive or desire to invest in "security and privacy"(tm) other than to facilitate that main purpose. Let me stress this once again: the main purpose of a modern browser is to enable global, cross-network user tracking so that companies like Google can collect data regardless of whether you "accept cookies", use VPNs, or sign up for one of Google's apps (which makes that "see what data Google has about you" project even more evil).


What's Apple's incentive for doing that?


As mentioned, you do have this power. But for the average user, this gives too much control, because the average user has no way to judge the trustworthiness of various CAs, if they even understood what CAs were, which they don't have the time or inclination to care about. I'd say most Hacker News readers wouldn't be able to make the correct determinations about this either.


It's also worth noting that for average users, whenever any sort of "option" is given that can potentially open a security hole, there exists some malware out there that tries to get the user to abuse that option in order to infect them. This always happens, and it is the reason so many things in Chrome are locked down and very hard to modify.


This is managed by your OS.

You are free to delete CAs from its trust store.

What Chrome is doing is: regardless of whether that cert is in your OS's store, Chrome won't trust it.

Keychain on macOS

mmc at the Windows command prompt

Debian-based Linux keeps them in /usr/share/ca-certificates


> This is a managed by your OS.

It depends on the browser -- e.g. Chrome uses the system's CA certs, but Mozilla ships with its own.


Ok well the broader point still applies, right? You can edit Firefox's CA list if you want to.


Firefox has its own certificate bundle, and so do Java applications.


Moxie proposed something like this in the past, quite forward thinking: https://en.m.wikipedia.org/wiki/Convergence_(SSL)


Being cautious about entering data on a form is a cute idea but it doesn't work in reality. The web, with JavaScript loaded from multiple origins, doesn't work that way. These origins are practically invisible to you, they have complete access to the pages they are included in and you have no way of checking their certificates.


I'm aware of this. At present, those scripts can be run from any origin using any of the many CAs the OS/browser trusts.


I don't understand the reasoning behind not being allowed to deactivate/deinstall the standard trusted root certificates.


Would you accept a browser plugin that reads only the certificate information and makes a decision to alert, or block/override the DOM rendering to protect the user?


Exactly. I've never understood why all the US / EU language versions of Chrome, Firefox, etc. include all the root certs for CAs from China, Turkey, Russia, etc.

I cannot read Mandarin or Turkish, and in the unlikely event I get forwarded to a site from a company targeting citizens from these countries, I'd prefer to just get an SSL exception instead of the 'trusted' page.


What about the huge number of English language Chinese run ecommerce sites? Why should they be discriminated against because of their native language?

That just makes the normal user who wants to buy stuff learn to ignore SSL errors.


Did you miss the part where I referred to my 'localized' version of Firefox or Chrome (i.e. US English)? For example: https://www.mozilla.org/en-US/firefox/organizations/all/

Again, I cannot read Mandarin or Turkish, and likely never will. It seems to me that a site, even if run by a Chinese or Turkish company, that caters to American customers should invest in a certificate trusted by a US CA, and similarly for every market it expects to do business in.

And likewise, I don't know why localized versions of browsers for the Chinese market would include trusted certs from Turkey or Brazil.

Seems like it's a lot easier to force multinational companies, who already have to invest in huge amounts of research and technology to deal with foreign taxes, currencies, bank accounts, etc., to add to that burden the responsibility of obtaining certs from their customers' local CAs, instead of having every single CA on Earth be trusted by all releases of browsers.


Is it unreasonable to expect a foreign ecommerce site to use a CA that its users can trust?


You're right. Americans should discard Turkish and Chinese CAs in favour of one homed in the heart of their tech capital in Silicon Valley. Clearly those foreign ones are shysters and not to be trusted like a locally-built, trustworthy business!

Oh, what's that? Symantec is an American company, headquartered in Mountain View, CA?


Meanwhile, the rest of the world probably wonders if they should trust American CA's!

I'm not sure a geographically silo'd internet is what we should be going for. I miss the days when the dream of the internet was prefiguring a borderless world.


Yes, absolutely.

I think the @grandalf idea of selectable trust list providers has some merit (although there are some security trade-offs there).

I don't think mistrust of certificates from "CAs from China, Turkey, Russia" automatically follows from that.


Wouldn't you say it would be enough to pin those certs to TLDs?


By TLD do you mean country domains?

Then absolutely not. Why should a US company not be allowed to use a Swiss-issued certificate for a .io domain name?


And for that matter -- if a CA really is untrustworthy, why should domains under any TLD be exposed to them?


One of the happiest days of my life was when a Symantec support person, very courteously, helped me remove every Symantec product from my computer.


I remember needing a util for this:

https://support.norton.com/sp/en/us/home/current/solutions/v...

was a key component of my debloating efforts back in the XP days.


Norton protect quietly reinstalled itself after I ran that tool.

Oracle snuck it onto my system in a java update and I ended up having to dig through the registry and disk to weed it out. It is malware, plain and simple.


What could the lifetime value of a Norton customer be that they went to such incredible lengths to acquire one?


Quite a lot, and it’s not just Norton. Google does the same, and pays companies to secretly include Chrome in their installers and auto-updaters, like they included the Google Toolbars before.

Here’s an interview with the VLC developers, in which they describe how Google tried to get them to secretly install Chrome with VLC and offered enormous amounts of money, but they obviously said no: https://www.youtube.com/watch?v=jWx1P93nS0c&t=48s


> This plan, arrived at after significant debate on the blink-dev forum, would allow reasonable time for a transition to new, independently-operated Managed Partner Infrastructure while Symantec modernizes and redesigns its infrastructure to adhere to industry standards.

It's funny they still mention giving Symantec an opportunity to modernize and redesign its infrastructure when Symantec simply decides to give up: http://investor.symantec.com/About/Investors/press-releases/...

Interestingly, investor.symantec.com doesn't properly support HTTPS at all (certificate host name mismatch). I guess that speaks volumes about how much Symantec itself cares about SSL/TLS and PKI.


Wise actually. This way nobody can say that they were being unfair to Symantec, and it costs them nothing. They can still decide either way.


What are some trustable providers of EV certificates? LetsEncrypt is wonderful, but if I'm a company that needs to show the company name next to the padlock, who should I be using? What's an easy way to check if a provider (for instance Gandi, who I use for my domains) is going to be culled by this?

In fact, I don't even seem to be able to find certificate information in Chrome any more - clicking on the padlock just gives me a bunch of settings and "Learn more", which is some annoying page.

Edit: It's in Developer Tools under "Security"


You will need an Extended Validation (EV) certificate to get your name in the bar. To get it to go green you "only" need a Domain Validation (DV) certificate.

In simple terms: DV proves you own your DNS domain and EV proves that your real world entity owns the DNS domain.

Lets Encrypt will only do DV, and quite right too. EV costs a fair bit more because it should require some proper checks - for example, checking Companies House in the UK for a Limited Company's details and matching them up against real people.

You can set a flag in Chrom{ium!e} that will put a link to the cert in the menu that drops down when you click on the green bit, saving you a trip to dev tools. Go to chrome://flags and search for "Show certificate link". Don't know why it isn't the default.


> Go to chrome://flags and search for "Show certificate link". Don't know why it isn't the default.

Thanks for this tip! I like to check the certificate, so this will save me some clicks. I also don't know why it's not enabled by default.


Yeah I figured that EV is out of scope for LetsEncrypt. Thanks for the tip about the flags!


https://certsimple.com

It’s incredibly fast, the guy who runs it is nice, great customer service. It just does what you need.


Thanks James! Mike from CertSimple here. HN folk can give me a shout anytime (mike@certsimple.com) or ask here - it's midnight in the UK but I'll be back up in the morn!


I believe that EV certificates cannot be wildcard certs, is this still true?


Yep, what @detaro said. Here's the specifics: https://certsimple.com/blog/wildcard-ev-certificate


Like the site, but the example seems off.

It talks about 'bankofamerica.com.fraud.ru' in the first paragraph, but then discusses how 'fraud.ph' (why did we change TLDs? This should stick to ru or the above should be changed to ph) gets a wildcard cert for '* .com.fraud.com' (we changed TLDs again, to com this time. I assume we're talking about 'fraud.ph' getting a cert for * .com.fraud.ph - or .. we stick with ru)


Good point - I was experiencing a bunch of fraud from the Philippines and thought the Russian example was too cliched. I'll tweak the example to be more consistent.


yes.


Generally a quick google answers the question as well. Gandi lists their partnership with Comodo on their EV certificate ("SSL business") page [1] as well as a couple of places on their wiki.

So yes, if you're using Gandi (and I still do myself for things LetsEncrypt cannot be used for, namely things where frequent rotation requires time-consuming manual intervention) you are safe from Symantec's distrust.

[1] https://v4.gandi.net/ssl/business


I'm unfamiliar with the problem space. What are some of the use-cases for EV certificates?

It looks like lots of big name sites don't use them. Why do some companies feel they need it, while letting internet giants such as Facebook, Google, Microsoft, etc. get away without it?


EV shows (in the address bar and in the cert details) the legal entity that the certificate is for.

Eg anyone could register "bankofanerica.com" and get a DV cert issued for it.

Only Bank of America can get a cert that shows their legal/trading company name.

Well, that's the idea. I'm not aware of any EV misissuance, but I don't follow this stuff like a hawk either.


Starting w/ version 60, go to chrome://flags/#show-cert-link, click "Enable", and restart your browser. Ta-da!


> LetsEncrypt is wonderful, but if I'm a company that needs to show the company name next to the padlock, who should I be using?

Sorry, what's a company that needs to show the company name beside the padlock? (Except maybe: Company that has too much money to spend on pointless "premium" security services without any shown benefit.)


EV is an option for companies that worry about branding - specifically ones that are well-known to do business offline. This extra layer of trust, relayed to the user from the web browser, reinforces the fact that they are on the correct website.

For example, I can go grab something like "bannk.com" (notice two 'n's) and get a DV cert for it - because I hypothetically own that name. Now, I can copy the real "bank.com" onto the site and perform a phishing attack. They both appear to be encrypted but the EV'd one has the organization noted, the fake one will not be able to have that legitimacy.

I am not saying the system isn't without its faults (people might not even notice), but it's an extra measure a company can do to keep itself distinguished from phishing websites or copycat services.


Too many negatives in that last sentence.


> without any shown benefit

On the contrary, I know that the presence of an EV line has helped my mother avoid a pretty decent phishing attempt. Any company holding confidential details or data should have one; it's just a shame they are so expensive (which, due to the verification involved, is unlikely to come down too far).


> I'm a company that needs to show the company name next to the padlock

Does anyone actually look at or care about that?


Marketing often cares about that a lot.

Users seem to notice, but often there's confusion as to why the address bar is, for lack of a better term, stuttering.

And god help you if your certificate uses a name you don't have any branding for.


Exactly. For example, the fiasco of Behringer, whose site is, get this, "www.music-group.com".

It always seems to be German firms that just don't get the internet at all.


See here [1] for a pretty good write-up arguing EV isn't worth much. One issue is that it only has value if you would notice it is missing. Would you trust a paypal site that just had the green padlock, but not the name? Do you think your parents would?

[1] https://www.troyhunt.com/journey-to-an-extended-validation-c...


It's useful for no-name companies, an EV certificate highlights a company that went through unnecessary pain just to get a little green checkbox - I know I personally trust smaller online retailers more when they have one since they've had to prove their identity to a CA. Nobody notices the lack of EV on amazon.com, because, well, it's Amazon.


> Nobody notices the lack of EV on amazon.com, because, well, it's Amazon.

Which means that nobody notices the lack of EV on amazoone.com, because, well, it's Amazon.


amazoone.com actually redirects to www.amazon.com, as do amazoon.com, aamazon.com, and amezon.com. It seems like finding a domain name that is "close" to amazon.com that is not already registered by Amazon is not that easy.
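Out of curiosity, a quick sketch of generating one-edit "close" domains - doubled letters, dropped letters, adjacent swaps. Real typosquat tooling covers far more transformations (substitutions, homoglyphs, etc.); this function and its names are my own illustration:

```python
def close_variants(domain):
    """Generate simple one-edit typo variants of a second-level domain."""
    name, _, tld = domain.partition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i] + name[i:])   # double a letter
        variants.add(name[:i] + name[i + 1:])         # drop a letter
        if i + 1 < len(name):
            swapped = list(name)
            swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
            variants.add("".join(swapped))            # swap adjacent letters
    variants.discard(name)  # the original isn't a typo of itself
    return {v + "." + tld for v in variants}

print(sorted(close_variants("amazon.com"))[:3])
```

Registering all of these defensively (as Amazon appears to have done) is cheap compared to one successful phishing campaign.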


We need an 'Expect-EV' header, just like 'Expect-CT'. Or maybe all websites that request certain info (like SSN or credit card) should be required to have EV. EV is more expensive and onerous but it has the potential to thwart phishing.


> We need an 'Expect-EV' header, just like 'Expect-CT'.

What good would that header do? A phishing site (say, "paypaaaaal.com") would simply not set it.

> Or maybe all websites that request certain info (like SSN or credit card) should be required to have EV.

EV certificates are not available to everyone. In particular, they're only available to users in certain countries, and even then only to registered businesses.

Besides, how do you enforce a requirement like that, short of barring users from submitting data that looks like a credit card to a site without an EV certificate? (Which would cause massive and entirely justified outrage, and would be quickly bypassed by phishing sites regardless.)


> What good would that header do?

It would protect against MitM attacks where the attacker has fraudulently obtained a non-EV certificate, such as through CA compromise or a BGP attack, especially since EV certs are required to have CT log entries. I guess since Expect-CT already solves this, Expect-EV may not be that helpful.
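For reference, an Expect-CT header looks like this (the report-uri is hypothetical):

```
Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-report"
```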

>EV certificates are not available to everyone.

I agree this is a problem. EV should be available to anyone anywhere willing and able to prove their legal identity, be it a person or an organization. Then if someone uses EV to phish, they can be held accountable.

>how do you enforce a requirement like that

If this becomes standard, then we can train users to expect EV indicators when asked for payment data. Any site asking for it without EV would automatically become out-of-the-norm.


>Does anyone actually look at or care about that?

No. You'll note that Amazon doesn't. They spent a bunch of time trying to figure out if it made a difference for customers. It turns out it doesn't, so they don't bother with the extra expense.


Apple does, as a counterpoint.

I'm interested in the Amazon study if you have a link, I haven't seen that before.

(Note, I run CertSimple and we specialise in making the verification process for EV faster and much less painful, so I'm biased)


I don't have any link... but off the top of my head they found that such a tiny percentage of people actually noticed the EV, even amongst technical people, that the value added was negligible. Even those that notice the EV certificate indicated that it didn't change their decision to purchase from a site.

You can make the EV process as smooth and simple as you like, but unless there's good solid business value provided by it (in $$s terms), you've got way more of an uphill struggle ahead.


For someone as big as Amazon, isn't it cheaper to just get an EV cert than spending significant time figuring out whether it's worth it?


Not if it increases conversion rate by X%.


That would only reinforce the point.


It's proof of identity, matching a pubkey to a legal name. Debian does it, Windows does it, your university does it, Keybase does it. HTTPS is one of the few areas where it is (since DV was invented) no longer the common practice.

Note: I'm biased. See bio.


I definitely do for my bank. And the country code that is shown next to it.


Yes, everyone who cares about their security when using online banking.


if I am logging onto a bank or my broker Abso-fragging-lutely, damn it :-)


SWIFT is the only single entity trusted by all banks in the world. I think they would be a perfect fit for a CA to issue online banking certificates, since there is no way to get around trusting SWIFT as a bank, so they might as well trust them for their certificates as well, instead of trusting Symantec or any other CA.

The fewer entities you have to trust, the better.


> since there is no way to get around trusting SWIFT as a bank

I understand banks are free to have some kind of direct-transfer mechanism between them, like internet peering, it would just be redundant.

But anyway, I would not trust SWIFT as gatekeepers of banking security. They're a prime example of a business that runs on COBOL written 40 years ago and they're not the paragon of progressive thinking in banking (SWIFT could torpedo Western Union overnight if they really wanted to) and finally, they already have a poor reputation in INFOSEC circles: https://en.wikipedia.org/wiki/2015%E2%80%932016_SWIFT_bankin...


Aka. putting all your eggs in one basket...

Banks shouldn't be relying on outdated infrastructure providers to do things they aren't good at. The sensible thing (and indeed more or less what happens now) is for banks to choose a best-of-breed CA to issue SSL certs, the same way any other company offering sensitive services over the Internet would do it.


> Aka. putting all your eggs in one basket...

To a certain extent the alternative here is to put all your eggs in all the baskets - if an elephant steps on any one of the baskets the eggs (your security) breaks...


That's really not good security logic. It's better to trust a very strong entity for 1/2 of your security and a weak entity for the other 1/2, so that when shit breaks you at least have more accurate suspicions about which part of your system broke, to start the investigation from.

It's much worse to trust a single weak entity for 1/1 of your security.

That said, it's possible SWIFT can be a good CA. But you shouldn't choose them if they do a crap job of it, simply to "reduce the number of entities you trust".


I want to thank the Chrome people that continued to investigate the initial issues with Symantec certificates rather than just take their word for it that it was an honest one-time mistake. I imagine there has been a lot of tension between Google and Symantec over the past year because of this, so it must not have been easy to keep poking at Symantec. After all, Symantec almost was one of the "too big to fail" CAs. Almost.


To add to this sentiment.

I think routine checks should be done on all CAs; they are an important part of how "trust" works on the internet, and I see their importance only growing at this point.

Google should not be the only company trying to enact change in this area.

[edit: so many typos - oops]


> Google should not be the only company trying to enact change in this area.

Fortunately they're not. Mozilla is involved here as well; just look at [0] and the other *.mozilla.org links in this thread. They also broke the WoSign/StartCom debacle.

[0] https://wiki.mozilla.org/CA:Symantec_Issues


Not just the Google Chrome guys, but also the Mozilla team who were also investigating a variety of issues.


If you are using the free "Let's Encrypt" SSL certificate provided by your web host, you will be fine. That is not a Symantec cert.


When will Google decide Let's Encrypt is not secure enough and start giving a warning around that?


Considering Google is a major sponsor of the project, I'd say likely never.

Not to mention that Google has been pushing hard for https on all sites, which is exactly LE's goal.


Not only that, but they're doing things like Certificate Transparency - publishing every certificate they sign into public logs and they're funded and supported by some of the biggest names in online security and privacy.

They're probably the most trustworthy CA on the planet.


Considering that literally anyone who gets even local access to any server at all, or can spoof one - even servers that have never used LetsEncrypt before - can generate new valid LetsEncrypt certificates, and nothing generated has a password on it, I don't know if I would consider them trustworthy.

If you want to passively monitor encrypted traffic on a massive scale and not get caught (other than via their log - and who is reading that to see if every server they have has been issued a cert unnecessarily, anyway?), LetsEncrypt is awesome. It's plausible to do this without LetsEncrypt, but they made it a lot easier.


Anyone who had the necessary access to a server could get a domain-validated certificate before LE existed, they didn't introduce that.

You can generate and handle the private key entirely yourself, without ever having LE code touch it if you want. You only need to generate CSRs from it.

And you now more than ever have tools to control this if you worry about it: CT logs (that you don't have to check yourself, thanks to free services that alert you about each new certificate for your domain) and CAA records


> Considering that literally anyone who roots any server at all, or can spoof one - even servers that have never used LetsEncrypt before - can generate new valid LetsEncrypt certificates, and nothing generated has a password on it, I don't know if I would consider them trustworthy.

This is true for the vast majority of all CAs.

> If you want to passively monitor encrypted traffic on a massive scale and not get caught (other than via their log - and who is reading that to see if every server they have has been issued a cert unnecessarily, anyway?), LetsEncrypt is awesome.

Wouldn't you rather pick a CA that doesn't log all certificates publicly? (At least while that's still possible - i.e. till early next year.) If you're doing this on a massive scale with a CA that logs publicly, there is absolutely no way you're not getting caught.

Certificate Transparency Monitoring is fairly easy to set up, by the way. Even Facebook runs a public monitor you can use.


>Considering that literally anyone who roots any server at all - even servers that didn't use LetsEncrypt before - can generate new valid LetsEncrypt certificates

As opposed to any other CAs? There are plenty of other CAs that will happily grant a certificate if you prove control of a server that the domain resolves to.

>nothing generated has a password on it, I don't know if I would consider them really trustworthy.

If you have root on the server, can't you just dump the certificate out of memory, even if there's a password on it? short of using a HSM, you need the certificate decrypted so the server can use it.


Well I have a problem with the fact that you can't stop random CAs from issuing certs for a domain you own. But I don't think that's changing anytime soon.

Passwords provide security for data at rest, a properly hardened server makes it very difficult to dump memory in certain circumstances, and HSMs are practically a requirement in some environments.

And finally, say I wanted to use LetsEncrypt, but I wanted to manage the keys myself, and require only private key X could be used to sign Y, and manage it myself. They don't really let me set those requirements at the CA level - it's all on my own host + network security, which IMHO is unnecessarily risky.


> Another thing is, let's say I use some other CA for my certs, and I don't want LetsEncrypt issuing any. I can't really stop them, can I?

You absolutely can. Let's Encrypt was one of the first CAs to support CAA (and, IIRC, they supported it when they first launched). CAA is a DNS record that lets you specify which CAs are permitted to issue certificates for your domain.

> And finally, say I wanted to use LetsEncrypt, but I wanted to manage the keys myself, and require only private key X could be used to sign Y, and manage it myself. They don't really let me set those requirements at the CA level - it's all on my own host + network security.

I'm not quite sure what you're saying here. Do you want the ability to issue certificates under your own (constrained) intermediate certificate? That's unfortunately not possible under the current Baseline Requirements unless you get audited as a CA.

If you just want to use your own private key, that's of course possible (in fact, there's no way for Let's Encrypt to generate a key for you). Or is it that you want to limit issuance for domain X to key Y? What other CA allows you to do that? And how would you prevent some other CA from issuing a certificate for a different key, even if Let's Encrypt would support such a feature? With that in mind, it becomes clear that in the end it's up to your host and network security again, even with such an agreement in place.


I wasn't aware of CAA, that's a nice development.

> Do you want the ability to issue certificates under your own (constrained) intermediate certificate?

Look at it this way: currently, if you can send network traffic from some IP space, you can create valid domain certs. This is the equivalent of using a hosts file with a list of IPs to authenticate an ssh connection. Yes, I think an intermediary key, and not simply some arbitrary control of a network, should be required to generate a cert. It seems like CAA, or some extension thereof, could help this become a reality.


I'm not sure I understand your suggestion. How does the intermediary key improve the domain validation process?

There's a CAA extension in the works that will allow you to bind domains to ACME accounts (which are protected by your account key). Let's Encrypt has plans to support it. Is that what you're looking for?


An example:

  1. Webserver requests cert for xyz.com.
  1.1. Generates private key
  1.2. Generates csr
  1.3. Sends csr to ICA.

  2. ICA requests cert for xyz.com.
  2.1. ICA generates private key
  2.2. Signs Webserver's csr with private key
  2.3. Sends signed csr to CA

  3. CA issues certificate
  3.1. CA looks up CAA record for xyz.com
  3.1.1. CAA contains ICA's key fingerprint
  3.2. CA verifies signature of Webserver's csr with ICA's key fingerprint
  3.3. CA verifies Webserver controls domain xyz.com.
  3.4. CA issues cert
At no time could a bad actor simply compromise the webserver and issue a new cert for xyz.com, because it would need the ICA to approve it (and that could require user intervention). Thus, network access alone would not be enough to generate certs. Maybe this is the extension they're making?


> Another thing is, let's say I use some other CA for my certs, and I don't want LetsEncrypt issuing any. I can't really stop them, can I?

CAA records. If you want full control over when issuance happens, you can even set the list of allowed CAs to empty and only change it when you actually want a certificate.

> And finally, say I wanted to use LetsEncrypt, but I wanted to manage the keys myself

If someone can submit a CSR and complete the challenges they get a signed cert, yes, just like with other DV-CAs. LE code (certbot or other clients) doesn't have to touch your private keys, as long as you give them CSRs, as with other CAs.
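In zone-file syntax (hypothetical zone), that looks roughly like:

```
; permit only Let's Encrypt to issue for example.com
example.com.  IN  CAA  0 issue "letsencrypt.org"

; or forbid all issuance until you actually need a certificate
example.com.  IN  CAA  0 issue ";"
```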


A lot of non-EV certs are given out if you control one of about 6 pre-defined email addresses like webmaster@. The security is already lacking given SMTP's bad security architecture and I'd argue that LE's protocol is a lot better.


I think it's a valid criticism, not LE but industry as a whole. Intel should add usable HSM in every cheap laptop, not even talking about servers instead of their ME backdoors. Technology is there, it's cheap and it's needed.


Nearly all laptops, desktops, and servers sold in the last decade have a Trusted Platform Module (TPM), which is in fact an HSM. You need drivers and utilities to use it, but it's there.

TPMs generally aren't fast enough to do RSA signatures on a busy webserver though, but they're wonderful for protecting VPN certificates (tools for managing this are built into Windows Group Policy; I imagine it's very painful on Linux).


That... is the point of the DV cert?


Pushing TLS on all sites is only possible with free certificates. LE is essential. Nevertheless, never say "never".


When they do the stupid that other CAs have?


Exactly. And to be clear, the malfeasance by Symantec is voluminous and well-documented: https://wiki.mozilla.org/CA:Symantec_Issues

Distrusting Symantec is not an arbitrary action as some people seem to believe.


Probably some time after Let's Encrypt shows some form of negligence while issuing certificates.


Considering how bad these violations were, and how carefully FF and Chrome are documenting everything, I'd say about never.


>When will google decide let's encrypt is not secure enough and start giving a warning around that

When they start issuing illegitimate certificates.


for a decentralized web - a way for people to be compensated for their time - mesh network - decentralized dns market - reservation system that supports known brands keeping their name - allows enough fluctuations that a name can be rereserved - opensource browser - keeping it simple enough for newbies to figure it out - a way to associate a public key to an IP address, reverify it and legitimize it - p2p shared files - trackers/dht torrent - decentralized heuristics - timelimit, algorithm fn, score fn, prize for highest score


Surely the distrust doesn't apply to websites with the Norton Secured Seal, the most trusted mark on the internet: https://www.symantec.com/page.jsp?id=seal-transition


Of course not. Norton Secured Seal is indistrustable.


I don't know about that, I trust Mark Zuckerberg more than that.


"including Thawte, VeriSign, Equifax, GeoTrust, and RapidSSL"

RIP RapidSSL wildcard


Wow, I never realised VeriSign was a Symantec brand.

I'm not sure what it says that Google doesn't trust the company that operates the registry for .com


Verisign's SSL authentication business was spun off and sold to Symantec a while ago, but Verisign continues to operate the .com registry as a separate company.


Operating a domain registry and issuing EV certificates require different levels of trust.

Don't read too much into Alphabet/Google using other gTLDs instead of .com, like https://abc.xyz or https://blog.google -- that's separate work done by separate people on a separate team.


We are in contact, though. Don't read too little into it either ;)


From my own experience with TLDs, .com, .info, etc are the worst to deal with. It’s insane how wrong they get the TRANSFER process, and everything.

On the other hand, I’m positively surprised by how well DENIC (.de) handles it all.


LE Wildcards should be around by then! Hopefully enough tooling will exist to make that migration seamless, or as seamless as cert migration can be.


You'll be looking forward to discovering how to do DNS dynamic updates.

LE wildcard certs will only be DNS validated and not using a web server (for obvious reasons, when you think about it)
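Concretely, DNS validation (the ACME dns-01 challenge) means publishing a TXT record like this; the token value below is illustrative, as the real one is supplied by the ACME server during the challenge:

```
_acme-challenge.example.com.  300  IN  TXT  "gfj9Xq...example-token...Rg85nM"
```

So wildcard issuance will need either API access to your DNS provider or dynamic update support, rather than just a webroot.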


LE wildcards are coming early 2018 I believe so you'll still have a month or two gap.


No action will be taken to impact users until March 15, 2018 at the earliest and only if you got your certificate before June 1, 2016. At least when it comes to any existing certificate (they are ambiguous about when they will cut off new certificates exactly).


> they are ambiguous about when they will cut off new certificates exactly

They are? It seems pretty clear to me. Symantec's old infrastructure will be distrusted in Chrome beta in September 2018. At that point, no certificates issued by Symantec's old root certs (including those issued after June 1, 2016) will be trusted.


They mentioned to not get any certificates from the old infrastructure after the new infrastructure is available as they will be untrusted earlier (to ensure that Symantec starts using the new infrastructure exclusively once it is available).

However they didn't say when they would do that.


DigiCert is taking over the cert issuing infrastructure on Dec. 1st, so presumably those brands are coming along for the ride.


Also note that GeoTrust is Google's root CA.


Just so I'm understanding the implications of this correctly, my company uses certs issued by Thawte. Would these certs be affected by this move?


Seems so. The article says:

If you are a site operator with a certificate issued by a Symantec CA prior to June 1, 2016, then prior to the release of Chrome 66, you will need to replace the existing certificate with a new certificate from any Certificate Authority trusted by Chrome.
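As a rough first-pass check of the criteria in the post (a Symantec-operated brand in the issuer plus a pre-June-2016 notBefore date) - note the authoritative test is the root hashes in Chromium's source, and the function and names here are my own sketch:

```python
from datetime import datetime

# Brands operated by Symantec's CA business, per the article.
SYMANTEC_BRANDS = ("symantec", "thawte", "verisign", "equifax",
                   "geotrust", "rapidssl")
CUTOFF = datetime(2016, 6, 1)

def needs_early_replacement(issuer, not_before):
    """True if a cert matches the first removal phase described in the post:
    issued by a Symantec-operated brand before June 1, 2016."""
    issued_by_symantec = any(b in issuer.lower() for b in SYMANTEC_BRANDS)
    return issued_by_symantec and not_before < CUTOFF

print(needs_early_replacement("thawte, Inc.", datetime(2016, 1, 15)))   # True
print(needs_early_replacement("Let's Encrypt", datetime(2016, 1, 15)))  # False
```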


Is there a list somewhere of the actual Root CA certificates that will be removed as part of this? Is it just any Root CA certificate that has "Symantec, Thawte, VeriSign, Equifax, GeoTrust, and RapidSSL" in it's name?


I think this is what you're looking for: https://chromium.googlesource.com/chromium/src/+/master/net/...


That's it, thanks!


I understand that the solution is complex, but until 17 April 2018 my browser is going to accept some certificates that are reputed to be unsafe today. If this is the power that browsers have over CAs, I don't feel secure.



Slightly offtopic but still related to web security in general: how can I be sure the Chrome extensions I'm using are not doing malicious stuff? I think if a state actor or a resourceful entity wished to steal users' data, attacking/hijacking/buying a popular Chrome extension would be much easier than attacking a CA or doing MITM over SSL and trying to decrypt it.

I've always hated the concept of certain Chrome extensions having full access to all the pages I visit in the browser, but sometimes it's necessary. Ad blockers are way too high on the list of potential attack targets; a close second would be web development extensions like EditThisCookie.

There needs to be a way for webpages to indicate that they don't want any external scripts running on the page. Even a setting in Chrome would be really helpful. I don't want external scripts running on my bank's website, or when I'm working with my stock exchange's website. Right now I'm making do with multiple Chrome profiles, but that doesn't cut it. There needs to be some initiative in this regard from major browser vendors.


Yes, I agree. The possibility of a rogue extension stealing data also freaks me out. All you need is for the Google account of one popular extension's author to be hijacked, and the hijacker has full control over the credentials of every user of that extension within an hour or so.

I was so concerned about this potentially ticking time bomb, in fact, that I sent an email about this to a prominent person on Chrome's security team many months ago.

I did not receive a reply.


Chrome has a good permissions model that I don't think can be much improved on. An ad blocker needs access to data on every page you visit in order to do its job. I don't really see a way around it; that's just how the web is constructed. You can disable 3rd-party scripts on your side (Firefox has the NoScript add-on for that; Chrome should have something similar), but that may break the website you're visiting.

Workarounds: disable add-ons in incognito mode, and conduct your important business there.

Or use a second chrome profile with no extensions for confidential stuff..


> Chrome has a good permissions model that i don't think can be much improved on. An ad-blocker needs access to data on every page you visit in order to do its job. I don't really see a way around it

I don't imagine an ad blocker needs the ability to send data over the network.

I also don't expect it should need to alter cookies or local storage, though perhaps there's a random counterexample somewhere?

Most importantly of all, I think disabling forced auto-update of extensions would help. It would make things a lot safer if users had control over when their extensions were updated. That way a malicious version could be taken down without having affected too many people.


Chrome doesn't allow extensions to run on https://chrome.google.com/webstore/category/extensions, which means there are already provisions in the code that disallow extensions from running on certain URLs. They could just add a setting where users can add to that list of pages.


For web pages to indicate? No, that'll get attached to every terrible ad in the world.


Yes, that would actually do more harm than good, but there's no reason not to provide a setting where the user can add to the list of URLs, similar to what Chrome is already doing for Flash, notifications, location, camera, etc.


Comodo is giving away free Certificates to Symantec customers https://www.comodo.com/Google-Chrome-announces-proposal-to-d...


For those considering Comodo, I'd ask that you reconsider.

They tried to trademark LetsEncrypt:

https://news.ycombinator.com/item?id=11964583

https://news.ycombinator.com/item?id=11973232

Bad OCR led to certificates being issued to the wrong people:

https://news.ycombinator.com/item?id=12761248

They created their own Superfish-like adware:

https://news.ycombinator.com/item?id=9091917


Don't forget Chromodo which was a reskin of Chrome with --disable-web-security set. Because that totally makes their browser super secure!


Another potential problem for people who pinned certs.

There should be a standard to pin EV-ness. That way you know all certificates for your site have been issued by an audited process and not via DNS hijacking.


I realize that the title here is what Google put on their blog, but it seems to me that "detrust" would be more accurate than "distrust".


“distrust” is an established English word that means exactly what Google intends. “detrust” is, while not hard to figure out, an unnecessary neologism.


But "distrust" doesn't convey the full meaning. They're not talking about merely not trusting Symantec certificates; they're talking about removing existing trust in the certificates.


> But "distrust" doesn't convey the full meaning. They're not talking about merely not trusting Symantec certificates; they're talking about removing existing trust in the certificates.

“distrust” alone only communicates not trusting, true. But the word isn't alone, it is in the phrase “plan to distrust”, which communicates that the not-trusting is a change from the status quo that will occur in the future.


which communicates that the not-trusting is a change

Not necessarily. "Plan to distrust" would be consistent with "a company has announced that they're going to start issuing certificates; we won't trust them". You and I know that Symantec is a major CA which is currently trusted by Chrome, but that isn't implied by the headline.


If it makes the two of you feel any better, the US Supreme Court justices also don't agree on whether to use precise or approachable language. Different strokes.

http://legaltimes.typepad.com/files/garner-transcripts-1.pdf


Isn't 'removing existing trust' the same as 'distrust'? The word 'distrust' in no way implies that you have NEVER trusted them.


"Distrust" doesn't imply that you have never trusted them, but it does allow for you never having trusted them, and so communicates less.


Generally, yes if it's without context, but "Plan to Distrust" or "Going To Distrust" (and similar) certainly implies that there previously was some level of trust. I mean if you had no trust to begin with, why would you even say it that way (or say it at all)?


As pointed out elsewhere, statements of future distrust admit the possibility that there wasn't yet opportunity to trust or distrust. For instance, if Symantec had never produced certificates and announced that they were going to, and Google announced "when they do, we won't trust them".


Why create a new word when we can just add a new adjacent meaning to an existing nearby word...


Language is mutable for this reason. If we didn't add any words but just kept using "nearby words" then we would be DANGER NO GO.


untrust?


"stop trusting" would perhaps be clearer, sure. We don't necessarily need a single word for that.


Is there a list of CAs that will be affected? I can't find a link to one in the article.



Thanks!


Amusing that security.googleblog.com itself is using a TLS certificate ultimately signed by GeoTrust Inc, which is owned by Symantec.


Fun fact: one of the "independently-operated and audited subordinate CAs" that is exempt from this is, yep, you guessed it, Google. They've recently acquired a GlobalSign root and are in the process of getting their own roots added to trust stores, but I imagine they'll want to keep chaining back to the widely-trusted GeoTrust root for a few more years.

(IIRC Apple is on that exception list too.)


Actually, that subordinate CA isn't affected. See:

https://chromium.googlesource.com/chromium/src/+/master/net/...

And the blog post states:

> This will affect any certificate chaining to Symantec roots, except for the small number issued by the independently-operated and audited subordinate CAs previously disclosed to Google.
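(Conceptually, the rule in that file amounts to something like the following sketch. The hash values and function names here are hypothetical placeholders -- Chrome's real implementation compiles actual SPKI SHA-256 hashes into its verifier:

```python
import hashlib

# Placeholder values for illustration; Chrome hardcodes real SPKI hashes.
SYMANTEC_ROOT_SPKI_HASHES = {"<symantec-root-spki-sha256>"}
EXCLUDED_SUBCA_SPKI_HASHES = {"<google-internet-authority-spki-sha256>"}

def spki_sha256(spki_der: bytes) -> str:
    """Hex SHA-256 of a cert's DER-encoded SubjectPublicKeyInfo."""
    return hashlib.sha256(spki_der).hexdigest()

def is_distrusted(chain_spki_hashes: list) -> bool:
    """Distrust chains anchored at a Symantec root, unless the chain
    passes through one of the excluded (whitelisted) sub-CAs."""
    anchored_at_symantec = chain_spki_hashes[-1] in SYMANTEC_ROOT_SPKI_HASHES
    via_excluded_subca = any(h in EXCLUDED_SUBCA_SPKI_HASHES
                             for h in chain_spki_hashes)
    return anchored_at_symantec and not via_excluded_subca
```

So a googleblog.com chain that goes leaf → Google Internet Authority → GeoTrust root is exempt even though it terminates at a Symantec-owned root.)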


Formerly owned. They agreed to sell their certificate business to DigiCert.



