I haven't considered the internals of the datacenter though...
You will see lots of custom stuff that Google does. There are many things that are better to outsource to 3rd parties, but many things are better in-house because the solutions just don't exist or they cost too much for the volume they need. Some examples:
Network routers for CLOS topology (there are pictures of some of the hardware): https://research.google.com/pubs/pub43837.html
Custom built SSDs (custom firmware/controller with a mix of flash chips from various vendors): https://www.usenix.org/system/files/conference/fast16/fast16...
Then there's the TPU, a custom chip for running TensorFlow.
They're still eons away from e.g. Intel or Samsung when it comes to vertical integration on the hardware side.
I believe that Samsung is the only company in the world that is capable of building an entire computer (e.g., laptop or smartphone) from scratch completely in-house. They can design software and OSes AND manufacture SoCs, memory, LED panels, etc. It's very impressive.
For serious though, there's not really any lock-in here (yet). You could replace everything from the certificate through the public DNS with GoDaddy and things would work just fine. I don't really see Google moving to close the web parts of this.
Good luck making something that's both 1) the most convenient and 2) not centralized.
The problem is that there isn't a multi-billion-dollar business case for decentralized user systems. A lot of server/datacenter tech is both decentralized and distributed because it's a more robust architecture. Again, the reason this doesn't extend to the consumer is that it gives them too much control and clogs up the revenue stream.
This isn't a technical or usability problem, as you claim. It's 100% economic.
WordPress solved the spam issue for decentralized blog comments by centralizing it, which gives them the scale to solve it well but also gives them the ability to read probably half the blog comments on the web in real time.
To go deeper, it turns out that Bayesian filtering is remarkably resilient. Even attacks which try to poison your filters by including ham-like content in spams are ineffective, because spammers cannot very accurately predict what your own particular flavor of ham is like. (People don't often mail me passages from out-of-copyright Victorian romance novels.)
I find that a few botnet-reducing SMTP heuristics plus Bayesian is sufficient; I dallied with some of the fanciness that compares known-spam hashes with other people, but it turned out not to be necessary.
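The filtering idea above can be sketched as a tiny naive-Bayes classifier. This is only an illustration of why ham-poisoning is hard (the attacker has to guess which tokens dominate *your* ham corpus); the training messages are made up:

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam). Returns per-class token counts and totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for tok in text.lower().split():
            counts[is_spam][tok] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_probability(text, counts, totals):
    # Log-odds with add-one smoothing; > 0 means "more spam-like than ham-like".
    score = 0.0
    for tok in text.lower().split():
        p_spam = (counts[True][tok] + 1) / (totals[True] + 2)
        p_ham = (counts[False][tok] + 1) / (totals[False] + 2)
        score += math.log(p_spam / p_ham)
    return score

# Hypothetical training corpus for the sketch.
corpus = [
    ("cheap pills buy now", True),
    ("limited offer buy cheap", True),
    ("meeting notes attached", False),
    ("lunch tomorrow with the team", False),
]
counts, totals = train(corpus)
```

A poisoning spam that pads itself with generic "ham-like" words still has to outweigh its spammy tokens against ham tokens it can't predict, which is the resilience described above.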
Google actively restricts which programs and extensions you can install on a Chromebook and Android, Google restricts what you can publish on their infrastructure, and AMP is also becoming somewhat of a problem.
On Android, Google killed all other push notification services (and tries to prevent people from writing open source libraries for theirs), by only allowing notifications from Google Cloud Messaging to work when the device is saving battery (basically always on recent versions).
After trying to fight this for quite a while, I do really see Google moving to close this.
This ought not be surprising: presumably, who better to say that Google is indeed Google than Google itself?
The reason everyone doesn't run a root CA is because it's difficult to coordinate trust between parties that may not know about each other ahead of time, and each and every root CA adds more maintenance burden on part of trust-stores. When I self-sign my cert, I am effectively my own root CA, but I lack a compelling value proposition for everyone to add it into their trust-stores, and of course there's the initial difficulty of me propagating my key fingerprint over a tamper-proof "out-of-band" channel ahead of time where you have assurance that it's coming from me.
Google, on the other hand, is fairly easy to verify that they're indeed Google, considering they just published their public keys on their own website. By having a prior web property that's already trusted, they have bootstrapped the trust necessary for fingerprint distribution, and the rest should follow.
When Google's CAs start issuing certs to non-Google parties, we can revisit the 'eggs-in-basket' question.
I'm not really seeing the problem there, either. Unless we trust Google to do this less than we trust every other root CA. All evidence I've seen points to Google caring very much about internet security.
I don't think this is a bad thing. ... This ought
not be surprising: presumably, who better to say
that Google is indeed Google than Google itself?
As stated in this Stackoverflow response:
Especially #2 is rather nasty, even if you
pay for a highly trusted certificate, your site
will not be in any way locked to that
certificate, you have to trust all CAs in the
client's browser since any of them can generate
a fake cert for your site that is just as valid.
It also does not require access to either the
server or the client.
0 - http://stackoverflow.com/a/14907718
If implemented for all of these roots (and I don't see why they wouldn't, given their push on it), CT would create an open, unalterable record of every cert published from all of these roots and their subordinates. CAA, as a complement, would create a method by which you as a domain owner could control which CAs are allowed to issue certs for your domain, removing the ability to man-in-the-middle your domain without your permission.
The first step for both of these is making them mandatory for CAs (which Google is pushing hard on); once they're out there, it's possible to write plugins that will check CAA and CT records and fail closed if something looks wrong. It's a long way from perfect, but it's definitely a step in the right direction. Given Google's strong push for making those mandatory, I'm far more worried about a lot of the CAs already in my trust store than I am about these.
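The CAA decision a CA has to make can be sketched in a few lines. This is a simplification of the RFC 8659 logic (no tree-climbing up the DNS hierarchy, no issuewild handling), and the record values are hypothetical:

```python
def caa_allows_issuance(caa_records, ca_domain):
    """Return True if `ca_domain` may issue, given a domain's CAA record set.

    caa_records: list of (tag, value) pairs as found in DNS.
    """
    issue_values = [v for tag, v in caa_records if tag == "issue"]
    if not issue_values:
        return True  # no "issue" records: any CA may issue
    # A value of ";" forbids all issuance; otherwise the CA must be listed.
    return any(v.strip() == ca_domain for v in issue_values)

# Hypothetical CAA set for a domain that only authorizes Google's CA.
records = [("issue", "pki.goog"), ("iodef", "mailto:security@example.com")]
```

With that record set, `caa_allows_issuance(records, "pki.goog")` is True and any other CA gets False, which is exactly the "no MITM cert without your permission" property described above (assuming CAs actually honor CAA, which is what making it mandatory is about).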
CT would create an open, unalterable record of
every cert published from all of these roots
and their subordinates.
Given Google's strong push for making those
mandatory, I'm far more worried about a lot
of the CAs already in my trust store than I
am about these.
0 - https://www.certificate-transparency.org/how-ct-works
1 - From :
During the TLS handshake, the TLS client receives
the SSL certificate and the certificate’s SCT.
As usual, the TLS client validates the certificate
and its signature chain. In addition, the TLS
client validates the log’s signature on the SCT ...
There will be tremendous pressure on them to do it for certain parties, or have it done on their behalf unwillingly, or possibly unknowingly.
It's like a self-signed certificate.
This doesn't prove anything, unless you already trust another CA which can vouch that you're looking at Google's site in the first place. Or by some sort of consensus with other people that you see the same key.
The foundation of a more secure web apparently requires you to trust Google with the entire internet, using their properties as leverage to force it to be so.
Especially since Google is US based.
Back in the real world, we have multiple CAs who have accountability for lots of overlapping domains. You can wish for some other non-existent situation, everyone else has to make the best of the situation as it stands.
Domains can compete with each other, particularly given the big opening up of TLDs. We could have actual competition between CAs at the end-user-facing level because it'd be visible to the user who the CA was (the CA and the registry ought to be merged - at the moment they're two parallel sets of infrastructure for doing the same thing), and if particular domains/CAs had poor-quality identity checking users might actually start to notice. As opposed to today, where the only one who knows which CA a domain might be using is the domain owner, and so the incentive largely is for the CA to do as little checking as possible.
> Back in the real world, we have multiple CAs who have accountability for lots of overlapping domains. You can wish for some other non-existent situation, everyone else has to make the best of the situation as it stands.
There's a migration path. Enable DNSSEC/DANE with all CAs authorized for all domains initially, then allow countries / TLD owners to start restricting who can sign certificates for their domains. If Hong Kong moved to requiring only Hong Kong Post Office to sign their domains, we could see how well or badly that model works - if it reduces phishing / spying then other countries will follow the same, if it stifles innovative internet businesses then they'll move away from that. But 150+ entities all having the power to own every site on the internet can't possibly be the right model.
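For concreteness, the DANE half of that is small: a TLSA "3 0 1" record is just the SHA-256 of the certificate's DER encoding, published in DNS (and protected by DNSSEC). A minimal sketch, with placeholder bytes standing in for a real certificate:

```python
import hashlib

def tlsa_3_0_1(cert_der: bytes) -> str:
    """Payload for usage 3 (DANE-EE), selector 0 (full cert), matching type 1 (SHA-256)."""
    return hashlib.sha256(cert_der).hexdigest()

# Placeholder DER bytes, not a real certificate.
placeholder_der = b"\x30\x82\x01\x00" + b"\x00" * 16
record = f"_443._tcp.example.com. IN TLSA 3 0 1 {tlsa_3_0_1(placeholder_der)}"
```

The point is that the domain owner, not any of the 150+ CAs, controls what hash goes in that record.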
Does it simply mean used by the majority of users?
So any sufficiently good product is a monopoly, assuming its goodness is beyond the threshold needed to be favored by the majority of customers.
What do you want to say about Google's monopoly? Is Google going to hurt others and throttle effective competition? Was there any competition in the CA market at all?
To answer your later questions:
Too many essential services under one umbrella.
It's competition stifling.
Google saw the dismal situation of Internet CAs, and is forcing the internet to move to a better one. Forcing people to behave better is a good thing, IMHO. If you think otherwise, there will be no common ground for discussion between you and me.
It's disgusting but pretty much corporate life 101.
It's after the fact, to be sure, but it matters for reputation.
They will already log their public certificates to CT and this will continue given their push for CT.
Perhaps just a general feeling that all the internet eggs are being put, one by one, in one single alphabet basket.
They add this cert and they control a vast chunk of the internet.
Same for Google Chrome. Switching to another browser is a matter of a few minutes (including taking the data with you). But as long as Chrome is at least as good as the others, there's no reason to do so.
I think it's important to recognize the differences between monopolists with lock-in (e.g. Microsoft with Windows and Office) and those without lock-in. Even though I'm also concerned about the amount of data that Google has, I'm sure they'll at some point end up like Yahoo or AOL. The question only is how long it takes.
IMO Microsoft is only around because they could generate revenues from lock-in effects during years where their new products were really bad. They've caught up now and did a lot right in the last years (and already benefit from it financially). But if they'd only had a portfolio like Google, I'm not sure they would still play a role.
DDG has yielded results far superior to Google's for years (IMHO; this is a bit subjective), but I don't see people moving over because Google is simply "what they know".
A better alternative won't make people move, they need further motivation.
"We see you haven't signed up yet for AMP, so we've done the work for you"
Yes, I get they wouldn't do this, but the fact that they could is a little scary.
It's unusual for a root CA to be run by a service that otherwise has nothing to do with CA issuance, for the primary purpose of issuing certificates for that service's first-party sites, and not for third-party sites. I can't think of a single other example of a single-purpose root CA like this.
(The announcement mentions that they might use this to operate as a CA for third-party sites as well, but right now it exists to certify first-party sites.)
Department of Defense: http://www.disa.mil/enterprise-services/identity-and-access-...
I wonder why browsers don't automatically store the fingerprint of every HTTPS certificate they encounter and throw up a fuss to the user if a certificate changes without any good reason?
This proves that whoever has the new cert used to have the old cert. A browser would save a copy of a certificate the first time it visits a site, then when it visits again later it could request the chain of certs back to the first one it ever encountered. Past certs verify new certs; this check wouldn't even involve a CA.
This basically overlays trust-on-first-use security model on top of the CA security model and would make it much more difficult to perform a MITM on sites that the user regularly visits (which are probably the most valuable targets).
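A minimal sketch of that trust-on-first-use check (the certificate bytes here are placeholders; a real client would hash the DER it receives in the TLS handshake):

```python
import hashlib

class PinStore:
    """Remember the first certificate fingerprint seen per host; flag changes."""

    def __init__(self):
        self.pins = {}  # hostname -> sha256 fingerprint

    def check(self, hostname, cert_der):
        fp = hashlib.sha256(cert_der).hexdigest()
        known = self.pins.get(hostname)
        if known is None:
            self.pins[hostname] = fp  # first visit: pin silently
            return "pinned"
        return "ok" if known == fp else "CHANGED"  # warn the user on change

store = PinStore()
```

First visit pins, same cert later returns "ok", and a swapped cert returns "CHANGED", which is exactly where a browser would interrupt the user (or, per the comment above, request a chain back to the previously pinned cert).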
It's possible for sites to instruct browsers to do it though, but that's opt-in. https://en.wikipedia.org/w/index.php?title=HTTP_Public_Key_P...
Certificate pinning at the CA level is not that useful.
So Google rotates a lot of certs, but I bet 95% of the internet uses one cert per server until it expires. Google could fall in line.
In any case, I was only trying to suggest why this seemed odd, based in part on the availability heuristic: whether this is a common practice or not, it doesn't seem like something people (even people who regularly read about changes or events in the CA landscape) would see regularly.
Edit: At least it felt like a slow stream to me. Search isn't being very cooperative towards my cause right now... The only item matching my memory is https://news.ycombinator.com/item?id=13013494 , but I'll be damned if there weren't others.
CAs should be required to announce ownership or large administration changes, and trust in said CAs should be revoked upon change unless/until they have been re-audited.
It should not be based on "root" certificates, but rather something more like a blockchain for generating security keys, where each roll/session generates a new key.
IANAEE but if building a currency is possible without it being possible to create fake money then it should be possible to protect websites in a similarly decentralised way.
Brian Smith has argued for supporting only P-256, P-384 and Curve25519: https://briansmith.org/GFp-0. That said, Mozilla decided to continue to advertise support for P-521 in NSS (https://bugzilla.mozilla.org/show_bug.cgi?id=1128792).
P-256 and P-384 are widely supported in various TLS libraries (SChannel, SecureTransport, OpenSSL, NSS), whereas Curve25519 doesn't yet seem present in Microsoft's or Apple's libraries. I suppose with TLS 1.3 support we may see it implemented?
Unfortunately it seems none of the NIST curves (P-*) are considered “safe” by DJB and Tanja Lange: https://safecurves.cr.yp.to/.
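As a quick way to see what your own OpenSSL build accepts, Python's ssl module lets you probe named curves (a sketch; curve names are OpenSSL's, and availability varies by build):

```python
import ssl

def curve_supported(name: str) -> bool:
    """Return True if the local OpenSSL accepts `name` for ECDHE key exchange."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    try:
        ctx.set_ecdh_curve(name)
        return True
    except (ValueError, ssl.SSLError):
        return False

for name in ("prime256v1", "secp384r1", "secp521r1", "X25519"):
    print(name, curve_supported(name))
```

(prime256v1 is OpenSSL's name for P-256; whether X25519 is settable this way depends on the OpenSSL version underneath.)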
As much as we might trust Google, shouldn't there be something like separation of powers as a safeguard?
* The international banking regime
* The US legal regime for corporate purposes
* Many other national regimes for operating purposes
But of course that train left the station 30 years ago... in the current landscape I don't mind gTLDs any more than the general mess that is current DNS "hierarchy".
That's also how large sites handle it. Amazon.com will lead you to the US page, not to an international section (e.g. where you could choose where to go).
How do we (or Google) know that the CIA and FBI can't create certificates from all the CAs because they have stolen/demanded the Root CA for them?
If I was a TLA I'd want the ability to perfectly MITM anyone.
I think these questions imply that there needs to be a better way to think about security and trust for web endpoints in the days of the state as a bad actor.
Well, we know they don't do this at scale, because we'd spot the rogue certs in the certificate transparency logs.
Might be part of the reason they are becoming their own CA.
A lot of the issues with the current PKI are limited by certificate transparency.
No, the certificates are pretty terrible too. Take a look at Peter Gutmann's presentations, or read the SPKI RFCs.
Among other things, certificates conflate identification, authentication & authorisation; they are based on a flawed, centralised, global phone book model; they are ASN.1; in one case, I believe that the meaning of a single flag has been inverted because of a mistake in a (Microsoft?) library that everyone has had to be bug-compatible with.
Some folks think that XPKI is so broken precisely in order to discourage its use (others claim the same thing about IPsec). I don't actually think that's true, but sometimes when I'm banging my head against some stupidity in XPKI, I wonder. I really do.
Shall we eliminate software freedom too? Otherwise companies can just install a fork of Firefox/Chrome with MITM support added back in to their managed endpoints.
Thank you for constructing the words I could not.
I think an encryption solution that cannot be 'broken' for decryption is far more necessary than one that has the 'good guy' in mind. I do not find it an acceptable solution for critical data.
An encryption scheme cannot be designed to be broken and still be expected to be 'secure'.
EDIT: I am not being allowed to reply.
Excuse me, I think you need to read what I wrote more carefully. I do not care about certificate transparency. I must not be communicating clearly; I will try again.
I am not referring to the ability to issue a new certificate.
I'm talking about the ability to perform SSL decryption without the end user knowing. You do not need to issue a new certificate to do this; you just need the end user to have trusted a new root CA... which brings us to this article, where another company is issuing a root CA. Do you trust everyone in the trusted root CAs on your computer?
Here are some ways to untrust certs and another conversation on this 
"One of the problems with digital certificate management is that fraudulent certificates take a long time to be spotted, reported and revoked by the browser vendors. Certificate Transparency would help by making it impossible for a certificate to be issued for a domain without the domain owner knowing."
- We need clients to authenticate servers as well as the reverse.
- Browsers need to allow better user control over certificates. I know the reasons why this isn't provided, and I don't care. Add a "reset to defaults" if you're worried about people breaking their browsers, but a sensible way for users to control whom they trust is important. See next point.
- As of now, we have a pile of registries with near-zero public view into the operations of people with whom we're literally entrusting our bank accounts. Some of these are overtly in the control of nation-states, and many more are assumed to be at least covertly "assisting". Those of us with a problem with that (which should be everyone - even if you trust your friendly neighborhood intelligence agency, what about all the others?) need much better visibility into the operations of the CAs. I'd argue that they shouldn't even be for-profit operations, but that's not a huge point to me - the important points are knowing which ones are incompetent or compromised by their masters (which are the same thing in one sense, but not in others), and there are various paths to get there.
To the browser apologists: is a balkanized web worse than one that cannot be trusted? And are your actions actively harming people by making them believe it is trustworthy when it is not?
Being trustworthy means fulfilling the expectations of the party relying on us, expectations which are based on what we promise. Currently, the CA system promises very little: the general idea is that your data is safe from thieves and scammers, certainly not that your communications are safe from law enforcement. Therefore, CAs are mostly trustworthy.
If you start telling people that you can say which CAs are free from interference from intelligence agencies and other top-level snoopers, you're making a much stronger promise, and therefore any flaws in your assessment are much more dangerous.
While I can not say what Google will do in the future, I can say we are very supportive of Let's Encrypt. We have provided them funding and I personally act as an advisor to Let's Encrypt.
In short, we love what Let's Encrypt is doing.
This new CA doesn't change much in that respect.
* pki.goog does not enforce TLS
* Why use .goog instead of .google?
While it would be ideal for that not to be the case, one has to build out infrastructure that supports the way UAs behave today.
but then again, government players plague every security system we have.
In the meantime, DNSCurve would be a great start, vs the major issues I have found with DNSSec.