Google is big enough to develop a trusted hardware solution for internal use only; it has no financial need to sell it. Worse, because of how competitive the cloud segment is, it is actively dis-incentivized from selling the solution.
Amazon Glacier is another example: an interesting long-term storage solution whose hardware implementation is unavailable to the market, since Amazon can better exploit it as a service under AWS.
We are heading toward a more closed ecosystem than we have been used to. The cloud, which gave us the immense benefit of moving all capex to opex, is producing the immense negative side effect of closing off hardware implementations in favour of exploiting the added value in the form of services.
Moreover, the GPUs and, even more so, the DSPs and ISPs in your phone or PC are hidden from you in that they run code written by a very small number of people. You don't even have an idea of how many small DSP cores are scattered throughout a desktop-class chip, let alone what they do or how to program them. Effectively they are for the internal use of a very small number of hardware and software vendors, and the software is very much tied to the hardware.
The reason computing hardware used to be open is that very few could make it, and they only stood to gain from making it usable in as many applications as possible, or at least so they thought. Once (almost-)cloning hardware products became increasingly cheap, with fewer and fewer vertically integrated hardware vendors (from fab to design), vertical integration moved to design+software, because that's how you fend off competition today.
Didn't Facebook also start some open server hardware initiative? I don't remember what happened with that...
I do agree that the current status is not great, and that we could all benefit from more open hardware design. I think it would benefit large companies as well.
It's alive and well, but not newsworthy, so it has faded out of the public's attention.
 https://www.youtube.com/watch?v=nT-TGvYOBpI#t=2824 (sec. 10 - http://geer.tinho.net/geer.blackhat.6viii14.txt )
†(as far as your objective is concerned. Yes, Google could choose not to have secure hardware, but that wouldn't change the end result that the market leaders in five years will have secure hardware — Google just wouldn't be among them.)
The smart grid requires a lot of General Purpose Computers to gather that data. However, this risk has already been considered. From the link in my previous comment (sec. 7):
> ... privacy [is defined as]: the effective capacity to misrepresent yourself. Misrepresentation is using disinformation to frustrate data fusion on the part of whomever it is that is watching you. ... Misrepresentation means putting a motor-generator between you and the Smart Grid. ...
In any case, as others have pointed out, the War isn't about centralization. The War is about the inability to turn a Turing complete system (the General Purpose Computer inside everything) into an appliance that doesn't run some programs. The universal nature of the computer puts a lot of power in the hands of the people, which scares some people and undermines many business models.
Thus there is a desire (possibly indirect) to wage war on this new threat by limiting how many General Purpose Computers end up in the end user's control and hobbling the rest with spyware/DRM. If everyone has dumb terminals and "appliances" that only run authorized software, the threat of people actually using the power inherent in every General Purpose Computer is neutralized.
This war is ongoing right now, with small battles happening in every "appliance" or "service" that pretends a Turing complete computer is an appliance. The war is far from over, but we are losing a little bit more every time some piece of technology is centralized.
It is absolutely in Google's best interests to externalize security for its customers as a differentiator of Google Cloud. The parent article itself links to the white paper that outlines how this is done for Google Cloud.
I understand how one may consider this a "closed ecosystem" from one perspective. However, from a customer's point of view, any startup or mom-and-pop can leverage these very complex and expensive world-class security developments, whereas in the past this access was reserved for the very select few who could afford it. When the barrier to entry is lowered and access is commoditized, the customer wins.
(work at Google Cloud)
The good news is that the "evil" mega corps aren't so evil, and generally contribute back.
Also, the bar for custom hardware is dropping, and the recent Moore's-law stall means a good board design can live for 3-6 years instead of 1-2. In turn, this means that niche operating systems like BSD and open-source Solaris have increasingly stable hardware targets.
Taken together, this is great news.
I don't expect startups or mom-and-pops to build internal clouds. I do expect medium to large companies to do so. The current market turns innovations such as these into competitive advantages for Google, instead of directly exposing the innovation to the market and allowing anyone to incorporate its advantage into their own product. It's a less liquid market. Either you buy all of Google's solution as a package, or you buy none. You can't pick, sort and mash together a solution of your own from disparate parts.
Note that Google here is just an example. All companies of similar size in IT are doing the same, and strategically it is the correct option (for them). I'm just stating that the overall result is sub-optimal, through a confluence of disparate factors.
I agree. Consider what's at stake for them. I can't even begin to wrap my head around how bad it would be if an entire server farm got rooted. At least when defending a bank you know what the attacker's endgame is: steal money/SSNs. If a server farm were hacked, you'd see identity theft, blackmail, massive customer (and e-commerce) downtime, malware distribution, DDoS/large botnets, market manipulation (if you started spreading false news about a particular company, at scale, on social media), perhaps brute-force RSA/SSL cracking. If those guys got hacked, it could be an absolute shitstorm. So I don't blame them at all for creating their own TPM or whatever.
I don't understand why the customer can't get these benefits AND the ecosystem be open as well.
AFAIK, the consumer systems that are most resistant to physical attack (and that lack spooky things like Intel's system management CPU) are game consoles. The hardening is a requirement for anti-piracy and anti-cheating, and in newer generations of consoles it's been quite successful. Recent iPhones are a distant second in terms of security architecture.
Source? Apple does some pretty sophisticated stuff around hardware security mechanisms and software correctness.
(I genuinely want a technical description of the mechanisms consoles use these days so I can read it -- not trying to start an argument...)
Until a random cosmic ray triggers Google to revoke access for mom and pop.
There's tradeoffs, and most people don't discover them until the random cosmic ray hits the fan. And, to be fair, most Google customers probably never encounter a RCR event.
In reality, mom-and-pops don't need these security developments, because mom-and-pops have a much smaller attack surface on a server running in their back room. The cloud requires that a server be manageable over the Internet, but in many situations that isn't necessary. And in many cases, the limited needs of a mom-and-pop company don't require their infrastructure to be public-facing on the Internet at all.
I'd argue the only reason one needs these "world-class security developments" is because Google itself is a world-class target. The sort of threats you're defending against would almost never be necessary for a smaller business with an on-premises solution to be concerned about.
Many small businesses internally need little more than a shared network drive, rudimentary user management, etc. And you'd be stunned how many businesses today still operate off a single AOL mail account.
Disclosure: I work on security at Google.
Google's security measures here largely are a result of a security problem Google created in the first place. That isn't unusual, mind you. Web design is much the same way. We create new problems via added complexity, then have to solve them.
The whole threat model that requires you put custom silicon in your servers just doesn't apply or matter to smaller parties.
This article is about the custom security silicon in Google servers, and Google Cloud employees selling the false concept that this is a must-have for anyone but themselves. This has nothing to do with 2FA, and 2FA, in case you're curious, works everywhere not powered by Google Cloud too.
Do not attack people when you do not know the topic of the conversation you are participating in.
Hmm. Is that really a benefit or Enron-ing yourself? I can see how it allows efficient ramp-up of small companies which can't afford a whole sysadmin for ops, but the big downside is that the return on capital goes to the megacompanies, to whom you are paying rent.
I can sort of see why it's happened, because it's very hard to capture the added value of software by selling it. Especially in this area, the value of software is driven to zero. Whereas restructuring as a service and putting up barriers to entry brings back the margins for the developer.
We almost need an update to Coase's Theory of the Firm to account for the disruptive effects of IP-heavy organisation.
Cloud's biggest benefit is really convenience, because you don't need to go to the datacenter and put in another hard drive yourself when you need more space. That's absolutely not worth the price premium most of the time. Large companies could hire hardware jockeys in-house for a fraction of the cost and smaller companies can't afford such conveniences.
Chasing the cloud has cost companies a massive amount of money. I'm familiar with companies that would've saved nearly a million bucks a year by staying on bare metal/in-rack hypervisors. They weren't forced into the cloud by a shortage of people who could perform hardware maintenance; they went because it was the hip thing to do.
Cloud does have some benefits and there are specific applications that are smarter to run in the cloud than on colocated hardware, but they're almost never going to be cheaper to run in the cloud.
You can sometimes save money, sort of indirectly. For example, if your MySQL application is struggling and you put it on Aurora and it runs fine there, then you've saved tons of labor costs in exchange for the cost of your Aurora instance, which isn't cheap but is probably cheaper than consulting time. Even that is a short-lived benefit, though, because at some point the cumulative monthly rent crosses the threshold, and it locks you into an application that can only run well on Amazon RDS.
I read it as sarcasm. ;)
It wouldn't be more open even if they didn't spin their own hardware. You won't ever see it nor have access to it on a low level - it's 'the cloud'. The only thing it tells us is that they have reached a scale where custom hardware makes things cheaper, more reliable and more manageable for them.
Of course. If the alternative is non-development, we are better off. However, my point is that the alternative is de-coupling: one company develops and markets the innovative hardware solution, and other companies build services on top of that solution. This would be more open; it would present a more liquid market.
Not necessarily. Balancing forces always come up when this kind of thing happens. The closing-in of big players may give a big boost to open-source hardware alternatives, which, combined with the advent of 3D printing, may very well lead to the democratization of hardware...
This will ensure more modularity in the market. And more competition as well, because barriers are lower.
Companies could only be as large as X before starting to pay prohibitively large taxes in order to stay 'for-profit entities', or else becoming companies devoted to the public good.
So you'd end up with large telecoms who were non-profits dedicated to improving the level of global interconnectivity, and lots and lots of tiny 2-10 man companies that did research or sales.
Similar arguments could be made for UEFI and the like. Even though anyone could write code for it, only a handful of major suppliers actually do.
Are those any more accessible than this?
It may be in a long time, though...
Isn't it more correctly stated as moving fixed costs to marginal costs? Capex is mostly fixed and opex is mostly variable, so you are not strictly incorrect.
Its related work & extension page has a ton of references to other things showing just how much work it is to stop regular black hats in systems without verified software. Nation states just do more of the same stuff.
Edmison's has a nice survey & design for when you don't trust anything outside the SoC:
The NSA targets those levels, too. At that point, your bases are covered so long as the fab receives and doesn't alter your design, and the complex tooling works...
Edit: it seems I have copied the wrong video; please use the link in the child comment.
Why would the NSA eavesdrop on Google? They are in bed with them, aren't they?
Of course they can still use courts to get data directly from Google, but that way they can always only target individuals or small groups, not whole nations.
Because at the time Google did not use encryption on the network links between its data centers, the NSA was able to siphon up a whole lot of information that way - maybe more than just interactions between users and Google services, potentially as much as interactions between internal components of Google.
Anyway, Google and other companies responded by employing encryption in these links, and promoting the use of encryption across a number of other protocols. The Snowden disclosures were in my perspective a catalyst for Google's promotion of HTTPS in the Chrome browser, and TLS in Gmail, and probably Certificate Transparency. The fact that these initiatives such as encryption between data centers started after the disclosures suggests that they were a response from Google to thwart this kind of surveillance.
Personally I would go further: those multi-billion-dollar companies should have found a way to speak up if a contractor was able to, not just after the fact.
First, they can force Google to hand over anything. NSLs and the PRISM program are evidence of this, and both are relatively narrow in scope. However, each time the feds compel a top-tier tech corporation like Google to cooperate, the entire thing is scrutinized by lawyers on both sides, and risks drawing the ire of pissed off employees. There's probably many more potential Mark Kleins in Google than there are at a telecom like AT&T—the latter company's relationship with the NSA being best categorized as incestuous.
That being the case, why conduct bulk data collection overtly when they can do it covertly? The aforementioned overt measures ensure prompt data access in the event of an emergency, and keep everyone thinking they're on the up and up. Meanwhile, the truly nasty stuff like domestic bulk collection is conducted behind the scenes.
I don't know. Maybe it is easier than going to court all the time? Who knows?
It is interesting that they are doing some variant of trusted computing, mostly because their homogeneity allows Google to build a robust containment architecture, with much more rigorous whitelisting and robust software distribution rules that go beyond what a measuring host and local software bundle verification can do. So, defense in depth.
We (Skyport Systems) do the same thing as a service for enterprises (we sell and operate cloud-managed trusted systems as a service), and I will say it's pretty hard to get people to think about depth and trustworthiness when the entire security industry has trained CIOs to believe that all they need to do is install some random agent on their VMs.
Good for Google.
Why not just shred all decommissioned disks? Someone must be buying them for enough money that Google created a multi-step process for cleaning and verifying them. Presumably Google keeps disks in commission until they're no longer economic in their own operation.
So, does anyone know about the operation that makes profitable use of disks that are no longer economic for Google?
You can't really verify a shredding; the pile of shredded remains no longer has a serial number. I assume the cleaning process cryptographically verifies the identity of the disk both before and after the wipe, making it impossible to sneak a drive out.
For those drives that fail the cleaning process, they probably have a complex process with multiple witnesses to ensure each one actually gets shredded.
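Roughly, I imagine something like this (a toy sketch of the idea, not Google's actual tooling; the device path, the block-sampling check, and the "two independent operators" policy are all just my assumptions for illustration):

    import os
    import secrets

    DEV = "/dev/sdX"   # hypothetical device path for the drive being audited
    BLOCK = 4096

    def verify_wiped(dev, samples=1024):
        # Spot-check random blocks and confirm they read back as zeros
        # after the overwrite pass.
        with open(dev, "rb") as d:
            size = d.seek(0, os.SEEK_END)
            for _ in range(samples):
                d.seek(secrets.randbelow(size // BLOCK) * BLOCK)
                if any(d.read(BLOCK)):
                    return False
        return True

    # The drive's reported identity (serial, WWN) would be recorded before
    # the wipe and re-checked here, so the unit leaving custody is provably
    # the same unit that was wiped. Then a second operator repeats the
    # check independently.
    assert verify_wiped(DEV)

The point being that you verify a specific physical unit, not just "a wiped drive".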
> “We enable hardware encryption support in our hard drives and SSDs and meticulously track each drive through its lifecycle. Before a decommissioned encrypted storage device can physically leave our custody, it is cleaned using a multi-step process that includes two independent verifications. Devices that do not pass this wiping procedure are physically destroyed (e.g. shredded) on-premise.”
Interesting. There were discussions in the past about how to wipe HDDs, and whether multiple passes were really necessary or not.
Then SSDs became the problem, since there is an interface between what you see (from the OS) and where the data really is (inside those chips). Now Google not only encrypts data before saving it (that should be enough, no?) but also tries to wipe using multiple passes and two verifications.
Wonder how many companies do that.
Most of these drives use cryptographic keys even if you don't use a password on the device. Think about it as an SSD manufacturer: what's the easiest way to wipe a drive? To actually go and zero out every cell on the disk, or to overwrite a very small cryptographic key with a new one, effectively destroying the data without the need for any other write cycles to occur?
Pretty easy to verify - if you have an SSD with support for this, which most do now.
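The principle is easy to demonstrate in software. This is only a toy model of what the controller does, not actual SSD firmware (it uses the Python 'cryptography' package purely for illustration):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    media_key = AESGCM.generate_key(bit_length=256)  # lives inside the controller
    nonce = os.urandom(12)

    # Everything written to flash is ciphertext under media_key.
    stored = AESGCM(media_key).encrypt(nonce, b"user data block", None)

    # "Secure erase" = overwrite the tiny key, not terabytes of cells.
    media_key = AESGCM.generate_key(bit_length=256)

    # The old ciphertext is still physically in the flash, but unreadable.
    try:
        AESGCM(media_key).decrypt(nonce, stored, None)
    except Exception:
        print("data is unrecoverable without the old key")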
That's not the reason why encryption is always on; flash endurance is. Encrypting the data before FEC means that it will have a random distribution, which avoids pathological worst cases with certain workloads. You could also use a different (cheaper) scrambler than AES (like CPUs do), but since encryption is a marketable feature...
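You can see the effect the scrambler is after with a quick toy test: a pathological all-zero workload becomes a roughly even mix of bits once encrypted (again just an illustration using the Python 'cryptography' package, not what any firmware actually runs):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    block = b"\x00" * 4096                                 # worst-case workload
    key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, block, None)

    ones = sum(bin(b).count("1") for b in ciphertext)
    print(ones, "of", len(ciphertext) * 8, "bits set")     # roughly half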
Which are also switching to AES and offering memory encryption in current mainstream architectures.
They basically wipe the drive first, verify it appears to be wiped, and then shred it. The highest level allows for particles of only 0.5 mm^2, with tolerance up to 1.5 mm^2.
If data is encrypted, then in theory destroying the key should be sufficient, given that the encryption is good (ChaCha20-Poly1305 or AES).
Imagine that your user was Coca-Cola and they uploaded their recipe. They wouldn't be happy if in 100 years the encryption was cracked.
Far-fetched? Maybe slightly, but it is a real consideration.
If you are going to go to that much effort, why not physically destroy the drive anyway? You might still want to test the drives to flag up problems in your process, but if you have the facility locally, why not use it for all drives instead of paying an external party to do some of them?
It would be interesting to know where you can buy old Google disks. The volume should be rather high.
And it's all open source and nicely documented for anyone who cares to look. With a bit of work you can actually create your own chain of trust and run your own verified boot process.
It's very cool.
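If you strip away the formats, the chain-of-trust idea itself fits in a few lines. This is only a conceptual sketch (Ed25519 via the Python 'cryptography' package), not the actual verified-boot data structures or key hierarchy:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    root = Ed25519PrivateKey.generate()     # root key; public half burned into ROM/fuses
    bl_key = Ed25519PrivateKey.generate()   # key the bootloader will use

    bootloader = b"...bootloader image..."
    kernel = b"...kernel image..."

    # At build/signing time, each stage signs the next one down the chain.
    bl_sig = root.sign(bootloader)
    kernel_sig = bl_key.sign(kernel)

    # At boot, each stage refuses to hand off unless the signature verifies
    # (verify() raises an exception on mismatch).
    root.public_key().verify(bl_sig, bootloader)      # ROM checks the bootloader
    bl_key.public_key().verify(kernel_sig, kernel)    # bootloader checks the kernel
    print("chain of trust intact, boot continues")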
My solution was to print the TCB on a process node coarse enough to be verifiable by eye, then verify a random sample of each batch, possibly speeding it up with image-processing algorithms if producing the same component or components repeatedly.
Note: deviations from intended circuitry in deep sub-micron processes can have measurable differences at the analog or RF level. DARPA is funding research to do such things. A monitor on a visually verifiable node could then be combined with CPUs on a cutting-edge node. Common practice in the commercial sector is obfuscation, though.
This paper is approachable; it's understandable without too much background if you're interested in the topic.
>"There's plenty more in the document, like news that Google's public cloud runs virtual machines in a custom version of the KVM hypervisor."
Does anyone know if this "container inside KVM" approach is true of their internal infrastructure as well, or is it just an extra layer of security for their public-facing cloud?
But whereas Nintendo's chip was DRM, this Google chip appears to be more about determinism in boots and server provisioning, allowing them to immediately cut out a server that appears malicious or that has been compromised.
I.e., pry open the case to insert an implant, the chip notices the BIOS has been altered and sends the "don't trust me" message to the network.
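As a rough sketch of that flow (my own guess at the mechanism, not Google's code; the hash comparison and the "drain the machine" reaction are assumptions):

    import hashlib

    # Measurement of the firmware image that was actually signed/expected.
    KNOWN_GOOD = hashlib.sha256(b"...signed BIOS image...").hexdigest()

    def measure_and_attest(firmware_bytes):
        # The security chip hashes the firmware before letting it run.
        measurement = hashlib.sha256(firmware_bytes).hexdigest()
        if measurement != KNOWN_GOOD:
            # e.g. refuse to release secrets and tell the fleet scheduler
            # to drain and quarantine this machine
            return "don't trust me"
        return "ok"

    print(measure_and_attest(b"...signed BIOS image..."))   # ok
    print(measure_and_attest(b"...BIOS with implant..."))   # don't trust me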
From the viewpoint of a government agency, that's a tremendous surveillance enabler. It's really hard to imagine it's not been compromised.
They may indeed be really good at securing their data but 'their data' ironically is derived from my emails and browsing history and that of my friends.