Google reveals its servers all contain custom security silicon (theregister.co.uk)
363 points by chris-at on Jan 16, 2017 | 125 comments



This is another signal of an interesting development on the hardware front. What used to be decoupled, with some companies building hardware and other companies buying it, is now coupled and hidden within these mega-companies (Google, Amazon, FB).

Google is big enough to develop a trusted hardware solution for internal use only; it has no financial need to sell it. Worse, due to competitiveness in the cloud segment, it is disincentivized from selling the solution.

Amazon Glacier is another one. It's an interesting long-term storage solution, whose hardware implementation is unavailable to the market, since AMZN can better exploit it as a service under AWS.

We are heading into a more closed ecosystem than we have been used to until now. The cloud, which gave us the immense positive benefit of moving all capex to opex, is birthing an immense negative side effect: closing off hardware implementations in favour of exploiting the added value in the form of services.


It's not the cloud - it's the sad downside of the democratization of hardware design, as in fabs like TSMC and IP companies like ARM making it relatively cheap to make your own chips with competitive functionality in a wide range of areas. There's a lot of custom hardware outside the cloud, say in embedded electronics, that's just as closed as the stuff in server farms - closed specs and no way to program the thing, increasingly often no ability to run binaries unsigned by someone in a small set of vendors.

Moreover, the GPUs and, even more so, the DSPs and ISPs in your phone or PC are hidden from you in that they run code written by a very small number of people. You don't even have an idea how many small DSP cores are scattered throughout a desktop-class chip, let alone what they do or how to program them. Effectively it's for internal use of a very small number of hardware and software vendors, and the software is very much tied to the hardware.

The reason computing hardware used to be open is that very few could make it and they only stood to gain from making it usable in as many applications as possible, or at least so they thought. Once (almost-)cloning hardware products became increasingly cheap, with fewer and fewer vertically integrated hardware vendors (from fab to design), vertical integration moved to design+software because that's how you fend off competition today.


Another good example that I have experience with: closed firmware blobs in everyone's wifi chipsets and cell phone basebands. Older-generation ARM cores are cheap enough to embed in peripherals these days, making it easy to hide your functionality in hard-to-extract embedded software.


They are even embedded in micro SD cards...

https://www.bunniestudios.com/blog/?p=3554


Pretty sure this has nothing to do with Google manufacturing their own silicon. It's an open secret that standard Intel server chips contain "special silicon" with features which are only switched on for certain customers. I'm pretty sure this is what Google is referring to. Source: https://techreport.com/forums/viewtopic.php?t=118026


Google has been doing custom security hardware for 5 years in Chromebooks[0].

[0] https://chrome.googleblog.com/2011/07/chromebook-security-br...


Very much true. I don't think the two reasons are exclusive; it is probably a mix of factors: lowered development costs, economies of scale, and the need for a competitive edge in a market that slopes into commoditization.


It seems a bit counterintuitive that open hardware results in less choice, so I disagree. I think that hardware is getting more and more open, and drivers for it too. With FPGAs it is (relatively) straightforward to create your own crypto processor and integrate it into the system. PCBs are also getting easier and cheaper to make. I hope there will also be open PCB designs that incorporate crypto chips and functionality outside of CPUs, so everyone can start building their own servers, if desired.

Didn't Facebook also start some open server hardware initiative? I don't remember what happened with that...

I do agree that the current status is not great, and that we could all benefit from more open hardware design. I think that it would also benefit large companies as well.


> Didn't Facebook also start some open server hardware initiative? I don't remember what happened with that...

It's alive and well (the Open Compute Project), but not newsworthy, so it has faded from the public's attention.


Could you link?




The fact that you're reading an article about a paper Google just published suggests it's not as closed as your doomsaying might suggest. Also the paper notes that Google is one of the largest contributors of bugs and CVEs to KVM, which is a security tide that will raise a lot of boats.


I don't think your use of "doomsaying" is fair, and detracts from a useful comment.


More ground is lost in the cold[1] civil war[2] for control of the General Purpose Computer. I hope that everyone choosing to centralize computing power likes the future they are creating.

[1] https://www.youtube.com/watch?v=nT-TGvYOBpI#t=2824 (sec. 10 - http://geer.tinho.net/geer.blackhat.6viii14.txt )

[2] http://boingboing.net/2012/08/23/civilwar.html


It's not a choice†, it's market forces. You'll never change the world effectively if you don't start by correctly diagnosing the problem.

†(as far as your objective is concerned. Yes, Google could choose not to have secure hardware, but that wouldn't change the end result that the market leaders in five years will have secure hardware — Google just wouldn't be among them.)


What is different about centralized compute power compared with centralized energy production?


Your centralised energy supplier can't monitor what you're doing with the energy, or exfiltrate your results, or even stop you from doing it.


Yes they can. See smart meters. With analytics, they can determine every appliance in your house, and know exactly when and where you come and go at all hours of the day.


> smart meters

> analytics

The smart grid requires a lot of General Purpose Computers to gather that data. However, this risk has already been considered. From the link in my previous [1] (sec 7):

    ... privacy [is defined as]: the effective capacity
    to misrepresent yourself.

    Misrepresentation is using disinformation to frustrate
    data fusion on the part of whomever it is that is
    watching you. ... Misrepresentation means putting
    a motor-generator between you and the Smart Grid. ...
If smart meter monitoring becomes commonplace, there are solutions that can be deployed. In case of a pedantic reading of that quote, I'm sure Dan Geer was merely listing examples. Further isolation from the grid should probably include some amount of local energy storage to smooth out the usage rates in addition to electrical isolation.
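To make that concrete, here is a toy Python sketch (all numbers invented, not any real metering system): if a local buffer always draws a flat average from the grid, per-appliance spikes never reach the meter, which is exactly the kind of disinformation Geer describes.

    # Toy model: a pre-charged local buffer draws a constant average from
    # the grid, so the meter never sees per-appliance load signatures.
    appliance_load = [0, 0, 3, 3, 0, 1, 0, 5, 0, 0, 1, 1]  # kW per interval

    avg_draw = sum(appliance_load) / len(appliance_load)

    buffer_level = 10.0  # pre-charged storage; sizing is the practical limit
    meter_readings = []
    for load in appliance_load:
        buffer_level += avg_draw - load   # storage absorbs the difference
        meter_readings.append(avg_draw)   # grid side only sees the average

    print("meter sees: ", meter_readings)  # flat: nothing to disaggregate
    print("actual use: ", appliance_load)  # spiky: what analytics would want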

In any case, as others have pointed out, the War isn't about centralization. The War is about the inability to turn a Turing complete system (the General Purpose Computer inside everything) into an appliance that doesn't run some programs. The universal nature of the computer puts a lot of power in the hands of the people, which scares some people and undermines many business models.

Thus there is a desire (possibly indirect) to wage war on this new threat by limiting how many General Purpose Computers end up in the end user's control and hobbling the rest with spyware/drm. If everyone has dumb terminals and "appliances" that only run authorized software, the threat of people actually using the power inherent in every General Purpose Computer is neutralized.

This war is ongoing right now, with small battles happening in every "appliance" or "service" that pretends a Turing complete computer is an appliance. The war is far from over, but we are losing a little bit more every time some piece of technology is centralized.


I never read the War on General Purpose Computing as being about centralization so much as about DRM restrictions on content and restricted developer support. In other words, it's not about deploying to the cloud versus deploying to a billion individual devices. It's about not deploying anywhere at all.


It's a sign of changing times indeed, but for the consumer's benefit.

It is absolutely in Google's best interests to externalize security for its customers as a differentiator of Google Cloud. The parent article itself links to the white paper that outlines how this is done for Google Cloud.

I understand how one may consider this a "closed ecosystem" from one perspective. However, from a customer point of view any startup or mom-and-pop can leverage these very complex and expensive world-class security developments, whereas in the past this access has been reserved to the very select few that could afford it. When the barrier to entry is lowered and access is commoditized, customer wins.

(work at Google cloud)


I've worked on both sides of the fence. My take on it is that the cloud is raising the security bar in many dimensions, but lowering it in others. Frankly, for-profit surveillance (and government conspirators) is at the top of my list of security concerns.

The good news is that the "evil" mega corps aren't so evil, and generally contribute back.

Also, the bar for custom hardware is dropping, and the recent Moore's law stall means a good board design can live for 3-6 years instead of 1-2. In turn, this means that niche operating systems like BSD and open-source Solaris have increasingly stable hardware targets.

Taken together, this is great news.


> However, from a customer point of view any startup or mom-and-pop can leverage these very complex and expensive world-class security developments, whereas in the past this access has been reserved to the very select few that could afford it.

I don't expect startups or mom-and-pops to build internal clouds. I do expect medium to large companies to do so. The current market turns innovations such as these into competitive advantages for Google, instead of directly exposing the innovation to the market and allowing incorporation of its advantage into anyone's product. It's a less liquid market. Either you buy all of Google's solution as a package, or you buy none. You can't pick, sort and mash a solution of your own from disparate parts.

Note that Google here is just an example. All companies of similar size in IT are doing the same, and strategically it is the correct option (for them). I'm just stating that the overall result is sub-optimal through a confluence of disparate factors.


> and strategically it is the correct option (for them).

I agree. Consider what's at stake for them. I can't even begin to wrap my head around how bad it would be if an entire server farm got rooted. At least defending a bank you know what the attacker's endgame is: steal money/SSNs. If a server farm were hacked, you'd see identity theft, blackmail, massive customer (and e-commerce) downtime, malware distribution, ddos/large botnets, market manipulation (if you started spreading false news about a particular company, at scale, on social media), perhaps brute-force RSA/SSL cracking. If those guys got hacked, it could be an absolute shitstorm. So I don't blame them at all for creating their own TPM or whatever.


> I understand how one may consider this a "closed ecosystem" from one perspective. However, from a customer point of view any startup or mom-and-pop can leverage these very complex and expensive world-class security developments, whereas in the past this access has been reserved to the very select few that could afford it. When the barrier to entry is lowered and access is commoditized, customer wins.

I don't understand why the customer can't get these benefits AND the ecosystem be open as well.


Pressure from governments to not supply consumers with hardware that is resistant to surveillance is one reason.

AFAIK, the consumer systems that are most resistant to physical attack (and that lack spooky things like Intel's system management CPU) are game consoles. The hardening is a requirement for anti-piracy and anti-cheating, and in newer generations of consoles it's been quite successful. Recent iPhones are a distant second in terms of security architecture.


> Recent iPhones are a distant second in terms of security architecture.

Source? Apple does some pretty sophisticated stuff around hardware security mechanisms and software correctness.

(I genuinely want a technical description of the mechanisms consoles use these days so I can read it -- not trying to start an argument...)


I don't have any details, but without an exploit a game console will never run a single line of code that isn't signed. It makes the attack surface rather smaller than an iPhone's.


Without an exploit, how does one run unsigned code on an iPhone, exactly?



You're still signing the app when you side load it in that way.


I would agree with you if they cannot be updated via an internet connection.


You can't have an open ecosystem when part of the ecosystem is hundreds if not thousands of full-time security experts and 24x7 opsec analysts. It is an integrated software and operations system. The software alone doesn't get you anywhere.


> any startup or mom-and-pop can leverage these very complex and expensive world-class security developments,

Until a random cosmic ray triggers Google to revoke access for mom and pop.

There are tradeoffs, and most people don't discover them until the random cosmic ray hits the fan. And, to be fair, most Google customers will probably never encounter an RCR event.


Let's be clear: The customer never wins when the product is closed. Google used to understand that: https://googleblog.blogspot.com/2009/12/meaning-of-open.html

In reality, mom-and-pop don't need these security developments, because mom-and-pop have much less attack surface on a server running in their back room. The cloud necessitates it be possible to manage a server over the Internet, but for many situations that isn't necessary. And in many cases, the limited needs a mom-and-pop company has don't require their infrastructure be public facing on the Internet at all.

I'd argue the only reason one needs these "world-class security developments" is because Google itself is a world-class target. The sort of threats you're defending against would almost never be necessary for a smaller business with an on-premises solution to be concerned about.

Many small business internally need little more than a shared network drive, rudimentary user management, etc. And you'd be stunned how many businesses today still operate off a single AOL mail account.


Strongly disagree. Mom and pop businesses get owned all the time and close as a result (see Krebs On Security for cites). The economics of online attacks mean that even smallish targets are not obscure enough to be safe.

Disclosure: I work on security at Google.


People's Google accounts get owned all the time too. None of these extra security measures Google is talking about help if you have bad security practices or your password is 123456.

Google's security measures here are largely a result of a security problem Google created in the first place. That isn't unusual, mind you. Web design is much the same way. We create new problems via added complexity, then have to solve them.

The whole threat model that requires you put custom silicon in your servers just doesn't apply or matter to smaller parties.


Your comment is extremely bad. It's totally and utterly false that nothing Google does helps against bad passwords. Google has one of the best 2FA setups compared to pretty much everybody else: they support TOTP, SMS, and U2F.
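For what it's worth, TOTP is an open standard (RFC 6238) that's simple enough to sketch in a few lines of Python; the secret below is a made-up example value, nothing Google-specific:

    # Minimal TOTP (RFC 6238): HMAC-SHA1 over a 30-second time counter,
    # dynamically truncated to 6 digits (RFC 4226).
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # matches any standard authenticator app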


Your comment is extremely bad, because we aren't talking about TOTP (an open standard), SMS (an open standard), or U2F (an open standard).

This article is about the custom security silicon in Google servers, and Google Cloud employees selling the false concept that this is a must-have for anyone but themselves. This has nothing to do with 2FA, and 2FA, in case you're curious, works everywhere not powered by Google Cloud too.

Do not attack people when you do not know the topic of the conversation you are participating in.


You are really good at trolling.


> immense positive benefit of moving all capex to opex

Hmm. Is that really a benefit or Enron-ing yourself? I can see how it allows efficient ramp-up of small companies which can't afford a whole sysadmin for ops, but the big downside is that the return on capital goes to the megacompanies, to whom you are paying rent.

I can sort of see why it's happened, because it's very hard to capture the added value of software by selling it. Especially in this area, the value of software is driven to zero. Whereas restructuring as a service and putting up barriers to entry brings back the margins for the developer.

We almost need an update to Coase's Theory of the Firm to account for the disruptive effects of IP-heavy organisation.


It reduces the resources needed to bootstrap your own company. Of course once it reaches a certain size, it makes it more cost effective to host your resources internally. There are also many organizations who do not want to deal with a whole division dedicated to maintaining a reliable, distributed, secure computer network for their employees and would be more willing to pay for the cloud rather than the extra people required to maintain the in-house solution.


This is not really true. Not "once you reach a certain size", but once you move off the lowest-tier starter hardware, cloud quickly becomes much more expensive than owning hardware, and security solutions continue to depend on the individual administrators (most backups do too).

Cloud's biggest benefit is really convenience, because you don't need to go to the datacenter and put in another hard drive yourself when you need more space. That's absolutely not worth the price premium most of the time. Large companies could hire hardware jockeys in-house for a fraction of the cost and smaller companies can't afford such conveniences.


For "convenience" substitute "having the technical sophistication to manage it properly". How many companies know enough to hire and retain top-notch people to run a datacenter and keep upgrading it?


Renting a rack is not the same as running a data center. The technical sophistication is available. It can be hired on the employment market or contracted as an ad-hoc service. Many colos offer a remote-hands service, for a fee, to have their staff do things like install disks on your behalf, and there is always the traditional managed server.

Chasing the cloud has cost companies a massive amount of money. I'm familiar with companies that would've saved nearly a million bucks a year by staying on bare metal/in-rack hypervisor. They weren't forced into the cloud by a shortage of people who could perform hardware maintenance, they went because it was the hip thing to do.


That's also not strictly true. I can spin up redundant cloud server instances on three continents for roughly the same price as three instances in a data center physically near me. If I need to put a drive in a physical server on the other side of the world, I'm dead in the water. It's only a matter of convenience if there's a reasonable, cheaper alternative.


While I'm sure there are unique situations that for whatever bizarre reason work out where cloud is cheaper, they're pretty rare. Most of the time, if you have a server "on the other side of the world" you're in a data center where you can ask the datacenter's staff to install another disk for you and pay any associated fee. If you put it in a datacenter that doesn't offer such services and you're 5000 miles away, that was probably a bad call.

Cloud does have some benefits and there are specific applications that are smarter to run in the cloud than on colocated hardware, but they're almost never going to be cheaper to run in the cloud.

You can sometimes save money sort of indirectly. For example, if your MySQL application is struggling and you put it on Aurora and it runs fine there, then you've saved tons of labor costs in exchange for the cost of your Aurora instance, which isn't cheap, but is probably cheaper than consulting time, but even this is a short-lived benefit because at some point the monthly rent crosses the threshold, and it locks you into an application that can only run well on Amazon RDS.
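As a back-of-the-envelope illustration of that crossover threshold (all figures invented for the sake of the arithmetic):

    # Rent-vs-own: owning is big upfront capex plus small monthly opex;
    # cloud is zero upfront but higher monthly rent.
    hw_upfront    = 20_000.0  # servers + install, one-time
    own_monthly   = 700.0     # colo space, power, remote hands
    cloud_monthly = 2_000.0   # equivalent instances + managed services

    breakeven = hw_upfront / (cloud_monthly - own_monthly)
    print(f"owning pays off after ~{breakeven:.1f} months")  # ~15.4 here

    for months in (6, 12, 24, 36):
        own = hw_upfront + own_monthly * months
        cloud = cloud_monthly * months
        print(f"{months:>2} mo: own ${own:>9,.0f}   cloud ${cloud:>9,.0f}")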


>Is that really a benefit or Enron-ing yourself?

I read it as sarcasm. ;)


"We are heading onto a more closed ecosystem than we are used to up until here. "

It wouldn't be more open even if they didn't spin their own hardware. You won't ever see it nor have access to it on a low level - it's 'the cloud'. The only thing it tells us is that they have reached a scale where custom hardware makes things cheaper, more reliable and more manageable for them.


Google joined open compute last year.


> It wouldn't be more open even if they didn't spin their own hardware.

Of course. If the alternative is non-development, we are better off. However, my text states that the alternative is de-coupling: One company develops and markets the innovative hardware solution, other companies build services on top of that solution. This would be more open, it would present a more liquid market.


> We are heading into a more closed ecosystem than we have been used to until now.

Not necessarily. Balancing forces always come up when these kinds of things happen. The closing-in of big players may give a big boost to open source hardware alternatives, which, combined with the advent of 3D printing, may very well lead to the democratization of hardware...


I think we should limit companies to a maximum of N employees.

This will ensure more modularity in the market. And more competition as well, because barriers are lower.


I read a sci-fi story with this premise.

Companies could only be as large as X before starting to pay prohibitively large taxes in order to stay 'for profit entities' or become companies devoted to the public good.

So you'd end up with large telecoms who were non-profits dedicated to improving the level of global interconnectivity, and lots and lots of tiny 2-10 man companies that did research or sales.


Do you have a link to this? It sounds interesting, thanks.


I like this idea. Maybe we should limit governments to a population of N citizens for the same reasons.


interesting idea, impossible to enforce. we'd end up with even more contractors, and mega-conglomerates of hundreds of companies with ~N employees each.


Is it really any different in practice? A TPM is installed in basically every computer and used in basically none. Even though it's been a ubiquitous "standard" for two decades, it's effectively impossible to correctly do attestation.

Similar arguments could be made for UEFI and the like. Just because someone can write code for it doesn't mean many do; only a handful of major suppliers actually do.

Are those any more accessible than this?


I think you would have a hard time showing this is any less closed than earlier days. For good and bad. IBM used to be at your door to replace hardware you didn't know was broken yet, and couldn't have fixed if you had known.


It's probably going to be just a cycle. Much like energy is very centralized with a lot of custom hardware (think nuclear power plants) but tends to have decentralized alternatives (solar panels) for some advantages, you'll probably end up with the same cycle in computing power. Once a single affordable machine is able to serve all your applications to all your customers with zero maintenance cost (because innovation will keep going), you'll probably switch away from the cloud.

It may be a long time from now, though.


http://opencompute.org proves the exact opposite


Thinking from another perspective, is that hardware just hardware? Not really; these are solutions, with tons of software supporting them. There is no point selling individual pieces, and they would not work out of the box.


> gave us the immense positive benefit of moving all capex to opex

Isn't it more correctly stated as moving fixed costs to marginal costs? Capex is mostly fixed, and opex is mostly variable, so you are not strictly incorrect.


Looking at what's going on in Shenzhen[1] in regards to hardware hacking, I'm not worried.

[1] https://youtu.be/SGJ5cZnoodY


This is how things have always been. It's the reason for our patent system, to incentivize companies to open up their proprietary tech.


So basically what was assumed to be a linear trend is turning out to be a dialectic.


This is what security looks like when your threat model is well funded government agencies.


Same threat model as with black hats. Here's a recent example from the high-security field that's remarkably simple but stops tons of attacks:

http://www.cc.gatech.edu/grads/c/csong43/oakland16-hdfi.pdf

Its related work & extension page has a ton of references to other things showing just how much work it is to stop regular black hats in systems without verified software. Nation states just do more of the same stuff.

Edmison's thesis has a nice survey & design for when you don't trust anything outside the SoC:

https://theses.lib.vt.edu/theses/available/etd-10112006-2048...

NSA targets those levels, too. At that point, your bases are covered so long as the fab receives and doesn't alter your design. Plus the complex tooling works...


Exactly, and I think it's worth noting that they likely only apply this level of security because of state actors. Which then shows that they are trying to prevent eavesdropping by the NSA & Co.; they probably just realized too late how far advanced they were.


Eric Grosse from Google says as much here, ...

Edit: it seems I have copied the wrong video, please use the link in the child comment.


This talk also discusses the adversaries: https://www.youtube.com/watch?v=0knR6vXba7g


. . . how far advanced, and the lengths they would go to.


> Which then shows that they are trying to prevent eavesdropping by the NSA & Co.

Why would the NSA eavesdrop on Google, they are in bed with them, aren't they?


Snowden revealed the opposite: https://cdn.grahamcluley.com/wp-content/uploads/2013/10/nsa-... NSA actively tries to eavesdrop on Google.


And the chips are probably the reaction to exactly that slide. Not only enabling SSL between datacentres, but verifying all the code that servers run, to prevent the NSA from downloading private keys from Google servers just because they handed an NSL to the datacentre operator.

Of course they can still use courts to get data directly from Google, but that way they can only ever target individuals or small groups, not whole nations.


Got links?


For me, the whole takeaway of the Snowden leaks was that the NSA can legally force Google (or anyone) to hand over basically anything; am I mistaken? Articles like [1] seem to underline that they are indeed working together.

[1] http://www.huffingtonpost.com/2014/05/06/nsa-google_n_527343...


The Snowden disclosures did not suggest that Internet companies like Google were coordinating with NSA. The disclosures suggested that NSA was wiretapping all traffic across the Internet, and then parsing it, storing it, and indexing it so as to be able to make sense of what traffic represented e.g. a Google web search, and then search that semantically later.

Because at the time Google did not use encryption in their network links between data centers, NSA was able to siphon up a whole lot of information that way - maybe more than just interaction between users and Google services, potentially as much as interaction between internal components of Google.

Anyway, Google and other companies responded by employing encryption in these links, and promoting the use of encryption across a number of other protocols. The Snowden disclosures were, from my perspective, a catalyst for Google's promotion of HTTPS in the Chrome browser, and TLS in Gmail, and probably Certificate Transparency. The fact that these initiatives such as encryption between data centers started after the disclosures suggests that they were a response from Google to thwart this kind of surveillance.


The NSA has requested large amounts of data from Google since ~2007 [1]. They might or might not have been ordered to keep silent about it. They only started to address it as a reaction to the Snowden revelations. Additionally, there are letters such as the one linked above that show they are in fact very friendly with the NSA. One can argue the hard evidence amounts to little more than an email here and there, but circumstantial evidence shows otherwise.

Personally I would go further: those multi-billion-dollar companies should have found a way to speak up if a contractor was able to, not just after the fact.

[1] https://en.wikipedia.org/wiki/PRISM_%28surveillance_program%...


If the NSA could force Google to hand over anything, why were there Snowden slides showing that the NSA was secretly tapping Google's internal networks?

https://www.washingtonpost.com/world/national-security/nsa-i...


>If the NSA could force Google to hand over anything, why were there Snowden slides showing that the NSA was secretly tapping Google's internal networks?

First, they can force Google to hand over anything. NSLs and the PRISM program are evidence of this, and both are relatively narrow in scope. However, each time the feds compel a top-tier tech corporation like Google to cooperate, the entire thing is scrutinized by lawyers on both sides, and risks drawing the ire of pissed off employees. There's probably many more potential Mark Kleins in Google than there are at a telecom like AT&T—the latter company's relationship with the NSA being best categorized as incestuous.

That being the case, why conduct bulk data collection overtly when they can do it covertly? The aforementioned overt measures ensure prompt data access in the event of an emergency, and keep everyone thinking they're on the up and up. Meanwhile, the truly nasty stuff like domestic bulk collection is conducted behind the scenes.


> The infiltration is especially striking because the NSA, under a separate program known as PRISM, has front-door access to Google and Yahoo user accounts through a court-approved process.

I don't know. Maybe it is easier than going to court all the time? Who knows?


Snowden didn't reveal anything even suggesting this "front door access". He's got some PowerPoint slides that say PRISM and some that have a Google logo on them. All meaning of these materials has been supplied by internet conspiracy theorists.


An abusive, rapey relationship is not the same thing as being "in bed".


That link is anything but an abusive relationship. Still, I fail to see how silicon is a countermeasure to a court order.


When Google has its networks compromised and is routinely compelled to comply with NSLs, I'd say that counts as abusive.


The actual document - https://cloud.google.com/security/security-design/ - was linked previously.

It is interesting that they are doing some variant of trusted computing, mostly because their homogeneity would allow Google to build a robust containment architecture with much more rigorous whitelisting and robust SW distribution rules that go beyond what a measuring host and local SW bundle verification can do. So: defense in depth.

We (Skyport Systems) do the same thing as a service for enterprises (we sell and operate cloud-managed trusted systems as a service), and I will say it's pretty hard to get people to think about depth and trustworthiness when the entire security industry has trained CIOs to believe that all they need to do is install some random agent on their VMs.

Good for Google.


"Before a decommissioned encrypted storage device can physically leave our custody, it is cleaned using a multi-step process that includes two independent verifications. Devices that do not pass this wiping procedure are physically destroyed (e.g. shredded) on-premise"

Why not just shred all decommissioned disks? Someone must be buying them for enough money that Google created a multi-step process for cleaning and verifying them. Presumably Google keeps disks in commission until they're no longer economic in their own operation.

So, does anyone know about the operation that makes profitable use of disks that are no longer economic for Google?


Probably because of the verification.

You can't really verify a shredding; the pile of shredded remains no longer has a serial number. I assume the cleaning process cryptographically verifies the identity of the disk both before and after the wipe, making it impossible to sneak a drive out.

For those drives which fail the cleaning process, they probably have a complex process with multiple witnesses to ensure it actually gets shredded.
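A toy model of why a wipe is verifiable and a shredding isn't (pure speculation, not Google's actual tooling): the drive's identity survives a wipe, so two independent parties can each attest that this exact serial read back blank.

    # Identity survives the wipe, so the wipe can be attested per serial;
    # a pile of shredded remains can't be tied to a serial at all.
    import hashlib, os

    class Drive:
        def __init__(self, serial):
            self.serial = serial
            self.blocks = [os.urandom(16) for _ in range(64)]  # "user data"

        def wipe(self):
            self.blocks = [b"\x00" * 16 for _ in self.blocks]

    def verify_wiped(drive):
        blank = all(b == b"\x00" * 16 for b in drive.blocks)
        fingerprint = hashlib.sha256(drive.serial.encode()).hexdigest()[:12]
        return fingerprint, blank

    d = Drive("SER123")
    d.wipe()
    for verifier in ("verifier-A", "verifier-B"):  # two independent checks
        fp, ok = verify_wiped(d)
        print(verifier, fp, "clean" if ok else "FAILED -> shred on-premise")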


My guess about the decommissioned disks is that HDD manufacturers will sometimes give a hefty discount on disks if they can have them back at the end to run diagnostics on them. My company has a no-disks-leave-the-company policy and there has been talk about modifying this for the discount on disks.


I'd imagine it's easier to recycle an intact HDD than a jumbled mix of silicon, ceramic, steel, aluminum and what not.


Just speculation but they could be used elsewhere in Google for test/dev or pre-prod environments.


> Disks get the following treatment:

> “We enable hardware encryption support in our hard drives and SSDs and meticulously track each drive through its lifecycle. Before a decommissioned encrypted storage device can physically leave our custody, it is cleaned using a multi-step process that includes two independent verifications. Devices that do not pass this wiping procedure are physically destroyed (e.g. shredded) on-premise.”

Interesting. There were discussions in the past on how to clean HDDs, and whether multiple passes were really necessary or not.

Then SSDs became the problem, since there is an interface between what you see (from the OS) and where the data really is (inside those chips). Now Google not only encrypts data before saving (that should be enough, no?) but also tries to wipe using multiple passes and two verifications.

Wonder how many companies do that.


If you use on-board crypto on most SSDs, there's a dedicated place for key storage and using the SSD's onboard wipe feature just changes the key and TRIMs the whole drive.

Most of these drives use cryptographic keys even if you don't use a password on the device. Think about it as an SSD manufacturer - what's the easiest way to wipe a drive? To actually go and zero out every cell on the disk or to overwrite a very small cryptographic key with a new one - effectively destroying the data without the need for any other write cycles to occur.

Pretty easy to verify - if you have an SSD with support for this, which most do now.
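Here's a toy demonstration of the crypto-erase idea (obviously not real SSD firmware; it uses the third-party Python `cryptography` package): the "wipe" is replacing one 32-byte key rather than rewriting every cell.

    # Data on flash is always ciphertext under an on-controller key, so
    # rotating that one small key makes all the old data unrecoverable.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    device_key = AESGCM.generate_key(bit_length=256)  # lives in the controller
    nonce = os.urandom(12)
    stored = AESGCM(device_key).encrypt(nonce, b"user data on flash", None)

    # "Secure erase": overwrite the key, not the cells.
    device_key = AESGCM.generate_key(bit_length=256)

    try:
        AESGCM(device_key).decrypt(nonce, stored, None)
    except Exception as e:  # InvalidTag: the ciphertext is now garbage
        print("old data unreadable after key rotation:", type(e).__name__)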


> Think about it as an SSD manufacturer - what's the easiest way to wipe a drive?

That's not the reason why encryption is always on. Flash endurance is; encrypting the data before FEC means that it will have a random distribution, which avoids pathological worst cases with certain workloads. You could also use a different (cheaper) scrambler than AES (like CPUs do [1]), but since encryption is a marketable feature...

[1] Which are also switching to using AES and offering memory encryption in current mainstream architectures.


Ah, interesting. That's really cool. I guess it makes things easier for them and better for their customers in several ways at once.


'multi-step process' doesn't imply they are wiping more than once, only that their procedure consists of multiple steps (e.g. wipe + verification)


The article specifies two independent verifications.


I remember reading up on the BND (German Intelligence Agency) Guidelines on how they wipe their data.

They basically wipe the drive first and verify it appears to be wiped and then shred it. The highest level allows for only 0.5mm^2 sized particles with tolerance up to 1.5mm^2.

If data is encrypted, then in theory destroying the key should be sufficient, given that the encryption is good (ChaCha20-Poly1305 or AES).


It also depends on how long you want the data to be safe. So if you are storing user data, you probably don't want to release drives containing encrypted user data, as you don't know how long they wanted that data to remain secret.

Imagine that your user was Coca-cola and they uploaded their recipe. They wouldn't be happy if in 100 years the encryption was cracked.

Far-fetched? Maybe slightly, but a real consideration.


Well, yes, that's what you shred them for.


Check out "Opal" and "SED" https://en.m.wikipedia.org/wiki/Opal_Storage_Specification. Many (most?) drives support it these days. When I briefly looked a year or two ago basically all drives had the physical capability, but some firmwares didnt expose it to the iser. As always key management is the hassle.


> Devices that do not pass this wiping procedure are physically destroyed on-premise

If you are going to go to that much effort, why not physically destroy the drives anyway? You might still want to test them to flag up problems in your process, but if you have the facility locally, why not use it for all drives instead of paying an external party to do some of them?


They don't say that the drives are all destroyed by the external parties. And even if they are, I could imagine that proper recycling is easier if you have the full drives and not a shredded mixture of all the materials in it.


I thought they'd shred all disks? This reads as if most drives (all that pass the test) are sold to others.

Would be interesting to know where you can buy old Google disks? Should be rather high volume.


They likely recycle the disks rather than sell them. I'd imagine that reclaiming the materials is cheaper with intact disks compared to shredding.


I guess at their scale and the business they're in, it's cheaper to dedicate a few engineer-hours/days/weeks to implement an overkill wiping procedure rather than arguing with potential customers that "no, it's not really feasible to extract usable data with an electron microscope despite what you have read on the interwebs". Or even worse, losing said customers if they're not persuaded by your arguments.


A lot of stuff from this made its way into the Chromebook. There's a verified boot process, hardware-assisted key management, rollback protection, ...

And it's all open source and nicely documented for anyone who cares to look. With a bit of work you can actually create your own chain of trust and run your own verified boot process.
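The core idea is small enough to sketch. Roughly (my own toy model using Ed25519 via the Python `cryptography` package, not Chrome OS's actual vboot data structures): a baked-in root public key verifies stage 1, which in turn carries the key that verifies stage 2, and so on.

    # Two-stage chain of trust: only the root *public* key is trusted a
    # priori (in ROM); each stage's signature is checked before handoff.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Done offline at signing time:
    root_sk = Ed25519PrivateKey.generate()
    stage1_sk = Ed25519PrivateKey.generate()
    stage1_code, stage2_code = b"bootloader image", b"kernel image"
    stage1_sig = root_sk.sign(stage1_code)
    stage2_sig = stage1_sk.sign(stage2_code)

    # At boot (verify() raises InvalidSignature if anything was tampered with):
    root_sk.public_key().verify(stage1_sig, stage1_code)
    stage1_sk.public_key().verify(stage2_sig, stage2_code)  # key ships in stage 1
    print("chain verified: every stage vouched for by the one before it")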

It's very cool.


What is this called? I'd like to look into it, but cursory searching is giving me more vague results.


But if they don't own an IC fab, how do they know it is secure?


They don't. Too many attack vectors. I illustrate here:

https://news.ycombinator.com/item?id=10468624

My solution was to print the TCB on a process node that was verifiable by eye, then verify a random sample of each batch. This could possibly be sped up with image-processing algorithms when producing the same component or components.

Note: deviations from intended circuitry in deep sub-micron can have measurable differences at the analog or RF level. DARPA is funding research to do such things. A monitor on a visible node could then be combined with CPUs on a cutting-edge node. Common practice in the commercial sector is obfuscation, though.


Basically splitting the trusted circuit and testing the parts separately. This requires a trusted master circuit, but it can be arbitrarily small.

See https://perso.uclouvain.be/fstandae/PUBLIS/177.pdf


But what if the malicious code is time activated? (just an example)


This is actually addressed in the paper. Basically you can use testing to detect the timebomb, up to a negligible probability.

This paper is approachable; it's understandable without too much background if you're interested in the topic.


I was curious about this:

>"There's plenty more in the document, like news that Google's public cloud runs virtual machines in a custom version of the KVM hypervisor."

Does anyone know if this "container inside KVM" is true of their internal infrastructure as well, or if it's just an extra layer of security for their public-facing cloud?


Internal Google stuff does not use KVM and that's one reason it took them a while to offer VMs — they had little experience with it.


Do you or anyone else know if there is another reason for doing this besides security?


I can't speak for Google, but there are several reasons. Docker and k8s are not multitenant, so if you want to build a public k8s cloud you need a tenant layer under it. That layer could also be containers (e.g. LXD), but then you're talking about secure nested containers which was not really available in November 2014.


Oh good insight. That makes a lot of sense. Thanks.


I'd barely dug into the article when it came to me: Google just did a lockout chip, 1980s Nintendo style.


I suppose if you're looking for a sound bite, yes.

But whereas Nintendo's chip was DRM, this Google chip appears to be more about determinism in boots and server provisioning, allowing them to immediately cut out a server that appears malicious or that has been compromised.

I.e. pry open the case to insert an implant, the chip notices the BIOS has been altered and sends the "don't trust me" message to the network.
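In toy form (my own illustration, not Google's actual protocol):

    # Measure the firmware, compare to a known-good value, report health.
    import hashlib

    KNOWN_GOOD = hashlib.sha256(b"approved BIOS build 42").hexdigest()

    def attest(firmware_image):
        measurement = hashlib.sha256(firmware_image).hexdigest()
        return "healthy" if measurement == KNOWN_GOOD else "don't trust me"

    print(attest(b"approved BIOS build 42"))             # healthy
    print(attest(b"approved BIOS build 42 + implant"))   # don't trust me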


Makes me think of Intel's IME. It has legitimate uses on corporate desktops and servers. But when it makes its way to consumer desktops it runs face first into a massive conflict of interest.


IME's huge problem is its shroud of secrecy. The CPU can do just about anything on the bus, it has access to external ports, and the code it runs is encrypted.

From the viewpoint of a government agency, that's a tremendous surveillance enabler. It's really hard to imagine it's not been compromised.


Still it doesn't make me want to use their services.

They may indeed be really good at securing their data but 'their data' ironically is derived from my emails and browsing history and that of my friends.



