
tl;dr of confidential computing:

In normal cloud computing you are effectively trusting the cloud provider not to look at or modify your code and data. Confidential computing uses built-in CPU features to prevent anyone from seeing what is going on in (a few cores of) the CPU (and, in EPYC's case, to encrypt all RAM accesses). Very roughly, these CPU mechanisms include the ability to produce a digital signature of the current state of the CPU and memory, signed by private keys baked into the CPU by the manufacturer. The CPU only emits this signature when in the special "secure mode", so if you receive the signature and validate it, you know the exact state of the machine the CPU is running in secure mode. You can, for example, start a minimal bootloader, remotely validate that it is running securely, and only then send it a key over the network to decrypt your proprietary code.
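That bootloader flow can be sketched roughly like this. Everything here is a made-up illustration: the function names, key handling, and protocol are assumptions, and HMAC stands in for what is really an asymmetric signature with a CPU-held private key verified against the manufacturer's certificate chain (as in, e.g., SEV's attestation reports):

```python
import hashlib
import hmac
import secrets

# Stand-in for the key baked into the CPU at manufacture time.
# (Real schemes use a private key the CPU never reveals; the verifier
# checks signatures with the manufacturer's public key instead.)
MANUFACTURER_KEY = secrets.token_bytes(32)

def cpu_attest(measurement: bytes, nonce: bytes) -> bytes:
    """The CPU, in secure mode, signs its current state plus a fresh nonce."""
    return hmac.new(MANUFACTURER_KEY, measurement + nonce, hashlib.sha256).digest()

def verify_attestation(measurement: bytes, nonce: bytes, signature: bytes) -> bool:
    """The remote tenant checks the signature against the expected
    measurement (a hash of the minimal bootloader it asked to boot)."""
    expected = hmac.new(MANUFACTURER_KEY, measurement + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# Tenant flow: only release the decryption key after attestation succeeds.
bootloader_hash = hashlib.sha256(b"minimal bootloader image").digest()
nonce = secrets.token_bytes(16)  # fresh nonce prevents replaying old reports
sig = cpu_attest(bootloader_hash, nonce)
release_key = verify_attestation(bootloader_hash, nonce, sig)
```

The nonce matters: without it, the cloud provider could replay an attestation signature from an earlier, honest boot.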

Effectively, it increases your trust in the cloud from P(cloud provider is screwing me over) to P((cloud provider AND CPU manufacturer are both working together to screw me over) ∪ (cloud provider has found and is exploiting a vulnerability in the CPU)).

Disclaimer: I work for Google but nowhere remotely related to this (I know only publicly available information about this product); I happened to do very similar research work 6 years ago in grad school.




... except in this product, the software to "remotely validate it is running securely" is provided by the same party that is running the cloud.

So it increases your trust in the cloud from P(cloud provider is screwing me over) to P((cloud provider is screwing me over) ∪ (only the "cloud ops" department in my cloud provider wants to screw me over, and they cannot get help from anyone else in the cloud provider)).

Not a very big change if you ask me.


That's a nice theory. In reality, VMs have no innate source of randomness and call into their hypervisor for that sweet sweet entropy - just as they ask hypervisors to map hardware into their address space, which drivers then proceed to innately trust.

This improves the situation by an infinitesimally small amount.


Is this true? I thought modern server CPUs had access to true randomness that didn't need hypervisor mediation. Or does the hypervisor have the ability to trap RDRAND?


Sounds cheaper to run job batches on a local server than to run them encrypted on a remote mainframe, if you want that level of security.

I mean, what is the switching overhead of signing the VM memory?


> I mean, what is the switching overhead of signing the VM memory?

Not much. The crypto is implemented in hardware: encryption is added transparently as data is stored to memory and removed as it is fetched back. The CPU contains unencrypted data, but the rest of the components never see the plaintext.
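A toy model of that "transparent on store, removed on fetch" behavior (purely illustrative: a keystream derived per address stands in for the hardware AES engine keyed per VM that SEV actually uses):

```python
import os
from hashlib import sha256

# Stand-in for the per-VM memory encryption key held by the memory controller.
MEM_KEY = os.urandom(32)
ram = {}  # what physically sits in DRAM: ciphertext only

def _keystream(addr: int, length: int) -> bytes:
    # Toy address-tweaked keystream (hardware uses AES with an address tweak).
    return sha256(MEM_KEY + addr.to_bytes(8, "little")).digest()[:length]

def store(addr: int, plaintext: bytes) -> None:
    """CPU encrypts on the way out to DRAM."""
    ram[addr] = bytes(p ^ k for p, k in zip(plaintext, _keystream(addr, len(plaintext))))

def load(addr: int) -> bytes:
    """CPU decrypts on the way back in."""
    ct = ram[addr]
    return bytes(c ^ k for c, k in zip(ct, _keystream(addr, len(ct))))

store(0x1000, b"secret data")
# The CPU-side view round-trips to plaintext; DRAM only ever held ciphertext.
```

Since the encrypt/decrypt sits inline on the memory path in dedicated hardware, the cost shows up as a small fixed latency per access, not a per-context-switch "signing" step.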


Does this solve a real problem? Such as, hardware owner leaking stuff from VMs was an issue?


The hypothetical possibility is enough to be a very real problem if decision makers perceive it to be.

And unlike facetious data locality laws that equate physical location with logical control, confidential/trusted computing might actually be able to address their (in my opinion not unfounded) concerns.


You can't prove they don't spy on you for their own gain (financial or otherwise); without this, a single rogue employee with physical access is all it takes. There are also plenty of small cut-rate cloud providers out there without much in the way of reputation.



