Doesn't this require one to trust the hardware? I'm not an expert in hardware root of trust, but if Intel (or whatever chip maker) decides to just sign code that doesn't do what they say it does (coerced or otherwise), or someone finds a vuln, wouldn't that defeat the whole purpose?
I'm not entirely sure this is different than "security by contract", except the contracts get bigger and have more technology around them?
We have to trust that the hardware manufacturer (Intel/AMD/NVIDIA) designed their chips to execute the instructions we inspect, so we're assuming trust in vendor silicon either way.
The real benefit of confidential computing is to extend that trust to the source code too (the inference server, OS, firmware).
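For anyone unfamiliar, the mechanism behind that is remote attestation: the chip signs a hash (a "measurement") of everything loaded into the enclave, and the client compares it against the measurement of a known-good build before sending any data. Here's a minimal sketch of that check in Python, with hypothetical structures rather than any real vendor API (actual reports from SEV-SNP/TDX are signed binary blobs):

```python
import hashlib

# Measurement of the known-good software stack (firmware + OS + inference
# server), ideally computed from a reproducible build you can audit yourself.
# The value here is a stand-in for illustration.
EXPECTED_MEASUREMENT = hashlib.sha384(b"audited-build-artifacts").hexdigest()

def vendor_signature_is_valid(report: dict) -> bool:
    # Placeholder: real verification walks the vendor's certificate chain
    # (Intel/AMD/NVIDIA root certs) to confirm the report came from genuine
    # silicon. Stubbed out for this sketch.
    return report.get("signature_ok", False)

def verify_attestation(report: dict) -> bool:
    # 1. Confirm the report was signed by the vendor's hardware key.
    if not vendor_signature_is_valid(report):
        return False
    # 2. Confirm the enclave is running exactly the code we audited.
    return report.get("measurement") == EXPECTED_MEASUREMENT

# Example: a report with the expected measurement passes; anything else fails.
print(verify_attestation({"measurement": EXPECTED_MEASUREMENT, "signature_ok": True}))  # True
print(verify_attestation({"measurement": "something-else", "signature_ok": True}))      # False
```

So the chip vendor stays in the trust base either way, but with attestation you no longer have to take the operator's word for what software is handling your data.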
Hi Nate. I routinely use your various networking-related FOSS tools. Surprised to see you now working in the AI infrastructure space, let alone co-founding a YC-funded startup! Tinfoil looks über neat. All the best (: