I've also worked in this field, but it feels like a foundation built on quicksand. You depend on so many turtle layers, and only one of them has to be adversarial and it's game over.
> it feels like a foundation built on quicksand. You depend on so many turtle layers, and only one of them has to be adversarial and it's game over
Interesting. Please elaborate.
Here's how I see it.
Reproducible builds: I think we'll eventually see Linux distributions like Debian make reproducible builds mandatory by enforcing them in apt-get's trust policy. The trust policy could be expressed as "I will only trust .deb packages whose build hash and source hash are signed by three different build pipelines I trust".
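To make that concrete, here's a minimal sketch of what such a policy check might look like. None of this is apt's actual code; the threshold, the signature structure, and the pipeline IDs are all made up for illustration:

```python
# Illustrative sketch only -- not apt's actual trust machinery.
# Policy: accept a .deb only if at least THRESHOLD independent build
# pipelines signed the same (source hash, build hash) pair.

from dataclasses import dataclass

THRESHOLD = 3  # hypothetical policy parameter

@dataclass(frozen=True)
class BuildSignature:
    pipeline_id: str   # which build pipeline produced this signature
    source_hash: str   # hash of the source package that was built
    build_hash: str    # hash of the resulting .deb
    valid: bool        # stand-in for real signature verification

def package_trusted(source_hash: str, build_hash: str,
                    sigs: list[BuildSignature],
                    trusted_pipelines: set[str]) -> bool:
    confirming = {
        s.pipeline_id
        for s in sigs
        if s.valid
        and s.pipeline_id in trusted_pipelines
        and s.source_hash == source_hash
        and s.build_hash == build_hash
    }
    # Count distinct pipelines, so one compromised builder
    # can't cast three votes for its own backdoored build.
    return len(confirming) >= THRESHOLD
```

The point of the threshold is that an attacker now has to compromise several independently operated build pipelines to sneak a non-reproducible package past the policy.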
Remote attestation: If you ensure that the server's CPU SoC and the TPM have different supply chains, you could construct a protocol where a supply-chain attacker would have to own both supply chains in order to impersonate the server.
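Here's a toy model of that dual-root idea. All names and formats are invented for illustration; real SoC attestation evidence and TPM quotes have their own wire formats and signature chains, which are glossed over as a `valid` flag:

```python
# Toy model of dual-root attestation. The client accepts the server's
# identity key only if BOTH independent hardware roots vouch for the
# same key and the same measured boot state.

from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    root: str            # "cpu-soc" or "tpm" -- independent supply chains
    server_key: bytes    # the server identity key being vouched for
    measurements: bytes  # measured boot state
    valid: bool          # stand-in for verifying a real signature chain

def accept_server(server_key: bytes, expected_measurements: bytes,
                  attestations: list[Attestation]) -> bool:
    vouching_roots = {
        a.root
        for a in attestations
        if a.valid
        and a.server_key == server_key
        and a.measurements == expected_measurements
    }
    # Require both independent roots: compromising one supply chain
    # is no longer enough to impersonate the server.
    return {"cpu-soc", "tpm"} <= vouching_roots
```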
Transparency logging: One of the projects I've been working on for the past four years is Sigsum (sigsum.org). It is a transparency log with distributed trust assumptions. Our goal was to figure out the essence of transparency logging technology, identify the most significant design parameters, and for each parameter minimise the attack surface. You'll find the threat model on our website.
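For a flavour of the core primitive involved, here's a generic RFC 6962-style Merkle inclusion proof check. This is not Sigsum's actual API (Sigsum additionally distributes trust via witness cosignatures on the signed tree head; see the threat model on sigsum.org), just the kind of verification a transparency log makes cheap for clients:

```python
# Generic RFC 6962/9162-style Merkle inclusion proof verification:
# recompute the tree root from a leaf and its audit path, and compare.

import hashlib

def leaf_hash(entry: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(entry: bytes, leaf_index: int, tree_size: int,
                     proof: list[bytes], root: bytes) -> bool:
    if leaf_index >= tree_size:
        return False
    fn, sn = leaf_index, tree_size - 1
    r = leaf_hash(entry)
    for sibling in proof:
        if sn == 0:
            return False  # proof is longer than the tree is deep
        if fn % 2 == 1 or fn == sn:
            r = node_hash(sibling, r)
            # Skip levels where our node has no left sibling.
            while fn % 2 == 0 and fn != 0:
                fn >>= 1
                sn >>= 1
        else:
            r = node_hash(r, sibling)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```

A client holding a signed (and, in Sigsum's case, cosigned) tree head only needs a logarithmic-size proof to check that its entry is really in the log, so misbehaviour by the log operator becomes detectable rather than something you have to trust away.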
Here's a recent presentation of mine on system transparency / runtime transparency / the technology underlying Apple PCC: https://www.youtube.com/watch?v=Lo0gxBWwwQE
I think the only shaky part is the Secure Enclave, which provides the root of the guarantees. From there, everything is attested, so if one layer is adversarial, the other layers can notice.