Hacker News
The Foolishness of Self Destruct Switches (pluralistic.net)
21 points by worik 7 months ago | 4 comments



> This separate system can't be user-modifiable or field-updateable, because then malicious software could impersonate the user and disable the security chip.

I feel like I'm repeating myself after a decade or two, but this is a situation where the best answer is a physical switch for read-only mode. (Software isn't always the answer!)

Sure, it might not protect you from a spy who sneaks into your house to physically flip it before installing undetectable deep-backdoors... but that basically never happens when compared to conventional malware and hacks over the internet.

Of course, this assumes the goal of the trusted-chip is indeed to protect the user and the community around their computer at large... as opposed to a deceptive campaign by large companies and copyright-holders to cripple your computer for their own ends.
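A minimal sketch of the idea in the comment above: firmware consults a physical write-protect line before permitting any flash write, so no software path can re-enable writes. The function names and the GPIO stub are hypothetical, for illustration only; real implementations (e.g. write-protect screws or switches on some hardware) wire this into the flash controller itself.

```python
# Hypothetical illustration: a physical write-protect switch gating
# firmware updates. read_write_protect_pin() stands in for sampling a
# GPIO wired to the switch; it is not a real API.

def read_write_protect_pin() -> bool:
    """Stub for reading the hardware switch; True means read-only."""
    return True  # switch engaged: device is read-only

def flash_firmware(image: bytes) -> bool:
    """Refuse to write unless the physical switch has been flipped."""
    if read_write_protect_pin():
        # No software call can clear this condition; only physically
        # flipping the switch allows the write to proceed.
        return False
    # ... write image to SPI flash (omitted) ...
    return True
```

The point of the design is that malware running with full software privileges still cannot cross the gate: the check depends on a physical state it cannot change.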


The fact is that scuttling a spaceship would be trivial for its crew; the hard part of space is keeping the constant threat of death at bay. You don't need a button, any more than you would on a submarine. You just need an order given to a trained crew that knows their vessel. For things like secret information, use the same systems they do in embassies, which again boils down to an order given to people who know what to do.


Yes

But to protect your technology you'd want a big bomb

What could possibly go wrong?


In many ways, an insightful piece. Trust is one thing, but then also:

"Trusted computing creates a no-go zone where programs can change their behavior based on whether they think they're being observed."

Things generalize beyond computers. A further thing he missed is the issue of developing a secure TPM (or whatever term you want to use). SGX should be a warning.



