Hacker News

Earlier, it was software controlled. Now, it is not. Isn't that a good thing?


It is still software controlled, just by software on a separate processor that is far harder to access than before. I'm not sure that's a good thing.


Why not?


The basic idea is that this is an area of the system that even you as the owner cannot inspect, but I'll let others do the bulk of the explaining:

https://www.gnu.org/philosophy/right-to-read.en.html

http://boingboing.net/2012/08/23/civilwar.html

http://boingboing.net/2012/01/10/lockdown.html


It's interesting to me that you feel this way. Do you also object to the Secure Enclave architecture on the iPhone?


I object to the iPhone entirely, and, to an only slightly lesser extent, to Android devices. Securing against non-owner attacks is one thing, but securing against the owner is repulsive to me. It is sad that the single adjective "secure" has been confusingly used for both.


That makes sense, and is a totally coherent point of view. You're more than typically clueful about these subjects, and it's interesting to get the "against" take on the SEP. Like a lot of systems-y security people, I tend to see it as an unalloyed good, and a big part of what makes the iPhone the most secure computing platform in the industry. But nobody can reasonably argue that it's the free-est.


I certainly do. By the very design of Apple's iOS security, you must trust Apple that it is actually secure. We cannot know how secure it is, because we cannot know how it is implemented in hardware or software: it is all proprietary.

Meanwhile, I can reasonably trust, say, crypto++ and its component libraries, because I can (and when I use it, do) audit its source code. I may not be able to trust the processor that the eventually compiled binary runs on, but that is a separate problem that several fronts (POWER8, RISC-V) are moving towards solving.

And no, at the end of the day, you have to trust someone untrustworthy with your computers. We don't have access to fabrication hardware to make them ourselves, and even if we did... the rabbit hole continues: how can you trust the fab? Instead, you minimize your risk and keep the surface area of exploitation as small as possible. It is harder to make a whole CPU, whose instruction set you can exercise, operate against your will (because you can see the output and compare results) than a black box of silicon that you just get cryptographic data out of.
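The "see the output and compare results" idea above is essentially differential testing: exercise an instruction and cross-check it against an independent implementation. A toy sketch (illustrative only; the function name is made up, and a real audit would run the reference on separately trusted hardware, since here both paths execute on the same CPU):

```python
import random

def shift_add_mul(a: int, b: int) -> int:
    """Multiply via shift-and-add, avoiding the CPU's native multiply path."""
    result = 0
    while b:
        if b & 1:        # if the low bit of b is set, add the shifted a
            result += a
        a <<= 1          # a * 2
        b >>= 1          # drop the low bit of b
    return result

# Differential check: the CPU's native multiply must agree with the
# independent shift-and-add implementation on random inputs.
for _ in range(1000):
    x, y = random.getrandbits(32), random.getrandbits(32)
    assert x * y == shift_add_mul(x, y)
print("all multiply checks passed")
```

The same pattern scales up: any unit whose inputs and outputs you can observe can be spot-checked against a reference, which is exactly what an opaque secure element does not allow.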

But without source and design documentation for the products I own, I have no reason to believe they are anything but malicious, with planted backdoors to circumvent any of my attempts at privacy. That makes using computers for private purposes borderline impossible without a full libreboot system.



