> Securely erasing saved keys is just as important as generating them. It’s especially challenging to do so on flash storage, where wear-leveling might mean multiple copies of data need to be erased. To address this issue, iOS devices include a feature dedicated to secure data erasure called Effaceable Storage. This feature accesses the underlying storage technology (for example, NAND) to directly address and erase a small number of blocks at a very low level.
I guess that means separate storage, as the main storage in recent iPhones is an NVMe SSD and not raw NAND attached to the processor.
BTW, is there a good / easy way to connect raw NAND to a normal desktop PC?
However, beyond this you need to know a bit more to interpret the raw data: the framing structure, error correction, scrambling, encryption, and read-error recovery algorithms. Much of this is non-standard or only available under NDA from the manufacturer.
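The Effaceable Storage design quoted above amounts to crypto-erase: instead of trying to scrub every wear-leveled copy of the data, you encrypt the bulk data and keep the key in a small region you can truly erase. Here's a toy sketch of that idea; the SHA-256 keystream "cipher" and all names are my own illustration, not Apple's actual construction:

```python
import os, hashlib

# Toy model of crypto-erase: bulk data is encrypted under a key held
# in a small, directly erasable region, so overwriting just those few
# bytes makes the (possibly wear-leveled, multi-copy) bulk data
# unrecoverable. Throwaway cipher for illustration only.

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

plaintext = b"user data spread across NAND"
effaceable = bytearray(os.urandom(32))          # small, truly erasable region
stored = xor_crypt(bytes(effaceable), plaintext)

assert xor_crypt(bytes(effaceable), stored) == plaintext  # readable while key exists

effaceable[:] = bytes(32)                       # "secure erase": wipe only the key

assert xor_crypt(bytes(effaceable), stored) != plaintext  # bulk data now unreadable
```

The nice property is that the erase cost is constant (a few blocks) no matter how much data is stored, which is exactly what you want when the flash controller won't let you reliably overwrite arbitrary pages.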
https://gist.github.com/computerality/3e0bc104cd216bf0f03f8d... <--- only the new parts
(Yeah, it says optional on the syllabus but Weaver said required in lecture.)
(It's just called something else.)
Historically Android has been lagging behind a bit from iOS devices when it comes to security, but Pixels and their software have a very similar security model and design (with some exceptions - less granularity with file-based encryption and some other mostly minor details).
Non-Google devices, however, are usually significantly less secure: not so much due to Android's design as due to manufacturers deliberately disabling Android's security features (e.g., if I remember correctly, only Pixel actually uses dm-verity at the moment), refusing to update their devices, building devices with bad TrustZone drivers, etc.
If you keep to the first-party (Google-branded) devices, as in the iOS world, you're mostly OK.
Could it be the case that Apple is leveraging TrustZone but with a customized L4 kernel? Or is it confirmed that the Secure Enclave is a custom IC designed by Apple? I wouldn't be surprised if it's the former as it becomes much cheaper to implement the required security features.
Edit: Check out this previous discussion on this exact topic: https://news.ycombinator.com/item?id=8410700
By default, no software runs on the TrustZone hardware. "Mobicore" (now called "Kinibi", from Trustonic) is based on L4.
No, 2012 was when Trustonic was formed from competing TEE vendors: ARM, Gemalto, and Giesecke & Devrient.
TrustZone has been around since before that; TI's OMAP parts were front-runners in using it.
Yeah, nation-state-level attacks will still work, especially if they have the phone. But with Android it's not nation-state level; it's corporate level, and maybe less if they have the phone.
I know of another similar implementation that's used by Microsemi for their FPGA-based secure boot process. They claim to protect the initial AES key transmission using an "obfuscated" crypto library that is sent to the processor over SPI on boot. Also, I wonder if Apple exchanges a nonce during the setup to prevent replay attacks?
: It's a C/C++ library called WhiteboxCRYPTO. There is a whitepaper (http://soc.microsemi.com/interact/default.aspx?p=E464), but AFAIK the gist of their argument is that the code and keys are sufficiently obfuscated to prevent reverse engineering (typical marketing-speak).
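On the nonce question upthread: a fresh-nonce challenge-response is the standard way to stop replay, whether or not Apple or Microsemi do it exactly this way. A minimal sketch of the generic pattern; the shared key, message names, and HMAC choice are my assumptions, not either vendor's actual protocol:

```python
import hmac, hashlib, os

# Generic challenge-response with a per-session nonce. A captured
# response is useless later because the verifier issues a new nonce
# each time. Illustrative only; not Apple's or Microsemi's protocol.

SHARED_KEY = os.urandom(32)  # provisioned out of band in a real system

def challenge() -> bytes:
    # Verifier sends a fresh random nonce each session.
    return os.urandom(16)

def respond(key: bytes, nonce: bytes, payload: bytes) -> bytes:
    # Prover binds the payload to this session's nonce, so a
    # recorded response cannot be replayed against a later nonce.
    return hmac.new(key, nonce + payload, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(respond(key, nonce, payload), tag)

nonce = challenge()
tag = respond(SHARED_KEY, nonce, b"boot-image-hash")
print(verify(SHARED_KEY, nonce, b"boot-image-hash", tag))   # True
# A replayed tag fails against a fresh nonce:
print(verify(SHARED_KEY, challenge(), b"boot-image-hash", tag))  # False
```

Without the nonce, an attacker on the SPI bus could simply record one valid exchange and replay it forever.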
I still downvoted izacus because it was uncharitable fanboy rambling. The charitable interpretation would be that the walled garden (in regards to the enclave) is a side effect of their implementation, and not the intention.
But to me, they seem to be trying to find a moderate level of security at a profitable cost of goods. It doesn't seem their heart is in it the way Apple's is with the Enclave. iOS is still breakable at the nation-state level, but that's quite a high bar: nation states are breakable at the nation-state level.
 - https://developer.android.com/about/dashboards/index.html#Pl...
iPhones are typically supported for 4 years.
The latter is important: at the end of the day, software can only be as secure as the hardware on which it is installed. For example if someone can tamper with the hardware random number generator then your crypto becomes compromised.
There are a bunch of areas that matter a lot only to a small group of people where, when you investigate, Apple has quietly been doing the right thing for a long time.
I know I certainly wish there was a Keychain access app like on macOS available for iOS rather than only being able to access passwords via Safari settings.
Do any better examples come to mind of apps that refuse to run unless they have an unreasonable permission granted?
The Chinese WeChat messenger also refuses to run unless location access is granted, even though messaging apps do not depend on location to work.
This type of behavior makes fine-grained permissions systems not very useful. It should be prohibited by the Apple App Store and Google Play Store.
If you allow user-generated usernames, what's to stop me signing up as Linus Torvalds or Hillary Clinton, and creating drama for the lulz?
Using the phone number as a unique and verifiable identifier seems like a pragmatic, if not perfect, choice. The SMS confirmation makes it much more difficult for me to impersonate Linus or Hillary, because I'd need to spoof their phone number _and_ respond to an SMS sent to it. Not nation-state secure, but better than nothing...
The other problem Moxie's trying to solve is the discoverability problem, which jwz _doesn't_ want solved (nor do people with abusive exes, or other categories of users Signal has often very vocally advocated for: "Use Tor. Use Signal. Use a VPN!!!"). Moxie wants to be able to calculate the intersection of your contact list with every other Signal user's contact list, so it can prompt you to let you know you can use Signal to communicate with them, which you'd otherwise probably not know. And as he says, to be most valuable, e2e encrypted messaging needs to become the default messaging channel under normal use, so it won't need to be installed/set up/learned under stress when its need becomes critical.
I think Signal's got the "soundbite message" of what they do very carefully crafted, and it's very enticing, but by nature soundbite-sized or elevator-pitch-sized messages inevitably leave out the complexity of edge cases.
I'm 99.99% sure Moxie isn't lying about what we could all read in the source code if we cared enough to spend the time reading it: all the people jwz is concerned about sending him Signal messages already had his phone number in their contact lists, so they could already have been sending him text messages. Moxie's view is that jwz is better off having all those people know they can _also_ contact him using e2e encrypted messaging. jwz doesn't agree, and doesn't think letting all those people know he has installed an encrypted messaging app is "privacy protecting". There's certainly merit in both points of view.
The "interesting" bit (to me) of Moxie's explanation of what happens is that Signal sends "truncated sha256 hashes" to the Signal servers so it can compute the intersection of all the numbers it scrapes from your contact list with everyone elses.
Seems to me there's just not enough entropy in phone numbers to make that nation-state secure.
If Moxie gets served a warrant (and an NSL), it won't take _too_ much effort to reverse all those truncated SHA intersections back into a social graph...
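To put a number on the entropy point: there are only about 10^10 possible ten-digit phone numbers, so an unsalted hash of one, truncated or not, falls to exhaustive search. A toy sketch; the truncation length and number format here are my assumptions, not Signal's actual scheme:

```python
import hashlib

# Demonstrates why hashing phone numbers doesn't hide them: the input
# space is tiny, so a brute-force scan (or a precomputed table over
# all ~10 billion numbers) inverts the hash. Parameters are made up.

def truncated_hash(number: str, length: int = 10) -> bytes:
    return hashlib.sha256(number.encode()).digest()[:length]

def reverse(target: bytes, prefix: str = "+1415555"):
    # Scan a tiny hypothetical range; a real attacker would just
    # precompute the whole number space once and reuse the table.
    for suffix in range(10_000):
        candidate = f"{prefix}{suffix:04d}"
        if truncated_hash(candidate) == target:
            return candidate
    return None

print(reverse(truncated_hash("+14155550123")))  # recovers the number
```

Truncating the hash only makes this easier (more collisions, same searchability); it does nothing to add entropy to the input.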
But then Moxie's POV seems to be "those people would get that same info from your telco records if you use SMS, and at least that's the _only_ metadata we leak; your telco probably hands that over without a warrant, along with at least the date/time of every SMS you've ever sent or received, and quite probably the contents as well..."
I lean a lot towards jwz's argument that they're _way_ overselling the privacy-preserving nature of Signal. Especially if one of your adversaries is someone who knows your mobile number and would benefit from knowing you choose to use encrypted communication (like, say, everybody in the UK right now...)
Incidentally, in the list of IDs you published, are those real? If they are: that's BS that you are publishing real people's IDs, and I'm also surprised by the number of numeric qq.com accounts.
Anyways, it's a privacy violation. Apple shouldn't be handing out your email without your permission. Facebook has a setting allowing you to choose whether you want your email to be public or not.
You must be new; here are some resources I suggest you review before you go on a crusade in future Apple articles:
Also, keep your bait to other sites like reddit, we don't stand for it here on HN.
Oh I have, and I am well aware of the links I chose and their accuracy, regardless of you having a different "primary goal" in mind.
Thank you though, but I've got it from here.
What difference does any of this make with iOS when we all know the US Gov't can simply access backdoors whenever they please? Don't fall for this security meme.
What does any of this matter when iCloud is a hacker's dream??
And I didn't ask if HN readers' phones CAN be hacked, I simply asked if they WERE hacked. F off, elitist troll.