Hacker News

Do they not know/care about security, or is it simply hard to have something like the Secure Enclave across all Android devices? Genuine question.

Mmm? A secure-enclave equivalent is present on most Android devices and has been mandatory since Android 6.0.

(It's just called something else.)

Historically Android has lagged a bit behind iOS devices when it comes to security, but Pixels and their software have a very similar security model and design (with some exceptions: less granularity with file-based encryption and some other mostly minor details).

Non-Google devices, however, are usually significantly less secure - not so much due to Android's design as due to manufacturers deliberately disabling Android's security features (e.g. only the Pixel actually uses dm-verity at this moment, if I remember correctly), refusing to update their devices, building devices with bad TrustZone drivers, etc.

If you keep to 1st-party (Google-branded) devices, as in the iOS world, you're mostly OK.
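For reference, dm-verity (mentioned above) works by hashing the partition's blocks into a tree whose root hash is the only thing the verified-boot chain has to trust: any tampering with any block changes the root. A toy Python sketch of that idea (the block size and tree layout here are made up for illustration; real dm-verity uses 4 KiB blocks and a specific on-disk hash-tree format):

```python
import hashlib

BLOCK_SIZE = 16  # toy block size; real dm-verity uses 4096-byte blocks

def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of the device image."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hashlib.sha256(b).digest() for b in blocks]

def root_hash(hashes: list) -> bytes:
    """Collapse the block hashes pairwise into a single trusted root."""
    while len(hashes) > 1:
        if len(hashes) % 2:
            hashes.append(hashes[-1])  # duplicate the last node on odd levels
        hashes = [hashlib.sha256(hashes[i] + hashes[i + 1]).digest()
                  for i in range(0, len(hashes), 2)]
    return hashes[0]

# The boot chain stores (and signs) only the root hash.
image = b"system partition contents .................."
trusted_root = root_hash(block_hashes(image))

# Any change to any block changes the root, so tampering is caught at read time.
tampered = b"system partition contents ...............X."
assert root_hash(block_hashes(tampered)) != trusted_root
```

In the real implementation the kernel verifies each block lazily against the tree as it is read, rather than hashing the whole partition up front.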

Yes, they're using the ARM Trusted Execution Environment rather than a separate Enclave chip running its own OS (L4). Apple is an ARM architecture licensee, designing their own compatible chips, so the TEE would have been an available path (compatibility is still required, no?), but they instead went the extra mile with a separate Enclave chip. As their white paper details, they also go to insane lengths with that chip, and moreover with its communications, rather than just trusting the TEE within an ARM chip and calling it a day.

More recent ARM chips (Cortex-A9 and later) come bundled with ARM TrustZone[1]. In a nutshell, the processor has two hardware-isolated execution environments, each running a different OS and different software. By default, the secure environment of TrustZone runs an L4 kernel (edit: this is incorrect, see reply below).

Could it be the case that Apple is leveraging TrustZone but with a customized L4 kernel? Or is it confirmed that the Secure Enclave is a custom IC designed by Apple? I wouldn't be surprised if it's the former as it becomes much cheaper to implement the required security features.

Edit: Check out this previous discussion on this exact topic: https://news.ycombinator.com/item?id=8410700

[1]: https://www.arm.com/products/security-on-arm/trustzone
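One conceptual way to picture the two-world model (this is only a Python analogy, not how SMC world switches actually work): the secure world owns the key material and exposes a single narrow call interface, so the normal world can use the key but never read it.

```python
import hashlib
import hmac
import os

class SecureWorld:
    """Toy stand-in for TrustZone's secure world: it owns the key and
    exposes only operations on it, never the raw key material."""

    def __init__(self) -> None:
        self._key = os.urandom(32)  # provisioned secret, never leaves here

    def smc_mac(self, message: bytes) -> bytes:
        """The narrow, SMC-like entry point the normal world may invoke."""
        return hmac.new(self._key, message, hashlib.sha256).digest()

# Normal world: can request operations, but cannot observe the key itself.
secure = SecureWorld()
tag = secure.smc_mac(b"attest this boot image")
assert len(tag) == 32
assert tag == secure.smc_mac(b"attest this boot image")  # same key, same MAC
```

The real mechanism is a CPU mode plus bus-level access control, so even kernel-level code in the normal world cannot address secure-world memory; the class boundary above is just a metaphor for that interface.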

> By default, the secure environment of TrustZone runs an L4 kernel.

By default, no SW runs on HW. "Mobicore" (now called "Kinibi", from Trustonic) is based on L4.

I know that mate, no need to get snarky. What I meant is that it was bundled with the core by default, but thanks for the correction. I thought I read it somewhere, but judging by a quick search, it seems I'm mistaken.

It depends on the HW manufacturer and SKU what is bundled or not. Even the ROM code can differ per SKU.

TrustZone was announced in 2012 (?). The Secure Enclave is a separate, very much Apple-designed chip. They've patented aspects of it, also dated 2012:

https://www.blackhat.com/docs/us-16/materials/us-16-Mandt-De...

https://www.google.com/patents/US8832465

> TrustZone was announced 2012

No, 2012 was when Trustonic was formed from competing TEE vendors: ARM, Gemalto, and Giesecke & Devrient.

TrustZone has been around since before that. TI's OMAP chips were front-runners in using it.

Yes, indeed they went above and beyond - probably because they need to defend not only against external threats, but also against the user of the device themselves, to keep the walled garden intact.

Yes (dunno why all the downvotes) but Apple went even further than the walled garden would require. They could have easily left an Apple backdoor. But they encrypt the protocol going over wires to/from the Enclave. They go insanely far rather than sufficiently far.

Yeah, nation-state-level attacks will still work, especially if they have the phone. But with Android it's not nation-state level. It's corporate level, and maybe less if they have the phone.

I felt that Apple's description of the initial key setup between the enclave and the main processor was hand-wavy at best.

I know of another, similar implementation used by Microsemi for their FPGA-based secure-boot process[1]. They claim to protect the initial AES key transmission using an "obfuscated" crypto library that is sent to the processor over SPI on boot[2]. Also, I wonder if Apple exchanges a nonce during the setup to prevent replay attacks?

[1]: https://www.microsemi.com/products/fpga-soc/security/secure-...

[2]: It's a C/C++ library called WhiteboxCRYPTO. There is a whitepaper (http://soc.microsemi.com/interact/default.aspx?p=E464), but AFAIK the gist of their argument is that the code and keys are sufficiently obfuscated to prevent reverse engineering (typical marketing-speak).
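On the nonce question: the standard pattern is challenge-response, where the verifier issues a fresh random nonce and the peer must MAC it with the shared key, so a captured response is useless against any later challenge. A toy sketch of that pattern (nothing here reflects Apple's or Microsemi's actual protocol; the pre-shared key stands in for whatever is provisioned at fabrication):

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # stand-in for a key provisioned at fabrication

def respond(key: bytes, nonce: bytes) -> bytes:
    """Prove possession of the key for this specific challenge."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(key, nonce), response)

# Fresh challenge: the response is bound to this nonce only.
nonce1 = os.urandom(16)
resp1 = respond(SHARED_KEY, nonce1)
assert verify(SHARED_KEY, nonce1, resp1)

# A recorded response replayed against a new challenge fails.
nonce2 = os.urandom(16)
assert not verify(SHARED_KEY, nonce2, resp1)
```

Without the nonce, an attacker on the bus could record one valid exchange and replay it forever, which is exactly the concern with a key handshake over SPI.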

Read the Apple patent I cited above. Apple isn't exporting an API so don't expect much in the docs. But they do have to teach in the patent.

There was an article about iOS security where someone argued that Apple controls the enclave for security reasons, to which I answered that this is basically security by obscurity. You can see I was downvoted for this: https://news.ycombinator.com/item?id=13676135

I still downvoted izacus because it was uncharitable fanboy rambling. The charitable interpretation would be that the walled garden (in regards to the enclave) is a side effect of their implementation, not the intention.

My OnePlus 3T uses dm-verity as well, sadly. Displaying an "unlocked" badge during boot is acceptable. Actually pausing boot for 10 seconds every time is not, by a long shot.

That's good to know when recommending devices.

Nexus 5x and 6p have dm-verity as well.

Yes, IMHO they know. Their Chrome security model was required reading in CS 261.


But to me, they seem to be trying to find a moderate level of security at a profitable cost of goods. It doesn't seem that their heart is in it the way Apple's is with the Enclave. iOS is still breakable at the nation-state level, but that's quite a high bar. Nation states are breakable at the nation-state level.

Do you have a source on the "breakable at the nation state level"? Last I heard they've only been able to compromise models before the 6.

"Breakable" and "have been broken" are different budgets.

It feels like users don't care about security. Nearly all Android phones are not running a supported OS.[1] As an Android user it appears my only choice is a custom ROM or buying a new device every 2 years.

[1] - https://developer.android.com/about/dashboards/index.html#Pl...

If you go with the Pixel, which is basically the iPhone of Android, you'll get an update experience similar to iOS.

You are mistaken. The Pixel has the same 2-year support length as the Nexus series.

iPhones are typically supported for 4 years.

Nitpick: Pixel has 3 years of security updates, and 2 years of Android OS updates.

I think you get something like 3 years - not as good as iOS, but by the 4th year you only get a very crippled version of the newest iOS anyway.

It's probably a little bit of both - they don't care as much when it comes to Android (since Android's "open" nature is one of its primary selling points) and they also don't control the entire supply chain like Apple does.

The latter is important: at the end of the day, software can only be as secure as the hardware on which it runs. For example, if someone can tamper with the hardware random-number generator, then your crypto is compromised.
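To make that concrete: if the hardware RNG is tampered with so that it effectively draws from a tiny seed space, any key derived from it can be recovered by brute force, no matter how strong the cipher around it is. A toy Python illustration (a hypothetical 256-seed backdoor, obviously not a real RNG design):

```python
import random

def backdoored_keygen() -> bytes:
    """A 'hardware RNG' tampered so its seed comes from only 256 values."""
    rng = random.Random(random.randrange(256))  # tiny hidden seed space
    return rng.randbytes(32)

key = backdoored_keygen()

# An attacker who knows about the tampering just enumerates every seed.
recovered = None
for seed in range(256):
    candidate = random.Random(seed).randbytes(32)
    if candidate == key:
        recovered = candidate
        break

assert recovered == key  # the 256-bit "random" key falls to a 256-step search
```

The Dual_EC_DRBG episode is the real-world version of this worry: a generator whose outputs look fine but are predictable to whoever holds the backdoor.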

I would guess that it's tougher to secure the OS when you don't own the hardware, although I don't know enough about this to comment.

Agreed. Apple's vertical integration is really nice here. As an Android OEM, it's tough to have to weigh all the trade-offs between different HW vendors (the best HW for battery/performance may not be so great for security or software support, etc.) versus being a consumer of some internal group.
