I guess that's because they know that chain of trust only gets you so far.
Eventually you're running something big, with bug after bug found every month and an attack surface that includes the local filesystem and the network. At that point the buzzwords make no difference.
Chain of trust won't reduce the attack surface, but adopting memory corruption mitigations and replacing C code with something with stronger memory-protection guarantees would -- and while some kinds of memory protection can be bolted on later with minimal disruption, minimizing C is best done from the start.
That sounds reasonable, but here we are with Android's endless security disaster and all their apps written in not-Java from the beginning.
The most cancerous aspects of Android are by design: you cannot control network exfiltration from apps, you cannot update or modify the OS pieces at will, and the apps monetize everything you do and everything they can find against you. Librem will answer these.
> that you cannot control network exfiltration from apps
GrapheneOS has a Network permission toggle, which is one of the features already restored from the past work on the project. There are many other privacy and security features that still need to be ported to the latest release, although a lot of them have become standard features, especially in Android Q. https://gist.github.com/thestinger/e4bb344dcc545d2ee00dcc22f... is an overview of the Android Q privacy improvements (not security improvements, just privacy) in the context of GrapheneOS. To conserve development resources, the past features that are becoming standard aren't going to be ported over; instead, the project will wait for the standard implementations released around August. Some of them will need to be adjusted to be a bit more aggressive towards apps targeting the legacy API level, but that's a lot less work than maintaining downstream implementations of all of this.
> you cannot update or modify the OS pieces at will
Having a well-defined base OS with verified boot and proper atomic updates, with automatic rollback on failure, is a strength, not a weakness. It's the same update system (update_engine) as ChromeOS. The update system is not the cause of the broader Android ecosystem's problem of vendor forks lacking updates. The migration towards everything being separately updated apk components, rather than moving further towards the ChromeOS design, is a negative thing for GrapheneOS, and it's one of the things that has to be changed downstream to improve verified boot.
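The atomic update with rollback described above can be sketched as the A/B slot scheme update_engine uses: write the new image to the inactive slot, try booting it a limited number of times, and fall back to the old slot if it never marks itself healthy. All names here are illustrative, not the real update_engine API.

```python
# Toy A/B ("seamless") update with automatic rollback on boot failure.
# Illustrative sketch only; not the actual update_engine implementation.

class Device:
    def __init__(self):
        self.slots = {"a": "v1", "b": None}   # two full OS partitions
        self.active = "a"
        self.boot_attempts_left = 0

    def stage_update(self, image):
        # Write the new image to the *inactive* slot; the running OS is
        # never modified, so an interrupted update cannot brick the device.
        inactive = "b" if self.active == "a" else "a"
        self.slots[inactive] = image
        self.pending = inactive
        self.boot_attempts_left = 3           # tries before rolling back

    def boot(self, healthy):
        # Firmware tries the pending slot; if it never reports healthy
        # within the allotted attempts, fall back to the old slot atomically.
        if self.boot_attempts_left > 0:
            self.boot_attempts_left -= 1
            if healthy:
                self.active = self.pending    # commit the update
                self.boot_attempts_left = 0
            elif self.boot_attempts_left == 0:
                self.pending = self.active    # rollback: keep the old slot
        return self.slots[self.active]

d = Device()
d.stage_update("v2")
assert d.boot(healthy=False) == "v1"  # failed boot: still on the old slot
```

The key property is that there is never a moment where the device has only a half-written OS: the switch between slots is a single atomic commit.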
> Librem will answer these.
That's nonsense. First of all, that's hardware; and moving to a far less secure software stack with non-existent privacy and security work, an inferior update system, and no verified boot is not a solution to these problems. The solution to privacy and security problems is not throwing away privacy and security entirely...
> The migration towards everything being separately updated apk components, rather than moving further towards the ChromeOS design, is a negative thing for GrapheneOS
Doesn't something like fs-verity help here, instead of just a block-based read-only partition that can be verified? Overall, it seems like a net gain for the Android ecosystem if Google moves more and more stuff out of band, away from OEMs, since OEMs are not incentivized to do anything other than sell devices. That is, as long as everything is still pushed to AOSP.
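Both mechanisms mentioned above rest on the same idea: dm-verity authenticates a whole read-only block device against one signed root hash, while fs-verity does the equivalent per file. A toy Python version of the shared Merkle-tree mechanism (illustrative only, not the kernel implementation):

```python
import hashlib

BLOCK = 4096  # both dm-verity and fs-verity hash fixed-size blocks

def merkle_root(data: bytes) -> bytes:
    # Hash every block, then hash concatenated pairs of hashes upward
    # until a single root digest remains.
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, max(len(data), 1), BLOCK)]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Verified boot only needs to sign the root hash; any modified block
# changes the root, and blocks can be checked lazily as they are read.
image = b"system image contents" * 1000
root = merkle_root(image)
tampered = bytearray(image)
tampered[0] ^= 1
assert merkle_root(bytes(tampered)) != root
```

The practical difference is granularity: one root hash per partition means the whole partition must be immutable, while one root hash per file allows individually updated components (like apk modules) to keep integrity protection.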
The large majority of Android security exploits are in drivers written in C and C++, which is why each release further locks down what native code is allowed to do.
Chain of trust does protect you from evil maid attacks.
And yes, there can be bugs in the application layer, but at least half of all CVEs are memory corruption bugs.
These practices do offer a massive reduction in attack surface. You seem to argue they don't matter since they don't eliminate the attack surface completely.
No, chain-of-trust only has one trick... it can check that what you're about to run is unaltered from what was signed to some degree of probability.
If what's shipped and validly signed is bug-ridden nightmare fuel like the proprietary Qualcomm 802.11 stack, or the proprietary multimedia bits that are a rich and continuous source of vulnerabilities (take a look through the last few months here: https://source.android.com/security/bulletin/2019-06-01), all the buzzwords did was ensure the vulnerable version is running so it can be exploited. The evil maid can get in that way.
Librem's security model is that of a Linux box, signed update packages... it's not a panacea against hacks but nor are the buzzwords you mentioned. At least they're trying to eliminate the really dangerous proprietary pieces that constantly provide new vulns.
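The "one trick" described above can be sketched concretely: each boot stage carries the expected digest of the next stage (the first baked into immutable ROM) and refuses to hand off on a mismatch. A validly signed but vulnerable stage still boots, which is exactly the limitation being argued. Names here are hypothetical, not any real firmware API.

```python
import hashlib, hmac

# Toy chain of trust: each stage verifies the next before executing it.
# Illustrative sketch only.

def digest(blob: bytes) -> bytes:
    return hashlib.sha256(blob).digest()

def boot_chain(stages, expected):
    # stages: list of stage blobs in boot order.
    # expected: the digests embedded in each previous stage.
    for blob, want in zip(stages, expected):
        if not hmac.compare_digest(digest(blob), want):
            return "halt"          # tampering detected before execution
    return "booted"

bootloader = b"bootloader v7"
kernel = b"kernel with known CVE"  # validly signed bugs boot just fine
good = [digest(bootloader), digest(kernel)]

assert boot_chain([bootloader, kernel], good) == "booted"
assert boot_chain([bootloader, b"patched by evil maid"], good) == "halt"
```

So the chain catches the evil maid's offline modification, but does nothing about the known CVE in the signed kernel: integrity, not correctness.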
> No, chain-of-trust only has one trick... it can check that what you're about to run is unaltered from what was signed to some degree of probability.
This is only one of many privacy and security regressions from moving to a far less secure software stack without anything close to the same level of hardening or work on privacy / security.
> If what's shipped and validly signed is bug-ridden nightmare fuel like the proprietary Qualcomm 802.11 stack, or the proprietary multimedia bits that are a rich and continuous source of vulnerabilities (take a look through the last few months here: https://source.android.com/security/bulletin/2019-06-01), all the buzzwords did was ensure the vulnerable version is running so it can be exploited. The evil maid can get in that way.
Counting CVEs is not a way to judge security. Qualcomm's SoC hardware, firmware and driver security is the leader among the available options. The huge amount of both internal and external public security research targeting it is a strength rather than a weakness. The lack of attention given to other assorted drivers is not a strength of those drivers but rather reflects their obscurity and lack of hardening / auditing.
It's also not the norm in the Linux world to assign a CVE for a security vulnerability when it's fixed. The norm is to fix them silently without trying to obtain a CVE. It's completely bogus to judge security based on counting CVEs for many reasons. Not having public lists of the fixed vulnerabilities with CVEs assigned doesn't mean there aren't a bunch of vulnerabilities being fixed, and it's even worse if the vulnerabilities aren't being found and fixed.
Every x86 and ARM device is proprietary and has a massive amount of complex proprietary hardware, firmware and microcode. There is no escaping that for these architectures. The Librem 5 is not an open hardware device and has a proprietary SoC, proprietary Wi-Fi, etc. all with their own proprietary firmware and in some cases entire operating systems (Wi-Fi / Bluetooth, cellular, etc.). The distinction of an OS like PureOS is that they don't ship updates to this firmware but rather leave it vulnerable to all the fixed security issues, because they won't redistribute the proprietary firmware updates. The firmware is still present, but the OS is 'free'. Either way, that firmware is running, and with a bunch of known vulnerabilities if you don't update it.
Proprietary hardware and software is also not inherently less private or secure than open source software. These are differences in development model, not privacy or security. You're very mistaken if you think open source software eliminates backdoors / vulnerabilities or even reduces them. It's not how things work out in reality. Open source reduces the barrier to entry for security research, whether it's for good or evil, but it's certainly still possible without it being open source. Either way, the comparison you're making is between proprietary hardware + proprietary firmware + open source OS to proprietary hardware + proprietary firmware (but without updates shipped by the OS) + open source OS.
> Librem's security model is that of a Linux box, signed update packages
Again, you're mixing hardware and software. The Librem hardware isn't only for PureOS and will be able to run Android.
Signed update packages alone are inferior to not only having signed update packages but also verified boot and attestation. GPG also has far too much complexity and attack surface for this, and having online build / signing servers, etc. is a joke.
Android is Linux, and the Linux kernel is not a strength but rather the most prominent weakness in Android. A massive monolithic kernel written entirely in a memory unsafe language, and entirely responsible for enforcing the low-level privacy/security model, is not a strength. That's a major problem which needs to be resolved, not a hole to dig deeper. It's fundamentally not fixable, and while a bunch of work on mitigations can help, it's very limited in what can be achieved.

Moving to the desktop Linux software stacks also gives up the vast majority of these mitigations and the security model that has been rapidly improved over the years. It gives up having such strict SELinux policies developed as an integral part of the base system, as just one of many things that are lost. This level of security cannot be obtained on a traditional Linux distribution without a well-defined base system that's developed together with lots of holistic systems-level privacy and security work. Addressing it in a bunch of separate fragmented projects doesn't work out, and prevents having the same kind of security model and security policies.

The way that SELinux is used on Android compared to a distribution like RHEL / Fedora is night and day. It's drastically different and not even comparable at all. The same goes for the deployment of other privacy and security features / models.
Kind of. Google could release Android running on top of any OS that implements the NDK stable APIs plus their POSIX subset, and besides OEMs no one would notice the change.
Yes, that's true. I mean the Android Open Source Project, rather than Android as an OS family. For Android as a platform defined by the Compatibility Definition Document / Compatibility Test Suite, it doesn't have a specific kernel, and Windows could have become certified as Android if they had actually gone ahead with pursuing that.