Specifically, it allows microG apps (which are open-source and auditable) to impersonate Google Play Services apps (which are closed-source and not auditable) and thus provide their functionality.
( ) No app spoofing
( ) microG apps can spoof Google apps
( ) Any app can spoof any app
This would be similar to how root-management apps originally allowed anything to use root capabilities (with user permission), and later made "Apps-only" the default.
The only thing this patch does is provide a clean API for it, so that microG doesn't have to patch your entire system every time.
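To make the "clean API" idea concrete, here is a minimal sketch of the gating logic such a patch could use. All names here are illustrative (this is not the real patch's code): the system only substitutes a fake signature when an app both declares one and has been explicitly granted a dedicated spoofing permission by the user.

```java
import java.util.Set;

// Hypothetical sketch of a signature-spoofing gate. The permission name,
// method names, and flow are assumptions for illustration, not the actual
// patched PackageManager implementation.
public class SignatureSpoofSketch {
    static final String SPOOF_PERMISSION = "android.permission.FAKE_PACKAGE_SIGNATURE";

    // Returns the signature other apps will see for this package.
    static String effectiveSignature(String realSig,
                                     String declaredFakeSig,
                                     Set<String> grantedPermissions) {
        if (declaredFakeSig != null && grantedPermissions.contains(SPOOF_PERMISSION)) {
            // e.g. microG reporting Google's signing key, with user consent
            return declaredFakeSig;
        }
        // Everyone else sees the real signing key
        return realSig;
    }

    public static void main(String[] args) {
        // microG: permission granted, spoof applies
        System.out.println(effectiveSignature(
                "microg-sig", "google-sig", Set.of(SPOOF_PERMISSION)));
        // Random app: no grant, spoof attempt is ignored
        System.out.println(effectiveSignature(
                "rando-sig", "google-sig", Set.of()));
    }
}
```

The point of the per-app grant is exactly the "Apps-only" parallel above: spoofing is opt-in per package, not a blanket system change.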
There may be many valid reasons to prefer open source software but security audits aren’t one of them.
Open source software is much easier to audit than closed source software. People have a finite amount of time to do things like audit their software.
This is rarely the case, unfortunately, and for most of the prebuilt open source software you use, you rely on trust, not on audits.
"Reflections on Trusting Trust" is one of the seminal talks in software.
Open Source vs Closed source is not where I or the security professionals I know put most trust emphasis. I would enthusiastically trust something closed from Google over a rando open source project.
But back to the original point, even the most basic audit steps are the same on an open source project vs closed one. Observe what the binary does & inspect it for standard patterns.
I think having a trusted compiler is an important first step to trusting software, even if you have to analyze it in depth yourself.
But note you are now adding a lot of extra preconditions that are largely not available.
The counterargument is that reverse engineering & black-box audits are actually easier than getting the conditions right to trust code audits. As a bonus, they work regardless of code availability.
My original claim, that I stand beside, is that code audit-ability for security purposes is not a reason to prefer open source software. For all the reasons this thread points out, that is just as fraught as auditing closed source software. Further, a competent audit of the software would not look much different between open and closed source projects.
Absent a competent audit, there are lots of other factors that rank higher on my (and many more knowledgeable people's) lists of what matters for security and privacy than open vs closed source: things such as documented and approved algorithms, the team involved, the amount of legal backing, the market incentives, etc.
That is not to say there aren't reasons to prefer non-Google APIs, or to prefer open source software for other reasons. It's just that security audit-ability is a bad one.
I never said that. Come on, dude.
Seems waaaay easier than looking for the mythical badCodeGoesHere function.
The thing is, this isn't just about network interactions. By taking a quick scan of the code you also (1) might learn something new, (2) can see the author's general attitude to things, and (3) might spot some other nasty activity (does this program hot-load code from a remote source, try to obscure what it is doing, scan the file system? etc.)
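That kind of quick scan can even be partly mechanized. Below is a crude, illustrative sketch of grepping source text for the sorts of red flags mentioned above; the pattern list is a made-up example, not a real audit tool, and string matching like this is trivially evaded, so it only surfaces candidates for a human to read.

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy "quick scan": flag source text containing patterns that often warrant
// a closer look. Patterns are illustrative assumptions, not an authoritative
// list of malicious indicators.
public class RedFlagScan {
    static final List<String> RED_FLAGS = List.of(
            "DexClassLoader",   // hot-loading code from outside the APK
            "Runtime.exec",     // shelling out to arbitrary commands
            "Base64.decode",    // possible payload or string obfuscation
            "listFiles"         // sweeping the file system
    );

    // Return every red-flag pattern that appears in the given source text.
    static List<String> scan(String source) {
        return RED_FLAGS.stream()
                .filter(source::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        String snippet = "ClassLoader cl = new DexClassLoader(url, dir, null, parent);\n"
                       + "Runtime.getRuntime(); Runtime.exec(cmd);";
        System.out.println(scan(snippet));
    }
}
```

A hit isn't proof of anything nasty, but it tells you exactly where to spend your finite auditing time, which is the whole argument above.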