
Just to provide some context for the "very important security feature" that is disabled. The change adds a permission that allows (after explicit whitelisting by the user) an app to impersonate another app.

Specifically, it allows microG apps (which are open-source and auditable) to impersonate Google Play Services apps (which are closed-source and not auditable) and thus provide their functionality.
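
To see why the permission exists at all: client apps that depend on Play Services typically verify the "com.google.android.gms" package by its signing certificate, so microG can only stand in for it if it can present Google's signature. Roughly, the client-side check looks like this (illustrative Kotlin using the old GET_SIGNATURES API; the pinned hash is a placeholder):

    import android.content.pm.PackageManager
    import java.security.MessageDigest

    // Placeholder: Google's known release-certificate hash would go here.
    const val EXPECTED_GOOGLE_CERT_SHA256 = "<google-release-cert-sha256>"

    // Illustrative: a client app verifying that "com.google.android.gms" is signed
    // by the key it expects. If microG can't spoof that signature, checks like
    // this fail and the app refuses to use it.
    fun looksLikeRealPlayServices(pm: PackageManager): Boolean {
        val info = pm.getPackageInfo("com.google.android.gms", PackageManager.GET_SIGNATURES)
        val certHash = MessageDigest.getInstance("SHA-256")
            .digest(info.signatures[0].toByteArray())
            .joinToString("") { "%02x".format(it) }
        return certHash == EXPECTED_GOOGLE_CERT_SHA256
    }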




Why not hardcode this for microG only, and not any system app?


I think it's not really in the spirit of openness/freedom to say "you can replace Google's software, but ONLY with ours"


You could allow this in config though. E.g.:

() No app spoofing  () microG apps can spoof Google apps  () Any apps can spoof any apps


They could still make microG-only the default and then allow an option in settings to permit other apps.

This would be similar to how root apps originally allowed anything to use root capabilities (with user permission), and only later made "Apps-only" the default.


Well, functionally that would be the same as just not granting the permission to any app other than microG. Unless you don't trust that Android's permissions work properly, but then I think you have much bigger problems.


If you can put an app into /system/priv-app, you can already overwrite everything.

The only thing this patch does is provide a clean API for it, so that microG doesn't have to patch your entire system every time.


You can't install system apps without root permissions, and at that point you can do anything anyway, so why bother?


Minor nitpick: closed source software is of course auditable. Being able to audit binaries is table stakes to even call someone an auditor.

There may be many valid reasons to prefer open source software, but security audits aren't one of them.


I'm gonna take it on good faith that you're not a troll...

Open source software is much easier to audit than closed source software. People have a finite amount of time to do things like audit their software.


But auditing the source is only useful if you have reproducible builds, so you can be sure you're running the audited source.

This is rarely the case, unfortunately, and for most of the prebuilt open source software you use, you rely on trust rather than audit.
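
If the build were reproducible, the check would be trivial: rebuild the audited source yourself and compare byte-for-byte with what you actually run. A sketch (the file names are placeholders):

    import java.io.File
    import java.security.MessageDigest

    fun sha256(path: String): String =
        MessageDigest.getInstance("SHA-256")
            .digest(File(path).readBytes())
            .joinToString("") { "%02x".format(it) }

    fun main() {
        val shipped = sha256("app-release-downloaded.apk")  // what the vendor ships
        val rebuilt = sha256("app-release-selfbuilt.apk")   // what you built from the audited source
        // Only a reproducible build makes these match; otherwise the audit
        // doesn't tell you much about the binary you actually run.
        println(if (shipped == rebuilt) "identical" else "differs")
    }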


That's not true. You could always, you know, run the version you compiled yourself.


We’ve known for 30 years that’s not enough (depending on your risk characteristics).

Ken Thompson's "Reflections on Trusting Trust" is one of the seminal talks in software.


We've known that in theory your compiler et al. can be backdoored, but in practice we can feel a lot safer compiling our own software than using proprietary binaries.


I think that might be where we disagree. Without a competent security audit I think you are falling back to trust.

Open source vs. closed source is not where I, or the security professionals I know, put the most emphasis when it comes to trust. I would enthusiastically trust something closed from Google over a rando open source project.

But back to the original point: even the most basic audit steps are the same on an open source project vs. a closed one. Observe what the binary does and inspect it for standard patterns.


Observation only goes so far. You couldn't find backdoors unless you started to decompile it.

I think having a trusted compiler is an important first step to trusting software, even if you have to analyze it in depth yourself.


2 actually:

https://www.dwheeler.com/trusting-trust/dissertation/wheeler...
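
Roughly, Wheeler's diverse double-compiling check looks like this; the compiler names and flags below are made up, just to show the shape of it:

    import java.io.File
    import java.security.MessageDigest

    // Sketch of diverse double-compiling: "suspect-cc" is the compiler under test,
    // "trusted-cc" is an independent compiler you already trust, and compiler.c
    // stands in for the suspect compiler's own source. All names/flags hypothetical.
    fun run(vararg cmd: String) {
        ProcessBuilder(*cmd).inheritIO().start().waitFor()
    }

    fun sha256(path: String): String =
        MessageDigest.getInstance("SHA-256").digest(File(path).readBytes())
            .joinToString("") { "%02x".format(it) }

    fun main() {
        // Stage 1: build the suspect compiler's source with both compilers.
        run("suspect-cc", "compiler.c", "-o", "stage1-suspect")
        run("trusted-cc", "compiler.c", "-o", "stage1-trusted")
        // Stage 2: each stage-1 binary rebuilds the same source again.
        run("./stage1-suspect", "compiler.c", "-o", "stage2-a")
        run("./stage1-trusted", "compiler.c", "-o", "stage2-b")
        // Honest source + deterministic build => the stage-2 binaries match;
        // a mismatch points at a trusting-trust style backdoor.
        println(if (sha256("stage2-a") == sha256("stage2-b")) "match" else "MISMATCH")
    }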

But note that you are now adding a lot of extra preconditions that are rarely met in practice.

The counter-argument is that reverse engineering and black-box audits are actually easier than getting the conditions right to trust code audits. As a bonus, they work regardless of code availability.


So you can trust your disassembler and strace, but not your compiler? Your method is just vulnerable to another flavor of trusting-trust. What about the compiler that built your rev-eng and blackbox tools?


That is, of course, not what I'm saying.

My original claim, which I stand by, is that code auditability for security purposes is not a reason to prefer open source software. For all the reasons this thread points out, it is just as fraught as auditing closed source software. Further, a competent audit would not look much different between open and closed source projects.

Absent a competent audit, there are lots of other factors higher on my (and many more knowledgeable people's) list of what matters for security and privacy than open vs. closed source: things such as documented and approved algorithms, the team involved, the amount of legal backing, the market incentives, etc.

That is not to say there aren't reasons to prefer a non-Google API, or to prefer open source software for other reasons. Just that security auditability is a bad one.


I don't think that you've fully appreciated the rebuttals to your position. Consider this: you audit your build toolchain and thereafter trust it not to manipulate your binaries. With this axiom in place, is it not true that it's easier to audit open source software (assuming it's built on a trusted toolchain) than proprietary software?


I don't think you've understood the original premise. Suggesting that closed source software isn't auditable is laughable. No one who does software audits for a living supports that premise.


>Suggesting that closed source software isn't auditable is laughable

I never said that. Come on, dude.


I think I see the confusion. The post they first replied to said that, quite specifically. You were not that poster, however.


There are different types of audits. Yes, someone doing a full security audit is going to be happy doing reverse engineering. But I can perform a quick check on a lot of the software that I use to confirm it doesn't do user-hostile things (like ring home on startup). This is harder to do on a binary, so given the choice I'll use the open source option.
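
For concreteness, the sort of quick check I mean is no fancier than scanning the tree for network touchpoints; something like this (the path and pattern list are just placeholders):

    import java.io.File

    fun main() {
        // Flag source lines that touch the network, so you can eyeball what gets
        // contacted and when. Not an audit, just a first-pass sanity check.
        val suspicious = Regex("""https?://|Socket\(|HttpURLConnection|OkHttpClient""")
        File("path/to/checkout")
            .walkTopDown()
            .filter { it.isFile && it.extension in setOf("kt", "java") }
            .forEach { file ->
                file.readLines().forEachIndexed { i, line ->
                    if (suspicious.containsMatchIn(line))
                        println("${file.path}:${i + 1}: ${line.trim()}")
                }
            }
    }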


For those kinds of checks, why would you look at the source? Stick a proxy between the internet and the device to see what it does.

Seems waaaay easier than looking for the mythical badCodeGoesHere function.
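
Even a toy version shows the idea: point the device's HTTP proxy at this and watch which hosts it tries to reach on startup. (A real setup would use a proper intercepting proxy, and TLS bodies stay opaque; this only logs destinations.)

    import java.net.ServerSocket

    fun main() {
        // Toy logging "proxy": it reads and prints the first request line of each
        // connection (plain HTTP shows the full URL, HTTPS shows up as
        // "CONNECT host:443"), then drops the connection instead of forwarding.
        ServerSocket(8888).use { server ->
            while (true) {
                server.accept().use { client ->
                    val firstLine = client.getInputStream().bufferedReader().readLine()
                    println("${client.inetAddress.hostAddress} -> $firstLine")
                }
            }
        }
    }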


Because I can trivially read and run code in my head; I do that all day. I don't have a clue how to set up a proxy. Also, my scan over the code tells me whether it is generally badly written, and a lot more than just one example of potential bad behaviour.


You are more prepared to run arbitrary code "in your head" than to set up a simple network proxy?... uh huh


Yeah. As a developer, the former is literally the $dayjob. The latter I've never done, so it could be simple or it could be hard. I've heard that getting software to respect proxies is tricky, though...


So, um. I'm a developer, and the idea that I could take an arbitrary code base and get it into my headspace in less time than it would take me to figure out a program's network interactions is one of the most absurd things I've ever heard.


How would you force an arbitrary program to use a software proxy for all network traffic?

The thing is, this isn't just about network interactions. By taking a quick scan of the code you also (1) might learn something new, (2) can see the author's general attitudes to things, and (3) might spot some other nasty activity (does this program hot-load code from a remote source, try to obscure what it is doing, scan the file system? etc.)


How would looking at network sniffer logs let you detect any security flaws in a server, as long as none of the live traffic is doing anything sketchy?



