Google, Xiaomi, and Huawei affected by zero-day flaw that unlocks root access (thenextweb.com)
411 points by lp001 on Oct 5, 2019 | 228 comments



To me, the biggest part of this story is:

1. Over two years ago, this was apparently detected automatically by the syzkaller kernel fuzzer, and automatically reported on its public mailing list. [1]

2. Over a year and a half ago, it was apparently fixed in the upstream kernel. [2]

3. It was apparently never merged back to various "stable" kernels, leading to the recent CVE. [3]

So you might read that and think "Ok, probably a rare mistake"...

...but instead:

4. This is apparently a _super_ common sequence of events, with kernel vulnerabilities getting lost in the shuffle, or otherwise not backported to "stable" kernels for a variety of reasons, like the patch no longer cleanly applying.

Dmitry Vyukov (the original author of the syzkaller fuzzer that found this 2 years ago) gave a very interesting talk on how frequently this happens a couple weeks ago at the Linux Maintainers Summit, along with some discussion of how to change kernel dev processes to try to dramatically improve things:

slides: https://linuxplumbersconf.org/event/4/contributions/554/atta...

video: https://youtu.be/a2Nv-KJyqPk?t=5239

---

[1] https://twitter.com/dvyukov/status/1180195777680986113

[2] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...

[3] https://mobile.twitter.com/grsecurity/status/118005953923380...


The failures of the Linux core team to properly prioritize security are quite well known. A lot of people have poked the bear by bringing this up, often with specific real examples, gotten a tongue lashing from the team, and moved on to other things.

I'm amazed the GRSecurity people have managed to do it for so long. Even if merging their stuff mainline legitimately wasn't practical, I've seen plenty of snark and dismissiveness from the Linux team towards them and others. And GRSec does actively bring CVE fixes into their kernel patches all the time and gets paid via sponsors to do so.

I'm sure going through old CVEs is a great way to find "zero days" and/or regressions after old patches. Or even just by following the work GRSec does, there's probably plenty of stuff for a highly motivated company like NSO to exploit.


>The failures of the Linux core team to properly prioritize security

Why even post this, when it has nothing to do with the case GP & OP described? It's misleading at best.

The failure here is in the way Google has set up its Android development process. They keep a separate "stable" kernel and manually select certain patches to backport to it. In the process they skip all kinds of patches - performance, features, and yes, security ones. Given that only selected patches are backported, the process is best described as insecure by default. It was Google's decision to favor a stable API over security here.

This is compounded by the fact that other Android phone vendors are pretty slow at releasing OS upgrades - and tend to stop releasing them altogether shortly after the phone is no longer manufactured.

The mainline kernel, as released by the Linux core team, is up to date on security. Hold to account the people who decided to skip patches as a matter of course, resulting in the insecure-by-default process.


> The mainline kernel, as released by the Linux core team, is up to date on security. Hold to account the people who decided to skip patches as a matter of course

To me, the particulars of this exact case are not as interesting as the fact that the entire Linux patching and backporting of security issues seems _very_ fragile, with things frequently getting "lost" for mundane reasons, and a key part of the "why" it is so fragile is due to many of the core Linux development processes.

This particular CVE is apparently one small example that happened to catch some headlines out of _thousands_ of similar problems.

That talk linked above by Dmitry Vyukov is worthwhile for getting a sense of the magnitude of the problem.


I think there is a lot to be said about the shortcomings of the Android world in how Linux kernel updates trickle down the whole food chain (or rather: don't). BUT: this particular episode is a very bad choice for your argument, because in this case the bugfix in question was never ported back to any regular Linux LTS version, while it actually was cherry-picked for the Android Common Kernel (branches 3.18, 4.4 and 4.9).

Now, why that fix never made it into most vendor kernels (besides a few, like the one in the Pixel 3 that is based on 4.9) is a good question. But at the same time there is the reality that everyone focusing on the upstream LTS kernel would never have gotten the fix.


> The failure here is in the way Google has set up its Android development process.

I think it's still the wrong layer. Google may be able to put some pressure on to change things, but it's more accurate to describe the issue as "the failure is in the way SoC manufacturers have set up their kernel porting process". You often get chips which work with one kernel version and a dump of specific drivers. Beyond pressuring the company to upstream their changes, or writing clean-room versions, I don't see many solutions.


> failures of the Linux core team

But isn't the issue that Android didn't merge it? Linux patched it.


GRSec is the primary reason the Android devices from BlackBerry have never, to my knowledge, been rooted (despite their many flaws). It's crazy that it's not more accepted.


I personally wouldn't trust a company that openly bragged it built a system to provide local police and intel agencies with real-time access to BlackBerry messaging flowing across an entire city in 2010 for the G20, in addition to sharing their "master" encryption key for a number of years:

https://www.theverge.com/2016/4/14/11434926/blackberry-encry...

Also, AFAIK BlackBerry only provided a hardened kernel with a single device in 2015, called the Priv. I haven't heard anything from them since... maybe someone could correct me here.


The new Android devices also have hardened kernels, but it doesn't really matter; phones are insecure as fuck in other ways.


Indeed. Who the hell thought it was a great idea for the modem baseband device to have unlimited direct memory access to the host processor memory space? I mean, especially when the baseband firmware can usually be remotely updated by the network with zero user interaction?!


> have unlimited direct memory access to the host processor memory space

Can you give some reference for that claim?


https://www.usenix.org/system/files/conference/woot12/woot12...

If you want more, literally google “baseband attack host processor memory” or “baseband exploits DMA” or “baseband exploits memory”.


Is this all Android? Or just Blackberry?


There is no 'Linux core team'


There is one, but they have nothing to do with this.

This falls pretty much on Google and people in charge of the backports


Sure there is, the ones with commit rights to validate pull requests.


That is too large to be considered 'core' and too unstructured to be considered a 'team'.

Hence these issues, arguably.


If you submitted a huge patch to Linux that would really improve support for real-time audio but break many other things like IO throughput, and your argument for accepting it anyway was "yes, but it's for real-time audio support, so it's really important and more important than what everyone else is working on", you'd be laughed out of the mailing list.

The fact that you use the phrase "prioritize security" is indeed telling. Security is just one aspect of a system. There is no particular reason for Linux to prioritize it above everything else, no matter what Twitter infosec drama queens believe.

Obviously the fact that infosec people are quite often insufferable does not help their case.


When you're developing such a crucial part of an operating system, wouldn't you want to put security pretty high on the priorities list?

If I were an average consumer, I would care much more about my device being secure, than having real time audio.


You act as if we don't have an experiment about exactly this going on for decades that says the exact opposite, but we do. It's called OpenBSD. The average consumer doesn't care, because it's less performant in some cases and certain software doesn't work or work as well.

It's not even a choice between better security and real-time audio, since the average consumer doesn't even know about that unless specifically called out by marketing. For phones it's about what looks better, both physically and digitally. It's about how the emojis look, how good the pictures the camera takes look (or how good you're told they look), and how responsive and smooth the screen movements are.

The average consumer goes off what they can immediately see and what they're told by marketing, and by what they feel social pressure to buy. The discerning technical expert goes off marketing (but a different set of claims), and a bit more of a discerning eye, and while far more knowledgeable than the average consumer, is still mostly driven by hearsay.

The number of people with enough knowledge to actually make a real data driven choice is probably much less than 0.0001% of people, and that's far from average. I'm not one of them, but I can look at the systems often affected, make some assumptions about how many people know enough about them to speak usefully on risks they actually have, and do some napkin statistics to know almost nobody else is either, even here.

It's easy to call out the average consumer, but truthfully, the last time you bought a phone or computer, how deeply did you analyze the actual security considerations of the different aspects of the system, and how much did you rely on what some site told you, trusted recommendations, what you already preferred, and your hunch about which was better? How many millions of lines of code are involved in these systems now? How could you, or any of us, actually do anything other than that?


>If I were an average consumer

If you were, you would behave like one, i.e. not care that much (if at all) about security. What you are saying is "I do care about my device being secure".


Yes, I suppose that's true. I should have worded it differently.

Perhaps the average person would be more upset by, or more likely to notice, being negatively impacted as the result of a security issue than a missing feature, e.g. real-time audio, which I'm sure no one would even notice.


You’d think no one would notice real-time audio being missing on Android devices, but that discounts the huge amount of music creation that’s done on mobile platforms these days. It’s a way bigger market than you’d think!


Security should indeed come before anything else.

If your system gets p0wned there is hardly any audio to play.

macOS, iOS and Windows security improvements, while those OSes are the musicians' choice for real-time audio, show it is possible to put security first while offering a good audio stack.


Security is always in service of something, not the other way around. At best, security comes along side by side with the thing it protects, but not before it.


That is how languages like C or JavaScript get adoption.


It’s “telling” of what exactly?

Linux distributions make up the majority of public web and database servers and approximately none of the real-time audio players. Is that not a “particular reason” to prioritise security over real-time audio?


I'm not saying that Linux should neglect security, but there is no reason for it to be above anything else. People who care about security can look at systems that are more focused, like OpenBSD.


Agreed on fair prioritization, but most people care about Linux security because it is the major platform. OpenBSD is an obscure platform that is not an option for the vast majority of users, and no phone uses it.


The Linux kernel is a glaring example of software malfunction due to its combination of moderate defect density and incredible extent, along with a culture intolerant of competence. People who became subsystem maintainers because they happened to be hanging around a mailing list in the 90s are still gatekeepers of important subsystems despite their now-decades-long records of continuous malfeasance. Patches that demonstrably improve the health of the project are rejected if they would reduce the powers of these gatekeepers. We should look at the whole project as a cautionary tale of the kind of leveraged destruction that some programmers of modest ability but extreme confidence can wreak on our industry.

It's bad enough that syzbot finds fifty serious bugs per hour, but I'll relay a personal anecdote. Earlier this year I wagered a colleague that I could open up the source of the 4.10 kernel (the one that was once current in Ubuntu 16) and find an obvious defect in less than an hour. It actually only took me about 15 minutes to find a deadlock in squashfs that was triggered by kmalloc failure and an error path via goto, which of course nobody should ever use. And while I'm reading it I'm just thinking to myself that this is the worst program I've ever seen and it would never pass a code review at my workplace, but it's out there right now running on billions of computers.


You’re wrong that no one should ever use goto. Goto is a perfectly fine control flow operator IF AND WHEN you use it in a highly structured, well-understood way. This is how systems programming is done. A “goto cleanup” section at the end of a function is the best way to do exit-on-error in C, hands down.

I hate that people keep peddling this nonsense because they wrote a little C and read a headline about “goto considered harmful”, which is a gross oversimplification and a BAD piece of “wisdom” that for some reason won’t die. This is how serious error handling in C is done. Please stop repeating this tired trope.


Goto is only de rigueur in terrible languages that are unable to release function-local resources automatically. It is an intentional choice by kernel authors to ignore all progress in our field after 1988 and insist on writing everything in C. It’s not the goto statements that are the problem, rather it is the culture that necessitates them.


Do you realize how big the kernel source is? No one wants to rewrite it in C++. Also, C++ compiles slower and can be more difficult to read, depending on the code style and features used. C++ is also far more fragile regarding compiler compatibility and sometimes spits out rather confusing error messages.


I'm sure some of the people parroting that got it from their professors in college. I know mine sure did.


Care to name and shame with supporting evidence?


It's fairly well known that small kmallocs do not fail, and that there are many, many instances in the filesystem code which assume that small kmallocs do not fail. There have been two LWN articles on this exact subject.


And goto being used for error handling is pretty stock standard across most C codebases I’ve worked on or seen over the years, so I’m not sure what the particular gripe is there


Since when is "we do it all the time" the same as "it's a good thing"?


goto for error handling is not just "freeform anything goes goto". It's a very specific idiom, being an "error" label and a bunch of "if (resource) free(resource)" statements at the end of the function. It is essentially analogous to a common use case of Go's defer. Typically an accepted pattern when dealing with many resources and possible exit points. Prevalent in I/O heavy code.

Different ballgame from the subject of Dijkstra's manifesto.
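
To make the idiom concrete, here's a minimal, hypothetical sketch (invented function and resource names, nothing from real kernel code): one cleanup label at the bottom, resources released in reverse order, and every exit path funnels through it.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical illustration of the "goto cleanup" idiom. */
    int do_work(void)
    {
        int ret = -1;
        char *buf = NULL;
        FILE *f = NULL;

        buf = malloc(4096);
        if (!buf)
            goto out;              /* nothing to release yet, cleanup is safe */

        f = fopen("/tmp/example", "r");
        if (!f)
            goto out;              /* buf gets released below */

        /* ... do the actual work with buf and f ... */
        ret = 0;

    out:
        if (f)
            fclose(f);
        free(buf);                 /* free(NULL) is a no-op */
        return ret;
    }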


> It is essentially analogous to a common use case of Go

Go, which is also known for its terrible error-handling.

Great.


Really? I have never seen a Go program misbehave without printing some meaningful output. It is possible, but almost no one ignores the error parameter. Yet I have seen many, many Java and Python programs which quit after the first unhandled exception. So often that I consider exceptions to be the bad error handling mechanism.


It's both common and a good idea, when used purely internally to a function in order to redirect error paths to a standard set of cleanup code. It helps prevent memory errors and other problems.

This is the sense in which the Linux kernel uses goto.


I'm not experienced in C or kernel code but from my understanding they use it almost entirely like a catch/finally clause of many higher level language which is a widely accepted and successful pattern.


Properly used goto is essentially an inline exception handler. That is a good thing to eliminate repetitive fault checks and fragile cleanup code.


Since when are standard practices the same as "it's a good thing"?

Well-written code is the best code. Some well-written code uses goto. Some badly written code doesn't use goto.


Yeah that’s the rumor but allocations of any size can fail when kmem cgroup accounting is enabled and the container is out of space.


Well, if the allocation is done with the __GFP_ACCOUNT bit set, i.e. kmemcg accounting enabled, isn't it the intended behaviour that the allocation can fail?


Yes that is the point. In the relatively rare case of failure this particular function uses a goto to return without releasing a held mutex. Error paths within the kernel are a rich vein of malfunction.
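
For anyone who hasn't seen this class of bug, a minimal made-up sketch (pthread mutex and malloc standing in for the kernel's mutex and kmalloc; this is not the actual squashfs code): the error path jumps past the unlock, so the next caller blocks forever.

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Made-up example of the bug class: the allocation-failure path
     * returns without releasing the lock, deadlocking later callers. */
    int fill_cache(size_t len)
    {
        int ret = -1;
        void *buf;

        pthread_mutex_lock(&cache_lock);

        buf = malloc(len);          /* stand-in for a kmalloc that can fail */
        if (!buf)
            goto out;               /* BUG: skips pthread_mutex_unlock() */

        /* ... populate the cache from buf ... */
        free(buf);
        ret = 0;

        pthread_mutex_unlock(&cache_lock);
    out:
        return ret;
    }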


"We should look at the whole project as a cautionary tale of the kind of leveraged destruction that some programmers of modest ability but extreme confidence can wreak on our industry."

Oh man, can I put that on a t-shirt?


The biggest problem is that instead of vendors submitting drivers for their devices to the mainline kernel and profiting from all the fixes being done there, everyone makes a fork and continues to work there. Obviously, when merging updates from the mainline kernel into the forked one, something is discarded or lost.


> The biggest problem is that instead of vendors submitting drivers for their devices to the mainline kernel and profiting from all the fixes being done there, everyone makes a fork

The other half of the problem is the companies that actually use these garbage dump forks and build products on top of them.

For me, getting the SoCs and chips we use running on latest upstream kernels was a high priority in platform bringup.

I only used SoC vendors' garbage dump SDKs for quick testing & some reference. And chip vendors' drivers I ported straight to the upstream git version.

Of course this isn't how it goes in companies where "shit to market" is top priority.


So should we be fuzzing the stable branches separately?


syzbot is already fuzzing the latest two stable kernels and has found hundreds of bugs, including lots of use-after-frees. All these bugs are listed here:

- https://syzkaller.appspot.com/linux-4.14

- https://syzkaller.appspot.com/linux-4.19

As far as I know, no one is doing anything with the syzbot bugs against stable kernels directly, since no company using Linux is paying anyone to do it as their job. But some are getting fixed; e.g., some get reported against mainline too, then fixed and backported.


How complex is it to test all known issues against all current kernels?

A weekly report with some easy to understand graphs would probably convince more people to work on these bugs.


Time and cost, same as it would be to do it across all kernel versions, not just current ones. Theoretically it could be done pretty simply via a CI/CD pipeline if someone wrote solid test cases for the issues found by the fuzzer.


That's my point! Make this part of the kernel regression testing and also run it on the old kernels.


Could this be an issue of not appropriately identifying the impact of the bug? If it was reported by an automated tool and was easy to fix, perhaps the developer failed to fully investigate the problem, failing to realize it is a critical vulnerability and have the fix backported.


KASAN identified it as a use after free bug. That makes it very plausibly a security vuln. What more do you want to automate?

The problem is there are so many of those.


After the recent disclosures about Apple vulnerabilities, I've seen a lot of (unwarranted, in my opinion) criticism from HN of Project Zero, specifically the accusation of non-Google bias. For those who hold this position, does this affect your stance?


Their release pattern with the Apple fault could effectively be called a PR campaign, including a lot of editorial narrative about bad software development processes, etc.

This one gets a bug tracker entry.

When Project Zero posts a lengthy analysis with lots of spurious claims about the victims of the exploit, the window of exploitation, and narrative about the poor development practices that led to it, then call it even.

If it follows the traditional pattern, they'll write a post blaming some external party. No, seriously, when people point out all of the "Android" faults they've found invariably it is some variation of "but it isn't really Google's fault....".

Project Zero is brilliant, full of brilliant people, and is a remarkable effort, but when your paycheque is signed off by someone, it is human nature that you're really going to pussyfoot with them.


>This one gets a bug tracker entry.

For now. A comment from the reporter on the bug tracker entry:

>A more detailed explanation of this bug and the methodology to identify it will be written up in a forthcoming blog post when I find the time.


The iOS “deep dive” was a timed media push of a months-old problem right before a major Android release. They didn’t even try to obfuscate the timing or narrative. Blog post or not it’s pretty hard to top that.


You are paranoid.

Apple has started multiple keynotes by talking about Android security issues. Pointing fingers and ridiculing Google, Samsung and others.

Then a few weeks later, a Google keynote would demo something on an iPad and praise its beautiful hi-def screen.

I have _never_ heard Google officially talk crap about Apple.


> Apple has started multiple keynotes by talking about Android security issues.

Historically, they didn’t directly identify other vendors, but strongly implied it so it was obvious to most without directly saying names. This has changed a bit recently, and I feel that isn’t a good thing.

> I have _never_ heard Google officially talk crap about Apple.

No offense, but then you aren’t paying attention. There are examples given directly in this thread already.


[flagged]


> I bet apples own security team are 100% thankful for someone uncovering this.

That’s not my point at all with my original reply. I know first hand that some Apple security members are thankful for the work of Project Zero. But that isn’t the point I was making or that you made previously: Google “not saying anything bad about Apple” is patently false.


What Google did is industry standard; they do it to their own products all the time:

https://en.m.wikipedia.org/wiki/Full_disclosure_%28computer_...


No. I may change my mind but the fact that they haven't written a blog post about it reinforces Project Zero's bias.

A minor Windows exploit is found, and they publish "Windows Exploitation Tricks". An iOS exploit is found and they do a six-part "very deep dive into iOS Exploit chains".

Now, they find a bad Android exploit and they don't publish anything.


I've not seen that criticism myself. But to me what Project Zero is doing re: Apple vulnerabilities is great. I own Apple products and it's only going to improve/harden them

However, I do think some of the motive is to take a bit of shine off Apple - meaning it's partly a marketing campaign.


So far this further supports the argument that they are special casing and going into a lot more detail when it comes to non Android or Chrome bugs.

Will there be a large analysis how frequently this was exploited and so forth? How about a public Google blog post around this?


[flagged]



Not related to parent post but I think those guidelines urgently need to be updated with rules about abuse of the flagging mechanism.

I have had multiple HN posts flagged by people who wanted to reduce its visibility.

(One happened on this very page)


Wasn't this a case where members of the Project Zero team were individually commenting in a Chromium bug thread and not a Project Zero public facing blog post?

Was there a Project Zero blog post before those comments went public that I missed?


It's not a "Chromium" bug, it's a project-zero bug [1]. https://bugs.chromium.org/ is just a bug tracker site that hosts a batch of projects by Google. While most of them are related to Chromium, there are also things like project-zero.

[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=19...


So where was the Project Zero blog post on the matter?

Because if this wasn't announced on their blog I'm going to have to say that this particular case would not be an apples to apples comparison.

Here was the Project Zero blog post on Apple's exploit, for comparison.

https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...


Note that that blog post was published in August 2019, while the vulnerabilities mentioned in the blog post were reported in a wide range of dates from October 2017[1] to December 2018[2] (that's the latest one I found in a quick skim, maybe there are later ones). This Android vulnerability was reported September 2019[3], so it may take 8-22 months before the blog post comes out. The reporter does intend to post a blog post about it[3].

[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=14...

[2] https://bugs.chromium.org/p/project-zero/issues/detail?id=17...

[3] https://bugs.chromium.org/p/project-zero/issues/detail?id=19...


They could CNAME the site and give P0 a dedicated domain if they wanted to alleviate this confusion.


The bug is scarily easy to trigger. It just takes four system calls, none of which are niche or take unusual arguments.

    /* The four syscalls that trigger the binder use-after-free (run inside a function). */
    #include <fcntl.h>
    #include <sys/epoll.h>
    #include <sys/ioctl.h>
    #include <linux/android/binder.h>   /* kernel uapi header defining BINDER_THREAD_EXIT */

    int fd, epfd;
    struct epoll_event event = { .events = EPOLLIN };

    fd = open("/dev/binder0", O_RDONLY);
    epfd = epoll_create(1000);
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &event);  /* epoll now references the binder thread's wait queue */
    ioctl(fd, BINDER_THREAD_EXIT, NULL);         /* frees the binder_thread while epoll still points at it */


It's interesting that even such basic usage of binder is buggy. It's long been known that binder is horrible code, but I didn't know it was quite this bad.

It's unfortunate that Google chose to use a custom IPC system, binder, for Android, instead of changing Android's design to better fit Linux. If binder was in use outside Android, I expect this bug would have been caught long ago and certainly would have been backported to stable.


Judging by how much Binder's "elegant design" of BeOS pedigree was praised, I was expecting it to be quite good.

I just had a look at binder.c and the ref counting and locking in general looks like a nightmare to maintain. This reminds me of how much I hate resource management in C.


Most of the BeOS was written in C++. Nobody at Be would have tolerated the code quality you find in Linux binder.


Compare that to the level of sophistication required to do exploits in the recent iOS deep dive blog post and the commentary about “bad programming” in regards to it.


This is another great chance to root your phone and take complete control of what you should rightly own.


In Android land you can buy a phone where the bootloader can be unlocked and directly flash a pre-rooted ROM rather than relying on people exploiting security vulnerabilities like this.


From my experience, this is not necessarily true, even when a phone is advertised as unlockable. I know that Huawei phones, for instance, had two different unlock modes, one called USER Lock and the other FB Lock. The unlock codes given to users only applied to USER Lock, which means only some partitions could be modified (e.g. recovery could be modified, but fastboot was restricted). I'm not sure whether or not other brands have similar "protections", but I wouldn't be surprised if this was the case due to DRM.


Plenty to choose from that don’t do things like that. E.g. Sony has a page dedicated to AOSP[1] and ASUS recently was sending their new flagship to LineageOS developers[2].

[1] https://developer.sony.com/develop/open-devices/

[2] https://www.xda-developers.com/asus-zenfone-6-custom-rom-twr...


Huawei doesn't even give away the unlock codes anymore, but it's for your own safety, mind you.


I agree the pragmatic approach is to research which devices are unlockable, as well as unlock them when you're still well within the return window.

But the general problem isn't whether it is possible for someone to buy an unlocked phone, but rather that it is possible to "buy" a phone and have it turn out to be non-unlockable. The general case is important so that users who run up against manufacturer shenanigans can straightforwardly route around them, and also simply to prevent the pileup of more unusable planned-obsolescence e-waste.


But will it have Google play services? (and pass safety-net checks)


It will have Play Services if you choose to install them. And yes, it can be made to pass safety-net by rooting using Magisk.


Safety net checks that the user does not have full control of their device, so why should those checks pass?


Another great chance to install cryptojacking malware on a huge population of rooted phones


More things to exploit as well.


Yeah like one binary that grants the root?


And your chance to share this complete control with every installed app. Phones should be like desktop computers. You install an app, you give it access to everything your account can touch on the computer.


I always wonder if the people who go around spreading this FUD have ever used a rooted phone. Privilege elevation is a specific, clear, and targeted procedure. It's not automatic, apps don't just get access to the entire system right away by default, and nobody who rooted their phone just answers "yes, give root to this app" for any old purpose.


Nobody would willingly run outdated, vulnerable software if there was an easy, direct, and supported way to root your phone. Sell phones locked for security or whatever, I couldn't care less, but give people the option to root when they want.

So many iPhone users knowingly refuse to upgrade to newer versions of the operating system just so they can keep their jailbreak.


> Nobody would willingly run outdated, vulnerable software if there was an easy, direct, and supported way to root your phone.

People do that all the time with desktop computers.


As much as a certain outspoken group wants you to think otherwise, there are lots of other reasons besides security to do something.


Yeah, but at least there's a way to upgrade desktop computers; phones often just aren't upgradable.


“Many” users?


Yes, nearly all of /r/jailbreak is running iOS 12.x or lower. Specifically, practically no one there has updated to 12.4.1, which fixes Pwn20wnd's exploit.


r/jailbreak has about 5400 members, and “or lower” includes discussions about a first-generation iPad and the iPad mini 1. They aren’t using an older version by choice - they are using old versions because there are no newer versions available.


SHOULD desktop apps be like this? :D


For most people yes.

For programmers and experts no.

But if it was toggle-able, I would toggle "sandbox everything, don't let anything not secure run, only allow trusted apps" in a heartbeat for work machines, my parents, and so on...


You could run everything in containers.


They are extra tedium to setup and I wouldn't like the overhead...


Absolutely not. It's an outdated concept from the days when all the software you ran was either preinstalled or you created it yourself, and the only security consideration was stopping you from messing up another user's setup on the shared computer.


My desktop environment doesn't associate data with particular programs. Data is ideally in standard file formats and multiple programs can interact with it. I can see that tying data to a particular program may improve security, but it would also be extremely inconvenient.


>extremely inconvenient.

Maybe, though I'd push back on "extremely".

It does suck, for instance, that Discord as a Flatpak can only access a fixed subset of $HOME directories. But it can't scan your machine's processes like ordinary Discord can (to report the game you're playing), which is a privacy gain. The security (and portability) advantages of sandboxing/containerizing apps may outweigh the hassle of the workarounds/memory inefficiency.

Meanwhile, with unfettered access, you have things like: https://www.thegamer.com/civilization-6-steam-eula-change-sp...

>"We may combine the information with your personal information and across other computers or devices that you may use"

They also mentioned "photo", which there's no obvious way to collect. I remember someone joking, "What, do they go through your directories, looking for a picture of you?!"


It's possible to have sandboxing and convenience. Programs just need to change the APIs they use. Rather than expecting to have access to everything on the system, a program can request the OS file picker API and get the user to select the file they want to access; the sandboxed program is then allowed access to just that file, like how it works for websites.
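
As a rough sketch of what that looks like in practice (GTK's GtkFileChooserNative, which is routed through the desktop portal when the app runs sandboxed, e.g. under Flatpak; the names below are just illustrative):

    #include <gtk/gtk.h>

    /* Sketch: the sandboxed app never enumerates the filesystem itself; it only
     * receives access to the one file the user picked in the host-side dialog. */
    static void open_user_selected_file(GtkWindow *parent)
    {
        GtkFileChooserNative *dialog = gtk_file_chooser_native_new(
            "Open File", parent, GTK_FILE_CHOOSER_ACTION_OPEN, "_Open", "_Cancel");

        if (gtk_native_dialog_run(GTK_NATIVE_DIALOG(dialog)) == GTK_RESPONSE_ACCEPT) {
            char *path = gtk_file_chooser_get_filename(GTK_FILE_CHOOSER(dialog));
            /* ... read just this file; the rest of the filesystem stays off limits ... */
            g_free(path);
        }
        g_object_unref(dialog);
    }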


There'd be a case for sandboxing proprietary software, for sure.


With the recent raft of vulnerabilities such as Spectre/Meltdown and Rowhammer, there's an argument to be made that maybe, just maybe, running "untrusted" code on the same physical machine as "trusted" code is fundamentally insecure. Not just because of those particular vulnerabilities, but because they all seem to point to some sort of axiom - when two pieces of code share physical state (CPU, RAM etc), you cannot ever be 100% certain that they do not affect each other in some subtle way. And even if the hardware itself was perfect, it's not like you can practically prove all the code on a modern machine correct and flawless.

So maybe, instead of trying to somehow make it safe to run untrusted software on our machines, we should focus on making sure all the code that runs on our machines is trusted - not in the sense of "proved correct", but in the sense of "written without malicious intent". If you regard Discord scanning your processes as an unacceptable invasion of privacy, and it bothers you that there's no way to turn that off, perhaps avoiding Discord is a better solution than trying to sandbox it.


We could just put all our personal data on some non-networked machine with minimal software installed. But data like that wouldn't really be much use for anything. The data is already being shared over the Internet. People just want the data to stay in the right context: financial details shared with a bank shouldn't find its way onto Facebook or a random game company.

Using a separate computer for each counterparty would be more secure, but again not very convenient.


Multiple programs can interact with data on iOS - but you have to explicitly give the app access to the file.


Android has one of the best security models and sandboxing for apps. It's based around SELinux.


> Android has one of the best security models and sandboxing for apps. It's based around SELinux.

You mean a security model that misses the fact that the kernel cannot be updated and relies on a "sandboxing" solution that doesn't bother limiting kernel attack surface. No, I don't think they even had a security model in mind, a threat model or anything beyond random ad hoc ideas. And if they did some thinking, they would not have chosen SELinux either, as it's not a decent solution to anything; it's more like a solution to "we must do something, this is something, we must do this".


SELinux actually does significantly reduce the kernel attack surface on Android, and it has made a lot of kernel vulnerabilities unexploitable on Android. This particular bug was simply one in the remaining attack surface.


Despite the fragmentation and lack of updates provided by most OEMs, Android's architecture from a security point of view is safer than any other Linux-based platform, with the exception of ChromeOS.

A managed-language userspace; drivers implemented in Java or C++ in their own processes with IPC to the kernel (since Project Treble); a whitelist of allowed native calls beyond the rather thin set of native libraries; native code needing to go through the managed layer to touch IO beyond its own APK install dir or TCP/IP; and several security-critical processes deployed in production with FORTIFY and sanitizers turned on.

ChromeOS raises the bar even higher by running Crostini on its own Rust-implemented hypervisor and a Go-written userspace syscall wrapper (gVisor).


And it throws all that security model out of the window by letting manufacturers do whatever they want and by lacking upgrades. How long is it going to take for this update to propagate? I'm betting on at least 4 months for my phone.


Which this vulnerability bypasses, so kinda a moot point.


"Best" security model ≠ "Perfect" model. In fact, no model is perfect.


The macro-level progression of digital security really worries me. Each day the attack surface grows, the number of bad actors grows, the number of internet-connected individuals grows, and the quantity and sensitivity of data per-capita grows.

Is there a well-researched theory that considers a "breaking point" in this pattern? Where we either a) accept that all data is at risk of being exposed or b) develop fundamental security patterns to privatize our data or c) something else?


> The researchers speculate the bug is being used by NSO, an Israel-based group known to sell tools to authorities to exploit iOS and Android.

> Due to evidence of in the wild exploit, we are now de-restricting this bug 7 days after reporting to Android.

Why is this a good idea?


Because the "bad guys" already know about the vulnerability, so there's no benefit from keeping it secret but a duty to the consumers to inform them as well - especially since the kernel patch already exists.


If the "bad guys" were one team it would make sense, but there's a whole world of guys who can turn "bad" when the opportunity is given. And it is given, when a new vuln is unmasked.


The precautions a cautious user will want to take when armed with this information don’t really change or become more burdensome whether there’s one group of attackers or many, so I don’t see how that changes the calculus at all.

It’s also probably not a terribly great assumption that no one else has independently discovered this vulnerability that’s already been independently discovered twice. Caution suggests we should assume this is in the wild and act accordingly.


How many consumers across the world would actually be at risk from NSO having details of the exploit vs. all the other "bad guys" though? Isn't there a significant distinction that's being brushed under the rug here?


> How many consumers across the world would actually be at risk…

We don't know, because we don't know who bought it and how widespread they deployed it.


Oh come on. We can never know with 100% certainty. But they already know the company is selling to authorities, not random people. Can't we make an educated guess here?


When you say 'authorities', that includes countries that persecute civil rights activists.

Amnesty International has specifically criticized NSO regarding UAE activist Ahmed Mansoor. He is currently serving 10 years in jail. UN human rights experts considered his arrest and imprisonment "a direct attack on the legitimate work of human rights defenders". He was monitored by the UAE using NSO technologies.

Amnesty International have also complained that they have been targeted with NSO Group technology - specifically Pegasus. They're currently launching a legal case in Israel to restrict their export license.

A separate case claims that NSO Group used Pegasus to help the Saudis spy on Khashoggi, who was brutally murdered in the Saudi embassy.

I don't think we need to argue about the bad guys in this case.

Sources: https://www.amnesty.org/en/latest/news/2019/09/nso-spyware-h...

https://www.amnesty.org/en/latest/news/2019/05/israel-amnest...

https://www.nytimes.com/2018/12/02/world/middleeast/saudi-kh...


This is supposed to justify putting millions of more people at an even larger risk of getting hacked?


The upside is a certainty - giving a heads up to people we know are getting hacked, for whom the repercussions of getting hacked could mean imprisonment or death. The downside is allowing other hackers to maybe produce an exploit in time (before this bug gets patched), and maybe hack some more people.

I'd say the upsides outweigh the downsides.


I don't know - I can't, maybe Google/other researchers can. But true, my comment was a bit needlessly glib.

It also seems like you could discover the bug from public sources - it had been reported and fixed in the kernel. Apparently it was not registered as a security issue at the time, but a review could have discovered the link and checked that Android lacked the fix.

In the case of Android, "just tell the vendor" is also kind of awkward: there are dozens, if not hundreds, of those. If there's an active threat, it's kind of hard to justify not informing all of them, but could you trust an embargo over so many parties?


Sure, just model the relative probability that each hacking group will find a vulnerability. For a simple example, let's just say there are 5 organizations in the world that are equally good as NSO. So the odds that NSO were the first to find this one are 20%. In reality I think the odds would be quite a bit worse.


It's very hard to tell, because we've observed NSO using comparable vulnerabilities both very sparingly against select individual targets, and also in a mass fashion against whole populations, such as the recent uncovering of exploits targeting Uyghur communities.


Well, one thing is it was apparently already publicly reported over 2 years ago by syzkaller:

https://twitter.com/dvyukov/status/1180195777680986113


That was apparently fixed?

> No longer occurring on linux-next, probably fixed by the following commit:

> #syz fix: ANDROID: binder: remove waitqueue when thread exits.

https://groups.google.com/forum/#!msg/syzkaller-bugs/QyXdgUh...


Yes, fixed upstream near start of 2018, but apparently never merged back to various “stable” kernels.

See for example:

https://mobile.twitter.com/grsecurity/status/118005953923380...


Not sure if it's that exact one, but according to https://bugs.chromium.org/p/project-zero/issues/detail?id=19... it was fixed in "Dec 2017 in the 4.14 LTS kernel, AOSP android 3.18 kernel, AOSP android 4.4 kernel, and AOSP android 4.9 kernel".


The general reasoning is that since it's already being exploited, there's more value in warning people so they can decide to not use said affected devices, rather than being in the dark.


You have to "install an application from an untrusted source" for attackers to take advantage of it. "Attackers can also take advantage of the bug if they pair it with vulnerabilities in the Chrome browser to render content."

I guess it informs us what not to do at the very least. Given the track record, I'm not very optimistic of the vendors pushing a patch very soon (if ever). This keeps us informed at least.


Or the attack surface has to be exposed to the browser sandbox, which apparently this one is, per: https://bugs.chromium.org/p/project-zero/issues/detail?id=19...


> You have to "install an application from an untrusted source" for attackers to take advantage of it. "Attackers can also take advantage of the bug if they pair it with vulnerabilities in the Chrome browser to render content."

They can easily give that^ information without exposing details of the bug though?


Well, Google are themselves the vendor here. Also, it seems it's fixed, and this might encourage manufacturers to push out an update.


I just received a system update notification on my Galaxy S9. I can't help but wonder if it is related to this news.


Right, I realize they're the vendors, but isn't this just going to make even more people exploit the vulnerability before consumers get patches? Like actual hackers targeting random people in the wild, not merely law enforcement?


It may depend on who is targeted by these exploits; if the targets are politicians and people in high positions, it's better to make it public and have those people not use their phones.


I feel like this goes against responsible disclosure. Google should give the manufacturers a month to push updates themselves, just like Google would expect a month to fix an issue someone reported to them.


It's being actively exploited, which changes the calculus.


"Actively exploited" by... law enforcement? Do all consumers really need to freak out about this the same way they would if hackers had access? Doesn't that detail change the calculus here?


Not all law enforcement agencies are working for the good of their people, probably.


That still doesn't counter my point.


> "Actively exploited" by... law enforcement

"Actively exploited" by at least law enforcement. It’s sheer folly to presume that if one motivated group has already discovered this that nonetheless somehow others won’t have as well.


The manufacturers don't need to fix anything though, they just need to push out the update. The entity that needs to fix (and has fixed) the bug is Google itself.


I don't know the reasons behind that policy, but I'd guess that with the exploit already being used, there is less incentive to keep silent about the issue. The opposite is true: putting more pressure on the vendors to provide patches, and disclosing any malicious actions that are already underway as soon as possible.


They could tell vendors about the issue earlier without telling the rest of the world earlier though, can't they?


Well, the longer that vulnerabilities are kept secret, the longer that users are unable to take any action to protect themselves, and the less incentive that vendors have to roll out fixes quickly and to prevent vulnerabilities in the first place. See the Project Zero disclosure FAQ: https://googleprojectzero.blogspot.com/p/vulnerability-discl...


> It’s advisable that you don’t install apps from non-trustworthy sources, and use an alternate browser such as Firefox or Brave till the issue is fixed. We’ll keep you posted on any updates issued by phone makers.

The recommendation that other browsers are inherently protected doesn’t make sense. Any app with an RCE bug could be a vehicle to exploit this Android bug.


“It’s advisable that you don’t install apps from non-trustworthy sources, ”

Unpopular opinion, but this is why I prefer walled garden Apple for my family rather than the alternative.


I don't understand people who want to remove choice. Don't want the ability to install apps from untrustworthy sources? Don't enable the option that gives you that ability.


He did say ‘for his family’. I don’t expect older people and children not to enable that option if convinced by an advert or the lure of free pirated apps.


That's what parental controls are for. You can disable app installation completely if necessary.


The position is self-defeating. How can you hold it and at the same time advocate against their choice to use a system that doesn't have the ability to install apps from untrustworthy sources? The availability of such systems is obviously an increase in available choice, not a decrease.


> How can you hold it and at the same time advocate against their choice to use a system that doesn't have the ability to install apps from untrustworthy sources? The availability of such systems is obviously an increase in available choice, not a decrease.

Saying "doesn't have the ability to install apps from other sources" is the same as saying "doesn't give you the choice to install apps from other sources" -- it's removing a choice.

If you don't want to install apps that aren't approved by Apple then... don't. You could choose not to even if your ability to choose was not restricted.


I'm sorry, but you have completely missed the point. I generally try not to respond in ways that can be construed as rude, but there was only one point, it was core to my comment, and your comment indicates that you did not understand it.

> You could choose not to even if your ability to choose was not restricted.

Choosing not to install apps that aren't approved by Apple is not the same thing as choosing to use a device where apps that are not approved by Apple cannot be installed. That is the entire point.

How can someone (perhaps yourself?) be in favor of expanding choice and also opposed to the existence of this choice?


> How can someone (perhaps yourself?) be in favor of expanding choice and also opposed to the existence of this choice?

The issue isn't that an iPhone on which you can't install apps not approved by Apple exists, it's that an iPhone (i.e. Apple hardware running iOS) on which you can install apps not approved by Apple doesn't exist, so that choice is missing from the market. In order for it to be a choice it is necessary for both alternatives to be available.


> In order for it to be a choice it is necessary for both alternatives to be available.

Fair, and ideally both options would exist. But an Android phone is a close approximation of an unlocked iPhone, while there is no other close approximation of a locked iPhone. Pushing the security stance of iPhone to align more closely with Android is a homogenization of the landscape and a reduction in the diversity of options.


> it's removing a choice.

Not from the marketplace.

If you care about this so much, you can go over to Android. I personally like the way Apple handles their platform with an iron grip and is part of the reason I buy their products.


Yes, it does remove a choice, because choice in the market is not along just one axis.


I want to buy into a platform that is controlled tightly by a single entity that I trust to deliver a good user experience reliably; in this case, that's Apple.

I consider the tight control a feature and I'm glad this philosophy of computing is available in the marketplace.

I think people underestimate this view.


I don't think it's an underestimated view, it's quite common. I just don't think it would be compromised very much if there was an optional escape hatch.

Buying a hypothetical Apple device and never flicking on the sideloading switch would still give you that. Whereas the person who generally prefers the security engineering, design choices and/or integration of iOS but wants some exceptions is now told "If you care about this so much, you can go over to Android.".


> Buying a hypothetical Apple device and never flicking on the sideloading switch would still give you that.

Until someone (my abusive spouse? someone with a narrowly scoped zero-day and physical access to my phone?) abuses the existence of this functionality in ways that compromise my security.

> Whereas the person who generally prefers the security engineering, design choices and/or integration of iOS but wants some exceptions is now told "If you care about this so much, you can go over to Android.".

I agree with you here. It would be great if Apple offered two classes of iPhone, one where such a switch was present and one where unsigned code was prevented from executing by hardware.


> I agree with you here. It would be great if Apple offered two classes of iPhone, one where such a switch was present and one where unsigned code was prevented from executing by hardware.

You don't even need two classes of phone, just a setting to allow unsigned code which requires a factory reset to change.

Nobody can compromise the data on your phone with unsigned code if switching to unsigned code requires erasing the phone, and you're going to notice immediately if your phone has been wiped, which is no worse for you than someone with physical access smashing it with a hammer and replacing it with another phone.


Could be very visible. E.g. on some Androids (and I think Chromebooks too), unlocking the boot loader adds a large exclamation mark or other warning to the boot screen.


Unless I'm misunderstanding your point, haven't you just rephrased Karl Popper's "paradox of tolerance?" Advocating freedom to create walled gardens in mainstream computing is what's self-defeating.


I disagree. The existence of a walled garden to opt into is a meaningful choice. As long as other choices exist, the existence of a walled garden limits nobody's choice. Based on some other comments in the thread I'll add that it would be even better from a choice perspective if Apple offered two configurations of iPhone, one where sideloading was available via a software toggle and one with hardware that cannot run unsigned code. Ultimately though, an approximation of this choice is available by substituting some Android device for the first option.


The actual bug talks about "untrusted app code execution". As in, code in any app, regardless of where it was installed from. So you're relying on review by the walled garden as protection.

And of course, "untrusted sources" is not the same as "all side-loading". I use sources to sideload from that I trust more than the average app developer.



Good question, not sure. Haven't actually needed an install channel that wasn't through F-Droid (since it supports custom repositories), and would generally be wary of new ones.


In case of 0-days, the Google play store is a non-trustworthy source.


It's been a while since I had an Android phone, but don't you have to jump through some hoops to install arbitrary .apk's from the Internet, plus understand what downloading and executing a file is (something non-technical people sometimes struggle with)? It's possible on Android but not typically out of the box.


> is why I prefer walled garden Apple for my family

You can install family link, which blocks that option. It's a good option for parents to keep tabs on their kids - it shows you location, and which apps are installed.


Apple can have their walled garden of scrutinized apps. Why not allow other gardens? How about my own garden? Why not theo's garden?


Mobile phones, especially Android ones, are very vulnerable, as they rarely get updates, if they get updates at all. And these devices are used for second-factor security. And sometimes they are the only thing needed to get access to your entire life.


> However, if you install an application from an untrusted source, attackers can take advantage of that. Attackers can also take advantage of the bug if they pair it with vulnerabilities in the Chrome browser to render content.

So, you have to sideload an app from some other source. Is it unreasonable to say don't do that? How common is it anyway? I work with IT folks and only a few ever seem to load apps from outside the Play store. Perhaps in other parts of the world it's more common...?


I don't get why you're being downvoted, because that's an excellent question. I do that all the time: I'm using F-Droid more often than the Play Store. Actually, I haven't ever used the Play Store on my second smartphone (it requires a Google Account and I don't want to link it to a Google identity) - I have tens of apps on it.


I believe the article is translating that badly from the bug report. An app installed through the Play store is also an "untrusted app code execution" - Play deploys some scanning tools on submitted apps during the review, but do you trust them to catch it always? There's also things like Amazon devices with Amazon app store, ...

Similarly, Chrome is only mentioned because it's notable that the bug can be exploited from inside its isolation if combined with a browser exploit. That likely applies to all browsers, yet the article recommends switching browsers.


There are lots of sites out there that host APKs of apps (older versions, etc.)... I'd wager they have more than a few users.


...there need to be more developers like these!

I'm not sure, but aren't Play Store submittals APKs anyways? Is additionally self-hosting them that much more complicated?

I've heard that developers often go Play Store exclusive, for fear that Google might block them. Is that true?


Some of us are just trying to remove ourselves from Google's teat because we don't want to be sucking from Apple's teat instead, but it's getting increasingly harder to do so.


I've been using LineageOS on my phones for a couple of years now, recently reinstalled and made the decision to not install the Play Store... and am totally happy with it!

I get most of my stuff from F-Droid and some software vendors provide APKs straight from their websites and whatever is Play Store exclusive, I simply don't use.

It was going really well, at least until recently, when here in Germany they started introducing mandatory apps for online banking, available (of course) only on Play Store or App Store.

I wouldn't even mind everyone's app-obsession if they'd at least always provide a store-free APK as well.


Use YALP [1] or Aurora [2] (both are on F-Droid) to get the APKs for your banking apps. This is what I do for the Swedish electronic ID app; it has worked for years and hopefully will continue to do so.

[1] https://f-droid.org/en/package/com.github.yeriomin.yalpstore...

[2] https://f-droid.org/en/packages/com.aurora.store/


Tried 'em both, IIRC Yalp didn't work, Aurora seemed fine.

Since it was a banking app, I got the APK from many different sites/programs and compared the hashes, and one of the programs had definitely tampered with the APK, but I can't remember which.

Since I kept Aurora on my phone, it seems to have passed on untouched APKs, but don't take my word for it.

EDIT: Also, this voids any "warranties" my bank would offer me, so I'm really not going down that path. Really, the only correct thing would be for the bank to offer the APK on their site, but I'd probably have to wait until the government forces this to happen (if ever).


YALP and Aurora get the APK directly from the Play Store; that is the whole point of these programs.

Get the source of Aurora (or YALP) and find out how it downloads the APK from the Play Store. Then either build something that does what you want (e.g. feed it an identifier and it downloads the APK), simplify the existing code until only the required functionality is left, or use Aurora as it is. You can feed it a Google account (if you have one) to log in to the Play Store, or it can do so 'anonymously'.


In addition to "essential" apps, like the early versions of Pokémon Go(?) that bizarrely needed to be sideloaded IIRC, and running your own (unpublished) apps - there are the numerous vendor-provided apps and app stores that avoid the play store gatekeeping. (of course, the vendors should be gatekeepers here..)

Then there's F-Droid, and the people who run without Google apps (alternative ROMs).

And then there's "Android TV" that has a "different" play store due to the TV profile being different - but allows sideloading of apps like zerotier(VPN) or chrome - that work fine on TVs - but unfortunately isn't flagged as supporting TV in the Manifest.


In places without proper internet access it's common for phone stores to host their own F-Droid repos on the local network to set people up with apps.


I use some apps from F-Droid.


AWESOME!

Please, give me instructions to root my Xiaomi Ido before they fix it! (Updates on my phone have been disabled for a while.)


I see that the Samsung S series is also affected. The thing about Samsung phones is that you can root them, but that basically breaks the Knox (Secure Folder) functionality forever (an eFuse, AFAIR). Couldn't this exploit be used to root the phones while preserving Knox by not tripping the eFuse?



> However, if you install an application from an untrusted source, attackers can take advantage of that.

I'm slightly confused: Do they mean any app or a compromised app?


A compromised app. The vulnerability requires local code execution, so either an app you install and run, or a malicious webpage that somehow breaks out of the browser sandbox, or... loading a specially constructed video file in a vulnerable version of VLC, or something. Basically you need to combine this with some other vulnerability or convince the user to just straight up run the code.


What wasn't clear to me: is there a known browser sandbox vulnerability at this point that can be used together with this one to get root, or is the idea that if someone has such a vulnerability, they could use this one to get root? Can I use Chrome on my Android phone now or not?


Generally to exploit an OS from within Chrome you need 2 vulnerabilities: (1) to break out from javascript into machine code execution within the sandbox, (2) to break out of the sandbox into the general OS (and maybe (3) to then get root on the OS).

From my reading of this comment[1], it sounds like this vulnerability is such that if an attacker has vulnerability (1), this vulnerability can be used as vulnerability (2) and (3). It sounds like NSO group either had or has a separate vulnerability (1), possibly something from [2].

[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=19...

[2] https://www.cvedetails.com/product/15031/Google-Chrome.html?...


Thanks, the ambiguous wording from the article suggested to me that a vulnerability in the system code used for installing from an untrusted source may be used by the exploit.


Yes, it's frustrating when people casually conflate using any source of apps other than the official app stores with actually malicious application sources.

The reality is, you don't need to "trust" the source in general, you just need to trust that the source is not malicious. If the source is untrusted but you have faith it is not malicious, then you can rely on Android's built-in permission system to protect you from the app exceeding the bounds of what you would like it to do (see the rough sketch below).

This may sound very pedantic, but the whole existence of an app ecosystem that isn't controlled by the hegemony of the 2 or 3 big app stores depends on this nuance.
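
For what it's worth, here is a minimal Kotlin sketch of what I mean - my own illustration, not anything from the article: before installing a sideloaded APK, you can list the permissions it will request via the standard PackageManager API and judge whether they look reasonable for a non-malicious source. The apkPath value is a hypothetical path to the downloaded file.

    // Rough sketch (not production code): list the permissions a downloaded
    // APK requests before installing it. Assumes an Android Context; apkPath
    // is a hypothetical path to the sideloaded file.
    import android.content.Context
    import android.content.pm.PackageManager

    fun requestedPermissions(context: Context, apkPath: String): List<String> {
        val info = context.packageManager.getPackageArchiveInfo(
            apkPath, PackageManager.GET_PERMISSIONS
        ) ?: return emptyList()  // the file isn't a parseable APK
        return info.requestedPermissions?.toList() ?: emptyList()
    }

    // Usage: eyeball the list before tapping through the install prompt, e.g.
    // requestedPermissions(ctx, "/sdcard/Download/some-app.apk").forEach(::println)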


Android’s built in permission system didn’t work too well with the Fortnite installer

https://www.pcmag.com/news/363357/google-irks-epic-games-by-...


This has nothing to do with Android's permission system at all... Epic made the decision to distribute Fortnite themselves, bypassing the Play Store in order to save on the 30% cut Google would otherwise get from game sales. This worked by having users manually install an installer app that then downloaded the game and installed or updated it. This installer had a serious flaw that allowed malicious apps to install other software by abusing the rights the installer had gained to install or update the game.

You could claim Google was irresponsible for publishing technical details about this flaw in Epic's installer too early, but it still has nothing to do with Android's permission system.


From the article

“However, on Aug. 15, a Google researcher discovered a flaw with the installer, which can let a separate app on your phone hijack what the software actually downloads.”

So a separate app can hijack what another app does. That means the sandbox is broken.


Their installer downloaded the game into shared storage. They should at least have known that access to shared storage is not sandboxed, and should have been verifying what they were about to install instead of trusting that no other app would maliciously place an APK where their installer expected it to be... ideally they would never have used shared storage at all. That said, the sandboxing and permission system worked as expected.
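
Roughly, the safer shape would have looked something like this - a minimal Kotlin sketch under my own assumptions (not Epic's actual code; the file name and pinned digest are placeholders): stage the APK in app-private storage and verify a known-good SHA-256 before handing it to the system installer.

    // Sketch of the mitigation described above (not Epic's actual code):
    // stage the APK in app-private internal storage and check a pinned hash
    // before asking the system to install it.
    import android.content.Context
    import java.io.File
    import java.security.MessageDigest

    // Placeholder: a known-good digest shipped with the installer or fetched over TLS.
    const val EXPECTED_SHA256 = "replace-with-known-good-digest"

    // filesDir is private to this app and covered by the sandbox, unlike shared
    // external storage, so other apps cannot swap the file out underneath us.
    fun stagedApk(context: Context): File = File(context.filesDir, "game.apk")

    fun isUntampered(apk: File): Boolean {
        val md = MessageDigest.getInstance("SHA-256")
        apk.inputStream().use { input ->
            val buf = ByteArray(8192)
            while (true) {
                val n = input.read(buf)
                if (n < 0) break
                md.update(buf, 0, n)
            }
        }
        val digest = md.digest().joinToString("") { "%02x".format(it) }
        return digest == EXPECTED_SHA256
    }

    // Only hand stagedApk(...) to the platform installer if isUntampered(...) is true.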


So you have to trust the app not to do anything stupid or malicious? Isn't the whole point of an operating system's permission system that you don't have to trust the developer? All apps having access to shared storage without explicit permission is no better than desktop operating systems.

If an app can trash your user data, the permission system is useless.


If you install an app that has permission to install other apps with arbitrary permission sets... yes, you must trust it not to do anything stupid or malicious. However, this is your choice: you don't have to trust them, although that means you should not grant them that ability either.

I would agree that conflating external storage with shared storage was a huge mistake that left developers somewhat unaware that this storage is not sandboxed the way everything else is. But I think it is nonetheless the developer's responsibility not to just assume how things work, and in this case it is quite obvious if you actually ask yourself whether something like this would be possible. This is especially true if you require novice users to install your app with delicate permissions such as the ability to install other apps.

To be clear, they made the choice to use shared storage instead of storage protected by Android's sandboxing and permission systems. They probably made this choice without realizing they were effectively allowing other apps access to their installer's files - partly because shared storage has been conflated with external storage, but mainly because they did not evaluate the options they had, and their implications, carefully enough.

Apparently Android 10 improved this with scoped storage, which restricts each app's access to shared storage by default.


No.

There are things intentionally included in, and things intentionally excluded from, the sandbox. App-specific data storage is part of the former, and external storage access is part of the latter (this changed with Android 10, anyhow).

Epic's installer used external storage.
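
For reference, a quick Kotlin sketch (mine, not from the thread) of the distinction: only the last location below is the shared, non-sandboxed storage the installer used, and that is the part Android 10's scoped storage tightened.

    import android.content.Context
    import android.os.Environment

    fun storageLocations(context: Context) {
        // 1. App-private internal storage: sandboxed, no permission needed,
        //    other apps cannot read or write here.
        val privateInternal = context.filesDir

        // 2. App-specific external storage: a per-app directory on the shared
        //    volume, removed on uninstall.
        val appExternal = context.getExternalFilesDir(null)

        // 3. Shared external storage: readable/writable by any app holding the
        //    storage permission (pre-Android 10) - what Epic's installer used.
        @Suppress("DEPRECATION")
        val shared = Environment.getExternalStorageDirectory()

        println("$privateInternal | $appExternal | $shared")
    }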


Kudos to Epic for using their power to try to break Google's Play Store hegemony; facepalms for unfortunately proving one of Google's (both legitimate and pretended) safety and security concerns about acquiring apps from outside the Play Store.


Hasn’t the entire argument been that Android’s permission system would have prevented this even if the app was installed outside of the store?


The Epic app installed other apps using a private Samsung API, so the apps were installed using the permissions of the Samsung Galaxy Apps Store app, which has permission to install other apps (on Samsung devices). There is nothing to stop a similar confused deputy vulnerability in the iOS App Store. The Android permission system prevents it from taking pictures or doing other things it was not granted permission to do.


Except there isn't a way for an app to download code outside of the App Store and have the executable bit set...


There isn't a way to do that on Android either. You have to use the confused deputy to install the app.


The installer as designed was able to download code and execute it - so there was obviously a way to do it on Android.


> The installer as designed was able to download code and execute it

No, it was able to download code and ask a Samsung app store system app to install it. This doesn't work on non-Samsung Android devices, and the exact same confused deputy problem can exist on iOS.
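
To illustrate what the stock path looks like: without a confused deputy, the most an Android app can do is hand an APK to the platform installer, which prompts the user. A rough Kotlin sketch under my own assumptions (the FileProvider authority is a placeholder, and the manifest would need REQUEST_INSTALL_PACKAGES plus a matching provider entry):

    // Sketch of the normal, user-mediated install path on stock Android:
    // nothing gets installed silently; the system installer asks the user.
    // Assumes the manifest declares REQUEST_INSTALL_PACKAGES and a FileProvider
    // whose authority matches the placeholder below.
    import android.content.Context
    import android.content.Intent
    import androidx.core.content.FileProvider
    import java.io.File

    fun promptInstall(context: Context, apk: File) {
        val uri = FileProvider.getUriForFile(
            context, "${context.packageName}.fileprovider", apk
        )
        val intent = Intent(Intent.ACTION_INSTALL_PACKAGE).apply {
            data = uri
            addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
            // Add FLAG_ACTIVITY_NEW_TASK if calling from a non-Activity context.
        }
        context.startActivity(intent)  // the package installer UI takes over
    }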


People, many on this very site, often erroneously claim that P0 does not disclose vulnerabilities in Google's own products. Or they claim that Google gets favorable treatment, like the disclosure only of less severe bugs, or longer disclosure deadlines. Here is a countervailing datapoint.


Oppo A3, but not the OnePlus 7 Pro?


This is why we need the kernel to be re-written in Rust ASAP -- to make these flaws a thing of the past.


Too much effort for something that is only theoretical. Better to start a new one and see how it goes.


this is a feature and not a bug


Do you have a source for that?


The source is me, and the reason I said that is that they usually block you from being root on your own devices... sometimes, bugs like these are the only way to really own your device.



