As usual, getting the fix on the older versions is a whole different story.
The other Google failing here is the whole permission thing: since 6.0, Android apps need to ask for permission to draw over other apps. However, for some really baffling reason Google still lets us publish APKs that target an API level below 23 (6.0), which automatically triggers a fallback mode that grants the permission at install time. Why, I have no idea.
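For the curious, roughly how that split looks from code. A sketch: the helper class name is made up, but Settings.canDrawOverlays and ACTION_MANAGE_OVERLAY_PERMISSION are the real M-era APIs:

    // Sketch: how the install-time vs. runtime split plays out for the
    // "draw over other apps" permission. Helper class name is made up.
    import android.content.Context;
    import android.content.Intent;
    import android.net.Uri;
    import android.os.Build;
    import android.provider.Settings;

    class OverlayPermissionHelper {
        static boolean canDrawOverlays(Context ctx) {
            // Below M there is no runtime model at all: the permission was
            // granted at install, which is the fallback described above.
            if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
                return true;
            }
            return Settings.canDrawOverlays(ctx);
        }

        static Intent manageOverlayIntent(Context ctx) {
            // Deep-links the user to the "Draw over other apps" screen.
            return new Intent(Settings.ACTION_MANAGE_OVERLAY_PERMISSION,
                    Uri.parse("package:" + ctx.getPackageName()));
        }
    }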
A brand new blank-slate app will be configured with a target SDK of 25 (latest Nougat) and a minimum SDK of 14 (Ice Cream Sandwich, iirc). This means it will run on ancient devices but also be able to use the latest Nougat APIs (when available). However, because it targets 23+ (6.0), it will force you to use runtime permission prompts and the like.
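Concretely, targeting 23+ means every "dangerous" permission goes through a flow like this instead of being granted at install. A sketch using the support-library helpers of that era; the method name and request code are made up:

    // Minimal runtime-permission flow forced by targetSdkVersion >= 23.
    // With targetSdkVersion <= 22 this prompt never appears; everything
    // is granted at install time.
    import android.Manifest;
    import android.app.Activity;
    import android.content.pm.PackageManager;
    import android.support.v4.app.ActivityCompat;
    import android.support.v4.content.ContextCompat;

    public class CameraActivity extends Activity {
        private static final int REQ_CAMERA = 1; // arbitrary request code

        void ensureCameraPermission() {
            if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
                    != PackageManager.PERMISSION_GRANTED) {
                // Shows the system dialog; the user's answer comes back via
                // onRequestPermissionsResult(requestCode, permissions, results).
                ActivityCompat.requestPermissions(this,
                        new String[]{Manifest.permission.CAMERA}, REQ_CAMERA);
            }
        }
    }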
The big limitation on these attacks is that you have to install a malicious application and then trick people into a situation where the application can interact with a specific target; that's something that should be straightforward for app stores to screen for.
> that's something that should be straightforward for app stores to screen for.
Also, wouldn't it be relatively viable to gain update access to a popular existing application, or to ship an innocuous application at first and then push an update that enables this attack?
Keyboards have variable heights, widths, etc., so an attacker at least has to know what the keyboard looks like.
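(If anyone is wondering how an app even measures that: a common heuristic is to diff the visible window frame against the root view height on layout changes. A sketch; the method name is made up, the listener APIs are real. Individual key positions still vary per IME, which is the point.)

    // Rough keyboard-height heuristic: compare the visible display frame
    // to the full root-view height whenever the layout changes.
    import android.graphics.Rect;
    import android.view.View;
    import android.view.ViewTreeObserver;

    void watchKeyboard(final View rootView) {
        rootView.getViewTreeObserver().addOnGlobalLayoutListener(
                new ViewTreeObserver.OnGlobalLayoutListener() {
                    @Override
                    public void onGlobalLayout() {
                        Rect visible = new Rect();
                        rootView.getWindowVisibleDisplayFrame(visible);
                        int keyboardHeight = rootView.getHeight() - visible.bottom;
                        // keyboardHeight > 0 roughly means the IME is showing,
                        // but key widths/positions still differ per keyboard.
                    }
                });
    }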
These are dangerous engineering decisions and should be publicized as much as possible, not toned down.
> Current — All the attacks discussed by this work are still practical, even with latest version of Android (Android 7.1.2, with security patches of May 5th installed).
Honest question - does HN feel this is actually responsible disclosure? A large number of Android devices seem to be vulnerable to the issue, and the whitepaper includes exploit details. Is the intent to force Google to pay attention to the issue?
The term is, quite literally, an Orwellian scheme to promote vendor interests as if they were naturally shared by researchers and users. A more neutral term is "coordinated disclosure". We can discuss how carefully coordinated things are, but there is --- to say the least --- a lot of disagreement about how much responsibility researchers have for coordinating this kind of stuff with vendors who shouldn't be shipping vulnerable code in the first place.
A warning in advance, though: this is a pretty boring discussion to be having on a thread about a new kind of vulnerability. Better that we should be talking about the merits of the paper itself.
Your neighbor left their door unlocked. You have their phone number. You can call them to tell them first, and make a funny FB post about the event afterwards. Or you could just post about it on FB.
Not to mention that the users of the software are the ones who suffer the most. Seems pretty ethical to at least try to get the issue fixed.
I would think that if you make an honest and serious attempt to coordinate with the vendor, it's responsible.
But you might want to tell your neighbor's housemates about it if your neighbor keeps leaving the door unlocked and they haven't noticed but you have. Or worse, if your neighbor is making duplicates of his keys and handing them out around the neighborhood.
The flaw in the analogy is that your neighbor's door being unlocked affects nobody but your neighbor and their housemates.
If you as a security researcher reasonably believe the vendor has neglected to patch the vulnerability, and you release exploit details, you have made things less safe for security-unaware users. There's no way around that. You already gave the vendor the opportunity to fix the problem. Now you are giving attackers the tools they need to take advantage of it.
You and I might care about abstract principles of corporate responsibility, but the typical user just wants to avoid getting phished, and I can't help but feel open disclosure has a lot of potential to harm them. I hope it's worth it.
At least the public now has a fair chance to evaluate whether they want to use their devices despite these vulnerabilities.
I am pretty sure the line of defence will be a weak "no warranty" claim that has nothing to do with lock-in.
It is well documented that vulnerabilities are traded and sold freely online at various forums; since this isn't really a 0-day or an exploit so much as a new spin on a classic trick, there's no value in malicious actors hoarding the secret, especially since it's mostly a social-engineering trick, not an actual exploit.
In this particular case, the weight of the options seems to be "make key users aware and put public pressure on vendors, at the cost of giving a few bad actors an idea they didn't have before" versus "prevent giving a few more bad actors a new idea, at the cost of users being in the dark and vendors having no reason to address the issue".
The cost in this case seems pretty minimal, given that I'd assume there's no reason to really hang onto such a tactic.
For other exploits, it's basically the same idea: Who has the exploit, how likely is it that it's been sold, is the vendor likely to do anything about it, how dangerous is it to users? In most cases, if the exploit is already known and used among bad actors freely, what benefit is there to users in keeping such exploits hidden just to prevent a few more bad actors from doing it as well? Just seems like if the house is on fire, there's no sense in worrying about the wallpaper. (paraphrasing the quote since I can't remember its origin)
The heart of this vulnerability is more that these settings aren't clear about their implications and ship with bad defaults. Disclosing to users seems the responsible thing to do, since users can prevent an attack on their own device (especially given Google's response of "working as intended").
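E.g., a user (or a helper app) can at least enumerate which installed apps even request the overlay permission and review them under "Draw over other apps". A sketch, assuming a plain PackageManager scan; the method name is made up:

    // Enumerate installed apps that request SYSTEM_ALERT_WINDOW, i.e. the
    // candidates worth reviewing under "Draw over other apps" in Settings.
    import android.content.Context;
    import android.content.pm.PackageInfo;
    import android.content.pm.PackageManager;
    import android.util.Log;

    void listOverlayRequesters(Context ctx) {
        PackageManager pm = ctx.getPackageManager();
        for (PackageInfo pkg :
                pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)) {
            if (pkg.requestedPermissions == null) continue;
            for (String perm : pkg.requestedPermissions) {
                if (android.Manifest.permission.SYSTEM_ALERT_WINDOW.equals(perm)) {
                    Log.i("OverlayAudit", pkg.packageName);
                }
            }
        }
    }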
Definitely a factor to take into account, particularly when deciding how the information will be publicized.
The bells & whistles of giving it a fancy name, domain, etc. suggest an additional layer of intent with the disclosure, but that's a separate issue, because they followed reasonable procedures beforehand.
That's the main concern going through my mind when I see this. It would be different if the average user were a business, a developer, or a security professional - but this isn't Heartbleed, and we're talking about normal people this time. Are they even going to read this, and if so, what can they do besides keep an eye on their permissions?
You have to assume malicious actors will find the vulnerability on their own, if they haven't already. They are already alerted.
If there are enough open, unpatched exploits people will switch to other phone vendors.
They should have gone public earlier, like August 22nd, 2016 + 90 days (this is what Google itself does).
Then there is this accessibility thing, which can actually click on the permission dialog and does not require a permission prompt itself (only a toggle in Settings, which is not protected against overlays).
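For anyone unfamiliar with the primitive: an enabled accessibility service gets the active window's node tree and can click nodes by label, which is exactly what lets it press a button in a dialog. A minimal sketch; the class name is made up, and the button text varies by OS version and locale:

    // Sketch of the accessibility primitive: find a node by its label
    // and click it programmatically, no permission prompt involved.
    import android.accessibilityservice.AccessibilityService;
    import android.view.accessibility.AccessibilityEvent;
    import android.view.accessibility.AccessibilityNodeInfo;

    public class ClickerService extends AccessibilityService {
        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) {
            AccessibilityNodeInfo root = getRootInActiveWindow();
            if (root == null) return;
            // Button text is version- and locale-dependent ("ALLOW", "OK", ...).
            for (AccessibilityNodeInfo node :
                    root.findAccessibilityNodeInfosByText("ALLOW")) {
                node.performAction(AccessibilityNodeInfo.ACTION_CLICK);
            }
        }

        @Override
        public void onInterrupt() {}
    }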