Cloak and dagger – a new kind of attack for Android (cloak-and-dagger.org)
157 points by elsombrero on May 24, 2017 | 52 comments



Note that Android O now clearly shows a warning when an app is rendering an overlay - probably the reason why the issues were marked wontfix.

As usual, getting the fix on the older versions is a whole different story.

The other Google failing here is the whole permission thing - since 6.0, Android apps need to ask for permission to draw over other apps. However, for some really baffling reason Google still lets us publish APKs that target an API below 6.0, which automatically triggers a fallback mode that grants the permission at install time. Why, I have no idea.
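For reference, apps that do target 23+ have to pass a runtime check before drawing on top. A minimal sketch (context, activity, and the REQUEST_OVERLAY request code are assumed to be in scope):

    import android.content.Intent;
    import android.net.Uri;
    import android.os.Build;
    import android.provider.Settings;

    // Apps targeting API 23+ must hold the "draw over other apps"
    // permission at runtime; APKs targeting below 23 get it at install.
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M
            && !Settings.canDrawOverlays(context)) {
        // Send the user to the system screen that grants the
        // permission for this package.
        Intent intent = new Intent(
                Settings.ACTION_MANAGE_OVERLAY_PERMISSION,
                Uri.parse("package:" + context.getPackageName()));
        activity.startActivityForResult(intent, REQUEST_OVERLAY);
    }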


Not everyone is using Android > 6, or am I missing something?


Android applications take two SDK versions into account: 'target SDK' and 'minimum SDK'. The target value allows you to use APIs up to that version, but it also opts you into that version's behavior changes and new requirements (in a backwards-compatible fashion). Minimum SDK is just that: the absolute lowest Android version the app will run on.

A brand new blank-slate app will be configured with a target SDK of 25 (latest Nougat) and a minimum SDK of 14 (Ice Cream Sandwich, iirc). This means it will run on ancient devices but also be able to use the latest Nougat APIs (when available). However, because it targets 23+ (6.0), it will force you to use runtime permission prompts and stuff.
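In Gradle terms, that default setup looks roughly like this (illustrative build.gradle fragment, not generated output):

    android {
        compileSdkVersion 25

        defaultConfig {
            minSdkVersion 14     // oldest Android version the app installs on
            targetSdkVersion 25  // opts in to API-25 behavior; runtime permissions kick in at 23+
        }
    }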


Targeting a higher API should still allow you to install on a lower version; at least that's my understanding of API targeting (not an Android dev, so I might be wrong).


This seems like a refinement of "tap-jacking", which is a pretty well-known UI redressing attack for Android apps, and works in a way that is somewhat similar to "clickjacking" on web pages --- an invisible frame rendered on top of the application captures your inputs (as opposed to an opaque frame that obscures the legitimate application and tricks you into interacting with it).

The big limitation on these attacks is that you have to install a malicious application and then trick people into getting to a situation where the application can interact with a specific target; that's something that should be straightforward for app stores to screen for.


I agree. Once you've got the malicious app, it's already too late. As they point out, Facebook could arguably tap jack all your passwords too. Known issue, so pretending it's a new issue by giving it a new name is silly.


I'm less concerned with whether it's original and more with whether it's a serious silent attack on my phone (yes?). And whether Google intends to fix the problem (no?).


Why do they need a specific target in this case? It appears from the video that they can overlay the keyboard and simply grab all keyboard input.

> that's something that should be straightforward for app stores to screen for.

Also, wouldn't it be relatively viable to gain update access to a popular existing application, or to publish an innocuous application at first and then push an update that adds this attack?


> they can overlay the keyboard and simply grab all keyboard input?

Keyboards have variable heights, widths, etc, so they at least have to know what the keyboard looks like.


Most users use the same default keyboards. Even if you use a different keyboard, with tap heatmaps they could work out the keyboard's layout and dimensions.
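A toy illustration of that idea (purely hypothetical; assumes a stock portrait QWERTY keyboard spanning the full screen width, with row boundaries calibrated from the heatmap):

    // Guess which top-row key a recorded tap hit, given only its x
    // coordinate. A real attack would calibrate every row and the key
    // heights from the observed tap heatmap.
    public final class KeyGuesser {
        private static final String TOP_ROW = "qwertyuiop";

        static char guessTopRowKey(float x, float screenWidth) {
            int index = (int) (x / screenWidth * TOP_ROW.length());
            return TOP_ROW.charAt(Math.min(index, TOP_ROW.length() - 1));
        }
    }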


No. Why would the OS assume anything but maliciousness from apps? The app store is not part of the OS, so why would it perform a security-critical function? Why would you even involve humans to ensure security?

These are dangerous engineering decisions and should be publicized as much as possible, not toned down.


Additionally, stores like F-Droid and whatever Samsung has exist, so the check needs to be further down, in the OS, like you said.


How is the invisible frame able to pass taps through to the app underneath?


You can make the overlay window non-touchable, so taps fall through to whatever is underneath, and still get notified of touches via onTouchEvent (the overlay receives ACTION_OUTSIDE events).
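Roughly the flag combination involved, as a sketch (not the paper's actual code; windowManager and overlayView are assumed to exist):

    import android.graphics.PixelFormat;
    import android.view.WindowManager;

    // Full-screen overlay: FLAG_NOT_TOUCHABLE lets every tap pass
    // through to the app underneath, while FLAG_WATCH_OUTSIDE_TOUCH
    // still delivers a MotionEvent.ACTION_OUTSIDE whenever a tap lands
    // outside (i.e. through) the overlay. Newer Android versions strip
    // the coordinates from those events.
    WindowManager.LayoutParams params = new WindowManager.LayoutParams(
            WindowManager.LayoutParams.MATCH_PARENT,
            WindowManager.LayoutParams.MATCH_PARENT,
            WindowManager.LayoutParams.TYPE_SYSTEM_ALERT, // pre-O overlay type
            WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE
                    | WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE
                    | WindowManager.LayoutParams.FLAG_WATCH_OUTSIDE_TOUCH,
            PixelFormat.TRANSLUCENT);
    windowManager.addView(overlayView, params);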


From the section marked "Responsible Disclosure":

> Current — All the attacks discussed by this work are still practical, even with latest version of Android (Android 7.1.2, with security patches of May 5th installed).

Honest question - does HN feel this is actually responsible disclosure? A large number of Android devices seem to be vulnerable to the issue, and the whitepaper includes exploit details. Is the intent to force Google to pay attention to the issue?


There's no such thing as "responsible disclosure".

The term is, quite literally, an Orwellian scheme to promote vendor interests as if they were naturally shared by researchers and users. A more neutral term is "coordinated disclosure". We can discuss how carefully coordinated things are, but there is --- to say the least --- a lot of disagreement about how much responsibility researchers have for coordinating this kind of stuff with vendors who shouldn't be shipping vulnerable code in the first place.

A warning in advance, though: this is a pretty boring discussion to be having on a thread about a new kind of vulnerability. Better that we should be talking about the merits of the paper itself.


So all disclosure is responsible? Or no disclosure is responsible?


I don't quite get this logic.

Your neighbor left their door unlocked. You have their phone number. You can call them to tell them first, and make a funny FB post about the event afterwards. Or you could just post about it on FB.

Not to mention that the users of the software are the ones who suffer the most. Seems pretty ethical to at least try to get the issue fixed.


A better analogy would be: you discover that all the door locks from some vendor can easily be opened without the key, including your neighbor's. So you tell the vendor. The vendor does nothing about it for 6 months, so you decide to tell everyone who has the lock instead, so they can fix/change it.


Would that not fall into responsible disclosure? Maybe the definition I have is different

I would think that if you make an honest and serious attempt to coordinate with the vendor, it's responsible.


I think his point is rather that "responsible" is a weasel-word.


They contacted Google on August 22nd, 2016 and Google decided to do very little about it. At least we know there is a problem now and we can take mitigation actions.


It would be irresponsible, and make you a bad neighbor, to post on social media about your neighbor's door being unlocked at all, ever, really.

But you might want to tell your neighbor's housemates about it if your neighbor keeps leaving the door unlocked and they haven't noticed but you have. Or worse, if your neighbor is making duplicates of his keys and handing them out around the neighborhood.

The flaw in the analogy is that your neighbor's door being unlocked affects nobody but your neighbor and their housemates.


Like it or not, the average smartphone user's security is at the mercy of their vendor. The pace and priority of updates is not up to the typical device owner, nor do security researchers have any say - it's up to Apple and Google. Fully exposing flaws in the software they write can have a real impact on millions of people who don't have much of an alternative.

If you as a security researcher reasonably believe the vendor has neglected to patch the vulnerability, and you release exploit details, you have made things less safe for security-unaware users. There's no way around that. You already gave the vendor the opportunity to fix the problem. Now you are giving attackers the tools they need to take advantage of it.

You and I might care about abstract principles of corporate responsibility, but the typical user just wants to avoid getting phished, and I can't help but feel open disclosure has a lot of potential to harm them. I hope it's worth it.


> If you as a security researcher reasonably believe the vendor has neglected to patch the vulnerability, and you release exploit details, you are amplifying the current harm it's inflicting.

At least the public now has a fair chance to evaluate whether they want to use their devices despite these vulnerabilities.


That's true, but it glibly ignores the reality of vendor lock-in. The average non-infosec human is not going to switch devices because of a vulnerability whitepaper. This "vuln-insensitive user" is better served by giving the vendor a longer timeline to take care of the issue, rather than releasing exploit details two weeks after someone changed a status on a ticket.


Then fight the lock-in, in court if necessary. Tort lawyers will have some fun with it if you can show you cannot avoid losses of secure data.

I am pretty sure the line of defence will be a weak "no warranty" claim that has nothing to do with lock-in.


I feel that this rebuttal is silly. Nobody outside HN (and similar) will understand what this means. And even if they do, many users cannot afford to buy an iPhone (the only real competition) just because of one security flaw that might affect them. Even then, the opportunity cost of switching is not minimal.


The fact that the only competition to a product with 85% global market share is unaffordable is a strong indicator that we need to bust a monopoly in the public interest.


I wonder, if Apple were perceived to be taking a security-first approach to its products, whether this same audience would take a far more forgiving view of the Apple price list.


At the same time, it's a real "damned if you do, damned if you don't" situation, isn't it?

It is well documented that vulnerabilities are traded and sold freely online at various forums; since this isn't really a 0-day or an exploit as much as it is a new spin on a classic trick, there's no value in malicious actors hoarding the secret, especially since it's mostly a social engineering trick, not an actual exploit.

In this particular case, the weight of the options seems to be "make key users aware and put public pressure on vendors, at the cost of giving a few bad actors an idea they didn't have before" versus "prevent giving a few more bad actors a new idea, at the cost of users being in the dark and vendors having no reason to address the issue".

The cost in this case seems pretty minimal, given that I'd assume there's no reason to really hang onto such a tactic.

For other exploits, it's basically the same idea: Who has the exploit, how likely is it that it's been sold, is the vendor likely to do anything about it, how dangerous is it to users? In most cases, if the exploit is already known and used among bad actors freely, what benefit is there to users in keeping such exploits hidden just to prevent a few more bad actors from doing it as well? Just seems like if the house is on fire, there's no sense in worrying about the wallpaper. (paraphrasing the quote since I can't remember its origin)


I don't really care what you believe about this stuff --- you're entitled to your opinions! --- as long as we're clear that people probably shouldn't be perpetuating the term "responsible disclosure".


And I don't really care what you call this stuff! "Coordinated disclosure" is a good value-neutral term and I appreciate the reasons why you prefer it. I'm just trying to get a discussion going on which approach, if any, is most responsible (lowercase).


The issues are marked as "won't fix". The only responsible course of action, IMO, at that point is to publicly announce the vulnerability.


One thing that is kind of unique about this vulnerability is that users can actually prevent attacks by checking the permissions of an untrustworthy app, because Google has permission controls built in for the features being abused here.

The heart of this vulnerability is more that these settings aren't clear about their implications, and that the defaults are bad. Disclosing to users seems to be a responsible thing to do, since users can prevent an attack on their own device (especially given Google's response of "working as intended").


This is insightful, thanks. In this specific case, you're absolutely right that users themselves can take steps to mitigate the risk. If users come to understand the real meaning of granting permissions as a result of this disclosure, that's better for everyone.

Definitely a factor to take into account, particularly when deciding how the information will be publicized.


Malicious actors may find the vulnerability just like researchers did, so if the issue is not being fixed, the only responsible thing is to inform everyone of the potential danger they _already_ are in, and let them decide.

The bells & whistles of giving it a fancy name, domain, etc suggest an additional layer of intent with the disclosure, but that's a separate issue because they have followed reasonable procedures beforehand.


Malicious actors and users will be alerted in equal measure, but the typical Android phone owner will tend to do nothing in response to a new security paper, while malicious actors will tend to write exploit scripts.

That's the main concern going through my mind when I see this. It would be different if the average user was a business, a developer, a security professional - but this isn't Heartbleed and we're talking about normal people this time. Are they even going to read this, and if so, what can they do besides keep an eye on their permissions?


> Malicious actors and users will be alerted in equal measure

You have to assume malicious actors will find the vulnerability on their own, if they haven't already. They are already alerted.


Absolutely, but widespread disclosure does nothing to stop that from happening, and may actually enable less-sophisticated attackers to do things they weren't capable of before.


Assuming those less sophisticated attackers don't just buy the exploit from more sophisticated ones.


>but the typical Android phone owner will tend to do nothing in response to a new security paper, while malicious actors will tend to write exploit scripts.

If there are enough open, unpatched exploits people will switch to other phone vendors.


I seriously doubt that. Some sources put WinXP usage at 7 percent when it should be zero, and there are plenty of open, unpatched exploits for WinXP.


If you look at the timeline, it details that they disclosed the exploit and Google marked it as won't-fix, which seems fair game for public release at that point.


Everything but the info about apps getting the permission granted automatically by the Play Store was already used by malware over a year ago.

https://www.skycure.com/blog/accessibility-clickjacking/


Absolutely responsible.

They should have gone public earlier, like August 22nd, 2016 + 90 days (this is what Google itself does).


The impact of this vulnerability would not have been as bad if only Android let the user disable background apps easily and reliably.


It's so ironic that their argument for preventing users' control of their own phones is security.


Android should totally disable draw-on-top while security-sensitive UI elements are visible, like password fields and permission dialogs. Seems like an easy fix.


It already does for permission dialogs. The problem is all the legacy applications that don't require the explicit runtime permission dialog (those targeting an API older than 24).

Then there is this accessibility thing, which can actually click on the permission dialog and does not require a dialog itself (only a toggle in Settings, which is not protected against overlays).
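On the app side there is at least a per-view mitigation developers can apply today: the standard tapjacking filter. A sketch (loginButton is a placeholder for any sensitive view):

    import android.view.MotionEvent;

    // Option 1: have the framework drop touches that are delivered
    // while another window is drawn over this view.
    loginButton.setFilterTouchesWhenObscured(true);

    // Option 2: check the "obscured" flag yourself.
    loginButton.setOnTouchListener((v, event) -> {
        if ((event.getFlags() & MotionEvent.FLAG_WINDOW_IS_OBSCURED) != 0) {
            return true;  // swallow taps arriving through an overlay
        }
        return false;     // otherwise let the view handle it normally
    });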


Would this disrupt apps like LastPass, etc?


As cynical and unoriginal as I may sound, I could only expect such an exploit for Android, and not iOS.




