
Boffins reveal password-killer 0days for iOS and OS X - moe
http://www.theregister.co.uk/2015/06/17/apple_hosed_boffins_drop_0day_mac_ios_research_blitzkrieg/
======
tptacek
It's not bad work but it looks like The Register has hyped it much too far.
Breakdown:

* OSX (but not iOS) apps can delete (but not read) arbitrary Keychain entries and create new ones for arbitrary applications. The creator controls the ACL. A malicious app could delete another app's Keychain entry, recreate it with itself added to the ACL, and wait for the victim app to repopulate it.

* A malicious OSX (but not iOS) application can contain helpers registered to the bundle IDs of other applications. The app installer will add those helpers to the ACLs of those other applications (but not to the ACLs of any Apple application).

* A malicious OSX (but not iOS) application can subvert Safari extensions by installing itself and camping out on a Websockets port relied on by the extension.

* A malicious iOS application can register itself as the URL handler for a URL scheme used by another application and intercept its messages.

The headline news would have to be about iOS, because even though OSX does
have a sandbox now, it's still not the expectation of anyone serious about
security that the platform is airtight against malware. Compared to other
things malware can likely do on OSX, these seem pretty benign. The Keychain
and BID things are certainly bugs, but I can see why they aren't hair-on-fire
priorities.

Unfortunately, the iOS URL thing is I think extraordinarily well-known,
because for many years URL schemes were practically the only interesting thing
security consultants could assess about iOS apps, so limited were the IPC
capabilities on the platform. There are surely plenty of apps that use URLs
insecurely in the manner described by this paper, but it's a little unfair to
suggest that this is a new platform weakness.

~~~
eridius
Thank you for posting this. This is dramatically different from the way The
Register was hyping it. It's pretty serious, of course, but the iOS
vulnerability is minimal[1], and yet The Register made it sound like the
keychain was exploited on iOS, when it seems that's not the case at all.

[1] How often is that even going to be exploitable? Generally cross-app
communication like that is to request info from the other app, not to send
sensitive info to that app.

~~~
tptacek
No, iOS applications definitely (ab)use URL schemes to send sensitive
information or trigger sensitive actions. The problem isn't that they're wrong
about that; it's that it's not a new concern.

~~~
eridius
The only example that really comes to mind where actual secret information is
sent over a URL like that is things like Dropbox OAuth tokens, which require
the requesting app to have a URL scheme db-<app_key> that it uses to send
the token. But besides the fact that this isn't a new issue, it's hard to
imagine this actually being a serious problem, because it's impossible for the
malware app to hide the fact that it just intercepted the URL request. If I'm
in some app and request access to Dropbox, it switches to the Dropbox app and
asks for permission, and then switches to some _other_ app, it's pretty
obvious that other app is behaving badly. Especially since there's no way for
that other app to then hand the token back to the original app, so you can't
even man-in-the-middle and hope the user isn't paying attention.

~~~
tptacek
It's less common with the major well-known applications, in part because
almost all of those get some kind of security assessment done, and, like I
said: this was for a long time the #1 action item on any mobile app
assessment.

What you have to keep in mind is that for every major app you've heard of,
there are 2000+ that you've never heard of but that are important to some
niche of users.

~~~
eridius
Sure, I get that. I'm still just having a hard time imagining trying to
exploit this, because it's impossible to hide that you did it from the user,
and it completely breaks the app you're trying to take advantage of (since you
took over its URL handler, it can never receive the expected information, so
you can't even try to immediately pass the data to the real app and hope the
user doesn't notice).

Assuming the model where you send a request to another app, which then sends
the secret data (such as an OAuth token) back to you, it also seems rather
trivial to defeat (if you're the app with the secret data). Just require the
sending app to synthesize a random binary string and send it to you, and use that
as a one-time pad for the data. You know your URL handler is secure (because
otherwise it can't have been invoked), and this way you know that even if some
other app intercepts the reply, they can't understand it. Granted, this
doesn't work for the model where you send secret data in your initial request
to the other application, but I can't even think of any examples of apps that
do that.
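That defense can be sketched in a few lines of Python (an illustrative model
only; a real app would use the platform crypto APIs, and the scheme still
assumes the requester's own URL handler hasn't itself been hijacked):

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Requesting app: generate a random pad and send it along with the request.
token = b"oauth-token-from-dropbox"   # the secret held by the responding app
pad = os.urandom(len(token))          # one-time pad, as long as the secret

# Responding app: XOR the secret with the received pad before replying.
ciphertext = xor_bytes(token, pad)

# An interceptor who sees only the reply learns nothing useful, but the
# requester (who kept the pad) recovers the token exactly.
assert xor_bytes(ciphertext, pad) == token
```

For this to actually be a one-time pad, the pad has to be at least as long as
the secret and never reused; in practice the requester doesn't know the
secret's length up front, so it would have to send a pad of some agreed
maximum length.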

~~~
semi-extrinsic
Why can't the "other app" just fake a Dropbox-looking display that says
"Sorry, service unavailable. Click here to try again." while it does malicious
stuff in the background? And then pass to the real Dropbox once it's finished
being malicious?

~~~
eridius
Several reasons:

1. You can't intercept the request to Dropbox itself, because that doesn't
contain any secret data. You'd need to intercept the response, and you can't
fake the UI for that app because it would be immediately apparent to even the
most cursory inspection that your app is not in fact the app that made the
request (even if you perfectly mirrored their UI, you wouldn't have any of
their data so you couldn't replicate what their app is actually showing). And
anyone who looks at the app switcher would see your app there so you can't
possibly hide the fact that you launched at that time.

2. Even if you could be 100% convincing, you can't actually pass the data to
the real app when you're done recording it because, by virtue of overriding
their URL handler, you've made it _impossible_ to invoke the real app's URL
handler. There's no way on iOS to specify which app you're trying to open a
URL in. All you can do is pass the URL to the system and it will open the app
it thinks is correct. Since you overrode their URL handler, if you try and
call it, you'll just be calling yourself again. And since you've now made
their URL handler inaccessible, you've cut off the only possible way to pass
that data to the real app (even if it has other URL handlers, they won't
accept the same data).

So the end result is that if you do try and take over someone else's URL
handler, it'll be blindingly obvious the moment you actually intercept a
request.

The only approach that even seems semi-plausible would be attempting to phish
the user by presenting a login UI as if you were Dropbox and hoping they enter
their username/password, but the problem with that is the entire point of
calling out to a separate app is that you're already logged-in to that app, so
if the user is presented with a login form at all, they should instantly be
suspicious. And of course as already mentioned you can't hide the fact that
you intercepted the request, so you'll be caught the first time you ever do
this.

On a related note, even if you can make a perfectly convincing UI, your launch
image will still give you away as being the wrong app (since the user will see
your launch image as the app is launched). Unless you make your launch image
look like the app you're trying to pretend to be, but then you can't possibly
pretend to be a legitimate app because the user has to actually install your
app to begin with, which means they'll be looking at it. If they install some
random app from the app store and it has a launch image that looks like, say,
Dropbox, that's a dead giveaway that it's shady. There's not really any way to
disguise an app like that.

------
andor
Quick summary of the keychain "crack":

Keychain items have access control lists, where they can whitelist
applications, usually only themselves. If my banking app creates a keychain
item, malware will not have access. But malware can delete and recreate
keychain items, and add both itself and the banking app to the ACL. Next time
the banking app needs credentials, it will ask me to reenter them, and then
store them in the keychain item created by the malware.
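That flow can be modeled in a few lines (a toy Python sketch with hypothetical
names; the real Keychain is of course not a dict, but the ACL logic is
analogous):

```python
# Toy keychain: item name -> ACL + secret. The ACL is fixed by whoever
# creates the item; reads are checked against it, but deletes are not.
keychain = {}

def add_item(name, secret, acl):
    keychain[name] = {"acl": set(acl), "secret": secret}

def read_item(caller, name):
    item = keychain[name]
    if caller not in item["acl"]:
        raise PermissionError(caller)
    return item["secret"]

def delete_item(name):
    del keychain[name]      # the reported flaw: no ACL check on delete

# Normal use: the banking app creates an item only it can read.
add_item("bank-creds", "hunter2", acl={"bank"})

# Attack: the malware deletes the item and recreates it, empty,
# with itself added to the ACL.
delete_item("bank-creds")
add_item("bank-creds", None, acl={"bank", "malware"})

# The banking app finds no credentials, asks the user to re-enter them,
# and stores them in the attacker-created item...
keychain["bank-creds"]["secret"] = "hunter2"

# ...which the malware can now read.
assert read_item("malware", "bank-creds") == "hunter2"
```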

~~~
pilsetnieks
If this is how it works, you can check it on OS X by clicking each item in
Keychain Access and looking at the Access Control tab, or by running `security
dump-keychain -a` in Terminal, which lists all keychain items and their access
control lists. It's still a big and unwieldy list, but not as bad as clicking
each keychain item. Someone better at this stuff could probably think of a way
to make it easier.

(This would only show if you've been exploited already, not that some app is
capable of doing it.)

~~~
obituary_latte
>This would only show if you've been exploited already

What would indicate a compromise?

~~~
alimbada
If the ACL of a keychain item contains an app that isn't supposed to have
access to that keychain item then that would indicate a compromise.

~~~
obituary_latte
Ah ok that makes sense. Thanks very much.

------
MagerValp
That paper is rife with confusing or just plain wrong terminology, and the
discussion jumps between Android, iOS, and OS X, making it really hard to
digest. I think these are the bugs they have discovered, but if anyone could
clarify that would be great:

• The keychain can be compromised by a malicious app that plants poisoned
entries for other apps; when those apps later store data that should be
private, it ends up readable by the malicious app.

• A malicious app can contain helper apps, and Apple fails to ensure that the
helper app has a unique bundle ID, giving it access to another app's sandbox.

• WebSockets are unauthenticated. This seems to be by design rather than a bug
though, and applications would presumably authenticate clients themselves, or
am I missing something?

• URL schemes are unauthenticated, again as far as I can tell by design, and
not a channel where you'd normally send sensitive data.

~~~
kinofcain
URL schemes are unauthenticated, but the main problem is that duplicates are
resolved by the host OS at install time: the first-installed app wins on OS X,
and the last-installed app wins on iOS.
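That install-time resolution can be sketched with a toy registry (a
hypothetical model; the real Launch Services database is opaque):

```python
def resolve(installs, policy):
    """Map each URL scheme to an app given an install order.
    installs: list of (app, scheme) pairs in installation order.
    policy: 'first_wins' (OS X) or 'last_wins' (iOS)."""
    registry = {}
    for app, scheme in installs:
        if policy == "first_wins":
            registry.setdefault(scheme, app)   # keep the original handler
        else:
            registry[scheme] = app             # a later install overrides
    return registry

installs = [("Dropbox", "db-key123"), ("Malware", "db-key123")]

# On iOS the later install hijacks the scheme; on OS X it does not.
assert resolve(installs, "last_wins")["db-key123"] == "Malware"
assert resolve(installs, "first_wins")["db-key123"] == "Dropbox"
```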

~~~
MagerValp
Both of which seem like a valid strategy to me. The OS has never guaranteed
that a particular URL scheme goes to a particular app, and developers are
wrong to assume that it goes to their app and not someone else's. I realize
that there aren't that many alternatives on iOS, but a sharing extension at
least gives the user complete control. On OS X there are a wealth of different
IPC options, including sockets and mach based services.

It would of course be nice if Apple provided a nice GUI to control the Launch
Services database, but since they haven't you have to assume that users are
neither in control nor aware of which app handles which URL scheme.

~~~
kinofcain
Indeed. The insecurity of the scheme handling is not a new development and
should be better known.

------
therealmarv
So Apple was aware of this for 6 months and has done NOTHING, not even
communicated?! How seriously do they take security and fixing it (at least
within 6 months)?

~~~
userbinator
It's not remotely exploitable --- it _requires installing a malicious app_ ;
that makes it far less severe than something that could be done through e.g.
just visiting a webpage.

~~~
DennisP
Yes, but the researchers submitted an app with the exploit to the app store,
and it was accepted.

~~~
mmcconnell1618
Good thing there are 1,500,000 apps in the store and getting visibility is the
biggest challenge for developers/publishers :-)

~~~
nadams
"MoneyMakingApp5000 - make money from home"

Post some screenshots of the app with screenshots of some random Paypal
transfers and I don't think that you will have a problem getting people to
find/download your app.

~~~
webjprgm
Downloading and running it once would set up the exploit but not complete it,
IIRC. You need to go back to the target app and re-enter credentials then run
the exploit app a second time. So a broken app that a user would run once and
then delete is no good.

A standard Trojan game/utility would work fine even if only a small number of
people run it.

------
SlashmanX
The paper in question:
[http://arxiv.org/abs/1505.06836](http://arxiv.org/abs/1505.06836)

~~~
PhantomGremlin
Thank you for that "normal" link.

The Register instead links to something on Google Drive. Since I don't enable
JS, Google presents me with an unusable screen of 25 https: links.

------
StavrosK
"Boffins"? Isn't that rather dismissive, as in "oh, look at what those crazy
boffins cooked up now!"?

~~~
zelos
It's tongue-in-cheek. The Register's style is basically a joke on British
tabloid styles.

~~~
jahnu
I don't think it's a joke. I think they actively approve of tabloid style
journalism.

See Orlowski's rants on climate change for example.

------
dodongogo
It sounds like a temporary fix for the keychain hack on iOS would be to never
use the SecItemUpdate keychain API, and to always use SecItemDelete followed
by SecItemAdd with the updated data. According to
[http://opensource.apple.com/source/Security/Security-55471/s...](http://opensource.apple.com/source/Security/Security-55471/sec/Security/SecItem.h):

> @constant kSecAttrAccessGroup ...Unless a specific access group is provided
> as the value of kSecAttrAccessGroup when SecItemAdd is called, new items are
> created in the application's default access group.

If I understand this correctly, that would ensure that whenever an existing
entry is updated in an app, the 'hack' app would again be restricted from
accessing the entry's data. It could still clear the data, but it wouldn't be
able to access the contents.

The paper seems to note this as well:

> It turns out that all of [the apps] can be easily attacked except todo Cloud
> and Contacts Sync For Google Gmail, which delete their current keychain
> items and create new ones before updating their data. Note that this
> practice (deleting an existing item) is actually discouraged by Apple, which
> suggests to modify the item instead [9].
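A minimal sketch of why delete-then-add helps (toy Python, not the real
Security framework; it assumes, per the header comment quoted above, that a
freshly added item is restricted to its creator, while an in-place update
preserves the existing ACL):

```python
# Updating an item in place preserves its ACL, including any attacker-added
# entries; delete-then-add creates a fresh item restricted to the caller.
def update_in_place(item, new_secret):
    item["secret"] = new_secret          # ACL untouched
    return item

def delete_then_add(caller, new_secret):
    return {"acl": {caller}, "secret": new_secret}

# An item the malware recreated with itself on the ACL:
poisoned = {"acl": {"bank", "malware"}, "secret": None}

# SecItemUpdate-style: the malware keeps access to the new secret.
assert "malware" in update_in_place(poisoned, "s3cret")["acl"]

# SecItemDelete + SecItemAdd-style: the ACL is reset to the updater alone.
assert delete_then_add("bank", "s3cret")["acl"] == {"bank"}
```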

~~~
eridius
The keychain hack can't apply to iOS. Each app (well, app group, but an app
group can only contain apps by a single developer) gets an independent
keychain.

~~~
dodongogo
Ah, my mistake. Forgot that the access groups are scoped by bundle id on iOS.
So yeah, this would only apply to OSX.

------
drtse4
From the paper:

> Since the issues may not be easily fixed, we built a simple program that
> detects exploit attempts on OS X, helping protect vulnerable apps before the
> problems can be fully addressed.

I'm wondering if the tool is publicly accessible; I couldn't find any
reference to it.

~~~
jdc0589
was wondering the same thing... finding items with more than one application
in the ACL is the important part, so this is a start:

        security dump-keychain -a > keychain.txt && egrep -n "applications \(([2-9])\)" keychain.txt

Then just look at the items that contain those line numbers and see what's up.
You will have some show up on an unaffected system. This is what my output
looks like:
[http://puu.sh/ishaP/675695b11e.png](http://puu.sh/ishaP/675695b11e.png)

* disclaimer: that egrep regex is shit.

~~~
c0wb0yc0d3r
Can you skip the whole writing to file bit, and pipe straight to egrep?

~~~
jdc0589
sure. but then you wouldn't have a file to go investigate the matched line
numbers in.

~~~
lindig

        security dump-keychain -a | egrep -A 9 -n "applications \(([2-9])\)" 
    

This gives you the relevant lines right away: 9 lines following the match (you
might want to use fewer).
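For something slightly less fragile than a one-off regex, here's a short
Python sketch along the same lines (the "applications (N)" line format is
assumed from the egrep commands above):

```python
import re

# Lines like "applications (3)" in `security dump-keychain -a` output give
# the number of applications on an item's ACL; more than one is worth a look.
ACL_RE = re.compile(r"applications \((\d+)\)")

def suspicious_lines(dump_text):
    """Return (line_number, line) for every ACL listing more than one app."""
    hits = []
    for lineno, line in enumerate(dump_text.splitlines(), 1):
        match = ACL_RE.search(line)
        if match and int(match.group(1)) > 1:
            hits.append((lineno, line.strip()))
    return hits

# Example on a synthetic snippet of dump output:
sample = "keychain item A\napplications (1)\nkeychain item B\napplications (3)\n"
assert suspicious_lines(sample) == [(4, "applications (3)")]
```

To use it for real, feed the actual output of `security dump-keychain -a`
(read from stdin or a file) into `suspicious_lines`, then inspect the reported
line numbers in the dump.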

------
0x0
Anyone have any more information about (or even a source for) "Google's
Chromium security team was more responsive and removed Keychain integration
for Chrome noting that it could likely not be solved at the application
level"?

Is this going to happen in an upcoming stable release? What is it being
replaced with?

~~~
anon1385
That does seem a bit strange. The Chrome devs have long taken the position
that there's no point trying to encrypt local copies of passwords. You can see
a very long discussion about it here where Chrome devs argue that it's
pointless:
[https://news.ycombinator.com/item?id=6165708](https://news.ycombinator.com/item?id=6165708)

The comments by the Chrome security tech lead would suggest that they wouldn't
view this keychain issue as a security flaw.

So I don't see why they would bother removing keychain integration. What is
the replacement going to be? A password file encrypted with the password
"peanuts"?[1]

[1]
[https://news.ycombinator.com/item?id=9714770](https://news.ycombinator.com/item?id=9714770)

------
coldcode
The first defense they can perform is to change the automatic checks in the
App Store review process to identify the attack in a malicious app and stop it
from being approved. This could be fairly easy; of course, Apple doesn't tell
anyone what they do in this process, so we have no way to verify it. You still
have to work out how the attack could be hidden, but since it uses known API
calls in an uncommon way, I think this is quite doable.

The second defense is more complex: changing the way the Keychain API works
without breaking every app out there. Not knowing much about how this is
implemented, it might take a lot of testing to verify a fix without breaking
apps.

The last thing they can do is build a verified system tool that checks the
existing keychain for incorrect ACL usage; you can't hide the hack from the
system. This way Apple could fix the ACLs to not allow incorrect usage and
not give access where it doesn't belong. I think this is fairly easy to do
since it will break very little.

This is why building security is hard no matter who you are and everyone gets
it wrong sometimes. At least Apple has the ability to readily (except for
point 2) repair the problem, unlike Samsung having to have 800 million phones
somehow patched by third parties to fix the keyboard hack.

------
w8rbt
The fundamental design flaw of all of these compromised password managers,
keychains, etc. is that they keep state in a file. That causes all sorts of
problems (syncing among devices, file corruption, unauthorized access,
tampering, backups, etc.).

_Edit - I seldom downvote others and the few times I do, I comment as to
why I think the post was inappropriate. What is inappropriate about my post?

Few people stop and think about the burden of keeping state and the problems
that introduces with password storage. Many even compound the problems by
keeping state in the Cloud (to solve device syncing issues). It's worth
discussing. There are other ways._

~~~
tempodox
No idea. I can't even find a single obvious interpretation of what a downvote
is supposed to convey. I bet it's not the same thing twice for the same person
on the same day. If it were up to me, I would remove the power-to-downvote
from the API. If I really do oppose a comment, I should give a reason rather
than just a “bit with a negative sign”.

~~~
reitanqild
One reason is people using HN on mobile devices. This used to be my "reading
account" so I couldn't downvote accidentally but now I have crossed the
barrier with this one as well.

Not saying that is what happened here though (because I don't know).

~~~
walterbell
Yes, we need the upvote and downvote icons to be moved to opposite ends of the
comment title, which would eliminate the 1mm targeting challenge.

------
jpmoral
Okay, so if a user only retrieves Keychain items manually (unlock keychain,
view password, type/paste into app/website) and never allows apps to access
it, is s/he safe?

------
jusio
Oh God. After reading the paper I wouldn't expect a fix from Apple anytime
soon :(

~~~
watmough
Right, but it's pretty clear from the paper that one major step forward would
be for apps to check whether their keychain has been compromised.

This is something that apps can go forward with, but it doesn't close the door
if it was already opened.

Overall, this is just a brutal suite of bugs though, and a great paper.

------
gchp
Well, shit. Finally I feel justified in never (read: rarely) using the "Save
password" feature in my web browser.

Does anyone know if Apple has done anything towards resolving this in the 6
month window they requested? Slightly worrying that this has been published
without a fix from Apple. I don't download apps very often on my Mac, but I
certainly won't now until I know this has been resolved. Annoying.

~~~
hhsnopek
You know this was bound to happen sooner or later. That goes for any
encryption technology. LastPass was recently "hacked" as well. You can't
trust any crypto tech ;)

~~~
cinquemb
In the age of crypto-peddling-as-a-service by large companies, small
companies, and individuals alike, sold as an end-all-be-all for general opsec
while ignoring the tradeoffs inherent in any decision making (it is so common
to dismiss such elephants in the room with one wave of the "trust the math"
wand), it might just be more socially acceptable to feign surprise :P

~~~
bennyg
Which is why "open source all the things" is the way to go for trusting crypto
implementations - or at least it's step 1.

------
ikeboy
I wonder if the new "Rootless" feature prevents this, and if it was developed
because of this.

~~~
smackfu
Rootless is more about securing the OS files and processes from malware.

~~~
ikeboy
Isn't this similar? Rootless is strengthening the sandbox against malware.

Anyone on the new Mac version want to test this?

~~~
RexRollman
I don't think so, especially since the Keychain being affected by this is a
user file. Rootless protects system files.

------
glasz
they've known for half a year and still, just 2 weeks back, cook is cooking
things up about their stance on encryption and privacy [0]. you've gotta love
the hypocrisy on every side of the discussion. it's so hilarious that it makes
me wanna do harm to certain people.

[0] [http://9to5mac.com/2015/06/02/tim-cook-privacy-encryption/](http://9to5mac.com/2015/06/02/tim-cook-privacy-encryption/)

------
vbezhenar
Don't run untrusted apps outside of virtual machines. Too bad the web taught
us to trust code we shouldn't trust. NoScript must be integrated into every
browser and enabled by default. Sandboxing was and will be broken.

------
wahsd
Bravo, Apple. Humongous security hole and you don't address it in six months?

I hope it's being readied for inclusion in 8.4. We all know how it bruises
Apple's ego to have to patch stuff without acting like it's a feature
enhancement.

------
josteink
Once again goes to show that Apple is mostly interested in the security of its
iStore, platform lock down and DRM.

I'm not exactly shocked.

Just for kicks... Does anyone remember the I'm a PC ads, where macs were
magically "secure", couldn't get viruses or hacked or anything? Turns out,
with marketshare they can! Just like Windows. Strange thing eh?

~~~
romanovcode
>Just like Windows. Strange thing eh?

Not strange if you grasp the fact that malware is just a program that has
elevated access.

For me it was strange how Apple could market their system as virus-free. Now
that's ridiculous.

~~~
josteink
Yes, to any techie the lie is obvious.

I just wonder how the vast userbase of uneducated people (seniors, teen
bloggers, and, ironically, educational institutions) who moved over to Macs
because they bought the lie will feel when they too later discover that the
promises were a lie.

Because unlike Microsoft, Apple doesn't have a battle hardened OS where
security has been worked on systematically, for over a decade.

And I could have told you the same story years ago. I don't need blatantly
obvious bugs like this one to back that claim.

~~~
Bud
There was no lie. It was true then, and is still clearly and obviously true
now, that Mac users have a small fraction of the malware issues that Windows
users have. The difference between iOS and Android is even more stark.

You're also hilariously wrong about Microsoft having a supposedly "battle-
hardened" OS where security has been worked on systematically. OS X is based
on BSD Unix, where security has been worked on since the 1970s, before
Microsoft even existed. OS X itself is now 15 years old.

I administer hundreds of Macs and PCs. I can objectively state that the PCs
have about 10-50x as many issues with malware as the Macs have, and those
issues are more severe and affect users and admins more. Everyone who manages
both Macs and PCs in the enterprise is well aware of this.

