Boffins reveal password-killer 0days for iOS and OS X (theregister.co.uk)
501 points by moe on June 17, 2015 | 139 comments



It's not bad work but it looks like The Register has hyped it much too far. Breakdown:

* OSX (but not iOS) apps can delete (but not read) arbitrary Keychain entries and create new ones for arbitrary applications. The creator controls the ACL. A malicious app could delete another app's Keychain entry, recreate it with itself added to the ACL, and wait for the victim app to repopulate it.

* A malicious OSX (but not iOS) application can contain helpers registered to the bundle IDs of other applications. The app installer will add those helpers to the ACLs of those other applications (but not to the ACLs of any Apple application).

* A malicious OSX (but not iOS) application can subvert Safari extensions by installing itself and camping out on a Websockets port relied on by the extension.

* A malicious iOS application can register itself as the URL handler for a URL scheme used by another application and intercept its messages.

The headline news would have to be about iOS, because even though OSX does have a sandbox now, it's still not the expectation of anyone serious about security that the platform is airtight against malware. Compared to other things malware can likely do on OSX, these seem pretty benign. The Keychain and BID things are certainly bugs, but I can see why they aren't hair-on-fire priorities.

Unfortunately, the iOS URL thing is, I think, extraordinarily well-known, because for many years URL schemes were practically the only interesting thing security consultants could assess about iOS apps, so limited were the IPC capabilities on the platform. There are surely plenty of apps that use URLs insecurely in the manner described by this paper, but it's a little unfair to suggest that this is a new platform weakness.


Thank you for posting this. This is dramatically different from the way The Register was hyping it. It's pretty serious of course, but the iOS vulnerability is pretty minimal[1], and yet The Register made it sound like the keychain was exploited on iOS and it seems that's not the case at all.

[1] How often is that even going to be exploitable? Generally cross-app communication like that is to request info from the other app, not to send sensitive info to that app.


No, iOS applications definitely (ab)use URL schemes to send sensitive information or trigger sensitive actions. The problem isn't that they're wrong about that; it's that it's not a new concern.


The only example that really comes to mind where actual secret information is sent over a URL like that is things like Dropbox OAuth tokens, which require the requesting app to have a URL scheme db-<app_key> that it uses to send the token. But besides the fact that this isn't a new issue, it's hard to imagine this actually being a serious problem, because it's impossible for the malware app to hide the fact that it just intercepted the URL request. If I'm in some app, request access to Dropbox, it switches to the Dropbox app and asks for permission, and then it switches to some other app, it's pretty obvious that other app is behaving badly. Especially since there's no way for that other app to then hand the token back to the original app, so you can't even man-in-the-middle and hope the user isn't paying attention.


It's less common with the major well-known applications, in part because almost all of those get some kind of security assessment done, and, like I said: this was for a long time the #1 action item on any mobile app assessment.

What you have to keep in mind is that for every major app you've heard of, there are 2000+ that you've never heard of but that are important to some niche of users.


Sure, I get that. I'm still just having a hard time imagining trying to exploit this, because it's impossible to hide that you did it from the user, and it completely breaks the app you're trying to take advantage of (since you took over its URL handler, it can never receive the expected information, so you can't even try to immediately pass the data to the real app and hope the user doesn't notice).

Assuming the model where you send a request to another app, which then sends the secret data (such as an OAuth token) back to you, it also seems rather trivial to defeat (if you're the app with the secret data). Just require the sending app to synthesize a random binary string and send it to you, and use that as a one-time pad for the data. You know your own URL handler is secure (because otherwise it couldn't have been invoked), and this way you know that even if some other app intercepts the reply, they can't understand it. Granted, this doesn't work for the model where you send secret data in your initial request to the other application, but I can't even think of any examples of apps that do that.
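For what it's worth, here is a minimal Swift sketch of that nonce/one-time-pad idea; the URL scheme and helper names are invented for illustration, not taken from the paper or any real app. The pad only ever travels over the provider's scheme and the XORed token only over the requester's reply scheme, so hijacking a single scheme yields nothing useful.

    // Sketch of the nonce/one-time-pad idea described above.
    // "providerapp" and the helper names are placeholders.
    import Foundation
    import Security

    // Requesting app: generate a random pad and include it in the request URL.
    func makeTokenRequestURL() -> (url: URL, pad: Data)? {
        var pad = Data(count: 32)
        let status = pad.withUnsafeMutableBytes { buf in
            SecRandomCopyBytes(kSecRandomDefault, 32, buf.baseAddress!)
        }
        let padHex = pad.map { String(format: "%02x", $0) }.joined()
        guard status == errSecSuccess,
              let url = URL(string: "providerapp://request-token?pad=\(padHex)") else { return nil }
        return (url, pad)   // keep the pad locally so the reply can be decoded later
    }

    // Provider app: XOR the secret token with the pad before replying over its own URL scheme.
    func encryptReply(token: Data, pad: Data) -> Data {
        precondition(token.count <= pad.count)
        return Data(zip(token, pad).map { $0.0 ^ $0.1 })
    }

    // Requesting app: recover the token; an interceptor without the pad sees only noise.
    func decryptReply(_ reply: Data, pad: Data) -> Data {
        return Data(zip(reply, pad).map { $0.0 ^ $0.1 })
    }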


Why can't the "other app" just fake a Dropbox-looking display that says "Sorry, service unavailable. Click here to try again." while it does malicious stuff in the background? And then pass to the real Dropbox once it's finished being malicious?


Several reasons:

1. You can't intercept the request to Dropbox itself, because that doesn't contain any secret data. You'd need to intercept the response, and you can't fake the UI for that app because it would be immediately apparent to even the most cursory inspection that your app is not in fact the app that made the request (even if you perfectly mirrored their UI, you wouldn't have any of their data so you couldn't replicate what their app is actually showing). And anyone who looks at the app switcher would see your app there so you can't possibly hide the fact that you launched at that time.

2. Even if you could be 100% convincing, you can't actually pass the data to the real app when you're done recording it because, by virtue of overriding their URL handler, you've made it impossible to invoke the real app's URL handler. There's no way on iOS to specify which app you're trying to open a URL in. All you can do is pass the URL to the system and it will open the app it thinks is correct. Since you overrode their URL handler, if you try and call it, you'll just be calling yourself again. And since you've now made their URL handler inaccessible, you've cut off the only possible way to pass that data to the real app (even if it has other URL handlers, they won't accept the same data).

So the end result is that if you do try and take over someone else's URL handler, it'll be blindingly obvious the moment you actually intercept a request.

The only approach that even seems semi-plausible would be attempting to phish the user by presenting a login UI as if you were Dropbox and hoping they enter their username/password, but the problem with that is the entire point of calling out to a separate app is that you're already logged-in to that app, so if the user is presented with a login form at all, they should instantly be suspicious. And of course as already mentioned you can't hide the fact that you intercepted the request, so you'll be caught the first time you ever do this.

On a related note, even if you can make a perfectly convincing UI, your launch image will still give you away as being the wrong app (since the user will see your launch image as the app is launched). Unless you make your launch image look like the app you're trying to pretend to be, but then you can't possibly pretend to be a legitimate app because the user has to actually install your app to begin with, which means they'll be looking at it. If they install some random app from the app store and it has a launch image that looks like, say, Dropbox, that's a dead giveaway that it's shady. There's not really any way to disguise an app like that.


In iOS 9, apps can now register for arbitrary http URLs, but that in fact requires the app to be correctly associated with the domain, which in turn requires a lengthy process (the domain must expose via HTTPS a JSON file naming the bundle ID, signed with the TLS private key).

So I think they got it right for generic URLs, while custom URLs have been a little unfortunate from day 1; but it's hardly something new.

Btw, can anybody explain how association to arbitrary http URLs works on Android? Is there a similar validation, or can any app intercept any URL if it so wishes?


In Android it's all been rolled into the Intent/IPC system since day 1. Apps are composed of Activities, and Activities can define Intent filters. Intent filters describe what the Activity can handle, including but not limited to URLs.

Through this system, any app can register for any url (IIRC you can filter by scheme, host, and/or path). When a url is invoked, the system asks the user which app should handle it if there are several that can. You can also set a default app for the given url, etc - the whole system, though very flexible, has been widely criticized as having mediocre UX (though IMO it mostly works just fine).

In Android M (unreleased), they've added a feature similar to the one in iOS 9 whereby you can ensure that URLs you define and own are always handled by your app. Essentially you host a JSON file at your domain, served over HTTPS, that specifies the SHA256 fingerprint of your app's signing cert. Your app defines the URL filter similar to before and the system makes sure that you match the fingerprint.

Android being Android, you can still tweak the default handling of intents even if apps do this, but it's pretty hidden.


Not sure the above is entirely complete (though perhaps accurate), at least given what they are claiming. They claim in the introduction that the WebSocket attack can work on Windows and iOS, but they don't seem to explain how in section 3.3. My guess is they're saying that an app can create a background server on an iOS device and you can't control which apps connect to it. Not particularly insightful.

I'm skeptical as to their chops given the general disorganization of their paper and their overhyping of the scheme issue, which is pretty basic / well known. And, in fact, it's not that hard to authenticate an incoming scheme on iOS via app prefix. Just have to dig in the docs a bit.
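For the record, this is roughly what that check looks like, using the sourceApplication bundle ID handed to the (pre-iOS 9 style, now deprecated) openURL delegate callback; the "com.example." prefix is just a placeholder for your own bundle ID prefix.

    // Sketch of validating the caller of a custom URL scheme on iOS via its
    // bundle ID. The prefix is a placeholder; this is an illustration, not a
    // recommendation of the deprecated callback itself.
    import UIKit

    class AppDelegate: UIResponder, UIApplicationDelegate {
        func application(_ application: UIApplication,
                         open url: URL,
                         sourceApplication: String?,
                         annotation: Any) -> Bool {
            // Only accept URLs sent by our own suite of apps.
            guard let caller = sourceApplication, caller.hasPrefix("com.example.") else {
                return false   // reject URLs from unknown apps
            }
            handleTrustedURL(url)
            return true
        }

        private func handleTrustedURL(_ url: URL) {
            // ... act on the URL only after the caller check above ...
        }
    }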


Their Websockets attack appears to be premised on a Safari extension that assumes it can trust a Websockets endpoint bound to localhost.


https://blog.agilebits.com/2015/06/17/1password-inter-proces...

> This particular attack is worrisome because it doesn’t require “admin” or “root” access, unlike other attacks that depend on the presence of malicious software on the system.

It's a weakness.


It clearly is a weakness on OSX (apparently much less so on iOS). The issue is, it's not a new weakness.


Quick summary of the keychain "crack":

Keychain items have access control lists, where they can whitelist applications, usually only themselves. If my banking app creates a keychain item, malware will not have access. But malware can delete and recreate keychain items, and add both itself and the banking app to the ACL. Next time the banking app needs credentials, it will ask me to reenter them, and then store them in the keychain item created by the malware.
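For concreteness, here is a rough Swift sketch of the mechanism that makes this possible on OS X: whoever creates an item chooses its ACL, so an item can be created with someone else's app already whitelisted. The service/account names and the app path are placeholders, and the SecAccess/SecTrustedApplication calls are the old (since-deprecated) OS X ACL APIs, so treat this as an illustration of "the creator controls the ACL" rather than a recipe.

    // Sketch (OS X only): the app that creates a keychain item decides which
    // applications end up on the item's ACL. Names and paths are placeholders.
    import Foundation
    import Security

    func createItemWhitelisting(otherAppPath: String, service: String, account: String) -> OSStatus {
        var selfApp: SecTrustedApplication?
        var otherApp: SecTrustedApplication?
        var access: SecAccess?
        guard SecTrustedApplicationCreateFromPath(nil, &selfApp) == errSecSuccess,            // nil = the calling app
              SecTrustedApplicationCreateFromPath(otherAppPath, &otherApp) == errSecSuccess,  // some other app on disk
              SecAccessCreate(service as CFString,
                              [selfApp!, otherApp!] as CFArray,                               // both apps go on the ACL
                              &access) == errSecSuccess
        else { return errSecParam }

        let attrs: [CFString: Any] = [
            kSecClass: kSecClassGenericPassword,
            kSecAttrService: service,
            kSecAttrAccount: account,
            kSecAttrAccess: access!,    // ACL chosen by the creator, not by the "owning" app
            kSecValueData: Data()       // empty for now; whoever uses the item fills it in later
        ]
        return SecItemAdd(attrs as CFDictionary, nil)
    }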


If this is how it works, you can check it on OS X by clicking each item in Keychain Access and looking at the Access Control tab or you can run `security dump-keychain -a` in terminal - it lists all keychain items and their access control lists. It's still a big and unwieldy list but not as bad as clicking each keychain item. Someone better at this stuff could probably think of a way to make it easier.

(This would only show if you've been exploited already, not that some app is capable of doing it.)


The results are pretty unwieldy if you have a lot of passwords so I used the following snippet to sift through the crud

  cat results.txt | grep -A 20 'applications' |grep \[0-9\]: | grep -v entry | grep -v description | cut -d\  -f14- |  sort | uniq
I made a couple of assumptions so YMMV:

  - up to 10 apps in the ACL ( the max I saw in mine was 7 )

  - the words 'entry' and 'description' are not present in the app name


Here is a more comprehensive one-liner to see all items with more than one application on the ACL:

    security dump-keychain -a > keychain-results.txt

    grep -P '(^(class|keychain):|\s+("(svce|acct)"|0x0000000(1|7)) <blob>="[^"]*"|^\s+applications \(([2-9]|[0-9]{2,})\):|^\s+[0-9]+:)' keychain-results.txt | \
    perl -pe 's/\n/\0/g' | perl -pe 's/\0keychain:/\nkeychain:/g' | \
    grep -aP 'applications \(([2-9]|[0-9]{2,})\):' | \
    sort -u | perl -pe 's/\0/\n/g'


>This would only show if you've been exploited already

What would indicate a compromise?


If the ACL of a keychain item contains an app that isn't supposed to have access to that keychain item then that would indicate a compromise.


Ah ok that makes sense. Thanks very much.


I'm curious: do you have any idea why, after running this command, `security` would start asking for passwords for various keychains?


It goes over all keychains, so I'm assuming (at least it was so for me) that it asked for the passwords of locked keychains.

`security` is a built-in system utility, basically Keychain Access for terminal, the same way that `diskutil` is Disk Utility but in the terminal.


Just to note, this "crack" can be worked around by the banking app. If it finds a keychain item that doesn't have a password, instead of updating the item, it could just delete it and re-create it. That would reset the ACL back to the expected value. (Or it could clear out the ACL, but it's cleaner just to delete/recreate).

That said, this doesn't fix the case tptacek listed where a malicious app could include helpers registered to the bundle ID of another app and those helpers would be automatically added to the ACL of those other apps. If that's an accurate description it sounds like something only Apple can fix.


How can malware delete a keychain item if it is not on the ACL?


Apparently everyone can. That's the bug, or at least a big part of it.

There is precedent for this, for example in the Unix filesystem permissions: to be able to delete a file, you need write access to its parent directory; the permissions of the file itself are not taken into account.


>>There is precedent for this, for example in the Unix filesystem permissions: to be able to delete a file, you need write access to its parent directory; the permissions of the file itself are not taken into account.

My mind has been blown. I can delete this .txt as the user "pikachu". I had no idea.

  pikachu@POKEMONGYM ~/tmp3/pikachuFolder $ ls -la
  total 8
  drwxrwxr-x 2 pikachu pikachu 4096 Jun 17 08:48 .
  drwxr-xr-x 3 pikachu pikachu 4096 Jun 17 08:48 ..
  -rw-r--r-- 1 root    root       0 Jun 17 08:48 gogogadgetarms.txt
  pikachu@POKEMONGYM ~/tmp3/pikachuFolder $
  pikachu@POKEMONGYM ~/tmp3/pikachuFolder $ rm gogogadgetarms.txt 
  rm: remove write-protected regular empty file ‘gogogadgetarms.txt’? y
  pikachu@POKEMONGYM ~/tmp3/pikachuFolder $ ls -la
  total 8
  drwxrwxr-x 2 pikachu pikachu 4096 Jun 17 08:52 .
  drwxr-xr-x 3 pikachu pikachu 4096 Jun 17 08:48 ..
  pikachu@POKEMONGYM ~/tmp3/pikachuFolder $


The sticky bit, often applied to directories like /tmp, fixes this: https://en.wikipedia.org/wiki/Sticky_bit .


The thinking is that if you own the directory you own the ability to remove things from it.

It gets reported as a bug quite often though.


Well, the UNIX case is not a bug, it's by design. A directory is a list of files, so you remove a file by removing it from that list, the directory. The permissions of the file itself are for access/modifying of the contents of the file itself.


That's technically correct but it violates the principle of least surprise. Most users find the behaviour unexpected even though it's fairly easy to remember and real-world analogues abound (e.g. you can shred an envelope without reading the contents).

It's not clear to me that this is a common-enough situation to warrant changing anything so the best answer is probably to make sure documentation is easily found.


It seems like under these conditions--

  1. The user has write permission for the directory.
  2. The user does not have write permission for the file.
  3. The file has only one hard link.
  4. The user deletes the file.
--that the last hard link should be moved to the owner's home directory (~owner), rather than deleted. Otherwise, I don't see a problem.


Again, I'm not sure this happens very often – most projects have tended to focus on avoiding cases where you have users crossing security boundaries like this. At the very least, setting the sticky bit seems to avoid a great deal of potential confusion.

I haven't thought about this in depth but here are the edge cases which come to mind on first thought with your proposal:

1. What happens if that file is on a different filesystem than the owner's home directory? 1a. What if the home directory is on a filesystem which doesn't have room? 1b. What happens if that would cause the user to exceed a size or inode quota?

2. How does this interact with e.g. lockd in an NFS environment?

3. How does it get a filename / how do we avoid security issues? For example, if I have write access to a directory I can rename the file to, say, .bashrc before I call unlink(), so this would probably require generating some sort of reasonably informative name which can't collide ("deleted-by-<uid>-<hostname>-<inode>-<SHA-256>-<sequence>"?).

You could probably handle some of this by following the tradition of `lost+found` and moving it to a directory on the same filesystem which has the sticky bit set.


You put a bit more effort into this than I did. I only said the first thing that came to mind that would prevent a user without write permission from deleting the file, which is a near-equivalent to blindly overwriting the contents of a file with nothing.


That's a major security bug waiting to happen. You're effectively granting everyone the right to create files in anyone's home directory.


But you have to be superuser to chown.


That's not the only way to get an arbitrary file owned by someone else.


If there is a way that does not require superuser access at any point in the process, I'm curious. Please tell.


Malware apparently can also create items in advance as a sort-of honeypot for passwords.

For me, that is worse: when the "next time the banking app needs credentials, it will ask me to reenter them" scenario occurred, my reaction would be "I do not know my password; it's in the keychain" (for most services).


I'd imagine that unless you suspect an attack, at that point you'd reset your password and enter it into the keychain, giving them your new password.


First one? Probably. If it happened twice or thrice within a few hours, I would start being suspicious and reach for a backup. But maybe, I'm too paranoid.


Is there a known exploit in the wild?

Mac Outlook has (for unknown reasons) been asking me for my domain password to store into the keychain this last week, which surprised me at the time.


Mac Outlook has a tendency to ask for the password in that manner if there are network changes or connection problems when it is trying to connect/reconnect/etc.


That paper is rife with confusing or just plain wrong terminology, and the discussion jumps between Android, iOS, and OS X, making it really hard to digest. I think these are the bugs they have discovered, but if anyone could clarify that would be great:

• The keychain can be compromised by a malicious app that plants poisoned entries for other apps; when those apps later store entries that should be private, they end up readable by the malicious app.

• A malicious app can contain helper apps, and Apple fails to ensure that the helper app has a unique bundle ID, giving it access to another app's sandbox.

• WebSockets are unauthenticated. This seems to be by design rather than a bug though, and applications would presumably authenticate clients themselves, or am I missing something?

• URL schemes are unauthenticated, again as far as I can tell by design, and not a channel where you'd normally send sensitive data.


The URL Schemes are unauthenticated, but the main problem is that duplicates are resolved by the host OS at install time, either as first-installed app wins (OSX) or last installed app wins (iOS).


Both of which seem like a valid strategy to me. The OS has never guaranteed that a particular URL scheme goes to a particular app, and developers are wrong to assume that it goes to their app and not someone else's. I realize that there aren't that many alternatives on iOS, but a sharing extension at least gives the user complete control. On OS X there are a wealth of different IPC options, including sockets and mach based services.

It would of course be nice if Apple provided a nice GUI to control the Launch Services database, but since they haven't you have to assume that users are neither in control nor aware of which app handles which URL scheme.


Indeed. The insecurity of the scheme handling is not a new development and should be better known.


So Apple was aware of this for 6 months and are doing NOTHING, not even communicating?! How seriously do they take security and fixing it (at least within 6 months)?


>"and are doing NOTHING"

Citation needed. An article (and on the Register at that) is not any indication that "they're doing nothing"...


They do make the claim in the paper:

> We reported this vulnerability to Apple on Oct. 15, 2014, and communicated with them again in November, 2014 and early 2015. They informed us that given the nature of the problem, they need 6 months to fix it.

However, doing nothing seems to be unfair:

> We checked the most recent OS X 10.10.3 and beta version 10.10.4 and found that they attempted to address the iCloud issue using a 9-digit random number as accountName. However, the accountName attribute for other services, e.g. Gmail, are still the user’s email address. Most importantly, such protection, based upon a secret attribute name, does not work when the attacker reads the attribute names of an existing item and then deletes it to create a clone under its control, a new problem we discovered after the first keychain vulnerability report and are helping Apple fix it.

So not nothing, but their iCloud 'fix' doesn't work and there's no fix for the real issues. But the researchers say they're helping Apple fix it, so "nothing" does seem unfair.


FTA: "Apple security officers responded in emails seen by El Reg expressing understanding for the gravity of the attacks and asking for a six month extension and in February requesting an advanced copy of the research paper before it was made public."


It's not remotely exploitable --- it requires installing a malicious app; that makes it far less severe than something that could be done through e.g. just visiting a webpage.


Yes, but the researchers submitted an app with the exploit to the app store, and it was accepted.


Good thing there are 1,500,000 apps in the store and getting visibility is the biggest challenge for developers/publishers :-)


There are lots of web pages too, but an exploit that works when you visit a web page is still a pretty big deal.


"MoneyMakingApp5000 - make money from home"

Post some screenshots of the app with screenshots of some random Paypal transfers and I don't think that you will have a problem getting people to find/download your app.


Downloading and running it once would set up the exploit but not complete it, IIRC. You need to go back to the target app and re-enter credentials then run the exploit app a second time. So a broken app that a user would run once and then delete is no good.

A standard Trojan game/utility would work fine even if only a small number of people run it.


Ah yes, security by obscurity, everyone's favourite.


Security through obscurity strikes again!


Yes, they submitted one app which was accepted. That app is now gone, you can bet. Can they continue to submit apps continuously which Apple will still accept? I doubt it. The story isn't clear on this.


I bet it was submitted before they informed Apple of the problem. If you were to submit such an app today, it seems likely that they now look for stuff like this.


This is something many people forget: Apple has a challenge changing system APIs since they need to avoid breaking other apps but they can start scanning App Store submissions immediately and take action with absolute certainty that there won't be collateral damage.


It also works if the target has installed an exploitable app, or one that can be trojaned via an included component. That sort of thing has happened in the past even from "reputable" software vendors (cf. Lenovo/Superfish or the Sony rootkit), so pretending that it can't happen on the App Store seems naive to me.

No, this isn't a remote root, but it's pretty severe and not something that should be downplayed with italics.


How do you know they were aware if they're "not even communicating"?


According to the article, they were aware


This doesn't seem like something a quick patch can fix.

The section of the paper on mitigation suggests that it is non-trivial to correct without significantly re-architecting the app-OS relationship. If the paper is accurate, Apple is in a very difficult situation.


Yep, but the OP was saying that perhaps Apple was not aware of the vulnerability, whereas the article stated that the authors had communications with Apple about it.

It seems that stating just a fact from the article is not liked by some


exactly....


According to the paper they haven't done nothing; they cranked out a half-assed fix for the keychain issue specifically for iCloud.

So as good as nothing, but not nothing.


Apple's stance on security is "fix it when there's an exploit" rather than "fix it when it's broken".


I'm surprised that you're surprised! After The Fappening and the SMS of doom (amongst many others), you can't still believe that Apple gives a sh*t about security, can you?

I mean, I understand that there are still a lot of Apple fans here at HN, but Apple's security has been a laughing stock of the industry for a while now.


I'm far from an Apple apologist (I probably complain about them more than I praise them) but "failing to be flawless" about security is not the same thing as "not giving a shit" about security.

I've yet to see any massive platform that is both flexible and open to millions of users that has zero security flaws or exploits developed for them. Not saying it's a good thing, but it's certainly a common thing, even among companies that give several shits about security.


There is a difference between "has zero security flaws" and "not fixing a security flaw for 6 months after being made aware"


Don't forget 'goto fail'.


The paper in question: http://arxiv.org/abs/1505.06836


Thank you for that "normal" link.

The Register instead links to something on Google Drive. Since I don't enable JS, Google presents me with an unusable screen of 25 https: links.


"Boffins"? Isn't that rather dismissive, as in "oh, look at what those crazy boffins cooked up now!"?


It's tongue-in-cheek. The Register's style is basically a joke on British tabloid styles.


I don't think it's a joke. I think they actively approve of tabloid style journalism.

See Orlowski's rants on climate change for example.


That would also explain the blitzkrieg part.


It's the standard Reg term for scientists and academics.


It's the British IT Tabloid. Its style is not to be taken entirely seriously.

See their units convertor: http://www.theregister.co.uk/Design/page/reg-standards-conve...

A! Yahoo! Related! Story! Is! Probably! Headlined! With! Exclamation! Marks!

It uses terms like bonk-to-pay (contactless payment), mobes (mobile/cellular phones), Chocolate Factory (Google), Blighty (Britain) etc.


And don't forget any music industry articles, which spell out the name of the Recording Industry Ass. of America.


Boffin just means someone that has an esoteric or difficult-to-master skill.


It sounds like a temporary fix for the keychain hack on iOS would be to just never use the SecItemUpdate keychain API, and always use SecItemDelete followed by SecItemAdd with the updated data, which according to http://opensource.apple.com/source/Security/Security-55471/s...:

> @constant kSecAttrAccessGroup ...Unless a specific access group is provided as the value of kSecAttrAccessGroup when SecItemAdd is called, new items are created in the application's default access group.

If I understand this correctly that would always make sure that when an existing entry is updated in an app, the 'hack' app would again be restricted in being able to access the entry's data. It could still clear the data, but wouldn't be able to access the contents.

The paper seems to note this as well:

> It turns out that all of [the apps] can be easily attacked except todo Cloud and Contacts Sync For Google Gmail, which delete their current keychain items and create new ones before updating their data. Note that this practice (deleting an existing item) is actually discouraged by Apple, which suggests to modify the item instead [9].
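For reference, a minimal Swift sketch of that delete-then-recreate pattern, assuming a generic-password item; the service/account query keys are just illustrative, and on OS X the point is that the recreated item (and its ACL) is owned by this app again.

    // Sketch of the delete-then-recreate update pattern discussed above:
    // instead of SecItemUpdate, remove whatever item exists and add a fresh one.
    import Foundation
    import Security

    func storePassword(_ password: Data, service: String, account: String) -> OSStatus {
        let query: [CFString: Any] = [
            kSecClass: kSecClassGenericPassword,
            kSecAttrService: service,
            kSecAttrAccount: account
        ]

        // Drop the existing item, whoever created it
        // (errSecItemNotFound just means there was nothing to delete).
        let deleteStatus = SecItemDelete(query as CFDictionary)
        guard deleteStatus == errSecSuccess || deleteStatus == errSecItemNotFound else {
            return deleteStatus
        }

        // Re-create the item; it now lives in this app's default access group,
        // with an ACL set by this app rather than by whoever created the old item.
        var attrs = query
        attrs[kSecValueData] = password
        return SecItemAdd(attrs as CFDictionary, nil)
    }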


The keychain hack can't apply to iOS. Each app (well, app group, but an app group can only contain apps by a single developer) gets an independent keychain.


Ah, my mistake. Forgot that the access groups are scoped by bundle id on iOS. So yeah, this would only apply to OSX.


Does this apply to iOS or just OSX?


From the paper:

> Since the issues may not be easily fixed, we built a simple program that detects exploit attempts on OS X, helping protect vulnerable apps before the problems can be fully addressed.

I'm wondering if the tool is publicly accessible, couldn't find any reference to it.


Was wondering the same thing... finding items with more than one application is the important part, so this is a start: security dump-keychain -a > keychain.txt && egrep -n "applications \(([2-9])\)" keychain.txt

Then just look at the items that contain those line numbers and see what's up. You will have some show up on an unaffected system. This is what my output looks like: http://puu.sh/ishaP/675695b11e.png

* disclaimer, that egrep regex is shit.


Can you skip the whole writing to file bit, and pipe straight to egrep?


Sure, but then you wouldn't have a file to go investigate the matched line numbers in.


    security dump-keychain -a | egrep -A 9 -n "applications \(([2-9])\)" 
This gives you the relevant lines right away: 9 lines following the match (you might want to use fewer).


Anyone have any more information about (or even a source for) "Google's Chromium security team was more responsive and removed Keychain integration for Chrome noting that it could likely not be solved at the application level"?

Is this going to happen in an upcoming stable release? What is it being replaced with?


That does seem a bit strange. The Chrome devs have long taken the position that there's no point trying to encrypt local copies of passwords. You can see a very long discussion about it here where Chrome devs argue that it's pointless: https://news.ycombinator.com/item?id=6165708

The comments by the Chrome security tech lead would suggest that they wouldn't view this keychain issue as a security flaw.

So I don't see why they would bother removing keychain integration. What is the replacement going to be? A password file encrypted with the password "peanuts"?[1]

[1] https://news.ycombinator.com/item?id=9714770


Chromium security issues are not publicly visible, at least as long as the security issue remains open.


The first defense they can deploy is to change the automatic checks in the App Store review process to identify the attack in a malicious app and stop it from being approved. This could be fairly easy; of course, Apple doesn't tell anyone what they do in this process, so we have no way to verify it. You still have to consider how the attack could be hidden, but since it uses known API calls in an uncommon way, I think this is quite doable.

The second defense is more complex: changing the way the Keychain API works without breaking every app out there is much harder. Not knowing much about how this is implemented, it might take a lot of testing to verify a fix without breaking apps.

The last thing they can do is build a verified system tool that checks the existing keychain for incorrect ACL usage. You can't hide the hack from the system. This way Apple could fix the ACLs to not allow incorrect usage and not give access where it doesn't belong. I think this is fairly easy to do since it will break very little.

This is why building security is hard no matter who you are, and everyone gets it wrong sometimes. At least Apple has the ability to readily repair the problem (except for point 2), unlike Samsung, which needs 800 million phones somehow patched by third parties to fix the keyboard hack.


The fundamental design flaw of all of these compromised password managers, keychains, etc. is that they keep state in a file. That causes all sorts of problems (syncing among devices, file corruption, unauthorized access, tampering, backups, etc.).

Edit - I seldom downvote others and the few times I do, I comment as to why I think the post was inappropriate. What is inappropriate about my post?

Few people stop and think about the burden of keeping state and the problems that introduces with password storage. Many even compound the problems by keeping state in the Cloud (solve device syncing issues). It's worth discussing. There are other ways.


The fundamental problem with this idea is that the computer has to store your password somewhere, for it to be able to check it. Now, these particular passwords could be kept only on the server, but they are still kept. In a file. On a computer.

Your comment comes across as somewhat silly and akin to saying that the problem is we have to transmit the passwords (or some derivative data) across the internet to do business with someone. Yes... that is a facet of the problem. It is also a facet of how work gets done.


No idea. I can't even find a single obvious interpretation of what a downvote is supposed to convey. I bet it's not the same thing twice for the same person on the same day. If it were up to me, I would remove the power-to-downvote from the API. If I really do oppose a comment, I should give a reason rather than just a “bit with a negative sign”.


One reason is people using HN on mobile devices. This used to be my "reading account" so I couldn't downvote accidentally but now I have crossed the barrier with this one as well.

Not saying that is what happened here though (because I don't know).


Yes, we need the upvote and downvote icons to be moved to opposite ends of the comment title, which would eliminate the 1mm targeting challenge.


Where should the state be kept, on a server?


Nowhere electronically. A simple word or phrase in the user's mind would do.

Rather than being stored for later retrieval, complex passwords could be generated on-the-fly (when needed) using this word/phrase as input combined with other input such as URLs, hostnames, service names, etc.
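For illustration, here is a toy Swift sketch of that stateless approach, deriving a per-site password from a master phrase plus the hostname. It uses today's CryptoKit and an arbitrary output alphabet purely as assumptions for the example; the replies below point out the practical problems with schemes like this.

    // Toy sketch of the stateless scheme described above: derive a per-site
    // password from a memorized phrase plus the hostname, so nothing is stored.
    // The length and character set are arbitrary choices for illustration.
    import Foundation
    import CryptoKit

    func derivedPassword(masterPhrase: String, host: String, length: Int = 20) -> String {
        let key = SymmetricKey(data: Data(masterPhrase.utf8))
        let mac = HMAC<SHA256>.authenticationCode(for: Data(host.utf8), using: key)
        let alphabet = Array("ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz23456789!@#%")
        return String(Data(mac).prefix(length).map { alphabet[Int($0) % alphabet.count] })
    }

    // Same inputs always give the same password, on any device, with no stored state:
    // derivedPassword(masterPhrase: "correct horse battery staple", host: "example.com")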


That's hard to manage outside of a few specialized environments where you have total control of the user and applications. The problem which comes to mind is that there's no way to use that system when so many sites have incompatible restrictions on length and character set – i.e. I have accounts on systems where the minimum password length is longer than the maximum somewhere else – and I'd still need a manager to track my random answers to security questions.

Rather than trying to make small improvements in the inherently-limited password model we should be focusing on changing the model entirely to depend less on what a human can comfortably remember: that's things like SSO to avoid the need to manage hundreds of accounts and also things like U2F to use strong public-key encryption.


IIUC, you're talking about using what amounts to a user-managed password-generation schema. This seems like a weak (re: entropy) and ultimately very user-hostile approach to password management. Having known people who use such approaches, they still have to remember at some potentially distant time just what cocktail of fields they used in their password formula for this site. That is, the burden of password memorization remains but they don't gain much for their trouble.

I hear of such schemes from time to time but I've never seen them subjected to real attack analysis. I strongly suspect that these mostly generate surprisingly weak passwords.


What is fundamentally different between your suggestion and an approach where passwords are kept in a file that's encrypted with a password that's kept only in the user's mind?

In both cases you have a bunch of data that mostly represents the passwords, but you need one final component to actually unlock it, which is the master password. What's the advantage of your proposal?


Your method would require that passphrase every time a password is fetched or generated, similar to the UAC confirmation on Windows. Remember how annoying that was? Everyone turned it off.


The problem with this is that different passwords have different requirements, which must now be remembered and entered every time you need that particular password.

I suppose "immutable state" would be a nice compromise, but the method is fundamentally less flexible than something with state.


Okay, so if a user only retrieves Keychain items manually (unlock keychain, view password, type/paste into app/website) and never allows apps to access it, is s/he safe?


Oh God. After reading the paper I wouldn't expect a fix from Apple anytime soon :(


Right, but it's pretty clear from the paper that one major step forward would be for apps to check whether their keychain has been compromised.

This is something that apps can go forward with, but it doesn't close the door if it was already opened.

Overall, this is just a brutal suite of bugs though, and a great paper.


Well, shit. Finally I feel justified for never (read: rarely) using the "Save password" feature in my web browser.

Does anyone know if Apple have done anything towards resolving this in the 6 month window they requested? Slightly worrying now that this has been published without a fix from Apple. I don't really download apps very often on my Mac, but probably won't for sure now until I know this has been resolved. Annoying.


You know this was bound to happen sooner or later. That goes for any encryption technology. LastPass was recently "hacked" as well. You can't trust any crypto tech ;)


In the age of crypto-peddling-as-a-service by large and small companies and individuals alike, sold as an end-all-be-all to general opsec regardless of the tradeoffs inherent in any decision making (it is so common to ignore such elephants in the room with one wave of the "trust the math" wand), it might just be more socially acceptable to just feign surprise :P


Which is why "open source all the things" is the way to go for trusting crypto implementations - or at least it's step 1.


I wonder if the new "Rootless" feature prevents this, and if it was developed because of this.


Rootless is more about securing the OS files and processes from malware.


Isn't this similar? Rootless is strengthening the sandbox against malware.

Anyone on the new Mac version want to test this?


I don't think so, especially since the Keychain being affected by this is a user file. Rootless protects system files.


It seemed to me like rootless was at a lower level than the sandbox, since arbitrary apps don't run in the sandbox.


they've known for half a year and still, just 2 weeks back, cook is cooking things up about their stance on encryption and privacy [0]. you've gotta love the hypocrisy on every side of the discussion. it's so hilarious that it makes me wanna do harm to certain people.

[0] http://9to5mac.com/2015/06/02/tim-cook-privacy-encryption/


Don't run untrusted apps outside of virtual machines. Too bad the web taught us to trust code we shouldn't trust. NoScript must be integrated into every browser and enabled by default. Sandboxing was and will be broken.


Bravo, Apple. Humongous security hole and you don't address it in six months?

I hope it's being readied for inclusion in 8.4. We all know how it bruises Apple's ego to have to patch stuff without acting like it's a feature enhancement.


Once again goes to show that Apple is mostly interested in the security of its iStore, platform lock down and DRM.

I'm not exactly shocked.

Just for kicks... Does anyone remember the I'm a PC ads, where Macs were magically "secure", couldn't get viruses or hacked or anything? Turns out, with market share they can! Just like Windows. Strange thing, eh?


> Does anyone remember the I'm a PC ads, where macs were magically "secure", couldn't get viruses or hacked or anything?

Well in 2006 when those ads were first being shown, XP was still the newest version of Windows and it had no privilege separation. As viruses that patched MBR sectors or system DLLs were extremely common, MacOS X was in fact "magically" (inherently) more secure since that vector of attack on a Mac would require a password prompt to elevate the program's privileges.

From then to this day, Mac viruses have been effectively non-existent in the wild. There have been some trojans and worms, but they can't rightfully be classified as viruses (no infection of other files).


I agree. I guess the guys down here in the greyed-out area are more after some kind of justice. It's more the arrogance of Apple that people want to throw back at Apple.


> Well in 2006 when those ads were first being shown, XP was still the newest version of Windows and it had no privilege separation

That's probably false. Windows introduced this feature in Windows 2000 (from 1999) and you could define a "normal" user and a "power" user. Only when you needed to install something would you run as the power-user.

This all worked inside the same desktop session.

That maybe only 1% of the users (the "paranoid" ones) used it, doesn't mean it wasn't there.


In fairness, for most of the 2000's in my experience at least -- which included de facto admin duties for a decent-size office full of Macs -- dealing with malware and viruses really just wasn't much of a problem to worry about.

By contrast, it seemed like owning a Wintel machine pretty much guaranteed you'd have issues unless you were utterly ruthless and/or didn't have any layman users browsing the internet to worry about.

Has that in fact changed since? I am no longer as familiar with the Windows side of things as I used to be, but I do know from experience that there's a very solid reason why this stereotype took root in the first place.


>Just like Windows. Strange thing eh?

Not strange if you grasp the fact that malware is just a program that has elevated access.

For me it was strange how Apple could market their system as virus-free. Now that's ridiculous.


Yes, to any techie the lie is obvious.

I just wonder what the vast userbase of uneducated people (seniors, teen bloggers, ironically educational institutions, etc.) who moved over to Macs because they bought the lie will feel when they too later discover that the promises were a lie.

Because unlike Microsoft, Apple doesn't have a battle hardened OS where security has been worked on systematically, for over a decade.

And I could have told you the same story years ago. I don't need blatantly obvious bugs like this one to back that claim.


There was no lie. It was true then, and is still clearly and obviously true now, that Mac users have a small fraction of the malware issues that Windows users have. The difference between iOS and Android is even more stark.

You're also hilariously wrong about Microsoft having a supposedly "battle-hardened" OS where security has been worked on systematically. OS X is based on BSD Unix, where security has been worked on since the 1970s, before Microsoft even existed. OS X itself is now 15 years old.

I administer hundreds of Macs and PCs. I can objectively state that the PCs have about 10-50x as many issues with malware as the Macs have, and those issues are more severe and affect users and admins more. Everyone who manages both Macs and PCs in the enterprise is well aware of this.


> who moved over to macs because they bought the lie will feel when they too later discover that the promises were a lie.

what? I thought they bought macs so we wouldn't need to give free tech support ;)


> For me it was strange how can Apple market their system as virus-free. Now that's ridiculous.

Not really. I've been using Macs for as long as I can remember (I'm 30), and in that entire time, I've only ever actually seen 2 pieces of malware myself (I've heard of others but never actually encountered them). One of them was the rather benign Merry Xmas Hypercard trojan from way back, which doesn't actually harm your computer, all it does is search for other hypercard stacks on your computer to infect, and if you open an infected stack on December 25th it will play sound and wish you a Merry Xmas. The other one was one of those Adware apps, I forget its precise name, and I didn't actually even see that, I talked with someone else on the phone who had it and walked them through the instructions at https://support.apple.com/en-us/HT203987 for removing it.

And just to note, the latter one isn't even a virus, because it's not self-replicating (the former one technically is, because it infects other stacks on the same computer, but it was pretty darn benign and did not rely on an OS security flaw to operate).

So yeah, there exists malware for the Mac, and there's more of it now than there ever has been in the past, but it's like a completely different universe from Windows malware. You pretty much have to go out of your way to hit this on the Mac.

As an aside, the first widely-spread Mac malware I ever heard of was spread via a pirated copy of iWork '09 being distributed on BitTorrent. Someone had altered the DMG to include the virus before uploading it. It was kind of funny hearing about people being infected because you knew the only way they could have done that was by trying to pirate iWork '09 (this was the only distribution vector). And even that apparently doesn't count as a "major" security threat because the Wikipedia page[1] for Mac Defender, which is dated to May 2011, describes Mac Defender as "the first major malware threat to the Macintosh platform", even though it wasn't even a virus it was just a trojan (and FWIW it didn't even require antivirus software to remove, Apple rolled out an automatic fix themselves, although it did take them a few weeks to do so).

[1] https://en.wikipedia.org/wiki/Mac_Defender


I have never seen Apple market their system as "virus free". Can you point me to that one?


It was a ubiquitous part of Apple's marketing for many years. Their two main (almost only) arguments for overpaying for their computers were that they were "easier" and didn't get viruses.

For example, read through Adweek's summary of Apple's "Get a Mac" campaign, which Adweek calls the best ad campaign of 2000-2010:

http://www.adweek.com/adfreak/apples-get-mac-complete-campai...

"PC has caught a virus and is clearly under the weather. He warns Mac to stay away from him, citing 114,000 known viruses that infect PCs. But Mac isn't worried, as viruses don't affect him."

"Trying to hide from spyware, PC is seen wearing a trench coat, a fedora, dark glasses, and a false mustache. He offers Mac a disguise, but Mac declines, saying he doesn't have to worry about such things with OS X."

"PC appears wearing a biohazard suit to protect himself from viruses and malware. He eventually takes mask off to hear Mac better, then shrieks and puts it back on."

"She has lots of demands, but her insistence that the computer have no viruses, crashes or headaches sends all the PCs fleeing"



Which hasn't aired since 2010, and has been removed everywhere from Apple's website and YouTube channels.


> Not strange if you grasp the fact that malware is just a program that has elevated access

I would define malware more simply as a program that does something the user doesn't want.



