Security on iOS comes from the sandbox that all apps run in. Apple's review process is really quick and adds approximately nothing in terms of security. Apps running in the sandbox should be unable to do harm no matter how evil they are, and if they can break out, the proper solution is to fix the sandbox.
That seems like a strange way to put it. Apps can still do all kinds of nasty things inside their sandbox: for example calling private APIs (which are supposed to be caught in the review process), but also less obvious things like hot-patching strings (e.g. URLs) in the binary, sabotaging the device by deliberately hogging the CPU, playing sounds through the speaker, popping up fake prompts for phishing, etc.
I agree that the review process itself does little for security, but surely you don't want to allow applications to pull in unchecked native code over the network, right?
What's so bad about calling private APIs? I get why Apple doesn't want it, but as a user I don't care.
The sandbox prevents apps from pulling in native code over the network. The OS won't allow pages to be marked as executable unless the code is signed by Apple.
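To make that concrete, here's a rough sketch of what an app would run into if it tried (this is not Apple's enforcement code, just the syscalls involved; the kernel's code-signing checks do the real work):

    import Darwin

    // Grab an anonymous, writable page of memory.
    let pageSize = sysconf(_SC_PAGESIZE)
    let page = mmap(nil, pageSize, PROT_READ | PROT_WRITE,
                    MAP_ANON | MAP_PRIVATE, -1, 0)

    // ...imagine downloaded machine code were copied into `page` here...

    // Now try to make it executable. On a stock iOS device this is refused,
    // because the bytes aren't backed by an Apple-signed binary.
    if mprotect(page, pageSize, PROT_READ | PROT_EXEC) != 0 {
        perror("mprotect")   // the expected outcome for unsigned code
    }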
If a private API is a privacy or security concern then the sandbox needs to block it.
Apple blocks private APIs because it doesn't want to commit to keeping them compatible across OS releases, and doesn't want third-party apps to break when those APIs change.
Edit: I'm starting to suspect that people don't know what "private API" means, so I want to lay it out real quick. Apple ships a bunch of dynamic libraries with the OS that apps can link against and call into. Those libraries contain functions, global variables, classes, methods, etc. Some of those are published and documented and are intended for third parties to use. Some are unpublished, undocumented, and intended only for internal use.
The difference is documentation and support. The machine doesn't know or care what's public and what's private. There's no security boundary between the two. Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it. The only way Apple can check for private API usage is to have a big list of all the private APIs in their libraries and scan the app looking for calls to them. This is fundamentally impossible to do with certainty, because there's an unlimited number of ways to obfuscate such calls.
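For instance, something as trivial as assembling the name at runtime defeats a static scan. The selector below is made up purely to show the shape of the trick:

    import UIKit

    // The string "_setSomePrivateFlag:" never appears in the binary, so a
    // scanner matching against a list of private selectors has nothing to find.
    let pieces = ["_set", "Some", "Private", "Flag:"]
    let selector = NSSelectorFromString(pieces.joined())

    let view = UIView()
    if view.responds(to: selector) {
        _ = view.perform(selector, with: true)
    }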
Functionality that needs to be restricted due to privacy or security concerns has to be implemented in a completely separate process with requests from apps being made over some IPC mechanism. This is the only way to reliably gate access.
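A concrete illustration of that pattern from later iOS releases is the out-of-process photo picker: the picker UI runs in a separate system process, and the app only ever receives what the user explicitly handed over. A rough sketch (the class name is just for the example):

    import PhotosUI
    import UIKit

    final class AvatarPicker: UIViewController, PHPickerViewControllerDelegate {
        func pickPhoto() {
            var config = PHPickerConfiguration()
            config.selectionLimit = 1
            let picker = PHPickerViewController(configuration: config)
            picker.delegate = self
            // The picker UI runs in a separate system process; this app holds
            // no photo-library permission and never needs one.
            present(picker, animated: true)
        }

        func picker(_ picker: PHPickerViewController,
                    didFinishPicking results: [PHPickerResult]) {
            picker.dismiss(animated: true)
            // Only what the user chose crosses the process boundary.
            results.first?.itemProvider.loadObject(ofClass: UIImage.self) { image, _ in
                _ = image   // use the returned image here
            }
        }
    }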
Apple's prohibition against using private APIs is like an "employees only" sign on an unlocked door in a store. It serves a purpose, but that purpose is to help keep well-meaning but clueless customers away from an area where they might get confused, or lost, or hurt. It won't do anything for your store's security.
Mike, this line is completely and 100% inaccurate:
"Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it."
There are a million things under the sun that private APIs give access to that aren't possible with public APIs alone, no matter how good the developer is. Prime example: "UIGetScreenImage()". That function lets you take a screenshot of the device's entire screen: your app, someone else's app, the iOS home screen. That's a pretty big security hole, is it not?
There are countless examples just like that one hidden inside the private API bubble: things the OS needs to function (although the OS may not need that particular example anymore) but that could cause massive security issues.
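And that's not something you could ever reimplement with public APIs, yet calling it was as easy as looking up the symbol. Roughly like this (the symbol has long since been removed/blocked; shown purely to illustrate):

    import Darwin
    import CoreGraphics

    typealias ScreenGrabFn = @convention(c) () -> UnsafeRawPointer?

    // Look the symbol up by name at runtime; no header or documentation needed.
    if let handle = dlopen(nil, RTLD_NOW),
       let sym = dlsym(handle, "UIGetScreenImage") {
        let grab = unsafeBitCast(sym, to: ScreenGrabFn.self)
        if let raw = grab() {
            // On the old iOS versions where this worked, the image covered the
            // whole display: this app, other apps, the home screen.
            let wholeScreen = Unmanaged<CGImage>.fromOpaque(raw).takeRetainedValue()
            _ = wholeScreen
        }
    }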
It could be argued that because a private API has no guarantees against change, using one could lead to apps breaking more frequently after OS updates, which would annoy me as a user (whether or not I knew what was causing the crash).
I wasn't even talking about breaking out of the sandbox. Also, at the most basic level, simply having a trusted, signed delivery process for binaries does add some security. Nobody here is saying it will prevent a compromise, but since when is security viewed like this? It's about layers of protection.
Reminds me of people fussing about getting root on a workstation. Simply getting access to the user's account, without root, will be hugely damaging. Plus you'll likely have root in no time after you get that user account.
And the review process isn't even entirely about stopping the attack. If the malicious code was in the app when it was submitted for review, you at least have a trail and can go back later to see how it happened.
If the attack happened with this specific app framework, the bad code could be dynamically loaded into memory and then purged, so you'd never know what happened.
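That's the worrying part: the payload is a script that only ever exists in memory. Roughly how those hot-patching frameworks operate (the URL is made up, and JavaScriptCore here stands in for the framework's own plumbing):

    import Foundation
    import JavaScriptCore

    let context: JSContext = JSContext()
    if let url = URL(string: "https://example.com/patch.js"),
       let script = try? String(contentsOf: url, encoding: .utf8) {
        // The logic in patch.js was never in the reviewed binary, and once the
        // context is released there's no on-disk trace of what it did.
        context.evaluateScript(script)
    }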
If you don't break out of the sandbox then you can't access anything interesting.
Traditional UNIXoid workstations are quite different. A program running under your user can do anything your user can do. It can access and delete all of your data.
An iOS app can't access or delete any of your data by default. Everything requires explicit permissions granted by the user, and even those are pretty limited. As long as the sandbox functions correctly, a malicious app will never be able to, say, read my financials spreadsheet out of Numbers, or my private texts out of Messages.
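And those grants really are explicit and narrow. Reading the user's contacts, for example, looks like this (a minimal sketch):

    import Contacts

    let store = CNContactStore()
    store.requestAccess(for: .contacts) { granted, _ in
        guard granted else {
            // Without the user's explicit yes, the sandbox hands this app nothing.
            return
        }
        // Even with a yes, the grant covers the contacts database and nothing
        // else: no Messages history, no other apps' documents.
    }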
I've yet to see any evidence that this process adds security. Given that the review process is extremely shallow (some automated tools are run to scan for private API calls and such, and two non-experts spend a total of about ten minutes with your app), there's no hope of any sort of useful security audit being done.