Rollout's mission has always been, and will always be, about helping developers create and deploy mobile apps quickly and safely.
Our current product has been a lifesaver for hundreds of apps, allowing their developers to patch bugs in live apps.
We were surprised by Apple's actions today.
From what we've been able to gather, they seem to be rejecting any app which utilizes a mechanism of live patching, not just apps using Rollout.
Rollout has always been compliant with Apple's guidelines as we've detailed in the past here:
Our SDK is installed in hundreds of live apps and our customers have fixed thousands of live bugs in their apps.
We are contacting Apple in order to get further clarification on why Rollout doesn't fall under the clause that lets developers push JS to live apps as long as it does not modify the original features and functionality of the app.
I'll post updates as I have them.
I hate the App Store review process and a lot of Apple's policies around it, and I feel for you; I totally think there should be a less onerous update/review process ... but ... you clearly and blatantly circumvented a core policy, and what happened to you was absolutely predictable.
Get your money back from the lawyer that told you Apple wouldn't shut you down. You got bad advice.
Without reading the blog, I just wanted to comment on Aereo: a lot of us think that this was the wrong decision, and not in a facetious or 'cute' way.
To quote a news report on Scalia's dissent in the case:
> In a dissent that expressed distaste for Aereo’s business model, Justice Antonin Scalia said that the service had nevertheless identified a loophole in the law. “It is not the role of this court to identify and plug loopholes,” he wrote. “It is the role of good lawyers to identify and exploit them, and the role of Congress to eliminate them if it wishes.”
Just one example -- some supporters of the law used against Aereo may have supported it only because they realized it contained said loophole. The rule would not have become law without the 'loophole'. Now, how should a court interpret those circumstances?
This is not the case in common law systems, which the US and UK have. Judges discover the law through principles and precedent. Legislation can override this, however. The US Constitution is a good example.
EULAs and TOSes are firmly in private law, and we can take England and Wales as the national setting.
Even here, "judges discover the law through principles and precedent" is inaccurate. First and foremost there is overriding statute. Where Parliament has intervened in matters of private law, Parliament wins; the parties may choose to show that Parliament's intervention does not apply for some reason (e.g. it conflicts with a subsequent intervention by Parliament, or it does not apply strictly in the matter before the court). Judges may act sua sponte, but in private law mostly leave such matters up to the parties to draw to the court's attention. Secondly, there's the plain wording of the contract. Finally there's recourse to covering case law established by higher courts and binding on the court of first instance (e.g. the county court or the High Court).
However, Parliament has caused the Civil Procedure Rules for England and Wales to bind the county courts, and CPR rule 1 is the "overriding objective", which directs judges to be just, taking into account the totality of circumstances and the behaviour of the parties, among other things. The UK Human Rights Act 1998 also requires courts to take into account the rights it brought into force, and this applies to all courts. These two features oblige judges to look past statute (or, more strictly speaking, to do a reading-down as necessary) and the specifics of a contract when assessing liability.
The private law system in England and Wales is (mostly) adversarial with the judges (mostly) paying attention to issues brought up by the parties' advocates. There are specific obligations on the court to act sua sponte as noted, and a court is free to ask questions or consider points not brought up by the parties, and it is also free not to look too deeply into matters of its own volition. This can lead to "judge roulette" to some degree, but the court-appearing legal community in England and Wales is not that large (and it's even smaller in Scotland or Northern Ireland) and good advocates and even good solicitors have some idea of what to expect from a particular judge in terms of case management.
However, I don't think many would agree that judges should "discover the law through principles and precedent". Certainly almost no senior English judges would agree with that idea; indeed, the majority are much more likely to say that the parties should draw to their attention every salient aspect of the dispute so as to reduce the court's workload (in principle to do sufficient work that few disputes really need a hearing, or a conclusion other than an out-of-court settlement between the parties).
They "discover the law" mostly by having it brought to the attention by the parties. Except in constructive litigation, the adversarial principle supposedly guarantees that one party cannot wholly misrepresent the law to the judge (unfortunately this is often not the case, especially where one party has much deeper pockets than the other, and even less the case when filings are not even dealt with because the cost of litigation exhausts one party even where that party has a good case that the non-exhausted party is misrepresenting the law).
The law stems from several sources. Depending on the area of practice of private law, statute and secondary legislation may have codified many aspects such that no other source of law is required in most cases, or (as in landlord-tenant law) statute law may be highly scattered across many Acts of Parliament, and additionally almost always engages in references to decisions by the Court of Appeal taken to resolve disputes where Parliament has not decided to provide a statutory basis for the resolution. (That's mostly because MPs are terrified of legislating in the area of property law, since it is a daunting task to consolidate hundreds of years of various sources of law into one Act; not-so-jokingly, the Great Repeal Bill proposed as part of the Brexit process will probably be less involved.)
Scalia's argument is overly idealized and focuses on the legal aspect of the system of justice, to the detriment of the justice side. A system of justice should lead to a finding of liability on wrongdoers, but should hold non-wrongdoers harmless from liability. (Unfortunately there are several aspects of the system of justice in England where that falls down, but at least there aren't many professionals in the justice system who think it should be even less just, finding non-wrongdoers unjustly liable simply because that is what the law says to do.)
But your focus on advocates bringing the law to the judges' attention seems to support the parent comment: judges "discover" the law. Certainly statute overrides all, but the point of common law is that the statute is always insufficient. It is not enough to deal with the facts of any given case. I don't know much about the UK legal system, but in the US rulings on statute become codified as "precedent". Important and relevant decisions are published, circulated, cataloged, studied and effectively become the law. Any case that is litigated starts with a series of briefs on what the parties feel is relevant case law. It also might include briefs filed by interested parties, studies of the legislative process to determine intent and so on. That is all very much in the realm of "discovery".
And yet judges talk about the "spirit of law", as distinct from the "letter of the law", all the time.
No, they don't; I've read lots of legal decisions, and that phrase or anything like it is rarely invoked. Pundits, not judges, are prone to talk about the spirit of the law as opposed to the letter; judges are more prone to talk about legislative intent (not "spirit of the law"), not distinct from the letter of the law, but as part of the analysis of which of several facially plausible meanings the letter of the law should be given in the context of the specific fact pattern presented in the case they are dealing with.
I think this has manifested at its peak with Ethereum.
>evolution of the law (which involves creating new portions of the law to cover previously created portions which are considered lacking)
And I'd argue that electronic privacy vis-a-vis wiretapping laws is creating a new portion of the law to cover previously created portions which are considered lacking. We can quibble about definitions, but that strikes me as very much in the area of "evolution".
In common law systems it is precisely their job to do so.
The reality in Common Law legal systems is nothing like this, and judge made law through interpretation and application of precedent is a very real thing, even in the USA. As a particularly blunt example, in some parts of the UK such as Scotland, the traditional common law crimes such as murder/theft etc aren't even defined in primary legislation ("laws"), and exist solely as judge made and applied creations through decades of precedent. Even where there exists primary legislation, the scope of judicial interpretation gives a great deal of freedom to judges to establish precedents that the drafters might not have foreseen or intended.
Heck even the definition of the term "Common Law" is normally interpreted to mean "Case Law" as developed by judges.
Also, as I've seen in other comments, we're going to get on the merry-go-round of defining "create law".
EDIT: Or Congress can get involved and change the law. Checks and balances.
You may disagree with this, but the fact remains that the law works like this in the US and UK and has since 1066.
In the Aereo case, the law was clear. The court was supposed to uphold the law, but didn't.
FWIW I agree that the court got it wrong, but Scalia's reasoning in supporting Aereo's position is flawed.
Scalia went too far with his dissent. Language is imprecise, and in common law it is always coupled with precedent and intent.
I don't want anyone pushing code updates to the apps that have been reviewed. Whilst that isn't foolproof, compromising the deployment mechanism with this approach is very scary.
Apple has always been adamant that they see _all_ code that goes onto devices. Live patching is so bloody obviously against their EULA.
It is surprisingly simple to make an interpreter that is "accidentally" Turing complete (IMHO this happens by accident so often that I like to say: if an interpreter is not "obviously" more restricted than a Turing machine, it probably is Turing complete).
This is not just my opinion - there are lots of pages on the internet about things that are "accidentally" Turing complete, for example:
Apple has decided that, and you're not going to get around their policies with a clever rhetorical question.
> Apple has decided that, and you're not going to get around their policies with a clever rhetorical question.
Apple cannot change mathematical facts by "decisional" rhetoric.
I just don't understand how Apple is supposed to draw a line here.
I think that is the problem with Rollout.
"Code" is another term for what you are referring to as "binary".
An example that comes to mind is a high speed image compression app for taking rapid sequences of photos. Apple bought the company or the rights so they could include it themselves.
Software is only easier to write than read if you have an idea what it's supposed to do. If you've ever googled "how do I do X?", then you likely have reverse engineered the answer you found to fit your particular use case.
In addition, and in some countries, you can't patent software (thankfully), and so innovation comes through reverse engineering naturally.
Apple has close relationships with those companies, so it's often a case of them reaching out to the developers rather than just blindly rejecting the app.
But any idea that Apple would allow them to run ruff shot over the platform and do whatever they wanted is a bit ridiculous.
"Rough shod", before we get another mondegreen propagating across the internet.
Roughshod means the horseshoes have their nails sticking out the bottom to help prevent slipping, so you can imagine trampling someone with those could be painful.
The idiom refers more to what a roughshod horse will do to a road or trail surface; the nailheads dig in and scatter surface material every which way, leaving behind a hell of a mess that'll turn to deep slush or sticky mud, depending on the temperature, with the next precipitation.
Apple uses private APIs (http://sourcedna.com/blog/20151018/ios-apps-using-private-ap...) to build some of their software and rejects apps doing the same, effectively killing competition.
But Google and Facebook use them because they want to create products that can compete with Apple's features, e.g.: https://daringfireball.net/2008/11/google_mobile_uses_privat...
Yet they are not rejected, because they are "big enough".
Apple often uses immature frameworks internally - like the extensions framework - to polish them or to dogfood them before making them official.
That being said, it's hard to argue that Apple (or Android) shouldn't be able to set boundaries on behaviors which are only allowed to be done by the OS as opposed to an app. Apple's tight control of device screen characteristics makes it pretty understandable that they don't want one app able to control how another app looks on the screen.
The optics of the f.lux situation are just really, really bad. But considering that f.lux never really charged, they have a claim to fame that few can match: creating a feature good enough that Apple incorporated it into both iOS and MacOS (now in beta).
It's really not. The argument for user freedoms is almost as old as software.
Security premise: when you are looking at Facebook, you are looking at Facebook. You are not looking at a third party app drawing over Facebook and pretending to be Facebook.
I do not see the above as a slippery slope. Phishing is a capability apps should not have. Even if they have the best of intentions.
If MS had taken a harder line then at least hundreds of millions of people would have had faster computers... And arguably safer ones. But it would be hypocrisy for MS, given they gave us IE, ActiveX, DLLs, VB macros, etc.
Most third party firewalls are just GUIs using the OS API for filtering, not parsers written in C running in the kernel.
The point is not that these APIs exist; the problem is when vendors actively block others from using them, with hacks and/or policy bans. That's extremely hypocritical and anti-competitive. I can see why unofficial APIs must be discouraged (because let's be honest, developers will bitch and moan when they change -- Microsoft in particular was strong-armed into legacy support for decades by the likes of Adobe and Symantec), but it should never be an excuse to ostracize or tilt the playing field.
It is perfectly reasonable, and so far, any other interpretation seems to be a skewed view to facilitate some sort of non-compliant piece of software.
The only reason I know they do is because some of my friends working on mobile video games regularly complain they can't get some features because they are private while google and facebook do. They analyzed some apps to try to copy said features and realized the unfairness of their situation.
Those are lunch chit-chats, not hard facts. But they have little reason to lie.
I'm not sure if Apple made it available to other companies privately though.
And then they got double screwed because the US Copyright Office declared that, no matter what the Supreme Court said, they were not a cable company and couldn't get compulsory licensing either.
As far as I can tell it's legal to run one antenna for one person, and I have absolutely no idea where the line is that you start violating copyright. I don't think the guidelines are well written.
For those curious about their justification:
It also seems clearly against their EULA, so you only have yourself to blame for this.
Apple's rules can be harsh but I would rarely call them arbitrary. There is a very good security reason for Apple's stance here. And ultimately it's their store and their rules.
I agree that the review process itself does little for security, but surely you don't want to allow applications to pull in unchecked native code over the network, right?
The sandbox prevents apps from pulling in native code over the network. The OS won't allow pages to be marked as executable unless the code is signed by Apple.
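To make that concrete, here's a minimal sketch in plain C of what happens if an app tries anyway. The exact errno is an assumption; the point is the documented behaviour that unsigned pages can't be made executable on a stock device:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    // Sketch: pretend we just "downloaded" some machine code and try to run it.
    static const unsigned char downloaded[] = { 0xC3 };  // stand-in bytes

    void try_to_run_downloaded_code(void) {
        size_t len = 4096;
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANON, -1, 0);
        if (buf == MAP_FAILED) return;
        memcpy(buf, downloaded, sizeof downloaded);

        // On a stock iOS device this fails: a page can't be flipped to
        // executable unless the code it contains is signed by Apple.
        if (mprotect(buf, len, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect");  // expected outcome for a third-party app
        }
        munmap(buf, len);
    }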
For example serial numbers, user ids, lists of installed applications, etc.
Apple blocks private APIs because they don't want to maintain their compatibility across OS releases and don't want third party apps to break when those APIs change.
Edit: I'm starting to suspect that people don't know what "private API" means, so I want to lay it out real quick. Apple ships a bunch of dynamic libraries with the OS that apps can link against and call into. Those libraries contain functions, global variables, classes, methods, etc. Some of those are published and documented and are intended for third parties to use. Some are unpublished, undocumented, and intended only for internal use.
The difference is documentation and support. The machine doesn't know or care what's public and what's private. There's no security boundary between the two. Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it. The only way Apple can check for private API usage is to have a big list of all the private APIs in their libraries and scan the app looking for calls to them. This is fundamentally impossible to do with certainty, because there's an unlimited number of ways to obfuscate such calls.
Functionality that needs to be restricted due to privacy or security concerns has to be implemented in a completely separate process with requests from apps being made over some IPC mechanism. This is the only way to reliably gate access.
Apple's prohibition against using private APIs is like an "employees only" sign on an unlocked door in a store. It serves a purpose, but that purpose is to help keep well-meaning but clueless customers away from an area where they might get confused, or lost, or hurt. It won't do anything for your store's security.
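To make the "big list plus scanner" problem concrete, here's a sketch; "_somePrivateMethod" is a hypothetical stand-in for any unpublished selector:

    #import <Foundation/Foundation.h>
    #import <objc/message.h>

    // The obvious form leaves the selector literal in the binary, where a
    // scanner can flag it:
    //     [obj performSelector:@selector(_somePrivateMethod)];
    //
    // The obfuscated form assembles the name at runtime, so the private
    // name never appears in one piece for a static scan to find:
    static void CallHidden(id obj) {
        NSString *name = [@[@"_some", @"Private", @"Method"]
                          componentsJoinedByString:@""];
        SEL sel = NSSelectorFromString(name);
        if ([obj respondsToSelector:sel]) {
            ((void (*)(id, SEL))objc_msgSend)(obj, sel);
        }
    }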
"Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it."
There are a million things under the sun that private APIs have access to that wouldn't be possible with the use of public APIs alone, good developer or not. Prime example: "UIGetScreenImage()". That function allows you to take a screenshot of the device's entire screen, your app, someone else's app, the home screen of iOS. That's a pretty big security hole, is it not?
There are countless examples just like that one hidden inside the private API bubble. Things the OS needs to function (although the OS may not need that particular example anymore) but that could cause massive security issues.
Reminds me of people fussing about getting root on a workstation. Simply getting access to the user's account, without root, will be hugely damaging. Plus you'll likely have root in no time after you get that user account.
And the review process isn't even entirely about stopping the attack. If the malicious code was in the app when it was submitted for review, you at least have a trail and can review it later to see how it happened.
If the attack happened with this specific app framework, the bad code could be dynamically loaded into memory and then purged, so you'd never know what happened.
Traditional UNIXoid workstations are quite different. A program running under your user can do anything your user can do. It can access and delete all of your data.
An iOS app can't access or delete any of your data by default. Everything requires explicit permissions granted by the user, and even those are pretty limited. As long as the sandbox functions correctly, a malicious app will never be able to, say, read my financials spreadsheet out of Numbers, or my private texts out of Messages.
I've yet to see any evidence that this process adds security. The review process is extremely shallow (some automated tools are run to scan for private API calls and such, and two non-experts spend a total of about ten minutes with your app), so there's no hope of any sort of useful security audit being done.
You have to trust app developers anyway, since they run native code on your machine. While there are security concerns, these are not the real motivation. Apple is gradually closing down their platform, as many people have predicted in the past. You can also see that in various subtle changes to Gatekeeper and the Sandboxing features.
For me personally, the red line is when unsigned executables can no longer run on MacOS. If Apple ever disallows unsigned executables, I will immediately discontinue my application on MacOS and redirect customers who rely on it to Apple's customer support.
Time will tell. I think it will really come down to the severity of malware problems of the future.
But I really think we'll just move 100% into bifurcated systems (we're already there with Intel's ME to a large extent) where the place that arbitrary code can run is completely segmented off from trusted code.
That is my personal red line also. I am 100% in support of them enforcing signed apps for the majority, but it should be something advanced users can turn off via firmware/BIOS. My mom does not need to run unsigned apps she finds on the Internet.
It wouldn't be as bad if their store weren't also the only store available for the platform. Because of this forced monoculture, the criticisms are well within scope.
And then Apple is to blame, and can spend tons of dollahs and man hours to fix problems caused by your "open" alternatives.
This post deserves more attention.
As a security-conscious user, I find live patching awful. Nothing guarantees me that the benign app I've been granting various permissions to won't be altered by a fourth-party adversary (through coercion or hacking) and wiretapped by a malicious dynamic payload.
One could argue that live patching allowed companies to fix or mitigate security problems faster than Apple's (awful) app store policy (and timescale) would otherwise allow.
Yet we can say that code review by a third party is better for trust in that code than no code review by a third party.
"Nothing guarantees" may have been strong. but "the set of attack vectors and their relative efficacy increases " doesn't roll off the tongue quite as nicely.
That's guaranteed, at least ...
Your argument does sound good, but it's a double-edged sword.
I'm wondering which of the current top-downloaded FlappyCrush Of Titans clone got caught exfiltrating all their players contact lists or something...
Sorry for your loss, but I'm glad Apple is doing this and apps will be safer.
"new apps presenting new questions may result in new rules at any time."
Good luck to you, but it's Apple's sandbox and your product appears to thwart the principles that the Apple App Store has been run on for nearly a decade.
No binary code being injected.
A number of other posts talk explicitly of dynamic delivery of native code. If you're sure, it's a genuine question: I'm interested to know how this works. Function-pointer swaps are one thing, but how would this allow you to patch bugs in the app? I can see how this could let you change the app's behaviour, even including calling private APIs, but surely this would be constrained to calling pre-existing behaviour?
Man, do I dislike marketing speak.
A "We knew we were non-compliant, but think the security benefits of quick bugfixes outweigh the disadvantages. We will work with apple to return to compliance." would've been honest, better and not bs.
"We have always been in compliance with the guidelines, and we are asking apple and trying to figure out why we're somehow not in compliance" is a fair statement, and not at all BS.
"we've been getting away with it for a long time, so there's that"
Did you see the article on HN last weekend about WiFi routers? "I have a 1.3gbps wireless ac router... but only at the PHY layer, but only in an RF test lab, but only if the client is MU-MIMO enabled, but only if they talk on all 4 channels, but only if the signal connects at 100%, but only if your data is 10:1 compressible, but only if you have one client..." Even after five "but only if"s, there was still an unexplained 20% discrepancy between the advertised "speed" and what the device was physically capable of. I'd love to hear their lawyer explain how that's not false advertising.
Other than contacting Apple what can you do to combat this?
That being said, I agree with most of the people here; live patching in my opinion is kind of infringing on the users' freedoms and security.
Back in 2012, it wasn't prohibited by the ToS at all; we read and re-read the ToS over and over again to make sure so that we wouldn't waste our time building something "illegal."
Once I had the third largest social gaming company as a customer, Facebook's lawyers pulled the plug on it right away.
Turns out (according to Archive.org Wayback Machine), they added a new clause to their ToS two days before emailing us about our ToS violation:
"You must not give your secret key and access tokens to another party, unless that party is an agent acting on your behalf as an operator of your application. You are responsible for all activities that occur under your account identifiers."
Moral of the story: If they want to nuke you, they WILL nuke you (I'm sure Facebook wasn't too happy about my database storing millions of users' social graphs on it, and that was the REAL reason for the shutdown).
Even during our YC interview, a couple of the most legit original partners told us on our way (permanently) out the door "yeah, you guys are going to get shut down..."
I can't imagine any iOS developer who knows the guidelines and how your product works wouldn't have been worried.
We've seen this over and over. The platform risk should be seriously considered. Even AWS has demonstrated recently how dependence can be catastrophic.
"Expo (and the React Native library we use) doesn't do any of that. We also make sure we don't expose ways to dynamically execute native code such as the dlopen() function that Apple mentioned in that message. We also haven't received any messages from Apple about Expo, nor have we heard of any Expo developers receiving the same." - Exponent Team
Apple's message reads to me that they're concerned about libraries like Rollout and JSPatch, which expose uncontrolled and direct access to native APIs (including private APIs) or enable dynamic loading of native code. Rollout and JSPatch are the only two libraries I've heard to be correlated with the warning.
Technically it is possible for a WebView app or a React Native app also to contain code that exposes uncontrolled access to native APIs. This could happen unintentionally; someone using React Native might also use Rollout. But this isn't something specific to or systemic about React Native nor WebViews anyway.
One nice thing about Expo, which uses React Native, is that we don't expose uncontrolled or dynamic access to native APIs and take care of this issue for you if your project is written only in JS. We do a lot of React Native work and are really involved in the community and haven't heard of anyone using Expo or React Native alone having this issue.
Since they're not, I wouldn't have _too much_ faith in other things not being rejected.
React Native is very much like a WebView except it calls out to one more native API (UIKit) for views and animations instead of using HTML.
What neither WebViews nor React Native do is expose the ability to dynamically call any native method at runtime. You write regular Objective-C methods that are statically analyzed by Xcode and Apple and it's no easier to call unauthorized, private APIs.
With Expo all of the native modules are safe to use and don't expose arbitrary access to native APIs. Apps made with Expo are set up to be good citizens in the React Native and Apple ecosystems.
    if jsonResponseFromYourBackend contains "runThis":
        performSelector: json["runThis"]

Then make sure you don't send a runThis param while the app is in review.
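Fleshed out slightly, the trick above looks something like this ("runThis" is from the pseudocode; everything else is a hypothetical sketch):

    #import <Foundation/Foundation.h>
    #import <objc/message.h>

    // The backend, not the reviewed binary, decides which selector runs.
    // While the app is in review, the server simply omits "runThis".
    static void HandleBackendResponse(id target, NSDictionary *json) {
        NSString *cmd = json[@"runThis"];
        if (cmd.length == 0) return;
        SEL sel = NSSelectorFromString(cmd);  // nothing for a static scan to see
        if ([target respondsToSelector:sel]) {
            ((void (*)(id, SEL))objc_msgSend)(target, sel);
        }
    }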
Unfortunately for Apple's app review process, Apple's own objective-C language and runtime has very strong dynamic reflection capabilities.
At runtime, any API calls made by the app are checked against this file; if a new API call is found, then it must have escaped Apple's code scanning logic. The API call can be rejected and logged for Apple to improve their scanner.
Moreover, no random app can extend the system's SELinux policy.
Your suggestion of enforcing this also makes no sense from a performance or privacy standpoint.
In either case the root issue is about sending secrets like an unscoped API key to the client. "Client secret" is an oxymoron in this context regardless of the programming language.
The API key is actually the least of the hazards, since you can hide that in the keychain. Having source code for your business logic shipping in your app is not good; having it be hackable business logic (by changing the JS in place) is very not good.
Hybrid apps could achieve that same kind of relative business logic security, but at the cost of pushing more and more of the actual business logic behind an API and not in the JS in the app. At that point, the benefits of code sharing (such as they are) get fewer and fewer since it's really pretty easy to write API code in Objective C, Swift, Java or Kotlin.
I'm not aware of any non-tethered jailbreak for iOS 10.
If an app is modified on the fly to use an undocumented and maybe "forbidden by Apple" method in order to bypass security features or, worse, spy on me, I'm clearly not OK with that.
Do you really think the Apple ecosystem works because users see the App Store as an evil cage and external developers as angels with good intentions?
By not allowing developers to patch their app 'on the fly', directly, without going through a new version in the app store (a very lengthy process, almost forever in security terms), Apple effectively protects their iOS users from malicious code (not from the dev, who is probably to be trusted if the app is already installed, but from MITM attacks and the like). However, at the same time they rule out very fast security patches, without which the device and all associated data and accounts may be compromised entirely.
So there's no best world but undoubtedly many aspects to consider. Anecdotally I find that it's often better to trust the developers of an app to maintain their own thing; the OS only being a facilitator, provided the user has control (UAC on Windows, Permissions on mobile, etc.) [note: obviously you trust the OS vendor to patch said OS, it's just a particular kind of app]
Also, the current policy doesn't prevent app developers from implementing a "kill switch" that prompts the user to update (or wait for an update) at the splash screen and aborts loading of the malfunctioning version.
Source: https://github.com/bang590/JSPatch/issues/746 (in Chinese)
The difference is the JS-to-native bridge. For React Native it is a fixed bridge that can only change with a change in the binary.
For Rollout, they can execute any native code with an update of the JS.
The warning is not about React Native, it's about exposing uncontrolled access to native APIs including private ones and React Native doesn't do that.
It is all about intent. If an app allows that, it is banned for that behavior, not for having React in its kitchen. But if you have an armed drone there, then the police have questions.
"Even if the remote resource is not intentionally malicious, it could easily be hijacked via a Man In The Middle (MiTM) attack, which can pose a serious security vulnerability to users of your app."
I'm not even fond of Apple, but I'd rather trust them, and I'm glad they're protecting their users.
 Caveat: I don't know how likely/possible these are to occur on iOS. I assume a sufficiently motivated & misguided developer could do them within their own app's context.
If I'm running an app that includes native code and accesses data from the outside world, then I'm probably trusting that app developer to write C code that doesn't contain arbitrary-code-execution vulnerabilities, which is much, much harder than getting HTTPS right.
Or an attacker who controls or can coerce a Certificate Authority in the OS's root list - like, say, just about any nation state...
Most apps - I suspect - are not pinning their TLS certs. Apple have already gotten into a very public fight with the FBI.
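For reference, pinning isn't much code. A minimal sketch of leaf-certificate pinning with NSURLSession (the pinned digest is a placeholder you'd compute from your own cert):

    #import <Foundation/Foundation.h>
    #import <Security/Security.h>
    #import <CommonCrypto/CommonDigest.h>

    static NSString * const kPinnedLeafSHA256 = @"<hex digest of your leaf cert>";

    @interface PinningDelegate : NSObject <NSURLSessionDelegate>
    @end

    @implementation PinningDelegate
    - (void)URLSession:(NSURLSession *)session
    didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge
     completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition,
                                 NSURLCredential *))completionHandler {
        if (![challenge.protectionSpace.authenticationMethod
                isEqualToString:NSURLAuthenticationMethodServerTrust]) {
            completionHandler(NSURLSessionAuthChallengePerformDefaultHandling, nil);
            return;
        }
        SecTrustRef trust = challenge.protectionSpace.serverTrust;
        SecCertificateRef leaf = SecTrustGetCertificateAtIndex(trust, 0);
        NSData *der = CFBridgingRelease(SecCertificateCopyData(leaf));

        unsigned char digest[CC_SHA256_DIGEST_LENGTH];
        CC_SHA256(der.bytes, (CC_LONG)der.length, digest);
        NSMutableString *hex = [NSMutableString string];
        for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++)
            [hex appendFormat:@"%02x", digest[i]];

        if ([hex isEqualToString:kPinnedLeafSHA256]) {
            completionHandler(NSURLSessionAuthChallengeUseCredential,
                              [NSURLCredential credentialForTrust:trust]);
        } else {
            // A cert that chains to a trusted CA but isn't ours (e.g. a
            // MITM proxy, or a mis-issued cert) is rejected here.
            completionHandler(NSURLSessionAuthChallengeCancelAuthenticationChallenge,
                              nil);
        }
    }
    @end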
It basically says "only we, Apple, can do HTTPS right, you can't, and even if you try you can easily be MiTMed". Which I don't agree with.
What you say is correct, but it's not the argument I criticize. Your point is that they don't trust developers to implement secure loading of code, don't have technical means to control it, and can't or don't want to check it in the review. But that's completely different from "you could be easily hijacked if you're not Apple".
Are _you_ secured against, say, an attacker who works at Verisign and can create a valid cert for api.yourdomain.com? Or an attacker who has a buddy who works at GoDaddy who can subvert your DNS records so they can trick LetsEncrypt into issuing a valid cert for api.yourdomain.com? Or an elbonian teenage hacker who's just got your AshleyMadison password from pastebin and used it to log into your Gmail account and take over your DNS registrar account to get themselves a valid ssl cert?
For me? Not really "easily" (tho a wifi pineapple in a coffee shop where FE Devs hang out attempting to MiTM them with the Charlesproxy root CA would be a fun experiment... Which, of course, I'd never do - because that'd be bad, right?)
For someone at the NSA or CIA or Mossad? Sure it's easy. For someone a little further down the LEO "cyber" chart like FBI, probably not "easy". For a local beat cop or council dog catcher - nah, definitely not "easy".
For a very-dark-grey pen tester or redteam who're prepared to phish your email password and use it to p0wn your dns registrar? They'd probably call that "easy"... (Hell, I've got a few pentesting friends who'd call that "fun"!)
From April 2016
>>Rollout is aware of the concerns within the community that patching apps outside of the App Store could be a violation of Apple’s review guidelines and practices. Rollout notes both on their FAQ site and in a longer blog post that their process is in compliance.
The reason that's allowed is because everything executes within a sandboxed browser environment. No native code is downloaded.
That's the same thing Rollout does. In fact, iOS apps can't download and run native code. The OS won't let you mark pages as executable unless they're appropriately signed, and only Apple has those keys.
But surely you can see the difference between executing limited actions inside a web view, and making available any native method to a web view.
I don't see any fundamental difference here. Both (UI|WK)WebView and JSC allow bridging. Neither one grants full access to JS code automatically, the programmer has to put some effort into it. And even if there is some important difference, neither one is native code which is what I was disputing above.
Worked on a F2P mobile game; we bundled a tiny Lua engine and then pushed various promotion screens as a bundle of resources (images) and Lua code (screen layout, its preconditions, and what's going to happen after you click - the game provided a small API that the Lua code called).
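The setup is roughly this - a hypothetical sketch in C against the standard Lua API, not the game's actual code:

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    // One function from the game's small API, callable from a downloaded
    // script as show_image("promo.png").
    static int l_show_image(lua_State *L) {
        const char *path = luaL_checkstring(L, 1);
        /* ... hand off to the game's UI layer ... */
        (void)path;
        return 0;  /* number of values returned to Lua */
    }

    void run_promo_screen(const char *downloaded_lua) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        lua_register(L, "show_image", l_show_image);
        if (luaL_dostring(L, downloaded_lua) != 0) {
            /* log lua_tostring(L, -1) and fall back to a default screen */
        }
        lua_close(L);
    }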
Would you mind elaborating on why Erlang programs tend to have FSMs? LYSE has a chapter on how to use gen_fsm, but I've really been unable to find a great answer as to WHY you would want to use it.
Although those updates never trigger the Android update service, so I'm not sure if they are just downloading more resources or if they are able to request new permissions (I would like to assume not).
Google did recently change the Android permission model; previously, apps had to request all their permissions at install time and it was all-or-nothing (and frankly, hardly anyone bothered to look them over.)
Now, certain permissions have to be requested when they're needed (at least for recent versions of the SDK) and the user can choose to allow or deny. But an app can't grant itself new permissions without going through the official update process.
I always wonder when seeing one update, if there is a 0 day that can bypass that. On a technical level I know I run the same risk with my PC, but at the same time, it's more difficult for me to examine processes and startups in my android.
The reason this type of "hot code push" is more attractive on iOS is because the app review process is much longer, so publishers look for ways to skirt it. Looks like Apple is just starting to enforce it more.
Does Rollout comply to Apple’s Guidelines?
Yes. As per Apple’s official guidelines, Rollout.io does NOT alter binaries. ... With over 50 million devices already running our SDK, it is safe to say that Rollout complies with Apple’s development and App Store guidelines.
What did they expect when their entire business model is based on something that's literally the opposite of what the review guideline allows?
However, due to their miserable practices I am glad that almost all (if not all) drivers for Uber also drive for the local taxi-in-an-app service.
Uber is legal in the vast majority of cities.
Not to mention that they (allegedly) attempted to subvert law enforcement attempts to investigate them, by "greyballing" law enforcement. Whether that is considered obstruction of justice is a secondary question to whether it is incredibly shady.
You can do hot "code" push techniques that allow the first but not the second, by letting apps update HTML and JS that calls back into pre-existing native code. That's what Cordova / PhoneGap does. I'd guess that Apple will just ban the app and the developer if they catch it.
It appears that Rollout started using some API that would enable it to do the second, and Apple is preemptively making sure that it doesn't happen. The wording of the rejection is based on passing computed parameters to introspection routines.
I'm sure it worries about them but there's a much larger, riskier scenario.
Once you start downloading and executing binary code from untrusted sources (i.e., not the App Store) anything can go wrong.
1. An iOS app doesn't care about security, and it hot-loads code from some non-https source and gets man-in-the-middle'd
2. An iOS app hot-loads code in a secure manner, but the server from which the code is served becomes compromised
3. A malicious employee at an iOS app vendor pushes harmful code out via her company's app
Now, I'm not a fan of Apple's policies. I think there should be a "guys, I know what I'm doing" mode where I'm allowed to download code from untrusted sources. Just like Android or MacOS.
However, I sympathize with them here. For nearly a decade people have been downloading code from the App Store with the understanding that it is safe to do so. Even I appreciate this much of the time... I'm an engineer but I'm busy. I can't audit every app I download. I wish there were other options, but I find huge value in the fact that I don't have to worry about an App Store app screwing my device. And it's a big reason why I recommend iOS despite its flaws to older family members.
This exists today and has for a long time, it just costs you money for this "privilege".
I've never understood this part. Why doesn't iOS simply prevent apps from calling private APIs?
So no, there's no way for Apple to technologically make it impossible to call private functions. The only actual solution there would be to completely rewrite the OS such that literally every call into a framework that an app makes actually goes over IPC (so that way apps can't even attempt to invoke private functions since they won't be linked into the app), but that would probably be crazy slow which is why nobody does that.
Say they implement the scheme you mention in clang and the LLVM linker, so the function bodies of their public APIs end up placed in that privileged region of memory, and those of their private APIs end up in the restricted region.
Nothing prevents gcc from producing object files that tell the linker "this user function is part of Apple's public APIs". And nothing prevents people from using a different linker anyway, one that would put private API body functions out of the restricted region of memory.
The only real way to achieve that would be to move all their frameworks to the kernel, which would be all sorts of problematic.
Agreed that this design is fundamentally flawed, but that's because the coder is providing the implementations of private code. Providing that is Apple's job.
Put privileged code into a dynamically-linked library that Apple provides. Only code in that block of memory can call private APIs. Pretty straightforward to implement, and requires nothing fancy from the kernel.
Of course this only works if you can prevent the attacker from corrupting memory.
And well, in any case they need to maintain compatibility with current apps for who knows how many years.
Such a scheme wouldn't stop ASLR. The loader just needs to tell the verification code where it put the privileged libraries.
> And well, in any case they need to maintain compatibility with current apps for who knows how many years.
Do they? I think Apple could easily order everyone to switch over to a more secure compiler with a one year deadline.
Even if I had the idea and the technical skills to pull such a product off I wouldn't even bother trying for fear it wouldn't even get past the first hurdle.
At some point, is there a risk that Apple may also start to ban the web browser, despite it being under strict control on iOS?
Apple/Google/Platform Owner will always do what's right for them, not the customer, and not the developer, for example banning Amazon from selling books in their kindle app, not allowing competing browsers (they recognise the power of the web as a platform), not allowing competing sales mechanisms (where they don't get a cut), and here not allowing developers to update their apps except through the store mechanism. I have some sympathy with Apple here, and see why they're doing it (they have to control what software is installed for security reasons as well as platform protection), but this is all about control over what you install on your own device. Sometimes their actions will be in the best interests of customers, even if not the best interests of developers, but most of the time their actions are simply aimed at preserving their control of the platform and control of the money flowing through it.
The web is the one exception to this rule which works across all platforms and devices (because it is so dumb and simple), and has survived attempts to corral it to a walled-in commercial offering remarkably well.
Or, try making a Photoshop clone, CAD software or similar without being on someone else's platform.
"it don't work"
Or is the right challenge "make a game for PC that isn't in the microsoft app store"? Which is in fact no hindrance at all.
It's not really "someone else's platform" just by using someone's software. They have to be in control.
As for video games, yes, you'll be beholden to the platforms you build under. This is why I say to devs: always build it cross-platform!
It is like saying that GM is completely within their rights to restrict where you can drive your GM car. Mind you, that is coming in the pretty near future too, given all the computerization/connectivity/self-driving of cars, which would turn cars into GM's "application platform", with the DMCA protecting such a platform just as it protects Apple/Google/FB/etc...
I wonder whether you intentionally skipped that part or just don't know that, in the case of a non-jailbroken iPhone, the "GM dealers" are the only way to get "aftermarket accessories". There is no "if they want to sell their products through"; instead there is "if they want to sell their products at all".
Reject this naive approach to reality. No, we will not disregard Tim Berners-Lee's role, timing, or place in history. We will not pretend that Cupertino should ignore their responsibility to prevent arbitrary code execution on one of the most widely deployed platforms on the planet. If anything, we celebrate that despite Apple being a giant target for quite a bit of criticism these days (some deserved, some not), for the most part, people on this thread recognize what a giant, unacceptable vulnerability Rollout.io is.
Tech must advance beyond adversarial, animus-based, abrasive reactions to other operating systems, ecosystems, and the like. No time like the present and no better opportunity than some ignorant HN strawman.
Imagine the kerfuffle if Google had Chrome on iOS. 2% of "PC" users complain of Chrome hitting their battery hard on macOS. iOS has a much bigger market share.
Competition is healthy, I agree. But sometimes the best interest of the vendor and the users don't align, and I'm more confident Apple is prioritizing battery life and performance over other things, while Google will prioritize those other things (like ads and data collection). I can't imagine them allowing adblockers on iOS (exactly as they don't on Android, afaik)
Personally, I think that Cordova hybrid apps will continue to be okay, but I don't know about something like React Native...
I would also recommend not using CodePush to completely change what an app does.
>change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.
I mean I read the headline, thought this sounded eminently sensible, then read the story and saw it was a framework for doing this, and my inner mental model of my security researcher girlfriend leaned forward, started rubbing her hands together, and wanted to start digging for those sweet new vulns.
So they weren't changing app functionality after App Review approval; it's just that for users some of that functionality was gated on a boolean that was fetched over HTTPS.
It's also a thing that can easily affect small developers; if your app requires logging into some existing paid account (enterprise software, a bank's app, etc.), the available features depend on what features the account has paid for. So as part of the review, you send Apple credentials for a test account that has all the features enabled. (Without a test account, they couldn't log in at all.)
I say, don't go native unless you must (for performance reasons etc.). Push the web forward instead!
It basically goes:
* add their SDK, which has the ability to swizzle (swap out the implementation of) arbitrary methods in your app - see the sketch after this list
* their SDK checks on startup which methods to replace and downloads the appropriate JS replacements
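The core primitive is a one-liner from the Objective-C runtime. A bare-bones sketch (class and method names are hypothetical):

    #import <objc/runtime.h>

    // After the exchange, every call to `original` on `cls` runs the
    // replacement's implementation instead -- in Rollout's case, a shim
    // that trampolines into the downloaded JS.
    static void Swizzle(Class cls, SEL original, SEL replacement) {
        Method orig = class_getInstanceMethod(cls, original);
        Method repl = class_getInstanceMethod(cls, replacement);
        if (orig && repl) method_exchangeImplementations(orig, repl);
    }

    // e.g., applied after fetching a patch descriptor:
    //     Swizzle([CheckoutController class],
    //             @selector(buggyMethod), @selector(rollout_patchedMethod));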
It's much like the argument 'if you ban guns only criminals will have guns' and it's quite true in this case.
It's about damage mitigation, and shutting down a third-party "app-hot-fix" service is a good move.
It's harder to fool Apple with every submission than to submit some naive-looking thing once and still have unmitigated access to change its code.
The explanation on the rollout.io site about why they are fine is intentionally deceptive. They have the guts to link to a document that says "An Application may not download or install executable code." and then quote more friendly excerpts in the hope that you won't read the actual doc. I can't imagine why Apple has let this go on for so long.
"3.3.2 An Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exception to the foregoing is scripts and code downloaded and run by Apple's builtin WebKit framework..."
> This includes any code which passes arbitrary parameters to dynamic methods such as dlopen(), dlsym(), respondsToSelector:, performSelector:, method_exchangeImplementations(), and running remote scripts in order to change app behavior or call SPI, based on the contents of the downloaded script.
Uber can write a "foo for a day" feature, that downloads some strings and some images from the Uber website, but doesn't actually add any functionality.
Same with any mobile game that has periodic events. You just write code to implement a generic event, write data to describe the event (name, graphics, level data), and do an update when you think of a different kind of event.
- you publish an app that creates collages;
- user gives access to Photos;
- you change your JS to upload all the photos to your server.
Substitute "photos" part to any iOS permission; isn't this security risk? Should it be allowed under current App Store ToS? Also, what's stopping your JS code from downloading binary code and injecting it via some iOS exploit into "native thread"?
But there's little to prevent you from lying about it, and there are so many iOS platform technologies out there now (native, PhoneGap, React Native, AIR, Xamarin) that there's probably no way for Apple to see what you're doing in an automated way.
It works by "swizzling" methods, which is a valid mechanism in Apple's runtimes (like in Ruby or many other dynamic languages). In Swift, this will only work in subclasses of NSObject or its decendants, because only then message dispatch is used.
For example, Google has Tag Manager, which I believe is mostly used for managing small UI-related changes. Is there any clear documentation or distinction about that?
I would have expected dlopen and dlsym are blacklisted - you can trivially use dlsym to access any blacklisted API. Similarly, I thought SPI ("system programming interface" = private API) was all blacklisted. Am I misreading the message?
That said, if this is what they're focused on, it seems like it actively does not impact any apps that hot-push HTML code (e.g., PhoneGap / Cordova).
If the only reports are coming from Rollout.io, my guess is that the latest Rollout SDK uses one of these functions (I'd bet method_exchangeImplementations(), i.e., swizzling) with a dynamic parameter, and that the SDK can be changed to just stop doing that.
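The dlsym point above is easy to illustrate. In this sketch the symbol name is computed at runtime (it could just as well arrive from a server), so a static blacklist scan never sees it; "SomePrivateFunction" is a hypothetical stand-in:

    #include <dlfcn.h>
    #include <stdio.h>

    static void call_blacklisted_api_anyway(void) {
        char name[64];
        snprintf(name, sizeof name, "%s%s", "SomePrivate", "Function");
        // The binary contains neither the full symbol name nor a direct
        // reference to it -- only this generic lookup.
        void (*fn)(void) = (void (*)(void))dlsym(RTLD_DEFAULT, name);
        if (fn) fn();
    }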
Devs should re-submit their code to Apple instead of pushing "fixes" to phones. I guess it's fair to assume 99% of those pushes are safe (80%? guess it's more like "benefit of the doubt"), but the 1% that escapes scrutiny is the one piece that makes the whole platform shaky and, honestly, poses a major attack vector. I wonder how often this was abused.
As for a service like rollout.io, it's a service that was never supposed to be. Especially if it serviced hundreds of apps - as a security-minded individual, I shudder at the thought of what might have slipped through.
Edit: after digging into rollout.io and finding out it's based in Tel Aviv, is it wrong to speculate about the origins and real purpose of an Israeli company that specializes in injecting code into iPhone applications?
It is too bad people are using it for good causes, but the hole has to be closed. Sorry, guys.
We always felt this day would come, however, so we switched directions.
Or is the app review process more subjective?
It's called CodePush, and it's a plugin for React Native (does not come with it). It is developed by Microsoft: https://microsoft.github.io/code-push/
Doubt it would get taken down.
So, with this "hot code push", is it also possible to update an app (adding new features), and not only bug fixes, directly?
If yes, then sounds like you are trying to circumvent the App Store QA process.
If you use hot code push together with React Native, then you're borked.
Also, from Firebase Docs "Don't attempt to circumvent the requirements of your app's target platform using Remote Config."
"Firebase Remote Config allows you to change the look-and-feel of your app, gradually roll out features, run A/B tests, and deliver customized content to certain users, all from the cloud without needing to publish a new version of your app."
With Remote Config you deploy the code, then decide what to show to users post release. This gives Apple a chance to review all the code submitted to the App Store.
In the case of rolling out features and A/B tests, you'd make a release including a feature but only enable it for x% of your users using RC. You can then obviously enable it for everyone if it passes your A/B test or if you're happy it's working.
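In code, the distinction is that both branches already exist in the reviewed binary. A sketch of the pattern (the shape of it, not the actual Firebase API):

    #import <Foundation/Foundation.h>

    // remoteFlags is assumed fetched over HTTPS, e.g. @{ @"new_checkout": @YES }.
    // The server can only choose between paths Apple has already reviewed;
    // it cannot introduce a new one.
    static void ConfigureCheckout(NSDictionary *remoteFlags,
                                  void (^newPath)(void),
                                  void (^oldPath)(void)) {
        if ([remoteFlags[@"new_checkout"] boolValue]) {
            newPath();   // enabled for x% of users during the A/B test
        } else {
            oldPath();
        }
    }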
> 3.3.2 An Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exception to the foregoing is scripts and code downloaded and run by Apple's builtin WebKit framework, provided that such scripts and code do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.
We've been meeting with Apple on this topic for years and continue to sideload our app as we need to meet SLAs with our customers. They sign the binaries with their dev certificates, which violates Apple's guidelines too.
But, alas, once you have critical mass in a vertical even mighty Apple gets cold feet about shutting your customers down.
Why Apple is not able to offer a separate way for certified and audited dev shops to hotfix their iOS apps is beyond me. SAP, MS, IBM - a shitload of big shops would love to pay for this privilege.
In this case it isn't required at all though because Apple allows enterprise to sideload apps outside of the review process.
Right now we get the certificates from our customers and sign the individual binaries. Then we distribute through our own infrastructure.
We have our own update mechanism (basically hot code push); we cannot have the customer's own IT shop be a barrier to deploying the fix. Users sync their apps; if there is an upgrade, it gets done in between the normal data/content sync.
"At most", those companies would just have to "change their businesses".
Apple have curbed a lot of obnoxious developer practices (and enforced good ones, like the move to 64-bit not long ago) and they, along with Microsoft, are probably the only ones with enough muscle to be able to do that.