Rollout's mission has always been, and will always be, about helping developers create and deploy mobile apps quickly and safely.
Our current product has been a lifesaver for hundreds of apps by allowing developers to patch bugs in their live apps.
We were surprised by Apple's actions today.
From what we've been able to gather, they seem to be rejecting any app which utilizes a mechanism of live patching, not just apps using Rollout.
Rollout has always been compliant with Apple's guidelines as we've detailed in the past here:
Our SDK is installed in hundreds of live apps and our customers have fixed thousands of live bugs in their apps.
We are contacting Apple in order to get further clarification on why Rollout doesn't fall under the clause that lets developers push JS to live apps as long as it does not modify the original features and functionality of the app.
I'll post updates as I have them.
I hate the App Store review process and a lot of Apple's policies around the App Store, and I feel for you, and I totally think there should be a less onerous update/review process ... but ... you clearly and blatantly circumvented a core policy, and what happened to you was absolutely predictable.
Get your money back from the lawyer that told you Apple wouldn't shut you down. You got bad advice.
Without reading the blog, I just wanted to comment on Aereo: a lot of us think that this was the wrong decision, and not in a facetious or 'cute' way.
To quote Scalia's dissent in the case:
> In a dissent that expressed distaste for Aereo’s business model, Justice Antonin Scalia said that the service had nevertheless identified a loophole in the law. “It is not the role of this court to identify and plug loopholes,” he wrote. “It is the role of good lawyers to identify and exploit them, and the role of Congress to eliminate them if it wishes.”
Just one example -- there may have been a group of supporters of the law used against Aereo who only supported it because they realized it contained said 'loophole'. The rule would not have become law without the 'loophole'. Now, how should a court interpret those circumstances?
This is not the case in common law systems, which the US and UK have. Judges discover the law through principles and precedent. Legislation can override this, however. The US Constitution is a good example.
EULAs and TOSes are firmly in private law, and we can take England and Wales as the national setting.
Even here, "judges discover the law through principles and precedent" is inaccurate. First and foremost there is overriding statute. Where Parliament has intervened in matters of private law, Parliament wins; the parties may choose to show that Parliament's intervention does not apply for some reason (e.g. it conflicts with a subsequent intervention by Parliament, or it does not apply strictly in the matter before the court). Judges may act sua sponte, but in private law mostly leave such matters up to the parties to draw to the court's attention. Secondly, there's the plain wording of the contract. Finally, there's recourse to covering case law established by higher courts and binding on the court of first instance (e.g. the county court or the High Court).
However, Parliament has caused the Civil Procedure Rules for England and Wales to bind the county courts, and CPR rule 1 is the "overriding objective", which directs judges to be just, taking into account the totality of circumstances and the behaviour of the parties, among other things. The Human Rights Act 1998 also requires courts to take into account the rights it brought into force, and this applies to all courts. These two features oblige judges to look past statute (or, more strictly speaking, to do a reading-down as necessary) and the specifics of a contract when assessing liability.
The private law system in England and Wales is (mostly) adversarial with the judges (mostly) paying attention to issues brought up by the parties' advocates. There are specific obligations on the court to act sua sponte as noted, and a court is free to ask questions or consider points not brought up by the parties, and it is also free not to look too deeply into matters of its own volition. This can lead to "judge roulette" to some degree, but the court-appearing legal community in England and Wales is not that large (and it's even smaller in Scotland or Northern Ireland) and good advocates and even good solicitors have some idea of what to expect from a particular judge in terms of case management.
However, I don't think many would agree that judges should "discover the law through principles and precedent". Certainly almost no senior English judges would agree with that idea; indeed, the majority are much more likely to say that the parties should draw to their attention every salient aspect of the dispute so as to reduce the court's workload (in principle, to do sufficient work that few disputes really need a hearing or a conclusion other than an out-of-court settlement between the parties).
They "discover the law" mostly by having it brought to their attention by the parties. Except in constructive litigation, the adversarial principle supposedly guarantees that one party cannot wholly misrepresent the law to the judge (unfortunately this is often not the case, especially where one party has much deeper pockets than the other, and even less the case when filings are not even dealt with because the cost of litigation exhausts one party, even where that party has a good case that the non-exhausted party is misrepresenting the law).
The law stems from several sources. Depending on the area of private law practice, statute and secondary legislation may have codified many aspects such that no other source of law is required in most cases, or (as in landlord-tenant law) statute law may be highly scattered across many Acts of Parliament, and additionally almost always engages in references to decisions by the Court of Appeal taken to resolve disputes where Parliament has not decided to provide a statutory basis for the resolution. (That's mostly because MPs are terrified of legislating in the area of property law, since it is a daunting task to consolidate hundreds of years of various sources of law into one Act; not-so-jokingly, the Great Repeal Bill proposed as part of the Brexit process will probably be less involved.)
Scalia's argument is overly idealized and focuses on the legal aspect of the system of justice, to the detriment of the justice side. A system of justice should lead to a finding of liability on wrongdoers, but should hold non-wrongdoers harmless from liability. (Unfortunately there are several aspects of the system of justice in England where that falls down, but at least there aren't many professionals in the justice system who think it should be even less just, finding non-wrongdoers unjustly liable simply because that is what the law says to do.)
But your focus on advocates bringing the law to the judges' attention seems to support the parent comment: judges "discover" the law. Certainly statute overrides all, but the point of common law is that statute is always insufficient. It is not enough to deal with the facts of any given case. I don't know much about the UK legal system, but in the US, rulings on statute become codified as "precedent". Important and relevant decisions are published, circulated, catalogued, studied, and effectively become the law. Any case that is litigated starts with a series of briefs on what the parties feel is relevant case law. It also might include briefs filed by interested parties, studies of the legislative process to determine intent, and so on. That is all very much in the realm of "discovery".
And yet judges talk about the "spirit of law", as distinct from the "letter of the law", all the time.
No, they don't; I've read lots of legal decisions, and that phrase or anything like it is rarely invoked. Pundits, not judges, are prone to talk about the spirit of the law as opposed to the letter; judges are more prone to talk about legislative intent (not "spirit of the law"), not distinct from the letter of the law, but as part of the analysis of which of several facially plausible meanings the letter of the law should be given in the context of the specific fact pattern presented in the case they are dealing with.
I think this has manifested at its peak with Ethereum.
>evolution of the law (which involves creating new portions of the law to cover previously created portions which are considered lacking)
And I'd argue that electronic privacy vis-a-vis wiretapping laws is creating a new portion of the law to cover previously created portions which are considered lacking. We can quibble about definitions, but that strikes me as very much in the area of "evolution".
In common law systems it is precisely their job to do so.
The reality in Common Law legal systems is nothing like this, and judge made law through interpretation and application of precedent is a very real thing, even in the USA. As a particularly blunt example, in some parts of the UK such as Scotland, the traditional common law crimes such as murder/theft etc aren't even defined in primary legislation ("laws"), and exist solely as judge made and applied creations through decades of precedent. Even where there exists primary legislation, the scope of judicial interpretation gives a great deal of freedom to judges to establish precedents that the drafters might not have foreseen or intended.
Heck even the definition of the term "Common Law" is normally interpreted to mean "Case Law" as developed by judges.
Also, as I've seen in other comments, we're going to get on the merry-go-round of defining "create law".
EDIT: Or Congress can get involved and change the law. Checks and balances.
You may disagree with this, but the fact remains that the law works like this in the US and UK and has since 1066.
In the Aereo case, the law was clear. The court was supposed to uphold the law, but didn't.
FWIW I agree that the court got it wrong, but Scalia's reasoning in supporting Aereo's position is flawed.
Scalia went too far with his dissent. Language is imprecise, and in common law it is always coupled with precedent and intent.
I don't want anyone pushing code updates to the apps that have been reviewed. Whilst that isn't foolproof, compromising the deployment mechanism with this approach is very scary.
Apple has always been adamant that they see _all_ code that goes onto devices. Live patching is so bloody obvious against their EULA.
It is surprisingly simple to make an interpreter that is "accidentally" Turing complete (IMHO this happens by accident so often that I love to say: if an interpreter is not "obviously" more restricted than a Turing machine, it probably is Turing complete).
This is not just my opinion - there are lots of pages on the internet about things that are "accidentally" Turing complete, for example:
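As an illustration (not from the linked pages, just a minimal sketch): a Markov-algorithm-style rewriting loop, where you keep applying the first find-and-replace rule that matches, is one of the classic "harmless-looking but Turing complete" interpreters. All rules and names below are illustrative:

```python
def run(rules, s, max_steps=10_000):
    """Apply the first matching rewrite rule until none matches.

    This tiny find-and-replace loop is a Markov algorithm, a model of
    computation known to be Turing complete with the right rule set.
    """
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)  # rewrite leftmost occurrence
                break
        else:
            return s  # no rule matched: the "program" halts
    raise RuntimeError("step budget exhausted (the rules may not halt)")

# Unary addition: "11+111" encodes 2+3; a single rule erases the plus sign.
print(run([("1+1", "11")], "11+111"))  # → 11111
```

Nothing in the loop looks like a programming language, which is exactly how interpreters end up Turing complete by accident.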
Apple has decided that, and you're not going to get around their policies with a clever rhetorical question.
> Apple has decided that, and you're not going to get around their policies with a clever rhetorical question.
Apple cannot change mathematical facts by "decisional" rhetoric.
I just don't understand how Apple is supposed to draw a line here.
I think that is the problem with Rollout.
"Code" is another term for what you are referring to as "binary".
An example that comes to mind is a high speed image compression app for taking rapid sequences of photos. Apple bought the company or the rights so they could include it themselves.
Software is only easier to write than read if you have an idea what it's supposed to do. If you've ever googled "how do I do X?", then you likely have reverse engineered the answer you found to fit your particular use case.
In addition, and in some countries, you can't patent software (thankfully), and so innovation comes through reverse engineering naturally.
Apple has close relationships with those companies, so it's often a case of them reaching out to the developers rather than just blindly rejecting the app.
But any idea that Apple would allow them to run ruff shot over the platform and do whatever they wanted is a bit ridiculous.
"Roughshod", before we get another mondegreen propagating across the internet.
Roughshod means the horseshoes have their nails sticking out the bottom to help prevent slipping, so you can imagine trampling someone with those could be painful.
The idiom refers more to what a roughshod horse will do to a road or trail surface; the nailheads dig in and scatter surface material every which way, leaving behind a hell of a mess that'll turn to deep slush or sticky mud, depending on the temperature, with the next precipitation.
Apple uses private APIs (http://sourcedna.com/blog/20151018/ios-apps-using-private-ap...) to build some of their software and reject apps doing the same, effectively killing competition.
But Google and Facebook use them because they want to create products that can compete with Apple's features. E.g.: https://daringfireball.net/2008/11/google_mobile_uses_privat...
Yet they are not rejected, because they are "big enough".
Apple often uses immature frameworks internally - like the extensions framework - to polish them, or to dogfood them, before making them official.
That being said, it's hard to argue that Apple (or Android) shouldn't be able to set boundaries on behaviors which are only allowed to be done by the OS as opposed to an app. Apple's tight control of device screen characteristics makes it pretty understandable that they don't want one app able to control how another app looks on the screen.
The optics of the f.lux situation are just really, really bad. But considering that f.lux never really charged, they have a claim to fame few can match: creating a feature good enough that Apple incorporated it into both iOS and macOS (now in beta).
It's really not. The argument for user freedoms is almost as old as software.
Security premise: when you are looking at Facebook, you are looking at Facebook. You are not looking at a third party app drawing over Facebook and pretending to be Facebook.
I do not see the above as a slippery slope. Phishing is a capability apps should not have. Even if they have the best of intentions.
If MS had taken a harder line then at least hundreds of millions of people would have had faster computers... And arguably safer ones. But it would be hypocrisy for MS, given they gave us IE, ActiveX, DLLs, VB macros, etc.
Most third party firewalls are just GUIs using the OS API for filtering, not parsers written in C running in the kernel.
The point is not that these APIs exist; the problem is when vendors actively block others from using them, with hacks and/or policy bans. That's extremely hypocritical and anti-competitive. I can see why unofficial APIs must be discouraged (because let's be honest, developers will bitch and moan when they change -- Microsoft in particular was strong-armed into legacy support for decades by the likes of Adobe and Symantec), but it should never be an excuse to ostracize or tilt the playing field.
It is perfectly reasonable, and so far, any other interpretation seems to be a skewed view to facilitate some sort of non-compliant piece of software.
The only reason I know they do is because some of my friends working on mobile video games regularly complain they can't get some features because they are private, while Google and Facebook do. They analyzed some apps to try to copy said features and realized the unfairness of their situation.
Those are lunch chit-chats, not hard facts. But they have little reason to lie.
I'm not sure if Apple made it available to other companies privately though.
And then they got double screwed, because the US Copyright Office declared that no matter what the Supreme Court said, they were not a cable company and couldn't get compulsory licensing either.
As far as I can tell it's legal to run one antenna for one person, and I have absolutely no idea where the line is that you start violating copyright. I don't think the guidelines are well written.
For those curious about their justification:
It also seems clearly against their EULA, so you only have yourself to blame for this.
Apple's rules can be harsh but I would rarely call them arbitrary. There is a very good security reason for Apple's stance here. And ultimately it's their store and their rules.
I agree that the review process itself does little for security, but surely you don't want to allow applications to pull in unchecked native code over the network, right?
The sandbox prevents apps from pulling in native code over the network. The OS won't allow pages to be marked as executable unless the code is signed by Apple.
For example serial numbers, user ids, lists of installed applications, etc.
Apple blocks private APIs because they don't want to maintain their compatibility across OS releases and don't want third party apps to break when those APIs change.
Edit: I'm starting to suspect that people don't know what "private API" means, so I want to lay it out real quick. Apple ships a bunch of dynamic libraries with the OS that apps can link against and call into. Those libraries contain functions, global variables, classes, methods, etc. Some of those are published and documented and are intended for third parties to use. Some are unpublished, undocumented, and intended only for internal use.
The difference is documentation and support. The machine doesn't know or care what's public and what's private. There's no security boundary between the two. Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it. The only way Apple can check for private API usage is to have a big list of all the private APIs in their libraries and scan the app looking for calls to them. This is fundamentally impossible to do with certainty, because there's an unlimited number of ways to obfuscate such calls.
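A toy sketch of that obfuscation problem, with Python's getattr standing in for the Objective-C runtime's NSSelectorFromString + performSelector: (the class and method names here are made up):

```python
class Framework:
    """Stand-in for a system library that contains an unpublished method."""
    def secret_api(self):
        return "private result"

fw = Framework()

# Direct call: the name "secret_api" appears verbatim, so a scanner with a
# list of private names can flag this call site.
direct = fw.secret_api()

# Obfuscated call: the name is assembled at runtime, so no scan of the
# program's string literals can prove this call targets the private method.
name = "".join(["sec", "ret", "_", "a", "pi"])
indirect = getattr(fw, name)()

assert direct == indirect == "private result"
```

The name could just as easily arrive over the network, be decrypted, or be computed character by character, which is why list-based scanning can only ever catch the unimaginative cases.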
Functionality that needs to be restricted due to privacy or security concerns has to be implemented in a completely separate process with requests from apps being made over some IPC mechanism. This is the only way to reliably gate access.
Apple's prohibition against using private APIs is like an "employees only" sign on an unlocked door in a store. It serves a purpose, but that purpose is to help keep well-meaning but clueless customers away from an area where they might get confused, or lost, or hurt. It won't do anything for your store's security.
"Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it."
There are a million things under the sun that private APIs have access to that wouldn't be possible with the use of public APIs alone, good developer or not. Prime example: "UIGetScreenImage()". That function allows you to take a screenshot of the device's entire screen, your app, someone else's app, the home screen of iOS. That's a pretty big security hole, is it not?
There are countless examples just like that one hidden inside the private API bubble. Things the OS needs to function, (although the OS may not need that particular example anymore) but could cause massive security issues.
Reminds me of people fussing about getting root on a workstation. Simply getting access to the user's account, without root, will be hugely damaging. Plus you'll likely have root in no time after you get that user account.
And the review process isn't even entirely about stopping the attack. If the malicious code was in the app, when it was submitted for review, you can at least have a trail and can review it later to see how it happened.
If the attack happened with this specific app framework, the bad code could be dynamically loaded into memory and then purged, so you'd never know what happened.
Traditional UNIXoid workstations are quite different. A program running under your user can do anything your user can do. It can access and delete all of your data.
An iOS app can't access or delete any of your data by default. Everything requires explicit permissions granted by the user, and even those are pretty limited. As long as the sandbox functions correctly, a malicious app will never be able to, say, read my financials spreadsheet out of Numbers, or my private texts out of Messages.
I've yet to see any evidence that this process adds security. The review process is extremely shallow (some automated tools are run to scan for private API calls and such, and two non-experts spend a total of about ten minutes with your app), so there's no hope of any sort of useful security audit being done.
You have to trust app developers anyway, since they run native code on your machine. While there are security concerns, these are not the real motivation. Apple is gradually closing down their platform, as many people have predicted in the past. You can also see that in various subtle changes to Gatekeeper and the Sandboxing features.
For me personally, the red line is when unsigned executables can no longer run on macOS. If Apple ever disallows unsigned executables, I will immediately discontinue my application on macOS and redirect customers who rely on it to Apple's customer support.
Time will tell. I think it will really come down to the severity of malware problems of the future.
But I really think we'll just move 100% into bifurcated systems (we're already there with Intel's ME to a large extent) where the place that arbitrary code can run is completely segmented off from trusted code.
That is my personal red line also. But I am 100% in support of them enforcing signed apps for the majority, but it should be something you can turn off for advanced users via firmware/bios. My mom does not need to run unsigned apps she finds on the Internet.
It wouldn't be as bad if their store weren't also the only store available for the platform. Because of this forced monoculture, the criticisms are well within scope.
And then Apple is to blame, and can spend tons of dollahs and man hours to fix problems caused by your "open" alternatives.
This post deserves more attention.
As a security-conscious user, I find live patching awful. Nothing guarantees me that the benign app I've been granting various permissions to doesn't get altered by a fourth-party adversary through coercion or hacking and get wiretapped by a malicious dynamic payload.
One could argue that live patching allowed companies to fix or mitigate security problems faster than Apple's (awful) App Store policy (and timescale) would otherwise allow.
Yet we can say that code review by a third party is better for trust in that code than no code review by a third party.
"Nothing guarantees" may have been too strong, but "the set of attack vectors and their relative efficacy increases" doesn't roll off the tongue quite as nicely.
That's guaranteed, at least ...
Your argument does sound good, but it's a double-edged sword.
I'm wondering which of the current top-downloaded FlappyCrush Of Titans clone got caught exfiltrating all their players contact lists or something...
Sorry for your loss, but I'm glad Apple is doing this and apps will be safer.
"new apps presenting new questions may result in new rules at any time."
Good luck to you, but it's Apple's sandbox and your product appears to thwart the principles that the Apple App Store has been run on for nearly a decade.
No binary code being injected.
A number of other posts talk explicitly of dynamic delivery of native code. If you're sure, this is a genuine question: I'm interested to know how this works. Function pointer swaps are one thing, but how would this allow you to patch bugs in the app? I can see how this could let you change the app's behaviour, even including calling private APIs, but surely this would be constrained to calling pre-existing behaviour?
Man, do I dislike marketing speak.
A "We knew we were non-compliant, but think the security benefits of quick bugfixes outweigh the disadvantages. We will work with Apple to return to compliance." would've been honest, better, and not BS.
"We have always been in compliance with the guidelines, and we are asking Apple and trying to figure out why we're somehow not in compliance" is a fair statement, and not at all BS.
"we've been getting away with it for a long time, so there's that"
Did you see the article on HN last weekend about WiFi routers? "I have a 1.3 Gbps wireless AC router... but only at the PHY layer, but only in an RF test lab, but only if the client is MU-MIMO enabled, but only if they talk on all 4 channels, but only if the signal connects at 100%, but only if your data is 10:1 compressible, but only if you have one client..." Even after five "but only if"s, there was still an unexplained 20% discrepancy between the advertised "speed" and what the device was physically capable of. I'd love to hear their lawyer explain how that's not false advertising.
Other than contacting Apple what can you do to combat this?
That being said, I agree with most of the people here; live patching in my opinion is kind of infringing on the users' freedoms and security.
Back in 2012, it wasn't prohibited by the ToS at all; we read and re-read the ToS over and over again to make sure so that we wouldn't waste our time building something "illegal."
Once I had the third largest social gaming company as a customer, Facebook's lawyers pulled the plug on it right away.
Turns out (according to Archive.org Wayback Machine), they added a new clause to their ToS two days before emailing us about our ToS violation:
"You must not give your secret key and access tokens to another party, unless that party is an agent acting on your behalf as an operator of your application. You are responsible for all activities that occur under your account identifiers."
Moral of the story: If they want to nuke you, they WILL nuke you (I'm sure Facebook wasn't too happy about my database storing millions of users' social graphs on it, and that was the REAL reason for the shutdown).
Even during our YC interview, a couple of the most legit original partners told us on our way (permanently) out the door "yeah, you guys are going to get shut down..."
I can't imagine that any iOS developer who knows the guidelines and how your product works wouldn't have been worried.
We've seen this over and over. The platform risk should be seriously considered. Even AWS has demonstrated recently how dependence can be catastrophic.
"Expo (and the React Native library we use) doesn't do any of that. We also make sure we don't expose ways to dynamically execute native code, such as the dlopen() function that Apple mentioned in that message. We also haven't received any messages from Apple about Expo, nor have we heard of any Expo developers receiving the same." - Exponent Team
Apple's message reads to me that they're concerned about libraries like Rollout and JSPatch, which expose uncontrolled and direct access to native APIs (including private APIs) or enable dynamic loading of native code. Rollout and JSPatch are the only two libraries I've heard to be correlated with the warning.
Technically it is possible for a WebView app or a React Native app also to contain code that exposes uncontrolled access to native APIs. This could happen unintentionally; someone using React Native might also use Rollout. But this isn't something specific to or systemic about React Native nor WebViews anyway.
One nice thing about Expo, which uses React Native, is that we don't expose uncontrolled or dynamic access to native APIs and take care of this issue for you if your project is written only in JS. We do a lot of React Native work and are really involved in the community and haven't heard of anyone using Expo or React Native alone having this issue.
Since they're not, I wouldn't have _too much_ faith in other things not being rejected.
React Native is very much like a WebView except it calls out to one more native API (UIKit) for views and animations instead of using HTML.
What neither WebViews nor React Native do is expose the ability to dynamically call any native method at runtime. You write regular Objective-C methods that are statically analyzed by Xcode and Apple and it's no easier to call unauthorized, private APIs.
With Expo all of the native modules are safe to use and don't expose arbitrary access to native APIs. Apps made with Expo are set up to be good citizens in the React Native and Apple ecosystems.
if (json[@"runThis"]) { [self performSelector:NSSelectorFromString(json[@"runThis"])]; }
...and make sure you don't send a runThis param while the app is in review.
Unfortunately for Apple's app review process, Apple's own Objective-C language and runtime have very strong dynamic reflection capabilities.
At runtime, any API calls made by the app are checked against this file; if a new API call is found, then it must have escaped Apple's code scanning logic. The API call can be rejected and logged for Apple to improve their scanner.
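A sketch of what such a run-time gate could look like (hypothetical names throughout; the manifest would be the output of the review-time scan):

```python
# Review-time scan output: every API call the static scanner could see.
scanned_manifest = {"open_url", "take_photo"}
audit_log = []  # calls that evaded the scan, logged to improve the scanner

def gated_call(obj, selector, *args):
    """Reject any API call that the review-time scan did not record."""
    if selector not in scanned_manifest:
        audit_log.append(selector)  # feed back into the scanner team
        raise PermissionError(f"unscanned API call: {selector}")
    return getattr(obj, selector)(*args)

class SystemAPI:
    def open_url(self, url):
        return f"opening {url}"
    def read_contacts(self):  # never appeared in the review-time scan
        return ["alice", "bob"]

api = SystemAPI()
print(gated_call(api, "open_url", "https://example.com"))  # allowed
try:
    gated_call(api, "read_contacts")  # rejected and logged
except PermissionError as err:
    print(err)
```

The interesting design point is that the gate doesn't need to understand obfuscation at all: whatever string the app conjures up at runtime, it still has to pass through the manifest check at the moment of the call.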
Moreover, a random app cannot extend the SELinux policy of the system.
Your suggestion of enforcing this also makes no sense from a performance or privacy standpoint.
In either case the root issue is about sending secrets like an unscoped API key to the client. "Client secret" is an oxymoron in this context regardless of the programming language.
The API key is actually the least of the hazards, since you can hide that in the keychain. Having source code for your business logic shipping in your app is not good; having it be hackable business logic (by changing the JS in place) is very not good.
Hybrid apps could achieve that same kind of relative business logic security, but at the cost of pushing more and more of the actual business logic behind an API and not in the JS in the app. At that point, the benefits of code sharing (such as they are) get fewer and fewer since it's really pretty easy to write API code in Objective C, Swift, Java or Kotlin.
I'm not aware of any untethered jailbreak for iOS 10.
If an app is modified on the fly to use an undocumented and maybe "forbidden by Apple" method in order to bypass security features or, worse, spy on me, I'm clearly not OK with that.
Do you really think the Apple ecosystem works because clients see the App Store as an evil cage and external developers are all angels with good intentions?
By not allowing developers to patch their app 'on the fly', directly, without going through a new version in the app store (which is a very lengthy process, almost forever in security terms), Apple effectively protects their iOS users from malicious code (not from the dev, which is probably to be trusted if the app is already installed, but from MITM attacks and the likes). However at the same time they deny very hasty security patches, which may compromise the device and all associated data and accounts entirely.
So there's no best of both worlds, but there are undoubtedly many aspects to consider. Anecdotally, I find that it's often better to trust the developers of an app to maintain their own thing, the OS only being a facilitator, provided the user has control (UAC on Windows, permissions on mobile, etc.). [note: obviously you trust the OS vendor to patch said OS; it's just a particular kind of app]
Also, the current policy doesn't prevent app developers from implementing a "kill switch" that prompts the user to update (or wait for an update) at the splash screen and aborts loading the malfunctioning version.
Source: https://github.com/bang590/JSPatch/issues/746 (in Chinese)
The difference is the JS-native bridge. For React Native it is a fixed bridge that can only change with a change in the binary.
For Rollout, they can execute any native code with an update of the JS.
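The "fixed bridge" idea can be sketched in a few lines (Python as pseudocode; the module names are illustrative). JS can only reach methods the native side registered when the binary was built, whereas a Rollout-style bridge would resolve any selector string the JS sends:

```python
class FixedBridge:
    """React Native-style bridge: only explicitly exported methods exist."""

    def __init__(self):
        self._exports = {}  # baked into the binary; a JS update can't grow it

    def register(self, name, fn):
        self._exports[name] = fn

    def call_from_js(self, name, *args):
        if name not in self._exports:
            raise LookupError(f"no such exported native method: {name}")
        return self._exports[name](*args)

bridge = FixedBridge()
bridge.register("vibrate", lambda ms: f"vibrating {ms}ms")

print(bridge.call_from_js("vibrate", 200))   # fine: registered at build time
try:
    bridge.call_from_js("readContacts")      # a pushed JS update can't reach this
except LookupError as err:
    print(err)
```

A Rollout/JSPatch-style bridge would instead do the equivalent of getattr(native_object, any_name_from_js), which reaches every method, private ones included, and that's the behavior Apple's warning targets.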
The warning is not about React Native, it's about exposing uncontrolled access to native APIs including private ones and React Native doesn't do that.
It is all about intent. If an app allows that, it is banned for that behavior, not for having React in its kitchen. But if you have an armed drone in there, then the police have questions.
"Even if the remote resource is not intentionally malicious, it could easily be hijacked via a Man In The Middle (MiTM) attack, which can pose a serious security vulnerability to users of your app."
I'm not even fond of Apple, but I'd rather trust them, and I'm glad they're protecting their users.
 Caveat: I don't know how likely/possible these are to occur on iOS. I assume a sufficiently motivated & misguided developer could do them within their own app's context.
If I'm running an app that includes native code and accesses data from the outside world, then I'm probably trusting that app developer to write C code that doesn't contain arbitrary-code-execution vulnerabilities, which is much, much harder than getting HTTPS right.
Or that the attacker controls, or can coerce, a Certificate Authority in the OS's root list - like, say, just about any nation state...
Most apps, I suspect, are not pinning their TLS certs. And Apple has already gotten into a very public fight with the FBI.
Basically it says: "only we, Apple, can do HTTPS right; you can't, and even if you try, you can easily be MITMed." Which I don't agree with.
What you say is correct, but it's not the argument I'm criticizing. Your point is that they don't trust developers to implement secure loading of code, don't have technical means to control it, and can't or won't check it in review. But that's completely different from "you could be easily hijacked if you're not Apple".
Are _you_ secure against, say, an attacker who works at Verisign and can create a valid cert for api.yourdomain.com? Or an attacker who has a buddy at GoDaddy who can subvert your DNS records so they can trick Let's Encrypt into issuing a valid cert for api.yourdomain.com? Or an Elbonian teenage hacker who's just got your Ashley Madison password from Pastebin, used it to log into your Gmail account, and taken over your DNS registrar account to get themselves a valid SSL cert?
For me? Not really "easily" (though a Wi-Fi Pineapple in a coffee shop where FE devs hang out, attempting to MITM them with the Charles Proxy root CA, would be a fun experiment... which, of course, I'd never do, because that'd be bad, right?)
For someone at the NSA or CIA or Mossad? Sure, it's easy. For someone a little further down the LEO "cyber" chart, like the FBI? Probably not "easy". For a local beat cop or council dog catcher? Nah, definitely not "easy".
For a very-dark-grey pen tester or redteam who're prepared to phish your email password and use it to p0wn your dns registrar? They'd probably call that "easy"... (Hell, I've got a few pentesting friends who'd call that "fun"!)
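One standard mitigation against exactly this class of attacker is the certificate pinning mentioned earlier in the thread: the app ships fingerprints of the certificates (or public keys) it expects, so a cert minted by a rogue or coerced CA fails validation even though it chains to a trusted root. A minimal sketch in TypeScript using Node's `crypto` module; the fingerprints a real app would pin come from its own infrastructure, and these names are illustrative:

```typescript
import { createHash } from "crypto";

// Certificate pinning sketch: rather than accepting any cert a root
// CA will sign, the app only accepts certs whose SHA-256 fingerprint
// matches a value baked into the binary at build time.

function fingerprint(certDer: Buffer): string {
  return createHash("sha256").update(certDer).digest("base64");
}

function isPinned(certDer: Buffer, pinnedFingerprints: Set<string>): boolean {
  return pinnedFingerprints.has(fingerprint(certDer));
}
```

The trade-off, and likely why most apps skip it, is operational: rotating a pinned cert requires shipping an app update, which on iOS means going back through review.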
From April 2016:
> Rollout is aware of the concerns within the community that patching apps outside of the App Store could be a violation of Apple's review guidelines and practices. Rollout notes both on their FAQ site and in a longer blog post that their process is in compliance.
The reason that's allowed is because everything executes within a sandboxed browser environment. No native code is downloaded.
That's the same thing Rollout does. In fact, iOS apps can't download and run native code. The OS won't let you mark pages as executable unless they're appropriately signed, and only Apple has those keys.
But surely you can see the difference between executing limited actions inside a web view, and making available any native method to a web view.
I don't see any fundamental difference here. Both (UI|WK)WebView and JSC allow bridging. Neither one grants full access to JS code automatically, the programmer has to put some effort into it. And even if there is some important difference, neither one is native code which is what I was disputing above.
I worked on an F2P mobile game where we bundled a tiny Lua engine and then pushed various promotion screens as a bundle of resources (images) and Lua code (the screen layout, its preconditions, and what happens after you click; the game provided a small API that the Lua code called).
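That pattern, downloaded content as data plus a deliberately small host API, might look something like this. A hypothetical sketch in TypeScript rather than Lua; every name is invented:

```typescript
// The small, fixed surface the game hands to downloaded code.
// Downloaded screens can only do what this interface exposes.
interface GameApi {
  playerLevel(): number;
  grantCoins(amount: number): void;
}

// A pushed promotion screen: resources plus hooks into the host API.
interface PromoScreen {
  imageUrl: string;
  precondition: (api: GameApi) => boolean; // e.g. "player level >= 5"
  onAccept: (api: GameApi) => void;        // reward wired to the button
}

function maybeShowPromo(screen: PromoScreen, api: GameApi): boolean {
  if (!screen.precondition(api)) return false;
  // ...render screen.imageUrl and wire the accept button to onAccept...
  return true;
}
```

Whether this kind of scripted content counts as "modifying the original features and functionality" is exactly the grey area the thread is arguing about; the narrower the host API, the stronger the case that it doesn't.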
Would you mind elaborating on why Erlang programs tend to have FSMs? LYSE has a chapter on how to use gen_fsm, but I've really been unable to find a great answer as to WHY you would want to use it.
Although those updates never trigger the Android update service, so I'm not sure if they are just downloading more resources or if they are able to request new permissions (I would like to assume not).
Google did recently change the Android permission model; previously, apps had to request all their permissions at install time and it was all-or-nothing (and frankly, hardly anyone bothered to look them over.)
Now, certain permissions have to be requested when they're needed (at least for recent versions of the SDK) and the user can choose to allow or deny. But an app can't grant itself new permissions without going through the official update process.
I always wonder, when seeing an update like that, if there is a 0-day that can bypass it. On a technical level I know I run the same risk with my PC, but at the same time it's more difficult for me to examine processes and startup items on my Android.
The reason this type of "hot code push" is more attractive on iOS is because the app review process is much longer, so publishers look for ways to skirt it. Looks like Apple is just starting to enforce it more.