Apple starts rejecting apps with “hot code push” features (apple.com)
705 points by dylanpyle on Mar 8, 2017 | 467 comments



I'm Erez Rusovsky, the CEO of Rollout.io

Rollout's mission has always been, and will always be, to help developers create and deploy mobile apps quickly and safely. Our current product has been a lifesaver for hundreds of apps by allowing their developers to patch bugs in live apps.

We were surprised by Apple's actions today. From what we've been able to gather, they seem to be rejecting any app which utilizes a mechanism of live patching, not just apps using Rollout.

Rollout has always been compliant with Apple's guidelines as we've detailed in the past here: https://rollout.io/blog/updating-apps-without-app-store/

Our SDK is installed in hundreds of live apps and our customers have fixed thousands of live bugs in their apps.

We are contacting Apple in order to get further clarification on why Rollout doesn't fall under the clause that lets developers push JS to live apps as long as it does not modify the original features and functionality of the app.

I'll post updates as I have them.

Erez Rusovsky CEO Rollout.io


Oh man. You were surprised? Really? Your blog sounds like the PR spin that came out of Aereo, a company that spent an inordinate amount of effort to stay within the absolute letter of a law. Predictably, they got killed by lawsuits, because judges aren't idiots and the law isn't applied so rigidly that intent and context are ignored. Your case is even worse, because you engineered a solution to adhere to the letter of the EULA of a tightly controlled ecosystem run by a very capricious company.

I hate the App Store review process and a lot of Apple policies around the App Store, and I feel for you, and I totally think there should be a less onerous update/review process ... but ... you clearly and blatantly circumvented a core policy, and what happened to you was absolutely predictable.

Get your money back from the lawyer that told you Apple wouldn't shut you down. You got bad advice.


> Your blog sounds like the PR spin that came out of Aereo, a company that spent an inordinate amount of effort to stay within the absolute letter of a law.

Without reading the blog, I just wanted to comment on Aereo: a lot of us think that this was the wrong decision, and not in a facetious or 'cute' way.

To quote Scalia's dissent in the case:

> In a dissent that expressed distaste for Aereo’s business model, Justice Antonin Scalia said that the service had nevertheless identified a loophole in the law. “It is not the role of this court to identify and plug loopholes,” he wrote. “It is the role of good lawyers to identify and exploit them, and the role of Congress to eliminate them if it wishes.”

https://www.nytimes.com/2014/06/26/business/media/supreme-co...


This is obviously getting off topic, but in a common law system that interpretation is just wrong. The law is an evolving thing, it is meant to be interpreted, read, and understood, not to be exploited.


Disagree -- it's not the court's job to even categorize a thing as a loophole or not. It simply applies the law. Some actions will fall inside a prohibition and some outside. Divining the intent of the drafters of the law is something fraught with problems considering the process.

Just one example -- there may have been a group of supporters of the law used against Aereo who only supported it because they realized it contained said 'loophole'. The rule would not have become law without the 'loophole'. Now, how should a court interpret those circumstances?


> It simply applies the law

This is not the case in common law systems, which the US and UK have. Judges discover the law through principles and precedent. Legislation can override this, however. The US Constitution is a good example.


There are several different legal systems in the UK; there are both national differences (as between Scots law and the law of England and Wales) and applicability differences (as between private law resolving disputes between private persons, administrative law resolving disputes between a person and a statutory/governmental body and criminal law wherein the state prosecutes alleged wrongdoers). All of these fit broadly under the term "common law", so that term needs to be disambiguated.

EULAs and TOSes are firmly in private law, and we can take England and Wales as the national setting.

Even here, "judges discover the law through principals and precedent" is inaccurate. First and foremost there is overriding statute. Where Parliament has intervened in matters of private law, Parliament wins; the parties may choose to show that Parliament's intervention does not apply for some reason (e.g. it conflicts with a subsequent intervention by Parliament, or it does not apply strictly in the matter before the court). Judges may act sua sponte, but mostly in private law leave such matters up to the parties to draw to the court's attention. Secondly, there's the plain wording of the contract. Finally there's recourse to covering case law established by higher courts and binding on the court of first instance (e.g. the county court or the High Court).

However, Parliament has caused the Civil Procedure Rules for England and Wales to bind the county courts, and CPR rule 1 is the "overriding objective", which directs judges to be just, taking into account the totality of circumstances and the behaviour of the parties, among other things. The UK Human Rights Act 1998 also requires courts to take into account the rights it brought into force, and this applies to all courts. These two features oblige judges to look past statute (or, more strictly speaking, to do a reading-down as necessary) and the specifics of a contract when assessing liability.

The private law system in England and Wales is (mostly) adversarial with the judges (mostly) paying attention to issues brought up by the parties' advocates. There are specific obligations on the court to act sua sponte as noted, and a court is free to ask questions or consider points not brought up by the parties, and it is also free not to look too deeply into matters of its own volition. This can lead to "judge roulette" to some degree, but the court-appearing legal community in England and Wales is not that large (and it's even smaller in Scotland or Northern Ireland) and good advocates and even good solicitors have some idea of what to expect from a particular judge in terms of case management.

However, I don't think many would agree that judges should "discover the law through principles and precedent". Certainly almost no senior English judges would agree with that idea; indeed, the majority is much more likely to say that the parties should draw to their attention every salient aspect of the dispute so as to reduce the court's workload (in principle to do sufficient work that few disputes really need a hearing or a conclusion other than an out-of-court settlement between the parties).

They "discover the law" mostly by having it brought to the attention by the parties. Except in constructive litigation, the adversarial principle supposedly guarantees that one party cannot wholly misrepresent the law to the judge (unfortunately this is often not the case, especially where one party has much deeper pockets than the other, and even less the case when filings are not even dealt with because the cost of litigation exhausts one party even where that party has a good case that the non-exhausted party is misrepresenting the law).

The law stems from several sources. Depending on the area of practice of private law, statute and secondary legislation may have codified many aspects such that no other source of law is required in most cases, or (as in landlord-tenant law) statute law may be highly scattered across many Acts of Parliament, and additionally almost always involves references to decisions by the Court of Appeal taken to resolve disputes where Parliament has not decided to provide a statutory basis for the resolution. (That's mostly because MPs are terrified of legislating in the area of property law, since it is a daunting task to consolidate hundreds of years of various sources of law into one Act; not-so-jokingly, the Great Repeal Bill proposed as part of the Brexit process will probably be less of an undertaking.)

Scalia's argument is overly idealized and focuses on the legal aspect of the system of justice, to the detriment of the justice side. A system of justice should lead to a finding of liability on wrongdoers, but should hold non-wrongdoers harmless from liability. (Unfortunately there are several aspects of the system of justice in England where that falls down, but at least there aren't many professionals in the justice system who think it should be even less just, finding non-wrongdoers unjustly liable simply because that is what the law says to do.)


You seem to somewhat contradict yourself here. On one hand you speak of overriding parliamentary authority, on the other of Parliament steering away from "complex and various sources of law".

But your focus on advocates bringing the law to the judge's attention seems to support the parent comment: judges "discover" the law. Certainly statute overrides all, but the point of common law is that the statute is always insufficient. It is not enough to deal with the facts of any given case. I don't know much about the UK legal system, but in the US, rulings on statute become codified as "precedent". Important and relevant decisions are published, circulated, cataloged, studied and effectively become the law. Any case that is litigated starts with a series of briefs on what the parties feel is relevant case law. It also might include briefs filed by interested parties, studies of the legislative process to determine intent and so on. That is all very much in the realm of "discovery".


Well, we can run down the rabbit hole of "interpretation", I guess, but there is a reason we have an appeals system and ultimately a final arbiter (the Supreme Court). The reason the justice system is separate from the legislative branch is because laws cannot cover every eventuality, nor would we want them to. A judge can interpret specific facts outside of the political machinations of the legislative branch. Indeed, it could be argued that this is good because it prevents the legislative branch from making laws to deal with specific situations (as a lawyer I once knew said, "Good cases make bad law"). Given the power of lobbyists and issues with earmarking in the legislative branch, I'd say this is a net good. In the case of Aereo, there was enough disagreement and enough room for that discussion that it ultimately had to be decided by the final court.


> Divining the intent of the drafters of the law is something fraught with problems considering the process.

And yet judges talk about the "spirit of law", as distinct from the "letter of the law", all the time.


> And yet judges talk about the "spirit of law", as distinct from the "letter of the law", all the time.

No, they don't; I've read lots of legal decisions, and that phrase or anything like it is rarely invoked. Pundits, not judges, are prone to talk about the spirit of the law as opposed to the letter; judges are more prone to talk about legislative intent (not "spirit of the law"), not distinct from the letter of the law, but as part of the analysis of which of several facially plausible meanings the letter of the law should be given in the context of the specific fact pattern presented in the case they are dealing with.


To some extent I agree with you, but at the same time it is not the purpose of the court to create law. It is their job to interpret. Lawyers read and understand. Evolution of the law (which involves creating new portions of the law to cover previously created portions which are considered lacking) is the responsibility of the legislative branch (in that case, Congress).


But the common law is evolving. That's why we review previous cases and cite precedent. Because we assume the interpretation of the law will change as soon as it comes into contact with facts. There is a point where Congress needs to get involved, but until they choose to do so, the court system is where the law happens. Sometimes that includes evolution, but I suppose it's up to the appeals system to draw that line.


I can agree with this.


Under that reasoning, wiretapping laws and privacy laws should not apply to digital communications, because they were not specifically mentioned.


I've noticed a trend where technology-inclined people take a very strict, autistic approach to the law. They tend to view the law as being analogous to source code in that there is no room for interpretation, intent or spirit behind what's codified.

I think this has manifested at its peak with Ethereum.


Laws are funny, they have a certain duality to them. They can be strict, but also fluid.


Not at all. If the existing law is interpreted by the courts to apply to digital communications, then it does. Congress has the ability to remove interpretations by specification.


So you disagree with your previous statement? That courts can interpret the law, including the intention of it?


My original comment said that courts could interpret law... I'm not sure what you're getting at. Yes, including intention. US courts do it all the time. It's called the Constitution.


I guess I would look at this:

>evolution of the law (which involves creating new portions of the law to cover previously created portions which are considered lacking)

And argue that electronic privacy vis-a-vis wiretapping laws is creating a new portion of the law to cover previously created portions which are considered lacking. We can quibble about definitions, but that strikes me as very much in the area of "evolution".


I think defining "evolution" and "create" are the real sticking points in this argument. That gets down to splitting hairs. Though I will say that I do believe that everyone in this thread does have sound arguments given their definition of those two words.


> it is not the purpose of the court to create law

In common law systems it is precisely their job to do so.


In the United States, creation of law is the responsibility of the Legislative branch. There is no avenue for the Judicial branch to create law.


There's one unifying feature of all common law legal systems - judges will publicly almost always proclaim they do not create law, largely because simple prima facie interpretations of most western constitutions say "the legislature makes the laws, the court enforces them", and the existence of judge made law has always had an uneasy relationship with this.

The reality in Common Law legal systems is nothing like this, and judge made law through interpretation and application of precedent is a very real thing, even in the USA. As a particularly blunt example, in some parts of the UK such as Scotland, the traditional common law crimes such as murder/theft etc aren't even defined in primary legislation ("laws"), and exist solely as judge made and applied creations through decades of precedent. Even where there exists primary legislation, the scope of judicial interpretation gives a great deal of freedom to judges to establish precedents that the drafters might not have foreseen or intended.

Heck even the definition of the term "Common Law" is normally interpreted to mean "Case Law" as developed by judges.

https://en.wikipedia.org/wiki/Common_law


I'm not trying to be condescending, but you should probably include a US-centric example when asserting how the US works. The Scottish example is irrelevant, as it applies to Scotland, not the US. Also, if you try to relate Scotland and the US under the umbrella of the term "Common Law", but then say that that term has its own interpretive meaning, you've loosened the association to the point where you can't strictly say that the Scottish and US systems are the same...

Also, as I've seen in other comments, we're going to get on the merry-go-round of defining "create law".


So it is claimed in civics classes, but that's a rather narrow interpretation of "create law".


I'm not sure what other definition there is...


'globox nailed it upthread.


Don't you think at a certain point, a loophole in a poorly-written law can be too big for the courts to close? When does a judge go from upholding the spirit to assigning new meaning?


Yes, I do. This is why we have a tiered system. The judge interprets the facts of the case and if they apply the law incorrectly, a higher set of judges can overturn.

EDIT: Or Congress can get involved and change the law. Checks and balances.


Then the letter of the law means nothing.


But in a common law system, the law has many letters. The law consists not just of legislation, but of precedent, briefs, circumstances, intentions and so on. You cite previous decisions and congressional hearings and the feelings of interested parties because that all weighs into how the law is read.

You may disagree with this, but the fact remains that the law works like this in the US and UK and has since 1066.


The law is a living breathing organic document. Anybody who says otherwise is living in 1776 with slaves.


Bad example. Slavery was made illegal the proper way, by changing the letter of the law. See the 13th amendment.


Slavery was outlawed through a constitutional amendment because Lincoln (for good reason) was afraid that his Emancipation Proclamation wouldn't hold up after the war.


If the law is very clear, then the court should not need to do any interpretation. In those cases it just applies it.

In the Aereo case, the law was clear. The court was supposed to uphold the law, but didn't.


Um, no. Please no. That's a terrifying thought.


Justice Clarence Thomas wouldn't agree with you.


Considering his decision in Bush v. Gore compared to his other decisions surrounding voting rights and the EPC, I'd say he is not above reproach in the area of consistency. (this could be said of judges on both sides of that case)


A great example of how poor Scalia's judicial reasoning was. It wasn't a 'loophole' and the fact that the justices understood it that way probably shows how out of touch they are. But that Scalia disregarded a key tenet of the need to interpret laws based on the circumstances at hand is ghastly.

FWIW I agree that the court got it wrong, but Scalia's reasoning in supporting Aereo's position is flawed.


The sad thing with Aereo is that even if they had won the Supreme Court decision, Congress would have plugged that loophole immediately. They were never going to win.

Scalia went too far with his dissent. Language is imprecise, and in common law it is always coupled with precedent and intent.


Completely. Disappointed is fine, but surprised? Given that this seems explicitly designed to avoid the need for App Store reviews, this was inevitable.

I don't want anyone pushing code updates to the apps that have been reviewed. Whilst that isn't foolproof, compromising the deployment mechanism with this approach is very scary.


> Oh man. You were surprised? Really?

Exactly!

Apple has always been adamant that they see _all_ code that goes onto devices. Live patching is so bloody obviously against their EULA.


What is "code"? Everybody who has programmed in LISP or Scheme knows that there is no essential distinction between code and data (only many programming languages make it a little hard to see that it is all the same). Thus Apple would have to see not only all code, but also all data that goes onto the devices. But this would imply that Apple disallows all apps that read data from a foreign (i.e. at least not Apple-controlled) server if one does not want to get into a self-contradiction.


Which is why you're not allowed to use a Lisp interpreter or use any method of evaluating data as code. In this model the only thing that data can do is change which code paths run, not what they do.


Changing a code path is the same as changing what they do.


They do allow things like pushing updated JS bundles to react native apps. My guess: RN constrains the surface area of the native API it comes into contact with (e.g. no performSelector, or similar)


That characterization isn't enough to distinguish a Turing complete interpreter from something that trivially manipulates an input datum. An interpreter is just a program containing code paths, which are activated in response to the input (the interpreted code).


> That characterization isn't enough to distinguish a Turing complete interpreter from something that trivially manipulates an input datum. An interpreter is just a program containing code paths, which are activated in response to the input (the interpreted code).

It is surprisingly simple to make an interpreter that is "accidentally" Turing complete (this happens by accident so often that I love to say that if an interpreter is not "obviously" more restricted than a Turing machine, it probably is Turing complete).

This is not just my opinion - there are lots of pages on the internet about things that are "accidentally" Turing complete, for example:

http://beza1e1.tuxen.de/articles/accidentally_turing_complet...

https://www.gwern.net/Turing-complete
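
As a concrete sketch (my own, not taken from those pages): an interpreter whose "data" only selects code paths, with just two instructions, increment and decrement-or-branch, is a Minsky counter machine. Two counters already make it Turing complete; this sketch spends a third on unconditional jumps for readability.

    #include <stdio.h>

    // op 0: increment reg; op 1: if reg is zero jump to target, else decrement
    typedef struct { int op, reg, target; } Insn;

    static void run(const Insn *prog, int len, long reg[]) {
        int pc = 0;
        while (pc < len) {
            Insn i = prog[pc];
            if (i.op == 0)            { reg[i.reg]++; pc++; }
            else if (reg[i.reg] == 0) { pc = i.target;      }
            else                      { reg[i.reg]--; pc++; }
        }
    }

    int main(void) {
        // "Data" that adds register 0 into register 1; register 2 stays 0,
        // so instruction 2 acts as an unconditional jump back to the top.
        const Insn add[] = { {1, 0, 3}, {0, 1, 0}, {1, 2, 0} };
        long reg[3] = {3, 0, 0};
        run(add, 3, reg);          // halts by jumping past the end
        printf("%ld\n", reg[1]);   // 3
        return 0;
    }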


"What is "code"?"

Apple has decided that, and you're not going to get around their policies with a clever rhetorical question.


So what is Apple's decision about what code is?


When it comes to what's run on their platform, yes.


I didn't ask a yes/no question; I asked what Apple's decision was.


> "What is "code"?"

> Apple has decided that, and you're not going to get around their policies with a clever rhetorical question.

Apple cannot change mathematical facts by "decisional" rhetoric.


Apple doesn't need to change mathematical facts; they just don't let you publish on their App Store.


They're not changing anything. They're deciding the rules for their platform.


So if I have an embedded webpage inside my app and that website is updated, do I suddenly violate the EULA? What if it's a social media app that provides users the ability to write custom HTML/CSS/JS to personalize their profiles, and a user writes a game that runs in the header of that profile? What if that game suddenly allows the ability to access copyrighted material?

I just don't understand how Apple is supposed to draw a line here.


Web apps are fine; I believe the issue is that Apple wants to prevent apps from updating their Objective-C code. Anything run by WebKit is fine. From the Rollout page linked above:

> With Rollout’s SDK you can update and modify your Objective-C methods with logic written in JavaScript so we’re good on the first condition.

I think that is the problem with Rollout.


They don't 'see' the code. They run a program on the binary for some obvious checks and do a QA smoke test of the app itself.


They also run a static analysis on the binary to check for, amongst other things, use of private APIs. It is presumably fairly easy for them to detect the presence of third-party SDKs like rollout.io from their binary signature.


You are thinking "source code".

"Code" is another term for what you are referring to as "binary".


Recently Apple added (and actively encourages) the ability for developers to upload LLVM bitcode to the App Store instead of ARM binaries, so Apple can more easily recompile for new architectures and optimisations. Of course, bitcode is considerably easier to revert into readable source code (especially as Swift/Objective-C retain (some) symbol names in compiled output), so it's not outside the realm of possibility that an unscrupulous Apple team is disassembling cool apps to see how they work and then re-implementing them for the next release of iOS.


Can you identify a type of app for which reverse engineering it would be easier than writing their own? Software is usually easier to write than to read. If an app has such a magic secret sauce, and it's of value, then it should be protected by patent or copyright anyway.

An example that comes to mind is a high speed image compression app for taking rapid sequences of photos. Apple bought the company or the rights so they could include it themselves.


In my experience, it's always been easier for me to implement something once I've seen a working example of it. That's basically what examples are for: A "cheat sheet" for reverse engineers.

Software is only easier to write than read if you have an idea what it's supposed to do. If you've ever googled "how do I do X?", then you likely have reverse engineered the answer you found to fit your particular use case.

In addition, and in some countries, you can't patent software (thankfully), and so innovation comes through reverse engineering naturally.


And how many of those working code examples came from decompiled code?


I really don't think Apple, with its war chest, is actively disassembling code to steal it. As has been demonstrated time and time again, they will just buy companies that have awesome tech and IP. Far easier.


Or they can just "sherlock" them. Happened several times.


No. Apple has plenty of experience doing that themselves, without that. There's a reason the term "Sherlocking" exists.


Unless you are Facebook or Google. Then it's fine and you get a free pass.


To some extent yes but often no.

Apple has close relationships with those companies, so it's often a case of them reaching out to the developers rather than just blindly rejecting the app.

But any idea that Apple would allow them to run ruff shot over the platform and do whatever they wanted is a bit ridiculous.


> ruff shot

"Rough shod", before we get another mondegreen propagating across the internet.


As a case of point - as a matter in fact For all intensive purposes this article peaked my interest because by in large it addressed a deep ceded issue with app updates


Thank you! As a fellow "Correct Idiom Usage Nazi" that made my day. Have an upvote :)


Holy crap, it's `deep ceded` not `deep seated`?


No - the comment was full of intentional errors. "matter in fact" "For all intensive purposes" "peaked my interest" "by in large" "deep ceded"


No, the reply was filled with eggcorns / mondegreens, i.e. misheard idioms.


Heh, nice! I'm a big fan of eggcorns[1] and your post is positively teaming with them!

[1]: http://eggcorns.lascribe.net


To be more precise, the idiom is typically “ride roughshod over” rather than “run ...”, and roughshod is typically written as one word.

Roughshod means the horseshoes have their nails sticking out the bottom to help prevent slipping, so you can imagine trampling someone with those could be painful.


Horses are heavy. Being trampled by one is going to be injurious or lethal regardless of whether or how it is shod. (Most horses will go far out of their way to avoid trampling a human, though; cavalry horses had to be carefully trained into it. Treading deliberately on one's foot is another matter, but, like some humans, some horses are just assholes.)

The idiom refers more to what a roughshod horse will do to a road or trail surface; the nailheads dig in and scatter surface material every which way, leaving behind a hell of a mess that'll turn to deep slush or sticky mud, depending on the temperature, with the next precipitation.


I always thought they were Eggcorns.

http://eggcorns.lascribe.net/


Except for the time when Facebook did it (and still does). They use private apis to monitor user activity even while the app isn't running and collect all sorts of data that others don't have access to like wi-fi SSID and device MAC address. But what's Apple going to do - not have Facebook on iOS?


Err, as far as I know collecting the SSID is a public API.


CNCopyCurrentNetworkInfo gives any app network info, including the SSID.
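
For reference, the call looks roughly like this (a sketch; what it actually returns has varied across iOS versions, as discussed below):

    #import <Foundation/Foundation.h>
    #import <SystemConfiguration/CaptiveNetwork.h>

    // Enumerate Wi-Fi interfaces and log the SSID of the current network.
    static void LogCurrentSSID(void) {
        NSArray *interfaces = (__bridge_transfer NSArray *)CNCopySupportedInterfaces();
        for (NSString *name in interfaces) {
            NSDictionary *info = (__bridge_transfer NSDictionary *)
                CNCopyCurrentNetworkInfo((__bridge CFStringRef)name);
            NSLog(@"SSID: %@", info[(__bridge NSString *)kCNNetworkInfoKeySSID]);
        }
    }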


Does this still work? A cursory googling says it was deprecated in iOS 9 betas but may have been re-enabled?


Deprecated is not removed. Apple is usually very conscientious about deprecating things, then waiting a while before removing them. In some cases they have gone to great lengths... for example, the now 5-year deprecation of OpenSSL: first they marked it deprecated, which generates a compiler warning, then after a few years they removed the headers from the MacOS SDK so you couldn't compile new software, but left the binary in place so that old software would continue to work. The next step will probably be to remove that binary, something I would expect in MacOS 10.13 (sometime this year) or 10.14 (presumably next year).


Poor choice of words then, it was deprecated and removed over the course of iOS 9 betas, with the plan to make a new system only available to "captive network apps" that could declare a specific list of SSIDs that they manage. Appears they backtracked though? I didn't see any recent info.


It's not ridiculous; they have been doing it for a long time now. E.g.:

Apple uses private APIs (http://sourcedna.com/blog/20151018/ios-apps-using-private-ap...) to build some of their software and reject apps doing the same, effectively killing competition.

But Google and Facebook use them because they want to create products that can compete with Apple's features. E.g.: https://daringfireball.net/2008/11/google_mobile_uses_privat...

Yet they are not rejected, because they are "big enough".


I've never understood why developers criticize OS vendors' use of "private APIs". I would go so far as to say there is no such thing as a "private API". An API is a vendor's promise to consuming applications that when they call a method, a certain behavior will happen. Whatever the vendor does behind the scenes is an implementation detail that they should be allowed to change at any time without breaking consuming applications.

Apple often uses immature frameworks internally - like the extensions framework - to polish them or to dogfood them before making them official.


f.lux strikes me as a good example. f.lux came to the market first with its idea to control screen temperature. Apple decides "No, you're not allowed to do that...but that's a great idea!", kicks f.lux out of the App Store, and then adds its own Night Shift into later versions of iOS, using APIs only Apple is allowed to access.


I love f.lux and have a soft spot for the very nice developer couple behind the app.

That being said, it's hard to argue that Apple (or Android) shouldn't be able to set boundaries on behaviors which are only allowed to be done by the OS as opposed to an app. Apple's tight control of device screen characteristics makes it pretty understandable that they don't want one app able to control how another app looks on the screen.

The optics of the f.lux situation are just really, really bad. But considering that f.lux never really charged, they have a claim to fame that few can match: creating a feature good enough that Apple incorporated it into both iOS and MacOS (now in beta).


> It's hard to argue that Apple (or Android) shouldn't be able to set boundaries on behaviors which are only allowed to be done by the OS as opposed to an app

It's really not. The argument for user freedoms is almost as old as software.


The delicate balance between throwing a user a rope, and throwing them enough rope to hang themselves...


I'm guessing, from Apple's perspective, things that overlay the whole screen and alter the appearance of other people's apps (such as applying a colour cast), are essentially "white hat phishing". It makes security sense to hide this capability in the OS and not in apps.


You can see, I trust, how this could lead down a monopolistic slippery slope. For instance, virus-scanning is a dangerous enterprise, given that it exposes a greater attack surface if the antivirus program is poorly written. Should Apple and Microsoft remove the ability for third-party antivirus apps to exist? How about third-party firewalls?


I don't see that, actually, I think that's a false equivalence.

Security premise: when you are looking at Facebook, you are looking at Facebook. You are not looking at a third party app drawing over Facebook and pretending to be Facebook.

I do not see the above as a slippery slope. Phishing is a capability apps should not have. Even if they have the best of intentions.


> Should Apple and Microsoft remove the ability for third-party antivirus apps to exist?

Please?

...pretty please?


Well, yes. Unless the AV is designed in a way that shows it doesn't increase the risk, it's just snake oil.

If MS had taken a harder line then at least hundreds of millions of people would have had faster computers... And arguably safer ones. But it would be hypocrisy for MS, given they gave us IE, ActiveX, DLLs, VB macros, etc.

Most third party firewalls are just GUIs using the OS API for filtering, not parsers written in C running in the kernel.


There aren't third-party antivirus apps on iOS....


And when Apple changes the private, undocumented API that allows the functionality, who should get blamed? Who do you think users will blame when previously working apps that used an undocumented function break with a new OS?


> why developers criticize OS vendors' use of "private APIs"

The point is not that these APIs exist; the problem is when vendors actively block others from using them, with hacks and/or policy bans. That's extremely hypocritical and anti-competitive. I can see why unofficial APIs must be discouraged (because let's be honest, developers will bitch and moan when they change -- Microsoft in particular was strong-armed into legacy support for decades by the likes of Adobe and Symantec), but it should never be an excuse to ostracize or tilt the playing field.


It's not about marketing, economics or other MBA-fueled non-technical ideas. It's about a software vendor saying: this is our platform, here are APIs for you to use, don't go outside them.

It is perfectly reasonable, and so far, any other interpretation seems to be a skewed view to facilitate some sort of non-compliant piece of software.


And the minute scores of applications break that depend on a third-party library that uses an undocumented method, the OS vendor gets blamed for releasing a "buggy OS", or they have to keep buggy workarounds in their code forever like MS does.


Of course Apple use private APIs... if they didn't, then these APIs would have no reason to exist in the first place.


Do you have examples more recent than 2008?


Those are not easy to find, it's not really something they advertise and you need somebody to publicly catch them.

The only reason I know they do is that some of my friends working on mobile video games regularly complain they can't get some features because those features are private, while Google and Facebook get to use them. They analyzed some apps to try to copy said features and realized the unfairness of their situation.

Those are lunch chit-chats, not hard facts. But they have little reason to lie.


I'm not sure if it's still the case, but I don't think you could record iPad screens at the start, though Apple demonstrated it as a possibility during their live demos.

I'm not sure if Apple made it available to other companies privately though.


My impression of the Aereo decision was that it was based on the letter of the law. Contrary to opinions often expressed here, the law wording does not specifically apply to cable companies and specific wording was not creatively interpreted to apply to Aereo. The wording of the law referred not to antennas and cables but to a more abstract notion of "public performances" of copyrighted works, and Aereo fell squarely into what Congress (and legal precedent) meant by public performances of copyrighted works. The law was actually fairly well written to cover evolving technology.


Except that it's been established that 'cloud DVR' is legal. That's 90% of what Aereo did. But somehow attaching an individual antenna to each person's DVR makes it 'public performance'? That specific argument is nonsensical.

And then they got double screwed because the US copyright office declared that no matter what the supreme court said they were not a cable company and couldn't get compulsory licensing either.

As far as I can tell it's legal to run one antenna for one person, and I have absolutely no idea where the line is that you start violating copyright. I don't think the guidelines are well written.


Yeah, this seems crazy that anyone would build a business on this.

For those curious about their justification:

https://rollout.io/blog/updating-apps-without-app-store/


No, what you end up doing is effectively destroying the security protections Apple puts in place to protect the user from unknown/bad code from running on their device. Apple signs apps for a reason -- now we have to trust you to deliver that code safely to the user without being manipulated in transit. I also have to trust that you will respect my privacy. And I don't.

It also seems clearly against their EULA, so you only have yourself to blame for this.

Apple's rules can be harsh but I would rarely call them arbitrary. There is a very good security reason for Apple's stance here. And ultimately it's their store and their rules.


Security on iOS comes from the sandboxing that all apps run in. Apple's review process is really quick and adds approximately nothing in terms of security. Apps running in the sandbox should be safe no matter how evil they are, and if they can break out, the proper solution is to fix the sandbox.


That seems like a strange way to put it. Apps can still do all kinds of nasty things inside their sandbox, for example calling private APIs that are now supposed to be caught in the review process, but also less obvious things like hot-patching strings (e.g. URLs) in the binary, sabotaging the device by deliberately hogging the CPU, playing sounds through the speaker, popping up fake prompts for phishing, etc.

I agree that the review process itself does little for security, but surely you don't want to allow applications to pull in unchecked native code over the network, right?


What's so bad about calling private APIs? I get why Apple doesn't want it, but as a user I don't care.

The sandbox prevents apps from pulling in native code over the network. The OS won't allow pages to be marked as executable unless the code is signed by Apple.
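
A sketch of what that enforcement looks like from inside a stock third-party app (which has no JIT entitlement; only Apple's own processes get one):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    static void try_making_code(void) {
        size_t len = (size_t)getpagesize();
        void *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_ANON | MAP_PRIVATE, -1, 0);
        // Flipping a writable page to executable requires dynamically
        // signed code, which third-party iOS apps can't produce.
        if (mprotect(page, len, PROT_READ | PROT_EXEC) != 0)
            perror("mprotect");  // expected on iOS: Permission denied
    }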


Because a private API could give out details about you that you don't want shared with a random application or 3rd-party advertising/analytics platform.

For example serial numbers, user ids, lists of installed applications, etc.


If a private API is a privacy or security concern then the sandbox needs to block it.

Apple blocks private APIs because they don't want to maintain their compatibility across OS releases and don't want third party apps to break when those APIs change.

Edit: I'm starting to suspect that people don't know what "private API" means, so I want to lay it out real quick. Apple ships a bunch of dynamic libraries with the OS that apps can link against and call into. Those libraries contain functions, global variables, classes, methods, etc. Some of those are published and documented and are intended for third parties to use. Some are unpublished, undocumented, and intended only for internal use.

The difference is documentation and support. The machine doesn't know or care what's public and what's private. There's no security boundary between the two. Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it. The only way Apple can check for private API usage is to have a big list of all the private APIs in their libraries and scan the app looking for calls to them. This is fundamentally impossible to do with certainty, because there's an unlimited number of ways to obfuscate such calls.

Functionality that needs to be restricted due to privacy or security concerns has to be implemented in a completely separate process with requests from apps being made over some IPC mechanism. This is the only way to reliably gate access.

Apple's prohibition against using private APIs is like an "employees only" sign on an unlocked door in a store. It serves a purpose, but that purpose is to help keep well-meaning but clueless customers away from an area where they might get confused, or lost, or hurt. It won't do anything for your store's security.
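
For example, this hypothetical snippet (the method name and target object are made up) sails past a scan for private-API strings, because the full selector only exists at run time:

    #import <Foundation/Foundation.h>

    static void CallHidden(id target) {
        // The scanner never sees the selector name in one piece.
        NSString *name = [@"_some" stringByAppendingString:@"PrivateMethod"];
        SEL sel = NSSelectorFromString(name);
        if ([target respondsToSelector:sel]) {
    #pragma clang diagnostic push
    #pragma clang diagnostic ignored "-Warc-performSelector-leaks"
            [target performSelector:sel];
    #pragma clang diagnostic pop
        }
    }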


Mike, this line is completely and 100% inaccurate:

"Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it."

There are a million things under the sun that private APIs have access to that wouldn't be possible with the use of public APIs alone, good developer or not. Prime example: "UIGetScreenImage()". That function allows you to take a screenshot of the device's entire screen, your app, someone else's app, the home screen of iOS. That's a pretty big security hole, is it not?

There are countless examples just like that one hidden inside the private API bubble. Things the OS needs to function, (although the OS may not need that particular example anymore) but could cause massive security issues.


It could be argued that because a private API has no guarantees against change, using one could lead to apps breaking after OS updates more frequently, which would annoy me as a user (whether or not I knew what was causing the crash).


I wasn't even talking about breaking out of the sandbox. Also, at the most basic level, simply having a trusted and signed delivery process of binaries does add some security. Nobody here is saying it will prevent a compromise, but since when is security viewed like this? It's about layers of protection.

Reminds me of people fussing about getting root on a workstation. Simply getting access to the user's account, without root, will be hugely damaging. Plus you'll likely have root in no time after you get that user account.

And the review process isn't even entirely about stopping the attack. If the malicious code was in the app, when it was submitted for review, you can at least have a trail and can review it later to see how it happened.

If the attack happened with this specific app framework, the bad code could be dynamically loaded into memory and then purged, so you'd never know what happened.


If you don't break out of the sandbox then you can't access anything interesting.

Traditional UNIXoid workstations are quite different. A program running under your user can do anything your user can do. It can access and delete all of your data.

An iOS app can't access or delete any of your data by default. Everything requires explicit permissions granted by the user, and even those are pretty limited. As long as the sandbox functions correctly, a malicious app will never be able to, say, read my financials spreadsheet out of Numbers, or my private texts out of Messages.

I've yet to see any evidence that this process adds security. The review process is extremely shallow (some automated tools are run to scan for private API calls and such, and two non-experts spend a total of about ten minutes with your app), so there's no hope of any sort of useful security audit being done.


> now we have to trust you to deliver that code safely to the user without being manipulated in transit.

You have to trust app developers anyway, since they run native code on your machine. While there are security concerns, these are not the real motivation. Apple is gradually closing down their platform, as many people have predicted in the past. You can also see that in various subtle changes to Gatekeeper and the Sandboxing features.

For me personally, the red line is when unsigned executables can no longer run on MacOS. If Apple ever disallows unsigned executables, I will immediately discontinue my application on MacOS and redirect customers who rely on it to Apple's customer support.


MS is ahead of Apple in this race to security via taking back control from the end user. I'm with you in theory, but really doubt that on either platform there won't be at least a dev-only way of running arbitrary unsigned apps.

Time will tell. I think it will really come down to the severity of malware problems of the future.

But I really think we'll just move 100% into bifurcated systems (we're already there with Intel's ME to a large extent) where the place that arbitrary code can run is completely segmented off from trusted code.


Yes, you have to trust the app developer. And Apple is acting as a check/oversight on that relationship, too. Whether it's of use or not, that is really another discussion.

That is my personal red line also. But I am 100% in support of them enforcing signed apps for the majority, but it should be something you can turn off for advanced users via firmware/bios. My mom does not need to run unsigned apps she finds on the Internet.


Especially with relation to iOS, you can't really say they're closing the platform further when at the most recent developer conference they opened a ton of APIs (Siri, Maps, iMessage to name a few).


> And ultimately it's their store and their rules.

It wouldn't be as bad if their store weren't also the only store available for the platform. Because of this forced monoculture, the criticisms are well within scope.


Criticisms of the rule are valid. Criticisms of Apple for enforcing existing rules are counter-productive. It's to everyone's benefit that all devs play by the same rules.


Yeah, so we can have one store with strict policies that protect users and the value of the hardware; and a bunch of other shit stores that offer apps that can hijack our devices. That makes sense!

And then Apple is to blame, and can spend tons of dollahs and man hours to fix problems caused by your "open" alternatives.


Oh yeah because that's what Android have been spending all their money on...


Remember a few years back when Windows was trying to promote their mobile app store and they were paying people to write shit apps just to have them hosted in the store? Or Android, where (at least they used to) you'd find fake apps that tell you how to "get" the real app?


iOS has as many stores as Android; all you need to do is jailbreak.


Sorry to be OT, but since you're the CEO I do hope you found out if Rollout supports swift as well :^)

https://news.ycombinator.com/item?id=8158046


Yep, and this one: "Great stuff man, I wonder how many apps would suffer from problems when trying to access directly Amazon s3, also how many app updates would get pushed just to update plist"

https://news.ycombinator.com/item?id=10151755


Ouch.

This post deserves more attention.


I'm confused. Did he forget to post that under another account, or was he one of us, a lowly HN lurker who applied and got the CEO position (by self-selection)?


Well, considering he's the co-founder and he's been working there for more than 3 years, and the comment was left less than three years ago...

https://www.linkedin.com/in/erez-rusovsky-3a458850


Hah, neat


bewildered ๏_๏


> We are contacting Apple in order to get further clarification on why Rollout doesn't fall under the clause that lets developers push JS to live apps as long as it does not modify the original features and functionality of the app.

As a security-conscious user, I find live patching awful. Nothing guarantees me that the benign app I've been granting various permissions to doesn't get altered by a fourth-party adversary, through coercion or hacking, and wiretapped by a malicious dynamic payload.


Nothing guarantees that. There have been RCE exploits on iOS.

One could argue that live patching allowed companies to fix or mitigate security problems faster than Apple's (awful) app store policy (and timescale) would otherwise allow.


Nothing guarantees nothing. Life is ephemeral and we're all going to die.

Yet, we can say that code review by a third party is better for trust of that code than no code review by a third party.

"Nothing guarantees" may have been strong. but "the set of attack vectors and their relative efficacy increases " doesn't roll off the tongue quite as nicely.


Replace "code review" with automated static analysis and a 5 minute run through of the app and you are spot on.


> we're all going to die

That's guaranteed, at least ...


Unless "The Singularity" (and subsequent mind-uploading) actually pans out.


That only delays the inevitable.


[Till the sun runs down][1]

[1]: http://multivax.com/last_question.html


A large number of apps will become abandoned apps at some point. And if one of those relies on code from a third party that has now turned malicious?

Your argument does sound good, but it's a double-edged sword.


My guess is that somewhere in the giant dump of CIA malware there is an exploit that uses this to hijack an iPhone. They are pretty explicit about what they don't like and how it would be exploited.


I suspect that unless they got advance notice from WikiLeaks, this reaction is too soon.

I'm wondering which of the current top-downloaded FlappyCrush Of Titans clone got caught exfiltrating all their players contact lists or something...


I'm wondering if one route to preventing this being an issue is to prevent any hot code fixes to specific devices. If you have to hot code push to all devices running a specific version, I reckon that would put a damper on the actions of an institution trying to target a specific person. It's a lot harder to try and be sneaky when a change has such a large impact. That said, I'm not sure this restriction alone would be enough.


An interesting theory but I kind of doubt the time between the release and this is enough for them to have identified these exploits.


I agree it's a long shot. But perhaps they were on the fence about it and the CIA dump pushed them over the edge. Sadly there is no actual way to know for sure.


They don't like you changing the stated functionality of the app. You're making it sound like an iPhone RCE automatically means jailbreak.


If only. You are using a technical hack to modify _native_ apps. The Apple Guidelines aren't strictly set in stone; the intent is clear: you can't remotely modify how native apps work, even if you do it through a JS delivery mechanism.

Sorry for your loss, but I'm glad Apple is doing this and apps will be safer.


I think the part that you're running afoul of is where it says:

"new apps presenting new questions may result in new rules at any time."

Good luck to you, but it's Apple's sandbox and your product appears to thwart the principles that the Apple App Store has been run on for nearly a decade.


You were relying on a huge loophole. The code runs inside JavaScriptCore, but it injects native code into the app.


An ObjC swizzle is not native code injection; it's a function pointer swap. They swizzle the method to their general ObjC message handler, which then executes a piece of JavaScript code.

For Swift, they basically patch the app before it gets compiled so that every function, if it meets the conditional, would execute their JavaScript code handler instead.

No binary code being injected.
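
For the curious, the bare mechanism looks like this (a hedged sketch of a plain swizzle, not Rollout's actual code):

    #import <UIKit/UIKit.h>
    #import <objc/runtime.h>

    @implementation UIViewController (SwizzleDemo)

    + (void)load {
        // Swap the IMPs (plain C function pointers) of two existing methods.
        Method orig = class_getInstanceMethod(self, @selector(viewDidLoad));
        Method repl = class_getInstanceMethod(self, @selector(demo_viewDidLoad));
        method_exchangeImplementations(orig, repl);
    }

    - (void)demo_viewDidLoad {
        [self demo_viewDidLoad];  // after the swap, this calls the original
        // ...a hot-patch SDK would evaluate its downloaded JavaScript here
    }

    @end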


> No binary code being injected.

A number of other posts talk explicitly of dynamic delivery of native code. If you're sure, it's a genuine question: I'm interested to know how this works. Function pointer swaps are one thing, but how would this allow you to patch bugs in the app? I can see how this could let you change the app's behaviour, even including calling private APIs, but surely this would be constrained to calling pre-existing behaviour?

Or by "adding new behaviour" is this meant to mean new JavaScript behaviour?


I think they are confused by the downloading of JavaScript files and executing that inside a 'native context'. I looked at how Rollout did their stuff in detail a while back, so I can see how it's easy to confuse the two.


Swizzling is incredibly useful. AFNetworking, MagicalRecord and GPGMail use it, just to name a few.


That sounds like a huge hack. They built a company around that?


Built a company with $3 million in funding: https://www.crunchbase.com/organization/rollout-io-2#/entity


lol


"We are in full compliance. Everything is fine. The house is not on fire. The heat you are feeling is coincidential."

Man, do I dislike marketing speak. A "We knew we were non-compliant, but think the security benefits of quick bugfixes outweigh the disadvantages. We will work with Apple to return to compliance." would've been honest, better and not BS.


True, but in this case "return to compliance" means "scrapping the company", because its key product depends on the non-compliant behavior and is impossible to implement otherwise.


It more meant "don't build a company on shaky ground," which Rollout clearly did. The only reason to be so specific about not breaking the rules is when you know you are breaking the spirit of the rules. The blog post he linked to is a year old - they're lucky to have survived this long.


You say that as if Apple hasn't repeatedly shut down apps that were in compliance. Sometimes changing the rules after, sometimes not even doing that.

"We have always been in compliance with the guidelines, and we are asking apple and trying to figure out why we're somehow not in compliance" is a fair statement, and not at all BS.


don't forget to change their lines about "hundreds of apps use our system and none have ever been rejected by Apple"

to: "we've been getting away with it for a long time, so there's that"


> Man, do I dislike marketing speak

Did you see the article on HN last weekend about Wi-Fi routers? "I have a 1.3Gbps wireless AC router...but only at the PHY layer, but only in an RF test lab, but only if the client is MU-MIMO enabled, but only if they talk on all 4 channels, but only if the signal connects at 100%, but only if your data is 10:1 compressible, but only if you have one client." Even after like 5 "but only if"s, there was still an unexplained 20% discrepancy between the advertised "speed" and what the device was physically capable of. I'd love to hear their lawyer explain how that's not false advertising.


Don't worry, they actually wrote "up to 1.3Gbps" on the box. You just haven't read it closely enough.


Your live patching allows you to call arbitrary native methods - this is even demonstrated in your video - of course this was going to get banned!


It sucks that the success of your business sounds dependent on the policies of the company in charge of the app store.

Other than contacting Apple what can you do to combat this?

That being said, I agree with most of the people here; live patching in my opinion is kind of infringing on the users' freedoms and security.


I learned this lesson the hard way a long time ago when I built a service that uses ML (I was doing GPU powered ML in 2011) and social graph clustering to recommend "better" Facebook friends to invite in apps that use the FB SDK. They would send our API their users' FB access tokens (only required the default FB user permissions, too, for mutual friends), we'd issue calls to the FB SDK to get their social graph (completely on the issuing app's behalf), crunch it on our GPUs, and send back a sorted list of recommended friends to suggest to invite for improving virality.

Back in 2012, it wasn't prohibited by the ToS at all; we read and re-read the ToS over and over again to make sure so that we wouldn't waste our time building something "illegal."

Once I had the third largest social gaming company as a customer, Facebook's lawyers pulled the plug on it right away.

Turns out (according to Archive.org Wayback Machine), they added a new clause to their ToS two days before emailing us about our ToS violation:

"You must not give your secret key and access tokens to another party, unless that party is an agent acting on your behalf as an operator of your application. You are responsible for all activities that occur under your account identifiers."

Moral of the story: If they want to nuke you, they WILL nuke you (I'm sure Facebook wasn't too happy about my database storing millions of users' social graphs on it, and that was the REAL reason for the shutdown).

Even during our YC interview, a couple of the most legit original partners told us on our way (permanently) out the door "yeah, you guys are going to get shut down..."


I'm not an iOS developer, but even I know enough about Apple's rules to know that they would frown on any code that has the ability to patch itself without going through app review, unless it used the built-in JavaScript engine and/or was a web view.

I can't imagine an iOS developer who knows the guidelines and how your product works not being worried.


Build a business around a third-party marketplace, get surprised when they cut you off without warning.

We've seen this over and over. Platform risk should be seriously considered. Even AWS recently demonstrated how catastrophic that dependence can be.



Making a business that wholly depends on the decisions of another business is not a business you want to be in the business of operating, because situations like this arise and can immediately shut you down.


"Hi there -- I believe that title isn't quite accurate; Apple specifically is referring to behavior of a library called Rollout which lets people dynamically inject Objective-C/Swift. They are doing hot delivery of native, Objective-C code. It's really not about React Native nor Expo.

Expo (and the React Native library we use) doesn't do any of that. We also make sure we don't expose ways to dynamically execute native code such as the dlopen() function that Apple mentioned in that message. We also haven't received any messages from Apple about Expo, nor have we heard of any Expo developers receiving the same." - Exponent Team


Hi, I work on Expo (YC S16) and also am a core contributor to React Native.

Apple's message reads to me like they're concerned about libraries such as Rollout and JSPatch, which expose uncontrolled, direct access to native APIs (including private APIs) or enable dynamic loading of native code. Rollout and JSPatch are the only two libraries I've heard of being correlated with the warning.

React Native is different from those libraries because it doesn't expose uncontrolled access to native APIs at runtime. Instead, the developer writes native modules that define some functions the app can call from JavaScript, like setting a timer or playing a sound. This is the same strategy that "hybrid" apps that use a UIWebView/WKWebView have been using for many years. From a technical perspective, React Native is basically a hybrid app except that it calls into more UI APIs.
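
For illustration, a minimal sketch of such a native module, using React Native's real RCT_EXPORT macros (the module and method here are made up):

    #import <React/RCTBridgeModule.h>

    // JS can call only the specific methods exported here; it gets no
    // general-purpose access to the Objective-C runtime.
    @interface SoundPlayer : NSObject <RCTBridgeModule>
    @end

    @implementation SoundPlayer
    RCT_EXPORT_MODULE();

    // Statically compiled and analyzable, like any other Objective-C.
    RCT_EXPORT_METHOD(playSound:(NSString *)name)
    {
      // look up the bundled sound by name and play it with a normal
      // native API -- nothing here is decided by downloaded code
    }
    @end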

Technically, it is possible for a WebView app or a React Native app to also contain code that exposes uncontrolled access to native APIs. This could happen unintentionally; someone using React Native might also use Rollout. But this isn't something specific to, or systemic about, React Native or WebViews.

One nice thing about Expo, which uses React Native, is that we don't expose uncontrolled or dynamic access to native APIs and take care of this issue for you if your project is written only in JS. We do a lot of React Native work and are really involved in the community and haven't heard of anyone using Expo or React Native alone having this issue.


I do wonder, if they're _really_ fine with that, why they're not fine with browsers with different rendering engines on iOS.

Since they're not, I wouldn't have _too much_ faith in other things not being rejected.


Apple has been fine with WebViews in apps since the beginning of the App Store, including WebViews that make calls to native code, like in the Quip app. WKWebView even has APIs for native and web code to communicate. It's OK for a WebView to call out to your native code that saves data to disk, registers for push notifications, plays a sound, and so on.

React Native is very much like a WebView except it calls out to one more native API (UIKit) for views and animations instead of using HTML.

What neither WebViews nor React Native do is expose the ability to dynamically call any native method at runtime. You write regular Objective-C methods that are statically analyzed by Xcode and Apple, so it's no easier to call unauthorized, private APIs.

With Expo all of the native modules are safe to use and don't expose arbitrary access to native APIs. Apps made with Expo are set up to be good citizens in the React Native and Apple ecosystems.


Well, let's be clear here. I use React Native, and it would take me about 30m to write a bridged React Native method that could execute ObjC code dynamically, including accessing all the private APIs you could want.
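
For illustration, a bridged method of roughly this shape would do it (hypothetical, sitting inside a native module like the one sketched upthread; this is exactly the kind of thing Apple's warning targets):

    // Exposes the whole Objective-C runtime to whatever JS the app
    // downloads -- public and private APIs alike.
    RCT_EXPORT_METHOD(invoke:(NSString *)className
                    selector:(NSString *)selectorName)
    {
      Class cls = NSClassFromString(className);
      SEL sel = NSSelectorFromString(selectorName);
      if (cls && sel) {
        #pragma clang diagnostic push
        #pragma clang diagnostic ignored "-Warc-performSelector-leaks"
        [cls performSelector:sel];
        #pragma clang diagnostic pop
      }
    }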


Yeah. Objective-C is definitely really flexible. From my reading of the warning, the important thing is not to execute Objective-C dynamically or access private APIs regardless of whether you're using React Native, Cordova, a WebView, or even a bare iOS app.



Simple: JIT compilers are banned and so that excludes any modern browser's JavaScript implementation from iOS. But anyone using Apple's JavaScriptCore has nothing to fear.
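
For reference, a minimal JavaScriptCore sketch (this is the real API; third-party apps get an in-process interpreter with no JIT):

    #import <JavaScriptCore/JavaScriptCore.h>

    JSContext *ctx = [[JSContext alloc] init];
    JSValue *result = [ctx evaluateScript:@"6 * 7"];
    NSLog(@"%d", result.toInt32); // 42 -- interpreted, no JIT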


The rules explicitly forbid any HTML renderer or JS interpreter aside from WebKit, JIT or no JIT. I believe all the popular third-party browsers today still use the non-JIT UIWebView rather than WKWebView, because the former gives you more control over the request cycle.



Chrome uses WKWebView on iOS, so it's basically Safari with a different UI (so does Firefox on iOS).


Yes, I know. I was saying the OP of this thread was wrong because it's not UIWebView, it's WKWebView with a JIT :).


WKWebView has a JIT. It executes the JS in a different process than the host app; that's where the JIT lives, and this special process is whitelisted to do JIT magic.


I understand that, the original OP was stating that none of the "browser skins" had JIT because they all used UIWebView, which isn't the case with the link I posted :p.


WebKit is the only acceptable browser engine on iOS. Firefox and Chrome both use WebKit on iOS.


How can you ship an app with access to private APIs? There is private API usage scanning before you can even submit for review.


The scanner isn't foolproof. You could fool it by obfuscating your calls to performSelector well enough. For example:

    // Roughly: if the backend's JSON response contains "runThis",
    // perform whatever selector it names.
    NSString *selName = jsonResponseFromYourBackend[@"runThis"];
    if (selName) {
        [self performSelector:NSSelectorFromString(selName)];
    }

and make sure you don't send a runThis param while the app is in review.

Unfortunately for Apple's app review process, Apple's own Objective-C language and runtime have very strong dynamic reflection capabilities.


Apple could potentially close any loopholes here by scanning new apps for their API usage, checking for any 'bad' calls, and then writing the remaining discovered calls into a permissions file that is delivered with the app in the store.

At runtime, any API calls made by the app are checked against this file; if a new API call is found, then it must have escaped Apple's code scanning logic. The API call can be rejected and logged for Apple to improve their scanner.
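
For illustration, the runtime half of that proposal might look something like this (entirely hypothetical -- no such iOS mechanism exists):

    // Check each API call against the manifest generated at review
    // time; reject and log anything undeclared.
    static BOOL CallAllowed(NSSet<NSString *> *manifest, SEL sel) {
        NSString *name = NSStringFromSelector(sel);
        if (![manifest containsObject:name]) {
            NSLog(@"Undeclared API call rejected: %@", name);
            return NO; // feed back into Apple's scanner
        }
        return YES;
    }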


This is a great idea, actually. Isn't Google already doing this via SELinux? You give the app a manifest of calls it's allowed to make, and if a call isn't in the manifest, it gets rejected?


SELinux is not that strong. It works on kernel syscall boundaries and some parameters thereof, and those aren't particularly fine-grained. Service access is governed by a separate Google API, for example.

Moreover, a random app cannot extend the system's SELinux policy.


There are many legitimate uses of calling methods and functions via reflection. Expecting to catch all of them in a short review process is comically optimistic for anything but the most simplistic of apps.

Your suggestion of enforcing this also makes no sense from a performance or privacy standpoint.


Fair warning that I'm not familiar with Swift.

Obvious (to me) idea: have the private API access stored as data sent from the server at runtime, rather than as code in the reviewed app. Basically the equivalent of eval()-ing a string in front-end JavaScript.


James, do you know if they're going to go against the Exponent app per se?


Oh, no, I don't have reason to believe so.


Rather than speculating on what really boils down to semantics once a ToS is involved, maybe someone could actually try submitting an app and report back on whether it triggers the same failure?


I haven't heard any reports of Expo developers or React Native developers (who aren't using Rollout or JSPatch) getting this warning.


This needs a little elaboration. Cached JavaScript in any hybrid app is a security hole because it can be exposed through a jailbreak. Depending on how much of your business logic you've pushed into the JS layer to enable that "80% code sharing" that makes managers go all tingly, you may be exposing all kinds of things - cached access tokens, API keys and whatnot - to anyone who wants to install your app and mine its secrets.


Huh? Are you saying that the security hole is that a user could see stuff in the memory of their own phone?


Or someone that steals your phone, or picks it up when you lose it somewhere. Yes, there's the lock screen and passcode but...

http://www.wikihow.com/Bypass-iPhone-Passcode


At that point your phone is fucked, anyhow. If you've lost physical control of the device and an attacker has broken the lock, you're compromised in much bigger ways.


"This bypass won't work on iPhones running iOS 9.3 and up"



How is this any different from native code? AFAIK, you can access the compiled native code of an app on a jailbroken phone. Sure, it's more of a pain to parse through, but security through obscurity isn't security.


Definitely, running "strings" on a binary is about as easy as finding the JS for a hybrid app (WebViews or React Native). Depending on your experience it could be easier to extract API keys from an IPA than from JS.

In either case the root issue is about sending secrets like an unscoped API key to the client. "Client secret" is an oxymoron in this context regardless of the programming language.


You'd have to know a few things first, like (1) the IPA is a ZIP file, (2) the ZIP archive contains the app's bundle directory, and (3) you can dump the actual code from the JS files (if they're in the bundle directory) much more easily than you can hunt through the binary for strings that might look like an API key.

The API key is actually the least of the hazards, since you can hide that in the keychain. Having source code for your business logic shipping in your app is not good; having it be hackable business logic (by changing the JS in place) is very not good.


Shipping source code with business logic (assuming the definition of source code includes obfuscated JS) is how the entire web works today! With WASM the code that is shipped will be even further away from the original source code and really not so different from downloading ARM from the App Store or Java/ART bytecode from the Play Store.


Not the entire web. eBay and Amazon don't put all their algorithms in the browser; PayPal doesn't either. They hide their company jewels behind their API where they're a little more secure. What you see in the browser is the presentation layer code.

Hybrid apps could achieve that same kind of relative business logic security, but at the cost of pushing more and more of the actual business logic behind an API and not in the JS in the app. At that point, the benefits of code sharing (such as they are) get fewer and fewer since it's really pretty easy to write API code in Objective C, Swift, Java or Kotlin.


You aren't "parsing" through compiled (and linked) code; you're decompiling it, which is a much trickier thing to do and get right. Having your app logic in Javascript in the sandbox cache is just serving it up on a plate.


I'm having trouble seeing this as a valid argument when applied to cached JavaScript within a mobile browser. The same security practices apply.


They do, and it's for this reason that the really important stuff isn't in the web page at all - it's behind the company's API. The "business logic" that you can safely push out to a browser or hybrid app is somewhat limited, which means there is a hard upper limit on the real code sharing savings you can get with hybrid apps versus full native.


You as an end user jailbreaking your own phone is not a "security hole".

I'm not aware of any non-tethered jailbreak for iOS 10.


I think he's talking about application-wide (not user-specific) "secrets" in the JavaScript layer.


Yes, I am. As for untethered jailbreaks, http://pangu8.com/10.html mentions a few.


Yes it is, jailbreaking bypasses critical security features of your phone. Granted, it's required to run certain kinds of software but there are better ways to run your own code on your phone (like getting your own developer certificate) that preserve the security model.


The problem with Apple that you and your customers need to be aware of (or concerned about) is that once a number of your customers sidestep Apple's policies w.r.t. pushing or changing features via JavaScript, Apple will change their policies to close that loophole. It's only a matter of time before companies take advantage of this path to sidestep App Store approvals.


As a customer, I'm pretty baffled that some developer has deliberately bypassed something that I see as a security measure.

If an app is modified on the fly to use an undocumented, maybe Apple-forbidden method in order to bypass security features or, worse, to spy on me, I'm clearly not OK with that.

Do you really think the Apple ecosystem works because customers see the App Store as an evil cage and the external developers as angels with good intentions?


It's actually a double-edged security sword.

By not allowing developers to patch their app 'on the fly', directly, without going through a new version in the App Store (which is a very lengthy process, almost forever in security terms), Apple effectively protects iOS users from malicious code (not from the dev, who is probably to be trusted if the app is already installed, but from MITM attacks and the like). At the same time, however, they rule out very fast security patches, the lack of which may leave the device and all associated data and accounts compromised entirely.

So there's no best world, but undoubtedly many aspects to consider. Anecdotally, I find that it's often better to trust the developers of an app to maintain their own thing, the OS only being a facilitator, provided the user has control (UAC on Windows, permissions on mobile, etc.) [note: obviously you trust the OS vendor to patch said OS; it's just a particular kind of app]

edit: wording


I believe a carefully crafted update process should mostly mitigate that risk.

Also, the current policy doesn't prevent app developers from implementing a "kill switch" that prompts users at the splash screen to update (or wait for an update) and aborts loading the malfunctioning version.
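
For illustration, a minimal kill-switch sketch (the endpoint and JSON keys are made up):

    // At launch, ask the backend for the minimum allowed build number.
    NSURL *url = [NSURL URLWithString:@"https://api.example.com/min-build"];
    [[[NSURLSession sharedSession] dataTaskWithURL:url
        completionHandler:^(NSData *data, NSURLResponse *r, NSError *e) {
        if (!data) return; // fail open on network errors
        NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data
                                                             options:0
                                                               error:NULL];
        NSString *minBuild = json[@"minBuild"];
        NSString *build = [NSBundle mainBundle]
                              .infoDictionary[@"CFBundleVersion"];
        if ([build compare:minBuild
                   options:NSNumericSearch] == NSOrderedAscending) {
            dispatch_async(dispatch_get_main_queue(), ^{
                // block the UI with a "please update" screen here
            });
        }
    }] resume];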


Though the security justification here is limited to private APIs & native code pushing, the first few sentences of the rejection definitely seem like the "spirit" of the terms includes any significant functionality pushing at all. Wouldn't be surprised if they ramp up enforcement on that.


This appears to affect apps using JSPatch, which pushes JavaScript rather than Objective-C.

Source: https://github.com/bang590/JSPatch/issues/746 (in Chinese)


The key part of JSPatch is that it exposes arbitrary, uncontrolled access to native APIs. You could use it to call private APIs even because Objective-C doesn't distinguish between public and private APIs at runtime, so Xcode's compiler checks and Apple's static analysis can't anticipate which APIs are possibly called.

In contrast, React Native doesn't expose uncontrolled access to native APIs. You write regular Objective-C methods that are callable from JavaScript, and within your Objective-C methods you write regular Objective-C that is statically checked by Xcode and Apple.


I'm not sure why people are concerned about React Native being targeted by this new enforcement of the rule. It is not the same thing at all.


It shares some components, like JavaScriptCore.

The difference is the JS-native bridge: for React Native, it's a fixed bridge that can only change with a change in the binary.

For Rollout, any native code can be executed with just an update to the JS.


The main reason for React Native not being impacted is that Facebook + Instagram + Airbnb + Soundcloud + ... are using it, and Apple can't justify to their user base rejecting those favorite apps for a technical reason.


No, it's not. It's because React Native is a totally different technical solution.


"You write regular Objective-C methods that are callable from JavaScript," - the objC function that I am calling from JS can be capable of calling a private API received by it in an argument - that breaks the entire claim of reactNative not supporting calls to private API. React Native must also go down.


If you deliberately write code that calls private APIs that are exposed to JS in your React Native app, I expect only your specific app to receive this warning. The same would be true in an app that doesn't use React Native at all, as we've seen with apps using Rollout.

The warning is not about React Native, it's about exposing uncontrolled access to native APIs including private ones and React Native doesn't do that.


See the difference between a regular human, who in theory can take a knife and violently rob or kill someone, and a human-like robot that can be easily programmed to do the same. Which one has to be isolated?

It's all about intent. If an app allows that, it's banned for that behavior, not for having React in its kitchen. But if you have an armed drone there, then the police have questions.


It seems that most people are overlooking one of the more significant points Apple have made here:

"Even if the remote resource is not intentionally malicious, it could easily be hijacked via a Man In The Middle (MiTM) attack, which can pose a serious security vulnerability to users of your app."

Source: https://github.com/bang590/JSPatch/issues/746


I'm not buying the MITM argument in general. If remote code is downloaded via HTTPS, it can't be hijacked, at least not easily.


HTTPS is sufficient against MITM, until someone disables all verification to make their self-signed cert work, or adds their poorly-secured "CA" cert to the allowed CAs for the download, or adds a weak cipher to the list. Do you trust every app developer to get those right (if they even use HTTPS!)[0], or would you rather trust Apple to get it right in the centralized system they designed for app updates for all apps?

I'm not even fond of Apple, but I'd rather trust them, and I'm glad they're protecting their users.

[0] Caveat: I don't know how likely/possible these are to occur on iOS. I assume a sufficiently motivated & misguided developer could do them within their own app's context.


> Do you trust every app developer to do those right (if they even use HTTPS!)[0]

If I'm running an app that includes native code and accesses data from the outside world then I'm probably trusting that app developer to write C code that doesn't contain arbitrary code execution vulnerabilities, which is much much harder than using HTTPS right.


"HTTPS is sufficient against MITM, until someone disables all verification to use their self-signed cert, or adds their poorly-secured "CA" cert to the allowed CA's for the download, or adds a weak cipher to the list. "

Or the attacker controls, or can coerce, a Certificate Authority in the OS's root list - like, say, just about any nation state...

Most apps - I suspect - are not pinning their TLS certs. And Apple has already gotten into a very public fight with the FBI.
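
For what it's worth, pinning isn't much code. A minimal sketch using the real NSURLSessionDelegate callback, assuming the server's cert is bundled as server.der:

    #import <Security/Security.h>

    - (void)URLSession:(NSURLSession *)session
        didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge
          completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition,
                                      NSURLCredential *))completionHandler
    {
        SecTrustRef trust = challenge.protectionSpace.serverTrust;
        SecCertificateRef leaf =
            trust ? SecTrustGetCertificateAtIndex(trust, 0) : NULL;
        NSData *remote = leaf ? (__bridge_transfer NSData *)
            SecCertificateCopyData(leaf) : nil;
        NSData *pinned = [NSData dataWithContentsOfURL:
            [[NSBundle mainBundle] URLForResource:@"server"
                                    withExtension:@"der"]];

        if (remote && [remote isEqualToData:pinned]) {
            completionHandler(NSURLSessionAuthChallengeUseCredential,
                [NSURLCredential credentialForTrust:trust]);
        } else {
            // Certificate doesn't match the pin: refuse the connection.
            completionHandler(
                NSURLSessionAuthChallengeCancelAuthenticationChallenge, nil);
        }
    }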


You see, this whole thing:

"Even if the remote resource is not intentionally malicious, it could easily be hijacked via a Man In The Middle (MiTM) attack, which can pose a serious security vulnerability to users of your app."

basically says "only we, Apple, can do HTTPS right; you can't, and even if you try, you can easily be MiTMed". That I don't agree with.

What you say is correct, but it's not the argument I'm criticizing. Your point is that they don't trust developers to implement secure loading of code, have no technical means to control it, and can't or don't want to check it in review. But that's completely different from "you can easily be hijacked if you're not Apple".


I'd trust Apple to do right more than I'd trust a small team at a startup trying to deliver features at breakneck speed. I really like the fact that Apple is looking out for its customers here.


In an ideal world where apps check/pin certificates and don't disable cert checks to make self-signed certs work in test environments you'd be right. If only this were reality.


I suspect I've left a lot of test devices behind me with Charlesproxy MiTM root certs installed (I wouldn't be _too_ surprised if one of the phones in my pocket right now has that...)


I'm just pointing out that "remote resource ... could easily be hijacked via a MiTM attack" is technically incorrect. The problem is not the remote resource per se; the problem is trusting developers to implement secure loading of resources. Which is a completely different argument.


Depends on who you're expecting the MiTM attack to be executed by.

Are _you_ secured against, say, an attacker who works at Verisign and can create a valid cert for api.yourdomain.com? Or an attacker who has a buddy at GoDaddy who can subvert your DNS records and trick LetsEncrypt into issuing a valid cert for api.yourdomain.com? Or an Elbonian teenage hacker who's just got your AshleyMadison password from Pastebin, used it to log into your Gmail account, and taken over your DNS registrar account to get themselves a valid SSL cert?


And all of this is "easily"?


Like I said - depends on who you are.

For me? Not really "easily" (though a WiFi Pineapple in a coffee shop where FE devs hang out, attempting to MiTM them with the Charlesproxy root CA, would be a fun experiment... which, of course, I'd never do - because that'd be bad, right?)

For someone at the NSA or CIA or Mossad? Sure, it's easy. For someone a little further down the LEO "cyber" chart, like the FBI, probably not "easy". For a local beat cop or council dog catcher - nah, definitely not "easy".

For a very-dark-grey pen tester or red team who're prepared to phish your email password and use it to p0wn your DNS registrar? They'd probably call that "easy"... (Hell, I've got a few pentesting friends who'd call that "fun"!)


Seems like people have been aware of concerns that these hot-patch frameworks violate the ToS.

From April 2016:

>>Rollout is aware of the concerns within the community that patching apps outside of the App Store could be a violation of Apple’s review guidelines and practices. Rollout notes both on their FAQ site and in a longer blog post that their process is in compliance.

https://www.fireeye.com/blog/threat-research/2016/04/rollout...


A ton of games do this and it is incredibly annoying. I don't want to download an update, then have to download an update. I only wish the same restriction applied to my Android device.


Most likely, most games are updating only game-related data and graphics files. Very few games actually include the internal scripting support that would be needed to do code updates.


The only app I've got that appears to actually update itself without going through the AppStore is the HSBC mobile banking app. I'd be interested in hearing the discussions going on between Apple and HSBC at the moment.


Judging by how sluggish and annoying the HSBC app is, I think it is a web app framed in a thin launcher from the App Store.

I.e. it downloads a bunch of JavaScript/HTML/CSS that executes within a UIWebView/WKWebView. Using caching and localStorage, you can construct such an app so that it doesn't need to download everything on each launch.

The reason that's allowed is because everything executes within a sandboxed browser environment. No native code is downloaded.


> The reason that's allowed is because everything executes within a sandboxed browser environment. No native code is downloaded.

That's the same thing Rollout does. In fact, iOS apps can't download and run native code. The OS won't let you mark pages as executable unless they're appropriately signed, and only Apple has those keys.


If they're using JSC, then they can definitely download JS and execute native methods. They could subclass any UIKit object and make it conform to JSExport. Done, now they're "running native code."
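
For illustration, a minimal JSExport sketch (the class and protocol names are made up; JSExport itself is the real mechanism):

    #import <JavaScriptCore/JavaScriptCore.h>
    #import <UIKit/UIKit.h>

    // Any method declared in a JSExport-derived protocol becomes
    // callable from JavaScript.
    @protocol MyViewExports <JSExport>
    - (void)removeFromSuperview;
    @end

    @interface MyView : UIView <MyViewExports>
    @end
    @implementation MyView
    @end

    // Later, with JS that was downloaded at runtime:
    JSContext *ctx = [[JSContext alloc] init];
    ctx[@"view"] = [[MyView alloc] init];
    [ctx evaluateScript:@"view.removeFromSuperview();"];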


Sure, and an app displaying a web page using WebView can provide hooks that allow doing that sort of thing too. Neither one is downloading native code.


Only UIWebView, which is going away soon.

But surely you can see the difference between executing limited actions inside a web view, and making available any native method to a web view.


WKWebView allows the app to execute arbitrary JS within the loaded page, and intercept URL loads and other actions made by JS code. That's all you need to build a bridge.

I don't see any fundamental difference here. Both (UI|WK)WebView and JSC allow bridging. Neither one grants full access to JS code automatically, the programmer has to put some effort into it. And even if there is some important difference, neither one is native code which is what I was disputing above.
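
For illustration, a minimal two-way WKWebView bridge sketch (the handler name "bridge" and the JS function are made up):

    #import <WebKit/WebKit.h>

    @interface Bridge : NSObject <WKScriptMessageHandler>
    @end
    @implementation Bridge
    - (void)userContentController:(WKUserContentController *)ucc
          didReceiveScriptMessage:(WKScriptMessage *)message {
        // JS side: window.webkit.messageHandlers.bridge.postMessage(...)
        NSLog(@"JS -> native: %@", message.body);
    }
    @end

    // In your view controller setup:
    WKUserContentController *ucc = [[WKUserContentController alloc] init];
    [ucc addScriptMessageHandler:[Bridge new] name:@"bridge"];
    WKWebViewConfiguration *cfg = [[WKWebViewConfiguration alloc] init];
    cfg.userContentController = ucc;
    WKWebView *webView = [[WKWebView alloc] initWithFrame:CGRectZero
                                            configuration:cfg];
    // native -> JS direction:
    [webView evaluateJavaScript:@"handleNativeEvent()" completionHandler:nil];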


Is it possible to do that and use the fingerprint sensor for login in a secure way? I thought the same as you until they enabled Touch login.


There's no reason why they can't have the native portion do the authentication and just have it return a signed token to the JS portion which forwards it to the server.
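
For illustration, the native half might look like this (LocalAuthentication is the real framework; the token handling is hypothetical):

    #import <LocalAuthentication/LocalAuthentication.h>

    LAContext *context = [[LAContext alloc] init];
    [context evaluatePolicy:LAPolicyDeviceOwnerAuthenticationWithBiometrics
            localizedReason:@"Log in to your account"
                      reply:^(BOOL success, NSError *error) {
        if (success) {
            // Mint or fetch a signed session token natively, then hand
            // only that token to the JS layer to forward to the server.
        }
    }];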



Supercell's games and a bunch of F2P collect-ish games also do that: when you open the game, they run an update process. I'm reasonably sure that only updates static assets, though - stuff like description files and graphic assets. It's actually pretty useful, as it lowers the payload of the core engine and lets them ship much smaller updates compared to having to bundle it all. That's especially important with things like Unity frameworks, which are not delta-updatable in the store (hence Hearthstone's 2GB downloads every time they add a cardback or nerf a pair of cards).


I bet it's some code too.

I worked on an F2P mobile game; we bundled a tiny Lua engine and then pushed various promotion screens as a bundle of resources (images) and Lua code (the screen layout, its preconditions, and what happens after you click - the game exposed a small API that the Lua called).


Hey, unrelated, came across a comment of yours from ~5 years ago: https://news.ycombinator.com/item?id=2949645

Would you mind elaborating on why Erlang programs tend to have FSMs? LYSE has a chapter on how to use gen_fsm, but I've really been unable to find a great answer as to WHY you would want to use it.


I was thinking of the HSBC app the whole time while reading this. In my opinion Apple should reject that piece of shit and force HSBC to write a native app that works properly.


Google definitely forbids self-updating apps on Google Play. But I'm not sure how well this is enforced.


Horribly. I get a few games from the Japanese market, and almost every one requires an immediate internal download and update.

Those updates never trigger the Android update service, though, so I'm not sure whether they are just downloading more resources or are able to request new permissions (I would like to assume not).


Games updating DLC is nothing new and is not what this is about.

Google did recently change the Android permission model; previously, apps had to request all their permissions at install time and it was all-or-nothing (and frankly, hardly anyone bothered to look them over.)

Now, certain permissions have to be requested when they're needed (at least for recent versions of the SDK) and the user can choose to allow or deny. But an app can't grant itself new permissions without going through the official update process.


See, I do trust that - but only to an extent.

I always wonder, when seeing one of those updates, whether there's a 0-day that can bypass it. On a technical level I know I run the same risk with my PC, but at the same time, it's more difficult for me to examine processes and startup items on my Android.


Realistically, if they've written their own native code that parses their updates, then almost certainly. If they're using an established library, then maybe not (likewise if they're using a decent language, but unfortunately no one does that). I'm reminded of the example at the bottom of http://www.gamasutra.com/view/feature/194772/dirty_game_deve... where the game had a buffer overflow in displaying its own EULA.


The internal download and update is allowed when it consists of media resources and the like, which are not natively executed code.


Neither Google nor Apple nor any other game platform is going to stop games from downloading new content. That's just how games work these days.

The reason this type of "hot code push" is more attractive on iOS is because the app review process is much longer, so publishers look for ways to skirt it. Looks like Apple is just starting to enforce it more.
