Apple starts rejecting apps with “hot code push” features (apple.com)
705 points by dylanpyle on Mar 8, 2017 | 467 comments



I'm Erez Rusovsky, the CEO of Rollout.io

Rollout's mission has always been, and will always be, to help developers create and deploy mobile apps quickly and safely. Our current product has been a lifesaver for hundreds of apps by allowing them to patch bugs in live apps.

We were surprised by Apple's actions today. From what we've been able to gather, they seem to be rejecting any app which utilizes a mechanism of live patching, not just apps using Rollout.

Rollout has always been compliant with Apple's guidelines as we've detailed in the past here: https://rollout.io/blog/updating-apps-without-app-store/

Our SDK is installed in hundreds of live apps and our customers have fixed thousands of live bugs in their apps.

We are contacting Apple in order to get further clarification on why Rollout doesn't fall under the clause that lets developers push JS to live apps as long as it does not modify the original features and functionality of the app.

I'll post updates as I have them.

Erez Rusovsky CEO Rollout.io


Oh man. You were surprised? Really? Your blog sounds like the PR spin that came out of Aereo, a company that spent an inordinate amount of effort to stay within the absolute letter of a law. Predictably, they got killed by lawsuits, because judges aren't idiots and the law isn't so inflexible that intent and context aren't considered. Your case is even worse because you engineered a solution to adhere to the letter of a EULA of a tightly controlled ecosystem run by a very capricious company.

I hate the App Store review process and a lot of Apple's policies around it, and I feel for you, and I totally think there should be a less onerous update/review process ... but ... you clearly and blatantly circumvented a core policy, and what happened to you was absolutely predictable.

Get your money back from the lawyer that told you Apple wouldn't shut you down. You got bad advice.


> Your blog sounds like the PR spin that came out of Aereo, a company that spent an inordinate amount of effort to stay within the absolute letter of a law.

Without reading the blog, I just wanted to comment on Aereo: a lot of us think that this was the wrong decision, and not in a facetious or 'cute' way.

To quote Scalia's dissent in the case:

> In a dissent that expressed distaste for Aereo’s business model, Justice Antonin Scalia said that the service had nevertheless identified a loophole in the law. “It is not the role of this court to identify and plug loopholes,” he wrote. “It is the role of good lawyers to identify and exploit them, and the role of Congress to eliminate them if it wishes.”

https://www.nytimes.com/2014/06/26/business/media/supreme-co...


This is obviously getting off topic, but in a common law system that interpretation is just wrong. The law is an evolving thing, it is meant to be interpreted, read, and understood, not to be exploited.


Disagree -- it's not the court's job to even categorize a thing as a loophole or not. It simply applies the law. Some actions will fall inside a prohibition and some outside. Divining the intent of the drafters of the law is something fraught with problems considering the process.

Just one example -- there may have been a group of supporters of the law used against Aereo who only supported it because they realized it had said 'loophole'. The rule would not have become law without the 'loophole'. Now, how should a court interpret those circumstances?


> It simply applies the law

This is not the case in common law systems, which the US and UK have. Judges discover the law through principles and precedent. Legislation can override this, however. The US Constitution is a good example.


There are several different legal systems in the UK; there are both national differences (as between Scots law and the law of England and Wales) and applicability differences (as between private law resolving disputes between private persons, administrative law resolving disputes between a person and a statutory/governmental body and criminal law wherein the state prosecutes alleged wrongdoers). All of these fit broadly under the term "common law", so that term needs to be disambiguated.

EULAs and TOSes are firmly in private law, and we can take England and Wales as the national setting.

Even here, "judges discover the law through principals and precedent" is inaccurate. First and foremost there is overriding statute. Where Parliament has intervened in matters of private law, Parliament wins; the parties may choose to show that Parliament's intervention does not apply for some reason (e.g. it conflicts with a subsequent intervention by Parliament, or it does not apply strictly in the matter before the court). Judges may act sua sponte, but mostly in private law leave such matters up to the parties to draw to the court's attention. Secondly, there's the plain wording of the contract. Finally there's recourse to covering case law established by higher courts and binding on the court of first instance (e.g. the county court or the High Court).

However, Parliament has caused the Civil Procedure Rules for England and Wales to bind the county courts, and CPR rule 1 is the "overriding objective", which directs judges to be just, taking into account the totality of circumstances and the behaviour of the parties, among other things. The UK Human Rights Act 1998 also requires courts to take into account the rights it brought into force, and this applies to all courts. These two features oblige judges to look past statute (or more strictly speaking, to do a reading-down as necessary) and specifics of a contract when assessing liability.

The private law system in England and Wales is (mostly) adversarial with the judges (mostly) paying attention to issues brought up by the parties' advocates. There are specific obligations on the court to act sua sponte as noted, and a court is free to ask questions or consider points not brought up by the parties, and it is also free not to look too deeply into matters of its own volition. This can lead to "judge roulette" to some degree, but the court-appearing legal community in England and Wales is not that large (and it's even smaller in Scotland or Northern Ireland) and good advocates and even good solicitors have some idea of what to expect from a particular judge in terms of case management.

However, I don't think many would agree that judges should "discover the law through principles and precedent". Certainly almost no senior English judges would agree with that idea; indeed, the majority is much more likely to say that the parties should draw to their attention every salient aspect of the dispute so as to reduce the court's workload (in principle to do sufficient work that few disputes really need a hearing or a conclusion other than an out-of-court settlement between the parties).

They "discover the law" mostly by having it brought to the attention by the parties. Except in constructive litigation, the adversarial principle supposedly guarantees that one party cannot wholly misrepresent the law to the judge (unfortunately this is often not the case, especially where one party has much deeper pockets than the other, and even less the case when filings are not even dealt with because the cost of litigation exhausts one party even where that party has a good case that the non-exhausted party is misrepresenting the law).

The law stems from several sources. Depending on the area of practice of private law, statute and secondary legislation may have codified many aspects such that no other source of law is required in most cases, or (as in landlord-tenant law) statute law may be highly scattered across many Acts of Parliament, and additionally almost always engages in references to decisions by the Court of Appeal taken to resolve disputes where Parliament has not decided to provide a statutory basis for the resolution. (That's mostly because MPs are terrified of legislating in the area of property law, since it is a daunting task to consolidate hundreds of years of various sources of law into one Act; not-so-jokingly, the Great Repeal Bill proposed as part of the Brexit process will probably be less of an undertaking.)

Scalia's argument is overly idealized and focuses on the legal aspect of the system of justice, to the detriment of the justice side. A system of justice should lead to a finding of liability on wrongdoers, but should hold non-wrongdoers harmless from liability. (Unfortunately there are several aspects of the system of justice in England where that falls down, but at least there aren't many professionals in the justice system who think it should be even less just, finding non-wrongdoers unjustly liable simply because that is what the law says to do.)


You seem to somewhat contradict yourself here. On one hand you speak of overriding parliamentary authority, on the other of Parliament steering away from "complex and various sources of law".

But your focus on advocates bringing the law to the judge's attention seems to support the parent comment: judges "discover" the law. Certainly statute overrides all, but the point of common law is that statute is always insufficient. It is not enough to deal with the facts of any given case. I don't know much about the UK legal system, but in the US rulings on statute become codified as "precedent". Important and relevant decisions are published, circulated, cataloged, studied, and effectively become the law. Any case that is litigated starts with a series of briefs on what the parties feel is relevant case law. It also might include briefs filed by interested parties, studies of the legislative process to determine intent, and so on. That is all very much in the realm of "discovery".


Well, we can run down the rabbit hole of "interpretation", I guess, but there is a reason we have an appeals system and ultimately a final arbiter (the Supreme Court). The reason the justice system is separate from the legislative branch is because laws cannot cover every eventuality, nor would we want them to. A judge can interpret specific facts outside of the political machinations of the legislative branch. Indeed, it could be argued that this is good because it prevents the legislative branch from making laws to deal with specific situations (as a lawyer I once knew said, "Good cases make bad law"). Given the power of lobbyists and issues with earmarking in the legislative branch, I'd say this is a net good. In the case of Aereo, there was enough disagreement and enough room for that discussion that it ultimately had to be decided by the final court.


> Divining the intent of the drafters of the law is something fraught with problems considering the process.

And yet judges talk about the "spirit of law", as distinct from the "letter of the law", all the time.


> And yet judges talk about the "spirit of law", as distinct from the "letter of the law", all the time.

No, they don't; I've read lots of legal decisions, and that phrase or anything like it is rarely invoked. Pundits, not judges, are prone to talk about the spirit of the law as opposed to the letter; judges are more prone to talk about legislative intent (not "spirit of the law"), not distinct from the letter of the law, but as part of the analysis of which of several facially plausible meanings the letter of the law should be given in the context of the specific fact pattern presented in the case they are dealing with.


To some extent I agree with you, but at the same time it is not the purpose of the court to create law. It is their job to interpret. Lawyers read and understand. Evolution of the law (which involves creating new portions of the law to cover previously created portions which are considered lacking) is the responsibility of the legislative branch (in that case, Congress).


But the common law is evolving. That's why we review previous cases and cite precedent. Because we assume the interpretation of the law will change as soon as it comes into contact with facts. There is a point where Congress needs to get involved, but until they choose to do so, the court system is where the law happens. Sometimes that includes evolution, but I suppose it's up to the appeals system to draw that line.


I can agree with this.


Under that reasoning, wiretapping laws and privacy laws should not apply to digital communications, because they were not specifically mentioned.


I've noticed a trend where technology-inclined people take a very strict, autistic approach to the law. They tend to view the law as being analogous to source code in that there is no room for interpretation, intent or spirit behind what's codified.

I think this has manifested at its peak with Ethereum.


Laws are funny, they have a certain duality to them. They can be strict, but also fluid.


Not at all. If the existing law is interpreted by the courts to apply to digital communications then it does. Congress has the ability to remove interpretations by specification.


So you disagree with your previous statement? That courts can interpret the law, including the intention of it?


My original comment said that courts could interpret law... I'm not sure what you're getting at. Yes, including intention. US courts do it all the time. It's called the Constitution.


I guess I would look at this:

>evolution of the law (which involves creating new portions of the law to cover previously created portions which are considered lacking)

And argue that electronic privacy vis-a-vis wiretapping laws is creating a new portion of the law to cover previously created portions which are considered lacking. We can quibble about definitions, but that strikes me as very much in the area of "evolution".


I think defining "evolution" and "create" is the real sticking point in this argument. That gets down to splitting hairs. Though I will say that I do believe that everyone in this thread does have sound arguments given their definition of those two words.


> it is not the purpose of the court to create law

In common law systems it is precisely their job to do so.


In the United States, creation of law is the responsibility of the Legislative branch. There is no avenue for the Judicial branch to create law.


There's one unifying feature of all common law legal systems - judges will publicly almost always proclaim they do not create law, largely because simple prima facie interpretations of most western constitutions say "the legislature makes the laws, the court enforces them", and the existence of judge made law has always had an uneasy relationship with this.

The reality in Common Law legal systems is nothing like this, and judge made law through interpretation and application of precedent is a very real thing, even in the USA. As a particularly blunt example, in some parts of the UK such as Scotland, the traditional common law crimes such as murder/theft etc aren't even defined in primary legislation ("laws"), and exist solely as judge made and applied creations through decades of precedent. Even where there exists primary legislation, the scope of judicial interpretation gives a great deal of freedom to judges to establish precedents that the drafters might not have foreseen or intended.

Heck even the definition of the term "Common Law" is normally interpreted to mean "Case Law" as developed by judges.

https://en.wikipedia.org/wiki/Common_law


I'm not trying to be condescending, but you should probably include a US-centric example when asserting how the US works. The Scottish example is irrelevant, as it applies to Scotland, not the US. Also, if you try to relate Scotland and the US under the umbrella of the term "Common Law", but then say that that term has its own interpretive meaning, you've loosened the association to the point where you can't strictly say that the Scottish and US systems are the same...

Also, as I've seen in other comments, we're going to get on the merry-go-round of defining "create law".


So it is claimed in civics classes, but that's a rather narrow interpretation of "create law".


I'm not sure what other definition there is...


'globox nailed it upthread.


Don't you think at a certain point, a loophole in a poorly-written law can be too big for the courts to close? When does a judge go from upholding the spirit to assigning new meaning?


Yes, I do. This is why we have a tiered system. The judge interprets the facts of the case and if they apply the law incorrectly, a higher set of judges can overturn.

EDIT: Or Congress can get involved and change the law. Checks and balances.


Justice Clarence Thomas wouldn't agree with you.


Considering his decision in Bush v. Gore compared to his other decisions surrounding voting rights and the EPC, I'd say he is not above reproach in the area of consistency. (this could be said of judges on both sides of that case)


Then the letter of the law means nothing.


But in a common law system, the law has many letters. The law consists not just of legislation, but of precedent, briefs, circumstances, intentions and so on. You cite previous decisions and congressional hearings and the feelings of interested parties because that all weighs into how the law is read.

You may disagree with this, but the fact remains that the law works like this in the US and UK and has since 1066.


The law is a living breathing organic document. Anybody who says otherwise is living in 1776 with slaves.


Bad example. Slavery was made illegal the proper way, by changing the letter of the law. See the 13th amendment.


Slavery was outlawed through a constitutional amendment because Lincoln (for good reason) was afraid that his Emancipation Proclamation wouldn't hold up after the war.


If the law is very clear, then the court should not need to do any interpretation. In those cases it just applies it.

In the Aereo case, the law was clear. The court was supposed to uphold the law, but didn't.


Um, no. Please no. That's a terrifying thought.


A great example of how poor Scalia's judicial reasoning was. It wasn't a 'loophole' and the fact that the justices understood it that way probably shows how out of touch they are. But that Scalia disregarded a key tenet of the need to interpret laws based on the circumstances at hand is ghastly.

FWIW I agree that the court got it wrong, but Scalia's reasoning in supporting Aereo's position is flawed.


The sad thing with Aereo is that even if they had won the Supreme Court case, Congress would have plugged that loophole immediately. They were never going to win.

Scalia went too far with his dissent. Language is imprecise, and in common law it is always coupled with precedent and intent.


Completely. Disappointed is fine, but surprised? Given this seems explicitly designed to avoid the need for App Store reviews, this was inevitable.

I don't want anyone pushing code updates to the apps that have been reviewed. Whilst that isn't foolproof, compromising the deployment mechanism with this approach is very scary.


> Oh man. You were surprised? Really?

Exactly!

Apple has always been adamant that they see _all_ code that goes onto devices. Live patching is so bloody obviously against their EULA.


What is "code"? Everybody who has programmed in LISP or Scheme knows that there is no essential distinction between code and data (only many programming languages make it a little hard to see that it is all the same). Thus Apple would have to see not only all code, but also all data that goes onto the devices. But this would imply that Apple disallows all apps that read data from a foreign (i.e. at least not Apple-controlled) server if one does not want to get into a self-contradiction.


Which is why you're not allowed to use a Lisp interpreter or use any method of evaluating data as code. In this model the only thing that data can do is change which code paths run, not what they do.


Changing a code path is the same as changing what they do.


They do allow things like pushing updated JS bundles to react native apps. My guess: RN constrains the surface area of the native API it comes into contact with (e.g. no performSelector, or similar)


That characterization isn't enough to distinguish a Turing complete interpreter from something that trivially manipulates an input datum. An interpreter is just a program containing code paths, which are activated in response to the input (the interpreted code).
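
To make that concrete, here is a toy Swift interpreter that is nothing but fixed code paths activated by its input. It only does arithmetic, but one added backward-jump instruction would make it Turing complete:

    // A toy stack machine: fixed code paths, selected by input data.
    enum Op { case push(Int), add, mul, halt }

    func run(_ program: [Op]) -> [Int] {
        var stack: [Int] = []
        for op in program {
            switch op {
            case .push(let n): stack.append(n)
            case .add: let b = stack.removeLast(), a = stack.removeLast(); stack.append(a + b)
            case .mul: let b = stack.removeLast(), a = stack.removeLast(); stack.append(a * b)
            case .halt: return stack
            }
        }
        return stack
    }

    // (2 + 3) * 4 -- the "program" is plain data that could have been downloaded.
    print(run([.push(2), .push(3), .add, .push(4), .mul, .halt]))  // prints [20]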


> That characterization isn't enough to distinguish a Turing complete interpreter from something that trivially manipulates an input datum. An interpreter is just a program containing code paths, which are activated in response to the input (the interpreted code).

It is surprisingly simple to make an interpreter that is "accidentally" Turing complete (this IMHO so often happens by accident that I love to say that if an interpreter is not "obviously" more restricted than a Turing machine, it probably is Turing complete).

This is not just my opinion - there are lots of pages on the internet about things that are "accidentally" Turing complete, for example:

http://beza1e1.tuxen.de/articles/accidentally_turing_complet...

https://www.gwern.net/Turing-complete


"What is "code"?"

Apple has decided that, and you're not going to get around their policies with a clever rhetorical question.


So what is Apple's decision about what code is?


When it comes to what's run on their platform, yes.


I didn't ask a yes/no question, I asked what Apple's decision was.


> "What is "code"?"

> Apple has decided that, and you're not going to get around their policies with a clever rhetorical question.

Apple cannot change mathematical facts by "decisional" rhetoric.


Apple doesn't need to change mathematical facts; they just don't let you publish on their App Store.


They're not changing anything. They're deciding the rules for their platform.


So if I have an embedded webpage inside of my app and that website is updated, do I suddenly violate the EULA? What if it's a social media app that provides users the ability to write custom HTML/CSS/JS to personalize their profiles, and a user writes a game that runs in the header of that profile? What if that game suddenly allows the ability to access copyrighted material?

I just don't understand how Apple is supposed to draw a line here.


Web apps are fine, I believe the issue is Apple wants to prevent apps from updating their Objective-C code. Anything run by WebKit is fine. From the Rollout page linked above:

> With Rollout’s SDK you can update and modify your Objective-C methods with logic written in JavaScript so we’re good on the first condition.

I think that is the problem with Rollout.


They don't 'see' the code. They run a program on the binary for some obvious checks and do a QA smoke test of the app itself.


They also run a static analysis on the binary to check for, amongst other things, use of private APIs. It is presumably fairly easy for them to detect the presence of third-party SDKs like rollout.io from their binary signature.


You are thinking "source code".

"Code" is another term for what you are referring to as "binary".


Recently Apple added (and actively encourages) the ability for developers to upload bytecode to the App Store instead of ARM binaries, so Apple can more easily dynamically recompile for new architectures and optimisations. Of course, bytecode is considerably easier to revert back into readable source code (especially as Swift/Objective-C retain (some) symbol names in compiled output), so it's not outside the realm of possibility that an unscrupulous Apple team is disassembling cool apps to see how they work and then re-implementing them for the next release of iOS.


Can you identify a type of app for which reverse engineering it would be easier than writing their own? Software is usually easier to write than to read. If an app has such a magic secret sauce, and it's of value, then it should be protected by patent or copyright anyway.

An example that comes to mind is a high speed image compression app for taking rapid sequences of photos. Apple bought the company or the rights so they could include it themselves.


In my experience, it's always been easier for me to implement something once I've seen a working example of it. That's basically what examples are for: A "cheat sheet" for reverse engineers.

Software is only easier to write than read if you have an idea what it's supposed to do. If you've ever googled "how do I do X?", then you likely have reverse engineered the answer you found to fit your particular use case.

In addition, and in some countries, you can't patent software (thankfully), and so innovation comes through reverse engineering naturally.


And how many of those working code examples came from decompiled code?


I really don't think Apple, with its war chest, is actively disassembling code to steal it. As has been demonstrated time and time again, they will just buy companies that have awesome tech and IP. Far easier.


Or they can just "sherlock" them. Happened several times.


No. Apple has plenty of experience doing that themselves, without that. There's a reason the term "Sherlocking" exists.


Unless you are Facebook or Google. Then it's fine and you get a free pass.


To some extent yes but often no.

Apple has close relationships with those companies, so it's often a case of them reaching out to the developers rather than just blindly rejecting the app.

But any idea that Apple would allow them to run ruff shot over the platform and do whatever they wanted is a bit ridiculous.


> ruff shot

"Rough shod", before we get another mondegreen propagating across the internet.


As a case of point - as a matter in fact For all intensive purposes this article peaked my interest because by in large it addressed a deep ceded issue with app updates


Thank you! As a fellow "Correct Idiom Usage Nazi" that made my day. Have an upvote :)


Holy crap, it's `deep ceded` not `deep seated`?


No - the comment was full of intentional errors. "matter in fact" "For all intensive purposes" "peaked my interest" "by in large" "deep ceded"


No, the reply was filled with eggcorns / mondegreens, i.e. misheard idioms.


Heh, nice! I'm a big fan of eggcorns[1] and your post is positively teaming with them!

[1]: http://eggcorns.lascribe.net


To be more precise, the idiom is typically “ride roughshod over” rather than “run ...”, and roughshod is typically written as one word.

Roughshod means the horseshoes have their nails sticking out the bottom to help prevent slipping, so you can imagine trampling someone with those could be painful.


Horses are heavy. Being trampled by one is going to be injurious or lethal regardless of whether or how it is shod. (Most horses will go far out of their way to avoid trampling a human, though; cavalry horses had to be carefully trained into it. Treading deliberately on one's foot is another matter, but, like some humans, some horses are just assholes.)

The idiom refers more to what a roughshod horse will do to a road or trail surface; the nailheads dig in and scatter surface material every which way, leaving behind a hell of a mess that'll turn to deep slush or sticky mud, depending on the temperature, with the next precipitation.


I always thought they were Eggcorns.

http://eggcorns.lascribe.net/


Except for the time when Facebook did it (and still does). They use private apis to monitor user activity even while the app isn't running and collect all sorts of data that others don't have access to like wi-fi SSID and device MAC address. But what's Apple going to do - not have Facebook on iOS?


Err, as far as I know collecting the SSID is a public API.


CNCopyCurrentNetworkInfo gives any app network info, including the SSID.
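
For reference, a sketch of that call in Swift; treat it as illustrative, since the entitlement and permission requirements around it have shifted across iOS releases:

    import SystemConfiguration.CaptiveNetwork

    // Read the SSID of the current Wi-Fi network via the public
    // CaptiveNetwork API.
    func currentSSID() -> String? {
        guard let interfaces = CNCopySupportedInterfaces() as? [String] else { return nil }
        for name in interfaces {
            if let info = CNCopyCurrentNetworkInfo(name as CFString) as? [String: Any],
               let ssid = info[kCNNetworkInfoKeySSID as String] as? String {
                return ssid
            }
        }
        return nil
    }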


Does this still work? A cursory googling says it was deprecated in iOS 9 betas but may have been re-enabled?


Deprecated is not removed. Apple is usually very conscientious about deprecating things, then waiting a while before removing them. In some cases they have gone to great lengths... for example, the now 5-year deprecation of OpenSSL: first they marked it deprecated, which generates a compiler warning; then after a few years they removed the headers from the MacOS SDK so you couldn't compile new software, but left the binary in place so that old software would continue to work. The next step will probably be to remove that binary, something I would expect in MacOS 10.13 (sometime this year) or 10.14 (presumably next year).


Poor choice of words then, it was deprecated and removed over the course of iOS 9 betas, with the plan to make a new system only available to "captive network apps" that could declare a specific list of SSIDs that they manage. Appears they backtracked though? I didn't see any recent info.


It's not ridiculous, they have been doing it for a long time now. E.g.:

Apple uses private APIs (http://sourcedna.com/blog/20151018/ios-apps-using-private-ap...) to build some of their software and rejects apps doing the same, effectively killing competition.

But Google and Facebook use them because they want to create products that can compete with Apple's features. E.g.: https://daringfireball.net/2008/11/google_mobile_uses_privat...

Yet they are not rejected, because they are "big enough".


I've never understood why developers criticize OS vendors' use of "private APIs". I would go so far as to say there is no such thing as a "private API". The API is a vendor's promise to consuming applications that when they call a method, a certain behavior will happen. Whatever they do behind the scenes is an implementation detail that they should be allowed to change anytime without breaking consuming applications.

Apple often uses immature frameworks internally - like the extensions framework - to polish them or to dogfood them before making them official.


f.lux strikes me as a good example. f.lux came to the market first with its idea to control screen temperature. Apple decides "No, you're not allowed to do that... but that's a great idea!", kicks f.lux out of the App Store, and then adds their own Night Shift into later versions of iOS, using APIs only they are allowed to access.


I love f.lux and have a soft spot for the very nice developer couple behind the app.

That being said, it's hard to argue that Apple (or Android) shouldn't be able to set boundaries on behaviors which are only allowed to be done by the OS as opposed to an app. Apple's tight control of device screen characteristics makes it pretty understandable that they don't want one app able to control how another app looks on the screen.

The optics of the f.lux situation are just really, really bad. But considering that f.lux never really charged, they have a claim to fame that few can match: creating a feature good enough that Apple incorporated it into both iOS and MacOS (now in beta).


> It's hard to argue that Apple (or Android) shouldn't be able to set boundaries on behaviors which are only allowed to be done by the OS as opposed to an app

It's really not. The argument for user freedoms is almost as old as software.


The delicate balance between throwing a user a rope, and throwing them enough rope to hang themselves...


I'm guessing, from Apple's perspective, things that overlay the whole screen and alter the appearance of other people's apps (such as applying a colour cast), are essentially "white hat phishing". It makes security sense to hide this capability in the OS and not in apps.


You can see, I trust, how this could lead down a monopolistic slippery slope. For instance, virus-scanning is a dangerous enterprise, given that it exposes a greater attack surface if the antivirus program is poorly written. Should Apple and Microsoft remove the ability for third-party antivirus apps to exist? How about third-party firewalls?


I don't see that, actually, I think that's a false equivalence.

Security premise: when you are looking at Facebook, you are looking at Facebook. You are not looking at a third party app drawing over Facebook and pretending to be Facebook.

I do not see the above as a slippery slope. Phishing is a capability apps should not have. Even if they have the best of intentions.


> Should Apple and Microsoft remove the ability for third-party antivirus apps to exist?

Please?

...pretty please?


Well, yes. Unless the AV is designed in a way that shows it doesn't increase the risk, it's just snake oil.

If MS had taken a harder line then at least hundreds of millions of people would have had faster computers... And arguably safer ones. But it would be hypocrisy for MS, given they gave us IE, ActiveX, DLLs, VB macros, etc.

Most third party firewalls are just GUIs using the OS API for filtering, not parsers written in C running in the kernel.


There aren't third-party antivirus apps on iOS....


And when Apple changes the private, undocumented API that allows the functionality, who should get blamed? Who do you think users will blame when previously working apps that used an undocumented function break with a new OS?


> why developers criticize OS vendors use of "private APIs"

The point is not that these APIs exist; the problem is when vendors actively block others from using them, with hacks and/or policy bans. That's extremely hypocritical and anti-competitive. I can see why unofficial APIs must be discouraged (because let's be honest, developers will bitch and moan when they change -- Microsoft in particular was strong-armed into legacy support for decades by the likes of Adobe and Symantec), but it should never be an excuse to ostracize or tilt the playing field.


It's not about marketing, economics or other MBA-fueled non-technical ideas. It's about a software vendor saying: this is our platform, here are APIs for you to use, don't go outside them.

It is perfectly reasonable, and so far, any other interpretation seems to be a skewed view to facilitate some sort of non-compliant piece of software.


And the minute scores of applications break because they depend on a third-party library that uses an undocumented method, the OS vendor gets blamed for releasing a "buggy OS", or they have to keep buggy workarounds in their code forever like MS does.


Of course Apple use private APIs... if they didn't, then these APIs would have no reason to exist in the first place.


Do you have examples more recent than 2008?


Those are not easy to find, it's not really something they advertise and you need somebody to publicly catch them.

The only reason I know they do is because some of my friends working on mobile video games regularly complain they can't get some features because they are private while google and facebook do. They analyzed some apps to try to copy said features and realized the unfairness of their situation.

Those are lunch chit-chats, not hard facts. But they have little reason to lie.


I'm not sure if it's still the case, but I don't think you could record iPad screens at the start, though Apple demonstrated it as a possibility during their live demos.

I'm not sure if Apple made it available to other companies privately though.


My impression of the Aereo decision was that it was based on the letter of the law. Contrary to opinions often expressed here, the law wording does not specifically apply to cable companies and specific wording was not creatively interpreted to apply to Aereo. The wording of the law referred not to antennas and cables but to a more abstract notion of "public performances" of copyrighted works, and Aereo fell squarely into what Congress (and legal precedent) meant by public performances of copyrighted works. The law was actually fairly well written to cover evolving technology.


Except that it's been established that 'cloud DVR' is legal. That's 90% of what Aereo did. But somehow attaching an individual antenna to each person's DVR makes it 'public performance'? That specific argument is nonsensical.

And then they got double screwed because the US copyright office declared that no matter what the supreme court said they were not a cable company and couldn't get compulsory licensing either.

As far as I can tell it's legal to run one antenna for one person, and I have absolutely no idea where the line is that you start violating copyright. I don't think the guidelines are well written.


Yeah, this seems crazy that anyone would build a business on this.

For those curious about their justification:

https://rollout.io/blog/updating-apps-without-app-store/


No, what you end up doing is effectively destroying the security protections Apple puts in place to protect the user from unknown/bad code from running on their device. Apple signs apps for a reason -- now we have to trust you to deliver that code safely to the user without being manipulated in transit. I also have to trust that you will respect my privacy. And I don't.

It also seems clearly against their EULA, so you only have yourself to blame for this.

Apple's rules can be harsh but I would rarely call them arbitrary. There is a very good security reason for Apple's stance here. And ultimately it's their store and their rules.


Security on iOS comes from the sandboxing that all apps run in. Apple's review process is really quick and adds approximately nothing in terms of security. Apps running in the sandbox should be safe no matter how evil they are, and if they can break out, the proper solution is to fix the sandbox.


That seems like a strange way to put it. Apps can still do all kinds of nasty things inside their sandbox, for example calling private APIs that are now supposed to be caught in the review process, but also less obvious things like hot-patching strings (e.g. URLs) in the binary, sabotaging the device by deliberately hogging the CPU, playing sounds through the speaker, popping up fake prompts for phishing, etc.

I agree that the review process itself does little for security, but surely you don't want to allow applications to pull in unchecked native code over the network, right?


What's so bad about calling private APIs? I get why Apple doesn't want it, but as a user I don't care.

The sandbox prevents apps from pulling in native code over the network. The OS won't allow pages to be marked as executable unless the code is signed by Apple.
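
That enforcement is visible from userland. A sketch of the step the kernel refuses; on a stock iOS device the mprotect call is expected to fail, since normal apps have no JIT entitlement and only pages backed by Apple-signed binaries may be executable:

    import Darwin

    // Try to turn a writable page into an executable one -- the exact move
    // iOS code signing exists to block.
    let pageSize = sysconf(_SC_PAGESIZE)
    let page = mmap(nil, pageSize, PROT_READ | PROT_WRITE,
                    MAP_ANON | MAP_PRIVATE, -1, 0)
    // ... imagine downloaded machine code copied into `page` here ...
    if mprotect(page, pageSize, PROT_READ | PROT_EXEC) != 0 {
        perror("mprotect")  // expected failure on a stock device
    }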


Because a private API could give out details about you that you don't want shared with a random application or 3rd-party advertising/analytics platform.

For example serial numbers, user ids, lists of installed applications, etc.


If a private API is a privacy or security concern then the sandbox needs to block it.

Apple blocks private APIs because they don't want to maintain their compatibility across OS releases and don't want third party apps to break when those APIs change.

Edit: I'm starting to suspect that people don't know what "private API" means, so I want to lay it out real quick. Apple ships a bunch of dynamic libraries with the OS that apps can link against and call into. Those libraries contain functions, global variables, classes, methods, etc. Some of those are published and documented and are intended for third parties to use. Some are unpublished, undocumented, and intended only for internal use.

The difference is documentation and support. The machine doesn't know or care what's public and what's private. There's no security boundary between the two. Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it. The only way Apple can check for private API usage is to have a big list of all the private APIs in their libraries and scan the app looking for calls to them. This is fundamentally impossible to do with certainty, because there's an unlimited number of ways to obfuscate such calls.

Functionality that needs to be restricted due to privacy or security concerns has to be implemented in a completely separate process with requests from apps being made over some IPC mechanism. This is the only way to reliably gate access.

Apple's prohibition against using private APIs is like an "employees only" sign on an unlocked door in a store. It serves a purpose, but that purpose is to help keep well-meaning but clueless customers away from an area where they might get confused, or lost, or hurt. It won't do anything for your store's security.
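
To illustrate why that scan can only ever be best-effort, here is the classic dodge, sketched in Swift; the selector is assembled at runtime, so the banned name never appears as a literal in the binary ("_privateStatusBar" is a made-up method name, purely for illustration):

    import UIKit

    // Build the selector from pieces so a static scan never sees the name.
    let pieces = ["_private", "Status", "Bar"]
    let selector = NSSelectorFromString(pieces.joined())
    if UIApplication.shared.responds(to: selector) {
        _ = UIApplication.shared.perform(selector)
    }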


Mike, this line is completely and 100% inaccurate:

"Private APIs do nothing that a third-party developer couldn't do in their own code, if they knew how to write it."

There are a million things under the sun that private APIs have access to that wouldn't be possible with the use of public APIs alone, good developer or not. Prime example: "UIGetScreenImage()". That function allows you to take a screenshot of the device's entire screen, your app, someone else's app, the home screen of iOS. That's a pretty big security hole, is it not?

There are countless examples just like that one hidden inside the private API bubble. Things the OS needs to function, (although the OS may not need that particular example anymore) but could cause massive security issues.


It could be argued that because a private API has no guarantees against change, using them could lead to apps breaking after OS updates more frequently, which would annoy me as a user (whether or not I knew what was causing the crash).


I wasn't even talking about breaking out of the sandbox. Also, at the most basic level, simply having a trusted and signed delivery process of binaries does add some security. Nobody here is saying it will prevent a compromise, but since when is security viewed like this? It's about layers of protection.

Reminds me of people fussing about getting root on a workstation. Simply getting access to the user's account, without root, will be hugely damaging. Plus you'll likely have root in no time after you get that user account.

And the review process isn't even entirely about stopping the attack. If the malicious code was in the app, when it was submitted for review, you can at least have a trail and can review it later to see how it happened.

If the attack happened with this specific app framework, the bad code could be dynamically loaded into memory and then purged, so you'd never know what happened.


If you don't break out of the sandbox then you can't access anything interesting.

Traditional UNIXoid workstations are quite different. A program running under your user can do anything your user can do. It can access and delete all of your data.

An iOS app can't access or delete any of your data by default. Everything requires explicit permissions granted by the user, and even those are pretty limited. As long as the sandbox functions correctly, a malicious app will never be able to, say, read my financials spreadsheet out of Numbers, or my private texts out of Messages.

I've yet to see any evidence that this process adds security. The review process is extremely shallow (some automated tools are run to scan for private API calls and such, and two non-experts spend a total of about ten minutes with your app), so there's no hope of any sort of useful security audit being done.


> now we have to trust you to deliver that code safely to the user without being manipulated in transit.

You have to trust app developers anyway, since they run native code on your machine. While there are security concerns, these are not the real motivation. Apple is gradually closing down their platform, as many people have predicted in the past. You can also see that in various subtle changes to Gatekeeper and the Sandboxing features.

For me personally, the red line is when unsigned executables can no longer run on MacOS. If Apple ever disallows unsigned executables, I will immediately discontinue my application on MacOS and redirect customers who rely on it to Apple's customer support.


MS is ahead of Apple in this race to security via taking back control from the end user. I'm with you in theory, but really doubt that on either platform there won't be at least a dev-only way of running arbitrary unsigned apps.

Time will tell. I think it will really come down to the severity of malware problems of the future.

But I really think we'll just move 100% into bifurcated systems (we're already there with Intel's ME to a large extent) where the place that arbitrary code can run is completely segmented off from trusted code.


Yes, you have to trust the app developer. And Apple is acting as a check/oversight on that relationship, too. Whether it's of use or not, that is really another discussion.

That is my personal red line also. But I am 100% in support of them enforcing signed apps for the majority, but it should be something you can turn off for advanced users via firmware/bios. My mom does not need to run unsigned apps she finds on the Internet.


Especially in relation to iOS, you can't really say they're closing the platform further when at the most recent developer conference they opened a ton of APIs (Siri, Maps, iMessage, to name a few).


> And ultimately it's their store and their rules.

It wouldn't be as bad if their store weren't also the only store available for the platform. Because of this forced monoculture, the criticisms are well within scope.


Criticisms of the rule are valid. Criticisms of Apple for enforcing existing rules are counter-productive. It's to everyone's benefit that all devs play by the same rules.


Yeah, so we can have one store with strict policies that protect users and the value of the hardware; and a bunch of other shit stores that offer apps that can hijack our devices. That makes sense!

And then Apple is to blame, and can spend tons of dollahs and man hours to fix problems caused by your "open" alternatives.


Oh yeah because that's what Android have been spending all their money on...


Remember a few years back when Windows was trying to promote their mobile app store and they were paying people to write shit apps just to have them hosted in the store? Or Android, where (at least back then) there were fake apps that told you how to "get" the real app?


iOS has as many stores as Android; all you need to do is jailbreak.


Sorry to be OT, but since you're the CEO I do hope you found out if Rollout supports swift as well :^)

https://news.ycombinator.com/item?id=8158046


Yep, and this one: "Great stuff man, I wonder how many apps would suffer from problems when trying to access directly Amazon s3, also how many app updates would get pushed just to update plist"

https://news.ycombinator.com/item?id=10151755


Ouch.

This post deserves more attention.


I'm confused. Did he forget to post that under another account, or was he one of us, a lowly HN lurker who applied and got the CEO position (by self-selection)?


Well, considering he's the co-founder and he's been working there for more than 3 years, and the comment was left less than three years ago...

https://www.linkedin.com/in/erez-rusovsky-3a458850


Hah, neat


bewildered ๏_๏


> We are contacting Apple in order to get further clarification on why Rollout doesn't fall under the clause that lets developers push JS to live apps as long as it does not modify the original features and functionality of the app.

As a security-conscious user, I find live patching awful. Nothing guarantees me that the benign app I've been granting various permissions to won't be altered by a fourth-party adversary through coercion or hacking and wiretapped by a malicious dynamic payload.


Nothing guarantees that. There have been RCE exploits on iOS.

One could argue that live patching allowed companies to fix or mitigate security problems faster than Apple's (awful) app store policy (and timescale) would otherwise allow.


Nothing guarantees nothing. Life is ephemeral and we're all going to die.

Yet, we can say that code review by a third party is better for trust of that code than no code review by a third party.

"Nothing guarantees" may have been strong. but "the set of attack vectors and their relative efficacy increases " doesn't roll off the tongue quite as nicely.


Replace "code review" with automated static analysis and a 5 minute run through of the app and you are spot on.


> we're all going to die

That's guaranteed, at least ...


Unless "The Singularity" (and subsequent mind-uploading) actually pans out.


That only delays the inevitable.


[Till the sun runs down][1]

[1]: http://multivax.com/last_question.html


A large number of apps will become abandoned apps at some point. And if one of those relies on code from a third party that has now turned malicious?

Your argument does sound good, but it's a double-edged sword.


My guess is that somewhere in the giant dump of CIA malware there is an exploit that uses this to hijack an iPhone. They are pretty explicit about what they don't like and how it would be exploited.


Unless they got advance notice from Wikileaks, I suspect this reaction is too soon.

I'm wondering which of the current top-downloaded FlappyCrush Of Titans clone got caught exfiltrating all their players contact lists or something...


I'm wondering if one route to preventing this being an issue is to prevent any hot code fixes to specific devices. If you have to hot code push to all devices running a specific version, I reckon that would put a damper on the actions of an institution trying to target a specific person. It's a lot harder to try and be sneaky when a change has such a large impact. That said, I'm not sure this restriction alone would be enough.


An interesting theory, but I kind of doubt the time between the release and this is enough for them to have identified these exploits.


I agree it's a long shot. But perhaps they were on the fence about it and the CIA dump pushed them over the edge. Sadly there is no actual way to know for sure.


They don't like you changing the stated functionality of the app. You're making it sound like an iPhone RCE automatically means jailbreak.


If only. You are using a technical hack to modify _native_ apps. The Apple Guidelines aren't strictly set in stone; the intent is clear: you can't remotely modify how native apps work, even if you do it through a JS delivery mechanism.

Sorry for your loss, but I'm glad Apple is doing this and apps will be safer.


I think the part that you're running afoul of is where it says:

"new apps presenting new questions may result in new rules at any time."

Good luck to you, but it's Apple's sandbox and your product appears to thwart the principles that the Apple App Store has been run on for nearly a decade.


You were relying on a huge loophole. The code runs inside JavascriptCore but it injects native code into the app.


An objc swizzle is not native code injection, it's a function pointer swap. They swizzle the method to their general objc message handler which then executes a piece of javascript code.

For Swift, they basically patch the app before it gets compiled so that every function, if it meets the conditional, would execute their JavaScript code handler instead.

No binary code being injected.
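
For readers unfamiliar with swizzling, the mechanism being described looks roughly like this. This is a minimal sketch using the Objective-C runtime, with the class and the JS handoff invented for illustration; it is not Rollout's actual code:

    import Foundation
    import ObjectiveC
    import JavaScriptCore

    class Greeter: NSObject {
        @objc dynamic func greeting() -> String { return "Hello from native code" }

        // Replacement that defers to a JS snippet instead of the original body.
        // In a real hot-patch system the script would have been downloaded.
        @objc func patchedGreeting() -> String {
            return JSContext()!.evaluateScript("'Hello from a pushed script'").toString()
        }
    }

    // The "function pointer swap": exchange the two methods' implementations.
    let original = class_getInstanceMethod(Greeter.self, #selector(Greeter.greeting))!
    let patched  = class_getInstanceMethod(Greeter.self, #selector(Greeter.patchedGreeting))!
    method_exchangeImplementations(original, patched)

    print(Greeter().greeting())  // now runs the JS-backed replacement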


> No binary code being injected.

A number of other posts talk explicitly of dynamic delivery of native code. If you're sure, it's a genuine question: I'm interested to know how this works. Function pointer swaps are one thing, but how would this allow you to patch bugs in the app? I can see how this could let you change the app's behaviour, even including calling private APIs, but surely this would be constrained to calling pre-existing behaviour?

Or does "adding new behaviour" here just mean new JavaScript behaviour?


I think they are confused by the downloading of JavaScript files and executing that inside a 'native context'. I looked at how Rollout did their stuff in detail a while back, so I can see how it's easy to confuse the two.


Swizzling is incredibly useful. AFNetworking, MagicalRecord and GPGMail use it, just to name a few.


That sounds like a huge hack. They built a company around that?


Built a company with $3 million in funding: https://www.crunchbase.com/organization/rollout-io-2#/entity


lol


"We are in full compliance. Everything is fine. The house is not on fire. The heat you are feeling is coincidential."

Man, do I dislike marketing speak. A "We knew we were non-compliant, but think the security benefits of quick bugfixes outweigh the disadvantages. We will work with Apple to return to compliance." would've been honest, better, and not BS.


True, but in this case "return to compliance" means "scrapping the company", because its key product depends on the non-compliant behavior and is impossible to implement otherwise.


It more meant "don't build a company on shaky ground," which Rollout clearly did. The only reason to be so specific about not breaking the rules is when you know you are breaking the spirit of the rules. The blog post he linked to is a year old - they're lucky to have survived this long.


You say that as if Apple hasn't repeatedly shut down apps that were in compliance. Sometimes changing the rules after, sometimes not even doing that.

"We have always been in compliance with the guidelines, and we are asking apple and trying to figure out why we're somehow not in compliance" is a fair statement, and not at all BS.


don't forget to change their lines about "hundreds of apps use our system and none have ever been rejected by Apple"

to: "we've been getting away with it for a long time, so there's that"


> Man, do I dislike marketing speak

Did you see the article on HN last weekend about Wi-Fi routers? "I have a 1.3gbps wireless ac router... but only at the PHY layer, but only in an RF test lab, but only if the client is MU-MIMO enabled, but only if they talk on all 4 channels, but only if the signal connects at 100%, but only if your data is 10:1 compressible, but only if you have one client." Even after like 5 "but only if"s, there was still this unexplained 20% discrepancy between the advertised "speed" and what the device was physically capable of. I'd love to hear their lawyer explain how that's not false advertising.


Don't worry, they actually wrote "up to 1.3gbps" on the box. You just haven't read it closely enough.


Your live patching allows you to call arbitrary native methods - this is even demonstrated in your video - of course this was going to get banned!


It sucks that the success of your business sounds like it depends on the policies of the company in charge of the App Store.

Other than contacting Apple what can you do to combat this?

That being said, I agree with most of the people here; live patching in my opinion is kind of infringing on the users' freedoms and security.


I learned this lesson the hard way a long time ago when I built a service that uses ML (I was doing GPU powered ML in 2011) and social graph clustering to recommend "better" Facebook friends to invite in apps that use the FB SDK. They would send our API their users' FB access tokens (only required the default FB user permissions, too, for mutual friends), we'd issue calls to the FB SDK to get their social graph (completely on the issuing app's behalf), crunch it on our GPUs, and send back a sorted list of recommended friends to suggest to invite for improving virality.

Back in 2012, it wasn't prohibited by the ToS at all; we read and re-read the ToS over and over again to make sure so that we wouldn't waste our time building something "illegal."

Once I had the third largest social gaming company as a customer, Facebook's lawyers pulled the plug on it right away.

Turns out (according to Archive.org Wayback Machine), they added a new clause to their ToS two days before emailing us about our ToS violation:

"You must not give your secret key and access tokens to another party, unless that party is an agent acting on your behalf as an operator of your application. You are responsible for all activities that occur under your account identifiers."

Moral of the story: If they want to nuke you, they WILL nuke you (I'm sure Facebook wasn't too happy about my database storing millions of users' social graphs on it, and that was the REAL reason for the shutdown).

Even during our YC interview, a couple of the most legit original partners told us on our way (permanently) out the door "yeah, you guys are going to get shut down..."


I'm not an iOS developer, but even I know enough about Apple's rules to know that they would frown on any code that has the ability to patch itself without going through app review, unless it used the built-in JavaScript engine and/or was a web view.

I can't imagine any iOS developer who knows the guidelines and how your product works wouldn't have been worried.


Builds a business around a 3rd-party marketplace, gets surprised when they cut you off without warning.

We've seen this over and over. The platform risk should be seriously considered. Even AWS has demonstrated recently how dependence can be catastrophic.



Making a business that wholly depends on the decisions of another business is not a business you want to be in the business of operating, because situations like this arise and can immediately shut you down.


"Hi there -- I believe that title isn't quite accurate; Apple specifically is referring to behavior of a library called Rollout which lets people dynamically inject Objective-C/Swift. They are doing hot delivery of native, Objective-C code. It's really not about React Native nor Expo.

Expo (and the React Native library we use) doesn't do any of that. We also make sure we don't expose ways to dynamically execute native code such as the dlopen() function that Apple mentioned in that message. We also haven't received any messages from Apple about Expo, nor have we heard of any Expo developers receiving the same." - Exponent Team


Hi, I work on Expo (YC S16) and also am a core contributor to React Native.

Apple's message reads to me that they're concerned about libraries like Rollout and JSPatch, which expose uncontrolled and direct access to native APIs (including private APIs) or enable dynamic loading of native code. Rollout and JSPatch are the only two libraries I've heard to be correlated with the warning.

React Native is different from those libraries because it doesn't expose uncontrolled access to native APIs at runtime. Instead, the developer writes native modules that define some functions the app can call from JavaScript, like setting a timer or playing a sound. This is the same strategy that "hybrid" apps that use a UIWebView/WKWebView have been using for many years. From a technical perspective, React Native is basically a hybrid app except that it calls into more UI APIs.

Technically it is possible for a WebView app or a React Native app also to contain code that exposes uncontrolled access to native APIs. This could happen unintentionally; someone using React Native might also use Rollout. But this isn't something specific to or systemic about React Native nor WebViews anyway.

One nice thing about Expo, which uses React Native, is that we don't expose uncontrolled or dynamic access to native APIs and take care of this issue for you if your project is written only in JS. We do a lot of React Native work and are really involved in the community and haven't heard of anyone using Expo or React Native alone having this issue.
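
To make the fixed-bridge point concrete, here's a minimal sketch of what such a native module looks like (module name, method name, and the sound it plays are illustrative, not Expo's actual code). The only thing JS can reach is the method you explicitly export, and its body is ordinary Objective-C that Xcode and Apple can statically analyze:

    #import <React/RCTBridgeModule.h>
    #import <AudioToolbox/AudioToolbox.h>

    @interface SoundModule : NSObject <RCTBridgeModule>
    @end

    @implementation SoundModule
    RCT_EXPORT_MODULE();

    // JS can call exactly this method and nothing else; its body is
    // plain, statically checkable Objective-C.
    RCT_EXPORT_METHOD(playAlertSound)
    {
      AudioServicesPlaySystemSound(kSystemSoundID_Vibrate);
    }
    @end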


I do wonder, if they're _really_ fine with that, why they're not fine with browsers with different rendering engines on iOS.

Since they're not, I wouldn't have _too much_ faith in other things not being rejected.


Apple has been fine with WebViews in apps since the beginning of the App Store, including WebViews that make calls to native code, like in the Quip app. WKWebView even has APIs for native and web code to communicate. It's OK for a WebView to call out to your native code that saves data to disk, registers for push notifications, plays a sound, and so on.

React Native is very much like a WebView except it calls out to one more native API (UIKit) for views and animations instead of using HTML.

What neither WebViews nor React Native do is expose the ability to dynamically call any native method at runtime. You write regular Objective-C methods that are statically analyzed by Xcode and Apple and it's no easier to call unauthorized, private APIs.

With Expo all of the native modules are safe to use and don't expose arbitrary access to native APIs. Apps made with Expo are set up to be good citizens in the React Native and Apple ecosystems.
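
As a rough sketch of the kind of controlled bridge WKWebView offers (handler and key names here are hypothetical), the page's JS can only post messages to handlers the native side registered ahead of time:

    #import <WebKit/WebKit.h>

    @interface SaveHandler : NSObject <WKScriptMessageHandler>
    @end

    @implementation SaveHandler
    - (void)userContentController:(WKUserContentController *)controller
          didReceiveScriptMessage:(WKScriptMessage *)message {
        // Only this pre-registered entry point is reachable from JS.
        [[NSUserDefaults standardUserDefaults] setObject:message.body
                                                  forKey:@"savedData"];
    }
    @end

    // Wire it up when configuring the web view:
    WKWebViewConfiguration *config = [[WKWebViewConfiguration alloc] init];
    [config.userContentController addScriptMessageHandler:[SaveHandler new]
                                                      name:@"save"];
    // Page JS then calls: window.webkit.messageHandlers.save.postMessage("hi")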


Well, let's be clear here. I use React Native, and it would take me about 30m to write a bridged React Native method that could execute ObjC code dynamically, including accessing all the private APIs you could want.


Yeah. Objective-C is definitely really flexible. From my reading of the warning, the important thing is not to execute Objective-C dynamically or access private APIs regardless of whether you're using React Native, Cordova, a WebView, or even a bare iOS app.



Simple: JIT compilers are banned and so that excludes any modern browser's JavaScript implementation from iOS. But anyone using Apple's JavaScriptCore has nothing to fear.


The rules explicitly forbid any HTML renderer or JS interpreter aside from WebKit, JIT or no JIT. I believe all the popular third party browsers today still use the non-JIT UIWebView rather than WKWebView because the former gives you more control over the request cycle



Chrome uses WKWebView on iOS, so it's basically Safari with a different UI (as does Firefox on iOS)


Yes, I know. I was saying the OP of this thread was wrong because it's not UIWebView, it's WKWebView with a JIT :).


WKWebView has JIT. It executes the JS in a different process than the hosting app; that's where the JIT lives, and this special process is whitelisted to do JIT magic.


I understand that, the original OP was stating that none of the "browser skins" had JIT because they all used UIWebView, which isn't the case with the link I posted :p.


WebKit is the only acceptable browser engine on iOS. Firefox and Chrome both use WebKit on iOS.


How can you ship an app with access to private APIs? There is a private API usage scanning before you can submit for review.


The scanner isn't foolproof. You could fool it if you obfuscate your calls to performSelector well enough, for example

    // e.g. (where `json` is the parsed response from your backend):
    if (json[@"runThis"]) [self performSelector:NSSelectorFromString(json[@"runThis"])];

and make sure you don't send a runThis param while the app is in review.

Unfortunately for Apple's app review process, Apple's own Objective-C language and runtime have very strong dynamic reflection capabilities.


Apple could potentially close any loopholes here by scanning new apps for their API usage, checking for any 'bad' calls, and then writing the remaining discovered calls into a permissions file that is delivered with the app in the store.

At runtime, any API calls made by the app are checked against this file; if a new API call is found, then it must have escaped Apple's code scanning logic. The API call can be rejected and logged for Apple to improve their scanner.
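
A purely hypothetical sketch of that runtime check (nothing like this exists in iOS today; all names are invented for illustration):

    #import <Foundation/Foundation.h>

    // Allowed selector names, as discovered by Apple's (hypothetical)
    // review-time scanner and shipped alongside the app.
    static NSSet<NSString *> *allowedSelectors;

    static BOOL guardedPerform(id target, NSString *selectorName) {
        if (![allowedSelectors containsObject:selectorName]) {
            // Escaped the scanner: reject and log for Apple to review.
            NSLog(@"Blocked undeclared call: %@", selectorName);
            return NO;
        }
        SEL sel = NSSelectorFromString(selectorName);
        #pragma clang diagnostic push
        #pragma clang diagnostic ignored "-Warc-performSelector-leaks"
        [target performSelector:sel];
        #pragma clang diagnostic pop
        return YES;
    }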


This is a great idea, actually. Isn't Google already doing this via SELinux? You give the app a manifest of calls it's allowed to make, and if the call isn't in the manifest the call gets rejected?


SELinux is not that strong. It works on kernel syscall boundaries and some parameters thereof, and those aren't particularly fine grained. Service access is governed by a separate Google API, for example.

Moreover, any random app cannot enhance SELinux policy of the system.


There are many legitimate uses of calling methods and functions using reflection. Expecting to hit all of them in a short review process is comically optimistic for anything but the most simplistic of apps.

Your suggestion of enforcing this also makes no sense from a performance or privacy standpoint.


Fair warning that I'm not familiar with Swift

Obvious (to me) idea: have the private API access stored as data sent from the server at runtime, rather than code in the reviewed app. Basically the equivalent of eval()-ing a string for front-end javascript code.


James, do you know if they're going to go against the Exponent app per se?


Oh, no, I don't have reason to believe so.


Rather than speculating on what really boils down to semantics once the ToS is involved, maybe someone could actually try submitting an app and report back on whether it triggers the same failure?


I haven't heard any reports of Expo developers or React Native developers (who aren't using Rollout or JSPatch) getting this warning.


This needs a little elaboration. Cached JavaScript in any hybrid app is a security hole because it can be exposed through a jailbreak. Depending on how much of your business logic you've pushed into the JS layer to enable that "80% code sharing" that makes managers go all tingly, you may be exposing all kinds of things - cached access tokens, API keys and whatnot - to anyone who wants to install your app and mine its secrets.


Huh? Are you saying that the security hole is that a user could see stuff in the memory of their own phone?


Or someone that steals your phone, or picks it up when you lose it somewhere. Yes, there's the lock screen and passcode but...

http://www.wikihow.com/Bypass-iPhone-Passcode


At that point your phone is fucked, anyhow. If you've lost physical control of the device and an attacker has broken the lock, you're compromised in much bigger ways.


"This bypass won't work on iPhones running iOS 9.3 and up"



How is this any different from native code? AFAIK, you can access an app's compiled native code on a jailbroken phone. Sure, it's more of a pain to parse through, but security through obscurity isn't security.


Definitely, running "strings" on a binary is about as easy as finding the JS for a hybrid app (WebViews or React Native). Depending on your experience it could be easier to extract API keys from an IPA than from JS.

In either case the root issue is about sending secrets like an unscoped API key to the client. "Client secret" is an oxymoron in this context regardless of the programming language.


You'd have to know a few things first, like (1) the IPA is a ZIP file, (2) the ZIP file is actually of a directory and (3) you can dump the actual code in the JS files (if they're in the bundle directory) much more easily than you can hunt through the binary for strings that might look like an API key.

The API key is actually the least of the hazards, since you can hide that in the keychain. Having source code for your business logic shipping in your app is not good; having it be hackable business logic (by changing the JS in place) is very not good.


Shipping source code with business logic (assuming the definition of source code includes obfuscated JS) is how the entire web works today! With WASM the code that is shipped will be even further away from the original source code and really not so different from downloading ARM from the App Store or Java/ART bytecode from the Play Store.


Not the entire web. eBay and Amazon don't put all their algorithms in the browser; PayPal doesn't either. They hide their company jewels behind their API where they're a little more secure. What you see in the browser is the presentation layer code.

Hybrid apps could achieve that same kind of relative business logic security, but at the cost of pushing more and more of the actual business logic behind an API and not in the JS in the app. At that point, the benefits of code sharing (such as they are) get fewer and fewer since it's really pretty easy to write API code in Objective C, Swift, Java or Kotlin.


You aren't "parsing" through compiled (and linked) code; you're decompiling it, which is a much trickier thing to do and get right. Having your app logic in Javascript in the sandbox cache is just serving it up on a plate.


I'm having trouble seeing this as a valid argument when applied to cached JavaScript within a mobile browser. The same security practices apply.


They do, and it's for this reason that the really important stuff isn't in the web page at all - it's behind the company's API. The "business logic" that you can safely push out to a browser or hybrid app is somewhat limited, which means there is a hard upper limit on the real code sharing savings you can get with hybrid apps versus full native.


You as an end user jailbreaking your own phone is not a "security hole".

I'm not aware of any non-tethered jailbreak for iOS 10


I think he's talking about application wide (not user specific) "secrets" in the javascript layer.


Yes, I am. As for untethered jailbreaks, http://pangu8.com/10.html mentions a few.


Yes it is, jailbreaking bypasses critical security features of your phone. Granted, it's required to run certain kinds of software but there are better ways to run your own code on your phone (like getting your own developer certificate) that preserve the security model.


The problem with Apple that you and your customers need to be aware of (or concerned about) is that once a number of your customers sidestep Apple's policies w.r.t. pushing or changing features via JavaScript, Apple will change their policies to close that loophole. It's only a matter of time before more companies take advantage of this path to sidestep App Store approvals.


As a customer I'm pretty baffled that some developers have deliberately bypassed something that I see as a security measure.

If an app is modified on the fly to use an undocumented and maybe "forbidden by Apple" method in order to bypass security features, or worse, spy on me, I'm clearly not OK with that.

Do you really think the Apple ecosystem works because customers see the App Store as an evil cage and the external developers are all angels with good intentions?


It's actually a double-edged security sword.

By not allowing developers to patch their app 'on the fly', directly, without going through a new version in the App Store (which is a very lengthy process, almost forever in security terms), Apple effectively protects their iOS users from malicious code (not from the dev, who is probably to be trusted if the app is already installed, but from MITM attacks and the like). However, at the same time they rule out very hasty security patches, without which a vulnerability may compromise the device and all associated data and accounts entirely.

So there's no best of both worlds, but undoubtedly many aspects to consider. Anecdotally, I find that it's often better to trust the developers of an app to maintain their own thing, the OS only being a facilitator, provided the user has control (UAC on Windows, permissions on mobile, etc.) [note: obviously you trust the OS vendor to patch said OS, it's just a particular kind of app]

edit: wording


I believe a carefully crafted update process should mostly mitigate that risk.

Also, the current policy doesn't prevent app developers from implementing a "kill switch" that prompts the user to update (or wait for an update) at the splash screen and aborts loading the malfunctioning version.


Though the security justification here is limited to private APIs and native code pushing, the first few sentences of the rejection definitely suggest that the "spirit" of the terms covers any significant functionality pushing at all. I wouldn't be surprised if they ramp up enforcement on that.


This appears to affect apps using JSPatch which is not pushing Objective-C but JavaScript.

Source: https://github.com/bang590/JSPatch/issues/746 (in Chinese)


The key part of JSPatch is that it exposes arbitrary, uncontrolled access to native APIs. You could use it to call private APIs even because Objective-C doesn't distinguish between public and private APIs at runtime, so Xcode's compiler checks and Apple's static analysis can't anticipate which APIs are possibly called.

In contrast, React Native doesn't expose uncontrolled access to native APIs. You write regular Objective-C methods that are callable from JavaScript, and within your Objective-C methods you write regular Objective-C that is statically checked by Xcode and Apple.


I'm not sure why people are concerned about React Native being targeted by this new enforcement of the rule. It is not the same thing at all.


It shares some components, like JavaScriptCore.

The difference is the JS-to-native bridge. For React Native it is a fixed bridge that can only change with a change in the binary.

For Rollout, they can execute any native code with just an update to the JS.


The main reason for React Native not being impacted is that Facebook + Instagram + Airbnb + SoundCloud + ... are using it, and Apple cannot justify to their userbase rejecting those favorite apps for a technical reason.


No, it's not. It's because React Native is a totally different technical solution.


"You write regular Objective-C methods that are callable from JavaScript," - the objC function that I am calling from JS can be capable of calling a private API received by it in an argument - that breaks the entire claim of reactNative not supporting calls to private API. React Native must also go down.


If you deliberately write code that calls private APIs that are exposed to JS in your React Native app, I expect only your specific app to receive this warning. The same would be true in an app that doesn't use React Native at all, as we've seen with apps using Rollout.

The warning is not about React Native, it's about exposing uncontrolled access to native APIs including private ones and React Native doesn't do that.


See the difference between a regular human, who in theory can take a knife and violently rob or kill someone, and a human-like robot that can be easily programmed to do the same. Which one has to be isolated?

It is all about intent. If an app allows that, it is banned for that behavior, not for having React in its kitchen. But if you have an armed drone in there, then the police have questions.


It seems that most people are overlooking one of the more significant points Apple have made here:

"Even if the remote resource is not intentionally malicious, it could easily be hijacked via a Man In The Middle (MiTM) attack, which can pose a serious security vulnerability to users of your app."

Source: https://github.com/bang590/JSPatch/issues/746


I'm not buying the MITM argument in general. If remote code is downloaded via HTTPS, it could not be hijacked, at least not easily.


HTTPS is sufficient against MITM, until someone disables all verification to use their self-signed cert, or adds their poorly-secured "CA" cert to the allowed CAs for the download, or adds a weak cipher to the list. Do you trust every app developer to do those right (if they even use HTTPS!)[0], or would you rather trust Apple to get it right in the centralized system they designed for app updates for all apps?

I'm not even fond of Apple, but I'd rather trust them, and I'm glad they're protecting their users.

[0] Caveat: I don't know how likely/possible these are to occur on iOS. I assume a sufficiently motivated & misguided developer could do them within their own app's context.


> Do you trust every app developer to do those right (if they even use HTTPS!)[0]

If I'm running an app that includes native code and accesses data from the outside world then I'm probably trusting that app developer to write C code that doesn't contain arbitrary code execution vulnerabilities, which is much much harder than using HTTPS right.


"HTTPS is sufficient against MITM, until someone disables all verification to use their self-signed cert, or adds their poorly-secured "CA" cert to the allowed CA's for the download, or adds a weak cipher to the list. "

Or the attacker controls or can coerce a Certificate Authority in the OS's root list - like, say, just about any nation state...

Most apps - I suspect - are not pinning their TLS certs. Apple has already gotten into a very public fight with the FBI.
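
For reference, pinning isn't much code. A minimal sketch with NSURLSession, assuming the pinned certificate ships in the bundle as "server.der" (a real implementation would also run the full trust evaluation first, handle certificate rotation, etc.):

    - (void)URLSession:(NSURLSession *)session
        didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge
          completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition,
                                      NSURLCredential *))completionHandler {
        SecTrustRef trust = challenge.protectionSpace.serverTrust;
        SecCertificateRef leaf = SecTrustGetCertificateAtIndex(trust, 0);
        NSData *remote = CFBridgingRelease(SecCertificateCopyData(leaf));
        NSData *pinned = [NSData dataWithContentsOfFile:
            [[NSBundle mainBundle] pathForResource:@"server" ofType:@"der"]];
        if (pinned && [remote isEqualToData:pinned]) {
            // Leaf certificate matches the bundled one: proceed.
            completionHandler(NSURLSessionAuthChallengeUseCredential,
                              [NSURLCredential credentialForTrust:trust]);
        } else {
            // Anything else (including a coerced CA's cert): refuse.
            completionHandler(NSURLSessionAuthChallengeCancelAuthenticationChallenge,
                              nil);
        }
    }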


You see, this whole thing:

"Even if the remote resource is not intentionally malicious, it could easily be hijacked via a Man In The Middle (MiTM) attack, which can pose a serious security vulnerability to users of your app."

Basically says "only we, Apple, can do HTTPS right; you can't, and even if you try you can easily be MITMed". Which I don't agree with.

What you say is correct, but it's not the argument I'm criticizing. Your point is that they don't trust developers to implement secure loading of code, don't have the technical means to control it, and can't or don't want to check it in review. But that's completely different from "you could be easily hijacked if you're not Apple".


I'd trust Apple to do right more than I'd trust a small team at a startup trying to deliver features at breakneck speed. I really like the fact that Apple is looking out for its customers here.


In an ideal world where apps check/pin certificates and don't disable cert checks to make self-signed certs work in test environments you'd be right. If only this were reality.


I suspect I've left a lot of test devices behind me with Charlesproxy MiTM root certs installed (I wouldn't be _too_ surprised if one of the phones in my pocket right now has that...)


I'm just pointing out that "remote resource ... could easily be hijacked via a MiTM attack" is technically incorrect. The problem is not the remote resource per se, the problem is trusting developers to implement secure loading of resources. Which is a completely different argument.


Depends on who you're expecting the MiTM attack to be executed by.

Are _you_ secured against, say, an attacker who works at Verisign and can create a valid cert for api.yourdomain.com? Or an attacker who has a buddy at GoDaddy who can subvert your DNS records and trick LetsEncrypt into issuing a valid cert for api.yourdomain.com? Or an Elbonian teenage hacker who's just got your AshleyMadison password from Pastebin, used it to log into your Gmail account, and taken over your DNS registrar account to get themselves a valid SSL cert?


And all of this is "easily"?


Like I said - depends on who you are.

For me? Not really "easily" (tho a wifi pineapple in a coffee shop where FE Devs hang out attempting to MiTM them with the Charlesproxy root CA would be a fun experiment... Which, of course, I'd never do - because that'd be bad, right?)

For someone at the NSA or CIA or Mossad? Sure it's easy. For someone a little further down the LEO "cyber" chart like FBI, probably not "easy". For a local beat cop or council dog catcher - nah, definitely not "easy".

For a very-dark-grey pen tester or redteam who're prepared to phish your email password and use it to p0wn your dns registrar? They'd probably call that "easy"... (Hell, I've got a few pentesting friends who'd call that "fun"!)


Seems like people have been aware of concerns about violating the TOS with these hot patch frameworks.

From April 2016

>>Rollout is aware of the concerns within the community that patching apps outside of the App Store could be a violation of Apple’s review guidelines and practices. Rollout notes both on their FAQ site and in a longer blog post that their process is in compliance.

https://www.fireeye.com/blog/threat-research/2016/04/rollout...


A ton of games do this and it is incredibly annoying. I don't want to download an update, then launch the app and have to download another update. I only wish the same restriction applied to my Android device.


Most likely, most games are updating only game-related data and graphics files. Very few games actually use the internal scripting that would be needed to do code updates.


The only app I've got that appears to actually update itself without going through the AppStore is the HSBC mobile banking app. I'd be interested in hearing the discussions going on between Apple and HSBC at the moment.


Judging by how sluggish and annoying the HSBC app is, I think it is a web app framed in a thin launcher from the app store.

I.e. it downloads a bunch of javascript/html/css and that executes within a UIWebView/WKWebView. Using caching and localStorage, you can construct such an app to not need to download everything on each launch.

The reason that's allowed is because everything executes within a sandboxed browser environment. No native code is downloaded.


> The reason that's allowed is because everything executes within a sandboxed browser environment. No native code is downloaded.

That's the same thing Rollout does. In fact, iOS apps can't download and run native code. The OS won't let you mark pages as executable unless they're appropriately signed, and only Apple has those keys.


If they're using JSC, then they can definitely download JS and execute native methods. They could subclass any UIKit object and make it conform to JSExport. Done, now they're "running native code."
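
For the curious, a minimal sketch of that JSExport route (the class, protocol, and method names are hypothetical):

    #import <JavaScriptCore/JavaScriptCore.h>

    // Anything declared in a JSExport protocol becomes callable from JS.
    @protocol ConsoleExports <JSExport>
    - (void)log:(NSString *)message;
    @end

    @interface ConsoleBridge : NSObject <ConsoleExports>
    @end

    @implementation ConsoleBridge
    - (void)log:(NSString *)message { NSLog(@"%@", message); }
    @end

    // Later, downloaded JS can drive the exposed native object:
    JSContext *context = [[JSContext alloc] init];
    context[@"bridge"] = [ConsoleBridge new];
    [context evaluateScript:@"bridge.log('hello from downloaded JS')"];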


Sure, and an app displaying a web page using WebView can provide hooks that allow doing that sort of thing too. Neither one is downloading native code.


Only UIWebView, which is going away soon.

But surely you can see the difference between executing limited actions inside a web view, and making available any native method to a web view.


WKWebView allows the app to execute arbitrary JS within the loaded page, and intercept URL loads and other actions made by JS code. That's all you need to build a bridge.

I don't see any fundamental difference here. Both (UI|WK)WebView and JSC allow bridging. Neither one grants full access to JS code automatically, the programmer has to put some effort into it. And even if there is some important difference, neither one is native code which is what I was disputing above.


Is it possible to do that and use the fingerprint sensor for login in a secure way? I thought the same as you until they enabled Touch login.


There's no reason why they can't have the native portion do the authentication and just have it return a signed token to the JS portion which forwards it to the server.
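
Roughly like this sketch using LocalAuthentication (the token minting and the onNativeAuth callback are hypothetical and app-specific):

    #import <LocalAuthentication/LocalAuthentication.h>

    LAContext *context = [[LAContext alloc] init];
    [context evaluatePolicy:LAPolicyDeviceOwnerAuthenticationWithBiometrics
            localizedReason:@"Log in to your account"
                      reply:^(BOOL success, NSError *error) {
        if (success) {
            // Hypothetical: mint/fetch a signed session token natively,
            // then hand only the token to the JS layer.
            NSString *token = [self fetchSignedSessionToken];
            [self.webView evaluateJavaScript:
                [NSString stringWithFormat:@"onNativeAuth('%@')", token]
                           completionHandler:nil];
        }
    }];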



Supercell's games and a bunch of F2P collect-ish games also do that: when you open the game they have an update process. I'm reasonably sure that only updates static assets though, stuff like description files and graphic assets. It's actually pretty useful, as it lowers the payload of the core engine and lets them do much smaller updates compared to having to bundle it all. That's especially important with things like Unity frameworks, which are not delta-updatable in the store (hence Hearthstone's 2GB downloads every time they add a cardback or nerf a pair of cards).


I bet it's some code too.

Worked on an F2P mobile game; we bundled a tiny Lua engine, and then pushed various promotion screens as a bundle of resources (images) and Lua code (screen layout, its preconditions, and what happens after you click - the game provided a small API that the Lua called).


Hey, unrelated, came across a comment of yours from ~5 years ago: https://news.ycombinator.com/item?id=2949645

Would you mind elaborating on why Erlang programs tend to have FSMs? LYSE has a chapter on how to use gen_fsm, but I've really been unable to find a great answer as to WHY you would want to use it.


I was thinking of the HSBC app the whole time while reading this. In my opinion Apple should reject that piece of shit and force HSBC to write a native app that works properly.


Google definitely forbids self-updating apps on Google Play. But I'm not sure how well this is enforced.


Horribly. I get a few games from the Japanese market, and almost every one requires an immediate internal download and update.

Although those updates never trigger the Android update service, so I'm not sure if they are just downloading more resources or if they are able to request new permissions (I would like to assume not).


Games updating DLC is nothing new and is not what this is about.

Google did recently change the Android permission model; previously, apps had to request all their permissions at install time and it was all-or-nothing (and frankly, hardly anyone bothered to look them over.)

Now, certain permissions have to be requested when they're needed (at least for recent versions of the SDK) and the user can choose to allow or deny. But an app can't grant itself new permissions without going through the official update process.


See I do trust that - but only to an extent.

I always wonder, when seeing one of these updates, if there is a 0-day that can bypass that. On a technical level I know I run the same risk with my PC, but at the same time, it's more difficult for me to examine processes and startup items on my Android device.


Realistically, if they've written their own native code that parses their updates then almost certainly. If they're using an established library then maybe not (likewise if they're using a decent language, but unfortunately no-one does that). I'm reminded of the example at the bottom of http://www.gamasutra.com/view/feature/194772/dirty_game_deve... where the game had a buffer overflow in displaying its own EULA.


The internal download and update is allowed when it consists of media resources and such, which is not natively executed code.


Neither Google nor Apple nor any other game platform is going to stop games from downloading new content. That's just how games work these days.

The reason this type of "hot code push" is more attractive on iOS is because the app review process is much longer, so publishers look for ways to skirt it. Looks like Apple is just starting to enforce it more.


From https://rollout.io/how-it-works/ :

Does Rollout comply to Apple’s Guidelines?

    Yes. As per Apple’s official guidelines, Rollout.io does NOT alter binaries. ... With over 50 million devices already running our SDK, it is safe to say that Rollout complies with with Apple’s development and App Store guidelines.

Ouch. Just like that, the company's future is in danger.


> Rollout lets you push code-level changes to native iOS apps, without waiting on the App Store.

What did they expect when their entire business model is based on something that's literally the opposite of what the review guideline allows?


Uber is still doing kind of ok.


Uber isn't going up against one company though. It is going up against 1000s of governments.


Uber may be good at skirting local laws, but as a company it's a ticking bomb.


Uber is not a ticking time bomb if it is able to convince a sizable portion of the regulators of the markets it operates in to let it continue to operate for the foreseeable future.


What does Uber do?


They run an illegal taxi service. There's an app for it too.


Half of Uber's activity is illegal (UberX in the States; it was called UberPop in France but it got banned). The other half is perfectly legal and convenient.

However, due to their miserable practices I am glad that almost all (if not all) drivers for Uber also drive for the local taxi-in-an-app service.


It's legal in many cities.


It's become legal (after the fact) in several cities, and Uber has allegedly intentionally attempted to subvert attempts by law enforcement to investigate their operations.


Some cities it was legal to begin with, btw.


Only following the law in "some cities" is not a good thing.


"Illegal" doesn't mean "I have a beef with it".

Uber is legal in the vast majority of cities.


"Mostly legal now" doesn't change the fact that it was illegal in the past and is still illegal in cities that are not in the "vast majority".

Not to mention that they (allegedly) attempted to subvert law enforcement attempts to investigate them, by "greyballing" law enforcement. Whether that is considered obstruction of justice is a secondary question to whether it is incredibly shady.


I know. I'm using "illegal" to mean "against the law".


So is Airbnb...


The first time I saw Rollout I was shocked it wasn't already banned by the App Store. No matter what they say, I can't imagine how they could do what they claim to do without flagrantly violating the guidelines.


I'm guessing there are two things Apple is worried about. The first is using hot code push to change the purpose of the app after release, e.g., switching a business app into a video game. The second is using hot code push to violate app store review guidelines, like the use of private APIs.

You can do hot "code" push techniques that allow the first but not the second, by letting apps update HTML and JS that calls back into pre-existing native code. That's what Cordova / PhoneGap does. I'd guess that Apple will just ban the app and the developer if they catch it.

It appears that Rollout started using some API that would enable it to do the second, and Apple is preemptively making sure that it doesn't happen. The wording of the rejection is based on passing computed parameters to introspection routines.
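
In concrete terms, "passing computed parameters to introspection routines" looks something like this (payload is a hypothetical dictionary parsed from a server response):

    // The class and selector arrive as data, so neither Xcode nor
    // Apple's static scanner can see what will actually be called.
    Class cls = NSClassFromString(payload[@"class"]);
    SEL sel = NSSelectorFromString(payload[@"selector"]);
    if ([cls respondsToSelector:sel]) {
        #pragma clang diagnostic push
        #pragma clang diagnostic ignored "-Warc-performSelector-leaks"
        [cls performSelector:sel];
        #pragma clang diagnostic pop
    }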


> I'm guessing there are two things Apple is worried about.

I'm sure it worries about them but there's a much larger, riskier scenario.

Once you start downloading and executing binary code from untrusted sources (i.e., not the App Store) anything can go wrong.

1. An iOS app doesn't care about security, and it hot-loads code from some non-HTTPS source and gets man-in-the-middle'd.

2. An iOS app hot-loads code in a secure manner, but the server from which the code is served becomes compromised.

3. A malicious employee at an iOS app vendor pushes harmful code out via her company's app.

Now, I'm not a fan of Apple's policies. I think there should be a "guys, I know what I'm doing" mode where I'm allowed to download code from untrusted sources. Just like Android or MacOS.

However, I sympathize with them here. For nearly a decade people have been downloading code from the App Store with the understanding that it is safe to do so. Even I appreciate this much of the time... I'm an engineer but I'm busy. I can't audit every app I download. I wish there were other options, but I find huge value in the fact that I don't have to worry about an App Store app screwing my device. And it's a big reason why I recommend iOS despite its flaws to older family members.


> Now, I'm not a fan of Apple's policies. I think there should be a "guys, I know what I'm doing" mode where I'm allowed to download code from untrusted sources.

This exists today and has for a long time, it just costs you money for this "privilege".


You can compile and run apps on your own devices without a paid developer membership: https://www.google.com/amp/s/9to5mac.com/2015/06/10/xcode-7-...


They've nerfed this so that the app will only run for a week before you must push another build.


You can drop a .ipa file into iTunes and load/run the app on a phone that's syncing to that copy of iTunes...


Getting the .ipa signed still costs money (or has a week-long timebomb), and you can't run unsigned code.


> The second is using hot code push to violate app store review guidelines, like the use of private APIs.

I've never understood this part. Why doesn't iOS simply prevent apps from calling private APIs?


You aren't supposed to call private APIs in your code, but your app is definitely making private API calls all the time since the libraries provided by the platform are running in-process.


Preventing it in a technical way is far from easy: If your app calls public API X, which as part of its implementation calls private API Y, your compiler only needs a declaration of Y to output the function call / ObjC message send. Nothing in the language prevents it, and the code is executed natively unlike Java.
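
For illustration, "private" is just a documentation convention; the compiler happily emits the message send given any declaration. (The selector below is invented for this example, not a real private API.)

    #import <UIKit/UIKit.h>

    // Declare the (hypothetical) private method so the compiler knows it.
    @interface UIDevice (IllustrativePrivate)
    - (NSString *)_internalDeviceIdentifier;  // invented for illustration
    @end

    // The compiler emits a perfectly normal message send; if the selector
    // exists at runtime, it runs like any other method call.
    NSString *value = [[UIDevice currentDevice] _internalDeviceIdentifier];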


They control the software and the hardware, though: it seems possible to allow only a specific region of memory (i.e. their public API) to call into another specific region (the private API), and segfault on anything else that tries?


At best all they could do is change literally every single private function call everywhere to inspect the return address and see if that return address is in a system framework or is in the app image. But this would be huge overhead, a real pain in the ass, and also not even reliable, because all you have to do is pick a public function, figure out the address of the `ret` instruction, push your $pc onto the stack, and then call the private function passing the address of that `ret` instruction as the return address. The private function will see that this address is in a system framework, and so will work, and then it will return, passing control to that `ret` instruction which immediately returns back to your real caller function.

So no, there's no way for Apple to technologically make it impossible to call private functions. The only actual solution there would be to completely rewrite the OS such that literally every call into a framework that an app makes actually goes over IPC (so that way apps can't even attempt to invoke private functions since they won't be linked into the app), but that would probably be crazy slow which is why nobody does that.


Many private APIs are methods on objects which are part of public APIs, so there's no "region" of memory which cleanly corresponds to private APIs.


But they can theoretically create that, as they own the entire chain: dev env, tools, OS, software, hardware (CPU included). I know it is not currently the case, sure, but my point was that they could do it.


They "own" a compiler, but not all of them.

Say they implement the scheme you mention in clang and the LLVM linker, so the function bodies of their public APIs end up placed in that privileged region of memory, and those of their private APIs end up in the restricted region.

Nothing prevents gcc from producing object files that tell the linker "this user function is part of Apple's public APIs". And nothing prevents people from using a different linker anyway, one that would put private API body functions out of the restricted region of memory.

The only real way to achieve that would be to move all their frameworks to the kernel, which would be all sorts of problematic.


> Say they implement the scheme you mention in clang and the LLVM linker, so the function bodies of their public APIs end up placed in that privileged region of memory, and those of their private APIs end up in the restricted region.

Agreed that this design is fundamentally flawed, but that's because the coder is providing the implementations of private code. Providing that is Apple's job.

Put privileged code into a dynamically-linked library that Apple provides. Only code in that block of memory can call private APIs. Pretty straightforward to implement, and requires nothing fancy from the kernel.

Of course this only works if you can prevent the attacker from corrupting memory.


I don't know if iOS does randomization of loading addresses, but if so, that'd be a disadvantage.

And well, in any case they need to maintain compatibility with current apps for who knows how many years.


> I don't know if iOS does randomization of loading addresses, but if so, that'd be a disadvantage.

Such a scheme wouldn't stop ASLR. The loader just needs to tell the verification code where it put the privileged libraries.

> And well, in any case they need to maintain compatibility with current apps for who knows how many years.

Do they? I think Apple could easily order everyone to switch over to a more secure compiler with a one year deadline.


Presumably because many private APIs are used behind the scenes by public APIs and the security model must allow applications to run them.


Because some of these APIs are useful in an enterprise app setting that aren't distributed via the App Store. Like Disney applications on their turnstile devices at Disney World.


[flagged]


Please comment civilly and substantively on HN or not at all.

https://news.ycombinator.com/newsguidelines.html


Which is disingenuous to say, because the guidelines forbid using these methods to make "significant changes to the app"; they don't mention modifying the binary.


Heh, "We're in so many apps... it MUST be allowed!"


To be honest, I think the fact that they're used by so many big players is the reason that Apple is requiring removal in future updates rather than outright removing for ToS violations.


Very little sympathy for anyone who didn't see this coming, I'm more just shocked they even got it past Apple in the first place.

Even if I had the idea and the technical skills to pull such a product off I wouldn't even bother trying for fear it wouldn't even get past the first hurdle.


It won't be the first $10m+ company killed by Apple's App Store policy. I hope it will be among the last though.


So you're routing for ${FutureAbusiveSpamCompany}?


Rooting. As in 'root password acquired' ;)


Ha, I'm British and it shows.


I would be cashing out now...!


Just imagine the www didn't exist and Apple already had iOS and apps. If someone came up with the idea of the www and an app called "web browser", would Apple accept it in the App Store? They would only accept it if they built it themselves.

At some point, is there a risk that Apple may also start to ban the web browser, despite it being under strict control on iOS?


This is why you don't build on someone else's platform.

Apple/Google/Platform Owner will always do what's right for them, not the customer, and not the developer, for example banning Amazon from selling books in their kindle app, not allowing competing browsers (they recognise the power of the web as a platform), not allowing competing sales mechanisms (where they don't get a cut), and here not allowing developers to update their apps except through the store mechanism. I have some sympathy with Apple here, and see why they're doing it (they have to control what software is installed for security reasons as well as platform protection), but this is all about control over what you install on your own device. Sometimes their actions will be in the best interests of customers, even if not the best interests of developers, but most of the time their actions are simply aimed at preserving their control of the platform and control of the money flowing through it.

The web is the one exception to this rule which works across all platforms and devices (because it is so dumb and simple), and has survived attempts to corral it to a walled-in commercial offering remarkably well.


Try making e.g. a game that is not on someone else's platform. Make a game that is not for PC or XBox or PS or Nintendo or iOS or Android or Facebook or Java or whatever. Count money. Oops, there isn't any.

Or, try making a Photoshop clone, CAD software or similar without being on someone else's platform. Oops.

"it don't work"

:)


> Make a game that is not for PC

Or is the right challenge "make a game for PC that isn't in the microsoft app store"? Which is in fact no hindrance at all.

It's not really "someone else's platform" just by using someone's software. They have to be in control.


You are confusing platforms, though. The various app stores are vastly different from how it works on Windows, and on PCs in general for that matter. I get to choose all the way down to the OS; very few phones get that kind of choice. On top of that, the iPhone App Store is one of the most controlling out there. Even Android allows you to install out of band, and to install other app stores and that kind of thing.


I hope you are wrong. Age of Ascent did play tests that were rather fun (and impressive) when I caught them, and that's in-browser no-plugin.


There are many games not on the platforms you listed. Those games are called board games, and there has been a resurgence of them lately.

As for video games, yes, you'll be beholden to the platforms you build under. This is why I say to devs: always build cross-platform!


I agree - they're definitely putting the customer's security first. If people want to use an open environment, the web browser is always available on iOS anyway. Apple are completely within their rights to restrict their application platform.


>Apple are completely within their rights to restrict their application platform.

it is like to say that GM is completely withing their rights to restrict where you can drive your GM car. Mind you, that is coming in pretty near future too - giving all the computerization/connectivity/self-driving of the cars which would make the cars into GM's "application platform" with DMCA protecting such a platform too like it protects Apple/Google/FB/etc...


Actually it is more like GM preventing you from updating some software components in your car, which almost all car manufacturers do.


Not the same thing. It's like GM telling aftermarket accessories manufacturers that if they want to sell their products through GM dealers that they must meet GM standards. The inability to use Rollout has no real impact on consumers.. except by improving safety.


>if they want to sell their products through GM dealers that they must meet GM standards.

i wonder whether you're intentionally skipped that part or just don't know that in case of non-jailbroken iPhone the "GM dealers" is the only way to get "aftermarket accessories". There is no "if they want to sell their products through", instead there is "if they want to sell their products at all".


They already have, to some extent. The only web browser (or, well, rendering engine) allowed on iOS is one using the WebKit they provide. This is the reason it took ages for Firefox to appear on iOS, and even today it isn't actually Firefox and thus can't make use of features usually in Firefox, like supporting websites using CSS Grid, or WebAssembly, or Service Workers, or WebRTC, or...


This is misleading FUD disguised as an innocent question. The strategy is to pose a seemingly reasonable hypothetical that, in reality, encourages folks to disregard common sense, ignore security best practices, and instead embrace their worst suspicions about the role that ecosystems play in modern technology.

Reject this naive approach to reality. No, we will not disregard Tim Berners-Lee's role, timing, or place in history. We will not pretend that Cupertino should ignore their responsibility to prevent arbitrary code execution on one of the most widely deployed platforms on the planet. If anything, we celebrate that despite Apple being a giant target for quite a bit of criticism these days (some deserved, some not), for the most part, people on this thread recognize what a giant, unacceptable vulnerability Rollout.io is.

Tech must advance beyond adversarial, animus-based, abrasive reactions to other operating systems, ecosystems, and the like. No time like the present and no better opportunity than some ignorant HN strawman.


I strongly doubt Apple would ever ban web browsers. The amount of browsing done on iOS is too significant; it wouldn't make sense to ban them.


At this moment Apple is effectively banning web browsers except their own safari. The other browsers you see on iOS are just a wrapper over the native webkit view.


Probably a controversial opinion, but seeing how Safari is the only browser behaving correctly on macOS (performance- and battery-wise), I'd assume only Apple has the motivation to make a correct browser for iOS.

Imagine the kerfuffle if Google had Chrome on iOS. 2% of "PC" users complain of Chrome hitting their battery hard on macOS. iOS has a much bigger market share.

Competition is healthy, I agree. But sometimes the best interest of the vendor and the users don't align, and I'm more confident Apple is prioritizing battery life and performance over other things, while Google will prioritize those other things (like ads and data collection). I can't imagine them allowing adblockers on iOS (exactly as they don't on Android, afaik)


why not let the consumer decide their browser of choice? Apple isn't prevented from creating battery efficient code by allowing others to write a browser.


The consumer is free to choose Android.


From a user point of view, I see many differences and even paradigm shifts across WebKit-based browsers on all platforms. When you call a browser 'just a wrapper over the webkit view', I don't even know where to begin. The specific engine is the last thing to consider today.


OP proposed a world in which the web starts now, so there is no browsing being done on iOS in this scenario.


Apple would not, because they don't allow apps to provide a JavaScript runtime.

It would be possible to build a browser without JavaScript though.


Possible, but forbidden by Apple as well. ( section 2.5.6 of https://developer.apple.com/app-store/review/guidelines/ )


Yes, but if the web was a new invention that probably wouldn't be in the rules.


I wonder if this is going to hit non-native code push solutions like React Native? Or if Apple is going to start cracking down on apps like Facebook, Twitter or Pinterest that do a lot of A/B testing.


To date, Apple's Developer Program License Agreement states (in Section 3.3.2):

> Except as set forth in the next paragraph, an Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exceptions to the foregoing are scripts and code downloaded and run by Apple's built-in WebKit framework or JavascriptCore, provided that such scripts and code do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.

Personally, I think that Cordova hybrid apps will continue to be okay, but I don't know about something like React Native...


That actually makes it sound like it's OK to hot-deploy arbitrary new JS code to cordova/ionic apps like bug fixes and new features as long as you don't pull a bait and switch and turn your todo list into a camera or something.


CodePush doesn't push native code. I think Rollout uses swizzling to send native-level changes over the air to your app, using JavaScriptCore to inject them at runtime. This always seemed pretty sketchy to me and I can see why Apple would be annoyed by it (it allows you to push changes which can call private Objective-C APIs).

React Native CodePush does not push any native code, just JavaScript. Out of the box it does not allow you to push code that can call private APIs at runtime.


RN runs with JavaScriptCore.


rollout as well...


This was my immediate question too. Microsoft offers a service called CodePush (https://microsoft.github.io/code-push/) for React Native and Cordova apps that presumably could get caught by this. I don't have enough mobile dev knowledge to know whether or not it uses the same APIs that were mentioned in Apple's rejection letter, though.


Yep. I use code push in several apps. It's gonna be fun times next time I need to submit an app. I think it might be safe because it doesn't push native code and rollout is all about pushing native changes.


PM on the CodePush team here. The rejection notice seems to explicitly call out the native methods that are the cause of the issue. CodePush cannot inject private frameworks or expose any methods beyond those React Native already exposes.

I would also recommend not using CodePush to completely what an app does.


Correct me if I am totally wrong here, but isn't the issue not with introducing new private frameworks or exposing new methods but with changing the behavior of the interpreted code that interacts with already exposed frameworks/methods? The relevant language seems to suggest that you could still be in violation of Apple's TOS if your script(s):

>change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.


Can you get confirmation that CodePush won't be impacted?


I accidentally what the app does.


I believe the guidelines allow JS functionality like this as long as the functionality does not change. This seems like they are doing it for security reasons, so I'd think React Native is different. Not sure though; someone else might know more.


Correct me if I'm wrong but you can use React Native without any code push features, can't you?


Sure but a lot of React Native people use code push. I assume React Native is not affected as code push is just pushing JavaScript and not using JavaScript core to swizzle native code into your application like rollout does.


CodePush PM here - note that CodePush cannot push any native code to the app.


Can it call arbitrary native code? I think that's probably going to be an issue if it can (i.e., if it can do loadFramework("baz").getClass("foo").callMethod("bar")).


I posted some thoughts on why this isn't about React Native here: https://news.ycombinator.com/item?id=13818211.


I think in these instances the code is pushed in a build, and then toggled server-side. Enabling features server-side isn't against the ToS, but pushing new app code is.



I really kinda see no problem with Apple doing this. Hack the endpoint the app checks for new code, push malicious code. Or fool the app into checking for new code at your server, push malicious code.

I mean I read the headline, thought this sounded eminently sensible, then read the story and saw it was a framework for doing this, and my inner mental model of my security researcher girlfriend leaned forward, started rubbing her hands together, and wanted to start digging for those sweet new vulns.


I wonder if Apple will apply this rule to everyone, which would be fair, or if they plan on letting big name developers like Facebook or Google continue to violate the rules without consequence.


I'm not sure if they still do this, but Facebook used to ship both code paths in the app binary for new launches, and give Apple instructions on how to test both code paths (e.g. sign in with this special user/pass combo).

So they weren't changing app functionality after App Review approval; it's just that for users some of that functionality was gated on a boolean that was fetched over HTTPS.


Putting code behind a feature flag seems entirely fair and a good idea for developers of any size.

It's also a thing that can easily affect small developers; if your app requires logging into some existing paid account (enterprise software, a bank's app, etc.), the available features depend on what features the account has paid for. So as part of the review, you send Apple credentials for a test account that has all the features enabled. (Without a test account, they couldn't log in at all.)
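
A sketch of that pattern (the URL and flag name are hypothetical): both code paths ship in the reviewed binary, and the server only flips a boolean.

    NSURL *url = [NSURL URLWithString:@"https://example.com/flags.json"];
    [[[NSURLSession sharedSession] dataTaskWithURL:url
            completionHandler:^(NSData *data, NSURLResponse *resp, NSError *err) {
        if (!data) return;
        NSDictionary *flags = [NSJSONSerialization JSONObjectWithData:data
                                                              options:0
                                                                error:nil];
        if ([flags[@"newCheckoutEnabled"] boolValue]) {
            // Code path A: already present in the reviewed binary.
        } else {
            // Code path B: also already present in the reviewed binary.
        }
    }] resume];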


Are there any sites that document Facebook or Google using hot code push on iOS?


They should pull the Facebook app for this. But I'll eat my hat if they do.


Their guidelines specifically say that, with prior authorization, the prohibitions do not apply.


unfortunately, just like any other private platform, they don't need to be consistent or follow the rules all the time.

I say, don't go native unless you must (for performance reasons etc). push the web forward instead!


You can accomplish some crazy stuff with hot code loading: use private APIs, get around privacy restrictions. In theory, there are a lot of good reasons for Apple to prohibit this.


You can accomplish all of that without loading new code. Apple's private API checks, for example, are easy to get around if you're motivated.


There's a cynical part of me that thinks this is because Apple is going to announce a similar feature at WWDC


That doesn't seem likely. They 'just' reduced app review down to ~24 hours.


I think that's likely. Why is it a cynical part of you? Either Apple is proactive and aggressive about keeping their platform free of hackishness as a matter of routine, or their store ends up a pile of malware and crashy junk.


I've always questioned that 'hot code pushing' whenever I encounter an app that does it. Like, how on earth is it allowed to begin with? They can basically ship almost any functionality they want, skipping Apple's review altogether!


Hasn't this always been against the App Store terms? I thought the only code you were allowed to download from the internet and run was JavaScript on Apple's VM.


Rollout.io has been offering a product that leverages this to 'hotpatch' binaries, but it looks like this is now considered not in the spirit of the guidelines. Technicals here: https://rollout.io/blog/under-the-hood-2016-update/

It basically goes:

* add their SDK, which has the ability to swizzle (swap out the implementation of) arbitrary methods in your app

* the swapped-in implementations use JavaScriptCore to execute JavaScript you supply, wrapping or replacing the 'real' invocation of the method

* their SDK checks on startup which methods to replace and downloads the appropriate JS replacements

This is, technically speaking, only using JavascriptCore.
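
In runtime terms, the swizzle step looks roughly like this (a simplified sketch of the technique described above, not Rollout's actual code; patchedImplementation is a hypothetical JS function name):

    #import <objc/runtime.h>
    #import <JavaScriptCore/JavaScriptCore.h>

    static JSContext *patchContext;

    // Replace `selector` on `cls` with a block that forwards into
    // downloaded JavaScript instead of the original implementation.
    static void installPatch(Class cls, SEL selector, NSString *downloadedJS) {
        [patchContext evaluateScript:downloadedJS];  // defines patchedImplementation()
        IMP replacement = imp_implementationWithBlock(^(id self) {
            [patchContext[@"patchedImplementation"] callWithArguments:@[]];
        });
        Method original = class_getInstanceMethod(cls, selector);
        method_setImplementation(original, replacement);
    }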


That's actually quite a clever workaround to the current rules, but rather naive of them to think Apple wouldn't fight back at some point.


It's really sad, because that kind of stuff lets you fix bugs and mitigate outages in the wild without having to wait on Apple's schedule.


This is true. The problem is bad actors can use this to bypass Apple's review. As an iOS app publisher I slightly regret this inconvenience. As an iPhone user, I appreciate Apple looking out for my security.


Apple's review isn't that useful as a pre-check in this case; it is possible to avoid it if you want. Apple's review does automated code checks and has a reviewer manually use your app. With that review process, you can deliver executable code after the fact in any way you want, and only get caught after the fact, if it's even noticeable. You can even get sneaky and add some security exploit to make it look like a mistake.

It's much like the argument 'if you ban guns, only criminals will have guns', and it's quite true in this case.


Given that Apple's automated review tools detect many ways in which executable code can be injected into apps, and OP's link is itself about that very thing - what you say is mostly false.


It was easy to detect because they are not trying to hide it. They just have to check if the library exists.


It's not about Apple's process being imperfect. Sure, you can fool them if you try hard enough and are a bad actor.

It's about damage mitigation, and shutting down a third-party "app-hot-fix" service is a good move.

Having to actively fool Apple is harder than submitting some naive-looking thing while keeping unmitigated access to change its code.


The wording of the prohibition has varied over the years. Current wording is:

Except as set forth in the next paragraph, an Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exceptions to the foregoing are scripts and code downloaded and run by Apple's built-in WebKit framework or JavascriptCore, provided that such scripts and code do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.

So even if you download JavaScript code and run it on Apple's VM, they reserve the right to reject it if it changes the primary purpose of the application.


I think that wording with the "exception" is from the OSX developer program. As far as I can remember, the iOS info sheet has always been categorical, with no exceptions for executable code. Of course, all wordings allow Apple to start rejecting a previously approved app if they feel the code/scripts the app is now downloading are in violation.

The explanation on the rollout.io site about why they are fine is intentionally deceptive. They have the guts to link to a document that says "An Application may not download or install executable code." and then quote friendlier excerpts in the hope that you won't read the actual doc. I can't imagine why Apple has let this go on for so long.


>The only exceptions to the foregoing are scripts and code downloaded and run by Apple's built-in WebKit framework or JavascriptCore, provided that such scripts and code do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.

It looks like this says: "If you download code that is run on JavaScriptCore (i.e., JavaScript), then you can do this so long as you don't change the purpose of your app as submitted to the App Store."


Yes, that's correct. IOS Developer Program License Agreement, section 3.3.2:

"3.3.2 An Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exception to the foregoing is scripts and code downloaded and run by Apple's builtin WebKit framework..."


... or JavascriptCore, which is what React Native uses.


..."provided that such scripts and code do not change the primary purpose of the Application"


So you can add features all you want, just make sure you don't change your todo list app to a dating app and all is fine.


Once you've 'worked' with Apple's app review for long enough you'll learn that you can't rely on such an assumption.


Yes, but it has not been widely enforced.


I'm curious if this has anything to do with the exploits that are in the recent Wikileaks dump. Perhaps Apple saw something in there that raised alarm and an impetus to close the loophole?


I wonder what they define as "code." Uber often rolls out "Kittens for a day" style features, some of which are topical and can't have been designed before the previous app update.


A topical "special day" feature could easily be defined as a downloadable content blob with zero executable or interpreted code. They probably have a set of special event templates already coded into the application which can then be themed with a few images, colours and text strings.


The rejection notice is very clear:

> This includes any code which passes arbitrary parameters to dynamic methods such as dlopen(), dlsym(), respondsToSelector:, performSelector:, method_exchangeImplementations(), and running remote scripts in order to change app behavior or call SPI, based on the contents of the downloaded script.

Uber can write a "foo for a day" feature that downloads some strings and some images from the Uber website but doesn't actually add any functionality.

Same with any mobile game that has periodic events. You just write code to implement a generic event, write data to describe the event (name, graphics, level data), and do an update when you think of a different kind of event.
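
For example, something like this hypothetical template (field names invented) keeps all the logic in the reviewed binary; the server only supplies data:

    import Foundation

    // Hedged sketch of the "data, not code" pattern: the event template
    // ships in the reviewed binary, and the server only supplies strings
    // and asset URLs.
    struct SpecialEvent: Codable {
        let name: String           // e.g. "Kittens for a day"
        let themeColorHex: String  // e.g. "#FFB6C1"
        let bannerImageURL: URL    // artwork downloaded as a plain asset
    }

    let payload = """
    {"name": "Kittens for a day",
     "themeColorHex": "#FFB6C1",
     "bannerImageURL": "https://example.com/kittens.png"}
    """.data(using: .utf8)!

    let event = try JSONDecoder().decode(SpecialEvent.self, from: payload)
    print("Showing event:", event.name)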


I'm not an iOS developer, so I'm not sure what's possible, but am wondering about React Native apps (and similar technologies too). Here's a scenario:

- you publish an app that creates collages;

- user gives access to Photos;

- you change your JS to upload all the photos to your server.

Substitute "photos" part to any iOS permission; isn't this security risk? Should it be allowed under current App Store ToS? Also, what's stopping your JS code from downloading binary code and injecting it via some iOS exploit into "native thread"?


There's not much to prevent that scenario from happening. There's now a "why" string (a usage description in the app's Info.plist) where you need to describe what you're doing with the permission. So it's not enough to say "I want access to the photos". It's now "I want access to the photos so that I can put them on the screen and let you draw funny pictures on them."

But there's little to prevent you from lying about it, and there are so many iOS platform technologies out there now (native, PhoneGap, React Native, AIR, Xamarin) that there's probably no way for Apple to see what you're doing in an automated way.


From my understanding, people are giving Rollout push rights to their App Store accounts? So Rollout can hijack all the apps it controls? That's a hell of a central authority, and a security nightmare waiting to happen.


No; apps are modified by pulling JS code from Rollout's servers. App reviews would be way too slow, and slow reviews are exactly the problem Rollout solves. However, review times have improved a lot since then.

ADDITION: It works by "swizzling" methods, which is a legitimate mechanism in Apple's runtimes (as in Ruby and many other dynamic languages). In Swift, this only works in NSObject and its descendants, because only then is message dispatch used.
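
A minimal sketch of that swizzling mechanism (illustrative only, using the public Objective-C runtime API):

    import Foundation
    import ObjectiveC

    // The methods must live on an NSObject subclass and be marked
    // @objc dynamic; otherwise Swift uses static dispatch and the
    // runtime swap has no effect.
    class Greeter: NSObject {
        @objc dynamic func greet() -> String { return "original" }
        @objc dynamic func patchedGreet() -> String { return "patched" }
    }

    let original = class_getInstanceMethod(Greeter.self, #selector(Greeter.greet))!
    let patched = class_getInstanceMethod(Greeter.self, #selector(Greeter.patchedGreet))!
    method_exchangeImplementations(original, patched)

    print(Greeter().greet())  // "patched"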


OK, so there's no direct access to your App Store account, but the idea is the same: Rollout can "hot fix", a.k.a. pwn, all users at once. I'd be angry if I were Apple, too.


Indeed, I always wondered how Rollout wasn't rejected by Apple the very same day.


How does Apple determine whether an app can change behavior according to external factors? In other words, what counts as a "behavior change"?

For example, Google has Tag Manager [1], which I believe is mostly used for managing small UI-related changes. Is there clear documentation or a clear distinction about that?

[1] https://developers.google.com/tag-manager/


> any code which passes arbitrary parameters to dynamic methods such as dlopen(), dlsym(), respondsToSelector:, performSelector:, method_exchangeImplementations(), and running remote scripts in order to change app behavior or call SPI

I would have expected dlopen and dlsym to be blacklisted already; you can trivially use dlsym to access any blacklisted API. Similarly, I thought SPI ("system programming interface", i.e., private API) was all blacklisted. Am I misreading the message?

That said, if this is what they're focused on, it seems like it actively does not impact any apps that hot-push HTML code (e.g., PhoneGap / Cordova).

If the only reports are coming from Rollout.io, my guess is that the latest Rollout SDK uses one of these functions (I'd bet method_exchangeImplementations(), i.e., swizzling) with a dynamic parameter, and that the SDK can be changed to just stop doing that.
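
For what it's worth, here's a hedged sketch of why dlsym with a dynamic parameter defeats a static symbol scan (the symbol name below is hypothetical, not a real SPI symbol):

    import Darwin

    // The symbol name is assembled at runtime, so it never appears
    // whole in the binary's string table usage.
    let name = ["sec", "ret_", "fun", "ction"].joined()
    if let sym = dlsym(dlopen(nil, RTLD_NOW), name) {
        typealias Fn = @convention(c) () -> Void
        unsafeBitCast(sym, to: Fn.self)()
    }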


Hi all (Erez from Rollout here). I appreciate the discussion. Here's our full statement; please weigh in with any questions:

https://rollout.io/blog/rollout-statement-on-apple-guideline...


Good; this sort of stuff screams abuse potential.

Devs should re-submit their code to Apple instead of pushing "fixes" to phones. I guess it's fair to assume 99% of those pushes are safe (80%? more like "benefit of the doubt"), but the 1% that escapes scrutiny is the piece that makes the whole platform shaky and, honestly, poses a major attack vector. I wonder how often this was abused.

As for a service like rollout.io, it's a service that was never supposed to exist, especially if it serviced hundreds of apps. As a security-minded individual, I shudder at the thought of what might have slipped through.

Edit: after digging into rollout.io and finding out it's based in Tel Aviv, is it wrong to speculate about the origins and real purpose of an Israeli company that specializes in injecting code into iPhone applications?


"Oh look, I found this tunnel under the border, I'll use it to feed the birds on the other side. I swear to god I'll never use it to smuggle drugs or people"

It's too bad people are using it for good causes, but the hole has to be closed. Sorry, guys.


tl;dr: appears to affect rollout.io customers (at least those who have replied in the thread so far).


We haven't noticed any issues with AppHub, our service for dynamically updating React Native JavaScript code.

We always felt this day would come, however, so we switched directions.


I checked rollout.io's Disrupt presentation again (great pitch, btw), and the first question they get (from Alexandra Chong) is "is it going to be OK with App Store policies?": https://techcrunch.com/2015/09/22/rollout-io-puts-mobile-dev...


What will happen to YouTube? YouTube has been pushing new functionality ahead of updates for me recently. For example, the new "double tap to rewind 10 seconds" feature appeared at random one day (without any updates)... then eventually the feature was announced in an App Store update.

Or is the app review process more subjective?


They might have achieved that by pushing the code in silently over time behind feature flags, only announcing it in an App Store update once it was rolled out to everyone.


Google will absolutely have more wiggle room with rules than smaller developers.


Hmm, I definitely had to install an update for the YouTube app to receive the double tap to rewind/fast-forward.


The iPhone also has the ability to automatically update apps in the background if you allow it.


Should have been banned long ago; why would they have allowed an app to alter major behaviors post-review?


I'd say this is only going to further the arms race. One simple way around this would be to intentionally introduce a bug with a "controlled" exploit that lets you send a specially crafted data packet to your app and execute shellcode of your choice.


I've recently been looking into React Native. I'm hesitant to commit because I believe there's "hot code push" potential and Apple will be a b*h about it. I'm not sure if this is the case, though.


> i believe there's 'hot code push' potential

It's called CodePush, and it's a plugin for React Native (does not come with it). It is developed by Microsoft: https://microsoft.github.io/code-push/

Doubt it would get taken down.


Interesting, thanks


There's hot code push potential in Swift and Obj-C too.


Non iOS/Apple developer here.

So, with this "hot code push", is it also possible to directly update an app (adding new features), not just bug fixes?

If yes, then it sounds like you're trying to circumvent the App Store QA process.


Just ditch Apple. The obnoxious attempt to ban any kind of customization is sickening. Apple likes to shoot itself in the foot by making life miserable for developers.


But users have no idea how miserable life behind the iron Xcode curtain is, so it will continue forever.


That game that's topping the charts, "Legacy", explicitly says when you start it that it's downloading patches. I wonder why it isn't blocked?


Patches could be assets: updated map data, updated textures, updated AI scripts? I don't think these need a full App Store update, as it's not changing the game from a game into a dating app (for example).


Still, how do they avoid someone writing an ad-hoc interpreter and reading code from e.g. PNG images? Data == code, as we know.
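
Exactly; here's a toy sketch of how trivially "data" becomes "code" once any interpreter ships in the binary (the opcodes are invented for the example):

    // A few bytes of "level data" driving a trivial stack interpreter
    // that ships inside the app.
    enum Op: UInt8 { case push1 = 0x01, add = 0x02, output = 0x03 }

    func run(_ program: [UInt8]) {
        var stack: [Int] = []
        for byte in program {
            guard let op = Op(rawValue: byte) else { continue }  // skip unknown bytes
            switch op {
            case .push1: stack.append(1)
            case .add:
                let (a, b) = (stack.removeLast(), stack.removeLast())
                stack.append(a + b)
            case .output: print(stack.last ?? 0)
            }
        }
    }

    // These bytes could just as easily be read out of a PNG's pixels.
    run([0x01, 0x01, 0x02, 0x03])  // prints 2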


I don't see Apple blocking React Native, too many big players using it in production.

If you use hot code push together with React Native, then you're borked.


A day after the Vault 7 leaks. Well well well


Another big worry is analytics platforms like "AppSee", which are a clear violation of consumer privacy.


"Starts"? I thought this was always a rule. Maybe they're only starting to enforce it now...


Hot CIA back door push is a misfeature in any app, and Apple are doing the right thing here.


Could this have anything to do with WikiLeaks' release of iOS hacks by the CIA?


Probably not; the timing is too tight, and if Apple were aware of active exploitation, they would probably completely remove/block the affected apps instead of giving the devs notice to improve the next release.


Didn't Firebase just introduce Remote Config which touts this very thing?


AFAIK Remote Config is just a server-side key-value store with customisable values based on audiences (e.g. a random 50% for A/B testing, or all people from country X). This means any behaviour change requires the code to have been deployed in the first place.

Also, from Firebase Docs[1] "Don't attempt to circumvent the requirements of your app's target platform using Remote Config."

[1] https://firebase.google.com/docs/remote-config/


From this....

https://www.youtube.com/watch?v=_CXXVFPO6f0

"Firebase Remote Config allows you to change the look-and-feel of your app, gradually roll out features, run A/B tests, and deliver customized content to certain users, all from the cloud without needing to publish a new version of your app."


With hot code pushing, you're deploying newly written code/functionality post-release (opening up the possibility of MITM attacks, etc.).

With Remote Config, you deploy the code first, then decide what to show to users post-release. This gives Apple a chance to review all the code submitted to the App Store.

In the case of rolling out features and A/B tests, you'd make a release including a feature but only enable it for x% of your users using RC. You can then enable it for everyone if it passes your A/B test or once you're happy it's working.
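
A generic sketch of that pattern (not Firebase's actual API; the flag names are hypothetical):

    import Foundation

    // Every code path already ships in the reviewed binary; the server
    // only flips values.
    struct RemoteFlags: Codable {
        let newCheckoutEnabled: Bool
        let rolloutPercent: Int
    }

    func showNewCheckout() { print("new checkout") }
    func showOldCheckout() { print("old checkout") }

    func apply(_ flags: RemoteFlags, userBucket: Int) {
        // Both branches exist in the binary Apple reviewed; the server
        // decides only which one runs and for what share of users.
        if flags.newCheckoutEnabled && userBucket < flags.rolloutPercent {
            showNewCheckout()
        } else {
            showOldCheckout()
        }
    }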


React Native allows you to replace the JS bundle, will this be affected?


This has always been a requirement for Apple apps, hasn't it?


What about Microsoft's infamous Code Push?


Does this mean all Cordova apps may be banned?


Not necessarily. From the Apple Developer Program License Agreement:

> 3.3.2 An Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exception to the foregoing is scripts and code downloaded and run by Apple's builtin WebKit framework, provided that such scripts and code do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.


Stuff like this is one of the big reasons I'm a fan of web apps over mobile apps. Letting Apple or Google have control over this channel is just too risky for me.


You've literally got things upside-down.


The Unfree web.


Does this affect React Native apps?


Understandable, but there is of course a deeper problem: the App Store model is broken for apps that need hotfix capabilities (a.k.a. enterprise).

We've been meeting with Apple on this topic for years and continue to sideload our app as we need to meet SLAs with our customers. They sign the binaries with their dev certificates, which violates Apple's guidelines too.

But, alas, once you have critical mass in a vertical even mighty Apple gets cold feet about shutting your customers down.

Why Apple is not able to offer a separate way for certified and audited dev shops to hotfix their iOS apps is beyond me. SAP, MS, IBM - a shitload of big shops would love to pay for this privilege.


'Enterprise' always needs things, and when those required things are not available, enterprise makes do with what is.

In this case it isn't required at all, though, because Apple allows enterprises to sideload apps outside of the review process.


But not as a vendor.

Right now we get the certificates from our customers and sign the individual binaries, then distribute them through our own infrastructure.

We have our own update mechanism (basically hot code push); we cannot have the customer's own IT shop be a barrier to deploying a fix. Users sync their apps, and if there is an upgrade, it gets done in between the normal data/content syncs.


Isn't that exactly what enterprise distribution does?


No, it still goes through a check if it is a globally published app (vs. a custom app for just one company).


Oh, Mario Run! What are you gonna do?


About time. As a user, it's fucking annoying when you download an app and it refuses to work unless you're online, even when it has zero need for it.


To be fair, that's just bad UX; there's no necessity for apps using Rollout to require an internet connection, as (I hope) Rollout caches the patched code.


The solution is fairly simple: just stop releasing software on that platform. There are millions of customers on more open platforms, so there's really no need to support them anyway.


I don't care about the specific numbers; let's use some from androidauthority.com. iOS apps make more revenue than Google Play apps. So what, you say? Well, you make a LOT more revenue with BOTH Google Play + App Store than with EITHER on its own.

http://www.androidauthority.com/google-play-store-vs-the-app...


All of life's problems are simple when suicide is your backup plan.


A boycott is a valid response, no matter how many false analogies you make. Pulling an app out of the App Store is by no means equivalent to suicide. At most you change your business.


Right, like Uber, Snapchat, Facebook, Clash of Clans, Pinterest, Whatsapp, Instagram, Twitter, Waze, Shazam, Tinder, Match, YouTube, and basically every other app out there pulling out of the App Store would not be suicide.

"At most", those companies would just have to "change their businesses".


If all those companies did pull out, then it would be the end of Apple.


Absolutely not. Apple was fine long before these companies came into existence, and Apple will long outlast them. New companies would come in within days to fill the spaces they left.


I am definitely stealing that


As a user/consumer, I like this. It reduces potentially unpleasant "surprises."

Apple has curbed a lot of obnoxious developer practices (and enforced good ones, like the recent move to 64-bit), and they, along with Microsoft, are probably the only ones with enough muscle to do that.



