iOS Static Libraries Are, Like, Really Bad (bikemonkey.org)
95 points by wooster 1374 days ago | hide | past | web | 74 comments | favorite

I had a pretty detailed discussion with iOS engineers at a WWDC a couple of years back. It was a somewhat frustrating conversation, mostly because of how badly I wanted true user generated Framework support, but also because the engineers had decent reasons for the existing state of things. Primarily that Frameworks (in their fullest expression) are dynamically loaded.

Apple has made a decision that allowing 3rd parties to dynamically load code (outside of Apple certified frameworks) is a security issue on a mobile platform in particular. I don't have a solid counter argument, although there are certainly some technical constraints they could put in place to help mitigate the risk.

Anyway, agree with your essay in general. But I also understand how we got here.


It's the app developer's responsibility to update and QA their program.

If Apple allowed dynamically loaded libraries across the OS, then subtle issues in an App update could cause that one update to break seemingly unrelated apps. Windows developers call this DLL hell, and even with manifests and SxS, Microsoft still doesn't have an attractive solution to the problem.

Meanwhile, from a security standpoint, the sandbox should prevent apps from interfering with the files of each other and the OS.

And from a performance perspective, the few kilobytes (even entire megabytes!) of duplicated code segments is inconsequential on a phone with 1GB of RAM and very few context switches across apps.

No, you are conflating two separate concepts. Just because a framework is dynamic doesn't mean it has to be shared. MacOS X provides all the benefits of dynamic frameworks that Landon outlines, but third party frameworks are almost always bundled within each app (and on iOS they would certainly be required to be).

I am not actually conflating them at all. However, Frameworks as implemented on iOS are at present dynamically loaded. As I said, there are technical ways to address that particular issue, some of which bring iOS Frameworks more in parity with OS X.

We're already dynamically loading one chunk of third-party code: the app. Why are third-party frameworks any different? Presumably they would be subject to the exact same code signing and approval processes as the main app.

Then you could probably replace a dylib inside one app with a dylib from another. If Apple code-signed all dylibs in apps, you could just submit a silly little app with a malicious dylib, grab the signed dylib from the App Store later, and play games with third-party apps.

The code signature for an app extends to the frameworks it contains. You can't just replace them and still have a valid signature.

The code signature is on the multiple architecture binary, thereby including any statically linked object files, right?

If Apple were to add dynamic libs, they would presumably be separate binary files, with their own signatures. This could raise the concern noted by 0x0.

> If Apple were to add dynamic libs, they would presumably be separate binary files, with their own signatures.

No. That's not how it works on Mac OS X today, where bundled shared libraries are supported.

That's right, thanks.

Separate binary files have individual hashes, which are included in the package manifest file. The manifest is then signed, so a single signature covers all hashed files in the manifest.
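The manifest scheme described here can be sketched in miniature: hash each file, then hash (standing in for "sign") the list of hashes, so one signature covers everything. This is a toy illustration of the structure only, not Apple's actual format; it uses FNV-1a in place of a real cryptographic digest:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy FNV-1a hash standing in for a cryptographic digest. */
static uint64_t fnv1a(const uint8_t *data, size_t len) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Build a "manifest digest": hash each file, then hash the list of
   per-file hashes. One signature over this digest covers every file;
   changing any single file changes the digest. */
static uint64_t manifest_digest(const uint8_t *files[],
                                const size_t lens[], size_t n) {
    uint64_t hashes[16];  /* enough for this illustration */
    for (size_t i = 0; i < n; i++)
        hashes[i] = fnv1a(files[i], lens[i]);
    return fnv1a((const uint8_t *)hashes, n * sizeof(uint64_t));
}
```

Tampering with any hashed file (say, swapping a bundled dylib) changes its hash, which changes the manifest digest, which invalidates the signature.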

Curiously though, in all of the MAS apps I've checked, bundled dylibs are explicitly not hashed in the manifest. This is the developer's choice, but perhaps it's a default?

If anything, in my mind, not using shared libraries is a security issue.

For example, if every application links to a static version of some image loading library, then all of the applications must be patched if there is a vulnerability in that library.

Whereas if they all share the same copy, you patch the library, and they all get fixed.

I'm aware that model works better when the same vendor is providing all of the binaries, but there are cases where it's also appropriate for general ISVs.

It's a two-sided coin: if you update a dynamically loaded library in a way that subtly breaks backwards compatibility, you end up with apps that mysteriously stop working because of some other update to the system.

Really, it's up to the app maintainer to update their program, and if it has a vulnerability, in theory the sandbox will prevent it from doing damage to others.

If someone updates a library incompatibly, they deserve what they get. That's why shared libraries have versioning.

In the mobile space, it would be even more beneficial if platform holders and ISVs actually followed this; the memory and space usage savings could be substantial.

I'm uncertain why someone would downvote my comment above, but shared library versioning is a real thing, and it is a best practice:




Linux distributions heavily depend on this for the GCC runtime libraries (such as libgcc_s); it's how they provide backwards compatibility.

Many operating system distributors also rely on symbol versioning for their shared libraries, so they can compatibly evolve interfaces for consumers.

So my original point stands, if someone incompatibly updates a shared library without accounting for versioning, they're doing it wrong.
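As a concrete illustration of the versioning discipline above, many C libraries (sqlite3's SQLITE_VERSION_NUMBER vs. sqlite3_libversion_number() is the classic example) let a consumer compare the version it was compiled against with the version actually loaded at runtime. A toy sketch; all the mylib_* names are hypothetical:

```c
/* Hypothetical library header: the version the app compiled against. */
#define MYLIB_VERSION_MAJOR 2
#define MYLIB_VERSION_MINOR 3

/* Hypothetical functions exported by the shared library, reporting the
   version actually loaded at runtime (hard-coded here for illustration;
   a real library returns its own build's numbers). */
static int mylib_runtime_major(void) { return 2; }
static int mylib_runtime_minor(void) { return 5; }

/* Under semantic versioning: same major, and a runtime minor at least
   as new as the compile-time minor, means the loaded library is
   backwards compatible with what the app expects. */
static int mylib_compatible(void) {
    return mylib_runtime_major() == MYLIB_VERSION_MAJOR
        && mylib_runtime_minor() >= MYLIB_VERSION_MINOR;
}
```

An app that refuses to run when this check fails degrades loudly instead of mysteriously, which is the point of the versioning contract.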

That is the case anyway on OS X: since each .app bundle contains all of the libraries for that app, the only ones you don't include are the ones Apple ships with the platform itself.

So even if an issue is found in a shared framework, it has to be fixed in every app that includes it.

I'm well aware of that. Which is why I specifically said they all share the same copy.

As for the shared framework: that's not necessarily true. Not all system frameworks are included in the app bundle.

Gentoo has a whole wiki page[1] detailing the pitfalls of bundling libraries.

The biggest downside is that updates to shared libraries, done incorrectly, can break applications. That said, modern package managers allow an application to list which versions of a library it is or isn't compatible with.

It'll be interesting to see if anyone ever comes up with a good solution that mixes the strengths of mobile platforms' security model and modern desktop package managers together. It seems quite nontrivial. (Does the library inherit the permissions of the app? Does it have its own? What if I push malicious code in an update? Bundling avoids having to think about these problems.)

[1]: http://wiki.gentoo.org/wiki/Why_not_bundle_dependencies

Assuming that there really are still technical constraints, these aren't insurmountable problems. They're not even difficult problems. Apple has more than enough cash on hand to spend the engineering time necessary to solve them and still have money left over for a campus Beer Bash.

Note that on Mac OS X when using code signing, including when distributing via the Mac App Store, dynamic libraries are supported.

I don't get this. Based on my experience, even without framework support, it should be pretty easy to write iOS code that loads code in from a remote location just by using NSBundle functionality. After all, NSBundle files may include runnable code in addition to resources. I'm not sure such apps would get past Apple's tooling/testers when submitted to the store, though.

Is it harder for Apple to check if remote code loading functionality (and other potential security issues) is included in frameworks compared to bundles?

Apple blocks loading any new executable code after your process starts. Fundamentally, the OS prohibits normal processes from marking any pages as executable. You can load the data fine, but you can't execute it. NSBundle won't help you.

The problem being proposed here is that the ability to have embedded frameworks would somehow weaken this strong protection against loading new code at runtime, although I don't really see how personally.

Besides security issues, I wonder if another motivation is preventing DRM circumvention, working around App Store restrictions, etc.

If you can distribute source, CocoaPods is an excellent alternative to static libs.

And if you have to distribute static libs, CocoaPods again makes it much easier.

+1, CocoaPods is really great and solves 99% of the issues.

EDIT: Issues I had with integrating third party libs into projects as dependencies in the past, not issues in regards to the blog post.

You might want to read the blog post again: CocoaPods does NOT solve the mentioned issues or provide the system that should be made available to developers as it exists on OS X.

Sorry, you're right, but I still mean my own issues with dealing with third-party libraries; it does help with quickly adding something so I could get to use it.

Except for all the issues that the article goes into great detail describing, and not without cost. CocoaPods is even mentioned in there, at the very end.

Apple needs to fix this before they've shamelessly let it sit for an entire decade. CocoaPods is just a partial band-aid.

In my experience you spend more time fixing strange CocoaPods issues than you save by using it.

This is true both for people using existing pods and for people creating new ones. The whole process is just frustrating IMO.

In addition, CocoaPods does not solve the problems described in the article.

I wish that embedding resources in programs as const byte arrays were more common. I think that approach leads to a tightly integrated, low-overhead app, especially on platforms (not iOS) where users and third-party programs are free to do stupid things with the file system; the app either works completely or isn't there at all.

You mean like https://github.com/liuliu/mopack? But being unable to inspect the embedded resource at programming time is a big show-stopper for me.

    $ xxd -i some_file

Last time I checked, Microsoft had a weird restriction in their C++ compiler (they don't have a C compiler) that only allows 4k-length const chars; you have to concatenate consts to get arbitrary lengths.

I agree bundles are a pain in the ass. This is a bit overboard for some libraries, though, and can have unforeseen memory issues.

What memory issues? const data can be mapped and unmapped on demand.
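For the avoidance of doubt, the const-byte-array approach being discussed is just the output of `xxd -i` pasted into (or generated into) a source file. A minimal sketch, with the file name and contents invented for the example:

```c
#include <string.h>

/* What `xxd -i greeting.txt` emits for a 5-byte file containing
   "hello": the file's bytes as a C array plus its length. Because the
   data is const, it lives in a read-only segment; the OS can map the
   pages in on first touch and evict them under memory pressure. */
static const unsigned char greeting_txt[] = {
    0x68, 0x65, 0x6c, 0x6c, 0x6f  /* "hello" */
};
static const unsigned int greeting_txt_len = 5;

/* The "resource" is then consumed like any in-memory buffer. */
static int greeting_matches(const char *expected) {
    return greeting_txt_len == strlen(expected)
        && memcmp(greeting_txt, expected, greeting_txt_len) == 0;
}
```

The app either links with all of its resources present or fails to build; there is no file-system lookup to go wrong at runtime.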

I really don't see why dynamic libraries couldn't be allowed per app, in a similar vein to other mobile OS.

It is also a sad state of affairs that Objective-C builds on C's tradition of not having proper namespace support.

> I really don't see why dynamic libraries couldn't be allowed per app, in a similar vein to other mobile OS.

Apple considers it a security issue, so presumably they have figured out attacks that exploit dynamic library loading.

In general though, "Because another platform does it" isn't always a good argument for a particular feature in a platform, particularly when that choice involves a tradeoff. Taken to an extreme, you would end up with a lowest common denominator of samey platforms that just copy each other's design decisions.

In this case Apple has decided that security is a top priority for them even if it comes at a price in terms of developer convenience. Other platforms are free to make different tradeoffs, and that gives users and developers a diverse choice of distinctive platforms available to them. This is a good thing. iOS doesn't have to become Android any more than Android has to become iOS.

By other mobile OS, I meant :

- EPOC

- Symbian

- Windows CE/Pocket PC

- Windows Phone 7/8

- Android

- Blackberry

- Bada

The above vendors do have/had secure deployment mechanisms despite supporting dynamic libraries.

Are these validated to be secure in this way, or are you just asserting that they are? Bear in mind that many intentional features of Android would be considered by Apple to be severe user privacy violations. These platforms aren't just different technically; their standards for what counts as security vary significantly. That's fine, because if you want those features, Android has them, but if you prefer a garden with a higher wall, there's iOS.

You seem to have a problem with Android, as you selectively ignore all the other ones in your remarks.

That's a false dilemma. Mac OS X has dynamic library support and there isn't a vast sea of viruses and exploits coming from the Mac App Store.

As much as I generally dislike the iOS ecosystem, Xcode, and Apple in general, this feels like complaining about nothing much of importance to me...

If you want to share code, this becomes a non-issue - you include the code, which can detect things like architecture and target at compile time, and then it's possible to not just transcend the boundary between iOS and OS X, but also Windows Desktop + RT, Android, other *nix flavours, and that great operating system that will be released in 7 years' time that we don't even know about yet...

A static library is a convenience of pre-compiled code, but also an inconvenience where you can't see inside, or implement the many cool things you can with the preprocessor, meta-programming, or even scripts that generate code or enforce constraints, if you feel so inclined...

Why do you really want static libraries to be usable? Are they really a good thing? Is it really any more convenient than just including a folder of source code in the project?

The only real argument I can see is if you want to keep your code secret which I am philosophically opposed to in the general case...

The blog post contains lots of reasons, you might want to read it.

Here is only one example of why including via source code can be a bad thing:

> In the years past, for example, I saw issues related to a specific linker bug that resulted in improper relocation of Mach-O symbols during final linking of the executable, and crashes that thus only occurred in the specific user's application, and could only be reproduced with a specific set of linker input.

In addition: the writer's frameworks/libraries are all (!!) open source! And yet you claim the only reason for using libraries is keeping the source secret. Yeah!

I had a good read... what precisely is the hole in my argument?

There are a few arguments there, but they are not especially compelling IMO, and I feel the advantages of using naked code far outweigh the disadvantages of using a library (the only genuine advantage I can think of is hiding your code as a binary). Let me give my counter to some of the arguments made for why a library is a good thing, by way of demonstration:

> a single atomic distribution bundle that applications developers can drag and drop into their projects.

This is programming - drag and drop is a fantastic luxury, and a terrifying one if you like to understand what happens. In any case, dragging and dropping a folder containing code and resources into Xcode is just as easy, but has the benefit of being absolutely transparent.

> One of the significant features of frameworks on Mac OS X is the ability to bundle resources. This doesn't just include images, nibs, and other visual data, but also bundled helper tools, XPCServices[1], and additional libraries/frameworks that their framework itself depends on.

Loose files have this desirable property, except for being in a convenient package - and including a folder reference, or duplicating it as a group in Xcode, is a drag and drop operation.

> One of the features that is possible to achieve with static libraries is the management of symbol visibility.

Yeah... loose code files have this /better/ than a library typically does, if you like all symbols to be visible. (Otherwise it's a valid point, and lets you hide your code again...)
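For reference, the symbol-visibility management being debated here looks roughly like this with GCC/Clang attributes (function names invented for the example). When such a file is built into a library with -fvisibility=hidden, only the explicitly exported symbols remain visible to consumers:

```c
/* Internal helper: hidden from the exported symbol table when this
   file is built into a shared or static library with GCC/Clang. A
   consuming app cannot link against it, and it cannot collide with a
   same-named symbol elsewhere in the process. */
__attribute__((visibility("hidden")))
int helper_double(int x) { return 2 * x; }

/* Public API: explicitly exported, even under -fvisibility=hidden. */
__attribute__((visibility("default")))
int api_quadruple(int x) { return helper_double(helper_double(x)); }
```

With loose source files you get no such boundary: every non-static symbol in every .c file you drag in is visible to the whole project, which is exactly the collision surface the article worries about.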

> Dependent Libraries

Including dependencies is a pain, but it makes sure people can use your code immediately. I like one-step processes - every project I've done in the last 5 years or so works with a single check-out from source control, in some cases even if you are lacking the IDE or other important tools. The bloat is not relevant today, even on mobile platforms... although I don't seem capable of creating many megabytes of code no matter how much I write, except by including some enormous 3rd-party library (freetype is the last one I had this problem with - I never use it any more).

> result in builds of the library being unique

This is a genuine problem - kinda - but if your code doesn't build under multiple platforms and compilers with the same behaviour, then you have much worse technical debt than this to fix first... like making your code actually cross-platform and deterministic (if you actually need that).

However, all of that aside, I think my point about being super cross-platform across *nix, iOS, OS X, Windows RT/Desktop, Android, Windows Phone 8, and unknown future operating systems trumps the lot...

EDIT: As a real-world example, consider stb_image.c - a fantastically useful header providing a very clean image-loading interface for the most common formats (especially compared to any of the OS library alternatives on all of the platforms, where the amount of boilerplate and needless operations is quite staggering for something so simple). I've used this on all of the above platforms, and the only problem I have is that the author is not so keen on using the maximum level of warnings like I am... (I like the compiler to never be confused and to have the best information available about my program; incidentally, libraries deny the compiler valuable information in many cases too...)

1. Sadly you ignored the example I posted completely.

2. Integrating by source code is valid for some use cases, but it is definitely not for all. E.g. low-level libraries like crash reporters, which cannot be platform independent, or libraries that rely on platform specifics, like UIKit on iOS.

There is not one approach, one way, that fits all. For some, libraries/frameworks are best; for others, direct source code inclusion is best.

I didn't see it and still can't; maybe I am being blind or selectively filtering it without realising... :/

> low level libraries like crash reporters which can not be platform independent.

> Or libraries that rely on platform specifics, like UIKit on iOS.

This is actually not true; I can quite comfortably demonstrate it, and I have worked in multiple code bases that do such things across all the major platforms, including games consoles...

These pieces are the unavoidable platform-dependent bits and can be conditionally compiled accordingly - they are usually quite small, however, and also something which a library cannot do.

This is IMO far and away the biggest strength of source code inclusion, and precisely what I refer to by 'you can run on every current platform and even ones that don't exist yet'.

Even 'cross-platform' APIs like OpenGL necessitate this, because they make mistakes, or have an ES flavour on mobile with breaking changes vs. desktop OpenGL.
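The conditional-compilation pattern being described, in its simplest form: isolate the platform-dependent bit behind one small function, keyed off each toolchain's standard predefined macros (the function name here is made up). Android must be tested before Linux because Android toolchains define __linux__ as well:

```c
/* One small, unavoidable platform-dependent bit, isolated behind a
   single function; everything that calls it stays portable. */
static const char *platform_name(void) {
#if defined(_WIN32)
    return "windows";        /* also defined for 64-bit Windows */
#elif defined(__APPLE__)
    return "apple";          /* iOS and OS X; TargetConditionals.h
                                distinguishes them further */
#elif defined(__ANDROID__)
    return "android";        /* must precede __linux__ */
#elif defined(__linux__)
    return "linux";
#else
    return "unknown";
#endif
}
```

Everything above and below this function compiles identically everywhere, which is the cross-platform property the comment is arguing for.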

EDIT: "> In the years past, for example, I saw issues related to a specific linker bug that resulted in improper relocation of Mach-O symbols during final linking of the executable, and crashes that thus only occurred in the specific user's application, and could only be reproduced with a specific set of linker input."

Ah, I guess you mean this. In which case, how does a library avoid it? The linker inputs are usually roughly equivalent to libraries... the compiler generated a bad object file consistently? I'll agree that compiler and linker bugs exist, but they are the exception and not the rule - that this affects source code and not the library is indeed an advantage of a pre-compiled library. I consider it vanishingly unimportant against being able to work across /all platforms/.

The issue isn't about "conditional compilation": the issue is about symbol visibility. In fact, I don't think any of the issues discussed in this article--excepting the bug at the end with MH_OBJECT--would be solved by giving someone source code: adding a directory of C files to your project is semantically identical to adding a .a file using -all_load. This is clear and somewhat obvious, as the only thing I did to create a .a file was to compile the .c files to .o files for you, which is the very first thing your compiler does with a .c file anyway. All I'm doing is saving you CPU time and some hassle; I'm not changing what happens when the code hits the linker, and that's where the problem lies.

Let's look at one random specific example from the article (which, again, is full of things that happen at link time, and so would be exactly identical whether you started with source code or archive files): PLCrashReporter includes a custom build of sqlite3 that has different options than Apple's; you want the user's code to continue using the version of sqlite3 that comes with iOS, but you want PLCrashReporter, and PLCrashReporter only, to use the custom build. If I give you a giant wad of source code files, which would of course include the custom build of sqlite3 with the extra options PLCrashReporter needs turned on, all of that is going to be compiled down to .o files (again, exactly what would be in a .a file I'd give you were I to have compiled it ahead of time), and the sqlite3 symbols from the included modified copy would then take precedence over the ones that come with Apple's SDK for all files in the project: you solved nothing.
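With C's flat namespace, a common workaround for exactly this sqlite3-style collision is a "prefix header": before compiling the bundled copy of a dependency, a header of macro renames gives every public symbol a project-private prefix, so it can never clash with the system copy at link time. A minimal sketch; the `plc_private_` prefix and `dep_version` function are invented for illustration, not PLCrashReporter's actual mechanism:

```c
/* Hypothetical prefix header, force-included before compiling the
   bundled dependency: every public symbol is renamed so the linker
   sees a distinct name from the system library's copy. */
#define dep_version plc_private_dep_version

/* The bundled dependency's source, compiled after the rename; this
   defines the symbol plc_private_dep_version in the object file. */
const char *dep_version(void) { return "bundled-3.7.13"; }
```

The app's own code, compiled without the prefix header, still resolves `dep_version` to the system copy, while the framework's internals call the renamed bundled build. Versioning both visibly is how the two copies coexist in one process.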

I miss one feature - FFI - and this is not limited to iOS: with a .dylib/.so/.dll and some kind of readable interface ("C"), you can export functions for other languages, for example LuaJIT, Python (ctypes, cffi), Common Lisp (cffi), etc.

Another thing is "replacement" - switching one version for another. This is how I found that sqlite 3.8.2 was misbehaving for us lately, which seems to be some kind of MSVC-related error (compiled 64-bit for Windows using VS2010).

Quickly switching to the previous one without recompiling did another thing - I had to think less about other possible changes, since only one component changed, not the whole executable.

Then there are times when I wished everything was statically compiled :) - on all fronts (Linux, OS X, Windows) there are too many gotchas in pointing your app the right way to its dlls.

I seem to favor Windows's approach (get the DLLs first from the app's folder... well, not strictly, but in general) - but then that's because Windows did not have a good place to put DLLs to begin with (unlike Unix).

I've always thought that since an (iOS) app needs to be fully self-contained, having only static linking makes perfect sense. You also gain dead code stripping, and you link only the archs that the main binary supports.

Most libraries I've dealt with have mostly amounted to interfaces to their respective web APIs. For those, source code distribution makes perfect sense - even preferable when the libraries are fairly simple, since it gives you the freedom to compile them as you wish. Doing it with Xcode subprojects I've found to be fairly simple, even if mucking around with header search paths is still a pain.

Granted, I haven't really experienced the library author's side of things, but most of these issues are about drag'n'drop ease of use, which I don't think requires dynamic libs - it's more of a tooling-side problem, and frankly has always been a problem in C/C++ land at least.

The lack of a two-level namespace is a real problem that I've actually encountered. In my case it was a simple class rename away, but I can easily imagine a situation where it isn't that easy (like the embedded sqlite3 example).

Also, I hadn't realised that the simulator and device builds are lipo'd together manually; that indeed should have a better way. But would dynamic libs really help in that situation? You would still ship x86+ARM glued together with no notion of the platform - even worse, you might be shipping the x86 slice as extra weight in App Store builds.

Just because dynamic linking is used doesn't mean the library doesn't get shipped with the app package.

Why use dynamic libraries if you don't need to save storage space or RAM?

We have moved beyond the era of scarcity and even on desktop platforms, lots of people are including static libraries so that they are not dependent on the upstream developers' idea of what is needed. So much of this code is open source that it makes sense to build your own static library, and maybe leave out cruft that you don't need.

Sure it would be nice to have a choice, but static libraries are not evil.

I can't read this article without resizing my browser window. Fluid text width is awful. And yes wikipedia has the same problem.

I think fluid text works well for phones and stuff where if you rotate the screen it flows nicely. The problem is huge displays-- I'm at like 2560x width.

Edit: A solution I think works okay is simply adding

    max-width: 500px;
to the css properties for p.

You're not maximizing your browser, are you?

The monitor my browser is on has 2560 width, but the client area of the browser window itself isn't much more than 1200 pixels wide. I keep terminals and the tree-style tabs to the left of the browser.

This seems like quite a hyperbolic statement.

And I've seen the exact opposite complaint from someone about a site that capped the text width.

You just can't win with people who have ridiculously-sized browser windows.

What's wrong with resizing your browser window?

Besides maintainability, there is the issue of not being able to use certain open source libraries (without some legal complexity) when you statically link libraries. IMHO, that is a much greater problem.

The problems you refer to will still exist with the LGPL (inability to replace dynamic libraries).

Not sure what you mean. If I had the ability to use shared libraries, I would be able to use LGPL code.

iOS still requires signed code. My understanding is that the LGPLv3, like the GPLv3, will prohibit this because the user can't deploy it.

In practice, LGPLv3 is much different from previous versions of the LGPL. The truth is that many excellent projects have intentionally chosen not to go the v3 route (because of the extra restrictions), and those that do often dual-license under both versions. I can think of numerous projects under LGPLv2 (like ffmpeg, GEOS, mapnik, custom WebKit builds) which would cause an amazing suite of iOS applications to surface, but with the current situation, they cannot. Signing is not a requirement of any of those libraries.

OK, how does Joe Enduser replace an LGPLv2 library on iOS, as is his legal right under the license?

Would you kindly point out which section of the LGPLv2 explicitly gives you that right?


I would emphasize the terms "work that uses the library" vs. "derivative work of the library", which have historically been used to differentiate between static and dynamic linking.

In addition, how would I go about doing this on Android? And how does Apple distribute LGPL libraries (like WebKit) without "[allowing] Joe Enduser [to] replace an LGPLv2 library on iOS"?

Xcode (and the entire toolchain) has support for static frameworks, which would negate the distribution problems the OP describes. Apple snips that functionality out of the iOS SDK for some unknown reason, but I suspect that would be a much better thing to request.

I do agree that the iPhoneSimulator and iPhoneOS SDKs should be better integrated at compile time. Yet they were architected as completely different environments, so I suspect that won't be an easy request to fulfill.

Anyway, what benefit would a dynamically loaded library give you when each app runs sandboxed? I can just imagine a scenario where one framework uses a version of ASIHTTPRequest and another uses AFNetworking, and they both attempt to use some version of JSONKit to parse the result. Objective-C and its amazing namespace collision management will just carefully (and by carefully, I mean not at all) pick one implementation over the other and leave a happy warning for all to see.

Did you even read the article? All of this is covered.

The question of "what benefit would having a dynamically loaded library give you when each app runs sandboxed" seems to be answered in great detail under "Debugging Info and Consistency" and "Missing Shared Library Features"

A workaround is to write iOS applications in MonoTouch, which does support DLLs.

The problem isn't that Darwin doesn't support dynamic linking (see: 99% of jailbreak tweaks). The problem is that you can't share the same library across different applications because of the sandbox / closed platform.

That's actually not the problem at all, as described by the article. The problem you describe is a completely separate issue.

I suck at reading.

Apple doesn't want small developers on iOS. I thought this was clear 5 years ago. Everything they've done in the market, the UI, and the OS makes it difficult for small developers to get a leg up and for new technologies to be adopted. To Apple, integrating 3rd-party libraries is not a concern of theirs; that's for the professionals to figure out, then use as a competitive barrier, thus increasing the quality of the top apps in the market - the only apps that matter.

I think you're seeing intentional malice where really there is a lot of indifference at times, and a lack of understanding of what it's like to start out in this ecosystem.

CocoaPods is PAINFUL to get compliant with Ruby sometimes, but other than that it works for a lot of the problems you'll see. It is a leap though.

Indifference toward small developers? Whatever you want to call it, I don't care. An indifference at times about how they control the app ecosystem? No... just no.

It isn't about not wanting small developers on iOS. That's ridiculous.

Are you seriously claiming that less competition increases the quality of apps?

More specifically, he's saying that only stronger competitors even make it onto the field, so of course there are fewer players there - but yeah, it sounds kind of dumb when you put it your way. Even a strong player will probably only play as hard as they need to to stay on top.
