This theory is pretty easily debunked when you consider that the primary goal of bitcode is to strip assets that aren't relevant to a user's device from apps, so they take up less space.
(For example, removing @2x images for a Plus device that uses @3x images, or vice versa.)
App thinning is orthogonal to ENABLE_BITCODE. The App Store could have been thinning app binaries years ago (literally just `lipo -thin`). Apple likely wanted ENABLE_BITCODE for better App Store verification, and perhaps for architecture re-targeting in the future. I very much doubt the latter, because Apple seems to have no compunctions about forcing developers to use newer SDKs or rebuild apps for watchOS and tvOS, or about aggressively deprecating 32-bit-only apps in the App Store.
I said explicitly in my comment that I don't think this is some grand conspiracy. I don't think Xcode makes big files to use up disk space. I think Apple just has little incentive, across the entire ecosystem, to use less storage or to use storage more efficiently.
AFAIK bitcode exists so that Apple can rebuild binaries for different target architectures (e.g. new models of phone, watch, etc.) without any interaction from the original developer. It should come in very handy in the macOS App Store when ARM64 MacBooks ship in a year or two. A full App Store of working apps on hardware launch day would make the bitcode requirement worth it, and it would be the smoothest architecture transition they've ever done (compared to 680x0->PPC and PPC->x86).
Bitcode does not allow cross-architecture builds. This is a common misconception: LLVM IR (and hence bitcode) bakes in architecture- and platform-specific ABI details. What it does allow is re-optimizing for the same target as the LLVM backend improves.
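To make that concrete, here's a sketch (the function name is mine, and the exact target triples are only examples): the C below is completely portable, but clang lowers `va_arg` in the frontend, so the IR it emits already encodes the target's calling convention. Compiling with `clang -S -emit-llvm -target x86_64-apple-macos` versus `-target arm64-apple-ios` produces visibly different IR for the same source.

```c
#include <stdarg.h>

/* Portable C, non-portable IR: clang expands va_arg according to the
 * target ABI (x86_64 System V register-save area vs. arm64 AAPCS),
 * so the bitcode for this function is tied to one architecture. */
int sum_ints(int count, ...) {
    va_list ap;
    int total = 0;

    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);

    return total;
}
```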
I would imagine that with enough engineering effort, a cross-architecture "porting" of IR would be feasible. I doubt Apple will bother to do that when they can just force developers to rebuild and republish lest they get left out of the App Store.
Outside of a Java-style high-level VM, cross-platform, cross-architecture code in the C world requires recompilation from source. You cannot use what is not there. When compiling C, preprocessor macros are used to detect the architecture and platform, and code for other targets is simply never compiled. The simplest example is endianness handling, which would be totally broken if Intel-compiled code were automagically made to run on ARM.
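A minimal sketch of that endianness point (the function name is mine; the byte-order macros are the ones GCC and clang predefine):

```c
#include <stdint.h>

/* The #if below is resolved by the preprocessor before any IR is
 * emitted, so only one branch ever exists in the object code (or in
 * the bitcode). Recompiling little-endian bitcode for a big-endian
 * target would silently keep the wrong branch. */
static inline uint32_t host_to_be32(uint32_t x) {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    return x;                           /* already big-endian */
#else
    return ((x & 0x000000FFu) << 24) |  /* swap all four bytes */
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
#endif
}
```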
iOS and macOS are both little-endian, but there are many other differences between platforms (such as pointer alignment, SIMD width, and the Objective-C ABI), and bitcode makes no effort at all to accommodate them.
Bitcode can be used to recompile for minor ARM revisions, compiler bug fixes, new optimizations, etc., without having to get developers to submit new binaries.
> Outside of a Java-style high-level VM, cross-platform cross-architecture in the C world requires compilation.
Kind of.
On IBM i, C compiles to TIMI bytecode just like everything else. To produce actual native code directly from the compiler, you need the Metal C compiler or the POSIX compatibility environment (PASE).
The TenDRA C and C++ compilers also used bytecode (TenDRA Distribution Format).