Seems like a very smart way to keep binaries up to date without developer intervention -- and possibly even allow re-targeting to different CPU architectures after the fact. That would eliminate the need for something like Rosetta if Apple ends up switching major CPU architectures again some day.
I really think that LLVM is one of the best things to happen to computer science in a long, long time.
I didn't work at the IR level, so I didn't come across any problems there, but I wouldn't be surprised if the IR syntax changed slightly across minor versions.
That said, as long as you keep an indication of the LLVM version that your bitcode was generated with, I really don't see a problem with fluid bitcode specs.
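A minimal sketch of that idea, assuming a hypothetical container format that prefixes the bitcode blob with the LLVM version that produced it (all names here are made up for illustration, not a real LLVM API):

```python
# Toy container format: tag a bitcode blob with the LLVM version that
# produced it, and refuse to consume blobs from an unknown major version.
# Hypothetical sketch -- not how Apple's or LLVM's actual format works.

def wrap_bitcode(blob: bytes, llvm_version: str) -> bytes:
    """Prefix the bitcode with its producer version, e.g. '3.6.1'."""
    header = llvm_version.encode("ascii")
    return len(header).to_bytes(1, "big") + header + blob

def unwrap_bitcode(wrapped: bytes, supported_majors=frozenset({3})):
    """Return (version, blob), or raise if this version is unsupported."""
    n = wrapped[0]
    version = wrapped[1:1 + n].decode("ascii")
    major = int(version.split(".")[0])
    if major not in supported_majors:
        raise ValueError(f"bitcode produced by unsupported LLVM {version}")
    return version, wrapped[1 + n:]

wrapped = wrap_bitcode(b"BC\xc0\xde...", "3.6.1")
version, blob = unwrap_bitcode(wrapped)
```

With a tag like this, a store-side toolchain can dispatch each blob to a matching compiler version, or run an upgrade pass first, instead of guessing what spec the bitcode follows.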
While LLVM is a great compiler toolchain, it is no different from many other JIT/AOT frameworks going back to the old mainframe days.
What you are suggesting is how OS/400 is adapted to new CPUs.
This has interesting consequences such as retargeting anything from the frontend to anything on the backend. I'd venture a wager that in the old mainframe days, the monolithic nature of a JIT would not have been friendly to a porting campaign.
On OS/400 all executables are bytecode (TIMI), with the JIT in the kernel.
When AS/400 changed processors, the programs continued to execute as always, no change required. All languages got the new processor for free.
Any compiler that targets TIMI, gets OS/400 support for free.
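The TIMI idea can be illustrated with a toy model: programs are stored as a tiny architecture-neutral bytecode, and only the translator knows about the concrete target. This is a loose sketch of the concept, not of TIMI itself -- the opcodes and "targets" below are invented:

```python
# Toy "TIMI-style" setup: programs ship as neutral bytecode; each
# backend lowers the same bytecode to its own instruction set.
# Swapping the backend never requires touching the stored program.

PROGRAM = [("push", 2), ("push", 3), ("add", None)]  # neutral "bytecode"

def lower(program, backend):
    """Translate neutral bytecode into target-specific instructions."""
    return [backend[op](arg) for op, arg in program]

# Two hypothetical CPUs with different instruction sets.
OLD_CPU = {"push": lambda a: f"PUSH {a}",
           "add":  lambda _: "ADD"}
NEW_CPU = {"push": lambda a: f"mov r0, #{a}; push r0",
           "add":  lambda _: "pop; pop; add; push"}

old_code = lower(PROGRAM, OLD_CPU)
new_code = lower(PROGRAM, NEW_CPU)  # same program, new CPU, no change
```

The point of the sketch is the asymmetry: every frontend only has to emit `PROGRAM`-style bytecode once, and every new backend automatically picks up every existing program.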
Anyway, here are some links about OS/400, nowadays known as System i.
A story about the two times TIMI actually changed:
Some Redbooks about ILE, which sits on top of TIMI
All of them can be found at
Coincidentally, there's been a bunch of stuff on the mailing lists recently about embedding bitcode in object files in order to support link time optimisation.
I got to understand better the whole concept of having a bytecode format for executables, with JIT/AOT deployment options when I started delving into the old mainframe world.
I used to do AS/400 backups, but never coded for it. So it was quite interesting to discover all the TIMI concepts.
There are also other similar environments, like the Burroughs B5000.
The old is new again. :)
I wonder why this isn't bigger in the FOSS world...maybe because the source and toolchain are already available...I don't know, it might be a neat idea to have an IR userland that compiles on install.
Least grandiose theory I can come up with: two ARM cores in the Watch, one big, one small, with different instruction sets, for active/standby modes.
It also means they can optimize for different devices.
Unless the description is wrong, this looks like Apple could insert any code they want into your binary, without your users noticing.
They can also choose to just not validate signatures. If they wanted to MITM your app, this doesn't make it any easier for them as it's already very easy for them to do that.
In both their OS/kernel and in the hardware, Google has the ability to make your app do anything they want to regardless of how you coded it or signed the binary.
Windows Phone 10:
As for Android, having ART on the phone is a technicality; given the platform fragmentation, Google rather leaves it to the OEMs to make ART generate the proper code.
How fast are the fastest ARM chips compared to the lower-end Intel chips? Could we see low-powered Mac Airs with long battery lives?
If needed, it wouldn't be too hard for someone with a few programmers (Apple could spare a few!) to write the proper versioning to upgrade/downgrade the file format as LLVM changes.
And since you'll certainly not be the only one with that problem, Apple making it mandatory for iOS would probably spur development in that area.
I like that idea in general. The other day I was reading about OS/400 on Wikipedia. It always used an intermediate bytecode... and because of it they were able to move seamlessly (who knows how seamlessly) from architecture to architecture.
Now they turn around and follow what the others are doing.
It's not like LLVM doesn't have several steps already in its pipelines.
Like .NET Native, Apple's approach is AOT.
Like .NET Native, Apple's approach compiles to native code on the server.
It's your comment that gave the impression that you thought Apple's new bitcode thing is not AOT and that they follow MS in this not-AOT-ness.
It might not have been what you meant, but it's not very clear from the phrasing:
>"(...) Apple's AOT compilation toolchain was being discussed by some Apple fans as the way to go. Now they turn around and follow what the others are doing"
This reads like Apple had an AOT compilation toolchain that Apple fans thought was "the way to go", and now Apple doesn't have one (an AOT compilation toolchain) anymore, following MS's lead in this regard.
Whereas what you actually meant was probably that Apple fans thought Apple's PREVIOUS AOT compilation toolchain was the way to go, but now they've changed course and gone for an MS-style AOT compilation toolchain.
(It read like you thought "Apple's AOT toolchain" was a thing of the past, and that they now follow MS, which doesn't have AOT.)
What I said is that I heard from many Apple fans that the MDIL/.NET Native compilation model didn't make sense, and that direct compilation from Xcode to the device was the way to go.
Uploading IL to the store and having a server-based compiler generate native code before download.