
It was about time Android caught up with how iOS and Windows Phone work (e.g. everything is AOT-compiled to native code).

Now it just needs to provide a similar developer experience to the other mobile platforms for the C and C++ developers.




In terms of scaling to more architectures, this solution looks superior to iOS's and on par with Windows Phone's CIL-MDIL system. Since the original Android phones, they've added ARM processors with VFP, Thumb, Thumb-2, NEON, and now ARMv8-A; MIPS processors; and various Intel instruction sets. Developers targeting Dalvik bytecode can ignore all that complexity going on underneath. The same APK they built years ago will work everywhere.

I agree that they should have made the transition to AOT a long time ago. Their technical excuse is that devices didn't have enough space. That's only because they allowed devices to not have enough space. Even the original iPhone had 4 GB flash minimum.


Android's model for 64-bit support also seems superior to all other operating systems', in that it avoids the "64-bit memory bloat" (roughly 30 percent more memory required for 64-bit apps), so in terms of RAM needed, 64-bit Android apps should actually require less than 64-bit iOS apps.

However, I'm not completely sure whether Google just restricts the address length to 32 bits, or keeps the apps 32-bit even on the 64-bit architecture. It sounds like the former, but I really hope it's not the latter. So far I haven't seen ARMv8 supported in the SDK, and Nvidia's 64-bit Denver CPU still shows up as 32-bit in benchmark tests, even on Android Lollipop.

I don't know whether that's related in any way, whether it's some other Google screw-up (not being ready on time with AArch64 support), or whether it's just the benchmarks that don't support ARMv8 yet.

Oh and I agree with your comment on storage. Google should impose more "reasonable" requirements. In 2010, even high-end HTC flagships came with less than 200 MB of free storage. Absolutely unacceptable, even at the time. I've hated my HTC phone for so long because of it, and it has kind of made me not want to buy HTC ever again; the brand is tainted in my mind.

Today, even $50 Android phones shouldn't have less than 4GB internal storage (which is like 1GB free storage), but those over $100-$150 should all have at least 8GB. Most people should get at least 16GB. When I buy a flagship phone a year or so from now, I intend to get one with 64GB internal storage and a 128GB UHS-I microSD to shoot 4K video and RAW pictures.


> Android's model for 64-bit support also seems superior to all other operating systems, in terms of not adding the "64-bit memory bloat" (which is roughly 30 percent more memory required for 64-bit apps), so in terms of RAM needed 64-bit Android apps should actually require less than 64-bit iOS apps.

Is this actually going to be the case? 64-bit addresses allow the use of tagged pointers, which I understand ameliorate any additional cost introduced.


How does Android's support for 64-bit avoid the memory bloat?


As I understand it, the major speed improvements with ART come less from the native compilation than from the improved GC and the ability to do whole-program optimizations during the AOT compilation step.

EDIT: Also, ART takes a different approach to AOT compilation than either iOS or Windows Phone. iOS apps are shipped as compiled binaries from the start, and Windows Phone apps are apparently compiled in the cloud by MS. ART does the native-translation step on the device itself, which has only recently become feasible given the speed and storage capacity of recent Android devices.
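In AOSP, that on-device step is done by the dex2oat tool at install time. A rough sketch of the kind of invocation involved (the exact paths and flags vary by Android release and device, so treat this as illustrative only):

```shell
# Illustrative only: roughly what the installer runs on the device,
# compiling the app's dex bytecode into native code for this exact CPU.
dex2oat --dex-file=/data/app/com.example.app/base.apk \
        --oat-file=/data/dalvik-cache/arm64/app.oat \
        --instruction-set=arm64 \
        --instruction-set-variant=cortex-a53
```

The --instruction-set-variant knob is what lets the output be tuned to the specific core in the device rather than a generic ARM target.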


I think his main point was that Android has caught up with the performance of AOT, not the nitty-gritty details behind the process, which are kind of irrelevant to the user.


It is relevant to the user for two reasons: portability and speed. In contrast to iOS, Android in the wild runs on everything from ARM to x86, in many different generations. Doing the compilation on the device allows the AOT compiler to optimize for the specific CPU that the device uses. Also, it increases compatibility, because what is shipped is platform-independent bytecode, not a binary that may only target one specific architecture.


This has been a solved problem since the early '80s, known as fat binaries or delivering multiple binaries in a package.

I code mainly in C++ (NDK) and don't have any issues delivering code.


> This is a solved problem since the early 80's, known as fat binaries or delivering multiple binaries in a package.

No, they are not the solution: you cannot provide newly optimized versions for CPU generations that did not exist when you compiled the binary. Also, providing an optimized version for every ARM, x86, etc. generation will make the binaries very fat.

Fat binaries worked for relatively static platforms, such as Macs. For devices which are (still) iterating quickly, it's a suboptimal solution. Sure, it works for delivering code, but it is suboptimal.

(Not that I believe ART is currently optimal; e.g., it should not be necessary to compile the same app on 1 million identical devices.)


They are only fat at the store; the devices only see the .so they understand.

And it isn't that hard to have "APP_ABI := all" in Application.mk. The build just takes a little longer on the CI system.
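For reference, that one-liner lives in the NDK's Application.mk; something like the following builds the native code for every ABI the installed NDK supports (the explicit list in the comment is one plausible subset, not a requirement):

```makefile
# Application.mk — build the native libraries for all supported ABIs.
APP_ABI := all

# Or list ABIs explicitly to keep the package smaller, e.g.:
# APP_ABI := armeabi-v7a arm64-v8a x86 x86_64
```

The store/installer then delivers only the matching .so to each device.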


There are currently more than 100 SoCs out there and several major generations of ARM architectures that show clear benefits from GCC compiler tuning for their platform.

Tracking all those devices and architectures is dumb when the manufacturer/device can provide proper tuning for the device itself.


Yes, but my experience tells me that "can" != "he/she will".


> known as fat binaries or delivering multiple binaries in a package.

That's not a solution, that's a band-aid. Fat binaries or multiple binaries cannot support future architectures or architectures that the original developer doesn't support. Platform-independent bytecode can.


That is the theory, which I preached for a long time as well.

Real life looks a bit different.

As a simple example, I can recompile my application and target any device while using 100% of all CPU features in all Android generations.

Whereas Dalvik and ART are stuck to the versions that were burned into the silicon.

So gcc and clang can easily outperform Dalvik on pre-4.4 generations, given that it was hardly changed since 2.3.


> As a simple example, I can recompile my application and target any device while using 100% of all CPU features in all Android generations.

For one application. And again, your binaries will be very fat if you want to optimize for every possible CPU generation.

> Whereas Dalvik and ART are stuck to the versions that were burned into the silicon.

Which is a problem with how Android updates are distributed, not with the principle of doing AOT compilation on bytecode. I think pretty much everyone agrees that the push of updates to Android devices sucks compared to iOS or, to some extent, Windows Phone.

Also, for optimization for a particular CPU it should not matter that ART is not upgraded, as long as ART was optimized for the CPU at the time the phone was released. (Of course, you would miss out on newer optimizations in ART.)


> Which is a problem with how Android updates are distributed, not with the principle of doing AOT compilation on bytecode. I think pretty much everyone agrees that the push of updates to Android devices sucks compared to iOS or, to some extent, Windows Phone.

That is why I said:

> That is the theory, which I preached for a long time as well.
>
> The real life looks a bit different.

So one is bound to use the approach that works best, regardless of how it could be.


It still requires you, as a developer, to recompile the app for the target CPU.

I have a collection of M68K and PPC Mac binaries that nobody is going to recompile for me. I would take a suboptimally running app over a not-running-at-all app any day.


> I have a collection of M68K and PPC Mac binaries, that nobody is going to recompile for me.

Apple did.

http://en.wikipedia.org/wiki/Rosetta_%28software%29


Rosetta is not supported on current systems. It was dropped in 10.7, I think.

Classic, which provided the ability to run classic M68K and PPC binaries, had its last release in 10.4. It wasn't supported in 10.5 and never ran on Intel Macs.

Not to mention that both these technologies are limited hacks compared to proper platform-independent bytecode à la Dalvik, ART, Java, or CIL.


"Whereas Dalvik and ART are stuck to the versions that were burned into the silicon."

I don't understand this comment. What hardware are Dalvik and ART wedded to?


The ROM containing the firmware.

Except for the top OEMs, most Android devices don't get any updates, so you are stuck with the JIT that was burned into the firmware.


What do you mean by 'ROM' or 'burnt'? Android is normally on flash storage. Even if a device no longer gets updates, you can still replace it if the bootloader is unlocked (e.g. CyanogenMod).

Of course, you know this, so I can only assume that you are using such words for dramatic effect ;).


Which normal users are unlocking their devices?

It doesn't matter if it is flash, eprom, rom, whatever.

The practical result is that most devices die with the Android version they were bought with.


Normal users do not care about firmware versions or cpu optimizations.

An app either runs, or not. Dalvik allows apps to run without the user having to know about the CPU architecture in their phone or whatever.


But I as a developer care about delivering the best experience in terms of performance from Android 2.3 all the way up to Android 5.0.


Sure, Dalvik apparently was left to gather dust for a few Android releases.

Yes, having the AOT compilation done on the device is a handicap, given how rarely OEMs update Android versions.

So while Apple's and Microsoft's approaches mean you can target older devices while enjoying improvements in code generation, with Google's approach you're stuck with whatever the device supports, unless you use the NDK instead.



