

Google Acquires Android Performance Startup FlexyCore For A Reported $23 Million - sciwiz
http://techcrunch.com/2013/10/22/google-flexycore/

======
ihsw
Lesson learned: create a compelling product for Android developers and Google
will acquire you in a heartbeat.

------
plackemacher
I wish I could find more information about them. All I can find are videos
showcasing the performance improvements but without any explanation.

[http://www.youtube.com/user/FlexyCore/videos](http://www.youtube.com/user/FlexyCore/videos)

~~~
JonSkeptic
From what I can gather, it seems FlexyCore generates more efficient,
architecture-specific assembly code at build time. This is where the
performance gains come from.

A little deeper reading shows that it does this through the Dalvik VM, the VM
on all Android devices.

I think the creator can say it better than me:
[http://www.youtube.com/watch?v=tEAz9fRoDmA](http://www.youtube.com/watch?v=tEAz9fRoDmA)

~~~
nly
So basically it caches the native code produced by the Dalvik VM? I'm
surprised Dalvik doesn't already do this.

~~~
pjmlp
Dalvik's development has been stalled since Android 2.3: no new JIT or GC
improvements since then.

No more talks about it at Google IO, either.

Of course, this is also related to Oracle's suit against Google.

I wonder if this means Google will reinvest in Dalvik, as I am betting KitKat
will again only bring more Google APIs and nothing else.

------
VeejayRampay
French startup at that. Happy to see that there's at least _some_ innovation
going on in this country.

------
devx
Hoping it will appear in Android 5.0. Six months or so should be enough to
implement this, right?

Also, they should probably acquire these guys, too:

[http://www.genymotion.com/](http://www.genymotion.com/)

The emulator is also one of Android's longstanding problems, and it would be
nice if they fixed that once and for all, too.

I'm also hoping that with the arrival of powerful ARMv8 hardware in the next
couple of years, maybe they'll make their SDK available for ARM too, so
Android doesn't even need to be emulated on x86.

------
leeoniya
wonder if it's similar to Linaro Android [1]

[1]
[http://www.youtube.com/watch?v=F_NR_goi6iA](http://www.youtube.com/watch?v=F_NR_goi6iA)

------
Zigurd
This feels like an "acqui-hire." I haven't yet seen a performance booster get
any traction with OEMs. This sounds more promising than Myriad's idea of
replacing the whole VM with their own Java VM.

Optimization in a battery powered device is tricky. After the initial work on
Android's interesting JIT strategy, I have not heard much else about boosting
performance. Developers who run into performance limits are stuck "doing it by
hand" with native code and Renderscript. Maybe this signals a revival of
interest in extracting better performance from Android's runtime environment.

~~~
pjmlp
I certainly hope so.

On iOS everything is native.

Since Windows Phone 8, Microsoft has also gone fully native, pre-compiling
MSIL to native code on the Windows Store servers. The devices only have a
linker to perform the last step of replacing symbols with effective addresses
at load time.

~~~
Zigurd
Everything being native isn't always a better way. Android's architecture
keeps code small, makes sure nobody eats the global heap, shares common
bytecode (really, anything) pages across processes, and provides memory-
conserving modularity tools within each VM instance's heap. It's a pretty
elegant system and the reason there hasn't been a lot of new work on the JIT
may be that the JIT is as good as it can be without eating more battery.

~~~
pjmlp
> Android's architecture keeps code small, makes sure nobody eats the global
> heap, shares common bytecode (really, anything) pages across processes, and
> provides memory-conserving modularity tools within each VM instance's heap.

The global heap can be controlled per process in many OSes; there is nothing
special about VMs there.

Sharing of code pages between processes has been done in mainstream OSes for
decades.

 _Memory-conserving modularity tools_ , whatever those might be, aren't
VM-specific.

The only point you are right about is that bytecode is much more compact than
native code.

> ... the reason there hasn't been a lot of new work on the JIT may be that
> the JIT is as good as it can be without eating more battery.

The current JIT is good enough for developers writing CRUD applications, HTML
wrappers or the "fart app" of the month.

Those of us that care about performance use the NDK anyway.

~~~
Zigurd
> Those of us that care about performance use the NDK anyway.

If you had said "People porting game engines implemented in C use the NDK
anyway" you might be right, and they do that because re-coding the game engine
is impractical, not for performance. But the implementation of large systems
in Android is best done in Java, for reasons of performance, modularity, and
power efficiency.

Managed language systems exist for a reason. Dalvik bytecode is much more
space efficient than native code, and Google has claimed it is about 2x as
space efficient as Java bytecode, and 2x faster to interpret. Those factors
reduce the need for a JIT in Android, and, until Android 2.3, Android didn't
have one, and there were plenty of ambitious, performance-sensitive Android
apps.

The Android component architecture enables a kind of code swapping that lets
large apps fit in a single process with a limited heap size. If you are not
making use of component objects, you are not being memory efficient.

Android's Zygote and its use of copy-on-write go beyond sharing pages of
"pure" code. Any page can be shared between processes this way. This keeps
global memory use down and makes it efficient to start many more processes
than would otherwise be practical with a VM language.

The Android base classes, Java toolchain, and runtime are the most
sophisticated and efficient managed language environment for mobile devices.
Ignoring them and going straight to NDK code without a specific performance
case based on measurements of Android Java code performance, the effects of
the JIT, and code size is a waste of time and a source of bugs that will be
harder to find and fix.

~~~
pjmlp
Sorry, but your comment reads as Google PR for Dalvik. Even with your
published books, I am not buying it.

The only thing you are right about is that bytecode is more space efficient
than native code.

Everything else is easily done in native code as well, it is just a matter of
the OS providing support for it, there is nothing special about having to be a
VM.

~~~
Zigurd
Much of the Android OS's middleware layer is implemented in Java, and runs in
a Dalvik VM instance. Many parts of the OS are implemented as Service
components and accessed through remote procedure calls - the same way
installable apps can provide APIs to other apps.

Contrast this with the history of the CLR in Windows mobile devices. It was
not taken seriously as an application platform. None of the built-in apps used
it, never mind it being the basis for system programming. Android is a Java OS
in a more meaningful way than running a CLR for some subclass of apps, which
certainly doesn't make Windows a "C# OS."

Try writing a non-game Android application with every Activity component as a
NativeActivity subclass. It will be painful to code, more painful to debug,
and it will disappoint in performance.

I'm not just shilling for Google's architectural decisions. Prior to Android
being released, I was working on a wireless device OS that used Linux and a
Java VM for userland software. I am very familiar with the performance increment
obtainable using a JIT in a mobile device, and, as an alternative, in using a
precompiled Java runtime. As in Android, the heavy-duty pixel-pushing in that
OS was done in a graphics stack implemented in native code. Much of Android's
widget set is also in native code. Unless you choose to do heavy computation
on an Android device, you are not going to get a large advantage from a JIT,
especially if your bytecode interpreter actually does perform 2X better than
Java.

As for heavy computation on Android devices that's what Renderscript is for.

~~~
pjmlp
So why do native Objective-C/iOS and native .NET/WP 8.x apps run faster and
more fluidly than Android ones?

~~~
Zigurd
Mainly due to differences in multitasking.

Android always enabled completely general multiprocessing and multithreading,
initially at the expense of smooth visual effects, and Android was implemented
on a variety of CPUs and GPUs. Secondly, Android did not initially have a
concurrent GC. Thirdly, the animations were not tuned (q.v. Project Butter).
Fourthly,
the graphics stack was tuned to use more GPU-based operations as mobile GPUs
improved. If you know you have only OS overhead when you are the foreground
task, lots of things get more deterministic. But that does not mean you have
more total throughput.

For that matter, Android didn't have low-latency audio until 4.2.

Try taking a look at how many processes, and Dalvik instances, are running
concurrently at boot time on an Android device. Then try just starting that
many VM instances, of any VM, on any other OS. The reason Microsoft has gone
to pre-compiling "in the cloud" is that otherwise the CLR has a miserable
startup time and mediocre performance. The CLR architecture is much more like
a conventional Java VM. You can't build a whole OS on a managed language
implementation of OS features with that level of performance.

I'd wager Microsoft has a much rougher road ahead stuffing a real Windows
kernel into handsets and maturing CLR performance, memory efficiency, and
battery life for comparable computing tasks that Android handles well in all
performance dimensions.

~~~
pjmlp
> For that matter, Android didn't have low-latency audio until 4.2.

It still doesn't: [https://groups.google.com/forum/#!topic/android-ndk/qACobHa8...](https://groups.google.com/forum/#!topic/android-ndk/qACobHa8lNQ)

~~~
Zigurd
The person who posted that comment is behind the times re audio on Android.

Android can deliver very low audio latency and do it consistently, but the OEM
has to test that their CPU power management and other kernel configurations
have not screwed it up, and those tests are not baked into the CTS.

The OEM should, at least, use the feature flag for low-latency audio
correctly, leaving it off if they don't actually know. But requiring that flag
in your manifest means your app loses access to devices where good enough is
good enough.

Here is a low-latency polyphonic synth: [https://code.google.com/p/music-synthesizer-for-android/](https://code.google.com/p/music-synthesizer-for-android/)

This synth was the test case for development of low-latency audio.

~~~
pjmlp
> The person who posted that comment is behind the times re audio on Android.

No, that person has real-life experience of what it means to develop for
Android while targeting the majority of devices available to consumers.

All your answers so far were very good and I appreciate the time you took to
answer them.

However, those of us who target all three major mobile OSes across multiple
OEMs have a different experience of what Android's performance on real devices
looks like.

And to conclude this thread: Google agrees with us. They wouldn't have spent
$23 million buying FlexyCore if Android's performance on real devices were as
good as you have been defending.

