
Why mobile web apps are slow - jacobparker
http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/
======
hcarvalhoalves
The trend towards retrofitting interactive applications to run inside the
equivalent of a VM is just stupid, not only because of the obvious
wastefulness of executing JS (given Moore's Law stopped).

When dealing with a browser, you're limited to a broken interaction model
(document? URL? back button?), a broken _security_ model, one programming
language with a weak API (that isn't equally supported between browsers,
either), and you have to abuse HTML into doing things it was never meant to
do. It's a development experience that is subpar even by 90's standards.

Add all that to the pedestrian performance, and I'm amazed this is _still_ an
option.

~~~
untog
> The trend towards retrofitting interactive applications to run inside the
> equivalent of a VM is just stupid

Well, Android does it.

And in all honesty, when apps start getting complex they're just as bad.
There's a reason URLs exist: to give an item a unique identifier. Apps need to
do that too, and end up having to fudge around with launching and reaching a
specific item.

~~~
potatolicious
Apps have URLs also - even iOS, which is the weakest link in terms of inter-
app interaction, has them.

The difference here is that one platform requires shoehorning a URL into a
non-document-centric use case, and the other one leaves it optional, for the
developer to implement as he/she sees fit.

If I'm using the Yelp app, I expect to be able to communicate a restaurant
listing to someone else via a URL. But why is it that my alarm clock needs a
URL? Or my phone dialer?

The web was envisioned as interlinked documents, and for many parts of the web
this is still very much the case (see: Amazon). For others this metaphor
breaks down badly, and is the source of a great deal of hacks and kludges.

~~~
zanny
My linux kernel has a url.

file:/boot/vmlinuz-linux

So does my init daemon!

file:/sbin/init

Which is a symlink, so let's specify a familiar program instead, and throw on
the proper file extension.

file:/usr/bin/firefox.elf

Still a url. To a program. If your executor supported file URIs you could run
that.

I could host it on an http server, and if I mounted it using webdav as a
davfs2, I could access anything on that server the same way.

You could write a python interpreter that can take a url to a python script to
run: pythor
[http://github.com/some_repo/foo/script.py](http://github.com/some_repo/foo/script.py)

A lot of programs already use URIs - I know KDE's Dolphin specifies all
resource paths as URLs regardless of whether they are local or remote, and it
supports a buttload of URI schemes. The Qt5 QML engine specifies QML files to
load by URI, so you can load remote QML applications in a browser that links
in the QML runtime.

I mean, we even have a nice syntax (albeit not pervasive or transparent) for
supplying arguments in a URL - something like google.com/search?q=bacon, which
in bash-world would be google --query bacon or google -q=bacon.

So what is keeping URLs from being uniform resource locators?
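The query-to-arguments mapping described above can be sketched with the standard WHATWG URL API (available in modern browsers and in Node); the reconstructed `argv` is just an illustration of the bash analogy, not a real tool:

```javascript
// Parse a search URL and recover its query parameters.
const url = new URL('https://google.com/search?q=bacon');

// URLSearchParams gives structured access to the query string.
const query = url.searchParams.get('q'); // "bacon"

// Rebuild the CLI-style invocation the comment above describes.
const argv = ['google', ...[...url.searchParams].map(([k, v]) => `--${k}=${v}`)];
```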

~~~
potatolicious
Not everything being a resource, for one, and not everything being naturally
represented in a URI.

It's _possible_ to encode just about any information in a URI, but that
doesn't mean it isn't a kludge and a force-fit.

Take a very, very simple (and common) use case:

[https://maps.google.com/maps?q=Newark+International+Airport,...](https://maps.google.com/maps?q=Newark+International+Airport,+Brewster+Road,+Newark,+NJ&hl=en&sll=40.697488,-73.979681&sspn=0.940177,1.685028&oq=newark+li&hq=Newark+International+Airport,+Brewster+Road,+Newark,+NJ&t=m&z=14)

Holy God would you look at that URL. _It doesn't refer to a resource_. In
fact it refers to _application state_. This is a gross abuse of the whole
concept of a URL, but the folks at Google aren't idiots - they know this. But
the fact of the matter is that Google Maps _is not document-based_, and
people have legitimate need to transport application state, independent of the
semantics of the information they're looking for. Even something as simple as
showing my friend the same map I'm looking at so we can talk about it requires
bending the role of URIs wildly out of shape.

Even better example:

[https://maps.google.com/maps?q=restaurants+around+Times+Squa...](https://maps.google.com/maps?q=restaurants+around+Times+Square,+Broadway,+New+York,+NY&hl=en&sll=40.691572,-74.10347&sspn=0.235066,0.421257&oq=restaurants+around+times+square&hq=restaurants&hnear=Broadway,+New+York&t=m&z=16)

One can argue that the previous link was clearly a reference to the airport,
and there can be a URI for it (which wouldn't transport application state,
substantially hobbling its use, but whatever). _This_ link refers to
"restaurants around Times Square", which isn't a logical entity, and whose
relevance to the end user depends _entirely_ on the application state being
transported (e.g., zoom level, center point, filtered results, etc).

Every time these insane-o URLs get brought up, the URI acolytes seem to end up
suggesting that maybe we shouldn't have rich, interactive maps and such
things - that we should force software _into_ roles that play nicely with
document-centric, URI-friendly schemes. I for one believe that software serves
users first, and that methods to locate and specify information need to adapt
to users' needs, not the other way around.
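The state-in-a-URL round trip that those Maps links perform can be sketched as follows. The host and field names (`q`, `ll`, `z`) are illustrative only, not Google's actual scheme:

```javascript
// Serialize hypothetical map view state into a query string, and back.
// Host and parameter names are made up for illustration.
function encodeMapState(state) {
  const params = new URLSearchParams({
    q: state.query,
    ll: `${state.center.lat},${state.center.lng}`,
    z: String(state.zoom),
  });
  return `https://maps.example.com/maps?${params}`;
}

function decodeMapState(urlString) {
  const params = new URL(urlString).searchParams;
  const [lat, lng] = params.get('ll').split(',').map(Number);
  return {
    query: params.get('q'),
    center: { lat, lng },
    zoom: Number(params.get('z')),
  };
}
```

The point stands either way: the URL here transports application state (center, zoom, query), not a document.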

~~~
wtbob
> Even something as simple as showing my friend the same map I'm looking at so
> we can talk about it requires bending the role of URIs wildly out of shape.

How so? That 'same map' _is_ the resource being uniformly indicated.

> This link refers to "restaurants around Times Square", which isn't a logical
> entity

Why not? It's a logical entity, which is a collection viewed a certain way.

------
Joeri
It's a well-researched article, but I doubt it will change anyone's mind. What
I take away from this is that the sort of mobile apps I work on can be done
perfectly fine in JavaScript, because I did their desktop equivalents in IE7
on PCs with 512 MB of RAM.

I'm reminded of the native vs web debates on the desktop. Everything the
desktop guys said was true, with whole categories of apps essentially off-
limits to web developers, and the remainder clearly and obviously inferior to
their native counterparts. That didn't change; native apps still rule supreme
in capability and usability. And yet the average person is using web apps the
majority of the time (although that opinion admittedly depends on what you
understand as average). The reason why that is so translates to mobile.
Everything this article says is true, and still mobile apps will be mostly
web-based. There is no conflict between those facts.

~~~
coldtea
> _That didn't change, native apps still rule supreme in capability and
> usability. And yet the average person is using web apps the majority of the
> time_

Those web apps most people use are trivial compared to the stuff we do on
Desktop and Mobile. And the main reason people use them "the majority of time"
is because they are huge time-sinks.

E.g.: social apps like FB and Twitter, which translate to posting and reading
text and images.

Web mail, which translates to working with lists of text articles.

Nothing much that's CPU-intensive, in the way desktop/mobile apps are.

As for the CPU-intensive stuff, it's really still a novelty on the web. Nobody
(== few people, fewer than 1% by the numbers I've seen) uses Google Docs (and
that's not even a full-featured office suite anyway -- not to mention the
spreadsheet, for example, takes a lot of time even on my 2010 iMac/Chrome to
apply stuff like conditional formatting). Even fewer people use something like
online image editors and such.

The only CPU-intensive thing people really use on the web is games. And even
those are more like 1995-quality desktop games.

------
modeless
The use of SunSpider to argue that CPU-bound JavaScript performance has not
improved since 2010 is simply wrong. What SunSpider does is not at all
representative of what CPU-bound JavaScript in a modern web app does. V8 and
SpiderMonkey _have_ significantly improved performance on real app code since
2010, despite the unchanged SunSpider scores. I say this as a developer on a
large and heavily CPU-bound JavaScript app.

The dismissal of asm.js is wrong too, for many reasons. Browsers don't
necessarily need to support it explicitly: GC pauses are eliminated regardless
of browser support, and most JavaScript optimizations will help asm.js too. The
elimination of GC is _not_ the only benefit, or even the main one really (the
main benefit is the elimination of polymorphism, improving JIT performance and
allowing AOT compilation). The assertion that it's not "really" JavaScript is
irrelevant to the question of whether it can speed up web apps or not.
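The polymorphism point can be illustrated with a toy contrast (my own example, not from the article): a JIT must keep a generic path for a function that sees mixed types, whereas asm.js-style coercions guarantee the engine always sees 32-bit integers and can specialize:

```javascript
// Polymorphic: `addPolymorphic` may see number+number or string+string,
// so the engine cannot commit to integer arithmetic.
function addPolymorphic(a, b) {
  return a + b;
}

// Monomorphic, asm.js-style: `| 0` coerces to a 32-bit integer, so the
// JIT (or an AOT compiler) can emit plain integer adds.
function addInt(a, b) {
  a = a | 0;
  b = b | 0;
  return (a + b) | 0;
}
```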

All that said, I really appreciate the article and the discussion it's
sparked.

------
MatthewPhillips
This article is weird. It starts off with a strong premise -- we need more
fact-based arguments when discussing technology choices. Yes, agree!

There are a fair number of good points here, particularly about ARM's
potential, but he's very dismissive of counterpoints and seems to believe that
having lots of quotes is a substitute for a good argument.

The section about garbage collection is especially unconvincing and, again,
dismissive of counterpoints. He never really addresses Android or Windows
Phone (which, by the way, he repeatedly and mistakenly calls Windows Mobile --
right in line with the article's extreme iOS focus). He _mentions_ them, but
then seems to think it's enough to offer a few quotes showing that, under some
extreme circumstances -- mostly games -- the GC can be a problem on those
platforms. But which is it: are all Java and C# Android/Windows Phone apps
unacceptably slow, or are _some_ types of apps not really doable in those GC
languages?

This is important because a large part of his argument centers on GCs not
working on mobile.

I'd conclude that the article is overly defensive, to the point that the
author repeatedly poisons the well against anyone who might potentially
disagree.

> we need to have conversations that at least appear to have a plausible basis
> in facts of some kind–benchmarks, journals, quotes from compiler authors,
> whatever.

This is where I think the article falls apart. Appearing to be based on facts
is what this article is about, but appearance just gets you talked about. It
doesn't really move the conversation forward at all.

~~~
rsynnott
> But which is it, are all Java and C# Android/Windows Phone apps unacceptably
> slow or are some types of apps not really doable in those GC languages?

He didn't really say either; he pointed out (correctly) that for certain
applications in a memory managed language, you'll want to essentially avoid
using the GC, and you'll certainly have to be very cautious of it. The extreme
case was old J2ME games, which usually worked by allocating all the memory
they'd use as a large array at startup, and doing manual memory management
thereafter.

In general, you can do just about anything in a memory managed language, but
in some cases, especially games, it may actually end up a lot more work than
in a non-managed language.
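The J2ME-style trick described above translates directly to JavaScript as an object pool: allocate everything at startup and recycle objects thereafter, so steady-state execution triggers no collections. A minimal sketch (the `Particle` shape is my own example):

```javascript
// A minimal object pool: all objects are allocated once, up front, and
// recycled thereafter, so the GC has nothing to collect during play.
class ParticlePool {
  constructor(size) {
    this.free = [];
    for (let i = 0; i < size; i++) {
      this.free.push({ x: 0, y: 0, alive: false });
    }
  }
  acquire(x, y) {
    const p = this.free.pop(); // reuse, never `new` at runtime
    if (!p) return null;       // pool exhausted: caller must degrade gracefully
    p.x = x; p.y = y; p.alive = true;
    return p;
  }
  release(p) {
    p.alive = false;
    this.free.push(p);
  }
}
```

The trade-off is exactly the one described: you get predictable memory behavior, but you are now doing manual memory management in a managed language.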

~~~
spullara
There are two performance choices in GC languages: 1) don't allocate after
initialization, or 2) only allocate within a small per-frame budget. The issue
is that 2) is only really feasible in server settings and doesn't really work
for clients. I buy his argument even though I don't wish to program under his
restrictions.

------
azakai
Well written article, but I have a few comments:

1. This is about JS speed. How important is JS speed in mobile apps, compared
to rendering, for example? That's the crucial question here, and I don't see
any numbers on it. Rendering, after all, should be the same speed as in a
native app.

2. "Nothing terribly important has happened to CPU-bound JavaScript lately" -
it's possible to pick benchmarks where there has been little change. There are
also benchmarks where there have been large changes. For example, many
browsers now have incremental GC and background-thread JITing. In some
workloads these make a huge difference - but perhaps not on the old SunSpider
benchmark.

edit: As an example of a type of code that can run far faster today than
before, see Box2D, which is a realistic codebase since many mobile games use
it:
[http://j15r.com/blog/2013/07/05/Box2d_Addendum](http://j15r.com/blog/2013/07/05/Box2d_Addendum)

~~~
ryanpetrich
Canvas is only marginally slower than the native 2D drawing APIs (it is a thin
wrapper for Skia on Android and CoreGraphics on iOS), but it still uses only a
single CPU thread for drawing on both platforms. To use the GPU, one has to
turn to the -webkit-transform CSS property which, under certain circumstances,
promotes the element to a GPU buffer. Only once WebGL becomes accessible will
the mobile web have a drawing API that is more efficient than the standard
windowing APIs.

------
untog
One minor caveat I'd add: JS is slow, but you don't always have to use it.
I've had great results using a little JS coupled with hardware accelerated CSS
transitions, animations, etc.

Web apps are slower than native, that's undeniable, but they're only _really_
slow when you port bad desktop practises ($("#home").animate() and the like)
over to mobile.
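To make the contrast concrete: `$("#home").animate({left: …})` drives layout from a JS timer on every frame, while setting a `transform` with a CSS `transition` hands the interpolation to the compositor (and often the GPU). A sketch of the latter; the helper is illustrative, not a real library API:

```javascript
// Build compositor-friendly style properties: animate `transform` with a
// CSS transition instead of mutating `left` from a JS loop (which forces
// layout every frame). Helper name and values are illustrative.
function slideIn(distancePx, durationMs) {
  return {
    transition: `transform ${durationMs}ms ease-out`,
    transform: `translateX(${distancePx}px)`,
  };
}

// In a browser you would apply it roughly like:
//   Object.assign(document.querySelector('#home').style, slideIn(200, 300));
```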

~~~
pachydermic
I'm just learning JS - I've had plenty of experience with more technical
scripting languages, but I'm still a JS newbie.

Is that a bad practice just because that kind of code is optimized for desktop
use or is it just bad practice in general?

Also, a broader topic I'm wondering about: JS and HTML seem pretty clumsy. I
find it hard to believe that they're really how the majority of web pages run.
Is there really no alternative? Or is it just the case that these tools are
"good enough" and what really creates problems is bad code(rs)?

~~~
dclowd9901
It's not easy to say whether it's a bad practice or not. I think more than on
any other platform, how your application is going to be used, and who it's
going to be used by, are vital questions you need to answer before you
approach the issue:

1) Are your visual flourishes vital to the quality of your product?

2) How concerned are you with baseline performance?

3) How big is your application? Do you need disparate components to be
reusable, or do you need a philosophy or underlying framework to guide
development?

4) Is your application going to be used on mobile?

The answer to 1 dictates how you should approach things like animating
elements. Much of the time, the animation is really just a visual flourish,
and can be handled with graceful degradation through CSS. Plus, you can learn
a few tricks using just CSS that avoid unnecessary redraws that might happen
using JS.

The answer to 2 dictates whether or not you should adopt an existing framework
or library vs. rolling your own. Libraries have gotten better about making
themselves available piecemeal, but if you know how to write quick code,
you'll often find almost everything is not quite what you absolutely need. See
Canvas game dev.

Question 3 will determine what sort of framework you'll want. Something that
is extendable and object-oriented, vs. something that has more self-contained
reusable components. The drawback of the former is that classing in JS brings
a considerable performance hit, whereas reusable components make for far less
maintainable code.

Question 4 will play a role in all the aforementioned questions, namely where
matters of performance and DOM redraws are concerned, as that's where mobile
tends to fall flat when it comes to web apps.

------
RyanZAG
A big part of the article focuses on garbage collection, low-RAM
environments, and mobile devices. I'd just like to say that my S4 has 2GB of
RAM (yes, 2GB). More than likely, my next phone will have 3GB or 4GB. As the
author mentions, once you get to 3x-6x more physical RAM than what you
absolutely need, the garbage collector no longer slows down your device - in
fact, it speeds it up. So I'm not sure this is really that big a problem. RAM
is one of the areas where mobile devices really are improving rapidly (the
other being 3D).

That said, the article was excellent. Props to the author.

~~~
kogir
Actually, in general as memory use increases GC takes more time, not less.
It's rare that the number of objects stays constant and they just get bigger,
and more objects means longer and more difficult tracing.

HN has this problem. If we use more than a quarter or a third of available
RAM, GC destroys site performance. Switching to faster RAM helps, but adding
more doesn't.

~~~
RyanZAG
Interesting point, but as per the chart[1] and discussion in the article, the
relative memory footprint does have a big effect on performance. This is
mostly felt when you get more complex object graphs. If the GC only needs to
run once when you finish your calculation, it only needs to sweep once. If RAM
is limited and the GC is forced to run 100 times, then it must sweep 100
times. That is a lot of extra work, and it is the main reason why GC can be
slow.

In the case of HN, it sounds like you have more than enough RAM already, so
you are not hitting the 'worst case' of GC that gets hit often in JavaScript
on last-generation mobile phones. The symptoms to the user are very similar to
'disk cache thrashing' and can be as bad as multiple-second pauses.

You're very correct that increasing heap size too far is also a big problem,
as running GC on 64GB of RAM can take seconds - freezing up everything else
while it runs. There are a number of guides on tuning the JVM on servers to
avoid this. The newer JVM GC algorithms are also very good in this situation
by default, imo.

Finally, on the issue of RAM speed: phones generally have a fixed amount of
space dedicated to RAM - the 2GB chip in my S4 is the same size as a 512MB
chip in an older iPhone. This generally means that it is just as fast to
access the 2GB of RAM on an S4 as the 512MB on older devices. So at least on
mobile, more RAM doesn't come at the cost of RAM speed. I can't seem to find
the benchmarks I saw earlier on this at present - can anyone confirm or deny
with numbers?

[1] [http://sealedabstract.com/wp-content/uploads/2013/05/Screen-...](http://sealedabstract.com/wp-content/uploads/2013/05/Screen-Shot-2013-05-14-at-10.15.29-PM.png)

~~~
rsynnott
As a general rule, as you add memory to a VM, your overall throughput grows,
but unless you're extremely careful, the length of individual GC pauses may go
up (which is a concern in a GUI app; you don't want to block the main thread).
Of course, all of this is highly dependent on the GC used.

> the 2GB chip on my S4 is the same as a 512MB chip on an older iPhone. This
> generally means that it is just as fast to access the 2GB of ram on an S4
> compared to the 512MB on older devices

Not necessarily. Your S4 has a memory bandwidth of either 8.5GB/sec or
12.8GB/sec (depending on whether it's the Exynos or Snapdragon model). An
iPhone 4S has a memory bandwidth of 6.4GB/sec. So, you can read your whole
memory in 230ms or 158ms (in theory; other limits will show up). The 4S can
read its 512MB of memory in 80ms.
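The arithmetic behind those figures is just size divided by bandwidth; a quick check (results land within rounding of the numbers above):

```javascript
// Theoretical time to stream all of RAM once: size / bandwidth.
function fullReadMs(ramGB, bandwidthGBps) {
  return (ramGB / bandwidthGBps) * 1000;
}

fullReadMs(2, 8.5);   // ≈ 235 ms (S4, Exynos)
fullReadMs(2, 12.8);  // ≈ 156 ms (S4, Snapdragon)
fullReadMs(0.5, 6.4); // ≈ 78 ms  (iPhone 4S)
```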

------
mhd
My biggest "issue" with this generally rather decent article is that CPU-bound
applications are _rare_. For the plethora of social, productivity and novelty
apps it doesn't matter at all. Optimizing the graphical stack certainly
matters, but that's mostly an infrastructure issue, i.e. already dependent on
the native parts of the OS/framework.

And consider who might actually be asking themselves this very question,
native or webapp: the very same social/productivity/novelty websites and
services looking for how best to present their product on a mobile platform.
And I'd wager a guess that sheer performance often isn't the deciding factor
here; it's mostly about the native look. (And even that seems to be getting
less important, although I wonder whether iOS 7 could rekindle it.)

And that whole talk about GCs sounds awfully familiar. Didn't we have the very
same conversation in the mid-90s when Java came out?

Also, there's the good old alternating of hard and soft layers. Hybrid webapps
(with some native components to speed up and slim down things) could easily
bridge the gap until the browser standards/APIs/hardware catch up. It's
really not an all-or-nothing question, especially considering how many
Android/iOS apps are basically WebViews with a tiny veneer of native buttons,
preferences and background storage.

Not that I like the web platform all that well, but let's be serious about the
"depth" of web apps...

~~~
revscat
> And that whole talk about GCs sounds awfully familiar. Didn't we have the
> very same conversation in the mid-90s when Java came out?

Yes, absolutely. The objections made by OP are not very different from those
made in the 90s.

But...

Some of the core objections made then are still true today; they're just
hidden from users because of advancements in both hardware and GC technology
itself. In the case of mobile, however, you're much more hardware-constrained,
at least for now. This will likely change sooner rather than later, but for
now it means that on hardware-constrained devices, well-designed apps written
in non-GC languages will almost always outperform similarly designed apps
written in GC languages.

In other words: it's going to be easier to get good performance out of Obj-C
on mobile than Java, and certainly easier than JavaScript.

For the record: I like Java -- yes, even today -- and the JVM, which I
currently use at my day job. For webapps it's moderately neato. For mobile,
though... not as much.

~~~
pjmlp
I think the main issue is that those environments so far insist on a VM model.

Most likely, if Android used natively compiled Java, performance could be
improved.

Microsoft went this route with Windows Phone 8, replacing the .NET JIT with a
proper native compiler when targeting Windows Phone.

------
purplelobster
In my experience, slow DOM manipulation and various rendering glitches are the
biggest barrier for most apps. JavaScript performance is a big problem, but
one affecting mostly processing-heavy apps.

------
ashcairo
Great post! I'm a huge C++/Anti-GC/Games developer here.

A few notes. On performance, most of the time I'm GPU-bound when writing
games. In this JavaScript-based game for iOS:
[http://imgur.com/XQdGKHw,swTBC9T#0](http://imgur.com/XQdGKHw,swTBC9T#0)

I'm getting 11ms used up on the CPU and 29ms used up on the GPU:
[http://imgur.com/XQdGKHw,swTBC9T#1](http://imgur.com/XQdGKHw,swTBC9T#1)

That's 11ms using iOS JavaScript, which isn't JIT-optimised.

Granted, the CPU will go crazy if I try running advanced physics in
JavaScript, but I tend to push math-intensive functions to C++ using a hybrid
approach.

On GC, I'm not a fan; I like managing my memory. However, UnrealEngine, which
is super-awesome, has a built-in garbage collector. In my experience with it,
whenever someone tries to blame its GC for performance/memory abuse, to my
dissatisfaction the problem usually lies elsewhere.

The main reason I've fully switched to the JavaScript world for development is
its ability to re-interpret code on the fly, allowing me to write and tweak
code/UI/design changes in real time across multiple devices.

------
amenod
Coming from the world of C / Java / PHP / Python and similar, I found
Objective-C maddening at times. But ARC really shines. A brilliant idea, and
well executed too: virtually no work on the programming side (well, a few
small caveats about function names apply) and no performance penalty on the
user side. Win-win. Good job, Apple.

------
etler
Am I the only person who's not a fan of the whole blogging with attitude
trend?

~~~
tedsanders
No, you're not alone. (One issue at play may be selection bias. The blog posts
with attitudes cause a bigger reaction in readers, causing higher sharing,
causing them to be read more, causing them to be overrepresented in what we
see.)

------
Pxtl
Nicely explains why I find Android's use of Java as the lingua franca utterly
perplexing. GC has no place on a low-power interactive platform.

~~~
Apocryphon
Maybe it's a trade-off to make developer adoption easy.

~~~
sjmulder
I’m not sure what alternatives they had. Of the top 10 languages on GitHub
([https://github.com/languages](https://github.com/languages)), only C and C++
do not have garbage collection, and those are quite a different beast from
Java or Objective-C.

There’s a selection bias here but I struggle to come up with a popular non-GC,
compiled language that operates on the same sort of level as Java or
Objective-C.

~~~
Aqwis
Python's GC can be turned off.

~~~
Pxtl
Isn't Python's GC reference-counted and deterministic anyway?

------
kannanvijayan
Disclaimer: I'm a JS JIT developer.

The use of Sunspider performance to talk about JS application performance is
unfortunate, and undermines a significant part of the argument made in the
article. Sunspider is a great benchmark to use to talk about web _page_
performance.

This is how JITs work:

1. JavaScript code starts executing in a slow environment.

2. In this execution mode, a lot of overhead is spent collecting type
information so that the hot code can be optimized.

3. After some period of time, the hot code is compiled with an optimizing
compiler, which is a very expensive task.

4. The compiled code starts running.

For example, in Mozilla's JS engine, these are the actual numbers associated
with optimization:

1. A function starts off running in the interpreter.

2. After it gets called 10 times (or a loop within it goes around 10 times),
it gets compiled with the Baseline JIT, which performs heavyweight bytecode
analysis on the JavaScript code and generates unoptimized machine code that is
instrumented to collect information about the executing JS.

3. After 1000 iterations (most of that running within Baseline jitcode), the
function gets compiled with our optimizing compiler, IonMonkey.
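Those thresholds can be caricatured as a warm-up counter. The 10 and 1000 figures come from the description above; everything else here is a toy model, not real engine code:

```javascript
// Toy model of SpiderMonkey-style tiering: which tier a function runs in
// depends on how many times it has executed. Thresholds from the
// description above; the dispatch logic is a deliberate caricature.
const BASELINE_THRESHOLD = 10;
const ION_THRESHOLD = 1000;

function tierFor(callCount) {
  if (callCount < BASELINE_THRESHOLD) return 'interpreter';
  if (callCount < ION_THRESHOLD) return 'baseline';
  return 'ion';
}
```

A benchmark that finishes in 0.2 seconds spends nearly all of its time in the first two tiers, which is the whole point being made here.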

How does this relate to Sunspider?

Well, if you actually run SunSpider, you'll notice that the entire benchmark
takes about 0.2 seconds or less on a modern machine (about 1.5 seconds on a
modern phone). It's an extremely tiny benchmark (in terms of computational
heaviness). Comparatively, the Octane suite, while containing fewer
benchmarks, actually takes 100 times or so longer to execute than SunSpider.

How many apps do you use that run for less than 1.5 seconds?

This is the problem with using Sunspider to talk about app performance.
Sunspider doesn't measure JIT performance very well at all. When JIT engines
execute Sunspider, they spend a relatively large amount of that time just
collecting the necessary information before they can start optimizing. Then,
soon after we spend all that effort optimizing the code, the benchmark ends,
without really taking advantage of all that fast code.

Very often, JIT optimizations that speed up computationally heavy benchmarks
actually regress SunSpider. This is because the optimizations add some extra
weight to the initial analysis that happens before optimized code is
generated. This hurts SunSpider because the initial analysis phase is much
more heavily weighted than later optimized performance.

Sunspider's runtime characteristics make it a good benchmark for measuring web
PAGE performance. Web pages are much more likely to contain code that executes
for short periods of time (a few ms), and then stops. But apps have much
different runtime characteristics. Apps don't execute code for a few
milliseconds and then stop. They run for dozens of seconds, minimally, and
often minutes at a time. A JIT engine has much more opportunity to generate
good optimized code, and have that optimized code taken advantage of over more
of the execution time.

It's unfortunate that such a fundamental misunderstanding of the
characteristics and nature of optimized javascript is used to drive an
argument about it.

~~~
__alexs
On Octane I get basically the same performance difference between desktop
(Core i7 L620) and mobile (Galaxy S3), so I don't think you can wave away his
critique quite so easily. I've seen other people's results that seem to
suggest that mobile scores 5-10x worse than desktop too. I would quite like to
see more published benchmark results for Octane; I haven't found very many
that make it easy to compare.

~~~
kannanvijayan
That's not the point I was addressing. It's obviously true that mobile is
slower than desktop by a factor of 5-10x. However, the author uses Sunspider
as a metric to claim that "JS engines haven't really gotten all that much
better since 2008".

That's not true at all.

On benchmarks like Kraken and Octane, which actually take multiple seconds to
run instead of just milliseconds, both V8 and SpiderMonkey have improved
significantly even just over the past year.

Speaking for SpiderMonkey (only because that's the engine I'm more familiar
with), we've bumped Kraken by 20-25%, and we've bumped Octane by around 50%.

On real-world app code like pdfjs and gbemu, we've improved performance by
2-3x over the last year.

JS has gotten significantly faster over the last couple of years, and we're
working quite hard on making it faster yet.

~~~
spullara
I think what he is saying is that your work is in vain because it doesn't
increase performance by 10x or more, and that such small increases are nice
but not nice enough to make it worthwhile to use JavaScript.

~~~
Ygg2
Right, and no software will ever migrate to the web, because it's 5-10 times
slower than native. Who's gonna use it, right? Case closed.

Except it's not.

~~~
spullara
Just because I tried to explain the point doesn't mean I believe the point.

~~~
Ygg2
Sorry, I must have written that in the heat of the moment. Either that or I
replied to the wrong post. Is there a way to delete that reply?

------
glogla
I like how he says that "mobile web apps are slow and always will be" (not a
direct quote), but then dismisses asm.js because "it wouldn't be JavaScript
anymore", so it doesn't count.

The point of asm.js (if I understand it correctly) is that you can use
JavaScript in a way similar to Python -- you write your application in a high-
level language, and then, after profiling, rewrite the performance-critical
parts in a lower-level, less powerful, but faster dialect.

That means it's still a "web app", but it can perform reasonably.
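A minimal sketch of what such a rewritten hot path looks like. The `"use asm"` dialect is still valid JavaScript and runs in any engine; this toy module may not satisfy a strict asm.js validator, but the shape (a module closing over `stdlib`, with `+x` double coercions) is the pattern:

```javascript
// A tiny asm.js-style module: the hot math lives in a restricted, typed
// dialect, while the rest of the app stays ordinary JavaScript.
function PhysicsModule(stdlib) {
  "use asm";
  var sqrt = stdlib.Math.sqrt;
  function magnitude(x, y) {
    x = +x; // `+x` coerces to double, so the compiler knows the types
    y = +y;
    return +sqrt(x * x + y * y);
  }
  return { magnitude: magnitude };
}

// Ordinary high-level code calls into the compiled hot path:
const physics = PhysicsModule({ Math: Math });
```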

------
cromwellian
I generally agree with his comments on JS and memory management for stuff like
games or image processing, but IMHO these are not, in general, the reason
people have this notion that "mobile web apps are slow". There aren't a lot of
games or Instagram-style apps written in JS. What there are, are lots of
content-oriented and form-processing mobile web sites, like your bank, or your
insurance company, or Slate or HuffingtonPost. It is these sites that prompt
you to download native versions, and it is people's experience of these
lightweight (non-CPU-dependent) sites as slow which I think needs addressing.

I would break it down into three effects:

1) startup latency

2) excessive repaints / not leveraging the compositor / jank

3) 'uncanny valley' issues where web emulations of native widgets don't match
100% with the feel of system widgets.

The basic document-oriented part of the mobile web is a lot slower than
something like UIKit/TextKit on iOS. Launching even a basic form to display
some data, or a few paragraphs, and scrolling it smoothly, with all rendering
effects and without layout or repaint jank, is just more work to optimize for
on the mobile web. Therefore many developers don't do it, or aren't aware of
how to do it, leading to the end-user perception that "mobile web is slow".

I'd summarize my point of view as: NaCl/PNaCl won't solve our problems
(although it is still a good idea to do them).

------
gwu78
Has anyone in mobile OSes and mobile apps ever questioned the assumptions that
Java must be used and/or a scripting language must be used, e.g., JavaScript?

What if we had a mobile OS and mobile apps that were 100% Java-free and only
ran compiled code (i.e., code that compiles to ARM assembly)?

Under the Steve Jobs theory of computers, everything should be "fast enough"
just as it is. Because Moore's law and the increasing computing power of
hardware should allow us to focus entirely on making everything as user
friendly (including developer friendly) as possible without having regard for
software efficiency. Watch for example the 1980 documentary "Entrepreneur" to
see Jobs state this while the cameras follow him around during a NeXT retreat
in Pebble Beach.

But I've also seen Hertzfeld say that Jobs, for example, pushed hard for him
to try to reduce boot times on the Mac. This, of course, is firmly in the
realm of software (bootloader and kernel), so it would seem to be a conflict
of ideas. Software efficiency does matter sometimes, even when the focus is
user friendliness and user experience.

Jobs also said things back in the day about "the line between hardware and
software" becoming blurred.

I find this difficult to agree with. But then, I'm not called a genius like he
was.

There are no doubt competent C and ARM assembly programmers out there. I've
seen them doing embedded work and making games. Alas, I do not think they are
by and large involved with working on the popular mobile OS's. Instead the
focus seems to be on people who prefer to work with virtual machines and
scripting languages.

~~~
Moto7451
_Has anyone in mobile OS and mobile apps ever questioned the assumptions that
Java must be used and/or a scripting language must be used, e.g., Javascript?
What if we had a mobile OS and a mobile app that was 100% Java free and only
ran compiled code (i.e., that compiles to ARM assembly)?_

First, it's a bit pedantic but JIT VMs still compile down to ARM assembly on
an ARM device. They basically optimize away the overhead of interpreting their
target language.

That said, what you're getting at is how Palm and Windows Mobile did it (for
the most part, there is/was .Net for Windows CE) and how iOS does it now.
Ubuntu Mobile and Win Phone 8 offer native toolsets in addition to GC/managed
runtimes.

I don't think it's an assumption at all that mobile devices have to use Java.
J2ME didn't exactly take over the world, and that had the advantage of
hardware acceleration in the form of ARM Jazelle[1].

What has happened is that as hardware gets faster/cheaper, Java, .NET, JS,
etc. become a better fit for development on mobile devices. On the desktop we
have stupidly powerful machines that can generally brute-force their way
through the inherent overhead of interpreted/JITed/VMed languages. On mobile,
however, we're not there yet.

[1][http://en.wikipedia.org/wiki/Jazelle](http://en.wikipedia.org/wiki/Jazelle)

~~~
gwu78
"become a better fit"

How? Why?

I might better understand your perspective if you could elaborate on the
details.

Why do we have to write resource-hogging software _now_ for mobile if the
devices are not yet ready to "brute force" their way through it? And if we
wrote small, fast software _now_, what would be the downside of this as
devices get more powerful in the future?

Users complaining about lightning-fast programs that lack the whiz-bang
factor that would have been added if developers had used interpreted
languages? I can't imagine it.

------
MarkMc
As an old Java programmer, I'm astounded that Apple are deprecating garbage
collection on OS X. Is ARC really a better approach _on the desktop_?

~~~
drewcrawford
No. But Apple has a shockingly small number of employees relative to other big
software names, and they have historically had far fewer third-party
developers than other platforms.

As a result, they have chosen to consolidate the mindshare onto one memory
management technology rather than dilute the talent pool with two competing
proposals. Given that iOS is much, much more popular than OS X, they have
chosen to standardize on the technology that works best for the popular
platform.

~~~
Roboprog
I would respectfully have to disagree. I've been programming for about 30
years, but finally broke down and bought my first Mac about a year ago. It's a
nice platform, and the native apps are a joy to use, compared to the sludge
that most Windows apps are.

Oh, and it lets me do _most_ of what I have been doing on Linux the last 18
years. Not quite everything, but pretty close.

No matter how much you tweak and tune a GC, there are still going to be times
when it destroys "locality" in a hierarchical memory system and causes some
kind of pause. For a small form entry program, this is probably negligible. As
the size of the program and/or its data grows, or the time constraints grow
tighter, these pauses will become more and more unacceptable.

Most of my work the last 10 years has been in Java, but the JVM is _not_ the
one true path to enlightenment. TMTOWTDI :-)

------
weland
This is actually a very interesting post, and the first one this well
documented that I've seen in a while. I also actually agree with most of the
author's points.

That being said, I think there's a big elephant in the room: the additional
layers of complexity. Much of the younger generation in the web development
community seems very far removed from the principles of computer engineering,
to the point that there are things you can intuit by common sense alone, yet
they remain opaque to them. For instance:

1. Performance hits aren't only due to slower "raw" data processing; they can
also occur due to data representation -- think about cache hits, for instance.
Your average mobile application's processing is more IO-bound than processor-
bound. Unless you can guarantee that your data is efficiently represented, you
take a hit from that.

2. Since the applications are sandboxed, the amount of memory your runtime
occupies is directly subtracted from the amount of memory your application
gets. The more layers your runtime piles up that cannot be shared between
applications, the more memory you consume without actually offering any
functionality.

3. Every additional layer needs some processing time. JIT-ing instructions
that write something into a buffer which gets efficiently transferred into
video memory is always going to be slower than simply writing into that buffer
directly -- _at the very least_ the first time around.
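Point 1 can be made concrete with a small sketch: the same data in two representations, one scattered across the heap as objects and one packed into a contiguous typed array (names here are illustrative):

```javascript
// The same point data in two representations. An array of objects
// scatters each point across the heap; a flat Float64Array keeps the
// values contiguous, which is friendlier to the cache (and gives the
// GC one object to track instead of thousands).
function sumObjects(points) {          // points: [{x, y}, ...]
  var s = 0;
  for (var i = 0; i < points.length; i++) {
    s += points[i].x + points[i].y;    // pointer chase per element
  }
  return s;
}

function sumFlat(coords) {             // coords: Float64Array [x0, y0, x1, y1, ...]
  var s = 0;
  for (var i = 0; i < coords.length; i += 2) {
    s += coords[i] + coords[i + 1];    // sequential, cache-friendly reads
  }
  return s;
}
```

Both functions compute the same sum; the flat layout is simply a representation the memory hierarchy has an easier time with.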

I'm not sure if someone is actually trying to say or (futilely) prove that a
web application is going to be as fast as a native one (assuming, of course,
the native environment is worth a fsck) -- that's something people would be
able to predict easily. This isn't rocket science, as long as you leave the
jQuery ivory tower every once in a while.

The question of whether it's fast _enough_, on the other hand, is a different
one, but I think it misses the wider picture of where the "fast" tree sits in
the "efficiency" forest.

I honestly consider web applications that don't actually depend on widely,
remotely accessible data for their function to be an inferior technical
solution; a music player that is written as a web application but doesn't
stream data from the Internet and doesn't even issue a single HTTP request to
a remote server is, IMO, a badly engineered system, due to the additional
technical complexity it involves (even if some of it is hidden). That being
said, if I had to write a mobile application that should work on more than one
platform (or if that platform were Android...), I'd do it as a web application
for other reasons:

1. The fragmentation of mobile platforms is incredible. One can _barely_
handle the fragmentation of the Android platform alone, where at least all you
need to do is design a handful of UI versions of your app. Due to the closed
nature of most of the mobile platforms, web applications are really the only
(if inferior, shaggy and poorly-performing) way to write (mostly) portable
applications instead of as many applications as there are platforms.

2. Some of the platforms are themselves economically intractable for smaller
teams. Android is a prime example: even if we skip over the overengineering of
the native API, the documentation is simply useless. The odds of a hundred
monkeys typing random letters on a hundred keyboards producing something on
the same level of coherence and legibility as the Android API documentation
are high enough that Google should consider hiring their local zoo for tech-
writing consultancy. When you have deadlines to meet and limited funding to
use (read: you're outside your parents' basement) for a mobile project, you
can't always afford that kind of stuff.

Technically superior, even when measurable in performance, isn't always
"superior" in real life. Sure, it's sad that a programming environment with
bare debugging capabilities, almost no profiling or static analysis
capabilities to speak of, and limited optimization options at best is the best
you can get for mobile applications, but I do hope it's just the difficult
beginning of an otherwise productive application platform.

I also don't think the fight is ever going to be resolved. Practical
experience shows users' performance demands are endless: we're nearly thirty
years from the first Macintosh, and yet modern computers boot more slowly and
applications load more slowly than they did then. If thirty years haven't
quenched the users' thirst for good looks and performance, chances are another
thirty years won't either, so thirty years from now we'll still have people
frustrated that web applications (or whatever cancerous growth we'll have
then) aren't fast enough.

Edit: there's also another point that I wanted to make, but I forgot about it
while ranting the things above. It's related to this:

> Whether or not this works out kind of hinges on your faith in Moore’s Law in
> the face of trying to power a chip on a 3-ounce battery. I am not a hardware
> engineer, but I once worked for a major semiconductor company, and the
> people there tell me that these days performance is mostly a function of
> your process (e.g., the thing they measure in “nanometers”). The iPhone 5′s
> impressive performance is due in no small part to a process shrink from 45nm
> to 32nm — a reduction of about a third. But to do it again, Apple would have
> to shrink to a 22nm process.

The point the author is trying to make is valid IMHO -- Intel's processors do
benefit from having a superior fabrication technology, but just like in the
case above, "raw" performance is really just one of the trees in the
"performance" forest.

First off, a really advanced, difficult manufacturing process isn't nice when
you want massive production. ARM isn't even on 28 nm yet, which means the
production lines are cheaper; the process is also more reliable, and being
able to keep the same manufacturing line in production for a longer time means
the whole thing costs less in the long run. It also works better with an
economic model like ARM's -- you license your designs rather than producing
chips yourself. When you have a bunch of vendors competing over
implementations of the same standard, with many of them able to afford the
production tools they need, chances are you're going to see lower cost just
from the competition effect. There's also the matter of power consumption,
which isn't negligible at all, especially with batteries being such nasty,
difficult beasts (unfortunately, we suck at storing electrical energy).

Overall, I think that within a year or two, once the amount of money shoved
into mobile systems reaches its peak, we'll see a noticeably slower rate of
performance improvement on mobile platforms, at least in terms of raw
processing power. Improvements will start coming from other areas.

------
Apocryphon
And how does this affect Firefox OS's performance?

~~~
wasd
That's a great question. Does anyone have any anecdotal/benchmark evidence
about the performance of Firefox OS?

~~~
lovskogen
I've tried it, and yes – it's sluggish.

~~~
holloway
It would be interesting to know if that is just the low-spec hardware that
Firefox OS targets. E.g., would a different approach also be relatively slow
on that hardware?

------
baddox
> BUT I THOUGHT V8 / MODERN JS HAD NEAR-C PERFORMANCE?

> It depends on what you mean by “near”. If your C program executes in 10ms,
> then a 50ms JavaScript program would be “near-C” speed. If your C program
> executes in 10 seconds, a 50-second JavaScript program, for most ordinary
> people would probably not be near-C speed.

I would actually reverse those two examples. A 10 second to 50 second jump
probably wouldn't affect most user-facing programs that much, since it's
either being done in the background or the user is staring at a progress bar.
The UI of the application is probably (hopefully) already designed around the
fact that there is a long-running process. But a 10ms to 50ms increase can be
a big deal, since it's probably something that is intended to appear "instant"
to the user.

------
EdSharkey
All the pain of HTML5 and dealing with legacy and inconsistent browser
implementations and yes, even poor performance in JavaScript, is worth it
because of the deployment model. Writing videogames in HTML5 on mobile is a
tall order.

I think it's more important as an industry that we reckon with how to
effectively manage large teams developing huge applications in JS/HTML/CSS. I
suppose that Facebook's development suffered from too many cooks in the
kitchen, cruft building up, race conditions, etc.

A video series I've seen tries to tackle scaling up JavaScript development by
organizing development from an Agile/TDD angle, and I think with some success:

[http://www.letscodejavascript.com/](http://www.letscodejavascript.com/)

------
tedsanders
Off-topic nitpick: I believe this sentence is untrue: "And Intel had to invent
a whole new kind of transistor since the ordinary kind doesn’t work at 22nm
scale." First, regular transistors do work at the 22nm scale (but tri-
gate/finfet transistors work better, if you can pattern them reliably). Also,
Intel didn't invent the tri-gate/finfet transistor. They just got it to
production first.

~~~
erhardm
To be fair, he said he's not a hardware guy. But he is also right: FinFET
transistors are better for power consumption, and that's really important in
mobile devices.

------
lnanek2
Some of these comments are very out of date. Android hasn't had a stop-the-
world garbage collector that caused 200-300ms delays since before Android 2.3.
It's true I had to avoid dereferencing anything in the game loop back before
then, but I don't know anyone who codes like that anymore on Android, and I
know hundreds of developers...

~~~
epochwolf
And most android devices are on what version of android?

~~~
Jare
2.3 and above, according to Unity's hardware stats at
[http://stats.unity3d.com/mobile/index.html](http://stats.unity3d.com/mobile/index.html)

------
Roboprog
I really liked this article: I think he was very methodical in challenging
current dogma that GC/VM operations can be made as fast or faster than
slightly lower level approaches.

For the curious, I have a simple benchmark program that focuses on string
manipulation (and deliberately creates many temp string objects, like a real
app would) in various languages. If anybody wants to download it and try it on
the latest version of their favorite language on the host du jour, go for it!

[https://github.com/roboprog/mp_bench](https://github.com/roboprog/mp_bench)
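Not the linked benchmark itself, but a toy JavaScript illustration of the kind of temporaries such string manipulation creates:

```javascript
// Naive string building creates a fresh intermediate string on every
// +=, which is exactly the allocation churn a GC has to clean up.
function buildNaive(n) {
  var s = '';
  for (var i = 0; i < n; i++) {
    s += i + ',';                       // new temporary string each pass
  }
  return s;
}

// Collecting pieces and joining once produces one array and one final
// string instead of n throwaway intermediates.
function buildJoined(n) {
  var parts = new Array(n);
  for (var i = 0; i < n; i++) parts[i] = i;
  return parts.join(',') + ',';
}
```

Modern engines mitigate the naive case with rope representations, but the allocation churn it models is the kind of behavior a benchmark like this exercises.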

------
aconz2
Over 180 network requests and over 40 scripts -- maybe the author needs to
learn a bit more about best practices on the web before standing on his
soapbox. And quit bolding things mid-sentence.

------
andrewgleave
Anyone interested in optimising their mobile web applications should watch
Apple's Safari tracks from this year's WWDC.

They have plenty of informative advice on how to improve general performance
and how to use tools WebKit provides to measure performance problems.

Some key takeaways:

Avoid using libraries (like jQuery); it will reduce memory consumption
significantly.

Be careful how often you're invalidating style info and forcing recomputation.

Avoid using scroll handlers to do layout (especially when you're likely to
inadvertently invalidate styles at the same time).
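The last point is usually handled with the requestAnimationFrame-throttling pattern, sketched here (names are illustrative):

```javascript
// Do nothing in the scroll handler except note that a scroll happened,
// and defer the real (layout-touching) work to the next animation
// frame. At most one frame callback is ever queued, so a burst of
// scroll events triggers one layout pass, not dozens.
function makeScrollThrottle(raf, onFrame) {
  var ticking = false;              // true while a frame callback is queued
  return function onScroll() {
    if (!ticking) {
      ticking = true;
      raf(function () {
        ticking = false;
        onFrame();                  // layout reads/writes happen here
      });
    }
  };
}
```

In a browser you would pass `window.requestAnimationFrame.bind(window)` as `raf`, attach the returned function to the scroll event, and keep all style reads and writes inside `onFrame`.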

------
6ren
Aside: in some circumstances, JIT-compiled code can be faster than ahead-of-
time compiled code, by recompiling based on run-time usage and so choosing
optimizations appropriate to the actual execution flow.

------
coldtea
I have to say, any minor quibbles aside, this is the best modern-day, non-
academic article I've read on HN since forever.

------
6ren
Why don't GC languages have optional memory management -- include _free()_, so
there's less work for the GC?

~~~
rsynnott
That would tend to make the GC more complicated, not less. Some memory-managed
VMs do allow you to allocate outside the GC'd heap, though; see
java.nio.ByteBuffer in Java (or sun.misc.Unsafe, if you're very daring).
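JavaScript has a loose analogue: a preallocated ArrayBuffer that you sub-allocate yourself, so the GC tracks one large object instead of many small ones. A minimal bump-allocator sketch (the API here is made up for illustration):

```javascript
// A bump allocator over one preallocated ArrayBuffer. "Allocation"
// inside it is just advancing a pointer, and "free" is resetting it --
// a loose JS analogue of allocating outside the managed heap.
function makeArena(byteLength) {
  var heap = new ArrayBuffer(byteLength);
  var top = 0;                          // bump pointer, in bytes
  return {
    // Hand out a Float64Array view of `count` doubles, or null if full.
    allocFloat64: function (count) {
      var bytes = count * 8;            // Float64 = 8 bytes, keeps top aligned
      if (top + bytes > byteLength) return null;
      var view = new Float64Array(heap, top, count);
      top += bytes;
      return view;
    },
    reset: function () { top = 0; }     // "free everything" at once
  };
}
```

The views are still GC-managed, but the backing storage is a single long-lived buffer, so per-allocation GC work disappears.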

------
williamcotton
While I feel that most of the article is little more than grasping at straws,
he does make some good points about memory management in JavaScript and on
mobile.

Garbage collectors should be optional. I would really love to be able to
handle my memory directly if possible, and while there are some things I can
do related to object pooling, I lose control over that with closures and a
more functional approach.

I guess to counter my own argument here I will say that for most resource
intensive things like simulations (call them games if you will), an object-
oriented approach does a better job of modeling than a functional one. Game
loops, sprites, bullets flying around... go OOP... and then use an object pool
and limit your use of closures. Is it a pain in the ass? You bet, but so is
having to manage the heap on your own.
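The object-pool idea mentioned above, as a toy sketch (not any particular library's API):

```javascript
// Recycle bullet objects instead of allocating in the game loop, so
// steady-state frames generate no garbage for the GC to collect
// mid-frame.
function makeBulletPool(size) {
  var free = [];
  for (var i = 0; i < size; i++) {
    free.push({ x: 0, y: 0, live: false });   // preallocate up front
  }
  return {
    acquire: function (x, y) {
      // Reuse a pooled object; grow only if the pool is exhausted.
      var b = free.pop() || { x: 0, y: 0, live: false };
      b.x = x;
      b.y = y;
      b.live = true;
      return b;
    },
    release: function (b) {
      b.live = false;
      free.push(b);                           // return to the pool
    },
    available: function () { return free.length; }
  };
}
```

acquire/release in the game loop replaces new-and-discard: as long as the pool is sized for the peak number of live bullets, no allocation happens once the game is running.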

Now, as for the rest of the article, the author did a LOT to discredit himself
by insisting that JS performance will not increase over the next few years...
dude, come on, winning isn't as important as learning and discussing, and
you're clearly just trying to WIN an argument.

This article is not very balanced. It is basically just saying that all
software written for a VM is stupid, and that the concept of garbage
collection is an abomination.

The truth is that VMs are incredibly successful and aren't going anywhere. I
mean, look at the rise of VPS! Now compiled C and even machine code aren't
running as fast as they could be! Run for the hills!

Performance is not the be-all, end-all of computing. We're a social species,
and the reason JS is so successful has more to do with the fact that it is
hosted in a VM based on open standards than anything else.

So why have we moved to virtualization? Because it makes a LOT of sense. It's
the same reason we invented compilers and higher-level languages in the first
place... we're human. We are the ones writing and reading source code; the
machines don't, they just blindly follow instructions. We choose to sacrifice
speed for a more humanistic interaction with our machines, and that applies
just as much to programmers as to the people who just use programs.

These are very early days for mobile computing. You say you want to raise the
level of discourse? Well then do it by writing articles that are more about
discussing solutions than winning. For example, HTML5rocks.com (and yes, I am
fully aware of the HTML5 marketing machine..) has been writing some good
articles on memory management in Javascript, and that approach actually raises
the level of discourse.

~~~
eropple
I'm not sure how closely you actually read his post because he doesn't say
anything you attribute to him about software written for VMs being stupid--in
fact he _explicitly states_ that it makes a lot of sense in desktop
environments because it improves programmer productivity. He's got a heading,
in all caps, saying "GC ON MOBILE IS NOT THE SAME ANIMAL AS GC ON THE
DESKTOP", and he quotes, while indicating agreement with, guys like Herb
Sutter and Miguel de Icaza explicitly stating that managed languages make
tradeoffs for developer productivity. Which they do, and those tradeoffs _don
't really work_ on mobile (and won't until we see significantly improved
memory density, among other things).

Disagree with him if you want to--I'd like to, because I like the JVM and the
CLR, but he has the weight of evidence on his side--but can we not make stuff
up?

------
fpgeek
I think the author is making a big mistake in selling short the possibility of
x86-class CPU performance on mobile.

Perhaps it comes from his iOS focus, but he's just wrong about the difficulty
of transitioning the mobile software ecosystem to x86. Android is there today
and has been for over a year (as the reviews of last year's x86 smartphones
make abundantly clear). And, as we all know, Android is the majority of the
devices, by a large margin.

It is true that iOS, Windows Phone and other platforms may have more trouble
with an x86 transition, but so what? If Android makes a performance leap by
moving to x86, other platforms will either find a way to keep up or they'll
fall behind in the market. Either way, the mobile center-of-gravity will move
towards x86-class CPU performance.

~~~
rsynnott
He really mentions the software problem more as an aside (and it's not like
Android on x86 is painless; where there's ARM-only native code, it is JITed to
x86 similarly to Apple's old Rosetta thing, which hurts performance and
impacts power usage); the greater issue is that Intel isn't really ready (as
of now, the Atom SoC stuff is still on an old process node, and has extremely
weak GPUs), and it doesn't look like they will be anytime soon.

Also, of course, Atom isn't really all _that_ much faster than modern ARM.

~~~
fpgeek
> and it doesn't look like they will be anytime soon.

Bay Trail tablets, due out for the holidays, _easily_ beating the ARM side's
upcoming champion (Snapdragon 800):

[http://www.extremetech.com/computing/160320-intel-bay-trail-benchmark-appears-online-crushes-fastest-snapdragon-arm-soc-by-30](http://www.extremetech.com/computing/160320-intel-bay-trail-benchmark-appears-online-crushes-fastest-snapdragon-arm-soc-by-30)

It looks to me like Intel is readier than you think. Moreover, from the
perspective of a web app developer, it doesn't matter whether Intel, Qualcomm,
Apple, Samsung or whoever actually wins a CPU performance war - so long as one
is fought. That I think we can count on.

~~~
rsynnott
Possibly. Really, given Intel's record in the mobile space, with impressive
claims and products which are really extremely disappointing (generally from a
power usage or GPU point of view), I'd like to see real benchmarks conducted
by a reputable third party before getting too excited.

------
bitcuration
Wish the author had explained why Flipboard, which is built with
HTML5/JavaScript, is not awfully slow like LinkedIn.

~~~
eonil
Any clue that it's actually been implemented with HTML5/JS?

------
6ren
Why not increase RAM 5x?

~~~
chris_mahan
RAM has to be powered.

~~~
6ren
I hadn't considered power, thinking it would be minimal. Research agrees --
but it's task-dependent:

      The RAM, audio and flash subsystems consistently showed the lowest power consumption.
      ...RAM power can exceed CPU power in certain workloads, but in practical situations,
      CPU power overshadows RAM by a factor of two or more. [p.11, graph on p.5]
from 2010: _An Analysis of Power Consumption in a Smartphone_
[https://docs.google.com/viewer?url=https%3A%2F%2Fwww.usenix....](https://docs.google.com/viewer?url=https%3A%2F%2Fwww.usenix.org%2Fevent%2Fusenix10%2Ftech%2Ffull_papers%2FCarroll.pdf)

