Why mobile web apps are slow (sealedabstract.com)
458 points by jacobparker on July 9, 2013 | 147 comments



The trend towards retrofitting interactive applications to run inside the equivalent of a VM is just stupid, not only because of the obvious wastefulness of executing JS (given that Moore's Law has stopped).

When dealing with a browser, you're limited to a broken interaction model (document? URL? back button?), a broken security model, one programming language with a weak API (that isn't equally supported between browsers either), and you have to abuse HTML into doing things it was never meant to do. It's a development experience that is subpar even by 90's standards.

Add all that to the pedestrian performance, and I'm amazed this is still an option.


> The trend towards retrofitting interactive applications to run inside the equivalent of a VM is just stupid

Well, Android does it.

And in all honesty, when apps start getting complex they're just as bad. There's a reason URLs exist: to give a unique identifier to an item. Apps need to do that too, and end up having to fudge around launching and reaching a specific item.


Apps have URLs also - even iOS, which is the weakest link in terms of inter-app interaction, has them.

The difference here is that one platform requires shoehorning a URL into a non-document-centric use case, and the other one leaves it optional, for the developer to implement as he/she sees fit.

If I'm using the Yelp app, I expect to be able to communicate a restaurant listing to someone else via a URL. But why is it that my alarm clock needs a URL? Or my phone dialer?

The web was envisioned as interlinked documents, and for many parts of the web this is still very much the case (see: Amazon). For others this metaphor breaks down badly, and is the source of a great deal of hacks and kludges.


My Linux kernel has a URL.

file:/boot/vmlinuz-linux

So does my init daemon!

file:/sbin/init

Which is a symlink, so let's specify a familiar program instead and throw on the proper file extension.

file:/usr/bin/firefox.elf

Still a URL. To a program. If your executor supported file URIs, you could run it.

I could host it on an HTTP server, and if I mounted that server over WebDAV using davfs2, I could access anything on it the same way.

You could write a Python interpreter that can take a URL to a Python script to run: pythor http://github.com/some_repo/foo/script.py

A lot of programs already use URIs - I know KDE's Dolphin specifies all resource paths as URLs regardless of whether they are local or remote, and it supports a buttload of URI schemes. The Qt 5 QML engine specifies QML files to load by URI, so you can load remote QML applications in a browser that links in the QML runtime.

I mean, we even have this nice syntax (that isn't pervasive or transparent) to supply arguments in a URL with something like google.com/search?q=bacon, which in bash-world would be google --query bacon or google -q=bacon.

So what is stopping URLs from being uniform resource locators?


Not everything being a resource, for one, and not everything being naturally represented in a URI.

It's possible to encode just about any information in a URI, but that doesn't mean it isn't a kludge and a force-fit.

Take a very, very simple (and common) use case:

https://maps.google.com/maps?q=Newark+International+Airport,...

Holy God would you look at that URL. It doesn't refer to a resource. In fact it refers to application state. This is a gross abuse of the whole concept of a URL, but the folks at Google aren't idiots - they know this. But the fact of the matter is that Google Maps is not document-based, and people have legitimate need to transport application state, independent of the semantics of the information they're looking for. Even something as simple as showing my friend the same map I'm looking at so we can talk about it requires bending the role of URIs wildly out of shape.

Even better example:

https://maps.google.com/maps?q=restaurants+around+Times+Squa...

One can argue that the previous link was clearly a reference to the airport, and there can be a URI for it (which wouldn't transport application state, substantially hobbling its use, but whatever). This link refers to "restaurants around Times Square", which isn't a logical entity, and whose relevance to the end user depends entirely on the application state being transported (e.g., zoom level, center point, filtered results, etc).

Every time these insane-o URLs get brought up, the URI acolytes seem to end up suggesting that maybe we shouldn't have rich, interactive maps or such things, that we should force software into roles that play nicely with document-centric, URI-friendly schemes. I for one believe that software serves users first, and that methods to locate and specify information need to adapt to users' needs, not the other way around.


> Even something as simple as showing my friend the same map I'm looking at so we can talk about it requires bending the role of URIs wildly out of shape.

How so? That 'same map' _is_ the resource being uniformly indicated.

> This link refers to "restaurants around Times Square", which isn't a logical entity

Why not? It's a logical entity, which is a collection viewed a certain way.


>Well, Android does it.

And that's why it has subpar apps compared to iOS when it comes to anything CPU intensive, high latency that makes it unsuitable for realtime audio/video apps, and why you can see core Android engineers debating about why it's slow and such. And for games, devs have to fight the GC and implement stuff manually, without allocations, to work around it.

So sure, it can run twitter clients and all kinds of everyday apps and web front-end apps quite OK. But for the heavy CPU / realtime stuff that gets us forward to novel use cases, not so much.


> Well, Android does it.

With the difference that it uses a language designed to run on a VM in the first place - which is not the case with JavaScript.


I guess you missed that whole part of the article that discusses similar issues with GC and memory management with Android apps.


This should be getting more upvotes, not downvotes.


Saying things like "Moore's law stopped" gets a downvote from me. Moore's law will stop, but it's not there yet.

NOTE: Moore's law is concerned with the number of transistors on a chip. That number has kept climbing; it's just split across several cores.


> NOTE: Moore's law is concerned with the number of transistors on a chip. That number has kept climbing; it's just split across several cores.

Exactly, so now Moore's law is working against JavaScript, since JS doesn't do threading. So let's assume JS somehow magically gets to be ~2x slower than native instead of ~4x slower. With quad cores being the standard now, that makes JS ~8x slower than native at image processing - you know, that whole "instagram" thing that's insanely popular on smartphones. And during the ~8x longer it spends processing that image, your UI is completely frozen and unresponsive.

Yes, I know about WebWorkers, but WebWorkers are hideously inefficient & slow by design. The limitations on them are also absurd; they are not a substitute for threading. The world is better off forgetting that they exist - which it basically has.
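
To make the overhead concrete, here's a rough sketch of what offloading image work to a worker looks like (the file names and the grayscale filter are invented for illustration). Note that postMessage structured-clones the pixel data on the way out and again on the way back unless you use transferables, which is a big part of the cost being complained about here:

    // main.js (hypothetical) -- push a grayscale pass off the UI thread
    var worker = new Worker('filter-worker.js');
    var canvas = document.querySelector('canvas');
    var ctx = canvas.getContext('2d');
    var image = ctx.getImageData(0, 0, canvas.width, canvas.height);

    worker.onmessage = function (e) {
      // We get a copy back, not shared memory; write it into the canvas.
      image.data.set(e.data);
      ctx.putImageData(image, 0, 0);
    };

    // The whole pixel array is copied over to the worker here.
    worker.postMessage(image.data);

    // filter-worker.js (hypothetical)
    self.onmessage = function (e) {
      var data = e.data; // Uint8ClampedArray of RGBA bytes
      for (var i = 0; i < data.length; i += 4) {
        var avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
        data[i] = data[i + 1] = data[i + 2] = avg; // naive grayscale
      }
      self.postMessage(data); // copied again on the way back
    };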


>NOTE: Moore's law is concerned with the number of transistors on a chip. That number has kept climbing; it's just split across several cores.

We don't care about the pedantic interpretation of Moore's law.

We care about the misinterpretation, which was a corollary of having single-core chips: that every 18 months computers got roughly twice as fast.

This -- and this was the important attribute of Moore's law -- has stopped.


Who's we? You and author/OP? Because I've got good news for you: the corollary of Moore's law has merely slowed, not stopped. It will stop, just not now.

I think Moore's law will work better on mobiles, because they are retreading known ground and they are way further from reaching real physical barriers than regular computers.


>Who's we? You and author/OP?

Most people quoting Moore's law.

That the misconception is more popular than his original statement speaks to that.


Yeah, marketing is a pita. It still doesn't negate the fact that even the corollary has slowed down, not stopped.


What is broken about the security model?


It's easier to ask what is not broken. Check this presentation:

http://www.slideshare.net/jgrahamc/javascript-security-20649...


Interesting, any idea how that applies to something like Dart, when Dart is running in Dartium (i.e., not compiled to JS)?


Dart doesn't exist in a mainstream browser yet. My guess is that some of those problems persist in Dart, or maybe new, unforeseen ones are introduced.


Right, which is why I asked about Dartium (build of Chromium with DartVM built in). But yeah I guess only people really involved with Dart could answer this question properly.


It's a well-researched article, but I doubt it will change anyone's mind. What I got as a take-away from this is that the sort of mobile apps I work on can be done perfectly fine in JavaScript, because I did their desktop equivalents in IE7 on PCs with 512 MB of RAM.

I'm reminded of the native vs web debates on the desktop. Everything the desktop guys said was true, with whole categories of apps essentially off-limits to web developers, and the remainder clearly and obviously inferior to their native counterparts. That didn't change, native apps still rule supreme in capability and usability. And yet the average person is using web apps the majority of the time (although that opinion admittedly depends on what you understand as average). The reason why that is translates to mobile. Everything this article says is true, and still mobile apps will be mostly web-based. There is no conflict between those facts.


>That didn't change, native apps still rule supreme in capability and usability. And yet the average person is using web apps the majority of the time

Those web apps most people use are trivial compared to the stuff we do on desktop and mobile. And the main reason people use them "the majority of the time" is that they are huge time-sinks.

E.g.: social apps like FB and Twitter, which translate to posting and reading text and images.

Web mail, which translates to working with lists of text articles.

Nothing much CPU intensive, in the way desktop/mobile apps are.

As for the CPU intensive stuff, it's really still a novelty on the web. Nobody (== few people, less than 1% by the numbers I've seen) uses Google Docs (and that's not even a full-featured office suite anyway -- not to mention the spreadsheet, for example, takes a lot of time even on my 2010 iMac/Chrome to apply stuff like conditional formatting). Even fewer people use something like online image editors and such.

The only thing people really use on the web, and is CPU intensive are web games. And even those are more like 1995 quality desktop games.


Not sure if you know this, but iOS doesn't use a swap/page file. iOS will kill an app that uses a lot of memory because it values speed over stability and can very easily run out of RAM. In contrast, Windows won't kill an app when the system runs low on memory and will page out to disk to provide more memory (however much slower that memory is than physical RAM) for the app to consume.


The use of SunSpider to argue that CPU-bound JavaScript performance has not improved since 2010 is simply wrong. What SunSpider does is not at all representative of what CPU-bound JavaScript in a modern web app does. V8 and SpiderMonkey have significantly improved performance on real app code since 2010, despite the unchanged SunSpider scores. I say this as a developer on a large and heavily CPU-bound JavaScript app.

The dismissal of asm.js is wrong too, for many reasons. Browsers don't necessarily need to support it explicitly. GC pauses are eliminated regardless of browser support, and most JavaScript optimizations will help asm.js too. The elimination of GC is not the only benefit, or even the main one really (the main benefit is the elimination of polymorphism, improving JIT performance and allowing AOT compilation). The assertion that it's not "really" JavaScript is irrelevant to the question of whether it can speed up web apps or not.

All that said, I really appreciate the article and the discussion it's sparked.


This article is weird. It starts off with a strong premise -- we need more fact-based arguments when discussing technology choices. Yes, agree!

There are a fair number of good points here, particularly about ARM's potential, but he's very dismissive of counterpoints and seems to believe that having lots of quotes is a substitute for a good argument.

The section about garbage collection is especially unconvincing and, again, dismissive of counterpoints. He never really addresses Android or Windows Phone (which, by the way, he repeatedly and mistakenly calls Windows Mobile -- which is right in line with the extreme iOS focus the article has). He mentions them, but then seems to think it's enough to offer a few quotes showing that, under some extreme circumstances -- mostly games -- the GC can be a problem on those platforms. But which is it, are all Java and C# Android/Windows Phone apps unacceptably slow or are some types of apps not really doable in those GC languages?

This is important because a large part of his argument centers on GCs not working on mobile.

I'd conclude that the article is overly defensive, to the point that the author repeatedly poisons the well against anyone who might potentially disagree.

> we need to have conversations that at least appear to have a plausible basis in facts of some kind–benchmarks, journals, quotes from compiler authors, whatever.

This is where I think the article falls apart. Appearing to be based on facts is what this article is about, but appearance just gets you talked about. It doesn't really move the conversation forward at all.


> But which is it, are all Java and C# Android/Windows Phone apps unacceptably slow or are some types of apps not really doable in those GC languages?

He didn't really say either; he pointed out (correctly) that for certain applications in a memory managed language, you'll want to essentially avoid using the GC, and you'll certainly have to be very cautious of it. The extreme case was old J2ME games, which usually worked by allocating all the memory they'd use as a large array at startup, and doing manual memory management thereafter.

In general, you can do just about anything in a memory managed language, but in some cases, especially games, it may actually end up a lot more work than in a non-managed language.
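
For what it's worth, the same pre-allocation trick applies to JS games too. A minimal object-pool sketch (names invented for illustration):

    // Allocate everything up front; reuse objects each frame so the GC has
    // (almost) nothing to collect while the game is running.
    function ParticlePool(size) {
      this.items = new Array(size);
      this.free = size;
      for (var i = 0; i < size; i++) {
        this.items[i] = { x: 0, y: 0, vx: 0, vy: 0, alive: false };
      }
    }

    ParticlePool.prototype.acquire = function () {
      if (this.free === 0) return null; // pool exhausted: skip the effect, don't allocate
      var p = this.items[--this.free];
      p.alive = true;
      return p;
    };

    ParticlePool.prototype.release = function (p) {
      p.alive = false;
      this.items[this.free++] = p;
    };

    var pool = new ParticlePool(1024); // one allocation burst at startup

The cost is exactly the point made above: you've given up the convenience the GC was supposed to buy you and are doing manual memory management with extra steps.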


There are two performance choices in GC languages: 1) don't allocate after initialization, or 2) only allocate small amounts per time frame. The issue is that 2) is only really feasible in server settings and doesn't really work for clients. I buy his argument even though I don't wish to program under his restrictions.


Well written article, but I have a few comments:

1. This is about JS speed. How important is JS speed in mobile apps, compared to rendering for example? That's the crucial question here, and I don't see any numbers on it. Rendering, after all, should be the same speed as a native app.

2. "Nothing terribly important has happened to CPU-bound JavaScript lately" - picking some benchmarks where there has been little change is possible. There are also benchmarks where there have been large changes. For example, many browsers now have incremental GC and background thread JITing. In some workloads these make a huge difference - but perhaps not on the old SunSpider benchmark.

edit: As an example of a type of code that can run far faster today than before, see Box2D, which is a realistic codebase since many mobile games use it: http://j15r.com/blog/2013/07/05/Box2d_Addendum


Canvas is only marginally slower than the native 2D drawing APIs (it is a thin wrapper for Skia on Android and CoreGraphics on iOS), but it still uses only a single CPU thread for drawing on both platforms. To use the GPU, one has to turn to the -webkit-transform CSS property which, under certain circumstances, promotes the element to a GPU buffer. Only once WebGL becomes accessible will the mobile web have a drawing API that is more efficient than the standard windowing APIs.
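
For readers who haven't seen it, the promotion trick usually looks something like this (a sketch only; the element id is made up, and whether the browser actually creates a GPU-backed layer is an implementation detail rather than anything a spec guarantees):

    var card = document.getElementById('card'); // hypothetical element

    // A no-op 3D transform is the usual hint that nudges WebKit into giving
    // the element its own compositing layer.
    card.style.webkitTransform = 'translateZ(0)';
    card.style.webkitTransition = '-webkit-transform 300ms ease-out';

    // Later (e.g. on tap), change only the transform: the compositor can slide
    // the existing layer on the GPU instead of re-running layout and repaint
    // every frame, which is what animating left/top from JS would do.
    card.addEventListener('click', function () {
      card.style.webkitTransform = 'translate3d(200px, 0, 0)';
    });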


One minor caveat I'd add- JS is slow, but you don't always have to use it. I've had great results using a little JS coupled with hardware accelerated CSS transitions, animations, etc.

Web apps are slower than native, that's undeniable, but they're only really slow when you port bad desktop practices ($("#home").animate() and the like) over to mobile.


How often does a native developer "trick" their framework into triggering an implementation detail to get performance gains? That's really all I can think about when I hear web developers talk about "hardware accelerated CSS".


How is using a CSS feature "tricking" the framework? If it's a trick then native developers are using the same trick when they use the device's video hardware, for the same reasons, instead of relying on the software renderer.


It's a trick because you set a 3D property on a style and _sometimes_ the browser will use the GPU instead. We've actually seen browsers (Safari on iOS) turn off this implementation detail in some cases where it used to be enabled.

Your video example is interesting, but for a Cocoa developer they'd just create a video player object and give it a file to play. Cocoa doesn't make any guarantee it will be hardware accelerated (neither does the HTML5 spec for that matter), but you can be assured that if Cocoa says it can play a video it will be able to play it properly; you, as the person using the framework, never have to worry about any implementation details.


What 3D property are you speaking of? If you mean that using a 2-dimensional transition corresponds to a 3D property that causes the hardware to kick in, then I suppose you make sense. But the thing is, I fail to see how that's a "trick" when it's an implemented feature of CSS. Also, if a browser vendor turns off the feature then that's a problem with the vendor and not with CSS. Again, you are labeling a feature as a "trick" to paint a negative picture of web apps. Believe me, there's plenty of room for improvement in the area of web apps, but your example isn't holding up.

I wasn't speaking of playing videos to the screen. I was referring to the hardware in the device that renders pixels to the display, specifically the hardware accelerated part. That's the PC gamer side of me referring to that hardware as video cards or video hardware.

But to combine your Safari and Cocoa examples. If Safari says that it doesn't have hardware accelerated CSS features enabled then you know that the feature doesn't work as nicely as you would want, but the transition will still happen. If Safari says that it is enabled... nah, never mind. Regardless of hardware support or not; you, as the person using CSS, never have to worry about any implementation details. In fact, since CSS isn't a framework, it's far more efficient to use CSS over a Javascript framework anyway, regardless of hardware support.


Some things aren't tricks, they're just different.

This has lots of information on jQuery and performance choices:

http://learn.jquery.com/performance/

Some ways of going about things are more verbose, but the difference in performance isn't negligible. There will always be a balance between verbosity and performance.

Granted this page specifically pertains to jQuery, but most libraries expose ways for developers to start going down the long road of performance tuning.


As the original author of those 7 performance tips, I suggest you ignore them. :) Spending 10 minutes with the DevTools timeline (or network waterfall) will give you way better insight into your perf issues than following these micro-optimizations.


I'm just learning JS - I've had plenty of experience with more technical scripting languages, but I'm still a JS newbie.

Is that a bad practice just because that kind of code is optimized for desktop use or is it just bad practice in general?

Also, a broader topic I'm wondering about: JS and HTML seem pretty clumsy. I find it hard to believe that they're really how the majority of web pages run. Is there really no alternative? Or is it just the case that these tools are "good enough" and what really creates problems is bad code(rs)?


The tools are "good enough" but just barely, and very good coders have gone to herculean lengths to make them usable. The thing is that these spectacular achievements are available to all, making it into a decent platform.

But naked HTML/CSS/Javascript is a document/hypertext engine perverted into a general-purpose platform.

The thing is that the JS libs and practices used for desktop generally aren't mobile-friendly so even though everything works across all the platforms, it's not necessarily usable.


It's not easy to say whether it's a bad practice or not. I think more than any other platform, how your application is going to be used, and who it's going to be used by are vital points of information that you need to answer before you approach the issue:

1) Are your visual flourishes vital to the quality of your product?

2) How concerned are you with baseline performance?

3) How big is your application? Do you need disparate components to be reusable, or do you need a philosophy or underlying framework to guide development?

4) Is your application going to be used on mobile?

The answer to 1 dictates how you should approach things like animating elements. Much of the time, the animation is really just a visual flourish, and can be handled with graceful degradation through CSS. Plus you can learn a few tricks using just CSS that don't trigger unnecessary redraws that might happen using JS.

The answer to 2 dictates whether or not you should adopt an existing framework or library vs. rolling your own. Libraries have gotten better about making themselves available piecemeal, but if you know how to write quick code, you'll often find almost everything is not quite what you absolutely need. See Canvas game dev.

Question 3 will determine what sort of framework you'll want. Something that is extendable and object-oriented, vs. something that has more self-contained reusable components. The drawback of the former is that classing in JS brings in a considerable performance hit, whereas reusable components make for much more unmaintainable code.

Question 4 will play a role in all the aforementioned questions, namely where matters of performance and DOM redraws are concerned, as that's where mobile tends to fall flat when it comes to web apps.


> Is that a bad practice just because that kind of code is optimized for desktop use or is it just bad practice in general?

I think the point here is that if you use Javascript to animate, that runs in, well, Javascript, and has to be called constantly. If you set a CSS property to achieve the same animation, the computations for it are done in native code. It still gets drawn the same way, but it's basically throwing away CPU for nothing (unless you do something that can't easily be replicated by CSS of course).
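
To make "called constantly" concrete, here's a hedged sketch of the two approaches (the element id is made up, the JS version is roughly what timer-driven helpers like jQuery's animate() do under the hood, and it assumes the element is positioned):

    var box = document.getElementById('box'); // hypothetical element

    // JS-driven: a callback fires every frame, recomputes the position in
    // script, and touches a layout property, so the engine re-runs layout.
    function slideWithJs(durationMs) {
      var start = null;
      function step(ts) {
        if (start === null) start = ts;
        var t = Math.min((ts - start) / durationMs, 1);
        box.style.left = (t * 200) + 'px'; // layout property, every frame
        if (t < 1) requestAnimationFrame(step);
      }
      requestAnimationFrame(step);
    }

    // CSS-driven: one property change; the browser's native code interpolates
    // the frames (often on the compositor), and no further JS runs at all.
    function slideWithCss() {
      box.style.transition = 'transform 300ms ease-out';
      box.style.transform = 'translateX(200px)';
    }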


> In English: the philosophy of JavaScript (to the extent that it has any philosophy) is that you should not be able to observe what is going on in system memory, full stop.

Mobile web apps are slow because they're secure, precisely for the reason that they cannot examine or manipulate underlying system memory.

Manipulating memory is fundamentally a bad desktop practice incompatible with the way that JavaScript—i.e., untrusted software—is distributed.

Most browser vulnerabilities come from manipulating the browser's underlying C++ objects, causing leaks & buffer overflows. E.g., the WebRTC bug in Chrome which provides a full sandbox jailbreak, or the TIFF/PDF exploit in MobileSafari from iOS 3 (jailbreak.me).


> Mobile web apps are slow because they're secure, precisely for the reason that they cannot examine or manipulate underlying system memory.

By that logic, they'd better start writing OpenSSH in JavaScript, unless they want to compromise every server in the world. Oh no!

Secure doesn't imply slow. Not even sandboxed implies slow. Mobile web apps are slow because their runtime has more layers of functionality and is more complex -- therefore, it requires more processing than a native application.

> Manipulating memory is fundamentally a bad desktop practice incompatible with the way that JavaScript—i.e., untrusted software—is distributed.

This simply isn't true. Native Android applications also can't manipulate memory; how are web applications more secure than those? And if they aren't, why are they slower?


>Mobile web apps are slow because they're secure, precisely for the reason that they cannot examine or manipulate underlying system memory.

I believe the idea he was getting at was "you need to be able to tell how much memory is available to your (web) application, how much of that memory you've already used, and you need to be able to accurately predict how much memory will be allocated in the course of performing an action, in order to achieve acceptable performance on mobile," not "you need to be able to directly bang the address space." You can certainly have those features in a safe language.


Absolutely... you could also have the higher-level system never report more than x% as available, even if more really is, to prevent apps clashing... say foreground apps max out at 35-40% available, and background apps at 3%, etc.


A big part of the article focuses on garbage collection, low ram environments, and mobile devices. I'd just like to say that my S4 has 2GB of ram (yes, 2GB). More than likely, my next phone will have 3GB or 4GB of ram. As the author mentions, once you get to 3x-6x more physical ram than what you absolutely need, the garbage collector no longer slows down your device - in fact, it speeds it up. So I'm not sure that this is really that big a problem. RAM is one of the areas where mobile devices really are improving rapidly (the other being 3D).

That said, the article was excellent. Props to the author.


Actually, in general as memory use increases GC takes more time, not less. It's rare that the number of objects stays constant and they just get bigger, and more objects means longer and more difficult tracing.

HN has this problem. If we use more than a quarter or third of available RAM GC destroys site performance. Switching to faster RAM helps, but adding more doesn't.


Interesting point, but as per the chart[1] and discussion in the article, the relative memory footprint does have a big effect on performance. This is mostly felt when you get more complex object graphs. If the GC only needs to run once when you finish your calculation, it only needs to sweep once. If ram is limited and the GC is forced to run 100 times, then it must sweep 100 times. This is a lot of extra work and is the main reason why GC can be slow.

In the case of HN, it sounds like you have more than enough RAM already, so you are not hitting the 'worst case' of GC that gets hit often in javascript on last-generation mobile phones. The symptoms to the user are very similar to 'disk cache thrashing' and can be as bad as multiple-second pauses.

You're very correct in that increasing heap size too far is also a big problem, as running GC on 64GB of ram can take seconds - freezing up everything else while it runs. There are a number of guides on tuning the JVM on servers to avoid this. The new JVM GC algorithms are also very good at this situation by default, imo.

Finally, on the issue of RAM speed: phones generally have a fixed amount of space dedicated to RAM - the 2GB chip on my S4 is the same as a 512MB chip on an older iPhone. This generally means that it is just as fast to access the 2GB of ram on an S4 compared to the 512MB on older devices. So at least on mobile, RAM speed has scaled along with RAM size. I can't seem to find the benchmarks I saw on this earlier; can anyone confirm or deny with numbers?

[1] http://sealedabstract.com/wp-content/uploads/2013/05/Screen-...


As a general rule, as you add memory to a VM, your overall throughput grows, but unless you're extremely careful, the length of individual GC pauses may go up (which is a concern in a GUI app; you don't want to block the main thread). Of course, all of this is highly dependent on GC used.

> the 2GB chip on my S4 is the same as a 512MB chip on an older iPhone. This generally means that it is just as fast to access the 2GB of ram on an S4 compared to the 512MB on older devices

Not necessarily. Your S4 has a memory bandwidth of either 8.5GB/sec or 12.8GB/sec (depending on if it's the Exynos or Snapdragon model). An iPhone 4S has memory bandwidth of 6.4GB/sec. So, you can read your whole memory in 230ms or 158ms (in theory; other limits will show up). The 4S can read its 512MB memory in 80ms.


As memory increases, GC takes far less time, since generally far less of the memory is still being referenced. If your GC doesn't behave like this, something else is wrong.


As memory use goes up, GC takes more time. But with more RAM available, GC (under certain strategies) can take less time. It's triggered less often, but each time it's triggered, it only copies the (no-larger-than-before) 'live' set.

What kind of GC does HN/ARC use?


> As the author mentions, once you get to 3x-6x more physical ram than what you absolutely need, the garbage collector no longer slows down your device - in fact, it speeds it up. So I'm not sure that this is really that big a problem. RAM is one of the areas where mobile devices really are improving rapidly

That's not generally true. GC performance decreases with the number of objects that have to be collected. More objects (and real-life experience shows that as soon as there is more of a finite resource -- memory in this case -- users' demands will quickly lead to it being depleted again) actually mean worse performance, not better.

GC also depends heavily on the memory's speed, and on the speed of the CPU-memory interface. If those remain constant (and, again, real-life experience shows that, while they don't, they tend to get faster more slowly than the CPUs do), GC performance hits will actually become heavier with increased memory.

This is particularly important if you consider that you can't actually force GC in most web environments. It's up to the system to decide when it starts cleaning. If that's done on a worst-case basis (i.e., postpone collection until memory use becomes, or looks to be on track to become, dangerous), increasing the amount of memory won't help for memory-intensive applications.


Per-pass performance gets worse. Overall performance gets better, because fewer passes need to be made.


> Per-pass performance gets worse. Overall performance gets better, because fewer passes need to be made.

Yes, but what users will see is that, instead of their application occasionally "feeling funny" every five minutes or so, the application will visibly lock up every ten minutes.

It's a hard lesson that Java learned a long time ago: it's OK to be just reasonably fast all the time, but people will immediately begin yelling if you're fast most of the time and then suddenly get really sluggish, even if just for a few seconds.


Which is to a large extent why Java GUIs were essentially a failure, but it still has a large niche in high-performance server apps; you can usually get away with >100ms pauses there.


Actually I think it was more related to:

- sloppy programming, the same kind we see nowadays in JavaScript

- doing everything in the main thread

- not caring to learn how to use Swing properly

- not even taking the time to switch to the native look and feel

- not caring about users and providing a rich UI experience


Well, it was an enterprise language, and quite often most of the flaws you'd get in Java GUIs were there in the previous, native versions, too. Just like they're still there in the intranet web apps that replaced a few of the Java clients.

(I know a few people who still prefer the old-fashioned mainframe terminal apps, if available. If all you have is a few colors and line-drawing characters, even programmers have a hard time messing up simple forms. Not saying that we/they don't often succeed despite this difficulty...)


Nah, the lesson I was thinking about was taught earlier, in the days of AWT. Back then, Java was fairly slow, and the garbage collector was not as sophisticated as today. When the GC started collecting, you knew.


Ah, but in those days all implementations were still plain old interpreters.


With modern Java GUI apps, you still do, as any Netbeans user could attest.


Yes. However, an iOS device feels consistently faster while using much less memory (and fewer processor cores!)

The extra hardware eats the battery like crazy. This is all anecdotal, but it's my impression after switching from an iPhone 4 to a Nexus.


On my work laptop, I have 3GB of memory, and having Chrome with Google Music, Gmail and work stuff open takes up over 50% of that. Gmail and Google Music alone take up >200MB each.

So, now I'm using Firefox, and Firefox running Gmail + my work is about 700MB, or >50% reduction in memory usage over Chrome.

Additionally, I'm using Spotify desktop, which consumes about 80MB of RAM while streaming, over the Google Music webapp, which takes 200MB+.

RAM may be improving quickly for new generations of mobile devices, but many businesses still purchase the minimum specs for their employee's machines, and this sad little 3GB laptop is not even multiple years old yet.


My biggest "issue" with this generally rather decent article is that CPU-bound applications are rare. For the plethora of social, productivity and novelty apps it doesn't matter at all. Optimizing the graphical stack certainly matters, but that's mostly an infrastructure issue, i.e. already dependent on the native parts of the OS/framework.

And consider who actually might be asking themselves this very question, native or webapp: the very same social/productivity/novelty websites and services who are looking at how to best present their product on a mobile platform. And I'd wager a guess that sheer performance often isn't the deciding factor here; it's mostly about the native look. (And even that seems to be getting less important, although I wonder whether iOS 7 could rekindle it.)

And that whole talk about GCs sounds awfully familiar. Didn't we have the very same conversation in the mid-90s when Java came out?

Also there's the good old principle of alternating hard and soft layers. Hybrid webapps (with some native components to speed up and slim down things) could easily bridge the gap until the browser standards/APIs/hardware catch up. It's really not an all-or-nothing question, especially considering how many Android/iOS apps are basically WebViews with a tiny veneer of native buttons, preferences and background storage.

Not that I like the web platform all that well, but let's be serious about the "depth" of web apps...


>My biggest "issue" with this generally rather decent article is that CPU-bound applications are rare. For the plethora of social, productivity and novelty apps it doesn't matter at all.

He addresses that in the very first paragraphs: if your mobile app is basically a "web page with some buttons" then you are OK.

But the apps that take us forward are more CPU bound than BS clients and novelty apps. The spreadsheet. The word processor. The image editor. The video editor. Realtime stuff. Voice processing. Etc. We have those on the desktop and we want them on mobile.

>And that whole talk about GCs sounds awfully familiar. Didn't we have the very same conversation in the mid-90s when Java came out?

Yes. And notice how Java never got anywhere in the desktop space? How Sun failed to create a web browser in Java because the thing was dog slow? And how, even today, users curse Eclipse for its long GC pauses?

And that the Java (well, Dalvik) GC is behind many of the things that plague Android devs in the mobile space (a lot of examples of which are in the article)?


> My biggest "issue" with this generally rather decent article is that CPU-bound applications are rare. For the plethora of social, productivity and novelty apps it doesn't matter at all.

Or we can argue the other way around. Maybe we only get social, productivity and novelty apps over and over again because the limitations make only those feasible?


Don't think so. We also have lots of games, which show that if you actually need the performance, people are willing to "drop down" to native. But the lion's share of the rest of the AppStore market could probably be replaced with webapps without CPUs being rent apart. But it won't look as native and you can't charge money for it as easily.

We also have lots of social apps because, after all, most mobile devices are communication devices, so this comes naturally. Never mind that even when you consider full-fledged desktop apps and the backends of lots of web sites, there's not that much that is really CPU-bound (although it's getting back there with ubiquitous data mining, which is one of the main reasons we're seeing a resurgence of native compilers).

Two device sectors where this might be different are big tablets and low-end devices, pretty much both extremes of the mobile sector. Tablets used as full PC replacements would probably be the main users of the CPU-bound spreadsheet the author mentioned, as well as other almost-desktop apps that could require more resources, esp. considering multi-tasking. And they would be on for long stretches of time, so even if the quad-core processor could cope, it would be nice if it could cope for twice as long.

And then we have low-end devices. When the Palm came out, all the people who learned 68k assembly on their Amigas/Ataris were in demand again, as there was little CPU and memory to spare. On the other hand currently the environment mostly targeted at those sectors is the one with the least support/demand for native: FirefoxOS. Which seems a bit weird at first, as they certainly could make use of every MIPS they can scrounge. But that's economics for you: If it's mostly cheap phones and (most likely) users with little income to spare on frivolous apps, why would you target it? On the other hand, you're probably writing a webapp for other devices, and FirefoxOS users can piggyback on that. Sometimes you rather have a slow app than none at all.


> And that whole talk about GCs sounds awfully familiar. Didn't we have the very same conversation in the mid-90s when Java came out?

Yes, absolutely. The objections made by OP are not very different from those made in the 90s.

But...

Some of the core objections made then are still true today, they're just hidden from users because of advancements in both hardware and GC technology itself. In the case of mobile, however, you're much more hardware constrained, at least for now. This will likely change sooner rather than later, but for now that means that for hardware-constrained devices well designed apps written in non-GC languages will almost always outperform similarly designed apps written in GC languages.

In other words: it's going to be easier to get good performance out of Obj-C on mobile than Java, and certainly easier than JavaScript.

For the record: I like Java -- yes, even today -- and the JVM, currently use it for my day job. For webapps it's moderately neato. For mobile, though... Not as much.


I think the main issue is that those environments so far insist on a VM model.

Most likely, if Android used natively compiled Java, performance could be improved.

Microsoft went this route with Windows Phone 8, as they replaced the .NET JIT with a proper native compiler when targeting Windows Phone.


I think this is because users have changed. Unlike in the old days, users now basically expect realtime graphics. And in realtime fields...


> And that whole talk about GCs sounds awfully familiar. Didn't we have the very same conversation in the mid-90s when Java came out?

Yes. Fortunately, all of those problems just went away, and now we use Java for writing GUI desktop apps all the time! Why, imagine how terrible it would be if Java GUI apps were slow and awful and tended to mysteriously pause when there was a nasty GC cycle!

Oh, wait.


2003 Java is why Java is not popular on desktops, not 2013 Java.


As someone who uses Netbeans all day, well, I'd love to say you're right, but no, I can still feel the GC pauses, both the tiny disconcerting ones and the >1 second enraging ones.


In my experience, slow DOM manipulation and various rendering glitches are the biggest barriers for most apps. JavaScript performance is a big problem, but one affecting mostly apps that are processing heavy.


Great post! I'm a huge C++/Anti-GC/Games developer here.

A few notes. On performance, most of the time I'm GPU bound when writing games. In this JavaScript-based game for iOS: http://imgur.com/XQdGKHw,swTBC9T#0

I'm getting 11ms used up on the CPU and 29ms used up on the GPU: http://imgur.com/XQdGKHw,swTBC9T#1

That's 11ms using iOS JavaScript which isn't JIT optimised.

Granted CPU will go crazy if I try running advanced Physics using JavaScript, but I tend to push Math intensive functions to C++ using a hybrid approach.

On GC, I'm not a fan; I like managing my memory. However, Unreal Engine, which is super-awesome, has a built-in garbage collector. In my experience with it, whenever someone tries to blame its GC for performance/memory abuse, to my dissatisfaction, the problem usually lies elsewhere.

The main reason I've fully switched to the JavaScript world for development is its ability to re-interpret code on the fly, allowing me to write and tweak code/UI/design changes in real time across multiple devices.


Coming from the world of C / Java / PHP / Python and similar, I found Objective-C maddening at times. But ARC really shines. Brilliant idea and well executed too: virtually no work on the programming side (well, a few small caveats about function names apply) and no performance penalty on the user side. Win-win. Good job Apple.


Am I the only person who's not a fan of the whole blogging with attitude trend?


No, you're not alone. (One issue at play may be selection bias. The blog posts with attitudes cause a bigger reaction in readers, causing higher sharing, causing them to be read more, causing them to be overrepresented in what we see.)


No, you're not. It detracts from the quality of the article.


I didn't pick up any attitude. The author is quite up front about his position, but I didn't sense that he was belittling those that took an opposing pov; he simply thought they were wrong and attempted to provide evidence (the veracity of the evidence could, however, be debated).


I agree, but here it’s backed up with evidence. Maybe the concerns will be irrelevant in three years’ time with more advancements in hardware (though the author addresses that), but the points made are valid.


Nicely explains why I find Android's use of Java as the lingua franca utterly perplexing. GC has no place on a low-power interactive platform.


Maybe it's a trade-off to make developer adoption easy.


I’m not sure what alternatives they had. Of the top 10 languages on GitHub (https://github.com/languages), only C and C++ do not have garbage collection. Those are quite a different beast from Java or Objective-C.

There’s a selection bias here but I struggle to come up with a popular non-GC, compiled language that operates on the same sort of level as Java or Objective-C.


How about Pascal/Modula or FreePascal? It's almost as easy as Java (and certainly less error-prone than C), and almost as fast as C.

Alas, I went to college in a time when they still taught Pascal, but then phased it out for C, which was more widely available on multiple platforms by the late 80s.


Python's GC can be turned off.


Isn't Python's GC reference-counted and deterministic anyway?


Actually, the exact opposite happened with game developers.


When Android was being designed, though, did people really think high-performance games on mobile were going to be a big deal? Remember, at the time, Apple didn't have apps, and the first Android phones had almost no built-in storage (and no native code for third party developers; the NDK wasn't added until 1.5). It seems fairly clear that the nature of the mobile games market that emerged caught both Apple and Google by surprise.


It wasn't a surprise to Apple; they demoed a game the day they announced the iOS SDK:

http://youtu.be/Qs4gCwlBkWo?t=2m41s


Well, really, they had very limited options. If they wanted a high-level non-GC language... well, they could have gone with Objective C, I suppose...


Disclaimer: I'm a JS JIT developer.

The use of Sunspider performance to talk about JS application performance is unfortunate, and undermines a significant part of the argument made in the article. Sunspider is a great benchmark to use to talk about web _page_ performance.

This is how JITs work:

1. Javascript code starts executing in a slow environment.

2. In this execution mode, a lot of overhead is spent collecting type information so that the hot code can be optimized.

3. After some period of time, the hot code is compiled with an optimizing compiler, which is a very expensive task.

4. The compiled code starts running.

For example, in Mozilla's JS engine, these are the actual numbers associated with optimization:

1. A function starts off running in the interpreter.

2. After it gets called 10 times (or a loop within it goes around 10 times), it gets compiled with the Baseline JIT, which performs heavyweight bytecode analysis on the javascript code and generates unoptimized machine code that is instrumented to acquire information about the executing JS.

3. After 1000 iterations (most of that running within Baseline jitcode), the function gets compiled with our optimizing compiler, IonMonkey.

How does this relate to Sunspider?

Well, if you actually run Sunspider, you'll notice that the entire benchmark takes about 0.2 seconds or less on a modern machine (about 1.5 seconds on a modern phone). It's an extremely tiny benchmark (in terms of computation heaviness). Comparatively, the Octane suite, while containing fewer benchmarks, actually takes 100 times or so longer to execute than sunspider.

How many apps do you use that run for less than 1.5 seconds?

This is the problem with using Sunspider to talk about app performance. Sunspider doesn't measure JIT performance very well at all. When JIT engines execute Sunspider, they spend a relatively large amount of that time just collecting the necessary information before they can start optimizing. Then, soon after we spend all that effort optimizing the code, the benchmark ends, without really taking advantage of all that fast code.

Very often, JIT optimizations that speed up computationally heavy benchmarks actually regress Sunspider. This is because the optimizations add some extra weight to the initial analysis that happens before optimized code is generated. This hurts sunspider because the initial analysis section is much more heavily weighted than later optimized performance.

Sunspider's runtime characteristics make it a good benchmark for measuring web PAGE performance. Web pages are much more likely to contain code that executes for short periods of time (a few ms), and then stops. But apps have much different runtime characteristics. Apps don't execute code for a few milliseconds and then stop. They run for dozens of seconds, minimally, and often minutes at a time. A JIT engine has much more opportunity to generate good optimized code, and have that optimized code taken advantage of over more of the execution time.
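
A toy way to see the warm-up effect yourself in a browser console (timings are illustrative only and vary a lot by engine and device; this isn't from the post):

    function sumOfSquares(n) {
      var total = 0;
      for (var i = 0; i < n; i++) total += i * i;
      return total;
    }

    function measure(label, calls) {
      var t0 = performance.now();
      var sink = 0;
      for (var i = 0; i < calls; i++) sink += sumOfSquares(1000);
      var perCall = (performance.now() - t0) / calls;
      console.log(label, perCall.toFixed(4) + ' ms/call', sink); // log sink so it isn't dead code
    }

    // A Sunspider-sized run spends a large share of its time in the slower
    // tiers and in the compiler itself; an app-sized run amortizes that away.
    measure('short run (mostly warm-up):', 10);
    measure('long run (warm-up amortized):', 100000);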

It's unfortunate that such a fundamental misunderstanding of the characteristics and nature of optimized javascript is used to drive an argument about it.


On Octane I get basically the same performance difference between desktop (Core i7 L620) and mobile (Galaxy S3), so I don't think you can wave away his critique quite so easily. I've seen other people's results that seem to suggest that mobile scores 5-10x worse than desktop too. Although I would quite like to see more benchmark results for Octane published, I haven't found very many that make it easy to compare.


That's not the point I was addressing. It's obviously true that mobile is slower than desktop by a factor of 5-10x. However, the author uses Sunspider as a metric to claim that "JS engines haven't really gotten all that much better since 2008".

That's not true at all.

On benchmarks like Kraken and Octane, which actually take multiple seconds to run instead of just milliseconds, both V8 and SpiderMonkey have improved significantly even just over the past year.

Speaking for SpiderMonkey (only because that's the engine I'm more familiar with), we've bumped Kraken by 20-25%, and we've bumped Octane by around 50%.

On real-world app code like pdfjs, and gbemu.. we've improved performance by 2-3x over the last year.

JS has gotten significantly faster over the last couple of years, and we're working quite hard on making it faster yet.


I think what he is saying is that your work is in vain because it doesn't increase performance by 10x or more and that such small increases are nice but not nice enough to make it worthwhile to use javascript.


Right, and no software will ever migrate to the Web, because it's 5-10 times slower than native. Who's gonna use it, right? Case closed.

Except it's not.


Just because I tried to explain the point doesn't mean I believe the point.


Sorry, I must have written that in the heat of the moment. Either that, or I replied to the wrong post. Is there a way to delete that reply?


The 5-10x difference mentioned is the difference in raw hardware speed between mobile CPUs and desktop CPUs, not a reference to JS speed vs native speed.

JS speed as compared to native depends heavily on the nature of the code being executed, how type-stable it is, how polymorphic it is, what its allocation behaviour is like, and a number of other factors.


>It's unfortunate that such a fundamental misunderstanding of the characteristics and nature of optimized javascript is used to drive an argument about it.

I don't see any "fundamental misunderstanding of the characteristics and nature of optimized javascript".

At worst it merely shows a misunderstanding of Sunspider's utility. And that's like 1/10 of the article -- and he even mentions some objections himself.


I like how he says that "mobile web apps are slow and always will be" (not a direct quote), but then dismisses asm.js with "it wouldn't be JavaScript anymore", so it doesn't count.

The point of asm.js (if I understand it correctly) is that you can use JavaScript in a way similar to Python -- you can write your application in a high-level language and then, after profiling, rewrite the performance-critical parts in a lower-level, less powerful, but faster language.

That means it is still a "web app", but it can perform reasonably.


I generally agree with his comments on JS and memory management for stuff like games or image processing, but IMHO these are not, in general, the reason people have this notion that "mobile web apps are slow". There aren't a lot of games or Instagram-like apps written in JS; what there are is lots of content-oriented and form-processing mobile web sites, like your bank, your insurance company, Slate or HuffingtonPost. It is these sites which are prompting you to download native versions, and it is people's experience of these lightweight (non-CPU-dependent) sites as being slow which I think needs addressing.

I would break it down into three effects: 1) startup latency

2) excessive repaints/not leveraging compositor/jank

3) 'uncanny valley' issues where web emulations of native widgets don't match 100% with the feel of system widgets.

The basic document-oriented part of the mobile web is a lot slower than something like UIKit/TextKit on iOS, so even launching a basic form to display some data or a few paragraphs, and scrolling it smoothly, with all rendering effects and without layout or repaint jank, takes more work to optimize for on the mobile web. Therefore many developers don't do it, or aren't aware of how to do it, leading to the end-user perception that "mobile web is slow".

I'd summarize my point of view as: NaCl/PNaCL won't solve our problems (although it is still a good idea to do them)


Has anyone in mobile OS and mobile apps ever questioned the assumptions that Java must be used and/or a scripting language must be used, e.g., Javascript?

What if we had a mobile OS and a mobile app that was 100% Java free and only ran compiled code (i.e., that compiles to ARM assembly)?

Under the Steve Jobs theory of computers, everything should be "fast enough" just as it is, because Moore's law and the increasing computing power of hardware should allow us to focus entirely on making everything as user-friendly (including developer-friendly) as possible without regard for software efficiency. Watch, for example, the 1980 documentary "Entrepreneur" to see Jobs state this while the cameras follow him around during a NeXT retreat in Pebble Beach.

But I've also seen Hertzfeld say that Jobs, for example, pushed hard for him to try to reduce boot times on the Mac. This of course is firmly in the realm of software (bootloader and kernel), so it would seem a conflict of ideas. Software efficiency does matter sometimes. Even when the focus is user friendliness and user experience.

Jobs also said things back in the day about "the line between hardware and software" becoming blurred.

I find this difficult to agree with. But then I'm not called a genius like he was.

There are no doubt competent C and ARM assembly programmers out there. I've seen them doing embedded work and making games. Alas, I do not think they are by and large involved with working on the popular mobile OS's. Instead the focus seems to be on people who prefer to work with virtual machines and scripting languages.


> Has anyone in mobile OS and mobile apps ever questioned the assumptions that Java must be used and/or a scripting language must be used, e.g., Javascript? What if we had a mobile OS and a mobile app that was 100% Java free and only ran compiled code (i.e., that compiles to ARM assembly)?

First, it's a bit pedantic but JIT VMs still compile down to ARM assembly on an ARM device. They basically optimize away the overhead of interpreting their target language.

That said, what you're getting at is how Palm and Windows Mobile did it (for the most part, there is/was .Net for Windows CE) and how iOS does it now. Ubuntu Mobile and Win Phone 8 offer native toolsets in addition to GC/managed runtimes.

I don't think it's an assumption at all that mobile devices have to use Java. J2ME didn't exactly take over the world, and that had the advantage of hardware acceleration in the form of ARM Jazelle[1].

What has happened is as hardware gets faster/cheaper, Java, .Net, JS, etc become a better fit for development on mobile devices. On the Desktop we have stupidly powerful machines that can generally brute force their way through the inherent overhead of interpreted/JITed/VMed languages. On mobile however, we're not there yet.

[1]http://en.wikipedia.org/wiki/Jazelle


"become a better fit"

How? Why?

I might better understand your perspective if you could elaborate on the details.

Why do we have to write resource-hogging software _now_ for mobile if the devices are not yet ready to "brute force" their way through it? And if we wrote small, fast software _now_, what would be the downside of this as devices get more powerful in the future?

Users complaining about lightning-fast programs that lack the whiz-bang factor that would have been added if developers had used interpreted languages? I can't imagine it.


As an old Java programmer, I'm astounded that Apple are deprecating garbage collection on OS X. Is ARC really a better approach on the desktop?


It's a static vs. realtime graphics issue rather than a mobile vs. desktop one. Basically, Apple is treating all graphics as a hard-realtime condition, and most people don't understand that.

When you cross the line into realtime, many things magically happen. Anything non-deterministic becomes a whole source of problems, and GC is one of the biggest. Under realtime conditions, manual memory management is easier than dealing with a non-deterministic black box.

So even on desktop, trivial GC is inferior architecture for realtime program by the definition of realtime. There's no room for non-deterministic deferred batch operation in 60fps graphics. Anyway specially designed GC may overcome this, but I couldn't see any actual implementation on the tech market.

Unlike old days, people want more dynamically and more smoothly working apps. Like games, all apps will become fully realtime eventually. This means you need real-time graphics approach. rather than static graphics.

If you're lucky that your users don't care those stuffs, then you're safe. But Apple is the company needs to treat single frame drop as failure. That's why they don't use GC and finally deprecated it.
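To make the numbers concrete (a minimal JavaScript sketch, not anything from the article): at 60fps every frame has a budget of roughly 16.7ms, and any non-deterministic pause that lands inside a frame, a GC cycle being one common example, shows up as one or more dropped frames.

  // Rough frame-budget monitor: at 60fps each frame gets ~16.7ms.
  // Any pause longer than that, GC or otherwise, is a visible dropped frame.
  var FRAME_BUDGET_MS = 1000 / 60;
  var last = performance.now();

  function tick(now) {
    var delta = now - last;
    last = now;
    if (delta > FRAME_BUDGET_MS * 1.5) {
      // Missed at least one vsync; on a GC'd runtime a collection pause
      // is one common culprit.
      console.warn('Dropped frame(s): ' + delta.toFixed(1) + 'ms since last tick');
    }
    // ...draw the current frame here...
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);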


No. But Apple has a shockingly small number of employees relative to other big software names, and they have historically had far fewer third-party developers than other platforms.

As a result, they have chosen to consolidate the mindshare onto one memory management technology rather than dilute the talent pool with two competing proposals. Given that iOS is much, much more popular than OS X, they have chosen to standardize on the technology that works best for the popular platform.


I would respectfully have to disagree. I've been programming for about 30 years, but finally broke down and bought my first Mac about a year ago. It's a nice platform, and the native apps are a joy to use, compared to the sludge that most Windows apps are.

Oh, and it lets me do most of what I have been doing on Linux the last 18 years. Not quite everything, but pretty close.

No matter how much you tweak and tune a GC, there are still going to be times when it destroys "locality" in a hierarchical memory system and causes some kind of pause. For a small form entry program, this is probably negligible. As the size of the program and/or its data grows, or the time constraints grow tighter, these pauses will become more and more unacceptable.

Most of my work the last 10 years has been in Java, but the JVM is not the one true path to enlightenment. TMTOWTDI :-)


Yes, insofar as their desktop operating system is also their laptop operating system.

A prominent theme at WWDC this year was all of the changes they've been doing to improve power management. That garbage collection thread always running in the background is going to interfere with your efforts to save battery. A predictable strategy based on automatic reference counting, on the other hand...


Have you looked into what ARC really is and does? If not I suggest you do.

ARC replaces the manual retain/release management that used to be required; the compiler inserts those calls for you. Most[1] of the time, writing ARC code is just like writing code under a GC[2][3]. The additional thought required compared to garbage collection is pretty small, and the benefits, just in terms of determinism, are quite valuable while debugging, not to mention the improved performance.

[1] Except when interfacing with non-ARC code.

[2] Some garbage collection algorithms can cope with reference cycles which ARC can't so you need to make sure the loop is broken by using a weak reference.

[3] As with Java, you still need to consider object lifetimes to some extent, particularly when requesting notifications or setting callbacks.


In one of the WWDC sessions they said that they rewrote Xcode 5 to use ARC and saw 1.5-2x speed improvements in key tasks like indexing source files.


Objective C GC wasn't around for a long time, and never really worked well; in particular, using it with C stuff (and Objective C apps tend to use C stuff all the time, directly; there's no equivalent of the JNI) was very messy and error-prone.


This is actually a very interesting post, and the first one of this level of documentation I've seen in a while. I also actually agree with most of the author's points.

That being said, I think there's a big elephant in the room: the additional layers of complexity. Much of the younger generation in the web development community seems very far removed from the principles of computer engineering, to the point that there are things you can arrive at simply by common sense, yet they remain opaque to them. For instance:

1. Performance hits aren't only due to slower "raw" data processing; they can also occur due to data representation -- think about cache hits, for instance (see the sketch after this list). Your average mobile application's processing is more IO-bound than processor-bound. Unless you can guarantee that your data is being efficiently represented, you take a hit from that.

2. Since the applications are sandboxed, the amount of memory your runtime occupies is directly subtracted from the amount of memory your application gets. The more layers your runtime piles up that cannot be shared between applications, the more memory you consume without actually offering any functionality.

3. Every additional layer needs some processing time. Writing into a buffer that gets efficiently transferred into video memory is always going to be faster than first JIT-ing the instructions that write into a buffer that gets efficiently transferred into video memory -- at the very least the first time around.
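As a rough illustration of point 1 (a sketch with made-up sizes, not a rigorous benchmark): the same logical data laid out as a flat typed array is usually far friendlier to the cache than an array of small heap objects, even though both hold the "same" values.

  // Same logical data, two representations. The flat typed array keeps
  // values contiguous in memory; the array of objects scatters them
  // across the heap, so iterating it takes more cache misses.
  var N = 1000000;

  // Representation A: array of objects (pointer-chasing).
  var points = [];
  for (var i = 0; i < N; i++) {
    points.push({ x: i, y: i * 2 });
  }

  // Representation B: flat typed array (contiguous).
  var coords = new Float64Array(N * 2);
  for (var j = 0; j < N; j++) {
    coords[j * 2] = j;
    coords[j * 2 + 1] = j * 2;
  }

  function sumObjects() {
    var s = 0;
    for (var i = 0; i < N; i++) s += points[i].x + points[i].y;
    return s;
  }

  function sumTyped() {
    var s = 0;
    for (var i = 0; i < N * 2; i++) s += coords[i];
    return s;
  }

  console.time('objects'); sumObjects(); console.timeEnd('objects');
  console.time('typed');   sumTyped();   console.timeEnd('typed');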

I'm not sure if someone is actually trying to say or (futilely) prove that a web application is going to be as fast as a native one (assuming, of course, the native environment is worth a fsck) -- that's something people would be able to predict easily. This isn't rocket science, as long as you leave the jQuery ivory tower every once in a while.

The question of whether it's fast enough, on the other hand, is a different one, but I think it misses the wider picture of where "fast" sits in the "efficiency" forest.

I honestly consider web applications that don't actually depend on widely, remotely accessible data for their function to be an inferior technical solution; a music player that is written as a web application but doesn't stream data from the Internet and doesn't even issue a single HTTP request to a remote server is, IMO, a badly engineered system, due to the additional technical complexity it involves (even if some of it is hidden). That being said, if I had to write a mobile application that should work on more than one platform (or if that platform were Android...), I'd do it as a web application for other reasons:

1. The fragmentation of mobile platforms is incredible. One can barely handle the fragmentation of the Android platform alone, where at least all you need to do is design a handful of UI versions of your app. Due to the closed nature of most of the mobile platforms, web applications are really the only (if inferior, shaggy and poorly-performing) way to write (mostly) portable applications instead of writing as many applications as there are platforms.

2. Some of the platforms are themselves economically intractable for smaller teams. Android is a prime example of that; even if we skip over the overengineering of the native API, the documentation is simply useless. The odds of having a hundred monkeys produce something on the same level of coherence and legibility as the Android API documentation by typing random letters on a hundred keyboards are so high, Google should consider hiring their local zoo for tech writing consulting. When you have deadlines to meet and limited funding to use (read: you're outside your parents' basement) for a mobile project, you can't always afford that kind of thing.

"Technically superior", even when measurable in performance, isn't always superior in real life. Sure, it's sad that a programming environment with bare-bones debugging, almost no profiling or static analysis capabilities to speak of, and limited optimization options at best is the best you can get for portable mobile applications, but I do hope it's just the difficult beginning of an otherwise productive application platform.

I also don't think the fight is ever going to be resolved. Practical experience shows users' performance demands are endless: we're nearly thirty years from the first Macintosh, and yet modern computers boot slower and applications load slower than they did then. If thirty years haven't quenched users' thirst for good looks and performance, chances are another thirty years won't, either, so thirty years from now we'll still have people frustrated that web applications (or whatever cancerous growth we'll have then) aren't fast enough.

Edit: there's also another point that I wanted to make, but I forgot about it while ranting the things above. It's related to this:

> Whether or not this works out kind of hinges on your faith in Moore’s Law in the face of trying to power a chip on a 3-ounce battery. I am not a hardware engineer, but I once worked for a major semiconductor company, and the people there tell me that these days performance is mostly a function of your process (e.g., the thing they measure in “nanometers”). The iPhone 5′s impressive performance is due in no small part to a process shrink from 45nm to 32nm — a reduction of about a third. But to do it again, Apple would have to shrink to a 22nm process.

The point the author is trying to make is valid IMHO -- Intel's processors do benefit from having a superior fabrication technology, but just like in the case above, "raw" performance is really just one of the trees in the "performance" forest.

First off, a really advanced, difficult manufacturing process isn't nice when you want massive production. Most ARM parts are only now reaching 28 nm, which means the production lines are cheaper; the process is also more reliable, and being able to keep the same manufacturing line in production for a longer time means the whole thing costs less in the long run. It also works better with an economic model like ARM's -- you license your designs rather than producing the chips. When you have a bunch of vendors competing to implement the same standard, with many of them able to afford the production tools they need, chances are you're going to see lower cost just from the competition effect. There's also the matter of power consumption, which isn't negligible at all, especially with batteries being such nasty, difficult beasts (unfortunately, we suck at storing electrical energy).

Overall, I think that within a year or two, once the amount of money being shoved into mobile systems reaches its peak, we'll see a notably slower rate of performance improvement on mobile platforms, at least in terms of raw processing power. Improvements will start coming from other areas.


And how does this affect Firefox OS's performance?


That's a great question. Does anyone have any anecdotal/benchmark evidence about the performance of Firefox OS?


I've tried it, and yes – it's sluggish.


It would be interesting to know if that is just the low-spec hardware that Firefox OS targets. E.g., would a different approach also be relatively slow on that hardware?


> BUT I THOUGHT V8 / MODERN JS HAD NEAR-C PERFORMANCE?

> It depends on what you mean by “near”. If your C program executes in 10ms, then a 50ms JavaScript program would be “near-C” speed. If your C program executes in 10 seconds, a 50-second JavaScript program, for most ordinary people would probably not be near-C speed.

I would actually reverse those two examples. A 10 second to 50 second jump probably wouldn't affect most user-facing programs that much, since it's either being done in the background or the user is staring at a progress bar. The UI of the application is probably (hopefully) already designed around the fact that there is a long-running process. But a 10ms to 50ms increase can be a big deal, since it's probably something that is intended to appear "instant" to the user.
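One way to see that threshold in code (a sketch; the ~100ms "feels instant" budget is a common rule of thumb rather than anything from the article, and 'save-button' and doTheWork() are placeholder names):

  var INSTANT_BUDGET_MS = 100; // common "feels instant" rule of thumb

  function doTheWork() {
    // ...placeholder for the actual operation...
  }

  var button = document.getElementById('save-button'); // hypothetical element
  if (button) {
    button.addEventListener('click', function () {
      var start = performance.now();
      doTheWork();
      var elapsed = performance.now() - start;

      // A 10ms -> 50ms regression can cross this threshold and stop feeling
      // instant; a 10s -> 50s one always needed progress UI anyway.
      if (elapsed > INSTANT_BUDGET_MS) {
        console.warn('Interaction took ' + elapsed.toFixed(1) + 'ms');
      }
    });
  }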


All the pain of HTML5 and dealing with legacy and inconsistent browser implementations and yes, even poor performance in JavaScript, is worth it because of the deployment model. Writing videogames in HTML5 on mobile is a tall order.

I think it's more important as an industry that we reckon with how to effectively manage large teams developing huge applications in JS/HTML/CSS. I suppose that Facebook's development suffered from too many cooks in the kitchen, cruft building up, race conditions, etc.

A video series I've seen tries to tackle scaling up JavaScript development by organizing development from an Agile/TDD angle, and I think with some success:

http://www.letscodejavascript.com/


Off-topic nitpick: I believe this sentence is untrue: "And Intel had to invent a whole new kind of transistor since the ordinary kind doesn’t work at 22nm scale." First, regular transistors do work at the 22nm scale (but tri-gate/finfet transistors work better, if you can pattern them reliably). Also, Intel didn't invent the tri-gate/finfet transistor. They just got it to production first.


To be fair, he said he's not a hardware guy. But he's also right: FinFET transistors are better for power consumption, and that's really important in mobile devices.


Some of these comments are very out of date. Android hasn't had a stop the world garbage collector that caused 200-300ms delays since before Android 2.3. It's true I had to avoid dereferencing anything in the game loop back before then, but I don't know anyone who codes like that anymore on Android and I know hundreds of developers...


And most Android devices are on what version of Android?


2.3 and above, according to Unity's hardware stats at http://stats.unity3d.com/mobile/index.html


I really liked this article: I think he was very methodical in challenging the current dogma that GC/VM operations can be made as fast as or faster than slightly lower-level approaches.

For the curious, I have a simple benchmark program that focuses on string manipulation (and deliberately creates many temp string objects, like a real app would) in various languages. If anybody wants to download it and try it on the latest version of their favorite languages on the host du jour, go for it!

https://github.com/roboprog/mp_bench
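For context, here is a sketch (in JavaScript, and not the code from the linked repo) of the kind of workload such a benchmark exercises: lots of deliberately short-lived temporary strings, so the allocator and GC get a real workout.

  function churnStrings(iterations) {
    var result = '';
    for (var i = 0; i < iterations; i++) {
      // Each concatenation produces intermediate temporary strings.
      var line = 'row ' + i + ': ' + (i * i).toString(16) + '\n';
      result += line.toUpperCase();
    }
    return result.length;
  }

  var t0 = Date.now();
  churnStrings(200000);
  console.log('string churn took ' + (Date.now() - t0) + 'ms');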


Over 180 network requests and over 40 scripts: maybe the author needs to learn a bit more about best practices on the web before standing on his soapbox. And quit bolding things mid-sentence.


Anyone interested in optimising their mobile web applications should watch Apple's Safari tracks from this year's WWDC.

They have plenty of informative advice on how to improve general performance and how to use tools WebKit provides to measure performance problems.

Some key takeaways:

Avoid using libraries (like jQuery); doing so will reduce memory consumption significantly.

Be careful how often you're invalidating style info and forcing recomputation.

Avoid using scroll handlers to do layout (especially when you're likely to inadvertently invalidate styles at the same time); a sketch of this pattern follows below.
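A minimal sketch of that last point (the .parallax selector and the 0.3 factor are made up): record the scroll position in the handler and defer all DOM reads and writes to requestAnimationFrame, so the handler itself never forces a synchronous style/layout pass.

  // Record where we are; do the actual work at most once per frame.
  var latestScrollY = 0;
  var ticking = false;

  function onFrame() {
    ticking = false;
    var el = document.querySelector('.parallax'); // hypothetical element
    if (el) {
      // One write per frame, via a transform so layout isn't invalidated.
      var value = 'translateY(' + (latestScrollY * 0.3) + 'px)';
      el.style.webkitTransform = value; // older WebKit
      el.style.transform = value;
    }
  }

  window.addEventListener('scroll', function () {
    latestScrollY = window.pageYOffset;
    if (!ticking) {
      ticking = true;
      requestAnimationFrame(onFrame);
    }
  });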


Aside: in some circumstances, JIT-compiled code can be faster than statically compiled code, because the JIT can recompile based on run-time usage and so choose optimizations appropriate to the actual execution flow.
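A small JavaScript illustration of that idea (engine behaviour varies, so treat it as a sketch rather than a guarantee): a JIT that observes add() being called only with numbers can specialize it down to raw arithmetic, something an ahead-of-time compiler for a dynamic language can't safely assume.

  // The JIT watches the types actually flowing through this function.
  function add(a, b) {
    return a + b;
  }

  // If it only ever sees numbers, the engine can compile add() down to a
  // few machine instructions (and deoptimize later if that guess breaks).
  var total = 0;
  for (var i = 0; i < 1e7; i++) {
    total = add(total, 1); // monomorphic: numbers only
  }

  // Mixing in other types afterwards typically forces a slower, generic path.
  console.log(add('total: ', total));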


I have to say, any minor quibbles aside, this is the best modern-day, non-academic article I've read on HN since forever.


Why don't GC languages have optional memory management -- include free(), so there's less work for the GC?


That would tend to make the GC more complicated, not less. Some memory-managed VMs do allow you to allocate outside the GC'd heap, though; see java.nio.ByteBuffer in Java (or sun.misc.Unsafe, if you're very daring).
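The closest JavaScript analogue (only an analogue, and a sketch at that: an ArrayBuffer is still a GC-managed object, but its contents are never scanned and you control its reuse yourself) is to preallocate one big buffer and hand out slices of it, so the steady state creates no new garbage.

  // Preallocate one large buffer and run a trivial bump allocator over it.
  // The GC sees a single long-lived object instead of millions of small ones.
  var HEAP = new ArrayBuffer(16 * 1024 * 1024); // 16 MB, chosen arbitrarily
  var heapF64 = new Float64Array(HEAP);
  var next = 0;

  // "Allocate" count doubles; returns a view into the preallocated heap.
  // (No bounds checking in this sketch.)
  function allocF64(count) {
    var view = heapF64.subarray(next, next + count);
    next += count;
    return view;
  }

  // "Free" everything at once, e.g. at the end of a frame or request.
  function resetHeap() {
    next = 0;
  }

  var velocities = allocF64(1024);
  velocities[0] = 9.81;
  resetHeap();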


Maybe Microsoft C++/CLI?


Do you know what GC does?


While I feel that most of the article is little more than grasping at straws, he does make some good points about memory management in JavaScript and on mobile.

Garbage collectors should be optional. I would really love to be able to handle my memory directly if possible, and while there are some things I can do related to object pooling, I lose control over that with closures and a more functional approach.

I guess to counter my own argument here I will say that for most resource-intensive things like simulations (call them games if you will), an object-oriented approach does a better job of modeling than a functional one. Game loops, sprites, bullets flying around... go OOP... and then use an object pool and limit your use of closures. Is it a pain in the ass? You bet, but so is having to manage the heap on your own.
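A minimal sketch of that object-pool idea in JavaScript (the Bullet shape and pool size are invented for illustration): bullets get recycled instead of allocated per shot, so the game loop produces almost no garbage for the collector to chase.

  function Bullet() {
    this.x = 0; this.y = 0; this.vx = 0; this.vy = 0;
    this.active = false;
  }

  var POOL_SIZE = 256;
  var pool = [];
  for (var i = 0; i < POOL_SIZE; i++) pool.push(new Bullet());

  function fire(x, y, vx, vy) {
    for (var i = 0; i < POOL_SIZE; i++) {
      var b = pool[i];
      if (!b.active) { // reuse the first free bullet
        b.x = x; b.y = y; b.vx = vx; b.vy = vy;
        b.active = true;
        return b;
      }
    }
    return null; // pool exhausted; drop the shot
  }

  function update(dt) {
    for (var i = 0; i < POOL_SIZE; i++) {
      var b = pool[i];
      if (!b.active) continue;
      b.x += b.vx * dt;
      b.y += b.vy * dt;
      if (b.y < 0) b.active = false; // "free" just means "mark for reuse"
    }
  }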

Now, as for the rest of the article, the author did a LOT to discredit himself by insisting that JS performance will not increase over the next few years... dude, come on, winning isn't as important as learning and discussing, and you're clearly just trying to WIN an argument.

This article is not very balanced. It is basically just saying that all software written for a VM is stupid. That the concept of Garbage Collection is an abomination.

The truth is that VMs are incredibly successful and aren't going anywhere. I mean, look at the rise of VPS! Now compiled C and even machine code aren't running as fast as they could be! Run for the hills!

Performance is not the be-all, end-all of computing. We're a social species, and the reason that JS is so successful has more to do with the fact that it is hosted in a VM based on open standards than anything else.

So why have we moved to virtualization? Because it makes a LOT of sense. It's the same reason we invented compilers and higher-level languages in the first place... we're human. We are the ones writing and reading source code... the machines don't, they just blindly follow instructions. We choose to sacrifice speed for a more humanistic interaction with our machines, and that applies just as much to programmers as to the people who just use programs.

These are very early days for mobile computing. You say you want to raise the level of discourse? Well then do it by writing articles that are more about discussing solutions than winning. For example, HTML5rocks.com (and yes, I am fully aware of the HTML5 marketing machine..) has been writing some good articles on memory management in Javascript, and that approach actually raises the level of discourse.


I'm not sure how closely you actually read his post because he doesn't say anything you attribute to him about software written for VMs being stupid--in fact he explicitly states that it makes a lot of sense in desktop environments because it improves programmer productivity. He's got a heading, in all caps, saying "GC ON MOBILE IS NOT THE SAME ANIMAL AS GC ON THE DESKTOP", and he quotes, while indicating agreement with, guys like Herb Sutter and Miguel de Icaza explicitly stating that managed languages make tradeoffs for developer productivity. Which they do, and those tradeoffs don't really work on mobile (and won't until we see significantly improved memory density, among other things).

Disagree with him if you want to--I'd like to, because I like the JVM and the CLR, but he has the weight of evidence on his side--but can we not make stuff up?


> This article is not very balanced. It is basically just saying that all software written for a VM is stupid. That the concept of Garbage Collection is an abomination.

Er, no it's not; it's pointing out a number of ways that it's problematic for GUI apps (and especially games) on highly resource-constrained mobile devices.

> The truth is that VMs are incredibly successful and aren't going anywhere. I mean, look at the rise of VPS!

He wasn't talking about that sort of VM.


I think the author is making a big mistake in selling short the possibility of x86-class CPU performance on mobile.

Perhaps it comes from his iOS focus, but he's just wrong about the difficulty of transitioning the mobile software ecosystem to x86. Android is there today and has been for over a year (as the reviews of last year's x86 smartphones make abundantly clear). And, as we all know, Android is the majority of the devices, by a large margin.

It is true that iOS, Windows Phone and other platforms may have more trouble with an x86 transition, but so what? If Android makes a performance leap by moving to x86, other platforms will either find a way to keep up or they'll fall behind in the market. Either way, the mobile center-of-gravity will move towards x86-class CPU performance.


He really mentions the software problem more as an aside (and it's not like Android on x86 is painless; where there's ARM-only native code, it is JITed to x86 similarly to Apple's old Rosetta thing, which hurts performance and impacts power usage); the greater issue is that Intel isn't really ready (as of now, the Atom SoC stuff is still on an old process node, and has extremely weak GPUs), and it doesn't look like they will be anytime soon.

Also, of course, Atom isn't really all _that_ much faster than modern ARM.


> and it doesn't look like they will be anytime soon.

Bay Trail tablets, due out for the holidays, easily beating the ARM side's upcoming champion (Snapdragon 800):

http://www.extremetech.com/computing/160320-intel-bay-trail-...

It looks to me like Intel is readier than you think. Moreover, from the perspective of a web app developer, it doesn't matter whether Intel, Qualcomm, Apple, Samsung or whoever actually wins a CPU performance war - so long as one is fought. That I think we can count on.


Possibly. Really, given Intel's record in the mobile space, with impressive claims and products which are really extremely disappointing (generally from a power usage or GPU point of view), I'd like to see real benchmarks conducted by a reputable third party before getting too excited.


Motorola's Atom-based RAZR was slower than their ARM-based one. The x86 architecture isn't doing anything for you on mobile, and so far the reason Intel has struggled is that they couldn't compete on power consumption, a more important factor. Since desktops can burn power, they can run a lot more transistors and run them more quickly. I don't see mobile CPU performance catching up to desktops soon.


Here's a specific comparison of the Razr i vs the Razr M. http://www.engadget.com/2012/10/04/motorola-razr-i-review/

Yes, the Razr i lost (mostly modestly) on 4 out of 5 performance benchmarks, but...

1. It modestly won on power consumption (9 hours vs 8).

2. Quoting from the review: "Aside from the benchmark results outlined above, the Medfield entry offered a marginally faster response to most actions."

3. Drawing CPU vs CPU conclusions from several benchmarks is complicated by one of the other differences between the two phones - the GPU (which is mostly an orthogonal question).

Oh, and then there's that benchmark the RAZR i won, by almost a factor of 2: SunSpider. While SunSpider has its limitations, it's still the most relevant benchmark of the set to the discussion we're having right now. And this performance difference has real-world consequences (another quote): "The results remain largely unchanged, but after spending a week with the device, we'd like to add that the web browser still gives a superb performance."

Intel's problem in mobile has never been performance (except when self-inflicted and they're past that). As of last year, it isn't power consumption either. Today, Intel's last problem is modems (specifically LTE modems), and they're hard at work on that one too: http://newsroom.intel.com/community/intel_newsroom/blog/2013...

Finally, Intel's tagline for Silvermont is "~3X the Performance or ~5X Lower Power". If they pull that off and integrate it with a competitive LTE modem (and those are both still significant ifs, even though today's indications look good), that's a ~2X SunSpider gain combined with a separate ~3X CPU leap. Those won't just multiply, because the bottlenecks will shift and SunSpider is the wrong benchmark for CPU-bound JavaScript anyway. But however it plays out, that would still close a big chunk of the mobile-desktop JavaScript performance gap, especially when you notice that this year's Haswell focused on power consumption and, at best, only offers a small performance bump over last year's Ivy Bridge.

It isn't a lock, but it isn't a possibility to dismiss either.


I wish the author had explained why Flipboard, which is built in HTML5/JavaScript, is not awfully slow like LinkedIn.


Any clue that it's actually been implemented with HTML5/JS?


Why not increase RAM x5?


RAM has to be powered.


I hadn't considered power, thinking it would be minimal. Research agrees - but task dependent:

  The RAM, audio and flash subsystems consistently showed the lowest power consumption.
  ...RAM power can exceed CPU power in certain workloads, but in practical situations,
  CPU power overshadows RAM by a factor of two or more. [p.11, graph on p.5]
from 2010: An Analysis of Power Consumption in a Smartphone https://docs.google.com/viewer?url=https%3A%2F%2Fwww.usenix....



