The article is right about Javascript, but wrong about why mobile web apps are slow. Most mobile web apps are neither games nor image-processing apps; they are simple content sites or form-driven applications that are not CPU-bound and don't use a lot of JS. These apps are subjectively slower because of browser layout-engine performance.
Even on "heavy" JS apps, I've heard it said they only use 30% CPU. So if 70% time is not spent in JS, even if you run that 30% 50 times faster, at best you get a 42% speedup from 50 times better CPU performance.
I feel the author has caused many to take away the wrong message, and the anti-web contingent is lapping it up. Mobile web apps can be made to perform well today, but it takes far too much effort to tweak the CSS/DOM manipulations to avoid jank. Consider, for example, the need to manually "hint" that you want GPU compositing by using identity CSS3 transforms.
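For instance, the usual (hacky) way to nudge an element onto its own GPU-composited layer looks roughly like this; the element id is just illustrative:

    // An identity transform is visually a no-op, but the presence of a 3D
    // transform makes many engines promote the element to its own compositing layer.
    var el = document.getElementById('scroll-container'); // hypothetical element
    el.style.webkitTransform = 'translateZ(0)';
    el.style.transform = 'translateZ(0)';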
The rendering engines need to get better with leveraging the GPU and avoiding work when they don't need to.
If you want to see what a herculean effort can achieve, see Sencha's FastBook.
Basically, the author is right, but I think it's a red herring given that when people talk about "slow mobile web apps", 99% of the time, they are not talking about an Instagram written in JS, or AngryBirds in Javascript, but some basic news feed app.
Even if you accept that, given 10x more CPU than today, web apps would be equal to today's native apps (I don't accept this), you must acknowledge that you've given 10x more CPU to the native apps too. And clever people will find impressive ways to use that CPU to native's advantage (witness all the neat physics stuff in iOS 7).
>So Moore’s Law might be right after all, but it is right in a way that would require the entire mobile ecosystem to transition to x86. It’s not entirely impossible–it’s been done once before. But it was done at a time when yearly sales were around a million units, and now they are selling 62 million per quarter. It was done with an off-the-shelf virtualization environment that could emulate the old architecture at about 60% speed, meanwhile the performance of today’s hypothetical research virtualization systems for optimized (O3) ARM code are closer to 27%.
Why do you need virtualization if the code is Java bytecode? Sure, this doesn't work for the iPhone, but Android should be able to change processor architecture fairly easily (unless I'm missing something here). There's already the Android-x86 project.
Moore's law won't necessarily progress in a steady line. We will probably see more jumps and spikes as technology advances. Do you think computing power will merely double once we take advantage of quantum computing? No, it will go up orders of magnitude in a short period of time.
Our current progression with silicon based computing is slowing because we're reaching the physical limitations of that technology.
One additional point is that even if JS doesn't improve much, mobile browsers can get a lot faster. The iPhone's WebView is a lot faster than Android's WebView, so there's plenty of progress to be made, especially with rendering.
What actual proof does this dude have that pocket-sized computers have reached their limits of performance and memory?
All I see are some carefully selected quotes from some "hardware engineers" who are doing nothing more than offering up their opinion.
An "ex-Intel engineer" and a "robotics engineer" have a single paragraph. Really, this is your research? Some anonymous quotes, probably taken out of context?
> I have consulted many such qualified engineers for this article, and they have all declined to take the position on record. This suggests to me that the position is not any good.
>What actual proof does this dude have that pocket-sized computers have reached their limits of performance and memory?
Could you quote the part where the author of this article, or the original "Why mobile web apps are slow" article, says that we have reached a hard "limit of performance and memory"? Because I don't see where they say that at all.
The article I'm reading says this:
> Unchecked exponential growth has to end sometime, by definition, and this is how it would happen; not with a bang, but with a whimper. We won’t hit a wall, we’ll just…start…to…slow…down.
Agreed. The analysis article [1] they refer to is overrated. It's almost as bad as the flame-war articles it criticizes, mostly because it makes giant leaps of reasoning whenever that suits the author's point of view.
For example, the author answers the question "how does JS performance compare to native performance exactly?" by taking a random benchmark from the benchmarks game. What I was hoping for was a comparison with desktop browsers in:
- dom manipulation
- canvas
- css animation performance
These are different problem domains, each requiring a different kind of performance. The randomly picked benchmark is probably the one most removed from real-world usage.
And from my personal experience, the worst performance bottlenecks in mobile JS apps are in DOM rendering and manipulation, not in raw JS speed. But my problem domain isn't games and image processing: instead it involves a news reader, a presentation editor, GPS tracking, tiled map rendering, an exercise diary and stats app, and an app that makes Google Docs run on the iPad.
Interestingly enough, all iOS devices newer than the iPhone 4 can do all of the above smoothly, even in a WebView where Nitro is supposedly disabled, while the WebView on Android devices stutters and struggles to reach 30 FPS, sometimes dropping to 2-3 FPS even on modern devices and the latest version. So really, raw JS performance already doesn't matter for my use cases.
It's the variable rendering performance and behavior of Android's WebView that is the main problem. As soon as you think you've found a solution to a certain rendering speed problem that works across all Android devices, a new version of Android appears on a certain device and invalidates that assumption by being slow as molasses in January.
Another example that makes it hard for me to take this article seriously:
"It’s slower than server-side Java/Ruby/Python/C# by a factor of about 10"
How can he claim this with a straight face? Java itself is faster than Ruby by a factor of 10 - can he try to be a bit more accurate when making claims?
I do agree with one statement of the article: "Let's raise the level of discourse". However, this article doesn't do that at all.
This really needs to be stressed more. I was underwhelmed by that article, too, because the author does not seem to have the deep understanding of the issues involved (specifically JIT compilation and automatic memory management) that would be necessary. The overall conclusion -- that mobile HTML5 apps have inherent performance problems -- may well be correct, but not for the reasons he gives, and he often seems to misread the data he cites.
The most glaring issue was probably the garbage collection performance graph: he picked out the performance of mark-and-sweep collectors to make a bold statement about the memory needed for good performance, and completely ignored the data for the generational collectors when making that statement.
This is not to say that garbage collection is a non-problem in mobile applications (it can be, but the argument for why is trickier and is not easily translated into soundbites), but somebody flat-out ignoring data does not inspire confidence.
The big problem is that the author doesn't ever seem to have seen a compiler from the inside, yet makes sweeping statements about programming language technology that simply aren't borne out by fact.
There's a lot of interesting work on memristor memory, which is much cheaper and faster (and probably lower power) than RAM, and a lot of interesting work on faster, lower-power 3D memory interfaces.
Those and other hardware innovations could have a huge effect on the garbage collection issue, and this whole debate.
Mobile web apps are slow because of layers upon layers of leaky abstractions and algorithms at each layer that still have a lot of room to be optimized.
If you want a consistent 60 frames a second, you need to architect systems that complete the work that needs to render each frame within 16.6ms. If you can't complete all your computations within 16.6ms, you need to prioritize computations and spread them across frames.
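A rough sketch of that kind of per-frame budgeting, assuming a queue of small, independent work items (the names here are illustrative, not from any particular framework):

    // Process queued work each frame, but stop once the time budget is spent so
    // layout and paint can still happen within 16.6ms; leftovers wait for the next frame.
    var workQueue = [];            // small, independent units of work
    var FRAME_BUDGET_MS = 12;      // leave headroom for layout/paint in the 16.6ms frame

    function processFrame() {
        var start = performance.now();
        while (workQueue.length && (performance.now() - start) < FRAME_BUDGET_MS) {
            var task = workQueue.shift();
            task();
        }
        if (workQueue.length) {
            requestAnimationFrame(processFrame);
        }
    }

    requestAnimationFrame(processFrame);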
In my experience, the biggest reason things are slow is that they need to fetch content from outside their environment.
I don't know if this affects JS's ability to render, but in my experience JS-based apps are simply not able to give me the same uninterrupted feel as natively built ones; this includes even apps like RSS readers.
Even if this is true, it only spells the end of exponential growth in hardware performance and the start of a more efficient software era. Engineering is all about finding the right compromises to get the job done given a set of constraints. When hardware performance becomes the major constraint, you just have to design around that.
Maybe the slowing of this law (which this article hasn't convinced me of) will slow the introduction of new hardware. If that happens, perhaps it will allow closer optimization of applications to a small, relatively static hardware complement.
Phones should be thin clients anyway; the cloud provides the muscle. What's the use of packing compute into them? You just want them to get smaller and use zero energy (charging from ambient energy).
There are still good reasons to do computation on the device:
1. Do computation locally on local data, thus avoiding round-trips to a far-away server.
2. Distribute an app to lots of people without having to pay for a lot of server resources to match, since the computation is done on each user's device.
3. Enable users to keep more data private, by keeping it on the user's own device.
You will still have problems with latency, and you can't get away from the fact that you need power to communicate via radio; you're not going to be able to cut down the power the radio needs by very much.
“in a memory constrained environment garbage collection performance degrades exponentially.” The obvious solution, then, is to throw memory at the problem, no?
Isn't the obvious solution to avoid garbage collection whenever possible? Just consider 'Date.now()' vs 'new Date().getTime()' - are we even trying? Shouldn't we start considering unnecessary garbage a bug, like a memory leak? That might not change everything, but it surely would help, no?
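A tiny illustration of the difference (the second form allocates a throwaway Date object on every call; the first doesn't):

    var t1 = Date.now();           // current time in ms, no allocation
    var t2 = new Date().getTime(); // same value, but allocates a Date object
                                   // that immediately becomes garbage

    // In a hot path (e.g. a game loop at 60 FPS) the second form creates
    // thousands of short-lived objects per minute for the collector to clean up.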
That's an interesting idea, but how does a language distinguish between memory leaks and regular non-leaks? Maybe some kind of metadata that informs the GC about how memory will be used so it can optimize memory usage and issue errors when the code fails to follow these rules?
If we really start hitting these kinds of walls, I think that Javascript subsets (like asm.js) will become more prominent, but it's an interesting idea.
Then know that it was your interesting idea, since I wasn't thinking of anything automatic myself :) Maybe it is possible to automate things like generating code that minimizes branching, or reusing objects instead of destroying and allocating them all the time... but it's definitely possible, in some cases at least, to rethink the approach and reduce GC. Especially since there are usually trade-offs (using a fixed amount of memory "just in case" is distasteful for its own reasons), and the programmer might need to make an informed decision.
But people still need to be aware of it first; making new objects for one-time use is handy, and the normal way to go about things. It's certainly the default way tutorials tell you to get the time (even in game loops). At least personally, before today I didn't even know about Date.now(), and I've never seen a discussion about any of this in regard to libraries (how good or sloppy they are with GC)... it just doesn't seem to be on the radar outside of gamedev. And by the time stuff gets sluggish from the accumulated little objects here and there, there is no easy fix, other than a whole lot of refactoring or rewriting from scratch with it in mind.
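For what it's worth, the object-reuse idea mentioned above usually ends up looking like a simple pool; this is a hypothetical sketch, not from any particular library:

    // A minimal object pool: reuse vectors instead of allocating a new one each frame.
    function VectorPool() {
        this.free = [];
    }
    VectorPool.prototype.acquire = function (x, y) {
        var v = this.free.pop() || { x: 0, y: 0 };
        v.x = x;
        v.y = y;
        return v;
    };
    VectorPool.prototype.release = function (v) {
        this.free.push(v); // returned objects get reused, so they never become garbage
    };

    var pool = new VectorPool();
    var velocity = pool.acquire(3, 4);
    // ...use velocity during the frame...
    pool.release(velocity);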
Even on "heavy" JS apps, I've heard it said they only use 30% CPU. So if 70% time is not spent in JS, even if you run that 30% 50 times faster, at best you get a 42% speedup from 50 times better CPU performance.
I feel the author has caused many to have the wrong take-away message, and the anti-web contingent are lapping it up. Mobile Wep Apps can be made to perform fast today, but it takes way way too much effort to tweak the CSS/DOM manipulations to avoid jank. Consider for example the need to manually "hint" you want GPU compositing by using identity CSS3 transforms.
The rendering engines need to get better with leveraging the GPU and avoiding work when they don't need to.
If you want to see what a herculean effort can achieve, see Sencha's FastBook.
Basically, the author is right, but I think it's a red herring given that when people talk about "slow mobile web apps", 99% of the time, they are not talking about an Instagram written in JS, or AngryBirds in Javascript, but some basic news feed app.