When dealing with a browser, you're limited to a broken interaction model (document? URL? back button?), a broken security model, one programming language with a weak API (that isn't equally supported between browsers either), and you have to abuse HTML into doing things it was never meant to do. It's a development experience that is subpar even by 90's standards.
Add all that to the pedestrian performance, and I'm amazed this is still an option.
Well, Android does it.
And in all honesty, when apps start getting complex they're just as bad. There's a reason URLs exist: to give a unique identifier to an item. Apps need to do that too, and end up having to fudge around launching and reaching a specific item.
The difference here is that one platform requires shoehorning a URL into a non-document-centric use case, and the other one leaves it optional, for the developer to implement as he/she sees fit.
If I'm using the Yelp app, I expect to be able to communicate a restaurant listing to someone else via a URL. But why is it that my alarm clock needs a URL? Or my phone dialer?
The web was envisioned as interlinked documents, and for many parts of the web this is still very much the case (see: Amazon). For others this metaphor breaks down badly, and is the source of a great deal of hacks and kludges.
So does my init daemon!
Which is a symlink, so let's just specify a familiar program, and just throw on the proper file extension.
Still a URL. To a program. If your executor supported file URIs, you could run that.
I could host it on an http server, and if I mounted it using webdav as a davfs2, I could access anything on that server the same way.
You could write a python interpreter that can take a url to a python script to run: pythor http://github.com/some_repo/foo/script.py
A lot of programs already use URIs - I know KDE Dolphin specifies all resource paths as URLs regardless of whether they are local or remote, and it supports a buttload of URI schemes. The Qt5 QML engine specifies QML files to load by URI, so you can load remote QML applications in a browser that links in the QML runtime.
I mean, we even have this nice syntax (that isn't pervasive or transparent) to supply arguments in a url with something like google.com/search?q=bacon which in bash-world would be google --query bacon or google -q=bacon.
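The parallel can be made concrete with the WHATWG URL API built into Node and browsers (a rough sketch; the `google` command-line form is of course imaginary):

```javascript
// Sketch of the parallel: a URL query string is just named arguments.
// Uses the built-in WHATWG URL API; the "google" CLI is illustrative.
const url = new URL('https://google.com/search?q=bacon');

// Pull the query out the same way a CLI parser would pull out --query.
const query = url.searchParams.get('q'); // 'bacon'

// Going the other way: build the "command line" form from the URL.
const cliForm = `google --query ${query}`;
console.log(cliForm); // google --query bacon
```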
So what is missing for URLs to be uniform resource locators?
It's possible to encode just about any information in a URI, but that doesn't mean it isn't a kludge and a force-fit.
Take a very, very simple (and common) use case:
Holy God would you look at that URL. It doesn't refer to a resource. In fact it refers to application state. This is a gross abuse of the whole concept of a URL, but the folks at Google aren't idiots - they know this. But the fact of the matter is that Google Maps is not document-based, and people have legitimate need to transport application state, independent of the semantics of the information they're looking for. Even something as simple as showing my friend the same map I'm looking at so we can talk about it requires bending the role of URIs wildly out of shape.
Even better example:
One can argue that the previous link was clearly a reference to the airport, and there can be a URI for it (which wouldn't transport application state, substantially hobbling its use, but whatever). This link refers to "restaurants around Times Square", which isn't a logical entity, and whose relevance to the end user depends entirely on the application state being transported (e.g., zoom level, center point, filtered results, etc).
Every time these insane-o URLs get brought up, the URI acolytes seem to end up suggesting that maybe we shouldn't have rich, interactive maps and such things, that we should force software into roles that play nicely with document-centric, URI-friendly schemes. I for one believe that software serves users first, and that methods to locate and specify information need to adapt to users' needs, not the other way around.
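For a concrete picture of what "transporting application state" amounts to, here's a round-trip of a map viewport through a query string. This is only a sketch; the field names (lat, lng, zoom, q) and the maps.example.com host are made up for illustration, not Google Maps' real parameter scheme:

```javascript
// Serialize viewport state into a shareable URL. All parameter names
// and the host are hypothetical.
function stateToUrl(base, state) {
  const url = new URL(base);
  url.searchParams.set('lat', state.lat);
  url.searchParams.set('lng', state.lng);
  url.searchParams.set('zoom', state.zoom);
  url.searchParams.set('q', state.query);
  return url.toString();
}

// Recover the viewport state on the receiving end.
function urlToState(href) {
  const p = new URL(href).searchParams;
  return {
    lat: Number(p.get('lat')),
    lng: Number(p.get('lng')),
    zoom: Number(p.get('zoom')),
    query: p.get('q'),
  };
}

const shared = stateToUrl('https://maps.example.com/', {
  lat: 40.758, lng: -73.985, zoom: 15, query: 'restaurants',
});
console.log(urlToState(shared).zoom); // 15
```

Nothing in that URL names a resource; it names what the application was doing, which is exactly the abuse being described.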
How so? That 'same map' _is_ the resource being uniformly indicated.
> This link refers to "restaurants around Times Square", which isn't a logical entity
Why not? It's a logical entity, which is a collection viewed a certain way.
And that's why, compared to iOS, it has subpar apps for anything CPU-intensive, audio latency high enough to make it unsuitable for realtime audio/video apps, and you can see core Android engineers debating why it's slow and such. And for games, devs have to fight the GC and implement things manually, without allocations, to work around it.
So sure, it can run Twitter clients and all kinds of everyday apps and web front-end apps quite OK. But for the heavy CPU / realtime stuff that gets us forward to novel use cases, not so much.
NOTE: Moore's law is concerned with number of transistors on a chip. That number has climbed, it's just split into several cores.
Yes I know about WebWorkers, but WebWorkers are hideously inefficient & slow by design. The limitations with them are also absurd, it is not a substitute for threading. The world is better off forgetting that they exist - which it basically has.
We don't care about the pedantic interpretation of Moore's law.
We care about the misinterpretation, which was a corollary of having single-core chips: that every 18 months computers got roughly twice as fast.
This -- and this was the important attribute of Moore's law -- has stopped.
I think Moore's law will work better on mobiles, because they are retreading known ground and they are much further from reaching real physical barriers than regular computers.
Most people quoting Moore's law mean the misinterpretation, not the original statement. That the misconception is more popular than the original speaks for that.
I'm reminded of the native vs web debates on the desktop. Everything the desktop guys said was true, with whole categories of apps essentially off-limits to web developers, and the remainder clearly and obviously inferior to their native counterparts. That didn't change, native apps still rule supreme in capability and usability. And yet the average person is using web apps the majority of the time (although that opinion admittedly depends on what you understand as average). The reason why that is translates to mobile. Everything this article says is true, and still mobile apps will be mostly web-based. There is no conflict between those facts.
Those web apps most people use are trivial compared to the stuff we do on Desktop and Mobile. And the main reason people use them "the majority of time" is because they are huge time-sinks.
E.g., social apps like FB and Twitter, which translate to posting and reading text and images.
Web mail, which translates to working with lists of text articles.
Nothing much CPU intensive, in the way desktop/mobile apps are.
As for the CPU-intensive stuff, it's really still a novelty on the web. Nobody (== few people; less than 1% in the numbers I've seen) uses Google Docs (and that's not even a full-featured office suite anyway -- not to mention that the spreadsheet, for example, takes a lot of time even on my 2010 iMac/Chrome to apply things like conditional formatting). Even fewer people use something like online image editors and such.
The only thing people really use on the web, and is CPU intensive are web games. And even those are more like 1995 quality desktop games.
All that said, I really appreciate the article and the discussion it's sparked.
There are a fair number of good points here, particularly about ARM's potential, but he's very dismissive of counterpoints and seems to believe having lots of quotes is a substitute for a good argument.
The section about garbage collection is especially unconvincing and, again, dismissive of counterpoints. He never really addresses Android or Windows Phone (which, by the way, he repeatedly and mistakenly calls Windows Mobile -- right in line with the article's extreme iOS focus). He mentions them, but then seems to think it's enough to offer a few quotes showing that, under some extreme circumstances -- mostly games -- the GC can be a problem on those platforms. But which is it: are all Java and C# Android/Windows Phone apps unacceptably slow, or are some types of apps just not really doable in those GC languages?
This is important because a large part of his argument centers on GCs not working on mobile.
I'd conclude that the article is overly defensive, to the point that the author repeatedly poisons the well against anyone who might potentially disagree.
> we need to have conversations that at least appear to have a plausible basis in facts of some kind–benchmarks, journals, quotes from compiler authors, whatever.
This is where I think the article falls apart. Appearing to be based on facts is what this article is about, but appearance just gets you talked about. It doesn't really move the conversation forward at all.
He didn't really say either; he pointed out (correctly) that for certain applications in a memory managed language, you'll want to essentially avoid using the GC, and you'll certainly have to be very cautious of it. The extreme case was old J2ME games, which usually worked by allocating all the memory they'd use as a large array at startup, and doing manual memory management thereafter.
In general, you can do just about anything in a memory managed language, but in some cases, especially games, it may actually end up a lot more work than in a non-managed language.
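The J2ME pattern described above (allocate everything up front, recycle thereafter) looks roughly like this in JavaScript. A minimal sketch only; `BulletPool` and its fields are illustrative names, not any real engine's API:

```javascript
// Object pool: preallocate at startup, then acquire/release instead of
// creating garbage the collector would have to chase mid-frame.
class BulletPool {
  constructor(size) {
    this.free = [];
    for (let i = 0; i < size; i++) {
      this.free.push({ x: 0, y: 0, active: false }); // all allocation happens here
    }
  }
  acquire(x, y) {
    const b = this.free.pop();   // no allocation on the hot path
    if (!b) return null;         // pool exhausted: degrade, don't allocate
    b.x = x; b.y = y; b.active = true;
    return b;
  }
  release(b) {
    b.active = false;
    this.free.push(b);           // return to the pool instead of dropping for GC
  }
}

const pool = new BulletPool(64);
const b = pool.acquire(10, 20);
pool.release(b);
```

The price is exactly what the parent says: you're doing manual memory management inside a managed language, with none of the tooling a non-managed language would give you for it.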
1. This is about JS speed. How important is JS speed in mobile apps, compared to rendering for example? That's the crucial question here, and I don't see any numbers on it. Rendering, after all, should be the same speed as a native app.
edit: As an example of a type of code that can run far faster today than before, see Box2D, which is a realistic codebase since many mobile games use it: http://j15r.com/blog/2013/07/05/Box2d_Addendum
Web apps are slower than native, that's undeniable, but they're only really slow when you port bad desktop practices ($("#home").animate() and the like) over to mobile.
Your video example is interesting, but for a Cocoa developer they'd just create a video player object and give it a file to play. Cocoa doesn't make any guarantee it will be hardware accelerated (neither does the HTML5 spec for that matter), but you can be assured that if Cocoa says it can play a video it will be able to play it properly; you, as the person using the framework, never have to worry about any implementation details.
I wasn't speaking of playing videos to the screen. I was referring to the hardware in the device that renders pixels to the display, specifically the hardware accelerated part. That's the PC gamer side of me referring to that hardware as video cards or video hardware.
This has lots of information on jQuery and performance choices:
Some ways of going about things are more verbose, but the difference in performance isn't negligible. There will always be a balance between verbosity and performance.
Granted this page specifically pertains to jQuery, but most libraries expose ways for developers to start going down the long road of performance tuning.
Is that a bad practice just because that kind of code is optimized for desktop use or is it just bad practice in general?
Also, a broader topic I'm wondering about: JS and HTML seem pretty clumsy. I find it hard to believe that they're really how the majority of web pages run. Is there really no alternative? Or is it just the case that these tools are "good enough" and what really creates problems is bad code(rs)?
The thing is that the JS libs and practices used for desktop generally aren't mobile-friendly so even though everything works across all the platforms, it's not necessarily usable.
1) Are your visual flourishes vital to the quality of your product?
2) How concerned are you with baseline performance?
3) How big is your application? Do you need disparate components to be reusable, or do you need a philosophy or underlying framework to guide development?
4) Is your application going to be used on mobile?
The answer to 1 dictates how you should approach things like animating elements. Much of the time, the animation is really just a visual flourish, and can be handled with graceful degradation through CSS. Plus, you can learn a few tricks using just CSS that don't trigger the unnecessary redraws that might happen using JS.
The answer to 2 dictates whether or not you should adopt an existing framework or library vs. rolling your own. Libraries have gotten better about making themselves available piecemeal, but if you know how to write quick code, you'll often find almost everything is not quite what you absolutely need. See Canvas game dev.
Question 3 will determine what sort of framework you'll want: something that is extendable and object-oriented, vs. something that has more self-contained reusable components. The drawback of the former is that classing in JS brings in a considerable performance hit, whereas reusable components make for much less maintainable code.
Question 4 will play a role in all the aforementioned questions, namely where matters of performance and DOM redraws are concerned, as that's where mobile tends to fall flat when it comes to web apps.
Mobile web apps are slow because they're secure, precisely for the reason that they cannot examine or manipulate underlying system memory.
Most browser vulnerabilities come from manipulating the browser's underlying C++ objects, causing leaks & buffer overflows. E.g., the WebRTC bug in Chrome which provides a full sandbox jailbreak, or the TIFF/PDF exploit in MobileSafari from iOS 3 (jailbreak.me).
Secure doesn't imply slow. Not even sandboxed implies slow. Mobile web apps are slow because their runtime has more layers of functionality and is more complex -- therefore, it requires more processing than a native application.
This simply isn't true. Native Android applications also can't manipulate memory; how are web applications more secure than those? And if they aren't, why are they slower?
I believe the idea he was getting at was "you need to be able to tell how much memory is available to your (web) application, how much of that memory you've already used, and you need to be able to accurately predict how much memory will be allocated in the course of performing an action, in order to achieve acceptable performance on mobile," not "you need to be able to directly bang the address space." You can certainly have those features in a safe language.
That said, the article was excellent. Props to the author.
HN has this problem. If we use more than a quarter or a third of available RAM, GC destroys site performance. Switching to faster RAM helps, but adding more doesn't.
You're very correct in that increasing heap size too far is also a big problem, as running GC on 64GB of ram can take seconds - freezing up everything else while it runs. There are a number of guides on tuning the JVM on servers to avoid this. The new JVM GC algorithms are also very good at this situation by default, imo.
Finally, on the issue of RAM speed: phones generally have a fixed amount of space dedicated to RAM - the 2GB chip on my S4 is the same physical size as the 512MB chip in an older iPhone. This generally means that it is just as fast to access the 2GB of RAM on an S4 as the 512MB on older devices. So at least on mobile, added RAM capacity doesn't come at the cost of RAM speed. I can't seem to find the benchmarks I saw earlier on this; can anyone confirm or deny with numbers?
> the 2GB chip on my S4 is the same as a 512MB chip on an older iPhone. This generally means that it is just as fast to access the 2GB of ram on an S4 compared to the 512MB on older devices
Not necessarily. Your S4 has a memory bandwidth of either 8.5GB/sec or 12.8GB/sec (depending on if it's the Exynos or Snapdragon model). An iPhone 4S has memory bandwidth of 6.4GB/sec. So, you can read your whole memory in 230ms or 158ms (in theory; other limits will show up). The 4S can read its 512MB memory in 80ms.
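Those figures are straightforward arithmetic (time = size / bandwidth, theoretical best case). A quick check; the small differences from the quoted 230/158 ms values come from rounding choices:

```javascript
// Time to stream the whole RAM once at peak bandwidth, in milliseconds.
const msToReadAll = (sizeGB, bandwidthGBps) => (sizeGB / bandwidthGBps) * 1000;

console.log(msToReadAll(2, 8.5).toFixed(0));      // 235 ms (S4, Exynos)
console.log(msToReadAll(2, 12.8).toFixed(0));     // 156 ms (S4, Snapdragon)
console.log(msToReadAll(0.512, 6.4).toFixed(0));  // 80 ms (iPhone 4S, 512MB)
```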
What kind of GC does HN/ARC use?
That's not generally true. GC performance decreases with the number of objects that have to be collected. More objects (and real-life experience shows that as soon as there is more of a finite resource -- memory in this case -- users' demands will quickly lead to it being depleted again) actually means worse performance, not better.
GC also depends heavily on the memory's speed, and on the speed of the CPU-memory interface. If those remain constant (and, again, real-life experience shows that, while they don't, they tend to improve more slowly than CPUs do), GC performance hits will actually become heavier with increased memory.
This is particularly important if you consider that you can't actually force GC in most web environments. It's up to the system to decide when it starts cleaning. If that's done on a worst-case basis (i.e., postponing collection until memory use becomes, or looks to be on track to become, dangerous), increasing the amount of memory won't help for memory-intensive applications.
Yes, but what the user will see is that, instead of the application occasionally "feeling funny" every five minutes or so, the application will visibly lock up every ten minutes.
It's a hard lesson that Java learned a long time ago: it's OK to be only reasonably fast all the time, but people will immediately begin yelling if you're fast most of the time and then suddenly get really sluggish, even if just for a few seconds.
- doing everything in the main thread
- not caring to learn how to use Swing properly
- not even taking the time to switch to the native look and feel
- not caring about users and providing a rich UI experience
(I know a few people who still prefer the old-fashioned mainframe terminal apps, if available. If all you have is a few colors and line-drawing characters, even programmers have a hard time messing up simple forms. Not saying that we/they don't often succeed despite this difficulty...)
The extra hardware eats the battery like crazy. This is all anecdotal, but it's my impression after switching from an iPhone 4 to a Nexus.
So, now I'm using Firefox, and Firefox running Gmail + my work is about 700MB, or >50% reduction in memory usage over Chrome.
Additionally, I'm using Spotify desktop, which consumes about 80MB of RAM while streaming, over the Google Music webapp, which takes 200MB+.
RAM may be improving quickly for new generations of mobile devices, but many businesses still purchase the minimum specs for their employees' machines, and this sad little 3GB laptop is not even a few years old yet.
And consider who actually might be asking themselves this very question, native or webapp: the very same social/productivity/novelty websites and services who are looking at how best to present their product on a mobile platform. And I'd wager a guess that sheer performance often isn't the deciding factor here; it's mostly about the native look. (And even that seems to be getting less important, although I wonder whether iOS 7 could rekindle it.)
And that whole talk about GCs sounds awfully familiar. Didn't we have the very same conversation in the mid-90s when Java came out?
Also, there's the good old pattern of alternating hard and soft layers. Hybrid webapps (with some native components to speed up and slim down things) could easily bridge the gap until browser standards/APIs/hardware catch up. It's really not an all-or-nothing question, especially considering how many Android/iOS apps are basically WebViews with a tiny veneer of native buttons, preferences, and background storage.
Not that I like the web platform all that well, but let's be serious about the "depth" of web apps...
He addresses that in the very first paragraphs: if your mobile app is basically a "web page with some buttons", then you are OK.
But the apps that take us forward are more CPU-bound than BS clients and novelty apps. The spreadsheet. The word processor. The image editor. The video editor. Realtime stuff. Voice processing. Etc. We have those on the desktop and we want them on mobile.
>And that whole talk about GCs sounds awfully familiar. Didn't we have the very same conversation in the mid-90s when Java came out?
Yes. And notice how Java never got anywhere in the desktop space? How Sun failed to create a web browser in Java because the thing was dog slow? And how, even today, users curse Eclipse for its long GC pauses?
And that the Java (well, Dalvik) GC is behind many of the things that plague Android devs in the mobile space (a lot of examples of which are in the article)?
Or we can argue the other way around. Maybe we only have social, productivity and novelties over and over again because the limitations make those feasible?
We also have lots of social apps because, after all, most mobile devices are communication devices, so this comes naturally. Never mind that even when you consider full-fledged desktop apps and the backends of lots of web sites, there's not that much that is really CPU-bound (although it's getting back there with ubiquitous data mining, which is one of the main reasons we're seeing a resurgence of native compilers).
Two device sectors where this might be different are big tablets and low-end devices, pretty much both extremes of the mobile sector. Tablets used as full PC replacements would probably be the main users of the CPU-bound spreadsheet the author mentioned, as well as other almost-desktop apps that could require more resources, esp. considering multi-tasking. And they would be on for long stretches of time, so even if the quad-core processor could cope, it would be nice if it could cope for twice as long.
And then we have low-end devices. When the Palm came out, all the people who learned 68k assembly on their Amigas/Ataris were in demand again, as there was little CPU and memory to spare. On the other hand currently the environment mostly targeted at those sectors is the one with the least support/demand for native: FirefoxOS. Which seems a bit weird at first, as they certainly could make use of every MIPS they can scrounge. But that's economics for you: If it's mostly cheap phones and (most likely) users with little income to spare on frivolous apps, why would you target it? On the other hand, you're probably writing a webapp for other devices, and FirefoxOS users can piggyback on that. Sometimes you rather have a slow app than none at all.
Yes, absolutely. The objections made by OP are not very different from those made in the 90s.
Some of the core objections made then are still true today, they're just hidden from users because of advancements in both hardware and GC technology itself. In the case of mobile, however, you're much more hardware constrained, at least for now. This will likely change sooner rather than later, but for now that means that for hardware-constrained devices well designed apps written in non-GC languages will almost always outperform similarly designed apps written in GC languages.
For the record: I like Java -- yes, even today -- and the JVM, currently use it for my day job. For webapps it's moderately neato. For mobile, though... Not as much.
Most likely, if Android used natively compiled Java, performance could be improved.
Microsoft went this route with Windows Phone 8, as they replaced the .NET JIT with a proper native compiler when targeting Windows Phone.
Yes. Fortunately, all of those problems just went away, and now we use Java for writing GUI desktop apps all the time! Why, imagine how terrible it would be if Java GUI apps were slow and awful and tended to mysteriously pause when there was a nasty GC cycle!
I'm getting 11ms used up on the CPU and 29ms used up on the GPU: http://imgur.com/XQdGKHw,swTBC9T#1
On GC, I'm not a fan, I like managing my memory. However, UnrealEngine, which is super-awesome, has a built in garbage collector. In my experience with it, whenever someone tries to blame its GC for performance/memory abuse, to my dissatisfaction, the problem usually lies elsewhere.
There’s a selection bias here but I struggle to come up with a popular non-GC, compiled language that operates on the same sort of level as Java or Objective-C.
Alas, I went to college in a time when they still taught Pascal, but then phased it out for C, which was more widely available on multiple platforms by the late 80s.
The use of Sunspider performance to talk about JS application performance is unfortunate, and undermines a significant part of the argument made in the article. Sunspider is a great benchmark to use to talk about web _page_ performance.
This is how JITs work:
1. Code starts off running in a slow, unoptimized mode (an interpreter or baseline compiler).
2. In this execution mode, a lot of overhead is spent collecting type information so that the hot code can be optimized.
3. After some period of time, the hot code is compiled with an optimizing compiler, which is a very expensive task.
4. The compiled code starts running.
For example, in Mozilla's JS engine, these are the actual numbers associated with optimization:
1. A function starts off running in the interpreter.
2. After a small number of iterations, the function gets compiled with the Baseline JIT.
3. After 1000 iterations (most of that running within Baseline jitcode), the function gets compiled with our optimizing compiler, IonMonkey.
How does this relate to Sunspider?
Well, if you actually run Sunspider, you'll notice that the entire benchmark takes about 0.2 seconds or less on a modern machine (about 1.5 seconds on a modern phone). It's an extremely tiny benchmark (in terms of computational heaviness). Comparatively, the Octane suite, while containing fewer benchmarks, actually takes 100 times or so longer to execute than Sunspider.
How many apps do you use that run for less than 1.5 seconds?
This is the problem with using Sunspider to talk about app performance. Sunspider doesn't measure JIT performance very well at all. When JIT engines execute Sunspider, they spend a relatively large amount of that time just collecting the necessary information before they can start optimizing. Then, soon after we spend all that effort optimizing the code, the benchmark ends, without really taking advantage of all that fast code.
Very often, JIT optimizations that speed up computationally heavy benchmarks actually regress Sunspider. This is because the optimizations add some extra weight to the initial analysis that happens before optimized code is generated. This hurts Sunspider because the initial analysis section is much more heavily weighted than later optimized performance.
Sunspider's runtime characteristics make it a good benchmark for measuring web PAGE performance. Web pages are much more likely to contain code that executes for short periods of time (a few ms), and then stops. But apps have much different runtime characteristics. Apps don't execute code for a few milliseconds and then stop. They run for dozens of seconds, minimally, and often minutes at a time. A JIT engine has much more opportunity to generate good optimized code, and have that optimized code taken advantage of over more of the execution time.
That's not true at all.
On benchmarks like Kraken and Octane, which actually take multiple seconds to run instead of just milliseconds, both V8 and SpiderMonkey have improved significantly even just over the past year.
Speaking for SpiderMonkey (only because that's the engine I'm more familiar with), we've bumped Kraken by 20-25%, and we've bumped Octane by around 50%.
On real-world app code like pdfjs and gbemu, we've improved performance by 2-3x over the last year.
JS has gotten significantly faster over the last couple of years, and we're working quite hard on making it faster yet.
Except it's not.
JS speed as compared to native depends heavily on the nature of the code being executed, how type-stable it is, how polymorphic it is, what its allocation behaviour is like, and a number of other factors.
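For a concrete picture of "type-stable", here's a sketch. In the stable case every object reaching the call site has the same shape, so the engine can specialize property access to one hidden class; mixed shapes force slower polymorphic or megamorphic lookups. All names here are mine, for illustration:

```javascript
// Sums 2D vector lengths. How fast this runs depends on how type-stable
// its input is, not just on what the code says.
function lengthSum(points) {
  let total = 0;
  for (const p of points) total += Math.hypot(p.x, p.y);
  return total;
}

// Type-stable: every element has the same {x, y} shape with number fields.
const stable = [{ x: 3, y: 4 }, { x: 6, y: 8 }];
console.log(lengthSum(stable)); // 15

// Polymorphic: same function, but p now sees several object shapes
// (extra fields, different property order), defeating specialization.
const mixed = [{ x: 3, y: 4 }, { label: 'b', x: 6, y: 8 }];
console.log(lengthSum(mixed)); // 15, but via a less optimizable path
```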
At worst it merely shows a misunderstanding of Sunspider's utility. And that's like 1/10 of the article -- and he even mentions some objections himself.
That means it is still a "web app", but it can perform reasonably.
I would break it down into three effects:
1) startup latency
2) excessive repaints/not leveraging compositor/jank
3) 'uncanny valley' issues where web emulations of native widgets don't match 100% with the feel of system widgets.
The basic document-oriented part of the mobile web is a lot slower than something like UIKit/TextKit on iOS. So the time to launch even a basic form to display some data, or a few paragraphs, and to scroll it smoothly, with all rendering effects and without layout or repaint jank, is just more work to optimize for on the mobile web. Therefore, many developers don't do it, or aren't aware of how to do it, leading to the end-user perception that "mobile web is slow".
I'd summarize my point of view as: NaCl/PNaCL won't solve our problems (although it is still a good idea to do them)
What if we had a mobile OS and mobile apps that were 100% Java-free and only ran compiled code (i.e., code compiled to ARM assembly)?
Under the Steve Jobs theory of computers, everything should be "fast enough" just as it is, because Moore's law and the increasing computing power of hardware should allow us to focus entirely on making everything as user-friendly (including developer-friendly) as possible without having regard for software efficiency. Watch, for example, the 1986 documentary "Entrepreneurs" to see Jobs state this while the cameras follow him around during a NeXT retreat in Pebble Beach.
But I've also seen Hertzfeld say that Jobs, for example, pushed hard for him to try to reduce boot times on the Mac. This of course is firmly in the realm of software (bootloader and kernel), so it would seem a conflict of ideas. Software efficiency does matter sometimes, even when the focus is user-friendliness and user experience.
Jobs also said things back in the day about "the line between hardware and software" becoming blurred.
I find this difficult to agree with. But then, I'm not called a genius like he was.
There are no doubt competent C and ARM assembly programmers out there. I've seen them doing embedded work and making games. Alas, I do not think they are by and large involved with working on the popular mobile OS's. Instead the focus seems to be on people who prefer to work with virtual machines and scripting languages.
First, it's a bit pedantic, but JIT VMs still compile down to ARM assembly on an ARM device. They basically optimize away the overhead of interpreting their target language.
That said, what you're getting at is how Palm and Windows Mobile did it (for the most part, there is/was .Net for Windows CE) and how iOS does it now. Ubuntu Mobile and Win Phone 8 offer native toolsets in addition to GC/managed runtimes.
I don't think it's an assumption at all that mobile devices have to use Java. J2ME didn't exactly take over the world, and that had the advantage of hardware acceleration in the form of ARM Jazelle.
What has happened is as hardware gets faster/cheaper, Java, .Net, JS, etc become a better fit for development on mobile devices. On the Desktop we have stupidly powerful machines that can generally brute force their way through the inherent overhead of interpreted/JITed/VMed languages. On mobile however, we're not there yet.
I might better understand your perspective if you could elaborate on the details.
Why do we have to write resource-hogging software _now_ for mobile if the devices are not yet ready to "brute force" their way through it? And if we wrote small, fast software _now_, what would be the downside of this as devices get more powerful in the future?
Users complaining about lightning-fast programs that lack the whiz-bang factor that would have been added if developers had used interpreted languages? I can't imagine it.
When you cross the line into realtime, many things change. Anything non-deterministic becomes a source of problems, and GC is one of the biggest. Under realtime constraints, manual memory management is easier than dealing with a non-deterministic black box.
So even on the desktop, a naive GC is an inferior architecture for a realtime program, by the very definition of realtime. There's no room for a non-deterministic, deferred batch operation in 60fps graphics. A specially designed GC might overcome this, but I haven't seen any actual implementation on the market.
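The 60fps arithmetic behind this is simple: the frame budget is 1/60 s, about 16.7 ms, and any collection pause comes straight out of it. A minimal sketch (Python here only as a stand-in for any collected runtime; heap size and timings are illustrative) that times a full collection against that budget:

```python
import gc
import time

FRAME_BUDGET_S = 1.0 / 60  # ~16.7 ms per frame at 60 fps

def allocate_garbage(n=100_000):
    """Create short-lived objects, including reference cycles,
    so a full collection has real work to do."""
    junk = []
    for _ in range(n):
        a, b = [], []
        a.append(b)
        b.append(a)  # cycle: only the cycle collector can free these
        junk.append(a)
    return junk

garbage = allocate_garbage()
del garbage  # cycles remain unreachable until a collection runs

start = time.perf_counter()
gc.collect()  # a stop-the-world pass over the heap
pause = time.perf_counter() - start

# Whether the pause fits the budget depends on heap size and hardware --
# which is exactly the non-determinism being complained about.
print(f"collection: {pause * 1000:.2f} ms, "
      f"frame budget: {FRAME_BUDGET_S * 1000:.2f} ms")
```

The point is not the absolute numbers but that the pause is unbounded from the application's perspective: it scales with heap state, not with the frame's work.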
Unlike the old days, people want more dynamic, more smoothly working apps. Like games, all apps will eventually become fully realtime. This means you need a real-time graphics approach rather than a static one.
If you're lucky and your users don't care about that stuff, then you're safe. But Apple is a company that needs to treat a single dropped frame as a failure. That's why they don't use GC, and why they finally deprecated it.
As a result, they have chosen to consolidate the mindshare onto one memory management technology rather than dilute the talent pool with two competing proposals. Given that iOS is much, much more popular than OS X, they have chosen to standardize on the technology that works best for the popular platform.
Oh, and it lets me do most of what I have been doing on Linux the last 18 years. Not quite everything, but pretty close.
No matter how much you tweak and tune a GC, there are still going to be times when it destroys "locality" in a hierarchical memory system and causes some kind of pause. For a small form entry program, this is probably negligible. As the size of the program and/or its data grows, or the time constraints grow tighter, these pauses will become more and more unacceptable.
Most of my work the last 10 years has been in Java, but the JVM is not the one true path to enlightenment. TMTOWTDI :-)
A prominent theme at WWDC this year was all of the changes they've been doing to improve power management. That garbage collection thread always running in the background is going to interfere with your efforts to save battery. A predictable strategy based on automatic reference counting, on the other hand...
ARC replaces the manual retain/release calls that used to be required: the compiler inserts them for you, so writing code most of the time feels just like using a GC. The additional thought required compared to garbage collection is pretty small, and the determinism alone is quite valuable while debugging, not to mention the improved performance.
Except when interfacing with non-ARC code.
Some garbage collection algorithms can cope with reference cycles, which ARC can't, so you need to make sure the loop is broken by using a weak reference.
As with Java, you still need to consider object lifetimes to some extent, particularly when registering for notifications or setting callbacks.
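The cycle caveat is easy to demonstrate outside Objective-C: CPython's baseline reference counting has the same blind spot as ARC, and `weakref` plays the role of a `weak` property. A sketch (the `Parent`/`Child` names are illustrative; it relies on CPython's immediate refcount-based deallocation):

```python
import weakref

class Parent:
    def __init__(self):
        self.child = None  # strong reference down the hierarchy

class Child:
    def __init__(self, parent):
        # A strong back-reference here would form a cycle that pure
        # reference counting (like ARC) can never reclaim.
        self._parent = weakref.ref(parent)

    @property
    def parent(self):
        return self._parent()  # None once the parent has been freed

p = Parent()
p.child = Child(p)
c = p.child

assert c.parent is p     # the weak ref resolves while the parent lives
del p                    # drop the only strong reference to the parent
assert c.parent is None  # no cycle, so the parent was reclaimed at once
```

Had `Child` kept a strong `self._parent = parent`, deleting `p` would have freed nothing under pure refcounting: the pair would keep each other alive, which is exactly the leak ARC forces you to break by hand.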
That being said, I think there's a big elephant in the room: the additional layers of complexity. Much of the younger generation in the web development community seems very far removed from the principles of computer engineering, to the point that there's a bunch of things you can hint at simply by common sense, yet they are opaque to them. For instance:
1. Performance hits aren't only due to slower "raw" data processing, they can also occur due to data representation -- think about cache hits, for instance. Your average mobile application's processing is more IO-bound than processor-bound. Unless you can guarantee that your data is being efficiently represented, you get a hit from that.
2. Since the applications are sandboxed, the amount of memory your runtime occupies is directly subtracted from the amount of memory your application gets. The more layers your runtime piles up that cannot be shared between applications -- the more memory you consume without actually offering any functionality.
3. Every additional layer needs some processing time. Writing into a buffer that gets efficiently transferred into video memory is always going to be faster than JIT-ing instructions that write something into a buffer that gets efficiently transferred into video memory -- at the very least the first time around.
I'm not sure if someone is actually trying to say or (futilely) prove that a web application is going to be as fast as a native one (assuming, of course, the native environment is worth a fsck) -- that's something people would be able to predict easily. This isn't rocket science, as long as you leave the jQuery ivory tower every once in a while.
The question of whether it's fast enough, on the other hand, is different, but I think it misses the wider picture of where the "fast" tree sits in the "efficiency" forest.
I honestly consider web applications that don't actually depend on widely, remotely accessible data for their function to be an inferior technical solution; a music player that is written as a web application but doesn't stream data from the Internet and doesn't even issue a single HTTP request to a remote server is, IMO, a badly engineered system, due to the additional technical complexity it involves (even if some of it is hidden). That being said, if I had to write a mobile application that should work on more than one platform (or if that platform were Android...), I'd do it as a web application for other reasons:
1. The fragmentation of mobile platforms is incredible. One can barely handle the fragmentation of the Android platform alone, where at least all you need to do is design a handful of UI versions of your app. Due to the closed nature of most of the mobile platforms, web applications are really the only (if inferior, shaggy and poorly-performing) way to write (mostly) portable applications instead of as many applications as platforms.
2. Some of the platforms are themselves economically intractable for smaller teams. Android is a prime example of that; even if we skip over the overengineering of the native API, the documentation is simply useless. The odds of having a hundred monkeys produce something on the same level of coherence and legibility as the Android API documentation by typing random letters on a hundred keyboards are so high, Google should consider hiring their local zoo for tech writing consultancy. When you have deadlines to meet and limited funding to use (read: you're outside your parents' basement) for a mobile project, you can't always afford that kind of stuff.
Technically superior, even when measurable in performance, isn't always "superior" in real life. Sure, it's sad that the best you can get for mobile applications is a programming environment with bare-bones debugging capabilities, almost no profiling and static analysis capabilities to speak of, and limited optimization options at best -- but I do hope it's just the difficult beginning of an otherwise productive application platform.
I also don't think the fight is ever going to be resolved. Practical experience shows users' performance demands are endless: we're nearly thirty years from the first Macintosh, and yet modern computers boot slower and applications load slower than they did then. If thirty years haven't filled the users' thirst for good looks and performance, chances are another thirty years won't, either, so thirty years from now we'll still have people frustrated that web applications (or whatever cancerous growth we'll have then) aren't fast enough.
Edit: there's also another point that I wanted to make, but I forgot about it while ranting the things above. It's related to this:
> Whether or not this works out kind of hinges on your faith in Moore’s Law in the face of trying to power a chip on a 3-ounce battery. I am not a hardware engineer, but I once worked for a major semiconductor company, and the people there tell me that these days performance is mostly a function of your process (e.g., the thing they measure in “nanometers”). The iPhone 5′s impressive performance is due in no small part to a process shrink from 45nm to 32nm — a reduction of about a third. But to do it again, Apple would have to shrink to a 22nm process.
The point the author is trying to make is valid IMHO -- Intel's processors do benefit from having a superior fabrication technology, but just like in the case above, "raw" performance is really just one of the trees in the "performance" view.
First off, a really advanced, difficult manufacturing process isn't nice when you want massive production. ARM isn't even on 28 nm yet, which means that the production lines are cheaper; the process is also more reliable, and being able to keep the same manufacturing line in production for a longer time also means the whole thing costs less on a long term. It also works better when you have an economic model like ARM's -- you license your chips, rather than producing them. When you have a bunch of vendors competing for the same implementation of a standard, with many of them being able to afford the production tools they need, chances are you're going to see lower cost just from the competition effect. There's also the matter of power consumption, which isn't negligible at all, especially due to batteries being such nasty, difficult beasts (unfortunately, we suck at storing electrical energy).
Overall, I think that within a year or two, once the amount of money being shoved into mobile systems reaches its peak, we'll see a fairly slower rate of performance improvement on mobile platforms, at least in terms of raw processing power. Improvements will start coming from other areas.
I would actually reverse those two examples. A 10 second to 50 second jump probably wouldn't affect most user-facing programs that much, since it's either being done in the background or the user is staring at a progress bar. The UI of the application is probably (hopefully) already designed around the fact that there is a long-running process. But a 10ms to 50ms increase can be a big deal, since it's probably something that is intended to appear "instant" to the user.
I think it's more important as an industry that we reckon with how to effectively manage large teams developing huge applications in JS/HTML/CSS. I suppose that Facebook's development suffered from too many cooks in the kitchen, cruft building up, race conditions, etc.
For the curious, I have a simple benchmark program that focuses on string manipulation (and deliberately creates many temp string objects, like a real app would) in various languages. If anybody wants to download and try it on the latest version of their favorite languages on the host du jour, go for it!
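I can't vouch for that exact benchmark, but the shape of such a test is easy to sketch: build a large string two ways, one of which churns out a short-lived temporary per step -- the allocation pattern any collected or refcounted runtime has to keep up with. A hypothetical Python version (sizes are arbitrary; note that CPython sometimes optimizes `+=` on strings in place, so the gap varies by runtime):

```python
import time

N = 20_000

def concat_with_temps(n):
    """Naive concatenation: each += may build a brand-new string and
    discard the old one -- lots of short-lived garbage, like a real app."""
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_with_join(n):
    """Accumulate pieces and join once: far fewer temporaries."""
    return "".join(str(i) for i in range(n))

for fn in (concat_with_temps, concat_with_join):
    t0 = time.perf_counter()
    out = fn(N)
    dt = time.perf_counter() - t0
    print(f"{fn.__name__}: {dt * 1000:.1f} ms, {len(out)} chars")
```

Both produce identical output; only the allocation behavior differs, which is exactly what such a benchmark is meant to stress.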
They have plenty of informative advice on how to improve general performance and how to use tools WebKit provides to measure performance problems.
Some key takeaways:
Avoid using libraries (like jQuery). It will reduce memory consumption significantly.
Be careful how often you're invalidating style info and forcing recomputation
Avoid using scroll handlers to do layout (especially when you're likely to inadvertently invalidate styles at the same time)
Garbage collectors should be optional. I would really love to be able to handle my memory directly if possible, and while there are some things I can do related to object pooling, I lose control over that with closures and a more functional approach.
I guess to counter my own argument here I will say that for most resource intensive things like simulations (call them games if you will), an object-oriented approach does a better job of modeling than a functional one. Game loops, sprites, bullets flying around... go OOP... and then use an object pool and limit your use of closures. Is it a pain in the ass? You bet, but so is having to manage the heap on your own.
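On the object-pool point, a minimal free-list pool is only a few lines: `acquire` recycles a dead object instead of allocating, so steady-state play generates no garbage for the collector to chase. A sketch (the `Bullet` type and its fields are illustrative):

```python
class Bullet:
    __slots__ = ("x", "y", "dx", "dy", "alive")

    def reset(self, x, y, dx, dy):
        self.x, self.y, self.dx, self.dy = x, y, dx, dy
        self.alive = True
        return self

class BulletPool:
    """Free-list pool: dead bullets go back on the free list and are
    handed out again, so firing creates no new objects (and no GC work)
    once the pool has warmed up."""

    def __init__(self):
        self._free = []

    def acquire(self, x, y, dx, dy):
        obj = self._free.pop() if self._free else Bullet()
        return obj.reset(x, y, dx, dy)

    def release(self, bullet):
        bullet.alive = False
        self._free.append(bullet)

pool = BulletPool()
b1 = pool.acquire(0, 0, 1, 0)
pool.release(b1)
b2 = pool.acquire(5, 5, 0, 1)
assert b2 is b1  # same object recycled: no allocation, no garbage
```

The pain point from the comment shows up as soon as a closure captures a pooled object: the pool can't know about that reference, so releasing the object while the closure is live hands out aliased state.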
Now, as for the rest of the article, the author did a LOT to discredit himself by insisting that JS performance will not increase over the next few years... dude, come on, winning isn't as important as learning and discussing, and you're clearly just trying to WIN an argument.
This article is not very balanced. It is basically just saying that all software written for a VM is stupid. That the concept of Garbage Collection is an abomination.
The truth is that VMs are incredibly successful and aren't going anywhere. I mean, look at the rise of VPS! Now compiled C and even machine code aren't running as fast as they could be! Run for the hills!
Performance is not the be-all, end-all of computing. We're a social species, and the reason JS is so successful has more to do with the fact that it is hosted in a VM based on open standards than anything else.
So why have we moved to virtualization? Because it makes a LOT of sense. It's the same reason we invented compilers and higher-order languages in the first place... we're human. We are the ones writing and reading source code... the machines don't, they just blindly follow instructions. We choose to sacrifice speed for a more humanistic interaction with our machines, and that applies just as much to programmers as to the people who just use programs.
Disagree with him if you want to--I'd like to, because I like the JVM and the CLR, but he has the weight of evidence on his side--but can we not make stuff up?
Er, no it's not; it's pointing out a number of ways that it's problematic for GUI apps (and especially games) on highly resource-constrained mobile devices.
> The truth is that VMs are incredibly successful and aren't going anywhere. I mean, look at the rise of VPS!
He wasn't talking about that sort of VM.
Perhaps it comes from his iOS focus, but he's just wrong about the difficulty of transitioning the mobile software ecosystem to x86. Android is there today and has been for over a year (as the reviews of last year's x86 smartphones make abundantly clear). And, as we all know, Android is the majority of the devices, by a large margin.
It is true that iOS, Windows Phone and other platforms may have more trouble with an x86 transition, but so what? If Android makes a performance leap by moving to x86, other platforms will either find a way to keep up or they'll fall behind in the market. Either way, the mobile center-of-gravity will move towards x86-class CPU performance.
Also, of course, Atom isn't really all _that_ much faster than modern ARM.
Bay Trail tablets, due out for the holidays, easily beat the ARM side's upcoming champion (Snapdragon 800):
It looks to me like Intel is readier than you think. Moreover, from the perspective of a web app developer, it doesn't matter whether Intel, Qualcomm, Apple, Samsung or whoever actually wins a CPU performance war - so long as one is fought. That I think we can count on.
Yes, the Razr i lost (mostly modestly) on 4 out of 5 performance benchmarks, but...
1. It modestly won on power consumption (9 hours vs 8).
2. Quoting from the review: "Aside from the benchmark results outlined above, the Medfield entry offered a marginally faster response to most actions."
3. Drawing CPU vs CPU conclusions from several benchmarks is complicated by one of the other differences between the two phones - the GPU (which is mostly an orthogonal question).
Oh, and then there's that benchmark the RAZR i won, by almost a factor of 2: SunSpider. While SunSpider has its limitations, it's still the most relevant benchmark of the set to the discussion we're having right now. And this performance difference has real-world consequences (another quote): "The results remain largely unchanged, but after spending a week with the device, we'd like to add that the web browser still gives a superb performance."
Intel's problem in mobile has never been performance (except when self-inflicted and they're past that). As of last year, it isn't power consumption either. Today, Intel's last problem is modems (specifically LTE modems), and they're hard at work on that one too: http://newsroom.intel.com/community/intel_newsroom/blog/2013...
It isn't a lock, but it isn't a possibility to dismiss either.
The RAM, audio and flash subsystems consistently showed the lowest power consumption.
...RAM power can exceed CPU power in certain workloads, but in practical situations, CPU power overshadows RAM by a factor of two or more. [p.11, graph on p.5]