
This is actually a very interesting post, and the first one this well documented that I've seen in a while. I also agree with most of the author's points.

That being said, I think there's a big elephant in the room: the additional layers of complexity. Much of the younger generation in the web development community seems very far removed from the principles of computer engineering, to the point that things you could arrive at by common sense alone remain opaque to them. For instance:

1. Performance hits aren't only due to slower "raw" data processing; they can also come from data representation -- think of cache hits, for instance. Your average mobile application is more IO-bound than processor-bound, so unless you can guarantee that your data is efficiently represented, you take a hit from that (see the first sketch after this list).

2. Since the applications are sandboxed, the memory your runtime occupies is directly subtracted from the memory your application gets. The more layers your runtime piles up that cannot be shared between applications, the more memory you consume without actually offering any functionality.

3. Every additional layer needs some processing time. JIT-ing instructions that write something into a buffer that gets efficiently transferred into video memory is always going to be slower than just writing into that buffer directly -- at the very least the first time around (see the second sketch after this list).
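
To make the first point concrete, here's a toy C sketch (my own illustration, nothing from the article): both loops do identical arithmetic on identical data, and the only difference is traversal order. The row-wise loop walks consecutive addresses and keeps the cache warm; the column-wise one strides N doubles per access and misses constantly. On typical hardware the gap is several-fold:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 4096

    /* Sum the matrix row by row: consecutive addresses, cache-friendly. */
    static double sum_rows(const double *m) {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += m[i * N + j];
        return s;
    }

    /* Same sum, column by column: a stride of N doubles per access,
       so nearly every read is a cache miss. */
    static double sum_cols(const double *m) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += m[i * N + j];
        return s;
    }

    int main(void) {
        double *m = malloc(sizeof(double) * N * N);  /* ~128 MB */
        if (!m) return 1;
        for (long i = 0; i < (long)N * N; i++) m[i] = 1.0;

        clock_t t0 = clock();
        double a = sum_rows(m);
        clock_t t1 = clock();
        double b = sum_cols(m);
        clock_t t2 = clock();

        printf("rows: %f s (sum %f)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, a);
        printf("cols: %f s (sum %f)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, b);
        free(m);
        return 0;
    }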

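And for the third point, an equally toy sketch of what one extra layer costs: both loops fill the same buffer, but the second fetches and dispatches a (one-instruction) bytecode per write, which is roughly the tax an interpreted layer pays before any JIT warms up. Compile with optimizations off to see the dispatch cost clearly; a real JIT amortizes it after the first pass, which is exactly the "first time around" caveat:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define PIXELS (1 << 24)

    enum { OP_STORE = 1 };  /* a one-instruction toy "bytecode" */

    int main(void) {
        unsigned char *buf  = malloc(PIXELS);
        unsigned char *prog = malloc(PIXELS);
        if (!buf || !prog) return 1;
        memset(prog, OP_STORE, PIXELS);  /* one opcode per write */

        /* Layer 0: one native loop writes the buffer directly. */
        clock_t t0 = clock();
        for (int i = 0; i < PIXELS; i++)
            buf[i] = (unsigned char)i;
        clock_t t1 = clock();

        /* Layer 1: the same writes, but each one goes through an
           opcode fetch + dispatch, the way an interpreter would. */
        clock_t t2 = clock();
        for (int i = 0; i < PIXELS; i++) {
            switch (prog[i]) {
            case OP_STORE: buf[i] = (unsigned char)i; break;
            }
        }
        clock_t t3 = clock();

        printf("direct:      %f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("interpreted: %f s\n", (double)(t3 - t2) / CLOCKS_PER_SEC);
        free(prog);
        free(buf);
        return 0;
    }
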
I'm not sure anyone is actually trying to say, or (futilely) prove, that a web application is going to be as fast as a native one (assuming, of course, the native environment is worth a fsck) -- that it won't be is something anyone could predict easily. This isn't rocket science, as long as you leave the jQuery ivory tower every once in a while.

The question of whether it's fast enough, on the other hand, is a different one, but I think it misses the wider picture of where "fast" sits in the "efficiency" forest.

I honestly consider web applications that don't actually depend on remotely accessible data for their function to be an inferior technical solution; a music player written as a web application that doesn't stream data from the Internet and never issues a single HTTP request to a remote server is, IMO, a badly engineered system, due to the additional technical complexity it involves (even if some of it is hidden). That being said, if I had to write a mobile application that needed to work on more than one platform (or if that platform were Android...), I'd do it as a web application for other reasons:

1. The fragmentation of mobile platforms is incredible. One can barely handle the fragmentation of the Android platform alone, where at least all you need to do is design a handful of UI versions of your app. Due to the closed nature of most mobile platforms, web applications are really the only (if inferior, shaggy and poorly-performing) way to write (mostly) portable applications instead of one application per platform.

2. Some of the platforms are themselves economically intractable for smaller teams. Android is a prime example; even if we skip over the overengineering of the native API, the documentation is simply useless. The odds of a hundred monkeys typing random letters on a hundred keyboards producing something on the same level of coherence and legibility as the Android API documentation are so high that Google should consider hiring their local zoo for technical writing consultancy. When you have deadlines to meet and limited funding (read: you're outside your parents' basement) for a mobile project, you can't always afford that kind of thing.

Technical superiority, even when measurable in performance, isn't always "superior" in real life. Sure, it's sad that an environment with bare-bones debugging, almost no profiling or static analysis capabilities to speak of, and limited optimization options at best is the best you can get for mobile applications, but I do hope it's just the difficult beginning of an otherwise productive application platform.

I also don't think the fight is ever going to be resolved. Practical experience shows that users' performance demands are endless: we're nearly thirty years on from the first Macintosh, and yet modern computers boot slower and applications load slower than they did then. If thirty years haven't quenched users' thirst for good looks and performance, chances are another thirty won't either, so thirty years from now we'll still have people frustrated that web applications (or whatever cancerous growth we'll have then) aren't fast enough.

Edit: there's another point I wanted to make, but I forgot about it while ranting through the points above. It's related to this:

> Whether or not this works out kind of hinges on your faith in Moore's Law in the face of trying to power a chip on a 3-ounce battery. I am not a hardware engineer, but I once worked for a major semiconductor company, and the people there tell me that these days performance is mostly a function of your process (e.g., the thing they measure in "nanometers"). The iPhone 5's impressive performance is due in no small part to a process shrink from 45nm to 32nm -- a reduction of about a third. But to do it again, Apple would have to shrink to a 22nm process.

The point the author is trying to make is valid IMHO -- Intel's processors do benefit from superior fabrication technology, but just as above, "raw" performance is really just one of the trees in the "performance" forest.

First off, a really advanced, difficult manufacturing process isn't what you want for mass production. ARM isn't even on 28 nm yet, which means the production lines are cheaper and the process more mature and reliable; being able to keep the same manufacturing line running for a longer time also means the whole thing costs less in the long term. It also fits an economic model like ARM's -- licensing your chip designs rather than producing the chips yourself. When you have a bunch of vendors competing over implementations of the same architecture, with many of them able to afford the production tools they need, chances are you're going to see lower costs just from the competition effect. There's also the matter of power consumption, which isn't negligible at all, especially with batteries being such nasty, difficult beasts (unfortunately, we suck at storing electrical energy).

Overall, I think that within a year or two, once the amount of money being shoved into mobile systems reaches its peak, we'll see a noticeably slower rate of performance improvement on mobile platforms, at least in terms of raw processing power. Improvements will start coming from other areas.



