
>x86 has had to be fast because most of the applications you run on a daily basis are likely not really optimized, profiled, or threaded beyond a couple of compiler switches. The chips have to be fast because the code is so slow. With Android and iOS, the language, libraries, and sandboxing improve the underlying mechanisms to the point that most of the code that matters is optimized by Apple and Google, whereas the equivalent Microsoft Windows libraries are not as optimized and are in many cases so specialized that you get the look and feel of a WordPad-type app rather than what you are really after.

This is so wrong, I actually don't know where to begin.

1. It's true that most code isn't optimized for x86. But most code isn't optimized, period. Optimization is freaking hard. Android and iOS aren't necessarily better optimized than Windows. And Linux, especially RHEL, is screaming fast on the new Intel chips. Windows isn't that terrible, either.

2. Sandboxing actually hurts performance, because it requires an additional layer between the OS and userland to check that the code the user is executing stays within its permitted bounds.

3. None of this actually matters for chip architecture, since #1 is true of code in general, and #2 doesn't have any special architecture-based support.

4. x86 isn't just Windows. It's Linux, too.




Let me clarify.

When you program for Android/iOS, how much of the logic you write comes from optimized libraries and how much is your own craft? Now look at the entire Marketplace / App Store and figure out how many of those apps rely entirely on optimized Android/iOS libraries.

While it may be true that some wander off the beaten path and write their apps directly in OpenGL ES, and maybe even in C/C++, most rely on the frameworks and libraries already built in (which are indeed optimized).
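To make that split concrete, here is a minimal Swift sketch of the two paths, assuming an Apple platform where the Accelerate framework is available; the array sizes and values are arbitrary placeholders:

    import Accelerate
    import Foundation

    // Illustrative data only; sizes and values are arbitrary.
    let n = 1_000_000
    let a = [Float](repeating: 1.5, count: n)
    let b = [Float](repeating: 2.0, count: n)

    // "Your own craft": a straightforward scalar loop.
    func naiveDot(_ x: [Float], _ y: [Float]) -> Float {
        var sum: Float = 0
        for i in 0..<x.count {
            sum += x[i] * y[i]
        }
        return sum
    }

    // The built-in path: Accelerate's vDSP dot product, which Apple
    // tunes for the SIMD units of each CPU it ships.
    func builtinDot(_ x: [Float], _ y: [Float]) -> Float {
        var result: Float = 0
        vDSP_dotpr(x, 1, y, 1, &result, vDSP_Length(x.count))
        return result
    }

    var start = Date()
    let r1 = naiveDot(a, b)
    print("naive:   \(r1) in \(Date().timeIntervalSince(start))s")

    start = Date()
    let r2 = builtinDot(a, b)
    print("builtin: \(r2) in \(Date().timeIntervalSince(start))s")

The argument above is that in a typical app nearly all of the heavy lifting flows through calls like the second one, so whatever tuning the vendor did (or didn't do) is what dominates.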


You're still running on the blind assumption that Apple and Google are magically better at optimization than everyone else. I would not make that assumption, because, as I said before, optimization is hard. There are a billion variables that go into your code's performance, and tiny changes can completely ruin performance or make it awesome.

Look at Android (as an example). For the first few years of its existence, the OS was plagued with bad battery life due to poorly optimized code and bugs. Android 3.0 was basically scrapped as an OS due to bad performance.

iOS 5.0 had absolutely horrible battery life due to a bug. The Nitro JavaScript engine isn't available outside of Safari, either.


So are you suggesting that it is out of reach for Google and Apple to publish libraries and frameworks optimized for a specific hardware platform (ARM Cortex-A7 + Cortex-A15), or that Windows already optimizes for Intel/AMD offerings to such an overwhelming extent that the differential would be moot?


It's out of reach for anyone - Google, Apple or Microsoft - to optimize their libraries enough to make up for CPU performance differences.


The only data point I can think of is this JSON benchmark:

http://soff.es/updated-iphone-json-benchmarks

...where Apple beats three community-developed libraries. But maybe it just wasn't that hard to beat them.
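For what it's worth, the shape of that benchmark is easy to sketch: parse the same payload many times with the built-in parser and with each third-party library, and compare wall-clock time. A minimal Swift version follows; the payload, iteration count, and the parseWithJSONKit wrapper are placeholders I've made up (the linked post used Objective-C and the real libraries):

    import Foundation

    // Placeholder payload and iteration count, for illustration only.
    let payload = "{\"users\":[{\"id\":1,\"name\":\"alice\"},{\"id\":2,\"name\":\"bob\"}]}"
    let data = Data(payload.utf8)
    let iterations = 10_000

    // Time a block of work and print the elapsed wall-clock time.
    func time(_ label: String, _ block: () -> Void) {
        let start = Date()
        block()
        print("\(label): \(Date().timeIntervalSince(start))s")
    }

    // Apple's built-in parser (JSONSerialization, a.k.a. NSJSONSerialization).
    time("built-in") {
        for _ in 0..<iterations {
            _ = try? JSONSerialization.jsonObject(with: data)
        }
    }

    // A community library would slot in the same way, e.g. via a
    // hypothetical wrapper around JSONKit:
    // time("JSONKit") {
    //     for _ in 0..<iterations { _ = parseWithJSONKit(data) }
    // }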

I'm also skeptical because I don't see how Apple has any incentive to optimise code ever. Their devices (ARM & x86) are doubling in CPU power left and right while the UX basically stays the same. The second-to-last generation inevitably feels sluggish on the current OS version...which just happens to be the time when people usually buy their next Apple device. Why should they make their codebase harder to maintain in that environment?


Look at https://github.com/johnezang/JSONKit, which is fast.


>...where Apple beats three community-developed libraries.

That's just in one very restricted area (JSON parsing) where there are TONS of third-party libraries of varying quality for the exact same thing. Doesn't mean much in the big picture.

>I'm also skeptical because I don't see how Apple has any incentive to optimise code ever.

And yet, they used to do it all the time in OS X, replacing badly performing components with better ones. From 10.1 on, each release actually had better performance on the SAME hardware, at least until Snow Leopard. I guess they had hit a plateau there, where all the low-hanging-fruit optimisations had already been made.

Still, it makes sense to optimise aggressively, if for nothing else than to boast better battery life.


> Still, it makes sense to optimise aggressively, if for nothing else than to boast better battery life.

No doubt about 10.0-10.5/10.6. But that seems to have been an afterthought for the last two OS X releases:

http://www.macobserver.com/tmo/article/os-x-battery-life-ana...

And has there ever been an iOS update that has made things faster on the same hardware?

I don't think that Apple is intentionally making things slower, which is what I'm trying to say with the JSON parser (it is easy to write a wasteful implementation). But in the big picture, they're not optimising much either.


How is this any different on Android/iOS than it is on Windows/Linux/Mac?


>"When you program for Android/IOS how much of the logic you write is referenced from optimized libraries and how much is your own craft?"

It doesn't matter either way.

For one, Apple and Google aren't that keen on optimising their stuff either.

Second, most desktop applications use libraries and GUI toolkits from a major vendor, like Apple and MS, so the situation where "a large part of the app is made by a third party that can optimise it" applies to them too.

Third, tons of iOS/Android apps use third-party frameworks like Corona, MonoTouch, Titanium, Unity, etc., and not the core iOS/Android frameworks.

Fourth, the most speed-critical parts of an app are generally the dedicated things it does, not the generic iOS/Android-provided infrastructure code it uses.
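If anyone wants to check that fourth point against their own app, signposts make it fairly easy to see whether the time goes into framework calls or into your own logic. A rough Swift sketch, with a made-up subsystem name and placeholder work functions:

    import Foundation
    import os.signpost

    // The subsystem string and both work functions are hypothetical stand-ins.
    let log = OSLog(subsystem: "com.example.myapp", category: .pointsOfInterest)

    func frameworkWork() { /* e.g. a fetch or an image decode via system APIs */ }
    func appSpecificWork() { /* the app's own domain logic */ }

    func refresh() {
        let frameworkID = OSSignpostID(log: log)
        os_signpost(.begin, log: log, name: "Framework", signpostID: frameworkID)
        frameworkWork()
        os_signpost(.end, log: log, name: "Framework", signpostID: frameworkID)

        let appID = OSSignpostID(log: log)
        os_signpost(.begin, log: log, name: "AppLogic", signpostID: appID)
        appSpecificWork()
        os_signpost(.end, log: log, name: "AppLogic", signpostID: appID)
    }

The two intervals then show up in Instruments, so you can see which side actually dominates for a given workload.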



