Mobile CPUs and the Performance Inequality Gap (twitter.com/slightlylate)
67 points by luu on March 29, 2020 | 60 comments



This is essentially Qualcomm vs Apple in the mobile market.

But the question I've got from looking at the Geekbench site:

iPhone 11: https://browser.geekbench.com/v5/cpu/1611448

AMD Ryzen 9 3900X (Zen 2, 12 cores): https://browser.geekbench.com/v5/cpu/1611445

The iPhone has higher single core performance than a desktop CPU with a 105W TDP.

Of course the desktop chip has more cores. But am I missing something here?

Is this test actually representative of device performance?

Or are certain desktop features not tested?

Does the whole RISC vs CISC of x86 and ARM make a difference?

Assuming core counts were equal, would a desktop CPU and an Apple SoC run at equivalent performance if both were running Ubuntu and natively compiled code?


Geekbench is not measuring raw performance in terms of operations per second. It's measuring very specific use cases (its current blurb mentions AI and ML; in the past it mentioned synthetic tests to approximate browsers). Because thermal constraints would prevent Apple from competing with a brute-force approach, Apple has been more willing to include specialised hardware for tasks like AI/ML, as in the A12. Of course, single-core AI/ML perf is a bit of a silly metric, but it's one thing Geekbench is claiming to measure here that Apple probably wins at. I think in the past encryption/decryption similarly got acceleration sooner on Apple platforms, and Safari is able to make better use of the GPU with less varied hardware to support, and I think real web browsing is another component of the Geekbench tests.


Operations per second is a notoriously useless measure, which is why we have higher-level benchmarks like Geekbench that are actually incredibly broad in what they test, performing a lot of real-world-type activities in a larger macro-benchmark suite.

The Bionic chips aren't cheating their way to a win. They win Geekbench, and virtually any other cross-platform activity that you can throw at them. As I mentioned in another post, my iPhone 11 absolutely lays waste to my laptop with an i7-7700HQ processor at the JetStream 2 benchmark. Now, this is a JavaScript benchmark that runs on completely different software stacks / OS / etc. (my i7 running Windows 10, Chrome 80, etc.), but it exercises layers and layers of dependencies on the performance of the platform. And my big beefy i7 is beaten by a tiny little mobile processor. It is quite remarkable.

It's a blazingly fast little processor. We would probably have seen Apple use these chips in other hardware sooner if it weren't always suspicious that Intel was sandbagging in some way and was ready to wow the industry.


The Geekbench score is a pretty useless metric when you have a specific workload in mind. If you are choosing a machine for video editing you're not going to look at the Geekbench score. If you are choosing a machine for compiling code you're not going to look at the Geekbench score. If you need a machine for gaming you're not going to look at the Geekbench score. If you have no specific requirements then you might not even care about having the best performance.


At this point the conversation turns completely useless, with all of us telling ourselves that no benchmark matters.

Only they do. The Bionic chips are ridiculously performant, and they're bound to have much more headroom with even moderate cooling.


I don’t think comparing Geekbench scores directly between completely different processor architectures tells you that much about real-world application performance. It’s a synthetic benchmark that may be affected disproportionately by factors that aren’t very significant for ‘normal’ use cases, e.g. specific optimizations (compiler, hardware, etc.) that hit the happy path for some part of the benchmark on one architecture but not the other.

But the main difference is thermals and performance under sustained load. So far none of the Apple SoCs have been shown to handle sustained workloads at their highest frequency. The CPU in the most recent iPhones is known to throttle under load quite fast, and nobody but Apple knows how it would perform in an active cooling setup.


"Assuming core count was equal. Would a desktop CPU and an Apple SoC run at equivalent performance if it was running Ubuntu and running native compiled code?"

Yes. Of course there are going to be edge conditions where one or the other is going to shine particularly well, but overall the performance is going to be close, if not giving a nod to the Apple device.

This is the reason there have been wide expectations that Apple would move their desktop/laptop platforms to their own chips, and if rumors are true that will be next year. In a situation where their own chips had credible heat dissipation and wouldn't be subject to thermal throttling, it would be very impressive. I mean it's already spectacularly impressive, but it would be quite dominant with real cooling.

Apple needs to be careful, though, which explains their patience. There has always been the potential that Intel comes out with a game changer.

Of course in such discussions everyone is going to discount whatever benchmarks are used. Yet we've seen this in benchmark after benchmark, on generalist tasks where no single trick instruction can explain the result. Out of curiosity I just ran JetStream 2: it yields an 81.6 score on my i7 laptop, and 153 on my iPhone 11. The iPhone, with incredibly poor heat dissipation.

The ARM / x86 / x86_64 / CISC / RISC things are all abstract, higher-level notions now and aren't the reason. Apple's team has just proven astonishingly good at designing chips. Quite honestly I thought it would turn out otherwise, with Apple ending up begging the industry for whatever the new hotness was.


> Does the whole RISC vs CISC of x86 and ARM make a difference?

Not really. Arm hasn't been RISC for quite a while now (there's a "Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero" instruction(!), SIMD, etc etc).

The difference can be explained partially by the memory model: x86 has total store ordering, which can be slower than Arm's weak memory model (it allows the hardware to be more creative).

> running native compiled code

There's more to it than 'running native code'. It depends a lot on what code is running (any CPU implementing the above JavaScript instruction would be much faster on a web benchmark, for example). It also depends on the compiler. If the code is control-flow heavy, there isn't much to do except have large caches and wide pipes, which most high-end, out-of-order CPUs do already.
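To make that JavaScript-instruction point concrete, here is a rough, illustrative TypeScript sketch (the function name and examples are mine, not from the thread) of the ToInt32 conversion a JS engine has to perform whenever a double feeds a bitwise or typed-array operation; it is roughly this truncate-and-wrap behaviour that the Arm instruction collapses into a single operation:

  // Rough sketch of ECMAScript's ToInt32: truncate toward zero, wrap modulo 2^32,
  // then fold into the signed 32-bit range. Without a dedicated instruction this
  // takes several instructions plus range checks in JIT-compiled code.
  function toInt32(x: number): number {
    if (!Number.isFinite(x)) return 0;     // NaN and +/-Infinity map to 0
    let n = Math.trunc(x) % 2 ** 32;       // round toward zero, wrap modulo 2^32
    if (n >= 2 ** 31) n -= 2 ** 32;        // fold into [-2^31, 2^31)
    if (n < -(2 ** 31)) n += 2 ** 32;
    return n;
  }

  console.log(toInt32(3.9));          // 3
  console.log(toInt32(-3.9));         // -3
  console.log(toInt32(2 ** 32 + 5));  // 5 (wraps instead of saturating)
  console.log((2 ** 32 + 5) | 0);     // 5; the engine does the same work behind `| 0`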


I read an analysis somewhere (anandtech?) which suggested that a lot of the performance of Apple's chips could be attributed to them having a really large / fast cache system.


What is even the hypothesis here? Websites are slow because Android devices are slow?

Let us remind our web developer friends of the cache hierarchy: the slowest part of all here is still fetching the fricking page over the internet. On this mobile Twitter page alone, two spinners are spinning simultaneously.

If, after all the network stuff, you are still taking a second to render in a world where smartphones can play Doom 3 at 60 fps - you are just taking the piss.


> the slowest part of all here is still fetching the fricking page over the internet.

That may be true, but have you tried browsing the web on an older device? Once the webpage is loaded, I can't say the experience is very nice on my iPad mini 2. Now, granted, that's a 2013 device. All I'm saying is that after all the "network stuff" you mention is done, the device power matters greatly, especially on JS-heavy sites.


> What is even the hypothesis here? Websites are slow because Android devices are slow?

The hypothesis is: websites are slow because developers are testing them on their high-end devices, where bad code performance is compensated for by the performance of the device. All of this while real-world users are struggling with a garbage-quality end product.


Even though the average Android phone doesn’t compare performance-wise to an iOS device and really sucks when it comes to running complicated web apps, it’s mostly Android users here who claim that progressive web apps are the future and that Apple is holding them back.


That's because it's not performance that's holding back PWAs. With a bit of work, perf was good enough for web apps on an iPhone 4S or earlier. The iPhone 4S was released in 2011, so if low-end Android devices are now hitting 2012/2013 iPhone performance levels, then they're also good enough for PWAs.

It's the lack of platform APIs for things like offline storage and push notifications that are holding back web apps, not performance.


I bought my son a Moto G back in 2017 because I kept reading that it was a good midrange Android phone. It was slow running “native” apps, and web performance was even worse.

He was more than happy to “upgrade” to my iPhone 6s (2015) in 2018. Performance was noticeably better. Also, it is still supported by the latest version of iOS.

Of course by then the phone was three years old so I replaced the battery.


iOS 14 most probably won't support the 6s, and soon your iPhone 6s will go in the trash.

So your comment is lucky to hold some truth for a couple more months, while I look at my trashed iPhone 5 and iPhone 6.


The iPhone 6s would have gotten 5 years of OS upgrades. But just because Apple won’t support a device with the latest version of iOS doesn’t mean that it won’t provide security patches and bug fixes. For instance, last year Apple released bug-fix patches for iOS 9 and iOS 10, supporting phones back to 2011.

https://appleinsider.com/articles/19/07/22/apple-issues-ios-...

As far as going into the trash. I had an old iPad 1st generation (2010) that last saw an OS upgrade in 2011. I could still download the “last compatible version” of apps like Netflix, Hulu, Plex and Crackle and they still work. iCloud syncing also works with Apple’s iWork apps and the built in apps (except for Notes).


Basic HTML support doesn't work on earlier versions of iOS Safari, due to the wrong decision to bundle Safari with iOS.

You've heard Safari is the new Internet Explorer? That's why it's impractical and unsafe to use earlier versions of iOS.


I’ve “heard” a lot of things. But my first-generation iPad from 2010 could render any page that I went to. Of course it crashed all of the time, because that iPad only had 256MB of RAM.

As far as being “unsafe”, Apple was publishing patches for iOS 9 and 10 as late as the middle of last year.

But do you really want to compare Apple supporting older hardware to Android?


Yeah, regarding no filesystem: not sure if it's dumb, but I was using Dexie (an IndexedDB wrapper) to store images as base64 strings. It works, but what can suck is the time it takes to turn them back into images for display. With small thumbnails it's not bad, but without pagination (which, it seems obvious, you should have) it's so much data that it throws a Chromium-level error, so I don't know... But it was faster to reach an MVP than RN (React Native) for me, with my skills. There was an offline-first requirement, so that's why the images are stored that way, but it has a remote REST API for sync and S3 for storing the large images.


I'd be curious about the performance difference storing them as blobs https://hacks.mozilla.org/2012/02/storing-images-and-files-i...


Thanks for the idea, I will try it: it looks like a Blob is smaller than base64, and probably faster to render too since there's no decode/encode step. It's unfortunate I wasn't aware of this sooner with regard to the "data format juggling", but still...

I'm having an issue where there's too much data being loaded/pulled from IndexedDB, so I have to do stuff like pagination, but this will help no doubt.
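For what it's worth, a minimal sketch of the Blob approach with Dexie (the database name, table name and key are invented for illustration); the browser hands the stored Blob straight to an object URL, so no base64 decode is needed on display:

  import Dexie, { Table } from 'dexie';

  interface ImageRecord {
    id: string;   // hypothetical primary key
    blob: Blob;   // image bytes stored as a Blob instead of a base64 string
  }

  class ImageDB extends Dexie {
    images!: Table<ImageRecord, string>;
    constructor() {
      super('image-db');                        // hypothetical database name
      this.version(1).stores({ images: 'id' }); // index the key only; Blobs aren't indexed
    }
  }

  const db = new ImageDB();

  // Store the Blob as-is (e.g. a File from <input type="file"> or a fetch() response body).
  async function saveImage(id: string, blob: Blob): Promise<void> {
    await db.images.put({ id, blob });
  }

  // Display without any base64 decoding: the browser decodes the Blob itself.
  async function showImage(id: string, img: HTMLImageElement): Promise<void> {
    const record = await db.images.get(id);
    if (record) {
      img.src = URL.createObjectURL(record.blob);
      img.onload = () => URL.revokeObjectURL(img.src); // free the object URL once rendered
    }
  }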


Because Angular, React, Vue aren't needed for like 90% of those applications, server side rendering is more than enough.

Especially when tons of "native" applications are actually Cordova and Ionic garbage that take longer to start than an HTTP server takes to reply to the native browser.

Native applications are great, when we are talking about stuff that actually makes use of the GPGPU or hardware sensors.

Outside games, office and drawing applications, I don't have any applications installed that couldn't easily be done as mobile Web sites.


How are you going to have a PWA that works offline with server side rendering? The latest “Apple wants to force everyone to develop apps instead of using the web so they can get their 30% cut of mostly free apps” controversy was about offline storage expiring after 7 days.

Not to mention that the same people who are buying cheap Android phones probably have limited data plans and/or low bandwidth.


By having a service worker that caches the whole site, done.

Apple's decision doesn't apply to PWAs, as clarified in a later update.

Plus updating the browser cache every couple of days isn't a big deal, native apps get store updates almost every day.
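For context, a bare-bones service worker along those lines (the cache name and asset list are placeholders, not from the thread): it pre-caches the app shell at install time and answers requests cache-first, falling back to the network:

  /// <reference lib="webworker" />
  const sw = self as unknown as ServiceWorkerGlobalScope;

  const CACHE = 'site-shell-v1';                              // bump to invalidate old caches
  const SHELL = ['/', '/index.html', '/app.css', '/app.js'];  // placeholder asset list

  // Pre-cache the app shell when the service worker is installed.
  sw.addEventListener('install', (event) => {
    event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
  });

  // Serve from the cache first; fall back to the network for anything not cached.
  sw.addEventListener('fetch', (event) => {
    event.respondWith(
      caches.match(event.request).then((cached) => cached ?? fetch(event.request))
    );
  });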


So, no dynamic changes of the UI? We are talking about “web apps”.


Many "web apps" are plain CRUD entry forms with fancy CSS animations.


I’ve done CRUD apps on mobile mostly for field services (technicians, deliveries, etc.) where it had to work offline. Even then you needed to navigate between screens, menus, navigate to different screens based on options chosen, client side validation, manage state on the client, etc.

You still needed UI changes.


Still possible with service workers without having the poor browser chew a SPA framework.


How do service workers help if you can’t depend on having a network connection?

Are you really saying that it would be more performant if you have to make a round trip just to perform validation? How slow would the hardware have to be for a network round trip to be faster?


By reading from the cache the necessary static HTML and CSS.

Validations always need to be done on both sides, unless one does not value security.

All of this is possible with vanilla JS, using built-in browser features, no need to kill the poor CPU with an additional bloated framework.

I used to be big on native apps, until I saw too many examples of what used to be done in VB/Access back in the day.

So unless it is a game using OpenGL ES 3.0, Vulkan, Metal, something that requires AR/VR, real time audio, GPGPU for machine learning, or direct access to hardware sensors, it might be quite doable as PWA.
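As a concrete example of those built-in browser features: the Constraint Validation API covers most field-level checks with no framework and no network round trip (the form and field names below are invented for illustration); the server still re-validates whatever it eventually receives:

  // <form id="delivery">
  //   <input name="tank" required pattern="[A-Z]{2}-\d{4}">
  //   <input name="litres" type="number" min="1" max="500" required>
  //   <button>Submit</button>
  // </form>

  const form = document.querySelector<HTMLFormElement>('#delivery')!;

  form.addEventListener('submit', (event) => {
    // checkValidity() runs the declarative constraints (required, pattern, min/max)
    // entirely in the browser, so no JS framework and no round trip is needed.
    if (!form.checkValidity()) {
      event.preventDefault();
      form.reportValidity();   // shows the browser's native validation messages
      return;
    }
    // ...queue the submission; the server must validate again when it receives it.
  });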


Validation does need to be done on both sides. But validation on the client side means no round trips, and you can validate even when you're not online. Also, even if the content is “static”, if you don't have access to the network, how do you do simple things like display totals for service charges, or a receipt of all the items delivered? Think propane deliveries and tank repairs, alarm installation, or at-home nurse care with a checklist.

Even with CRUD apps you need to display dynamic content based on user interaction.

> I used to be big on native apps, until I saw too many examples of what used to be done in VB/Access back in the day.

So you haven’t seen the poor performance of ASP.NET Web Forms, with the huge hidden field to maintain state and a redraw of the screen plus a network round trip for every interaction?


Not to the point that it actually matters, in a world where plenty of people are deploying Node, Django and Rails applications.


If you haven’t noticed, the world is moving away from server-side rendering; even Microsoft is pushing toward client-side rendering.


I notice what our customers pay for and write in their RFPs; everything else is just caravans passing by our shovel store.


I run my own PWA on a 2012 Samsung Galaxy S3, and the only performance problems I have are due to bad decisions as a developer (e.g. loading all elements of a list isn't such a great idea when the list grows over time and you've been using the app for 3 years, so far; see the paging sketch below).

So if you are serious about building PWAs you shouldn't have performance issues. Yes, they are a bit slower than native apps, but with today's devices that shouldn't matter.

The issue with Apple, however, is that they don't support all the APIs you need to create an awesome PWA (and they recently decided to throw away localStorage every 7 days unless the user wants to clutter their home screen with every PWA that needs to store data on the device for longer periods).
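On the list-loading point above, paging the query instead of materializing the whole table is usually enough to keep an old device responsive; a sketch with Dexie (the database, table and index names are invented for illustration):

  import Dexie, { Table } from 'dexie';

  interface Entry { id?: number; createdAt: number; text: string; }

  class AppDB extends Dexie {
    entries!: Table<Entry, number>;
    constructor() {
      super('pwa-db');                                        // hypothetical database name
      this.version(1).stores({ entries: '++id, createdAt' }); // auto-increment key + index
    }
  }

  const db = new AppDB();

  // Load one page at a time instead of the whole history at once.
  async function loadPage(page: number, pageSize = 50): Promise<Entry[]> {
    return db.entries
      .orderBy('createdAt')
      .reverse()               // newest first
      .offset(page * pageSize)
      .limit(pageSize)
      .toArray();
  }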


That’s actually a good thing. If every website could store data indefinitely, it would be abused. Only allowing long-term storage for websites that the user “installed” on their phone by adding them to the home screen expresses intentionality.


Seems to me if you want an "equal" experience across the board, you make the clients responsible for the bare minimum functionality. Everything that can be is processed server-side. You should target the lowest common denominator wherever possible. Anything else is tantamount to digital elitism or simply bad customer service.

Assuming compute and bandwidth costs are fixed, are there any remaining justifications for doing SPAs et al., especially if the server is required at all times in order to satisfy application requests? Is pushing a final compressed DOM over the wire really all that much worse than pushing 20 different JSON objects in order to recompose the same logical DOM client-side? Doesn't it seem like one of these approaches would be substantially faster and easier than the other in most cases?

I think we got off track somewhere around when cloud computing was introduced. I can see a huge motivating factor for pushing the "offload to client" narrative if your cloud compute portfolio provides 10% the relative cost-adjusted performance compared to a rack full of DL380s sitting in a colo. With the density of compute being offered today (e.g. 256 x86 cores in a 2U w/ 2TB RAM), I feel a lot of businesses should start to look back to these approaches. The software and operating systems have vastly improved as well. Just take a look at the performance you get with something like Kestrel in .NET Core:

https://www.techempower.com/benchmarks/#section=data-r18&hw=...

Nearly 7 million HTTP requests per second on a single 10GbE host. For 99% of business applications out there, just one of these servers is probably all you would ever need... A decade ago we were putting the finishing touches on 10k/second. Why the hell are we not aggressively trying to leverage this 100x speedup? Is it because all of this extra margin is being spread thin by Amazon et al. for purposes of maximum profit extraction? Or is there a more nuanced game afoot?


SPA is overused to be sure. But people are also building far more complex things on the web now and some of them are difficult or impossible to do purely with SSR. So pick the right tool for the job but let’s not pretend that we can do everything with just one tool.

And you might say those kinds of applications shouldn’t be built on the web but I’d rather have them on the web than locked up in some proprietary app store with all the centralization and potential abuses of power that allows.


Geekbench has unreliable performance stats that have been favoring the iPhone for a looong while.

Check your conclusions: 90% of articles with rocking iPhone performance cite Geekbench.


My practical experience developing native cross-platform applications strongly aligns with the Geekbench results.

Cross-platform as in: most CPU-intensive code is in C++ and shared exactly between the two, with only trivial but obviously needed differences in input handling. For Android the renderer is Vulkan and for iOS it’s Metal, both used in essentially identical ways.

Some super high end Android phones come close in some cases, but on average iOS devices wipe the floor with Android ones.

I want to emphasize that on top of Android hardware being slower on average, it is also hampered by the decision to use Java. Some APIs are Java-only, requiring JNI, which is slow, to access. Another issue is GC pauses; they are small but can result in a skipped frame here and there, making the experience subtly worse.


There was a version of Geekbench a while ago for which you could make a good argument for that, but Geekbench 5 seems to tell the same story that SPEC, web tests, etc. all tell. Apple's cores really do seem to be that good, though they benefit from running at the clock speed they were designed for, unlike an Intel U-series part, which has more pipeline stages than it needs to hit its clock.

And Apple also benefits from being able to set their own page size, and probably from getting to have caches that are virtually indexed and physically tagged rather than physically indexed and physically tagged. Doing address translation in parallel with the way lookup is a nice little performance boost. There are rumors of other cleverness in the cache hierarchy enabled by control of the OS, but I don't know the details.


SPECint2006 shows that the A13 outperforms the desktop Ryzen 3900X. 5W vs 105W, 6 cores vs 12 cores. Something is wrong with the numbers here.


Where can I see these SPEC results for Apple CPUs?


Anandtech frequently includes SPEC results in their reviews.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...


Thanks! These are SPEC2006 results, and that suite has since been retired, but those numbers for the A13 are indeed in the same ballpark as the last official SPEC2006 results that were published for Xeons in 2017 before retirement.


We also know that the average selling price of an Android phone is less than half that of an iPhone. Most Android phones being sold are low-end phones.

Besides that, there have been plenty of articles where they found that Android manufacturers were detecting when benchmarks were being run and cheating.


Anandtech uses SPEC2006 and the performance gap is not much different than Geekbench: https://www.anandtech.com/show/15207/the-snapdragon-865-perf...


Is this really about the CPU though? My guess is - especially in the countries where these low-end phones are popular - bandwidth is a much bigger bottleneck.


What are these charts intended to show? The relative differences seem broadly the same, going back years.

Is the idea that if all Android phones were magically 2x faster, some type of app architecture would magically become usable?


On the other side: on my old Moto G4 I could listen to music in the background; now with my new Samsung Note I always have to stream videos, so the display stays on.


What about no.js?


What do you iPhone users do with all of your power?


The other question to ask is why do some consumers want less for their money?

Apple is making luxury goods, they're not going to drop their prices.

It's like how there's recently been a bunch of articles about how phones are getting "too much" RAM and the comparison is laptops.

When the question should really be: why do laptop manufacturers get away with putting 4 or 8GB of RAM in a laptop while mobile phones are catching up?

And anyway, what most people do is use their devices however they want, with the expectation that everything runs smoothly, that they can multitask, and that the demands of higher-quality video/pictures/refresh rates can be met.

Of course there are people that are satisfied with yesterday's technology. Plenty of people use 10 year old ThinkPads and get their work and entertainment done. Same with getting a £100 smart phone.


> When the question should really be: why do laptop manufacturers get away with putting 4 or 8GB of RAM in a laptop while mobile phones are catching up?

I think this is comparing cheap laptops to expensive phones.


Browse Twitter and marvel at how mobile rhythm games can't maintain 60fps.


Wow, that's gone backwards. Tap Tap Revenge used to work fine on my iPod Touch 2G.


Well, for example, I recently used my iPhone X to edit a 4K 60fps video using iMovie. This was quick, easy, and all playback was silky smooth. I did this while sitting at the airport lounge waiting for my flight, and it didn't use too much of my battery.

Meanwhile, my very high-end gaming PC struggles to play the video files from my iPhone at 60fps for some mysterious reason.


Yes, it’s funny how fast people forget how much power is actually required to run all the basic stuff included in mobile OSes by default. Take the camera app on iOS, for example: it stitches 20+ MP panoramas in real time, with filters enabled, does continuous zoom by combining images from multiple cameras, etc. It wasn’t very long ago that I had software on my Linux desktop PC that did that (with much worse results) and it took up to half an hour in some cases. Or try browsing the modern web with an iPhone 4, or a ~5-year-old Android phone for that matter. You don’t need to be a power user at all to benefit from the increase in processing capabilities in modern phones.


The playback issue (and the reason video editing went so smoothly) is probably down to hardware video codec support. Mobile devices, iPhone or otherwise, are miles ahead of most desktop graphics cards when it comes to video codec support.

H.265 support in graphics cards and consumer processors has laughably lagged behind mobile, probably because of licensing.

There's also the fact that a lot of video applications on Windows don't enable hardware decoding by default out of fear of encountering buggy drivers or hardware.

On mobile there's basically only one way to decode video, which makes use of all the phone's dedicated hardware, while on desktop applications can use a wide variety of codecs. Then comes the decision whether to decode on the CPU or GPU, as both have hardware decoding support these days, or to fall back to software decoding because the specific combination of codec and bit depth wasn't supported by the hardware of choice.

Then there's the fact that Apple logically encodes video the camera records into a format that iMovie is well optimised for. If you encode video with your PC's GPU, it'll probably be just as snappy playing back while the iPhone video might still be struggling.



