iPhone 11 has the fastest single-core performance of any Apple computer (twitter.com)
128 points by tambourine_man 23 days ago | 89 comments



It makes zero sense to compare like this. Geekbench is closed source. So, there is no way of knowing what they're even running on ARM vs. x86. But there's no way my MacBook Pro is slower than my iPad Pro.

This Geekbench comparison, on the other hand, tells me they're on par (both single and multi-core scores) https://browser.geekbench.com/v5/cpu/compare/140913?baseline...

I'd take this claim with a massive boulder of salt, if you consider it at all, that is.


> there is no way of knowing what they're even running on ARM vs. x86

They run the same workload compiled with Clang for each platform, as described in a PDF on their website.

https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf


Thanks. It's still lacking information on what specific compiler flags were used. Was the compiler running with -mavx turned on? Probably not (considering there's a single Geekbench binary distributed for all generations of MacBook Pro). Xcode, on the other hand, compiles a binary that's optimized for each target device. So, again, it's not really that informative about what optimizations were applied to the binary or which instruction sets were targeted.
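Not from Geekbench's documentation, just a minimal sketch of why the flags matter: the preprocessor macros below exist only if the matching flags were passed at build time, so a single binary shipped for every generation of Mac typically cannot contain the AVX path at all. The function and names are invented for the example.

    /* Hypothetical kernel, not Geekbench code: which path ends up in the binary
       depends entirely on the build flags (-mavx / -march=...), not on the CPU
       it later runs on. */
    #include <stddef.h>
    #if defined(__AVX__)              /* defined only when built with -mavx or similar */
    #include <immintrin.h>
    #endif

    void scale(float *v, size_t n, float k) {
    #if defined(__AVX__)
        __m256 kk = _mm256_set1_ps(k);            /* 256-bit registers: 8 floats per op */
        size_t i = 0;
        for (; i + 8 <= n; i += 8)
            _mm256_storeu_ps(v + i, _mm256_mul_ps(_mm256_loadu_ps(v + i), kk));
        for (; i < n; i++) v[i] *= k;             /* scalar tail */
    #else
        /* what a lowest-common-denominator x86-64 build (or an arm64 build relying
           on 128-bit NEON auto-vectorization) falls back to */
        for (size_t i = 0; i < n; i++) v[i] *= k;
    #endif
    }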


Well, memory aside (the iPad Pro having less) and perhaps SSD performance, how is the iPad Pro slower? What have you tried? It can run at least some code (the Geekbench workloads) equally well. So what other programs could it run equally well?


Video encoding. The iPad Pro is significantly slower at encoding 4K video compared to my MacBook Pro.


Isn’t video encoding a multi-threaded workload?


Yes. And the Geekbench comparison I linked to in my comment claims the multi-core scores are also similar.


Wouldn’t that be hardware accelerated on the iPad? What iOS app is doing software video encoding?


That's because of throttling though. Your MacBook Pro has fan(s), the iPad doesn't.

I'm pretty sure the iPad Pro SoC in a MacBook Pro case would be at least on par with the latest Intel-specced MBP.


Based on what evidence? A single core Geekbench run?


ARM SIMD is only 128-bit, isn't it? That would eat pretty heavily into video encoding speed.


The iPad, magically, performs well in Geekbench but has throttling issues in a real-world task? I find that hard to believe.


As someone who has literally been running thermal tests against a random Android device for days (note: not as a hardware expert, but as someone writing software that causes the device to overheat), analyzing all of the temperature sensor and power regulator debug information in spreadsheets while varying the workload: it isn't even remotely surprising to me that pegging a single core (which is what this benchmark does) might not generate enough heat to cause thermal throttling, while running a multi-threaded, multi-core workload might.
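Not the parent's actual tooling, just a minimal sketch of that kind of experiment: peg N cores with a busy loop (N=1 roughly mimics the single-core benchmark case, N=number of cores the multi-core case) and sample the thermal sensors while it runs.

    /* Minimal load generator: spin one busy thread per requested core.
       The thread count comes from the command line; the loop writes to a
       volatile so it can't be optimized away. Build with -pthread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static volatile double sink;

    static void *burn(void *arg) {
        (void)arg;
        double x = 1.0001;
        for (;;) {                    /* spin until the process is killed */
            x = x * 1.0000001 + 0.000001;
            sink = x;
        }
    }

    int main(int argc, char **argv) {
        int n = (argc > 1) ? atoi(argv[1]) : 1;   /* 1 = the single-core case */
        if (n < 1) n = 1;
        if (n > 64) n = 64;
        pthread_t tids[64];
        for (int i = 0; i < n; i++)
            pthread_create(&tids[i], NULL, burn, NULL);
        printf("pegging %d core(s); watch the thermal sensors now\n", n);
        pthread_join(tids[0], NULL);              /* never returns; Ctrl-C to stop */
        return 0;
    }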


Oh, I'm not talking about single-core scores. I'm talking about multi-core scores. The comparison link in my original comment claims similar multi-core scores.


That makes sense, your MBP has a pretty decent GPU to do that.


The quality of the GPU usually has no impact on video encoding.

At best, you can use a hardware encoder that happens to come with the GPU but usually you won't want that because it comes with a compression hit compared to software.


It looks like you're getting downvoted and I'm not sure why: most encoding that I do is CPU-only, and GPU encoding only works for a very limited set of codecs at a limited range of presets. If you need to encode anything outside those presets, CPU it is.


Does this compression hit you mention also apply to encoders that use, for example, CUDA on Nvidia GPUs? With those there definitely are significant differences in encoding speed between different GPUs.


There's no reason that using CUDA inherently leads to inferior compression but I'm not aware of a CUDA-accelerated encoder that's any good relative to a pure software encoder (or even NVENC, the pure hardware encoder on an NVIDIA GPU).


Well, it's not that simple. The CUDA model has an inherent penalty for divergent thread behavior; that is, when all threads do the same thing, peak performance is attained. If they each follow a different path, the result can be disastrous.

The relevance to media encoding is that the higher profiles do a lot of data-dependent special-casing. It's no accident that the earliest CUDA video codecs only handled the base profile (a toy example of that kind of branching is sketched below).

This also hurts a CPU implementation, but much less.
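To make the divergence point concrete, here is a made-up sketch (not code from any real encoder) of the data-dependent special-casing the parent describes; on a CPU each call is just a branch, while on a GPU every thread in a warp that picks a different branch forces the warp to run the paths one after another.

    /* Hypothetical per-block mode decision of the kind higher encoder profiles
       do a lot of. Thresholds and mode names are invented for illustration. */
    enum block_mode { MODE_SKIP, MODE_INTER, MODE_INTRA };

    enum block_mode pick_mode(int residual_energy, int motion_cost) {
        if (residual_energy < 16)            /* nearly static block: cheap path */
            return MODE_SKIP;
        if (motion_cost < residual_energy)   /* medium path: reuse the motion search */
            return MODE_INTER;
        return MODE_INTRA;                   /* expensive path: full intra prediction */
    }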


How does a user distinguish between that being due to software versus hardware?


How does that matter for this comparison? The claim is that the iPad Pro is as fast as the MacBook Pro. If the iPad Pro's hardware encoder is slower than the MacBook Pro's software encoder, and the iPad Pro is as fast as the MacBook Pro, then it follows that a software encoder on the iPad Pro should outperform its hardware encoder. Surely Apple wouldn't add a slower hardware offload and use that?


Processors aren't getting that much faster per core anymore because of miniaturization limits. Comparing old MacBooks with new ones, I think most of the performance gains came from additional cores/hyperthreads. (Before around 2014 it was really common to have just one thread per core.) Also, there is little reason to put cheap processors into iPads; people use them for games, so there has to be a lot of performance.

> It makes zero sense to compare like this. Geekbench is closed source.

But probably there are remarkable differences in integer vs. floating-point performance or hardware acceleration. I assume the iPad/iPhone have good HW acceleration for anything video/crypto/3D/...


> But probably there are remarkable differences in integer vs. floating-point performance or hardware acceleration. I assume the iPad/iPhone have good HW acceleration for anything video/crypto/3D

Processors are definitely stalling in IPC (instructions per cycle) improvements. But I don't think it's fair to call the A13 the fastest chip ever in an Apple device based on heavily hardware-accelerated tasks. That's like saying an ASIC is faster than an AMD/Intel chip. Of course the ASIC will perform faster in certain tasks, since it is purpose-built for them. But the AMD/Intel parts are general-purpose processors that are meant to do anything. The iPad most definitely has crypto/video/image processing hardware offloads, and those are as fast as Intel's. But if I run a workload that makes extensive use of AVX (instructions for processing vectorized data), the Intel processors will blow the Apple A13 out of the water.

The point here is, these kinds of click-baity tweets are the ones that end up in articles and reach the general public. It's irresponsible.


Sorry if this sounds stupid, but I've always wondered: is 1 point in Geekbench for ARM really equivalent to 1 point for x64? Are they really comparable in the first place? If yes, how is it that the ARM SoC can beat an i5 while consuming so much less power? And why is ARM outstripping x64 in terms of single-core speed? It just doesn't add up.


Linus called Geekbench useless a few years ago, since it used hardware implementations of crypto algorithms on some architectures and not on others.

[1] https://www.realworldtech.com/forum/?threadid=136526&curpost...


This keeps getting posted, but it was about Geekbench 3. He is OK with Geekbench 4 [1].

[1] https://www.realworldtech.com/forum/?threadid=159853&curpost...


I don't know the answer to your question, but as a moderately frequent Geekbench user I do know that even within x86 the Linux version outscores the Windows version.

On this basis alone, I would be extremely dubious about comparing between platforms.


So an i3-8100 has a single-core score of 1000, and the highest PC score they have is 1371?

These Geekbench comparisons keep showing phone CPUs being faster than server ones, but I question their validity.


The A12 has better single-threaded SPECint2006 scores than the EPYC 7742 almost across the board. It's about 10% worse than the 3900X, and 20% worse than the i9-9900K, so if the 20% uplift holds true for SPECint scores then the A13 should be pretty much on par with the i9-9900K.

ikr

https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-re... https://www.anandtech.com/show/14694/amd-rome-epyc-2nd-gen/9 https://www.anandtech.com/show/14664/testing-intel-ice-lake-...


Why? Remember that we're talking about single-thread performance. There's no big reason why server chips should be more optimised for single-thread performance than mobile ones. It's more useful to spend the extra die area on more cores, or other features. They have the same fundamental limitations that make it hard to scale to higher single-thread performance (remember that server CPUs also have to minimise power consumption, even if they use a lot more power in total). They're both built on cutting-edge processes, and they both have more than enough die area to make a single core perform as well as is physically possible.

I really do think this indicates that Apple/ARM has caught up to Intel when it comes to architecture, and we already know TSMC has surpassed Intel in process technology.


Here's an iMac (i9 9900K) result of 1402: https://browser.geekbench.com/v5/cpu/172942


With all the discussion about how realistic the Geekbench results are, why isn't there an open-source benchmark suite that can be run as an app on iOS devices and would give a more thorough comparison between different architectures?


I question all these scores. The reason is that the Apple chip also runs at a pretty low frequency and has not-so-fast memory. That would mean there is a lot of IPC headroom left in comparison to the top x86 chips, in the range of 2x relative to Skylake. That is a crazy wide core. What kind of reorder buffers would be required for this? Just 2x more execution units won't be enough.
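A rough back-of-the-envelope of where that ~2x comes from (assumed clocks: roughly 2.66 GHz peak for the A13 and roughly 5.0 GHz single-core boost for a desktop part like the i9-9900K). If the single-thread scores are about equal, then

    \text{score} \propto \text{IPC} \times f
    \quad\Rightarrow\quad
    \frac{\text{IPC}_{\text{A13}}}{\text{IPC}_{\text{x86}}}
    \approx \frac{f_{\text{x86}}}{f_{\text{A13}}}
    \approx \frac{5.0}{2.66} \approx 1.9

so the claim does indeed imply a core extracting nearly twice the work per clock.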


For those interested in comparing, note that Geekbench has updated their baseline recently. Make sure to use Geekbench 5 in your tests!


Geekbench’s usefulness aside, I think it’s pretty safe to say that the A series has surpassed x86 for the ultraportable laptop chip.

An ARM MacBook would be a kickass machine. I actually hope they don't release some sort of emulation layer like they did in the previous transitions. We don't want to be spending battery on that when the tooling these days is much more homogeneous and high-level.

It's also very tempting to speculate about what they would be able to do with a larger, 40-something-watt budget.


Does that include the new Mac Pro?

If so, does that mean PC games would theoretically run faster on an iPhone 11 than on desktop PCs (ignoring GPU speeds)?


The Mac Pro is actually a poor comparison here.

Mac Pros run Xeon processors that prioritize having many cores. Their single-core performance is very good, but worse than Intel's top mainstream chips, like the 9900K.


Playing games means sustaining stable high performance for long periods. Gaming PCs have active cooling, which helps. I doubt a phone can do that without hitting thermal issues.


Most modern games are able to use four or more fast CPU cores, and even a budget desktop can afford a quad-core CPU for less than $100 these days: the Intel i3-9100F (no graphics, Coffee Lake) or the AMD Ryzen 3 3200G (graphics, Zen+) are likely much better for multithreaded workloads. Plus heat dissipation, optionally more RAM, and better I/O.

If you're willing to spend $200 on a CPU, you can get faster cores or more of them.


Probably for a few seconds before it starts throttling.


I just want a phone with a battery that lasts a few days, or how about a week? I will never edit video on my phone.


I can imagine Apple saw this coming a while back and decided to go all-in on iOS while keeping macOS on the back burner.

It must be difficult to be in a management position where you obviously see the industry moving towards ARM, but you need to set a plan in motion to correctly time when the customer/developer is ready to make the switch.


> It must be difficult to be in a management position where you obviously see the industry moving towards ARM, but you need to set a plan in motion to correctly time when the customer/developer is ready to make the switch.

Apple is in a unique position to make this happen though. And they have done it before with the PowerPC->x86 switchover.

Their solution has previously been to ship fat binaries with both versions compiled side by side. They control the toolchain and the App Store, so this is not that difficult to enforce.


And the 68000 to PowerPC before that.


Not to be pedantic, but you could argue it goes all the way back to the 6502, so they have gone through all these transitions: 6502 -> m68k -> PowerPC -> x86. Adding an x86 -> ARM64 transition would hardly be shocking.

EDIT: In case it wasn't clear; there are Apple II emulators for the Macintosh.


Instead of switching macOS to ARM, they could promote iPadOS to laptops. They already added a file manager and advanced windowing to it.


The ironic thing is that the question isn't so much one of hardware as of software. With the iPad Pro, Apple has an almost perfect piece of hardware that could be used as an ultralight laptop alternative. The problem is that Apple does not allow the iPad to be used as a computer. File handling, though a little improved on iPadOS, is still extremely limited, and even more so is the software. There are no real development environments on the iPad because of the App Store limitations; there is no Termux for the iPad. Even fully sandboxed, that could increase its productivity as a computer a lot.

So the simplest way for Apple to get an ARM-based computer would be allowing the iPad to be more of a computer. Alternatively, they do it the hard way and launch ARM-based MacBooks. That would be nice too. But putting iPadOS onto a MacBook would rather go in the wrong direction. macOS should run well on ARM anyway.


Have you tried iSH? It’s kind of like Termux


I have heard about it, but not tried it. It is sad how many heroics iSH has to go through when Apple could just provide a sandboxed Termux.


iSH is an x86 emulator running on an ARM phone. It's crazy.

A nice proof of concept, but I don’t think it’s the right path for a serious tool.


True, most stuff doesn't work and it's very slow. But it runs youtube-dl well enough, which is something.


No re-encoding I hope :)


Not re-encoding, but it can remux separate streams quickly using ffmpeg.


I expect within the next five years you’re going to see some sort of convergence between Apple’s phone/tablet and traditional computer lines.


At least at the level of the 13-inch devices, this would absolutely make sense. The MacBook's competitors nowadays offer touch and some even pen input, which the MacBook lacks; the iPad has those, plenty of CPU power, is ultralight, has no fan, and is more energy efficient. The iPad Pro could replace a laptop for most uses if it were not limited by iOS/iPadOS.

Personally, I would refresh my iPad much more often without the limitations. And it is not very realistic of Apple to expect people to carry both a laptop and a tablet, so forcing customers to choose between those device types is not very customer-friendly.

Actually, I am currently considering getting the Samsung Galaxy Book S, which seems to be a very interesting ARM-powered laptop. Of course, I would prefer a new iPad Pro if not for its limitations.


And Apple won't maintain a whole separate hardware/software design stream. The pro-oriented users will have to accept the confines of a consumer-optimized set of designs or switch to another platform. This is already true to a degree, of course, but I expect we'll see a real crossroads.


Box86 exists, though. They don't need fat binaries; they only need x86 emulation. Many x86 patents have also expired or will expire soon, so the set of instructions that need to be emulated is reduced.


Emulation works as a transition layer, and they've done it before (see Rosetta from ppc->x86 and the 68k emulator from when they switched to ppc), but native binaries will be significantly faster and more energy efficient.


Native binaries using the latest stable libraries with the latest compiler optimizations will always be faster and more energy efficient.

Even the Xbox had backwards compatibility across ISAs.

Somehow I doubt people will care that much about efficiency.


They will care on a laptop with a battery.


Apple is removing Python etc. for more than one reason.

I'm curious whether Apple will do an ARM Mac mini first as a canary.


The current Mac mini is an awesome machine, and in its top spec it has some impressive performance. Given that a key strength of ARM is its power efficiency, it would seem more logical to start with something that has a battery. That way, any performance hits can be seen as a trade-off for longer battery life.

I say this knowing full well that in any battery-life-versus-thinness trade-off, Apple has picked thinner just about every time.


If Apple does any Mac with an ARM processor, I think it will absolutely be the MacBook Air (or the recently discontinued MacBook). Those ARM processors will be so power-efficient that Apple will be able to market their laptops as "lasting for several days", which a lot of consumers will absolutely crave.


They can’t even get their phone batteries to last a full day because they’ll continue to prioritize form over function.

I had several smartphones I didn’t need to charge more than every other day or even every three days if I wasn’t using them nonstop. Apple could do that today.


> They can’t even get their phone batteries to last a full day because they’ll continue to prioritize form over function.

Well, to be fair, smartphones have much, much smaller bodies to carry around a battery. If Apple makes a notebook with the same A13 processor I’m pretty sure it will last more than two or three days.

BTW, one of a smartphone's functions is being portable. You really can't call it form over function when talking about battery life; there are many, many people who want light phones. If you want longer battery life at the cost of weight, you can get yourself a backup battery (or a battery case).

> I had several smartphones I didn’t need to charge more than every other day or even every three days if I wasn’t using them nonstop. Apple could do that today.

My experience is that my iPhone XS survives a day and a half without any charging. YMMV, of course, but for me that’s enough.


The new iPhone 11 is slightly larger and thicker with a much bigger battery and an ugly camera array. Function seems to have been prioritised.


CPython works on ARM, doesn't it?


Yes, there are even iOS apps like Pythonista which provide Python and an IDE.


I wonder why Apple has nothing like Samsung DeX. But I also wonder why Samsung and Google don't push this feature more. I almost bought one of the last Windows phones, the HP Elite x3, because it could be used as the base for a small notebook (they had just a "dumb" notebook shell for it, just a screen and a keyboard; all the processing came over the air or by cable from the phone). Even the new Linux phone (Librem 5) comes with a feature to connect the phone to a monitor, keyboard, etc. Apple has never shown any of these concepts, even a tiny bit.


... according to Geekbench, which has always favored iPhone CPUs.


Gee, that’s great! It’s just too bad that you don’t really have any ownership or control over what that CPU runs, despite having paid for it in full.


There's no doubt that Apple is currently making the best SoCs for consumer smartphones. But we should also be careful when comparing them with Apple computers, as some of those are underpowered; e.g. the Core m3 in the mid-2017 MacBook is the same as the one found in Intel-based SBCs such as the LattePanda Alpha.

It would be interesting to see the single-thread performance of the Intel Xeon W-3275M CPU from the 2019 Mac Pro. The Cascade Lake architecture supposedly increases single-thread performance by decreasing the base frequency of certain cores while increasing it on others, as part of its Speed Select Technology (SST). Of course, this depends on the power configuration set by Apple; I wonder whether they would voluntarily keep it under max performance in order to project the single-thread superiority of their AX chips.


Comparing modern hardware like this is fine, but kind of pointless.

The sad truth is that processing speed improvements have slowed down to a crawl. The exponential predicted by Moore's law looks more like a shallow linear progression today.

Computers are not improving nearly as quickly as they once were. Weirdly, no one is talking about this. If computers stop improving, efficiency will stop improving, which will ultimately mean that standards of living will stop improving. If standards of living stop improving, i.e., the pie stops growing, there could be a whole variety of horrible social and political repercussions as everyone fights for their finite slice.


This is actually pretty interesting. The statistic most people noticed is frequency, which climbed exponentially between 1995 and 2003 and came to a dead stop a year later.

https://i.imgur.com/QAEqcie.png

However, almost immediately afterwards, IPC, which had been stagnant over that whole period, itself started climbing steadily and exponentially, with the rate falling only fairly slowly.

https://i.imgur.com/KiUqMhl.png

The overall result is exponential growth in single-thread performance that far outlived frequency growth, and which has only truly tapered off recently... though it's likely AMD's recent reentry to the market will recover the trend somewhat.

https://i.imgur.com/h2jN9RK.png

But now that IPC is starting to struggle (though not falter), core counts are finally taking off. We haven't hit the end yet.


Interesting. But higher core counts will probably be barely perceptible to the user. Most real-world applications are hard to parallelise, either because the underlying problem is hard, or because the engineering skill set is limited, or because it makes development harder, with bugs that are difficult to reproduce and analyse. Either way, I don't think we will be in a world where all algorithms are parallelised anytime soon, only some specific, widely used applications (video compression, etc.); see the rough bound sketched below.

And even though some "natively parallel" languages like Erlang, Rust, F#, etc. are attracting interest, the mass of developers is converging on languages that are not designed for parallelism (Python, JavaScript). So it's not even that a younger generation of developers is about to come along and show us oldies how it's done.
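A quick way to put numbers on that intuition is Amdahl's law (the standard bound, not something the parent cited): if only a fraction p of a program's work parallelises, the speedup on n cores is

    S(n) = \frac{1}{(1 - p) + p/n},
    \qquad p = 0.5,\ n = 8 \;\Rightarrow\; S(8) \approx 1.78,
    \qquad \lim_{n \to \infty} S(n) = \frac{1}{1 - p} = 2

so piling on cores buys little unless most of the program actually parallelises.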


But computers don't run a single algorithm at a time. They run a large number of programs at the same time. In this scenario, increasing the core count can increase the number of programs truly executing simultaneously, which brings tangible benefits the user will notice.


I don't see the problem. Almost all consumer workloads are trivial in comparison to video encoding.


> But now that IPC is starting to struggle (though not falter), core counts are finally taking off. We haven't hit the end yet.

The fact that the industry is now relying on these tricks is actually good evidence that performance is hitting a wall. In the past, if you wanted a 2x boost in performance, you could just wait a year. Now waiting doesn't work. You need to build your own ASIC or move things to the cloud where you can rely on big tech's economies of scale.

We have hit an end, and the performance boosts we see today are one-trick ponies. After you build your ASIC, you can't get much faster. After you move to the cloud and reduce processing costs, you can't reduce further. These tricks have slowed the "perceived" deterioration of Moore's law, but they are acts of desperation, not real progress.


>> You need to build your own ASIC

Might be the reason why Intel is putting FPGAs in the latest Xeons: https://www.anandtech.com/show/12773/intel-shows-xeon-scalab...


>> You need to build your own ASIC

> Might be the reason why Intel is putting FPGAs in the latest Xeons: https://www.anandtech.com/show/12773/intel-shows-xeon-scalab....

This is an excellent example. As things are trending, I would expect FPGA and other hardware programming to become more common. I would not be surprised if, in a few years, websites and browsers start doing their own FPGA optimizations. Adding FPGA programmability does not increase real raw performance; it just allows developers to do optimizations they would not otherwise be able to do.

Along this line of thinking, you would expect developers to start moving away from slower scripting languages towards faster compiled languages. This has not happened yet at any large scale, largely because the huge demand for digital services has placed a premium on developer productivity over application speed. However, this trend is happening at the edges (e.g., using the C-based NumPy within Python, the steady rise of Go). If computer performance remains as flat as it has been, I predict we'll start seeing more popular libraries rewritten in faster compiled languages (and eventually FPGAs/ASICs) within the next 5 years.


You don't deserve the downvotes that you are getting. They dislike you because you're right


Performance has not hit a wall at all. More transistors were never going to speed up serial processes forever. Transistor counts are still increasing, core counts are still increasing, and software usually has room for a 10x-100x speedup in most places since it is usually written so poorly. Most of the time, high-end games are the only areas where CPUs are actually used to their potential, because of memory latency.


What are you on about? OP was talking about IPC improvements, not cloud BS.


> If computers stop improving, efficiency will stop improving, which will ultimately mean that standards of living will stop improving. If standards of living stop improving, i.e., the pie stops growing, there could be a whole variety of horrible social and political repercussions as everyone fights for their finite slice.

I fail to see the connection between the computers’ efficiency and standard of living. You seem to have made quite a big leap there. Can you please elaborate?


> I fail to see the connection between the computers’ efficiency and standard of living. You seem to have made quite a big leap there. Can you please elaborate?

It is a big leap, and it's probably better suited for an essay than a HN comment. But the general idea is that most improvements to standard of living have relied on technology, whether that's agriculture (past 5000 years); plumbing and sanitation (past 2000 years); transportation and mass production (past 500 years); or calculation, communication and automation (past 50 years).

Humanity has been on an exponential curve of improvement for a hundred years, if not many hundreds. If this exponential in fact turns out to be more of a sigmoid function, and technological development eventually stops, it will have huge repercussions for our entire society.

I'm personally not fully convinced we've reached the end, but there are signs that we already have or will soon. Even if you think the chance of this happening is low, the consequences of an end to technological progress are huge. We need technologists, economists, socialists, politicians, and even fiction writers to be investigating this possibility. The fact that this is not heavily studied is disturbing.


IIRC, Moore’s law was about transistor count. We’ve historically put those transistors to specific uses that have since become dead ends, but we haven’t run out of ideas of what to do with them yet.


Do you mean the mainstream press or something like this? The fading out of single-thread performance has been the major topic in computer architecture for a very long time.


> Do you mean the mainstream press or something like this? The fading out of single-thread performance has been the major topic in computer architecture for a very long time.

Yes, computer architects have been thinking about this for a while, but so far it's largely stayed in their domain. Everyone else keeps thinking the "nerds" will keep on doing their thing and progress will continue exponentially as it has for the past 50-100 years. But what if that exponential stops? What if it's a sigmoid and not an exponential at all? We need folks outside of the computer industry to consider this possible outcome.



