
iPhone 11 has the fastest single-core performance of any Apple computer - tambourine_man
https://twitter.com/elkmovie/status/1174003718188142592
======
aloknnikhil
It makes zero sense to compare like this. Geekbench is closed source, so
there is no way of knowing what they're even running on ARM vs. x86. But
there's no way my MacBook Pro is slower than my iPad Pro.

This Geekbench comparison, on the other hand, tells me they're on par (in both
single- and multi-core scores):
[https://browser.geekbench.com/v5/cpu/compare/140913?baseline...](https://browser.geekbench.com/v5/cpu/compare/140913?baseline=201758)

I'd take this claim with a massive boulder of salt, if you consider it at all.

~~~
GeekyBear
>there is no way of knowing what they're even running on ARM v/s x86

They run the same workload compiled with Clang for each platform, as described
in a PDF on their website.

[https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf](https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf)

~~~
aloknnikhil
Thanks. It's still lacking information on the specific compiler flags that were
used. Was the compiler running with -mavx turned on? Probably not, considering
there's a single Geekbench binary distributed for all generations of MacBook
Pros. Xcode, on the other hand, compiles a binary that's optimized for each
target device. So, again, it's not really informative about which optimizations
were applied to the binary or which instruction sets it was optimized for.

------
harrygeez
Sorry if this sounds stupid, but I've always wondered: is 1 point in Geekbench
for ARM really equivalent to 1 point for x64? Are they comparable in the first
place? If so, how can the ARM SoC beat an i5 while consuming so much less
power? And why is ARM outstripping x64 in single-core speed? It just doesn't
add up.

~~~
for_xyz
Linus called Geekbench useless a few years ago since it used hardware
implementations of crypto algorithms on some architectures and not on others.

[1]
[https://www.realworldtech.com/forum/?threadid=136526&curpost...](https://www.realworldtech.com/forum/?threadid=136526&curpostid=136666)

~~~
ksec
This keeps getting posted, but it was about Geekbench 3. He is _OK_ with
Geekbench 4 [1].

[1]
[https://www.realworldtech.com/forum/?threadid=159853&curpost...](https://www.realworldtech.com/forum/?threadid=159853&curpostid=159860)

------
GrayShade
So an i3-8100 has a single-core score of 1000, and the highest PC score they
have is 1371?

These Geekbench comparisons keep showing phone CPUs being faster than server
ones, but I question their validity.

~~~
Veedrac
The A12 has better single-threaded SPECint2006 scores than the EPYC 7742
almost across the board. It's about 10% worse than the 3900X and 20% worse
than the i9-9900K, so if the 20% uplift holds for SPECint scores, the A13
should be pretty much on par with the i9-9900K.

ikr

[https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-review-unveiling-the-silicon-secrets/4](https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-review-unveiling-the-silicon-secrets/4)

[https://www.anandtech.com/show/14694/amd-rome-epyc-2nd-gen/9](https://www.anandtech.com/show/14694/amd-rome-epyc-2nd-gen/9)

[https://www.anandtech.com/show/14664/testing-intel-ice-lake-10nm/4](https://www.anandtech.com/show/14664/testing-intel-ice-lake-10nm/4)

------
veselin
I question all these scores. The reason is that the Apple chip also runs at a
pretty low frequency with not-so-fast memory. This would mean there is a lot
of IPC headroom left compared to the top x86 chips, in the range of 2x
compared to Skylake. That is a crazy wide core. What kind of reorder buffers
would be required for this? Just 2x more execution units won't be enough.
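For a rough sense of what that gap implies: equal single-thread performance at unequal clocks means the IPC ratio equals the inverse of the clock ratio. A back-of-envelope sketch (the clock figures are assumptions, roughly the A13's peak clock and the 9900K's single-core boost):

```python
# Back-of-envelope: if two chips match on single-thread performance,
# implied IPC ratio = inverse of their clock ratio.
# The clock figures below are assumptions, not measurements.
a13_ghz = 2.66   # approximate A13 Lightning peak clock
i9_ghz = 5.0     # approximate i9-9900K single-core boost

ipc_ratio = i9_ghz / a13_ghz
print(f"implied IPC advantage: ~{ipc_ratio:.2f}x")  # → ~1.88x
```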

------
_ph_
With all the discussion of how realistic the Geekbench results are, why isn't
there an open-source benchmark suite that can be run as an app on iOS devices
and would give a more thorough comparison between different architectures?

------
saagarjha
For those interested in comparing, note that Geekbench has updated their
baseline recently. Make sure to use Geekbench 5 in your tests!

------
tambourine_man
Geekbench’s usefulness aside, I think it’s pretty safe to say that the A
series has surpassed x86 in the ultraportable laptop segment.

An ARM MacBook would be a kickass machine. I actually hope they don’t release
some sort of emulation layer like they did in previous transitions. We don’t
want to be spending battery on that when the tooling these days is much more
homogeneous and high-level.

It’s also very tempting to speculate about what they could do with a larger,
40-something-watt budget.

------
belltaco
Does that include the new Mac Pro?

If so does that mean PC games would theoretically run faster on an iPhone 11
than on desktop PCs (ignoring GPU speeds)?

~~~
Wowfunhappy
The Mac Pro is actually a poor comparison here.

Mac Pros run Xeon processors that prioritize having many cores. Their single
core performance is very good, but worse than Intel's top mainstream chips,
like the 9900K.

------
thesquib
I just want a phone with a battery that lasts a few days; what about a week? I
will never edit video on my phone.

------
mlacks
I can imagine Apple saw this coming a while back and decided to go all in on
iOS while keeping macOS on the back burner.

It must be difficult to be in a management position where you can obviously
see the industry moving towards ARM, but you need to set a plan in motion to
correctly time _when_ the customer/developer is ready to make the switch.

~~~
paulmd
> It must be difficult to be in a management position where you can obviously
> see the industry moving towards ARM, but you need to set a plan in motion to
> correctly time when the customer/developer is ready to make the switch.

Apple is in a unique position to make this happen, though. And they have done
it before with the PowerPC->x86 switchover.

Their solution has previously been fat binaries, with both versions compiled
side by side. They control the toolchain and the App Store, so this is not
that difficult to enforce.

~~~
Tinfoilhat666
Instead of switching macOS to ARM, they could promote iPadOS to laptops. They
have already added a file manager and more advanced windowing to it.

~~~
_ph_
The ironic thing is that the question isn't so much one of hardware as of
software. With the iPad Pro, Apple has an almost perfect piece of hardware
that could be used as an ultralight laptop alternative. The problem is that
Apple does not allow the iPad to be used as a computer. File handling, though
a little improved in iPadOS, is still extremely limited, and even more so is
the software. There are no real development environments on the iPad because
of the App Store limitations; there is no Termux for the iPad. Even a fully
sandboxed one could increase its productivity as a computer a lot.

So the simplest way for Apple to get an ARM-based computer would be to allow
the iPad to be more of a computer. Alternatively, they could do it the hard
way and launch ARM-based MacBooks. That would be nice too. But putting iPadOS
onto a MacBook would rather be a step in the wrong direction. macOS should run
well on ARM anyway.

~~~
ubercow13
Have you tried iSH? It’s kind of like Termux

~~~
tambourine_man
iSH is an x86 emulator running on an ARM phone. It’s crazy.

A nice proof of concept, but I don’t think it’s the right path for a serious
tool.

~~~
ubercow13
True, most stuff doesn’t work and it’s very slow. But it runs youtube-dl well
enough, which is something.

~~~
tambourine_man
No re-encoding I hope :)

~~~
ubercow13
Not re-encoding, but it can remux separated streams quickly using ffmpeg.

------
altmind
... according to Geekbench, which has always favored iPhone CPUs.

------
wayneftw
Gee, that’s great! It’s just too bad that you don’t really have any ownership
or control over what that CPU runs, despite having paid for it in full.

------
Abishek_Muthian
There's no doubt that Apple is currently making the best SoCs for consumer
smartphones. But we should also be careful when comparing them with Apple
computers, as some of those are underpowered; e.g., the Core m3 in the
mid-2017 MacBook is the same one found in Intel-based SBCs such as the
LattePanda Alpha.

It would be interesting to see the single-thread performance of the Intel Xeon
W-3275M CPU from the Mac Pro 2019. The Cascade Lake architecture supposedly
increases single-thread performance by decreasing the base frequency of
certain cores while increasing that of others, as part of its Speed Select
Technology (SST). Of course, this depends on the power configuration set by
Apple; I wonder whether they would voluntarily keep it under max performance
in order to preserve the apparent single-thread superiority of their AX chips.

------
speedplane
Comparing modern hardware like this is fine, but kind of pointless.

The sad truth is that processing speed improvements have slowed to a crawl.
The exponential growth predicted by Moore's law looks more like a shallow
linear progression today.

Computers are not improving nearly as quickly as they once were. Weirdly, no
one is talking about this. If computers stop improving, efficiency will stop
improving, which will ultimately mean that standards of living will stop
improving. If standards of living stop improving, i.e., the pie stops growing,
there could be a whole variety of horrible social and political repercussions
as everyone fights for their finite slice.

~~~
Veedrac
This is actually pretty interesting. The statistic most people noticed is
frequency, which climbed exponentially between 1995 and 2003 and then came to
a dead stop a year later.

[https://i.imgur.com/QAEqcie.png](https://i.imgur.com/QAEqcie.png)

However, almost immediately afterwards, IPC, which had been stagnant over that
whole period, itself started climbing steadily and exponentially, with the
rate falling only fairly slowly.

[https://i.imgur.com/KiUqMhl.png](https://i.imgur.com/KiUqMhl.png)

The overall result is exponential growth in ST performance that far outlived
frequency growth and has only truly tapered off recently... though it's likely
AMD's recent re-entry to the market will recover the trend somewhat.

[https://i.imgur.com/h2jN9RK.png](https://i.imgur.com/h2jN9RK.png)

But now that IPC is starting to struggle (though not falter), core counts are
finally taking off. We haven't hit the end yet.

~~~
speedplane
> But now that IPC is starting to struggle (though not falter), core counts
> are finally taking off. We haven't hit the end yet.

The fact that the industry is now relying on these tricks is actually good
evidence that performance is hitting a wall. In the past, if you wanted a 2x
boost in performance, you could just wait a year. Now waiting doesn't work.
You need to build your own ASIC or move things to the cloud, where you can
rely on big tech's economies of scale.

We have hit an end, and the performance boosts we see today are one-trick
ponies. After you build your ASIC, you can't get much faster. After you move
to the cloud and reduce processing costs, you can't reduce them further. These
tricks have slowed the "perceived" deterioration of Moore's law, but they are
acts of desperation, not real progress.

~~~
mamon
>> You need to build your own ASIC

Might be the reason why Intel is putting FPGAs in the latest Xeons:
[https://www.anandtech.com/show/12773/intel-shows-xeon-scalable-gold-6138p-with-integrated-fpga-shipping-to-vendors](https://www.anandtech.com/show/12773/intel-shows-xeon-scalable-gold-6138p-with-integrated-fpga-shipping-to-vendors)

~~~
speedplane
>> You need to build your own ASIC

> Might be the reason why Intel is putting FPGAs in the latest Xeons:
> [https://www.anandtech.com/show/12773/intel-shows-xeon-scalab...](https://www.anandtech.com/show/12773/intel-shows-xeon-scalab..).

This is an excellent example. As things are trending, I would expect FPGAs and
other hardware programming to become more common. I would not be surprised if,
in a few years, websites and browsers start doing their own FPGA
optimizations. Adding FPGA programmability does not increase raw performance;
it just allows developers to do optimizations they would not otherwise be able
to do.

Along this line of thinking, you would expect developers to start moving away
from slower scripting languages towards faster compiled languages. This has
not happened at any large scale yet, largely because the huge demand for
digital services has placed a premium on developer productivity over
application speed. However, the trend is happening at the edges (e.g., using
the C-backed NumPy within Python, the steady rise of Go). If computer
performance remains as flat as it has been, I predict we'll start seeing
popular libraries rewritten in faster compiled languages (and eventually
FPGAs/ASICs) within the next 5 years.
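That edge case is easy to demonstrate. A toy comparison, not a rigorous benchmark (assumes NumPy is installed; the array size is arbitrary):

```python
import time

import numpy as np

n = 1_000_000
xs = list(range(n))
arr = np.arange(n, dtype=np.int64)

# Pure-Python loop: every multiply and add goes through the interpreter.
t0 = time.perf_counter()
py_sum = sum(x * x for x in xs)
t_py = time.perf_counter() - t0

# NumPy: the same arithmetic runs as compiled C over contiguous memory.
t0 = time.perf_counter()
np_sum = int((arr * arr).sum())
t_np = time.perf_counter() - t0

assert py_sum == np_sum  # identical result, very different cost
print(f"pure Python: {t_py:.4f}s  NumPy: {t_np:.4f}s")
```

On typical hardware the NumPy path is one to two orders of magnitude faster, which is exactly why pushing hot loops into compiled code has become the standard escape hatch.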

