Performance comparison between 2017 iPad Pro and 2012 iMac (geekbench.com)
122 points by my123 on June 7, 2017 | 123 comments

It might make more sense to compare to a current-gen laptop, not a Desktop machine from five years ago: http://browser.geekbench.com/v4/cpu/compare/3036382?baseline... (archive: http://archive.is/EVcuE)

I think this is a better comparison, as both have the same TDP and are fanless. (That Intel chip is a Kaby Lake Y Core i5, 1.2GHz base / 3.2GHz boost.)

Edit for perspective: That's the same CPU in the just-upgraded MacBooks.


Both of those are comparing a 3-core iPad with a 2-core Intel CPU.

And parity at the same TDP is unsurprising. The trouble is that the i5-7Y57 is quite slow by desktop standards and there is currently no ARM equivalent to 4+ core 45+ watt Intel processors at any TDP.

Unsurprising? So it is easy to at least equal Intel with their vast experience after only a few years, when no one else is able to do it?

By the way, if you have a problem with the number of cores, just compare the single-core score. I don't see a problem though; if they can cram one more core into the same TDP, then it's fair.

The jury is still out on whether Apple could compete with Intel at those higher TDPs (15W/28W/45W).

> By the way, if you have a problem with the number of cores, just compare the single-core score.

Then the Intel processor comes out ahead despite the iPad having a higher clock speed.

> I don't see a problem though; if they can cram one more core into the same TDP, then it's fair.

The quad core i5-7442EQ will destroy the dual core i5-7267U at anything threaded even though it has a lower TDP. Having more cores is an asymmetric advantage in threaded benchmarks.

Ignore the stated frequency on the Geekbench page, as it's almost always wrong. That Intel CPU is pegged at 3.2GHz for the duration of the bench vs. (supposedly) 2.36GHz on the iPad.

Well, Intel kinda paved the way for most of the performance improvement ideas in these CPUs already. The fruit is hanging further and further up the tree.

It's unlikely that the A series would be able to compete with Intel CPUs at significantly higher TDPs from the get-go, especially when you start considering interconnects between cores, etc. They might be able to get close to lower-TDP desktop i5s, but then just look at the difficulty Ryzen is having doing just that...

So it is easy to at least equal Intel with their vast experience after only a few years...

ARM will be 27 years old this year.

They're talking specifically about Apple's SoCs, which have only been produced for a few years (since 2011?).

Sure. Which are based on twenty years of bootstrapping, at minimum, and the forward, technical trailblazing of Intel et al.

Catching up is easier than pushing the envelope. It's impressive, but it's not earth-shattering.

Still nobody expected Apple to get this good so quickly.

That's... incredible.

Performance is no longer the driving force compelling a lot of people to upgrade their machines. Smaller form factors, longer battery life, wireless connectivity, portability, and weight are the primary motivators for a lot of people. Just look at the MacBook Air: when it came out, I knew dozens of people who dumped their 17-inch MacBook Pros for it because it was good enough for their needs from a performance perspective and kicked the shit out of the MBP in every other measure.

I... what? The first Air was underpowered, throttled quickly, didn't even have a backlit keyboard, had a worse screen, and was much more expensive than the base MBP. It slightly outmatched the MBP physically and in battery life, but it did not 'kick the shit out of the MBP in every other measure'.

He probably means the 2010 or 2011 Airs, which is when they "got good".

If you're using the metrics that person has outlined (battery life, portability, small form factor, etc.) then the Air would destroy the 17" MBP. Those things are tanks.

Not saying I agree, I'll take power over some tiny thing any day. Just pointing out your reply is targeting all of the wrong things.

A modern 17" MacBook Pro might be the one thing that would convince me to go over to the dark side. My girlfriend has a 5-6-year-old one, and the build quality can't be beat. Unfortunately, it's reaching the point where it is barely usable, because it doesn't have the performance specs to keep up nowadays.

I don't like these tiny little laptops everybody is building these days. I want a full keyboard and enough screen real-estate to actually do something without needing reading glasses.

There is an... interesting post by Linus Torvalds about GeekBench:


In other words, Linus claims it's mainly testing the speed of specialised instructions implemented in hardware, and not general-purpose computing. For a similar analogy, compare the speed of a Bitcoin mining ASIC vs. a GPU or even a CPU. At the same clock speed, dedicated hardware will vastly outperform the software implementation.

I think it's somewhat ironic then, that the "RISC" CPU is getting higher scores due to the presence of such CISC-y instructions.

That was about Geekbench 3, the previous version. The new version doesn't have those problems, and also it uses the exact same workload between desktops and mobile devices.

In those same forums there are comments by Linus about the new Geekbench 4 and they are much more positive. http://www.realworldtech.com/forum/?threadid=159853&curposti...

Something is still wonky with their approach. Consider a 3770 vs 6700: https://browser.primatelabs.com/v4/cpu/compare/3041851?basel...

Almost all other benchmarks I've seen put the 6700 at ~15% faster for both single- and multi-core tests, including, most importantly, real-world benchmarks. Geekbench has the 6700 at 47% faster single-core and 64% faster multi-core. The few individual benchmarks I've seen with numbers close to Geekbench's overall spread are using new instructions or are actually testing the onboard GPU.

The 6700 result you selected is significantly faster than the average for the 6700: http://browser.geekbench.com/processor-benchmarks

Maybe it's overclocked? If you look at the averages for the 3770 and the 6700 Geekbench reports a difference of 20% for single core.

>The new version doesn't have those problems.

See, that's the thing. It may be much better than before, but starting from such a low level this doesn't mean much.

Considering the whole Geekbench history, I have a hard time accepting that statement in its absolute form: that they suddenly are _the_ indisputable luminaries on all the intricacies, pitfalls, and subtleties of multi-arch benchmarking, and have done everything right to make a single artificial score a meaningful comparator (which is questionable in itself in the best of cases, IMO).

I don't think a lot of these tests are that "CISC-y". They're just incredibly synthetic.

Both CPUs are entirely CISC. The Intel CPU just has way more of... well, everything. That's the primary reason for the TDP gap.

It's really hard to say anything about the benchmarks without the assembly for both architectures, though. Is the Intel memory copy a simple "rep mov", or a mov using all xmm registers? Is the ARM AES benchmark using AES extensions?

Well, akhchtually! rep mov is pretty fast these days (on Ivy Bridge onwards).

On the other hand XMM registers are pretty old, we have YMM and ZMM now.

When I tried on an Ivy Bridge laptop-class processor a few years back, a move over XMM utilizing a handful of registers was still significantly faster than rep mov. It would make sense for Intel to improve it, though.

But my point was more that you need to know what it does to know what you compare. That the "fast way" changes over time makes it even more important.

Only for small copies (due to static overhead/no branch prediction in microcode). For large buffers, rep/movs has been a win since Ivy Bridge once your buffers are too large for L2. A big part of this is that the stores used by rep/movs get to do a dataless read-for-ownership, so they actually move 1/3 less data around than any other implementation when they miss L2.

Another obvious win is that you do not trash "user-facing" registers with rep mov, like you do with a multi-xmm-register dance.

I'm pro rep mov, but an XMM Duff's device still won over rep mov where I tested. It was years ago, however, so I do not remember if I tried running above L2 and L3. It was related to a compiler-optimization debate about alternate memcpy/memmove implementations...

> get to do a dataless read-for-ownership, so they actually move 1/3 less data around than any other implementation when they miss L2.

Can't you achieve the same using cache-line-sized non-temporal stores?

There are no cacheline-sized store instructions pre-AVX-512 (on Ivy Bridge, the widest store is 32B, and even that cracks into two 16B store uops; it's a real 32B store on Haswell+). Also, NT stores suffer from a bunch of other hazards that rep/movs avoids.

Non temporal stores shouldn't go to the cache at all though. They go to write combining buffers instead.

Shouldn't, but do. They RFO into the LLC.

I would argue that the LLVM test isn't skewed with specialized instructions, and the iPad wins it regardless.

I'd really be interested to see an evaluation of what makes the difference there.

First guess: The iPad's massively larger L2 cache.

And what, exactly, is the LLVM test?

Do you have access to the source?

Well, you're comparing against a 5 year old machine. What point does this make? It's about as meaningful as saying my smartphone has reached parity with a Cray supercomputer (but a 20 year old one).

You don't think it's a little impressive that a mobile SoC in a tablet (that is constrained by power and physical space) is matching the performance of an i7 in an iMac that has the benefits of sitting on your desk plugged into power all day, isn't constrained by size, and has significant active cooling?

I, for one, am constantly amazed by the performance of such mobile/small devices (regardless of type and model). However, shouldn't such a result also say something about the iMac (or any other "desktop" for that matter)? I mean, 5 years is not that much when it comes to performance; "shrinking" a desktop to such an extent in 5 years (while keeping the performance) seems unlikely.

It's definitely a noteworthy accomplishment. Not sure of all the benchmarks they used, but hopefully they tested more than just hardware-accelerated operations, or else it detracts a bit from overall versatility.

The i7 can keep that performance indefinitely, though. Can the SoC do the same thing (i.e. not trigger thermal throttling) without active cooling and on battery power?

Honest question btw, I have no idea how the Apple cpu behaves in this scenario.

I think it's impressive in a general Moore's law is mind-boggling way, in the way my phone is more powerful than the desktop I bought in 2007. Nothing particular to this SoC, and nothing special about this particular comparison to this particular machine.

Mobile CPUs have many constraints to adapt to, but "size" shouldn't really be one of them.

Huh? Size of the system is one of the defining constraints of "mobile". I guess you are saying the chip is small enough you don't think size of the CPU will be the limiting factor for setting the size of the system? True that other components are larger, but not true that the CPU is insignificant!

I'm curious why you say that. Circuit board real estate seems to be a precious commodity especially in iphones. What do you mean by size?

I was answering about the CPU (or SoC)

As an aside, in a high-production-volume chip, size (silicon area) == cost. It's really that simple for a lot of design choices.

It makes a big difference for cooling though.

As in: the bigger, the better?? :)

You're right that it is an unfair comparison to the state-of-the-art. And a lot of people are considering upgrading their older systems not because those systems are slow or under-performing but because of other factors like portability, dying batteries, and failing hardware. I think their point is that an iPad can be a suitable replacement for a lot of people not necessarily looking for an upgrade.

My mom has a behemoth of a Dell laptop from 7 or 8 years ago. It's an i7 with 8GB of RAM, and if I dropped an SSD in it for her, it would hum along just fine. It has performance parity with my maxed-out Yoga 2 from a couple of years back. She's looking to upgrade it not due to performance but because it's heavy, the battery is dead and a replacement is $200, and she wants a 10-key.

My point is that performance is no longer the driving force compelling people to upgrade.

Well, it somewhat is on iOS, because somehow Apple's software gets extremely heavy. Wirth's law or something. The iPad 2 used to feel pretty damn speedy; now it's an absolute dog.

Definitely and I think that trend will continue as the platform matures into something approaching feature parity with a desktop operating system.

Hopefully that won't actually happen. Apple and Google are putting a great deal of thought into the APIs and features they're implementing on iOS and Android with much consideration to factors such as battery use and performance.

That being said, the latest versions of Google's apps have really kneecapped the Nexus 4 which I've used as my daily driver since 2012. Sadly I might just have to upgrade.

I would say an important point of note is that a lot of people who would consider themselves power users still use Intel chips from the early 2010s. Coming from someone who's using a Sandy Bridge i7.

Like you, I'm using a quad-core 'Server' Mac Mini from 2012 with a Sandy Bridge processor. What exactly do you make of this? That we are not power users? (Not being sarcastic or anything like that; I've been thinking along these same lines for a while, but I cannot quite put my finger on it.)

That desktop CPU performance gains have slowed.

Well, the power wall has kind of set an upper bound on clock speed (although Kaby Lakes can do 5GHz no problem, which is pretty impressive). So until a fundamental breakthrough in heat extraction happens, we're kinda left with only power efficiency and parallelism as ways to increase performance (which desktops have done). Moore's law has still more or less held true thus far. I think ARM just had a long way to catch up, but it will eventually hit physical barriers at some point and performance increases will slow.

It makes the point of a click-bait headline :). Try running a latest gen Intel i7 against the latest ARM chip on desktop workloads (high-end gaming, databases, compilers, video encoding, AI computation, etc.) and we will see a bit of a different picture.

And while these small devices do an amazing job, one should never forget that they are also extremely optimized for efficiency. Intel Desktop CPUs have different goals and you need to take the whole picture into account at which point you will see that comparing Intel & ARM is like comparing apples and bananas.

> desktop workloads (high-end gaming, databases, compilers, video encoding, AI computation, etc.) and we will see a bit of a different picture.

Well, that still matters. A lot of desktop users, even in enterprise, are not doing any of that, and a 5-year-old machine is perfectly good enough. We are talking tens of thousands of desktops in each large company that have been matched in power by the latest smartphones and tablets.

So that could mean the end of the general-user gravy train for Intel and AMD. It could mean your next MBP is going to cost even more than the next-best tablet, for a smaller but still significant performance difference.

You might prefer this one, as I find it much more relevant. http://browser.geekbench.com/v4/cpu/compare/3036382?baseline...

That is a Kaby Lake Y Core i5. Approximately the same TDP, both fanless. That's the CPU in the new Kaby Lake MacBooks that Apple just upgraded to.

It still seems to lose to the A10X in multi-core, with very similar single-core performance.

Well, for one, it shows the progress made in 5 years' time. And a lot of desktop PCs nowadays are 5 years old or even older.

That reference i7 is significantly faster than the main PC of 95%+ of my friends/relatives. Then again, the vast majority of people need a proper home PC for little more than the occasional word-processing task, so this really is getting pretty academic until an iPad is at least interesting to musicians, video editors, or hardcore gamers.

I think it's more impressive to compare a phone with a 20 year old Silicon Graphics workstation. The OpenGL performance obliterates it.

You don't think that a battery-powered pocket computer with as much computing power as a 1997 Cray is meaningful?

I think it's more meaningful that this iPad comparison is surprising at all. Back in the full throes of Moore's law, this would have been expected.

It really shows we're in a new era of sub-exponential improvement.

For how long, though? Sustained performance of SoCs in tablets and smartphones is way lower than what benchmarks measure.

Edit: Furthermore, something looks fishy with the Intel results. Desktop Intel CPUs definitely have multiprocess score ratios near their physical core counts (increased by 10-30% for HT-capable ones), yet Geekbench only seems to achieve an MP ratio a bit above 2.

Still, if performance is actually comparable (for which I have my doubts) that means we could get ARM laptops which could easily sustain it.

Wow, I have that i7 3770 in my tower at home (now paired with a Geforce 1070) and it still feels delightfully overpowered. If I could run Ableton Live or Reason on this, or even if Apple simply had a canonical, first-rate controller that encouraged interesting Switch-like games, this would be tempting. As it is, I can't think of a single thing I could do with this that I couldn't do with the iPad Air 2 that mostly sits unused on our coffee table.

So, in the near future, we will have more companies than just Intel and AMD that are able to make fast-enough CPUs for consumers.

I think this is a good thing. Traditionally, CPU technology has seemed so complex that few could handle it. But now Apple, Google, and other manufacturers will compete in this field.

This will lower the cost of massive computing.

You mean like ARM? Microsoft builds Windows for ARM these days, full fledged Windows this time unlike Windows RT. Photoshop has been demo'ed on a Qualcomm Snapdragon CPU running Windows 10 and devices are expected to be released later this year.

And maybe the end game for Apple's A series of CPUs is to get them at least into the MacBook and MacBook Air, if not the MacBook Pro, to begin with. They'd get a whole other kind of control over that supply chain and wouldn't have to stall releases based on Intel's schedule. I wouldn't be surprised if they have a version of macOS compiling already.

The macOS-on-ARM compiling you mentioned is very interesting. I guess Apple or other big manufacturers may already be able to make x86-64 CPUs or CPUs of other architectures.

Frankly it has been the case for years already.

We keep coming up with ever more exotic reasons for not retiring the x86 hegemony for home computing, but the core one that we are loath to acknowledge is legacy software.

WinTel sticks around because anything else means a wholesale replacement of every program ever bought.

Even Intel failed against that beast when they tried to push Itanium, and it was not even a desktop product.

Similarly, FOSS has a hard time getting a foothold because devs keep breaking backwards compatibility on a whim (and no, Flatpak et al. is not a solution; it is lipstick on a pig).

ARM SoCs throttle aggressively, and reaching x86 levels of sustained performance will lead to more power consumption, so the Geekbench scores can be misleading.

Having said that, the performance of some of the newer ARM SoCs is encouraging. For instance, you can now get a tiny 4K HTPC for as little as $50-80, making expensive and power-hungry x86 parts redundant there.

Gigabit routers, NAS boxes, HTPC devices, and even some Chromebooks now perform adequately with ARM SoCs while sipping power.

iPads/iPhones of the last few generations don't throttle, at least not in 30m-1h of continuous usage, unlike Android devices.

Check AnandTech reviews if you don't believe it.

Archive link for those having trouble loading the site: http://archive.is/V4K0P

Whoever posted this took a very, very pessimistic 3770. Here's a much more representative result: http://browser.geekbench.com/v4/cpu/compare/3036382?baseline...

I also have strong suspicions that real world workloads that aren't so cache-optimised would swing massively in favour of the 3770 as well. The best benchmarks we have of that unfortunately are things like 3DMark physics bench (the old A9X was half as fast as even an M-5Y71 in that benchmark).

It's also worth noting that for TDP comparisons, even Intel's own 35w and 45w parts compare favourably to the desktop parts, and newer generations increase this gap substantially. Some multi-core geekbench scores per watt (higher is better):

77W i7-3770: 162 - 45W i7-3770T: 255 - 45W i7-3632QM: 257 - 17W i7-3687U: 315

3.5W-7W i7-7Y75? 900-1800.

A10X, assuming a 5W TDP? 1800.

This is a big deal, but a terrible way to point it out. I had that iMac until recently and it rips; that's a big boost for a tablet. That said, it's like saying the iPhone has more computing power than NASA... in 1963.

Makes me wonder when we'll see a MacBook Air with an ARM SoC.

When the transition to Intel was announced, Steve Jobs said that they had had OS X running on Intel CPUs since the first version. I would not be surprised if there are ARM-powered Macs in Apple's labs right now.

Given how often the common internals between macOS and iOS are paraded around, you can bet your ass they have.

Heck, iOS 11 seems to be one serious transition point in that regard.

Slap a keyboard case on the new iPad Pro and you're a long way there once iOS 11 ships...

My two-decades-old dream is to simply slide a wireless mouse, keyboard, and monitor next to my phone, and then I'd have a desktop computer.

Sooner or later our mobile devices will morph into our desktops?

Microsoft kinda did that with Continuum for Win10 phones. For the majority of PC workloads it's more than fast enough. It's a shame it didn't take off.

I think the basic problem there is the few devices that support it, and their price.

Microsoft just debuted their new shell which apparently makes Continuum a lot better. So it's still part of their future strategy:


Interesting. Wonder if they will try to reboot the W10 mobile at some point.

Halium bootstrapping Plasma Mobile/Ubuntu Touch.


See also postmarketOS, whose founder recently posted about it on HN.

You might want to check out Samsung DeX then.

Someone on Reddit used Samsung DeX as a laptop replacement for two weeks.

I was sure she would hate it, but she seemed to like it.

Does anyone know if the x86 emulation for ARM that Microsoft is using was created in-house or licensed from a third party? If it's the latter, it certainly opens the door for Apple to easily get their hands on the same thing for a quicker transition; not that they couldn't do it themselves if they felt it was needed (though I think they bought or licensed the technology for Rosetta).

What does it say? Is it "A10X is awesome" or is it "geekbench sucks"?

Something fishy is going on here. I don't have a ton of experience with hardware, but enough to be pretty sure that the 2012 iMac is under-performing its specs here.

(edit) For example, if you just look at CPU benchmarks, the processor in the iMac is not far off the i7 7700k that someone posted a benchmark for elsewhere in the thread, yet the 7700k supposedly thrashes the iPad while the iMac doesn't. Maybe both the iMac and iPad have thermal issues that a standard PC desktop would not.

The thing standing out to me is the almost-twice-as-good LLVM performance of the iPad. It might make for a good iOS dev machine, after all. I still don't think it would be pleasant without a touchpad, but I also haven't tried it yet. Also, using it all day would be very hard on the arms/shoulders, I think.

Right, but that’s a 5 year old machine they’re comparing against. Still impressive, but not what the title implies.

I found an i7-7700K benchmark to compare it to [1]. Hopefully it's an apples-to-apples comparison...

[1] http://browser.geekbench.com/v4/cpu/compare/3036382?baseline...

That is a 90-watt part compared to a 5- or 10-watt part. The performance per watt of the A10X is amazing.

CPU performance doesn't scale linearly with TDP at all. Even Intel's own lower power offerings look insanely awesome when you consider performance per watt vs their higher end parts.

Well, for about the 10 minutes or so it takes to run the benchmark. Then it's straight into thermal throttling.

That 90 watt part meanwhile is designed to put out that performance all day long.

(You might think that makes the iPad performance all the more impressive but that kind of throttling is nowadays built into the chip itself, it's not just limited thermal design. They are peddling this as a pro device and I would like my pro device to keep up that performance level for the entire time it takes to build my C++ project or run my deep learning kernel)

iPad doesn't thermal throttle.

Agreed, title might better be: "iPad performance now where iMacs were five years ago." Still interesting (says someone using an iMac from 2008).

That's a desktop Core i7 with 4 cores and 8 threads, clocked much higher than the Apple A10X.

Also notably one is actively cooled and one isn't. I love that these beastly SoCs are peaceful little chips sitting silently.

It's still old. There were the 4770, 4790, 6700, and now the 7700 after it. Each version was only a bit better than its predecessor, but four small bumps add up to a big bump.

It is not slow, but it is not as fast as a modern machine. Maybe impressive, but not parity.

The i7-7700 is around 20-25% faster. An ultra-low-power chip only 20-25% behind current high-power desktop chips is quite impressive.

Not in geekbench though, see https://news.ycombinator.com/item?id=14505189. That's 20k multi thread score vs 9k.

That benchmark might be a bit flawed.

Something does not add up here. Taking that benchmark and comparing it to the other, we get http://browser.geekbench.com/v4/cpu/compare/231758?baseline=...

The i7-7700K is not twice as fast.

It's the iMac's thermal throttling...

That is incredibly poor design. A beefy CPU crippled because it's throttled. Why not use a lower-power CPU instead and get better performance?

Wouldn't it make sense to try to compare it to one of Intel's more recent lower-spec offerings anyway? Something that compares in terms of price and form factor.

https://news.ycombinator.com/item?id=14505243 Well, it performs well against them too.

Title should probably mention the "Intel platform" is 5 years old.

Wow, ARM has blown up due to cellphones. It's starting to compare to x86 in performance to a degree.

Wonder how long people are going to keep buying the thousand-dollar cellphones each year that are fueling this.

Looks like processor architecture really matters.

Btw, is LLVM still faster on the 5-year-old machine than on the iPad? Am I seeing that right?

You've got it exactly the wrong way around. The iPad has nearly double the score in that particular benchmark (higher is better).

You saw the reverse. :) LLVM is much faster on the iPad.

ARM chips are awesome! I'm sorry I cannot buy ARM shares. The next generation of Macs will probably have ARM chips.

Are they really saying that the iPad has 30% more memory bandwidth?

lol, the Mac is 5 years old

And despite this, lots of things that you could do on a Commodore 64 or ZX Spectrum you (almost) can't do on an iPad: text editing, programming.

Of course you can edit text on an iPad. That's ridiculous. There are hundreds and hundreds of apps. I would start with something like Coda: https://panic.com/coda-ios/

As for programming well it depends on the language. You can code Python: http://omz-software.com/pythonista/

The US went to the moon and back relying on hand calculations; it has to be put in perspective, innit?

Hand calculations and 4% of GDP :)

I think you might be confusing GDP with something else. Apollo spending peaked in 1966 at just shy of $3 billion [1], while US GDP was $815 billion [2], for about 0.3% of GDP. On the other hand, [3] shows US federal outlays at $17.2 billion, so Apollo would have been 17% of the total budget.

[1] https://en.wikipedia.org/wiki/Apollo_program#Costs [2] https://www.google.com/search?q=gdp+of+us [3] https://www.whitehouse.gov/omb/budget/Historicals

Hrm, I read it stated that way somewhere, but looking at this link, it was more like 4% of the federal budget. Still an incredible outlay of money.


Still, hand calculations, a brain, and two hands are not GDP- (or CPU-)dependent. Engineering works are. My point is that Apple is cretinising the world and its brightest minds; neither a novel nor a popular point, I know, but more and more poignant.
