Edit for perspective: That's the same CPU that's in the just-upgraded MacBooks.
And parity at the same TDP is unsurprising. The trouble is that the i5-7Y57 is quite slow by desktop standards and there is currently no ARM equivalent to 4+ core 45+ watt Intel processors at any TDP.
By the way, if you have a problem with the number of cores, just compare the single-core score. I don't see a problem though; if they can cram one more core into the same TDP, then it's fair.
The jury is still out on whether Apple could compete with Intel at those higher TDPs (15 W / 28 W / 45 W).
Then the Intel processor comes out ahead despite the iPad having a higher clock speed.
> I don't see a problem though; if they can cram one more core into the same TDP, then it's fair.
The quad core i5-7442EQ will destroy the dual core i5-7267U at anything threaded even though it has a lower TDP. Having more cores is an asymmetric advantage in threaded benchmarks.
It's unlikely that the A series would be able to compete with Intel CPUs at significantly higher TDPs from the get-go, especially once you start considering the interconnects between cores and so on. They might be able to get close to lower-TDP desktop i5s, but then just look at the difficulty Ryzen is having doing just that...
ARM will be 27 years old this year.
Catching up is easier than pushing the envelope. It's impressive, but it's not earth-shattering.
Not saying I agree; I'll take power over some tiny thing any day. Just pointing out that your reply is targeting all of the wrong things.
I don't like these tiny little laptops everybody is building these days. I want a full keyboard and enough screen real-estate to actually do something without needing reading glasses.
In other words, Linus claims it's mainly testing the speed of specialised instructions implemented in hardware, and not general-purpose computing. For a similar analogy, compare the speed of a Bitcoin mining ASIC vs. a GPU or even a CPU. At the same clock speed, dedicated hardware will vastly outperform the software implementation.
I think it's somewhat ironic then, that the "RISC" CPU is getting higher scores due to the presence of such CISC-y instructions.
In those same forums there are comments by Linus about the new Geekbench 4 and they are much more positive.
Almost all other benchmarks I've seen put the 6700 at ~15% faster for both single- and multi-core tests, including, most importantly, real-world benchmarks. Geekbench has the 6700 at 47% faster single-core and 64% faster multi-core. The few individual benchmarks I've seen with numbers close to Geekbench's overall spread are using new instructions or are actually testing the onboard GPU.
Maybe it's overclocked? If you look at the averages for the 3770 and the 6700 Geekbench reports a difference of 20% for single core.
See, that's the thing. It may be much better than before, but starting from such a low level this doesn't mean much.
Considering the whole Geekbench history, I have a hard time accepting that statement in its absolute form — that they are suddenly _the_ indisputable luminaries on all the intricacies, pitfalls, and subtleties of multi-arch benchmarking, and have done everything right to make a single artificial score a meaningful comparator (which is questionable in itself in the best of cases, IMO).
Both CPUs are effectively CISC at this point. The Intel CPU just has way more of... well, everything. That's the primary reason for the TDP gap.
It's really hard to say anything about the benchmarks without the assembly for both architectures, though. Is the Intel memory copy a simple "rep movs", or a copy using all the XMM registers? Is the ARM AES benchmark using the AES extensions?
On the other hand, XMM registers are pretty old; we have YMM and ZMM now.
But my point was more that you need to know what it does to know what you compare. That the "fast way" changes over time makes it even more important.
I'm pro "rep movs", but an XMM Duff's device beat it where I tested. It was years ago, however, so I don't remember if I tried running above L2 and L3 sizes. It was related to a compiler-optimization debate about alternate memcpy/memmove implementations...
Can't you achieve the same using cache-line-sized non-temporal stores?
First guess: The iPad's massively larger L2 cache.
Honest question btw, I have no idea how the Apple cpu behaves in this scenario.
My mom has a behemoth of a Dell laptop from 7 or 8 years ago. It's an i7 with 8GB of RAM, and if I dropped an SSD in it for her, it would hum along just fine. It has performance parity with my maxed-out Yoga 2 from a couple of years back. She's looking to upgrade it not due to performance but because it's heavy, the battery is dead and a replacement is $200, and she wants a 10-key.
My point is that performance is no longer the driving force compelling people to upgrade.
Hopefully that won't actually happen. Apple and Google are putting a great deal of thought into the APIs and features they're implementing on iOS and Android with much consideration to factors such as battery use and performance.
That being said, the latest versions of Google's apps have really kneecapped the Nexus 4 which I've used as my daily driver since 2012. Sadly I might just have to upgrade.
And while these small devices do an amazing job, one should never forget that they are also extremely optimized for efficiency. Intel Desktop CPUs have different goals and you need to take the whole picture into account at which point you will see that comparing Intel & ARM is like comparing apples and bananas.
Well, that still matters. A lot of desktop users, even in enterprise, aren't doing any of that, and a 5-year-old machine is perfectly good enough. We're talking tens of thousands of desktops in each large company that have now been matched in power by the latest smartphones and tablets.
So that could mean the end of the general-user gravy train for Intel and AMD. And it could mean your next MBP costs even more than the best tablet, for a smaller but still significant performance difference.
That is a Kaby Lake Y-series Core i5. Approximately the same TDP, both fanless. That's the CPU in the new Kaby Lake MacBooks that Apple just upgraded to.
It seems it still loses to the A10X in multi-core, with very similar single-core performance.
It really shows we're in a new era of sub-exponential improvement.
Edit: Furthermore, something looks fishy with the Intel results. Desktop Intel CPUs definitely have multiprocess score ratios near their physical core counts (10-30% higher for HT-capable ones), yet Geekbench only seems to achieve an MP ratio a bit above 2.
I think this is a good thing. Traditionally, CPU technology has seemed so complex that few could handle it. But now Apple, Google, and some other manufacturers will compete in this field.
This will lower the cost of massive computing.
And maybe the endgame for Apple's A-series CPUs is to get them at least into the MacBook and MacBook Air, if not the MacBook Pro, to begin with. They'd get a whole other kind of control over that supply chain, and they wouldn't have to stall releases based on Intel's schedules. I wouldn't be surprised if they already have a version of macOS compiling for it.
We keep coming up with ever more exotic reasons for not retiring the x86 hegemony for home computing, but the core one that we are loath to acknowledge is legacy software.
WinTel sticks around because anything else means a wholesale replacement of every program ever bought.
Even Intel failed against that beast when they tried to push Itanium, and it was not even a desktop product.
Similarly, FOSS has a hard time getting a foothold because devs keep breaking backwards compatibility on a whim (and no, Flatpak et al. are not a solution; that's lipstick on a pig).
Having said that, the performance of some of the newer ARM SoCs is encouraging. For instance, you can now get a tiny 4K HTPC for as little as $50-80, making the use of expensive and power-hungry x86 parts redundant.
Gigabit routers, NAS boxes, HTPC devices, and even some Chromebooks now perform adequately with ARM SoCs while sipping power.
Check Anandtech reviews if you don’t believe it.
I also have strong suspicions that real world workloads that aren't so cache-optimised would swing massively in favour of the 3770 as well. The best benchmarks we have of that unfortunately are things like 3DMark physics bench (the old A9X was half as fast as even an M-5Y71 in that benchmark).
It's also worth noting that for TDP comparisons, even Intel's own 35w and 45w parts compare favourably to the desktop parts, and newer generations increase this gap substantially. Some multi-core geekbench scores per watt (higher is better):
77 W i7-3770: 162
45 W i7-3770T: 255
45 W i7-3632QM: 257
17 W i7-3687U: 315
3.5-7 W i7-7Y75: 900-1800
A10X (assuming a ~5 W TDP): 1800
Heck, iOS 11 seems to be one serious transition point in that regard.
Sooner or later our mobile devices will morph into our desktops?
See also postmarketOS, whose founder recently posted about it on HN.
I was sure she would hate it, but she seemed to like it.
(edit) For example, if you just look at CPU benchmarks, the processor in the iMac is not far off the i7 7700k that someone posted a benchmark for elsewhere in the thread, yet the 7700k supposedly thrashes the iPad while the iMac doesn't. Maybe both the iMac and iPad have thermal issues that a standard PC desktop would not.
That 90 watt part meanwhile is designed to put out that performance all day long.
(You might think that makes the iPad performance all the more impressive but that kind of throttling is nowadays built into the chip itself, it's not just limited thermal design. They are peddling this as a pro device and I would like my pro device to keep up that performance level for the entire time it takes to build my C++ project or run my deep learning kernel)
It is not slow, but it is not as fast as a modern machine. Maybe impressive, but not parity.
That benchmark might be a bit flawed.
The i7-7700K is not twice as fast.
Wonder how long people are going to keep buying the thousand-dollar cellphones each year that are fueling this.
Btw, is the LLVM test still faster on a 5-year-old machine than on an iPad? Am I reading that right?
As for programming, well, it depends on the language. You can code in Python: http://omz-software.com/pythonista/