(1) It's an iPad CPU running x86_64 code in emulation, but it's running macOS, not iPadOS. This is not an iPad, which AFAIK can't run x86_64 code in emulation.
(2) It's a two-year-old chip, not a one-year-old one. :)
The whole thing is a little weird; if I'm understanding it correctly, they ran the Windows ARM version of Geekbench natively on the Surface Pro X, and ran the macOS x86_64 version of Geekbench on the ARM-based Developer Transition Kit hardware, and it still performed competitively. So Apple's CPUs are pretty good and Rosetta 2 does really good instruction translation, but I don't think we can draw a lot of conclusions from the specific numbers either way. (Even setting aside complaints about Geekbench as a benchmark, commercially available ARM Macs, when they start shipping, are almost certainly not going to be based on the DTK CPU/hardware, just as the first Intel Macs weren't based on the original Intel DTK.)
The original title I put in was "2 Year old ...". Hacker News just deleted the 2 automatically after posting.
The more interesting comparison is just how much better the ARM Macs will be than the x86 ones in comparable workloads and power consumption. Seeing the Apple ARM cores show all they can do with proper cooling and fewer power limitations is something we've been waiting on for a while.
There will always be fanboys for Apple and anyone else, but this is effectively a two-year-old CPU winning on a benchmark where I don't think anyone thought it would. That suggests that when the real hardware comes out, it will make Microsoft's ARM efforts look downright anemic.
Of course, I might be wrong, but based on Apple's history, I think I'll be proven right by January.
This leads to the next question - why is Microsoft's device so slow? Are they doing this on purpose or due to a technical limitation? Or some other reason?
At one point Qualcomm developed its own high-performance custom ARM cores, but after a huge fumble in their transition to 64-bit, they just gave up and started licensing stock ARM cores.
ARM is getting ready to license a higher performance extra-big core to anyone interested, so there should be some movement there in the not too distant future.
However, you can definitely blame Microsoft for making a bad call on emulating legacy Win32 software.
They tried to transition to ARM-based Windows RT without supporting legacy x86 software at all. Even today, Windows on ARM doesn't support legacy 64-bit x86 programs.
This is not correct. Their CPUs are heavily modified, just not very well executed.
The problem is that Qualcomm puts most of its effort into other parts of the SoC, such as the stupid ML coprocessor. Now you have something like 5-6 different DSP-like coprocessors.
Edit: oh, and 5G. They spent $$$$ to be the first with working 5G modems that nobody needs.
I wonder, and I really hope it's going to be a game changer, how well they will behave on jobs that run for hours or days at full CPU load.
This is a demonstration of how powerful Apple’s current silicon is in absolute terms, and it also shows that it can run x86 code at more than adequate speed for many use cases.
I’d say not much, and not at all. Even if cooling is a factor the outcome still has the same meaning.
Apple silicon is way ahead, and emulation is excellent to the point of being a non-issue.
A devkit that can supply more watts of electricity and remove more watts of heat would definitely have an advantage.
A fairer comparison would require using the same type of case, power delivery, and cooling for both CPUs.
But we can clearly see the order of magnitude, and it is quite obviously impressive in both of the ways already stated.
iPads are faster than Mac minis even with the same thermal differential.
The thermal issue does nothing to dismiss this truth.
A13 Big Cores:
>In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC.
>Last year I’ve noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched the best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.
A13 Little Cores:
>In the face-off against a Cortex-A55 implementation such as on the Snapdragon 855, the new Thunder cores represent a 2.5-3x performance lead while at the same time using less than half the energy.
The differences to Qualcomm’s Adreno architecture are now so big that even the newest Snapdragon 865 peak performance isn’t able to match Apple’s sustained performance figures. It’s no longer that Apple just leads in CPU, they are now also massively leading in GPU.
Not that impressive.
It's user-mode emulation of x86_64 whether the translation happens just-in-time or ahead-of-time; AOT can simply afford to spend more time optimizing away redundant emulation operations in each basic block.
It's not emulating privileged modes or devices, but neither is Microsoft's emulation (or user-mode QEMU for that matter.)
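To make the JIT/AOT distinction concrete, here's a toy sketch (all names and the 1:1 op mapping are invented for illustration; a real translator works on machine code and handles register mapping, flag emulation, etc.): a JIT translator pays the translation cost the first time a basic block executes and caches the result, while an AOT translator converts every block before execution and can spend longer optimizing.

```python
def translate_block(block):
    """Translate one basic block of toy 'x86' ops into toy 'arm' ops."""
    mapping = {"x86_mov": "arm_mov", "x86_add": "arm_add", "x86_jmp": "arm_b"}
    return [mapping[op] for op in block]

class JITTranslator:
    """Translate each block the first time it runs, then reuse the cache."""
    def __init__(self):
        self.cache = {}
        self.translations = 0  # how many blocks we actually translated

    def run_block(self, block_id, block):
        if block_id not in self.cache:
            self.cache[block_id] = translate_block(block)
            self.translations += 1
        return self.cache[block_id]

def aot_translate(program):
    """Translate every block up front, before any of it executes."""
    return {bid: translate_block(b) for bid, b in program.items()}

program = {
    "entry": ["x86_mov", "x86_add"],
    "loop":  ["x86_add", "x86_jmp"],
}

jit = JITTranslator()
for _ in range(1000):           # hot loop: JIT translates it only once
    jit.run_block("loop", program["loop"])
print(jit.translations)          # 1 — cost is paid once per block

aot = aot_translate(program)     # everything translated before execution
print(sorted(aot))               # ['entry', 'loop']
```

Either way the translated code is what actually runs on the CPU; the difference is only when (and how thoroughly) the translation happens.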
Nothing to be too happy about
In either case, this system, running in emulation, will outperform a number of laptops they have sold, like the 12-inch MacBook. That's impressive. The rumored machines they will ship add two more high-performance cores and can likely draw way more power. I wouldn't be shocked to see them be as fast as current Macs in emulation.
Although, granted, I'd imagine the new MacBooks will have newer, faster chips than the developer kits once they finally launch.
> I think the “in emulation” bit is uncertain. I think this got AOT-translated to armv8.
The benchmark is measuring the performance of Geekbench running in Rosetta, which is the emulation layer allowing x86_64 code to run under Apple’s ARM chip. Are you saying you don’t trust that the benchmark is actually running under Rosetta, or are you claiming that Rosetta doesn’t count as “true” emulation because it performs some translation AOT?
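On the "is it actually running under Rosetta" question: Apple exposes a `sysctl.proc_translated` sysctl that a process can query to find out whether it's being translated (1 = translated, 0 = native). A rough Python sketch of that check — the non-macOS fallbacks returning `None` are my own addition, and on anything other than macOS the key simply won't exist:

```python
import ctypes
import ctypes.util

def running_under_rosetta():
    """Return True/False on macOS (translated/native), None if unknowable."""
    libc_path = ctypes.util.find_library("c")
    if libc_path is None:
        return None
    libc = ctypes.CDLL(libc_path, use_errno=True)
    try:
        sysctlbyname = libc.sysctlbyname
    except AttributeError:
        return None  # no sysctlbyname symbol (e.g. Linux glibc)
    sysctlbyname.argtypes = [
        ctypes.c_char_p, ctypes.c_void_p,
        ctypes.POINTER(ctypes.c_size_t),
        ctypes.c_void_p, ctypes.c_size_t,
    ]
    sysctlbyname.restype = ctypes.c_int
    flag = ctypes.c_int(0)
    size = ctypes.c_size_t(ctypes.sizeof(flag))
    ret = sysctlbyname(b"sysctl.proc_translated",
                       ctypes.byref(flag), ctypes.byref(size), None, 0)
    if ret != 0:
        return None  # key absent: native Intel Mac, old macOS, or non-macOS
    return bool(flag.value)

print(running_under_rosetta())
```

Geekbench could do (and presumably does) an equivalent check, so "the x86_64 build under Rosetta" is a verifiable claim, not a guess.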
> Also not very apples-to-apples comparison.
How so? The benchmark is between Apple’s flagship tablet CPU and Microsoft’s flagship tablet CPU, with the additional handicap that Apple’s chip is running the code through an emulation layer in order to demonstrate the impressive performance of Rosetta. How is the comparison unfair?
According to Apple it's not an emulation layer at all, though.
They themselves call it "translation technology" and the term "emulation" is nowhere to be found.
I'm not sure why tech outlets insist on calling it an emulation layer; was it called that in the WWDC presentation?
My intuition is that “emulation” generally means an entire system rather than individual apps running on a system?
Rosetta translates binary code of an app from one instruction set to another.
And I can install whatever application I want, and not what Apple allows me to. It might be a bit slower, but it is more flexible.
I've heard this repeated elsewhere as well, and I'm quite confused about it. Because, Apple also shared this weird screenshot:
The most permissive option is "Reduced Security: Allows any version of signed operating system software ever trusted by Apple to run."
How do you run an OS that Apple never explicitly trusted?