This Geekbench comparison, on the other hand, tells me they're on par (both single- and multi-core scores).
I'd take this claim with a massive boulder of salt, if you consider it at all, that is.
They run the same workload compiled with Clang for each platform, as described in a PDF on their website.
I'm pretty sure the iPad Pro SoC in a MacBook Pro case would be at least on par with the latest Intel-specced MBP.
At best, you can use a hardware encoder that happens to come with the GPU, but usually you won't want that, because it comes with a compression hit compared to software encoding.
The relevance to media encoding is that the higher profiles do a lot of data-dependent special casing. It's no accident that the earliest CUDA video codecs only handled the baseline profile.
This also hurts a CPU implementation, but much less.
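To make the data-dependence point concrete, here's a toy C sketch. It is not real CABAC and the names are made up; it only illustrates the shape of the problem: every decoded symbol updates a context that the next symbol's decode depends on, so the inner loop is inherently serial in a way that fixed-function or GPU decoders handle poorly.

    #include <stdint.h>
    #include <stddef.h>

    /* Toy context-adaptive bit decoder: NOT real CABAC, just a sketch of
     * why "data-dependent special casing" is hostile to wide parallel HW. */
    typedef struct { int state; } context_t;   /* hypothetical context model */

    static int decode_bit(context_t *ctx, const uint8_t *stream, size_t pos) {
        int bit = (stream[pos >> 3] >> (pos & 7)) & 1;
        int predicted = ctx->state > 127;
        ctx->state += (predicted == bit) ? 4 : -4;   /* adapt toward observed data */
        if (ctx->state < 0)   ctx->state = 0;
        if (ctx->state > 255) ctx->state = 255;
        return bit;
    }

    void decode_run(const uint8_t *stream, size_t nbits, int *out) {
        context_t ctx = { 128 };
        for (size_t i = 0; i < nbits; i++)
            out[i] = decode_bit(&ctx, stream, i);    /* serial: bit i needs ctx from bit i-1 */
    }

A baseline-profile decoder mostly avoids this kind of chained, adaptive state, which is why it maps onto GPUs and fixed-function blocks so much more easily.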
> It makes zero sense to compare like this. Geekbench is closed source.
But there are probably notable differences in integer vs. floating-point performance, or in hardware acceleration. I assume iPad/iPhone have good HW acceleration for anything video/crypto/3D/...
Processors are definitely stalling in IPC (instructions per cycle) improvements. But I don't think it's fair to call the A13 the fastest chip ever in an Apple device based on heavily hardware-accelerated tasks. That's like saying an ASIC is faster than an AMD or Intel CPU. Of course the ASIC will be faster at certain tasks, since it's purpose-built for them. But AMD's and Intel's parts are general-purpose processors meant to do anything. The iPad most definitely has crypto/video/image-processing hardware offloads, and those are as fast as Intel's. But if I run a workload that makes extensive use of AVX (instructions for processing vectorized data), the Intel processors will blow the Apple A13 out of the water.
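For a concrete (if contrived) illustration, here's the kind of loop I mean, sketched with AVX intrinsics. The function name and compile flag are just for the example: AVX processes eight single-precision lanes per instruction (sixteen with AVX-512), while the A13's NEON units are 128-bit, i.e. four lanes each.

    #include <immintrin.h>
    #include <stddef.h>

    /* Sketch: add two float arrays 8 lanes at a time with AVX.
     * Compile with something like: clang -O2 -mavx avx_add.c */
    void add_arrays_avx(const float *a, const float *b, float *out, size_t n) {
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);              /* load 8 floats */
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb)); /* 8 adds at once */
        }
        for (; i < n; i++)                                    /* scalar tail */
            out[i] = a[i] + b[i];
    }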
The point here is that these kinds of click-baity tweets are the ones that end up in articles and reach the general public. It's irresponsible.
On this basis alone I would be extremely dubious about comparing across platforms.
These Geekbench comparisons keep showing phone CPUs being faster than server ones, but I question their validity.
I really do think this indicates that Apple/ARM has caught up to Intel when it comes to architecture, and we already know TSMC has surpassed Intel in process technology.
An ARM MacBook would be a kickass machine. I actually hope they don’t release some sort of emulation layer like they did in the previous transitions. We don’t want to be spending battery on that when the tooling these days is much more homogeneous and high-level.
It’s also very tempting to speculate about what they would be able to do with a larger, 40-something-watt power budget.
If so, does that mean PC games would theoretically run faster on an iPhone 11 than on desktop PCs (ignoring GPU speeds)?
Mac Pros run Xeon processors that prioritize having many cores. Their single core performance is very good, but worse than Intel's top mainstream chips, like the 9900K.
If you're willing to spend $200 on a CPU, you can get faster cores or more of them.
It must be difficult to be in a management position where you can obviously see the industry moving towards ARM, but you need to set a plan in motion to correctly time when customers and developers are ready to make the switch.
Apple is in a unique position to make this happen though. And they have done it before with the PowerPC->x86 switchover.
Their solution has previously been fat binaries, with both versions compiled side by side. They control the toolchain and the App Store, so this is not that difficult to enforce.
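For anyone who hasn't seen one: a fat (universal) binary is just one Mach-O file with a slice per architecture, and the same source serves both. Something along these lines; the clang invocation is from memory, so treat it as approximate:

    /* main.c: one source, two slices. Building with something like
     *   clang -arch x86_64 -arch arm64 -O2 main.c -o tool
     * should emit a universal binary; `lipo -info tool` lists the slices. */
    #include <stdio.h>

    int main(void) {
    #if defined(__arm64__) || defined(__aarch64__)
        puts("running the arm64 slice");
    #elif defined(__x86_64__)
        puts("running the x86_64 slice");
    #else
        puts("running on some other architecture");
    #endif
        return 0;
    }

The OS picks the matching slice at load time, so the user never has to care which architecture they're on.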
EDIT: In case it wasn't clear; there are Apple II emulators for the Macintosh.
So the simplest way for Apple to get an ARM-based computer would be to allow the iPad to be more of a computer. Alternatively, they do it the hard way and launch ARM-based MacBooks. That would be nice too. But putting iPadOS onto a MacBook would rather be a step in the wrong direction. macOS should run well on ARM anyway.
A nice proof of concept, but I don’t think it’s the right path for a serious tool.
Personally, I would refresh my iPad much more often if it weren't for those limitations. And it is not very realistic of Apple to expect people to carry both a laptop and a tablet. So forcing customers to choose between those device types is not very customer-friendly.
Actually, I am currently considering getting the Samsung Galaxy Book S, which seems to be a very interesting ARM-powered laptop. Of course I would prefer a new iPad Pro, were it not for its limitations.
Even the Xbox had backwards compatibility across ISAs.
Somehow I doubt people will care that much about efficiency.
Curious if Apple does an ARM Mac Mini first as a canary.
I say this knowing full well that in any battery life versus thinness tradeoff, Apple has picked thinner just about every time.
I had several smartphones I didn’t need to charge more than every other day or even every three days if I wasn’t using them nonstop. Apple could do that today.
Well, to be fair, smartphones have much, much smaller bodies in which to carry a battery. If Apple makes a notebook with the same A13 processor, I’m pretty sure it will last more than two or three days.
BTW, one of a smartphone’s functions is being portable. You really can’t blame it for being form over function when talking about battery life; there are many, many people who want light phones. If you need longer battery life at the cost of weight, you can get yourself a backup battery (or a battery case).
> I had several smartphones I didn’t need to charge more than every other day or even every three days if I wasn’t using them nonstop. Apple could do that today.
My experience is that my iPhone XS survives a day and a half without any charging. YMMV, of course, but for me that’s enough.
It would be interesting to see the single-thread performance of the Intel Xeon W-3275M CPU from the 2019 Mac Pro. The Cascade Lake architecture supposedly increases single-thread performance by decreasing the base frequency of certain cores while increasing that of others, as part of its Speed Select Technology (SST). Of course, this depends on the power configuration set by Apple; I wonder whether they would voluntarily keep it below max performance in order to preserve the single-thread superiority of their AX chips.
The sad truth is that processing speed improvements have slowed down to a crawl. The exponential predicted by Moore's law looks more like a shallow linear progression today.
Computers are not improving nearly as quickly as they once were. Weirdly, no one is talking about this. If computers stop improving, efficiency will stop improving, which will ultimately mean that standards of living will stop improving. If standards of living stop improving, i.e., the pie stops growing, there could be a whole variety of horrible social and political repercussions as everyone fights for their finite slice.
However, almost immediately afterwards, IPC, which had been stagnant over that whole period, itself started climbing steadily and exponentially, with the rate of growth falling only fairly slowly.
The overall result is exponential growth in single-thread performance that far outlived frequency growth, and which has only truly tapered off recently... though it's likely AMD's recent reentry into the market will recover the trend somewhat.
But now that IPC is starting to struggle (though not falter), core counts are finally taking off. We haven't hit the end yet.
The fact that the industry is now relying on these tricks is actually good evidence that performance is hitting a wall. In the past, if you wanted a 2x boost in performance, you could just wait a year. Now waiting doesn't work. You need to build your own ASIC or move things to the cloud, where you can rely on big tech's economies of scale.
We have hit an end, and the performance boosts we see today are one-trick ponies. After you build your ASIC, you can't get much faster. After you move to the cloud and reduce processing-power costs, you can't reduce them further. These tricks have slowed the "perceived" deterioration of Moore's law, but they are acts of desperation, not real progress.
Might be the reason why Intel is putting FPGAs in the latest Xeons: https://www.anandtech.com/show/12773/intel-shows-xeon-scalab...
> Might be the reason why Intel is putting FPGAs in the latest Xeons: https://www.anandtech.com/show/12773/intel-shows-xeon-scalab....
This is an excellent example. As things are trending, I would expect FPGA and other hardware programming to become more common. I would not be surprised if, in a few years, websites and browsers start doing their own FPGA optimizations. Adding FPGA programmability does not increase raw performance; it just allows developers to do optimizations they would not otherwise be able to do.
Along this line of thinking, you would expect developers to start moving away from slower scripting languages towards faster compiled languages. This has not happened yet at any large scale, largely because the huge demand for digital services has placed a premium on developer productivity over application speed. However, this trend is happening at the edges (e.g., using C-based NumPy within Python, the steady rise of Go). If computer performance remains as flat as it has been, I predict we'll start seeing more popular libraries rewritten in faster compiled languages (and eventually FPGAs/ASICs) within the next 5 years.
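To be concrete about what "C-based NumPy within Python" buys you: the hot loop lives in compiled code, so the interpreter pays its per-call overhead once instead of once per element. A made-up kernel in that spirit (not NumPy's actual code, just the pattern):

    #include <stddef.h>

    /* Illustrative kernel of the kind a scripting language calls into:
     * the whole loop runs as compiled code, no per-element dispatch. */
    double dot(const double *a, const double *b, size_t n) {
        double acc = 0.0;
        for (size_t i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }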
I fail to see the connection between the computers’ efficiency and standard of living. You seem to have made quite a big leap there. Can you please elaborate?
It is a big leap, and it's probably better suited for an essay than a HN comment. But the general idea is that most improvements to standard of living have relied on technology, whether that's agriculture (past 5000 years); plumbing and sanitation (past 2000 years); transportation and mass production (past 500 years); or calculation, communication and automation (past 50 years).
Humanity has been on an exponential curve of improvement for a hundred years, if not many hundreds. If this exponential in fact turns out to be more of a sigmoid function, and technological development eventually stops, it will have huge repercussions for our entire society.
I'm personally not fully convinced we've reached the end, but there are signs that we already have or will soon. Even if you think the chance of this happening is low, the consequences of an end to technological progress are huge. We need technologists, economists, sociologists, politicians, and even fiction writers to be investigating this possibility. The fact that this is not heavily studied is disturbing.
Yes, computer architects have been thinking about this for a while, but so far it's largely stayed in their domain. Everyone else keeps thinking the "nerds" will keep on doing their thing and progress will continue exponentially as it has for the past 50-100 years. But what if that exponential stops? What if it's a sigmoid and not an exponential at all? We need folks outside the computer industry to consider this possible outcome.