The Apple A13 - even in its iPhone SE implementation - achieves single-core performance in microbenchmarks on par with the Core i7-8086K and the Ryzen 9 3950X. That's the highest single-core performance you can buy in a PC, full stop.
I don't have to explain how insane it is that a ~5-watt smartphone CPU delivers that kind of performance, even if only in bursts. There is ample evidence that by sticking with Intel, or even x86 in general, Apple is leaving a lot of performance on the table - not just in MacBooks, but in the Mac Pro too.
: https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q... - since the SE has only just come out, the iPhone 11 with the A13 has to serve as a surrogate; they benchmark the same.
It's worth noting that Geekbench is a pure microbenchmark. The iPhone will not sustain performance as long as the others. The point is that Apple could solve this when moving to bigger devices.
Honestly, I'm not convinced they're THAT far ahead. They don't have a lot of the legacy baggage Intel has to contend with, and they're the only company making high-end ARM chips (besides Amazon and a few other niche server implementations). But matching big Core i7s in some single-threaded benchmarks is, to a large extent, something Intel's own low-power chips can also do, at least in bursts.
There are a lot of challenges to big many-cored chips beyond single-core performance, and we really don't know where they are with that yet, as there are no publicly-available examples of Apple desktop chips.
That's, if you'll pardon the pun, an Apples to Oranges comparison. Apple isn't making "high end ARM chips" either if your comparator is powerful servers. You need to look at what Apple is doing within their power envelope and compare that to what everyone else is doing within _their_ power envelopes. The A13 Bionic is an uncooled 6W TDP chip blowing past 95W base TDP chips that require hefty active cooling.
This whole thread reminds me of how passionately the PowerPC enthusiasts defended it as superior, right before Apple switched to Intel and doubled Mac performance overnight.
To be fair, the PowerPCs _were_ measurably better and faster when each one was released. Apple just couldn't get a G5 CPU that would fit into a laptop, and IBM was an unreliable partner with a slow release cycle, so by the time the transition happened they had fallen behind.
The G5 wasn't great, though. When Steve Jobs announced it, Apple could only show it trading blows with the then-current Pentium 4 (a Pentium 4 - they sucked!). And a few months after the first Power Mac G5s launched, they were already resoundingly beaten by the new Athlon 64s.
Add to that that the G5 was basically a POWER4 server chip, and that IBM was only going to build server chips going forward, and Apple basically had no choice. It had nothing really to do with PowerPC vs. x86, and everything to do with what kind of processors their suppliers were willing to build.
POWER is very much alive in the high-end server space, powering IBM's p and i series of machines.
I don't think it was ever used for anything.
The Wii U, for all its flaws, is probably the most "practical" Power-based machine you can get nowadays, given its relative power, availability, size, and price. A 1.2GHz triple-core PowerPC G3 would probably still eke out Raspberry Pi 3-like performance. Shame the Linux port to it never really got off the ground (also partially due to IBM's hack job of an SMP implementation for the G3).
The A12X from 2018's iPad Pro is the sort of chip I'd expect to see in an ARM laptop, and its multicore scores are similar to the top end of 2018 Macbook Pros.
2020's iPad revision didn't get much in the way of processor improvements (just one more GPU core), so the new 16" MBP has pulled ahead with an 8-core i9, but when we get a new iPad based on an A13X or A14X I expect it to be back in that range again.
And these are in thin fanless tablets. With a proper cooling system, there's got to be some extra juice to be squeezed out of them.
You're not wrong. In general, it's true that a microbenchmark amplifies the Apple A13's strengths due to power limits. The assumption I make is that microbenchmarks indicate the true peak performance of Apple's architecture, and as power limits become a smaller constraint when Apple uses its chips in laptops and desktops they will make available that performance in a more sustained way.
But even low-power Intel parts don't compare that favourably. Intel's new i7-10510U delivers very nice single-core performance. But it's worth noting that 1) it still does not quite match the A13's burst performance, 2) that chip is still rated for a much larger power profile than the A13's, and 3) as always in these discussions, Intel's "TDP" is a marketing term, not a power limit: at high turbos the chip is permitted to consume quite a bit more power than the 15W it's rated for.
This particular Intel chip boosts to 4.90GHz. For Apple chips, even clock speeds are a matter of conjecture, but Wikichip claims (without a source) that the A13 tops out at 2.65GHz, which, if true, indicates a lot of thermal and frequency headroom in bigger form factors.
I just benchmarked my MacBook Pro + Safari in JetStream 2.0 - not a microbenchmark - and it scored nearly 145, compared to the nearly 130 the iPhone 11 scores. That's with a "45W TDP" Core i7-8850H topping out at 4.3GHz. It's hard to benchmark iPhones well, but all the evidence points to them actually being really fast.
: https://en.wikichip.org/wiki/apple/ax/a13 - worth noting that high-end Qualcomm SoCs also operate at comparable frequencies.
* 6 year old Macbook Pro i7-4980HQ, Windows 10 - 102
* 5 year old Macbook Pro i7-5557U, OS X - 100
* Threadripper 2990WX desktop - 99
So, uh, I might have some questions about this benchmark's general validity now?! - though maybe it is some evidence in favour of my vague feeling that the Threadripper sometimes doesn't feel as fast as it seems like it ought to feel.
Moreover, while the Threadripper 2990WX is a really awesome processor, single core benchmarks (I think Jetstream is mostly limited to a single thread) aren't particularly its strength. Over multiple runs it should beat your Macbook Pro, but not by a huge amount. If not, take a look at how you're cooling that beast :)
The Zen 2 (3900X) on the other workstation reported core speeds almost twice as fast, with 12 cores. I really wish that TR4 board supported the 39xx Threadripper series.
Ryzen and Threadripper 1000- and 2000-series, and Ryzen Mobile < 4000-series are all on Zen or Zen+ architecture.
The current gen Ryzen and Threadripper 3000-series and the Ryzen Mobile 4000-series are the ones running on Zen 2. This is where AMD is competitive with Intel on single-threaded workloads, largely across the board.
Parent mentioned a 2990WX, which is a Zen+ part.
One of those 'few others' was Scaleway, but they recently ended their ARM server lineup abruptly. They were running Marvell ThunderX SoCs (up to 64 cores, 128GB RAM).
So Amazon might soon take over the ARM server market - at least until Tim Cook does a Satya Nadella and brings in Apple IaaS with ARM CPUs.
I'm typing this on a Surface Pro X running an ARM64 CPU called the SQ1, which is a customization of the Snapdragon 8cx. It is quite high-end and is not made by Apple. It might not match the amazing custom CPUs on iOS devices, but it is still a pretty good CPU.
That isn't necessarily true. Having competitive performance at lower power isn't always the same thing as having better performance, even assuming these benchmarks are representative.
Processors designed specifically for low-power make different design trade offs. One of those is to exchange maximum clock speed for IPC (because higher clocks burn watts). The A13 maxes out at 2.65GHz, the i7 8086k hits 5GHz. Chances are you can't just give the A13 a 95W power budget and see it hit 5GHz, it would have to be redesigned and the kinds of changes necessary to get there would generally lower IPC.
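As a back-of-the-envelope check (using only the clock speeds quoted above; real performance depends on far more than IPC times clock, so treat this as a sketch):

```python
# Rough model: performance ~ IPC * clock.
# To match a 5.0 GHz chip while running at 2.65 GHz, the A13 would need
# roughly 1.9x the IPC -- which is the kind of trade wide, low-clock
# designs make.
a13_clock_ghz = 2.65
i7_clock_ghz = 5.0

ipc_ratio_needed = i7_clock_ghz / a13_clock_ghz
print(f"IPC ratio needed to break even: {ipc_ratio_needed:.2f}x")
```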
Apple is also riding the same advantage as AMD -- they're using TSMC's 7nm process which is better than what Intel is currently stuck with. Even AMD is still using an older process for the I/O die. We don't know what that's going to look like a year or two from now.
Meanwhile the renewed competition between Intel and AMD makes this kind of a bad time to move away. They're both going to be working hard to take the performance crown from each other and Apple would have to beat both of them to claim an advantage. And continue to do so, or they'd have a lot of pissed off customers and developers after forcing a transition to a new architecture only to have it fall behind right after the transition is over.
It's not really a difficult concept to understand. If your CPU runs at 5GHz, then the maximum time a single cycle is allowed to take is 0.2 nanoseconds. CPU designers have to make sure that this limit is never exceeded anywhere on the chip. If you make even the slightest mistake in some unimportant corner of the CPU, you will end up limiting the maximum performance of the entire CPU.
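The timing budget is simple arithmetic (purely illustrative, nothing chip-specific assumed):

```python
# Period of one clock cycle: t = 1 / f. Dividing 1 by a frequency in GHz
# gives the period in nanoseconds directly.
def cycle_time_ns(freq_ghz):
    return 1.0 / freq_ghz

print(cycle_time_ns(5.0))   # 0.2 ns per cycle at 5 GHz
print(cycle_time_ns(2.65))  # ~0.377 ns at 2.65 GHz: nearly double the slack per stage
```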
Most CPUs are optimized for a specific clock frequency and going beyond it is not possible without sacrificing stability.
It's not aida64 but it is a pretty decent metric, and consistent.
I agree that Apple's ARM CPUs are very competitive on simple scalar instructions and memory latency/bandwidth. However, x86/x64 CPUs have vector instructions up to 512 bits wide, and many programs use vector instructions somewhere deep down in the stack. I'd guess that the first generation of Apple ARM64 CPUs will offer only ARM NEON vector instructions, which are 128 bits wide and, honestly, a little pathetic at this point. But on the other hand, I am very excited about this new competition for x86 CPUs, and I will certainly buy one of these new Macs in order to optimize my software for ARM64.
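To put the width gap in numbers (the lane counts follow directly from register width; this sketch assumes 32-bit float elements):

```python
# Elements processed per vector instruction at a given register width.
def lanes(register_bits, element_bits):
    return register_bits // element_bits

# With 32-bit floats:
print(lanes(128, 32))  # NEON: 4 lanes per instruction
print(lanes(256, 32))  # AVX2: 8 lanes
print(lanes(512, 32))  # AVX-512: 16 lanes
```

So for code that is bottlenecked on vector throughput, a 512-bit unit can, in principle, retire four times as much work per instruction as a 128-bit NEON unit.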
Would love to learn more from sources if people might provide a newb an intro.
If you believe this, you won't believe what's in this box.
> A memcopy of a small to medium-sized struct might be compiled into a bunch of 128bit mov for example and then immediately working on that moved struct
I'm not sure that's true: rep movs is pretty fast these days.
There's a fundamental difference between GPU code and vector CPU instructions, though. GPU shader instructions aren't interwoven with the CPU instructions.
Yes, if you restrict yourself to not arbitrarily mixing the vector code with the non-vector code, you can put the vector code off in a dedicated processor (GPU in this case). The GP explicitly stated that a lack of this restriction prevents efficiently farming it off to a coprocessor.
That's only true if you target Skylake and newer. If you target generic x86_64, compilers will only emit rep movs for long copies, because some CPUs have a high baseline cost for it. There's some linker magic that might get you an optimized version when you callq memcpy, but that doesn't help with inlined copies.
Why exactly do you think seven-years-old is too-old, but five-years-old isn't?
And if we're talking about memcpy over (small) ranges that are likely still in L1 you're definitely not going to notice the difference.
Many high compute tasks are CPU bound. GPUs are only good for lots of dumb math that doesn't change a lot. Turns out that only applies to a small set of problems, so you need to put in lots of effort to turn your problem into lots of dumb math instead of a little bit of smart math and justify the penalty for leaving L1.
Consider a typical use case for SIMD instructions - you just decrypted an image or bit of audio downloaded over SSL and want to process it for rendering. The data is in the CPU caches already. SIMD will munch it.
Does your game use Denuvo? Then it straight-up won't run without AVX.
People are stuck in a 2012 mindset that AVX is some newfangled thing. It's not, it's used everywhere now. And it will be even more widely used once AVX-512 hits the market - even if you are not using 512-bit width, AVX-512 adds a bunch of new instruction types that fill in some gaps in the existing sets, and extend it with GPU-like features (lane masking).
This isn't just a note, it's an important clarification.
Microbenchmarking is used in lieu of proper benchmarking because you can't do proper benchmarking.
Anandtech architecture reviews are helpful, as always. Worth reading for a page or two from the linked page:
The short of it is: massively wide execution units, massive amounts of SRAM, massive amounts of cache at all levels. There's no real "secret sauce", they're just willing to pay to make an incredibly fat core and Qualcomm and company are not.
They have 16 MB of system cache on A13 for 2 high-performance and 4 low-performance cores, which is as much as a 9900K gets for 8 cores and as much as Zen2 gets for 4 cores. Plus another 8MB per big core on top of that (so up to 24MB per core in single-threaded mode), and 4 MB per small core.
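Tallying the figures from the comment above (these are the commenter's numbers; Apple doesn't publish an official cache topology, so treat them as estimates):

```python
# Cache reachable by a single thread, per the figures above (MB).
chips = {
    "Apple A13 (1 big core active)": 16 + 8,  # 16 MB system cache + 8 MB big-core L2
    "Intel 9900K (L3 per core)":     16 / 8,  # 16 MB L3 shared across 8 cores
    "Zen 2 CCX (L3 per core)":       16 / 4,  # 16 MB L3 shared across 4 cores
}
for name, mb in chips.items():
    print(f"{name}: {mb} MB")
```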
It helps that they're a vertically-integrated company, they don't have to sell their processors on the open market at competitive prices such that an OEM can also make a profit selling a finished product at competitive prices, they just sell the finished product.
Something like Cinebench run 10 times in a row, averaging the results, would be more meaningful.
Also, the benchmark has to enable or disable optimizations consistently on all platforms. Some people on Reddit claim that Geekbench is highly optimized for ARM and less so for x86.
Zen 2 has 2x16MB L3 and 2x4x512KB L2 per chiplet (36MB total), so it's not as if Apple is throwing down hitherto-unheard-of quantities of SRAM. It's true that a single A13 thread has much more accessible L2 capacity, though.
The 5775c (https://wccftech.com/intel-broadwell-core-i7-5775c-128mb-l4-...) was a good example of a no-compromises (from a cache standpoint) CPU from Intel that just annihilated their other CPUs at the time... it's not that hard to do, provided you're willing to pay the price for it somewhere else.
I've thought for years that the overhead of the extra decoder hardware and legacy cruft was non-trivial (though Intel claims that's not true). The evolution of ARMv8 (where ARM moved much closer to its RISC roots) seems to disagree. This would explain the performance-per-watt gap (and potentially some of the IPC difference).
That said, scaling IPC (instructions per clock) seems to have a pretty big limit. x86 has basically hit a wall and it's been lots of time and research for small gains. Additionally, the biggest challenges in large systems is that the cost to do a calculation on some piece of data is often less than the cost to move that data to and from the CPU. As Apple increases cache size, frequency, and starts dealing with bigger interconnect issues, I suspect we'll see a distinct damper on their performance gains.
Qualcomm (and ARM as the designers of the core) has a very different problem to solve. They can't make money off of software. They make money when they sell new chips and they make more money from new designs than from old ones. This means incremental changes to ensure a steady revenue stream. Since Apple having a fast, proprietary CPU doesn't actually affect Qualcomm or ARM, they most likely don't even see themselves as in direct competition. Most people buy Android or iOS phones for reasons other than peak CPU performance and Qualcomm is fairly competitive with a lot of these (esp actual power usage).
A further complication is that they also need "one design to rule them all". They can't afford to make many different designs, so they make one design that does everything. Apple doesn't need to spend loads of time and money trying to optimize the horrible aarch32 ISA. Instead, they spend all that time on their aarch64 work. ARM and Qualcomm however need to add that feature so the markets that want it still buy their chips.
Apple shipped their large 64-bit design only a couple of years after the ISA was introduced. Put simply, that should be impossible: it takes 4-5 years to make a new high-performance design. It took ARM 3 years for their small design (basically upgrading the existing A9 to the new ISA), closer to 4.5 years to actually ship their large design (A57), and another year for a "fixed" design (A72, though that's actually a different design team and uarch). Though the gap has been closing, 2.5 years in the semiconductor business is an eternity.
A crufty ISA and non-CPU scaling problems seem to explain Intel/AMD. A late start, bigger market requirements, and perverse incentives against increasing performance seem to explain ARM/Qualcomm.
Basically, a lot of Apple's advantages are:
1: Complete vertical control of compiler+OS+hardware
2: Plenty of margin to spend on extra die
3: More advanced process at TSMC
4: Very narrow focus - Apple has only a few models of iPhone and iPad, whereas Intel has dozens of different dies it modifies/sells into hundreds of product lines, so everything is a compromise.
Any one of those four would give them a pretty significant advantage; the fact that they benefit from all four cannot be discounted.
In addition, complex instruction decoding requires more decode stages. This isn't a trivial cost. Intel can shave off several stages if they have a decode cache hit and that's not including the ones that are required regardless (even the simple Jaguar core by AMD has 7+ decode stages possible). Whenever you have a branch miss, you get penalized. Fewer necessary decode stages reduces that penalty.
So people have been making these arguments for years, frequently with myopic views. These days what seems to be consuming power on x86 are fatter vector units, higher clock rates, and more IO/Memory lanes/channels/etc. Those are things that can't be waved away with alternative ISAs.
Either the engineers at Intel and AMD are bad at their job (not likely) or the ISA actually does matter.
So, not as sexy as phones, but the power/perf profiles are very competitive with similar ARM devices (A72). If you compare the power/perf profile of a Denverton part with something like the SolidRun MACCHIATObin, the Atom is way ahead.
Check out https://www.dfi.com/ for ideas about where Intel might be doing quite well with those Atom-class devices.
The reality is just that Apple is ahead in chip design at the moment.
When Medfield came out, Apple didn't have its own chip and x86 still lost. It was an entire 1.5 nodes smaller and only a bit faster than the A9 chips of the time (and only in single-core benches). The A15, released not long after, absolutely trounced it.
>It was an entire 1.5 nodes smaller and only a bit faster than the A9 chips of the time
You seem to have the chronology all mixed up here. Medfield came out in 2012. The A9 came out in 2015. Apple was already designing its own chips in 2012. (The A4 came out in 2010.)
Actually, ARM cores were available earlier than that; it's just that nobody wanted to license them until the elephant in the room (Samsung) forced everybody to follow.
2011 -- ARM announces 64-bit ISA
2012 -- ARM announces they are working on the A53 and A57, and AMD announces they'll be shipping the Opteron A1100 in 2014.
2013 -- The Apple A7 ships doubling performance over ARM's A15 design.
2013 -- Qualcomm employee leaks that Apple's timeline floored them and their roadmap was "nowhere close to Apple's" (Qualcomm seems to switch to A57 design around here in desperation -- probably why the 810 was so disliked and terrible).
2014 -- Apple ships the A8 improving performance 25%.
early 2015 -- Samsung and Qualcomm devices ship with A57. Anandtech accurately describes it saying "Architecturally, the Cortex A57 is much like a tweaked Cortex A15 with 64-bit support." Unsurprisingly, the performance is very similar to A15.
late 2015 -- Apple ships A9 with a 70% boost in CPU performance.
later 2015 -- Qualcomm ships the custom 64-bit Kryo architecture as the 820. It regresses in some areas but offers massive improvements in others, for something close to a 30% performance improvement over the 810 with its A57 cores.
2016 -- AMD finally launches the A1100. ARM finally ships the A72 as their first design really tailored to the new 64-bit ISA.
Apple -- 2 years to ship new high-performance design
ARM -- 4 years to ship high-performance design, 5 years for new design
Qualcomm -- 4.5 to 5 years to ship new high-performance design
Sorry, something's definitely fishy. Nobody can design and ship that good of a processor in less than 2 years.
Is that inherent to the architecture or is this a self-imposed limitation by Apple since it has to sip power and run without any active cooling?
Also, the A-series chips seem to fall down in comparison against the Intel Macs on multi-core performance which seems like it would matter for anyone who needs a desktop.
RISC finally coming into its own.
For the longest time, Intel was able to fend off much better architectures simply by being a fab generation or two ahead, more clock speed, more transistors, more brute force.
Not to belittle the engineers wringing seemingly impossible performance out of the venerable architecture, but the architectural limitations always mean extra work and extra constraints that have to be worked around.
And now that Dennard scaling is dead and Moore's law is wheezing (and not helping all that much for our mostly serial workloads), they just can't compensate for the architecture any longer, at least not against a determined, well-funded, and technically competent competitor that isn't beholden to Wintel.
I remember when the Archimedes came out and simply offered incredibly better performance than the then-prevalent code museums, the 386 and 68K variants, at incredibly lower transistor counts. The 486 and 68040 were able to compete again, but with vastly larger transistor budgets (and presumably power budgets as well, though we didn't look at that back then).
Oh, and can we have our Transputers now? Pretty please, XMOS?
Apple provides brief windows of automated compatibility, but code untouched since 2003 won’t run on a mac today, and code untouched since 1986 wouldn’t run on a 2003 mac.
I just got a 2020 MacBook Air with an Ice Lake 10th generation 10nm X64 chip in it.
It's a quad-core, newer core rev, has AVX2 and a bunch of other stuff.
It runs very noticeably cooler than a 2019 Air with a 14nm Amber Lake chip in it. Battery life is also noticeably better. When I read the spec differences I basically traded my 14nm Amber Lake Air for a 2020 Air (and also because of the better keyboard). It's a great machine.
The difference between the Amber Lake and Ice Lake core designs is not substantial and the Ice Lake has twice the cores and a larger on-board GPU, so how is it so noticeably cooler? I can fully max out all four cores of the 10nm Ice Lake and it doesn't get as hot as the 14nm Amber Lake did at lower loads! The answer is obviously the process node: 10nm vs 14nm. It's a more efficient chip at the physical circuit level.
ARM has some intrinsic power advantages over X64. The biggest thing is that ARM instructions come in only two or three sizes and it's easy to size and decode them, while X64 instructions come in sizes from one byte to 16 bytes and are a massive pain to decode. That decode cost comes in energy and transistors, but it's worth pointing out that this is a mostly fixed cost that shrinks as a percentage of the overall CPU power/transistor budget as the process node shrinks. In other words the cruftiness of X64 remains the same as you go from 22nm to 14nm to 10nm.
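A toy sketch of why variable-length decode is the painful part (purely illustrative; real x86 length decoding involves prefixes, ModRM, and SIB bytes, and is far messier than this length-in-the-first-byte scheme):

```python
# Fixed-width ISA: every instruction boundary is known up front, so all
# instructions in a fetch window can be decoded in parallel.
def fixed_boundaries(stream, width=4):
    return list(range(0, len(stream), width))

# Variable-width ISA: each instruction's length must be determined before
# the NEXT instruction's start is even known, which serializes (or greatly
# complicates) parallel decode.
def variable_boundaries(stream, length_of):
    offsets, i = [], 0
    while i < len(stream):
        offsets.append(i)
        i += length_of(stream[i])  # must inspect byte i before finding i+1
    return offsets

code = bytes([1, 3, 0, 0, 2, 0, 1, 4, 0, 0, 0])
# Pretend the first byte of each instruction encodes its length.
print(variable_boundaries(code, lambda b: b))  # [0, 1, 4, 6, 7]
print(fixed_boundaries(b"\x00" * 16))          # [0, 4, 8, 12]
```

Real x86 decoders throw hardware at this (speculatively decoding at every byte offset and discarding the wrong guesses), which is exactly the energy and transistor cost described above.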
Other than the ugly decode path the ALU, FPU, vector, crypto, etc. silicon is not fundamentally different from what you'd find in a high-end ARM chip. A lot of the difference is clearly in the fabrication. ARM chips have been below 14nm for a while, while Intel X64 chips have lagged.
(Tangent: since the actual engine block is largely the same, I've wondered if Apple might not slap an X64 decoder in front of their silicon in place of the ARM64 decoder and make Apple X64 chips?! I am not a semiconductor engineer though, so I don't know how hard this would be and/or what IP issues would prevent this. Probably unlikely but not impossible. They certainly have the cash and leverage to muscle Intel into licensing anything they need licensed.)
If Intel gets its act together with process nodes and/or starts using other fabs who are at tighter lower power nodes, the advantage will shrink a lot. AMD already has mobile chips that are close to Apple's ARM chips in performance/watt, partly because they are fabbed at TSMC at 7nm.
BTW: I'm not claiming Apple's chips aren't impressive, and as long as they don't lock down MacOS and make it no longer a "real computer" I personally don't mind if they go to ARM64. Also: the fact that Apple's chips only get this great performance in bursts is mostly due to power and cooling constraints on fanless thin phones and tablets. In a laptop with better cooling or a desktop they could sustain that performance no problem.
TIL of this commemorative naming. Sole comment I could find on HN 2 years ago https://news.ycombinator.com/item?id=17409849
Nope. But Apple put a MASSIVE amount of cache on their chips.
It's not rocket science, but it is expensive.
So we should expect lock-in on Mac Pro chips as well. Exactly what people were asking for. Another brick you can't upgrade without paying the cost of a new machine.
No - software has gotten slower over the ages. Apple, Qualcomm, and Intel can make ever-faster processors with X cores, but do we have software able to utilize them? Run a JS-heavy site or app and you'll see most processors heat up these days, whether mobile or desktop.
Most programming languages can't delegate work to cores as smoothly as Erlang and Elixir do. In Python, threads were a nightmare, but now with concurrent.futures or Dask we can at least utilize all cores.
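For instance, a CPU-bound Python job can be spread across all cores with the standard library's concurrent.futures (a minimal sketch; ProcessPoolExecutor sidesteps the GIL by using one worker process per core):

```python
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    # Stand-in for a CPU-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [100_000] * 8
    # Defaults to one worker per CPU core.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(busy_work, inputs))
    print(len(results))  # 8
```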
tldr - we need to make faster software
Unfortunately, the majority of users seem conditioned to accept software with awful performance, so there's no impetus for developers to upgrade their skills.
I can build an application in Electron 4-10x faster (at least) than building the same application in C. If I'm costing a company $100-200 per hour, would they rather pay me for 4 months (500 hours, $50,000-100,000) or for 1-2 years (2,000-4,000 hours, $200,000-800,000)?
What about when we multiply that by a team of 5-10 people? Don't forget that time to market is often incredibly important. Tell them 2 years and 8 million or 4 months at 1 million and what will they say?
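Plugging the parent's (hypothetical) rates and hours into a quick model:

```python
# Cost of a project at an hourly rate, optionally scaled by team size.
def project_cost(hours, rate_per_hour, team_size=1):
    return hours * rate_per_hour * team_size

# Solo developer at $200/hr:
print(project_cost(500, 200))       # Electron, ~4 months: $100,000
print(project_cost(4000, 200))      # Native C, ~2 years:  $800,000

# Team of 10:
print(project_cost(500, 200, 10))   # $1,000,000
print(project_cost(4000, 200, 10))  # $8,000,000
```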
OTOH, C isn't a GUI development environment. If you want to compare a C-based environment, you compare it with GTK/Qt/WinForms/etc.
In the end, as someone who has written GUIs in a wide range of tooling, I'm not sure there really is that much difference.
The web ecosystem is a big mess where trends change every month, and working in it requires constantly looking things up, because no one bothers to master anything. It doesn't help that many web developers don't have solid foundations.
This is a trap managers generally fall into. Cheap developers aren't equivalent to competent developers, and their incompetence will cost you more than what you save by hiring them instead of a competent developer.
But, that said: whilst I'm no big fan of web apps, there are good reasons they're so prevalent, and it's not merely developer convenience. Native GUI toolkits can appear artificially performant because they're required by the OS and thus almost always resident, versus cross-platform toolkits that may be used by only one app at a time. When you look under the hood, though, the gap between an engine like Blink and something like Qt, JavaFX, or Cocoa isn't that big - they're mostly doing similar things in similar ways. The big cost on the web is the DOM+CSS, but CSS has proven popular with devs, so native toolkits increasingly implement it or something like it.
My laptop battery would disagree. I get about an hour and a half with Chrome/Electron, four or five without.
Note: I didn't say the web itself is highly optimal. Just that for a web browser, Chrome is pretty thoroughly optimised.
Just use compiled, strongly typed languages.
Yes: Java, C#, C, C++, Rust.
Hardware is fast and cheap, and it's getting even faster and cheaper. It's perfectly fine to utilize this power, if it makes developing products faster, easier or cheaper.
Now, there are still cases when you need to send a machine to roam the mountains of another planet. This may justify doing some assembly.
No, it wouldn't. Those tasks would just become less efficient with time as developers stopped caring to optimize them, as has happened with the overwhelming majority of consumer software for the past several decades.
I was trying to express that Electron (and the like) is not an inherently bad thing. It allows you to trade hardware capacity for an easier development experience. The developers who use it create useful software that works. And software that works in a given environment is exactly the point of the industry, is it not?
In other words, Intel's CPUs are mostly about "burst" performance these days, too. They don't get anywhere the promised peak performance for any significant amount of time.
The conclusion of your article is precisely about how motherboard manufacturers don't obey the official/nominal behavior and how that makes it irrelevant:
> Any modern BIOS system, particularly from the major motherboard vendors, will have options to set power limits (long power limit, short power limit) and power duration. In most cases, at default settings, the user won’t know what these are set to because it will just say ‘Auto’, which is a codeword for ‘we know what we want to set it as, don’t worry about it’. The vendors will have the values stored in memory and use them, but all the user will see is ‘Auto’. This lets them set PL2 to 4096W and Tau to something very large, such as 65535, or -1 (infinity, depending on the BIOS setup). This means the CPU will run in its turbo modes all day and all week, just as long as it doesn’t hit thermal limits.
Intel desktop processors will sustain boosts for an arbitrary amount of time. Yes, they will exceed the nominal TDP while doing so, so do AMD processors (AMD's version of "PL2", which they call the "PPT limit", allows power consumption up to 30% higher than the nominal TDP while boosting, and there is no official limit to how long this state may occur).
These limits are of course observed much more strictly on laptops since power/thermal constraints actually make a real-world difference there. But overall, Ice Lake perf/watt is competitive with Renoir and its IPC and per-core performance is actually higher than Renoir. You just get fewer cores.
I guess you can solve anything if you do it yourself, but that was perplexing for me that they would do that. I am not an expert on this, but this happened some time between 2016–2018.
Edit: I stuck with my upgraded 2013 model and even today it's fast and good for the job. Even my 2009 mac is still running. So I am not saying they can't pull it off, I am just wondering whether they give enough attention to their non-iPhone products.
Don't bother with the links; the comments are more informative. In some comments people explain why it's Intel's fault and not Apple's; in others people explain processor speeds. A lot of it is about the keyboard change, but that's unrelated. My point was more that I think Apple should put the amount of effort into their laptops that they did in 2009; whether they do is subjective until we see their new processors.
Whereas other PC shops typically will just throw PCs together and assume they'll fix issues found in patches.
That's not going to fly for "just works" Apple product image - and why it's positioned and priced as a "luxury" product.
2. If Apple were "far ahead" in terms of performance in general, they would probably have been using these chips in products which aren't smartphones.
It's not trivial to port x86 software to ARM, or even to run an energy-efficient emulator.
Is that correct?
Then why do I need all this performance? Perhaps to run js on shitty websites. Well, maybe for gaming.
That said, the experience of trying to browse a local newspaper's website with NoScript turned off on my 2015 MacBook Pro tells me that, yes, JS on shitty websites is a problem. And not one that most people would find to be particularly avoidable. Heck, even GMail is getting to be noticeably slow on that computer.
As long as it keeps ticking and I can get whichever spare parts I need on eBay or something, I'm not going to replace it anytime soon.
The only reason I have a more powerful desktop PC (still ~2011 vintage) and don't just use the X220s in a dock is that it struggles with a 1440p external monitor (full HD is fine, though) and I sometimes like to play more graphically intense games.
Just say No to bad software, starting with web ads.
We are doing just that. And one of the reasons is that better software costs more than faster hardware.
I'm half-convinced that Apple killed 32bit support so early precisely to see how developers coped; if it had been really bad they could have re-introduced it in a point release of Catalina. As it was, it wasn't very bad and most developers complied, which is an argument in favour of an ARM transition being feasible.
The only other reason I can think of to so aggressively move to 64bit is security, but most of the apps that were stuck on 32bit were not that big a security concern.
I am not just talking about 32-bit support. It shows up in a lot of random libraries that wind up deprecated and replaced with something less capable. That's a pattern I have seen a lot elsewhere and it's usually a bad sign for overall product quality.
The only good reason I can imagine is that, when Tim Cook announces macOS on ARM, he will claim it "runs everything that runs on Catalina".
Sure, Coke sales are down 50%, but the customers who are buying New Coke say they like it just as much as the old recipe!
The best bet is Windows, since it will happily run software made 25 years ago, in the Windows 95 era.
To use macOS, you have to be satisfied by using the apps which run on macOS: Adobe apps, Apple apps, Microsoft Word and open source software.
Enterprise is Windows territory, most businesses are Windows territory, education is Windows territory, most home users / small businesses are also Windows territory.
So if Adobe apps will have an ARM build, that will satisfy a huge part of their user base. The rest would use Apple tools which will get ARM builds and open source tools which already have or will have ARM builds.
Is it really so easy to bring 32-bit support back?
_If_ they were taking the approach of a deliberately early deprecation, which it seems like they were given the timing relative to the rest of the industry, it would only make sense to make it be an easily reversible decision.
Oh, you meant PC games? Sure, the Mac only has a small percentage of (for example) Steam games, but that percentage is steadily rising - it's now over 25%. Switching architecture is unlikely to present a major problem for most developers, especially given that they're probably using Unity or Unreal Engine.
As a former game developer I can tell you that it is an issue. Apple is totally against using cross-platform tools. They break compatibility as much as they can.
Framework change, architecture change and so on.
Instead of going with OpenGL ES, Vulkan, or OpenCL, they made Metal.
If the user base is large enough, as is the case with iOS, there's an incentive to go through the pains of releasing for that platform. But that isn't the case with macOS. Maybe for Adobe it's worth spending the resources to build software for macOS, but for other companies that might not be the case.
Anyway, you make much more money by targeting PlayStation and Xbox, and the resources needed are the same in money and man-hours, so it makes sense to target macOS last, if ever.
And no, not everybody is using Unity and Unreal. That is true mostly for indies.
This article appears to successfully statically translate an arm64 binary into an x86_64 binary using bitcode.
Also, a considerable number of Mac apps don’t use the App Store for distribution.
(I can't find the video after a cursory search...)
Is this the reversal of that revolution? Are we going back to a vertically integrated market due to consolidation among market players, or because of performance/power concerns? Everyone seems to be making their own chips and boards these days: Google/TPU, AWS/Graviton, Microsoft/SQ1...
Will we ever see a fragmentation in ISAs a la EEE? IMHO that would be a catastrophic regression in the software space, easily a black swan event, if you, say, needed to compile software differently between major cloud vendors just to deploy.
Intel's vertical integration worked well for them for so many years. However, the crack has been around long enough for others to start muscling in. AWS can push Graviton because Intel has been stuck at 14nm for so long (yes, they have some 10nm parts now, but it's been limited). Apple can push a move to ARM desktop/laptops because Intel has stagnated on the fab side.
I wouldn't say this is a reversal of that revolution as much as a demonstration of the power and fragility of vertical integration. Intel's vertical integration of design and fab gave them a lot of power. Money from chip sales drove fab improvements for a long time and kept them well ahead of competitors. However, enough stumbling left them in a fragile place.
I think part of it is that ARM also has reference implementations that people can use as starting blocks. I don't know a lot about chip design myself, but it seems like it would be a lot easier to start off with a working processor and improve it than starting from scratch.
I think we're just seeing a dominant player that no one really likes stumble for long enough combined with people willing to target ARM. Whether I run my Python or C# or Java on ARM or Intel doesn't matter too much to me and if AWS can offer me ARM servers at a discount, I might as well take advantage of that. Intel pressed its fab advantage and the importance of the x86 instruction set against everyone. Now Intel has a worse fab and their instruction set isn't as important anymore. They've basically lost their two competitive advantages. I'm not arguing that Intel is dying, but they're certainly in a weaker position than they used to be.
I had an ASUS ZenFone 2 powered by Intel. It was fantastic! I didn’t notice any problems or major slowness compared to a Qualcomm chip. To me it seemed like they had a competent product they could iterate on. And they just canceled the program, how short-sighted!
I mean, maybe I’m wrong here and there isn’t really money in that business.
Margins. They were so focused on margins that they didn't realise their moat had cracked once they let go of it. A billion-plus smartphone/tablet SoCs, modems and many other silicon pieces are now fabbed at TSMC. None of these existed 10 years ago. Just like in the PC revolution: while Sun and IBM were enjoying the booming workstation and server markets, x86 took over the PC market segment and, slowly over the years, took over the server market.
The same could happen to Intel; this time it is ARM, and it will likely take 10-15 years.
And I keep seeing the same thing happen over and over again: companies get so focused on their current market, short-term benefits and margins that they fail to see the bigger picture. Both Microsoft and Intel are similar here (and many other non-tech companies too).
Microsoft kind of moved from their traditional market. Now they make more money from services and cloud.
Microsoft, Intel, RIM, Nokia, HP, MIPS, Sony, HTC
And those are just a few of the big players.
And even though Microsoft lost with Windows on phones, they still try to make apps. I am currently using Edge on Android because its built-in ad block is quite good.
The only reason Intel killed XScale was that it wasn't x86, it came through an acquisition, and they were afraid to cannibalize their own x86-based mobile plans.
Turns out it would have been far better to disrupt yourself rather than let others do it.
A childish, NIH-driven decision that destroyed their future.
ARM has many other licensees, so Intel didn't want to continue.
They didn't sell enough. Phone makers, software developers and users settled on ARM.
edit:// I am commenting on the ASUS ZenFone 2, which has the Intel processor running Android, fyi.
I think you make a key point here. A whole lot of code now runs inside one runtime or another, and even outside of that, cross-architecture toolchains have gotten a lot better, partly thanks to LLVM.
The instruction set just doesn't matter even to most programmers these days.
Actually, it's more that they bit off way more than they could chew when they started the original 10nm node, which would have been incredibly powerful had they managed to pull it off. But they couldn't, and so they stagnated on 14nm and had to keep improving that node forever and ever. They also stagnated on the microarchitecture, because Skylake was amazing beyond the others (cutting corners on speculative execution, yes), so all the following Lakes were just rehashes of Skylake.
Those were bad decisions tied to Intel not solving the 10nm node (remember tick-tock? Which then became process-architecture-optimization? And then it was just tick-tock-tock-tock-tock forever and ever), and insisting on a microarchitecture that, as time went by, started to show its age.
Meanwhile, AMD was running from behind, but they had clearly identified their shortcomings and how to tackle them effectively. Having the option to manufacture with either GlobalFoundries or TSMC was just another good decision, but not really a game changer until TSMC showed that 7nm was not just a marketing fad but a clearly superior node to 14nm+++ (and a good competitor to 10nm+, which Intel is still ironing out).
That brings us to 2020, where AMD is about to beat them hard both on mobile (for the first time ever) and yet again on desktop, with "just" a new microarchitecture (Zen 3, coming late 2020). The fact that this new microarchitecture will be manufactured on 7nm+ is just icing on the cake; even if AMD stayed on the 7nm process, they'd still have a clear advantage over Zen 2 (their own, of course) and over anything Intel can place in front of them.
That brings us to Apple. Apple is choosing to manufacture their own chips for notebooks not because there's no good x86 part, but because they can and want to. This is simply further vertical integration for them, and this way they can couple their A-series chips ever more tightly with their software and their needs. Not a bad thing per se, but it will separate the Macs even more from a developer's perspective.
And despite CS having improved a lot in the fields of emulation and cross-compilation, and whatever clever tricks we can think of to get x86-over-ARM, I think in the end this move will severely affect software that is developed multiplatform (that'd be mac/windows/linux; take two and ignore the other). We've seen something of this debacle with consoles and PC games before.
The PC, the Xbox (can't remember which one) and the PS3 were three very different platforms back in 2005-ish. And while the PS3 held a monster processor which was indeed a "supercomputer on a chip" (for its time), it was extremely alien. Games developed to be multiplatform had to be made at a much higher cost, because they could not have an entirely shared code base. Remember Skyrim being optimized by a mod? That was because the PC version was based on the Xbox version, and they had to turn off all compiler optimizations to get it to compile. And it shipped that way because it had to.
Now imagine having Adobe shipping a non-optimized mac-ARM version of their products because they had to turn off a lot of optimizations from their products to get them to compile. Will it be that Adobe suddenly started making bad software, or that Adobe-on-Mac is now slow?
Maybe I got a little ranty here. In the end, I guess time will tell if this was a good or a bad move from Apple.
All current Macs include a T2 chip, a variant of the A10 that handles tasks like controlling the SSD NAND, Touch ID, the webcam DSP, various security functions and more.
The scenario you mention — an upgraded "T3" chip based on a newer architecture that would act as a coprocessor used to execute ARM code natively on x86 machines — seems possible, but I don't know how likely it is.
Or you'd have to have fat binaries for x86/ARM execution, assuming the T3 chip would get the chance to run programs. Either programs would have to be pinned to an x86 or ARM core at start (maybe some applications could set a preference, like having PS always pinned to x86 cores), or you'd need the magical ability to migrate threads/processes from one arch to the other on the fly while keeping the state consistent... I don't think such a thing has ever even been dreamed of.
I don't think there's a chance to have ARM/x86 coexist as "main CPUs" in the same computer without it being extremely expensive, and even defeating the purpose of having a custom-made CPU to begin with.
Doing so definitely would be counterproductive for Apple in the short-term, but at the same time might be a reasonable long-term play to get people exposed to and programming against the ARM processor while still being able to use the x86 processor for tasks that haven't yet been ported. Eventually the x86 processor would get sunsetted (or perhaps relegated to an add-on card or somesuch).
a) performance-wise, the move would be driven by having a better-performing A chip
b) if they aimed at a 15W part, battery life would suffer; 6W parts don't deliver good performance.
c) for cost, they'd have to buy the intel processor, and the infrastructure to support it (socket, chipset, heatsink, etc)
Especially for (c), I don't think Intel would accept selling chips as co-processors (it'd be like admitting their processors aren't good enough to be main processors), nor would Apple put itself in a position of adjusting the internals of their computers just to accommodate something they are trying to get away from.
Who said they'd have to be from Intel specifically? AMD makes x86 CPUs, too. Speaking of:
> 6W parts don't deliver good performance.
AMD's APUs have historically been pretty decent performance-wise (relative to Intel alternatives at least), and a 6W dual-core APU is on the horizon: https://www.anandtech.com/show/15554/amd-launches-ultralowpo...
Apple probably doesn't need the integrated GPU, so an AMD-based coprocessor could trim that off for additional power savings (making room in the power budget to re-add hyperthreading or additional cores and/or to bump up the base or burst clock speeds).
> for cost, they'd have to buy the intel processor
> and the infrastructure to support it (socket, chipset, heatsink, etc)
Laptops (at least the ones as thin as Macbooks) haven't used discrete "sockets"... ever, I'm pretty sure. The vast majority of the time the CPU is soldered directly to the motherboard, and indeed that seems to be the case for the above-linked APU. The heatsink is something that's already needed anyway, and these APUs don't typically need much of it. The chipset's definitely a valid point, but a lot of it can be shaved off by virtue of it being a coprocessor.
People care more about the software tools they use to do their jobs than on the operating system and hardware.
So, it might depend on how much it would cost Adobe to release for ARM.
This is a fair point. If Apple had any sense they'd pay Adobe to do it if it came to it.
Also, Photoshop was first released in 1990 and has been through all the same CPU transitions as Apple (m68k/ppc/...), so presumably some architecture independence is baked in at some level.
Me too. But if the apps I need don't come to ARM, I don't care too much about ARM on the desktop/laptop/workstation.
Using volatile for thread-safe code. ARM has a weaker memory model than x86, so it requires barriers. The C++ standard threading library handles this for you, but not everyone uses it.
Memory alignment. ARM tends to be stricter about that. While it's impossible for a well-formed C++ program to mess it up, it's quite common for people to go "hey, it's just a number" and go full YOLO with it. Because hey, it works on their machine.
I think that's the fundamental mistake in reasoning:
If ARM is cheaper for AWS, then AWS has no reason at all to offer it to its customers at a discount, because the customers will not move if no discount is offered. As long as there's no mass market for ARM PCs/servers that work with all the modern software, that anyone can rack and sell a la ServInt/Erols/EV1 circa 1996, there won't be pricing pressure.
This has played out time and time again in transit pricing.
AWS ARM instances are a mattress market.
I never really fully appreciated this until I read a review (probably linked on HN) for a new system released in 1981 or 1982 and my mind couldn't stop boggling at how there was a CPU with its custom ISA and an OS written specifically for that ISA and applications written specifically for that OS, and the reviewer was praising some innovative features of the ISA and how the applications could make use of those.
The icing on the cake was how the reviewer discussed how this system could be a big commercial success and which other systems it might take market share from, without ever mentioning the IBM PC released around the same time...
The platform ended up failing, but they spun out ARM. If ARM end up overtaking Intel on the desktop, it gives the story some entertaining irony/symmetry.
That said, I did write my first ever computer program on one of them and ended up making iPhone apps.
I’m not sure it’s really fair to characterize them as “crappy”.
Anecdotally, it seems to me that today, the more successful companies are those that tend to be vertically integrated, such as Apple, Tesla, and Amazon.
* DEC Alpha
* Sun SPARC
* HP PA-RISC
* SGI MIPS (MIPS was owned by SGI in the 90's)
* IBM PowerPC, RS/6000
That we still don't have any good GPGPU resources is just crazy to me.
Heterogenous computing with TPUs/GPUs/DSPs and other chips should be standard by now.
It sounds nice in theory but in practice is hard. Writing CUDA or OpenCL is not exactly pleasant or easy and compilers do a poor job at vectorizing code.
So we use accelerators when it's an absolute must.
I find that a curious statement. While there was quite a bit of diversity in microprocessors, microcomputer manufacturers almost never built their own processors. The majority of the 8 bit market was 6502 and Z80, with a smattering of 8080 and 6809. The 16 bit market mostly was 68000.
Neither Motorola nor Intel were big players in microcomputer manufacturing. The 6502 story is a bit more complicated: MOS sold the KIM-1, and MOS itself was bought by Commodore, though there were several 6502 manufacturers.
The only microcomputer manufacturers I can think of that truly designed their processors were Acorn and Texas Instruments.
The broader interpretation of this idea, that the x86 era saw a shift away from vertically integrated computer manufacturing, is absolutely true. The narrower interpretation, that the 8086 chip triggered a shift away from vertically integrated computer manufacturing, is not.
The "x86 era" is really the era of microcomputer architectures eating mainframes and minis. I suspect that would still have happened if the 68k (or Z8000, NS32000, etc) had won the war instead of the x86.
Cromemco Z-2: 8-bit Z80 @ 4 MHz, maximum 256 kB RAM (bank-switched in a 64 kB address space), 0.007 Whetstone MIPS 
VAX-11/780: 32-bit VAX @ 5 MHz, maximum 8 MB RAM, 0.476 Whetstone MIPS 
I have no idea what the prices were. One source reckons a Z-2 was $995, but a price list from 1983 has a system with 64 kB and two floppy drives for $4695. In 1978, the list price of an 11/780 with half a megabyte of memory, a floppy drive, a tape drive, and two hard disk spindles (possibly not including disk packs, though) was $241,255.
I think that is probably correct. The economics of "commodity" microprocessors were such that one (or two) would have probably won had the x86 not. (Of course, the shift to horizontal is not just about the microprocessor but also volume operating systems, volume scale-out servers, packaged software and open source, etc.)
Half of the early ads for the 68000 called it a 16 bit processor and the rest 32 bits. The latter became more popular as time went on.
The pressure from Android side is huge.
This wouldn’t be the first time Apple used a non-x86 chip in the Mac. Nobody followed them last time.
They are mainly polishing things.
>Nobody followed them last time.
Breaking compatibility with software and operating systems and hardware is bad for the consumer.
With x86 I can run any software I need and I can optimize for cost and performance. I can use diverse CPUs, use graphic cards, memory chips, cases, PSUs, SSDs and HDDs from different makers at different price points and performance points.
I can hit exactly the sweet point I need to. And if something breaks, it won't be hard to replace.
There's something of a shift back to more vertical integration but:
-- As another commenter mentioned, there's simultaneously been something of a split between chip design and fabs
-- The big public cloud providers probably make a stronger case for a return to vertical integration than Apple does
-- For the most part, any vertical integration today is taking place in the context of both global supply chains and standards. You don't have every company using their own networking protocols and disk drive interconnects.
I don't remember anything specific to the 8086 that made it a "horizontally integrated market" w.r.t. its competitors (I might be wrong, of course)
Wasm blobs seem poised to moot this in the next 10 years. So many things on the desktop have already moved to the web or Electron. Server-side applications turning into browser-executed blobs backed by one of a few database systems still on the server is the next logical step.
Wasm is supposedly language agnostic (although I’d disagree heh) whereas even CLR imposes a lot on the language you build for it.
With Moore's Law ending, these are the tricks we need to get up to to improve user experience, reduce power draw, and make hardware go further.
And the move to ARM isn't guaranteed to bring them more sales. Customers might dislike it and software makers might dislike it.
Not just that - but the many, many legacy apps professionals rely on. The 64-bit move already decimated the professional audio space. The cynic in me can't help but see this as a move towards pure consumer device. As a development machine it will be all but useless as you find things that won't compile or work correctly on the new CPU arch.
I mean, I suppose it depends on what sort of development you're doing, but in 2020 most libraries do work on ARM. There'll no doubt be a painful period (as there was with the death of PPC), but it shouldn't be that dramatic.
I'm not a developer so forgive my ignorance, but isn't this what cross-compiling is for? I get that compiling natively can increase performance and find obscure hardware issues, but it's my understanding that, for example, ARM builds of GNU/Linux binaries are just cross-compiled by server farms that are also natively compiling the AMD64 builds.
Also, fat binaries and JIT emulation have been a thing forever, especially for Apple who has dealt with these changes twice now (68k -> PPC -> x86-64).
I just don't see this being any different than current multi-platform efforts like Debian, NetBSD, etc., except it's a for-profit company with billions of dollars and thousands of expert employees behind it.
I recently committed a change to FreeBSD's kernel (written in C) which I'd tested on amd64 and x86. Much to my surprise, when I did the cross-build for all platforms, I had a compile-time assert fail on 32-bit ARM because the size of a struct was too large. It turns out that 32-bit ARM (at least on FreeBSD) naturally aligns 8-byte data types to 8-byte boundaries, whereas x86 packs them and allows them to straddle the 8-byte boundary. This left a 4-byte hole in my struct and caused it to be 4 bytes too large.
These are the sorts of things that bite you when moving from x86/x86_64 to a RISC platform, even when it's the same endianness.
Granted, the move to ARM is a bit more work, but again I don't think anyone should be especially surprised by it. As soon as the "A" processors started posting real, positive benchmark numbers, I figured that Apple would move to them and away from Intel. In my mind's eye I can see Apple differentiating machines depending on how many ARM chips each has. MacBook Air - 2 A15; MacBook - 4 A15X; MacBook Pro - 12 A15X; Mac - 128 A16X (or something like that)
A lack of sympathy does not make my audio software work, however. And I believe what parent is driving at is that should the ARM transition take place, even more stuff isn't going to work for the end-user. So a little empathy for the user, eh?
When I hear that some group (ie. pro audio software makers) won't update their offerings I have to believe it's because they don't want to or they just don't want to serve the market anymore. In either case it seems like there is an opportunity for someone to create an alternative, and possibly make some great money.
The problem with the 32->64 transition (or x86->ARM) doesn't lie with active businesses failing to "get with the program" and update their software - it lies with abandonware. With software that's been put out either by defunct companies or sometimes literally deceased programmers.
In some niches, this sort of stuff is really, really common - generally this is the case if there's a really stable API for building things, like VST plugins, and if the niche in question has a lot of failed businesses. A lot of times in the pro audio space, a musician will spend a large part of their career "collecting" a ton of little one-off sound libraries and fx plugins, because these are the only way they can get the computer to produce that exact kind of sound they're looking for. This collection slowly builds up over the course of, say, a decade - just like a graphic designer would collect fonts.
The difference is that unlike fonts, which last had their "greet the reaper" moment back when bitmap fonts got scrapped in the mid-90s (despite OpenType becoming a thing, TrueType fonts from the 90s still work fine, some 30 years later), any audio plugins that aren't compatible with the cpu architecture die out. And that's just really brutal to a working musician.
You can't get an update to most of those because there's a ton of attrition in that industry; lots of small-time plugin makers realize pretty fast that it's a very difficult place to make any kind of ROI, so they quit after a few years.
Games are in a really similar place - they're a business slaughterhouse where most companies that attempt to make something discover they're not going to cover the initial investment, so after the game's produced it typically gets a couple of years of barebones support, and then gets abandoned - or the company just croaks. Any kind of rewrite is completely out of the question. The tragedy is that most of these games are pretty good and fun, they're just not economically viable.
I love apple moving the tech forward, but we desperately need a better emulation solution, and/or we've got to get the industry off of coding for bare metal.
There are very few creators who actually create the things they rely on to create, ultimately we are all consumers even inside our professions. And like any consumer, we are all at the mercy of a market we don’t control. Either you have to accept that everything must come to an end and plan for that eventual end, or you have to dig deeper into your creativity.
Everyone has something they rely on that will disappear before they’re ready to lose it. It’s a reality of life and as much as humankind has experienced that loss for thousands of years, we never seem to get any better at accepting it or planning for it.
Every architecture switch has casualties, what I feel might be shortsighted by Apple this time is the x86 Cocoa apps that die this time are not going to be replaced by Catalyst iPad ports or ARM builds, they'll be replaced with Electron apps.
At least both architectures are 64 bit little endian, so all those buggy C programs that do illegal type casting might still work.
One issue might be that x86 is pretty relaxed when it comes to unaligned memory accesses. I'm not sure how ARM64 handles it.
Anything external will suffer, and Apple probably isn't gonna cry about that.
That's true only if "Apple using CPU I deem cool" counts as performance.
Otherwise, benchmarks say that the first Intel Mac Mini was almost twice as fast as the latest Mac Mini with PowerPC.
I get the point you're trying to make, but unless you owned both the last PPC mini and the first Intel mini at the same time, as I did in 2006 (and still do), you have no idea what you're talking about.