
Neither cloud computing nor 80% of the desktop world run on M1, regardless of how great they might be.



Ummm... M1 is ARM. AWS Graviton is ARM (and way cheaper for the power you get). Literally EVERY iOS and Android device in the world is ARM. Windows 11? ARM too (with pretty decent x64 emulation, from my experimenting with it!)

Thinking not only that desktops still matter, but that the current proportion of desktops matters where the future is concerned, without looking at growth rates? That's a grievous error.

https://www.tomshardware.com/news/arm-6-7-billion-chips-per-...


> M1 is ARM. AWS Graviton is ARM

Graviton 2 has an absolutely huge number of cores (64), but lacks the memory bandwidth to keep them all fed at once.

> the L3 cache of the Graviton2 was shared amongst all its cores, and we also discovered how only 8-16 cores were able to saturate the memory controllers of the system.

Scaling linearly across cores might be easy for some workloads, but anything with even a remote amount of memory pressure should see greater slowdowns, given that all the threads are competing for the shared L3 and DRAM resources.

https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...
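
(Not Anandtech's methodology, just a minimal pthreads sketch of the kind of microbenchmark that shows this. Run it with a growing thread count; on a bandwidth-starved part the aggregate GB/s flatlines long before you run out of cores. Compile with cc -O2 bench.c -lpthread.)

  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  #define BUF_BYTES (64UL * 1024 * 1024)  /* bigger than any shared L3 */
  #define MAX_THREADS 256

  /* Each thread streams sequentially through its own private buffer. */
  static void *stream(void *arg) {
      volatile long *buf = arg;
      long sum = 0;
      for (size_t i = 0; i < BUF_BYTES / sizeof(long); i++)
          sum += buf[i];
      return (void *)sum;  /* keep the loads from being optimized away */
  }

  int main(int argc, char **argv) {
      int n = argc > 1 ? atoi(argv[1]) : 1;  /* thread count to test */
      if (n < 1 || n > MAX_THREADS) n = 1;
      pthread_t tid[MAX_THREADS];
      long *bufs[MAX_THREADS];
      for (int t = 0; t < n; t++) {
          bufs[t] = malloc(BUF_BYTES);
          memset(bufs[t], 1, BUF_BYTES);  /* fault in real pages */
      }
      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (int t = 0; t < n; t++)
          pthread_create(&tid[t], NULL, stream, bufs[t]);
      for (int t = 0; t < n; t++)
          pthread_join(tid[t], NULL);
      clock_gettime(CLOCK_MONOTONIC, &t1);
      double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
      printf("%d threads: %.1f GB/s aggregate\n",
             n, n * (double)BUF_BYTES / s / 1e9);
      return 0;
  }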

There's more going on than "ARM is ARM".


This is an irrelevant criticism. The claim I was originally responding to was that "Neither cloud computing nor 80% of the desktop world run on M1, regardless of how great they might be".

I think it's pretty clear I shot that all to hell by pointing out that M1 is ARM, and ARM is going everywhere, now and in the future.

Now, it's quite possible that the architectural details you point out are design foibles for most workloads in just the Graviton2 series, but the writing is on the wall (and the other three walls, the floor, and the ceiling): the near future is not x64 but some combination of ARM and RISC-V.


That is not the same hardware design, and nothing prevents Intel from licensing ARM yet again.

People should actually learn about CPU architectures and hardware design.


"The Apple M1 chip features four big Firestorm CPU cores for high-load scenarios, backed by four smaller Icestorm CPU cores designed for efficiency. If this sounds familiar, you’ve probably encountered Android phones with a similar ARM CPU layout. ARM calls this layout ARM big.LITTLE and it’s been around since 2014. The CPU uses the AArch64 or ARM64 extension set of the ARM architecture."

AArch64 is the 64-bit ARM instruction set supported by every Linux variant with an ARM build that I've seen to date. The very reason https://asahilinux.org/ was able to get up to speed so quickly is this fact (although they're still working on driver support for various subsystems).
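
You can even see the big.LITTLE split from userspace. A quick sketch, assuming a Linux kernel that exposes cpufreq in sysfs (the path below is standard mainline cpufreq, nothing Asahi-specific): the efficiency and performance cores report different max clocks, so they show up as two frequency groups.

  #include <stdio.h>

  int main(void) {
      char path[128];
      for (int cpu = 0; ; cpu++) {
          snprintf(path, sizeof path,
                   "/sys/devices/system/cpu/cpu%d/cpufreq/cpuinfo_max_freq",
                   cpu);
          FILE *f = fopen(path, "r");
          if (!f)
              break;  /* ran out of CPUs (or this kernel lacks cpufreq) */
          long khz = 0;
          if (fscanf(f, "%ld", &khz) == 1)
              printf("cpu%d: max %ld kHz\n", cpu, khz);
          fclose(f);
      }
      return 0;
  }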

If Intel has already licensed ARM, then what the hell is taking them so long? The M1 runs Intel code (under Rosetta 2 translation) at similar speed but 1/6 the power. The fact that Apple pulled this off, and not Intel themselves, is hilarious. This would be like Intel coming out with an x64 chip that ran PowerPC code slightly faster than PowerPC itself but at 1/6 the power. x64 is a dead-man-walking ISA, and it's about damn time.


Performance doesn't scale linearly with power use... very far from it, actually. So comparisons like "uses 2x the power" mean very little. A CPU will consume 2-3x more power to gain 5-10% performance. Desktop parts especially tend to consume as much power as possible for even the most incremental gains in performance, because obviously you're plugged into the wall.
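
A back-of-the-envelope illustration (the cube law below is a first-order textbook model, dynamic power ~ C*V^2*f with voltage rising roughly alongside frequency, not measured data; real chips with leakage and binned DVFS curves are even less forgiving at the top end). Compile with cc power.c -lm.

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      double power_ratio = 3.0;  /* e.g. 120W vs 40W */
      printf("%.0fx the power -> only ~%.2fx the frequency\n",
             power_ratio, cbrt(power_ratio));  /* 3x -> ~1.44x */
      double perf_gain = 1.10;   /* a +10 percent performance target */
      printf("+10%% perf costs at least ~%.2fx the power\n",
             pow(perf_gain, 3.0));  /* ~1.33x under this model */
      return 0;
  }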

And M1 is a process node ahead of Intel's offerings.

Intel fell laughably far behind on the manufacturing side, but people touting the M1 as some design achievement, rather than simply a more cutting-edge manufacturing process, don't know what they're talking about.

The big gains in CPU performance in the modern era have always come from process node jumps. The big/little concept is a great architectural change for mobile as well. But you're not going to get substantially improved performance without improving the manufacturing side.

Both AMD's and Intel's 5nm-equivalent offerings will perform similarly to the M1 on both power and perf.


How is 5nm going to erase using 200% more power (120W vs the M1's 40W) while doing the same computing work?


Did you read the comment? 2x the power use can translate to a 5% performance difference. Obviously the actual numbers will differ, but the relationship is far from linear.

For example, desktop processors will use up to 3-4x as much power as mobile processors, yet they're not nearly 3-4x as powerful. More like 10-20% faster at the same core count.

M1 wouldn't be nearly twice as performant if given twice the power, just like any other processor.

The early indications are that next-gen Intel/AMD mobile processors are looking to be more powerful than the M1. Right now they are a technology generation behind.

It's like comparing the performance of a PS5 to an Xbox 360 and saying Sony is a better hardware creator.

I'm not knocking the M1 at all; it's a good achievement. But people are worshiping it under false assumptions and incorrect comparisons. The primary advantage Apple had was buying out all of TSMC's 5nm production before AMD could.


It would run on ARM; it doesn't need to be as great as M1, it just needs to be efficient. And it's already happening (e.g. Graviton).


Except general ARM isn't the same as M1, and Intel can just license ARM again as they already did once.


> general ARM isn't the same as M1

Of course it isn't, but it's a step in the same direction, i.e. a class of products with a level of power efficiency not seen in x86 chips.

In terms of data centers/cloud, power efficiency feeds directly into cost, which is about the most important metric there is.
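
To put a rough number on it (every figure below is an assumption for illustration, not real fleet data):

  #include <stdio.h>

  int main(void) {
      double watts = 250.0;        /* one server's assumed average draw */
      double usd_per_kwh = 0.10;   /* assumed electricity rate          */
      double pue = 1.5;            /* assumed cooling/facility overhead */
      double hours = 24.0 * 365.0;
      double yearly = watts / 1000.0 * hours * usd_per_kwh * pue;
      printf("~$%.0f per server per year in power\n", yearly);  /* ~$328 */
      return 0;
  }

Halve the wattage for the same work and that saving multiplies across every rack.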

> Intel can just license ARM again as they already did once.

Of course they can, but why would the larger players want the middle man? The barrier to entry is already gone.


Everyone is a middle man; ARM doesn't own factories.


*extra middle man


Yet.



