I love ARM (20i.com)
90 points by 20i 9 days ago | 187 comments





These are my considerations.

1) ARM is good for phones/tablets/small portable devices that run on battery, because it saves much more power.

2) I think that x64 hardware of similar specs outperforms arm64 in performance. [Personal observation]

3) I love how I can boot different Linux distros/BSDs/HaikuOS/Windows/other generic x64 OSes. On my arm64 phone, I can't (the OS must be specially crafted for that specific processor, then carefully tweaked for that device in order to boot). There is also a great deal of diversity in the arm64 space, which means some arm64 hardware can't boot/run an OS that hasn't been crafted for that device.

For my use case, (3) is very important. I am quite sure some manufacturers will ship hardware that can't boot other OSes (say, a situation where most ARM laptops can't run Linux and only run Windows). I don't want that to happen. Also, booting an OS on arm64 is not standardized (there is a lot of diversity in the arm64 boot process, unlike x64). And some components might never be upstreamed into the Linux kernel (so I'd need to patch my kernel every time, or my kernel would never get updated to a new major version, like on many Android phones).

Speaking of other architectures, I don't think RISC-V will become a major architecture anytime soon, and it may suffer the same disadvantages as arm64 that I mentioned above. ppc64 (OpenPOWER) might be a good architecture, but I haven't had a chance to deal with that hardware.


Regarding 3, I think a lot of this stems from ARM not using standardised interfaces with discovery of peripherals. Historically, board configuration ended up "compiled into" kernels and bootloaders. Device Tree (DTB) has made this a bit better, as the device tree can sit alongside the kernel or bootloader, holding the board-specific configurations.

The real root issue though is that ARM devices are generally built as "finished" products, so don't need to support pluggable hardware, meaning you can make assumptions about what pin on the SoC is used for each purpose, and exactly what it's connected to. This leads to kludges on phone handsets sometimes, where audio jack polarity is incorrect etc, and needs a board-specific driver hack. These hacks were often maintained in a board-specific kernel tree.

DTB certainly helps, but ARM devices are still (to some extent) quite hardware-specific. Moving towards UEFI bootloaders on ARM64 looks promising, and there are now "generic" ARM64 bootable images for some Linux distros.

Hopefully we'll see more use of a standard UEFI interface on ARM devices, so we can get towards having common bootable OS images, without board specifics needing to be handled by the OS!


Another major problem is OEMs and distributors that insist on locked boot loaders or other DRM baked into the firmware and OS. Such machines are never intended to run anything but approved software, which is always abandoned.

IMHO, any locked device sold that connects to the internet should be forced by law to unlock after 6 months of no updates.

I would love to see Verizon forced to do this. It would be glorious.


Would that include phones with locked bootloaders? That would be awesome for reviving old phones.

Device tree is really a step backwards. People hate how complicated plug and play became, but device tree is x86 before hardware discovery became a thing.

That's mostly the gist of what you're saying, but I think it needs to change; a lot of embedded platforms are adopting device tree now and it's just making things more difficult.


Not really; X86 still has an equivalent to device tree in the ACPI tables.

Device Tree is IMO better than X86's implementation because ACPI just gives you bytecode that you as the kernel have to trust to perform actions whereas device tree gives you a declarative view of the system.
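To make the "declarative" point concrete, here's a rough sketch of how a Linux driver consumes a device tree node. The "acme,demo-uart" compatible string and the clock-frequency property are made up for illustration; the of_* and platform driver calls are the real kernel APIs:

    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    /* The kernel matches this driver to a DT node via the "compatible"
     * string; the driver then reads whatever properties it cares about.
     * The DT itself is purely declarative data, no bytecode involved. */
    static int demo_probe(struct platform_device *pdev)
    {
            struct device_node *np = pdev->dev.of_node;
            u32 clock_hz;

            /* "clock-frequency" is a common DT property name; how it is
             * interpreted is entirely up to this driver. */
            if (of_property_read_u32(np, "clock-frequency", &clock_hz))
                    return -EINVAL;

            dev_info(&pdev->dev, "configured for %u Hz\n", clock_hz);
            return 0;
    }

    static const struct of_device_id demo_of_match[] = {
            { .compatible = "acme,demo-uart" },  /* hypothetical board device */
            { }
    };
    MODULE_DEVICE_TABLE(of, demo_of_match);

    static struct platform_driver demo_driver = {
            .probe = demo_probe,
            .driver = {
                    .name = "demo-uart",
                    .of_match_table = demo_of_match,
            },
    };
    module_platform_driver(demo_driver);
    MODULE_LICENSE("GPL");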


It's not that ACPI is better so much as that the buses on typical PCs are reconfigurable.

With ACPI the configuration is packaged with the hardware, with device tree it's not. ACPI might be worse for certain reasons but it is more usable in practice.


> It's not that ACPI is better so much as that the buses on typical PCs are reconfigurable.

But there's tons of buses that aren't reconfigurable on PCs. You just don't generally see it because it's papered over with ACPI. But all the same I2C/SPI and random devices sitting off the major system management devices still exist on PCs.

> With ACPI the configuration is packaged with the hardware, with device tree it's not.

Eh, I've seen both options with both models. And ACPI being less declarative means that kernels have to ship larger patch tables to fix broken ACPI code. With device tree, a declarative approach means the drivers are free to fix issues in whatever way that kernel thinks is best.

> ACPI might be worse for certain reasons but it is more usable in practice.

I think it's easier to ship an MVP and forget about it than device tree, but device tree is easier to maintain over a long period.


>device tree is easier to maintain over a long period.

I don't really disagree, but for the most part all I've ever been able to buy are ARM boards with horrible support. If people aren't going to take up DT then it might be wise to find some middle ground.

Also -- you can probe for I2C/SPI devices.
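For example, here's a rough userspace sketch of what i2cdetect-style probing looks like on Linux. The bus number is an assumption, and a read probe is only one probing strategy (some devices are safer to probe with a quick write):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
            int fd = open("/dev/i2c-1", O_RDWR);    /* assumed bus number */
            if (fd < 0) { perror("open"); return 1; }

            /* Scan the usual 7-bit address range and see what ACKs. */
            for (int addr = 0x08; addr < 0x78; addr++) {
                    char byte;
                    if (ioctl(fd, I2C_SLAVE, addr) < 0)
                            continue;
                    if (read(fd, &byte, 1) >= 0)    /* something ACKed this address */
                            printf("device responded at 0x%02x\n", addr);
            }
            close(fd);
            return 0;
    }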


Thanks for the insight. Are there some distros which have better support for ARM as a whole, or is it really more a question of which distro has the best support for my specific ARM device?

Also, how far along are UEFI bootloaders for ARM64?


The problem is onboard peripherals that share pins. There is no way to represent these simultaneously in a devicetree.

What do you mean? The device tree is just a nested hash table, the interpretation of which is completely up to the device driver(s). If there is a limitation like you say, I guess it is fixable by patching the relevant driver?

On 2), using the iPad Pro puts me in the opposite camp, where sheer performance is better from my perception. The limitations come more from what I can do than from how well/fast/responsively it is done.

I get there is more to it than just a straight ARM vs x86 comparison, but even my 2019 MBP feels less smooth when it comes to simple and comparable tasks like browsing sites while watching videos.


Since the early days of Quicktime Apple have been quite good at keeping multitasking feeling more realtime on desktop systems.

So far as I can tell, Apple's advantage is a vertical integration one, enabling them to choose their own goalposts. Those seem to include "run Javascript as fast as possible", which involves using an Apple-tuned JIT compiler on an Apple-tuned ARM core.

I'm not sure whether they're directly responsible for the ARM Javascript instruction ( https://stackoverflow.com/questions/50966676/why-do-arm-chip... ), but you can be sure they're going to use it where appropriate. Combine a thousand similar bits of full stack corner shaving, and you get an advantage.
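For the curious, that instruction (FJCVTZS, added in ARMv8.3-A) bakes JavaScript's double-to-int32 conversion into hardware. Roughly, it replaces the sort of sequence a JS engine would otherwise emit in software; this portable C sketch is just my own illustration of the semantics, not Apple's or JavaScriptCore's code:

    #include <stdint.h>
    #include <math.h>

    /* Rough sketch of JS ToInt32, which FJCVTZS performs in hardware:
     * truncate toward zero, reduce modulo 2^32, reinterpret as signed
     * 32-bit. NaN and infinities map to 0, per the spec. */
    static int32_t js_to_int32(double d)
    {
            if (!isfinite(d))
                    return 0;
            double t = trunc(d);
            /* fmod with 2^32 keeps the low 32 bits of the integer part */
            uint32_t bits = (uint32_t)(uint64_t)(int64_t)fmod(t, 4294967296.0);
            return (int32_t)bits;
    }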

Jeff Atwood has Opinions about this, e.g. https://blog.codinghorror.com/the-tablet-turning-point/

The virus scanners tend to give standard PCs a massive disadvantage in perceived performance against locked-down tablets, too.


Interestingly, Jeff Atwood was the one speculating that A12's Speedometer performance was due to JavaScript instructions: https://twitter.com/codinghorror/status/1049082262854094848. Turns out, no, it wasn't–it's one instruction that doesn't determine the performance of the test, and JavaScriptCore wasn't even using it. Most of the performance just comes from the JIT being really good and the chip's raw performance being good.

These days I will often use my phone instead of my desktop because it's less laggy. Admittedly my computer is a few years old, but so is my phone!

What activity could possibly be less laggy on your phone?

I can't stand using my iPhone or my iPad for anything other than very basic features since they're so slow and limited compared to how quickly I can get things done on one of my real computers.

Selecting text or placing the cursor on a phone is an exercise in frustration. Browsing the web too - I have to contend with every other website treating my phone like a 2nd class citizen and some sites like Reddit or Twitch try so hard to force you to use their app. On Google I have to either use desktop mode and zoom in after each search or I have to put up with shitty AMP results.

I basically associate ARM with shitty locked-down operating systems that don't have my best interests in mind.


> Selecting text or placing the cursor on a phone is an exercise in frustration.

That's just a difference in UI - a small phone touchscreen is simply not a very effective input device compared to even a plain touchpad using the exact same area. (To the point where I think all touchscreen-only devices should support "touchpad mode" with a pointer out of the box. VNC/RDP implementations on mobile devices have been doing this for ages, so it's quite physically achievable.)


> What activity could possibly be less laggy on your phone?

Photos on my iPhone SE (2016) or 9.7" iPad Pro (2016) is significantly more performant than Photos on either of my MBPs (13" & 15", 2015).

Like massively so, to the point that I absolutely loathe having to use my Mac whenever I need to do anything with a photo or video.


That's probably because photos are an incredibly common task on phones and so the software has gotten a huge amount of optimization attention. I don't think it's a hardware issue.

Phone UIs in general get more love just because with that tiny screen and given the normal casual use cases users are far less tolerant of slow or poorly designed UIs on phones. The squeaky wheel gets the oil, and on mobile any UI issue is instant death. On desktop it's just an annoyance.


The point was to give an example that desktop/laptop computers do not always result in better performance than phones.

The GP didn't seem to believe it was possible that a phone could be "less laggy".

In line with your own remarks, there are a number of common use cases that have been optimised on phones in a way that they never were on desktop, resulting in a superior user experience on the phone vs the desktop.


You had me with you until the last paragraph. I appreciate my ARM device, in my case an iPhone. Out of all of the other 'brands', Apple comparatively has my best interests (privacy and security) in mind.

With some of the mentioned trade-offs of course.


Your best interests are not necessarily everyone's best interests; security at the expense of freedom is not in my best interest, or that of many other people.

Security/privacy and freedom are at odds these days. Freedom means the freedom to install malware and spyware and the freedom for said malware and spyware to run wild on the device.

Android is more free than iOS, which means Android apps are much more free to spy on me.

If you are technical enough and conscientious enough to avoid this problem that's great, but that's not the majority of the market. For most users freedom means malware and highly invasive surveillance.

The Internet and the computing ecosystem decades ago was comparatively open and free. Then the barbarians came in the form of surveillance capitalism and industrialized malware operations. Now the fields are salted, the smaller settlements are burned to cinders, and everyone is cowering behind the high city walls of various walled gardens.


Meanwhile, Linux phones are starting to become a thing, even if they're just a group of survivors trying to tame and cultivate their own desert island.

Much of the security afforded by systems without too much market share is that they are not seen as a juicy enough target. But any juicy target in the hands of a regular user (not an enthusiast, not an expert sysadmin) will be exploited. The more flexibility the user has, the bigger the chances they will pick convenience over security making the system even easier to exploit.

There's no system that will make everyone happy, so the interests of the many for security outweigh the interests of the few for tinkerability.


Maybe I'm an outlier in this, but I don't think Linux is for everybody, nor do I think it has any reason to try to be. Linux phones should be an option for enthusiasts and security researchers, IMO. At no point did I intend to imply that they were some sort of answer to the issue of security and privacy for the majority of people.

If anything I thought my comment implied the opposite, since the community is so heavily DIY that it's comparatively a Robinson Crusoe scenario.


>Android is more free than iOS, which means Android apps are much more free to spy on me.

I'm curious about this: does this mean the iOS equivalents of Facebook and every other service that's been caught spying on customers somehow don't do exactly the same thing on that platform?

If I use facebook on an Apple device I become immune to their tracking?


As much as folks like to hate on Google here, Android's privacy protections / permissions are on par with iOS and improve with every release. Each has their advantages, but it's not far off.

> Security/privacy and freedom are at odds these days.

They don’t have to be; it’s just that this is the option we have right now.


> On 2), using the iPad Pro puts me in the opposite camp, where sheer performance is better from my perception. The limitations come more from what I can do than from how well/fast/responsively it is done.

How much of that is because it's using a 120hz display, vs 60hz on most computers?


In the context of this argument, does it matter? If you're doing the same tasks (ie. web browsing, as mentioned) and one is noticeably better or as good, the conclusion would be the ARM processor in the iPad Pro is at least as good as the processor in the 2017 Macbook at the task in question.

If it's just the refresh rate, that's nothing to do with the CPU (or, at least, nothing to do with the parts of the CPU that are meaningfully different between x86_64 and arm64).

My point is that even if the primary reason it appears better to the user is the high refresh rate, the CPU must still be no worse than the other, because a slower CPU would cause a degradation in CPU-intensive tasks such as web browsing.

It depends how big the effect is. Human perception is abysmal at judging performance. Particularly with web browsing, if you've got snappy compositing and the UI is otherwise slick, slow browser performance will get blamed on the site, not the CPU.

The argument wasn't that they were equal, but that the ARM chip appeared _more_ performant than the x64 one. It's a legit point that refresh rate could make it appear so even if the actual performance is roughly the same.

For non-games, screen refresh does not incur additional CPU usage, it only hits the video memory bandwidth. There may be a few things like scrolling that are limited to one action frame per display frame, but it's not common to tie code to vsync.

On (2), what does "similar specs" mean to you? I can think of several interpretations like performance/watt or performance/dollar. It is not my understanding that Intel hardware generally beats ARM in these categories.

I suppose that Intel would win in a competition of "maximum single-thread perf regardless of power or cost". But that does not sound like something that is intended by "similar specs".

On (1), I would point out that saving power is beneficial to everyone and will likely only become more important in the future.

(3) is a bummer but is something that we can possibly solve with technologies like the Linux Device Tree. Certainly, it is not impossible to build an open platform on ARM if the manufacturer cooperates. To what extent they will do so, remains to be seen I think.


2) I see x64 hardware on somewhat costlier machines, and arm64 on cheaper machines. In general computing devices, as price increases you get to see less arm64 hardware.

I had a low-cost laptop with an AMD processor. It compiled faster and handled load better than a Raspberry Pi with comparable hardware (similar specs, not exact specs).

The Raspberry Pi was of course cheaper than the AMD laptop (about 0.4x the price).

I don't think the AMD laptop consumed much more power than the arm64 Raspberry Pi. With arm64 being lower in cost and consuming less energy, it can feel like arm64 is better than x64 in perf/dollar or perf/watt, but the reality is that there is no directly comparable hardware across both architectures in their respective product ranges, and the difference is not that apparent when it comes to arm64. Also, my AMD laptop compiled, cross-compiled, and handled large loads much better than arm64 hardware of comparable specs.

1) Yes, power saving is a desirable feature. But I don't see arm64 workstations around, so if you need performant hardware, I think x64/ppc64 becomes inevitable.

3) Definitely a bummer, still unresolved. Let's see how it goes. x64 has the advantage on this point.


Just for a relatable example: on most ARM processors that would compare to a low-end x86_64 machine, I notice Python programs running far slower with similar memory and core count.

It might be memory issues; x86 machines have very wide memory buses. But I am still left a little confused, and in practice it still has an impact.


> On (2), what does "similar specs" mean to you? I can think of several interpretations like performance/watt or performance/dollar. It is not my understanding that Intel hardware generally beats ARM in these categories.

Not the person you replied to, but this could have a couple interpretations where Intel wins. Here's one option:

If you take two CPUs with similar core counts, clock speed, and transistor counts, then which one will have higher throughput, lower latency, ... for typical operations? This is open to interpretation of course (e.g. if you didn't need floating point throughput then AMD Bulldozer architectures were competitive when released), but it's not unreasonable to evaluate CPUs based on your personal workloads.


The thing about x64 is that its really complex instruction decoding system is unavoidable overhead, especially since every processor translates the x64 instructions into processor-specific micro-instructions, and then those are run by the processor.

It seems to me that a simpler instruction set that incurs much less overhead than the above will simply have a higher performance ceiling. The only advantage I see in x64 is that the complex instructions are a form of 'compression' for the micro-instructions. Maaaaybe the resultant bandwidth reduction in reading instructions is worth the performance overhead of x64 instruction decoding.


Decode complexity is a real cost to x86-64 but it's not a huge one. Well, for medium devices in the Pentium range it can be pretty significant but for modern large cores it'll only end up costing 5% or so of the total power budget on x86. And there are some design effects where ARM cores will tend to make the decode unit big enough that it's never the limiting factor whereas x86 cores tend to balance the size of the decoder against the other stages more.

More significant than that, I think, is that modern 64 bit ARM actually tends to produce denser code than modern 64 bit x86 - which is really weird when you consider that ARM-64 is fixed width and x86-64 is variable width, but backwards compatibility can do that.

And more important than that, however, is that x86-64 has a very strict memory model and ARM a more relaxed one which can cause problems for programmers who aren't careful but has huge implications for the design of the memory backend.
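To illustrate that last point, here's a minimal C11 message-passing sketch (my own example, not from the thread). On x86-64's stronger TSO model, sloppy code that omits the release/acquire pair often happens to work because stores aren't reordered with other stores; on ARM's weaker model, the explicit ordering below is what makes it correct:

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static int payload;
    static atomic_int ready;

    static void *producer(void *arg)
    {
            payload = 42;                                 /* plain data store */
            atomic_store_explicit(&ready, 1,
                                  memory_order_release);  /* publish the data */
            return NULL;
    }

    static void *consumer(void *arg)
    {
            while (!atomic_load_explicit(&ready, memory_order_acquire))
                    ;                                     /* spin until published */
            printf("payload = %d\n", payload);            /* guaranteed to see 42 */
            return NULL;
    }

    int main(void)
    {
            pthread_t p, c;
            pthread_create(&c, NULL, consumer, NULL);
            pthread_create(&p, NULL, producer, NULL);
            pthread_join(p, NULL);
            pthread_join(c, NULL);
            return 0;
    }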


> More significant than that, I think, is that modern 64 bit ARM actually tends to produce denser code than modern 64 bit x86

Do you have a citation for that? Everything I've seen says the opposite, including http://web.eece.maine.edu/~vweaver/papers/iccd09/ll_document...

The strongest wording I've seen is that aarch64 is 'competitive with' x86_64.


You know I think you're correct and I was mis-remembering. I think what I might have actually been confused by is instructions per task rather than bytes of instruction per task but given that my memory has already failed me once here who knows.

I'm sympathetic to this view and it was probably true in older generations of chips, but it turns out that instruction decode is a very small portion of die size and power usage these days. The L2 probably uses more juice, for example. See Hirki et al.: https://www.usenix.org/system/files/conference/cooldc16/cool...

I think most of the power envelope difference these days is process node, with Intel chips usually lagging behind.

That being said, I do think that if the same amount of performance engineering were dumped into ARM as has been put into x64, ARM would win. The simpler instruction encoding, less cruft (but not none!), and other modern characteristics would make that optimization go further.

My concern as others have mentioned is lack of standardized interfaces and peripheral enumeration, leading to a lack of flexibility and the loss of ability to run alternative OSes and drivers. That can be fixed, but does anyone care enough to fix it?


> I think most of the power envelope difference these days is process node, with Intel chips usually lagging behind.

That hasn't been the case until quite recently. Before TSMC 7nm Intel was still at least on-par with everyone else. I think it boils down to arm chips being designed from the ground up with power usage in mind (with a fair number having slower low-power secondary cores and such), whereas Intel mostly expects your CPU to always be doing something even at idle.


> The only advantage I see in x64 is that the complex instructions are a form of 'compression' for the micro-instructions.

It's not a very good one. The x86_64 ISA encoding evolved out of the original 8086 instruction encoding, and has inherited a lot of inefficiencies from there -- a lot of one-byte opcodes are wasted on rarely used bytewise instructions (like XLAT), privileged instructions (like the IN/OUT family), or on opcodes that are simply invalid in 64-bit mode.

Most instructions in real x86_64 code are two to four bytes long, not even counting immediate values. A focused attempt could probably do much better.


IANA hardware/CPU guy but, instructions that get decoded get cached in a cache parallel to the data L1: the L1 instruction cache.

> Maaaaybe the resultant bandwidth reduction in reading instructions is worth the performance overhead of x64 instruction decoding.

Access to main mem is slow, much slower than decoding circuitry, so it is very well worth it. From what I've read, decoding time mattered early on, hence (among other reasons) RISC, but as chips sped up, decoding time became less important and then became an overhead as the instructions were large. IIRC, and I can't back this up so take with caution, the Alpha chips suffered because of this (but of course you could always take large instructions, decode them into smaller ones, and cache those decoded instrs).

Just my view. Ask a chip expert for a better opinion.


> The only advantage I see in x64 is that the complex instructions are a form of 'compression' for the micro-instructions. Maaaaybe the resultant bandwidth reduction in reading instructions is worth the performance overhead of x64 instruction decoding.

It would be sensible to expect that, but as it turns out the RISC-V C ("compressed instructions") extension is quite competitive with x86/amd64. Of course, ARM64 doesn't seem to have an equivalent to RISC-V C; even the old Thumb mode is gone.


"I think that x64 hardware of similar specs outperforms arm64 in performance"

Performance per watt is probably going to be the primary driver. "How cheaply can I rent a vCPU" drives decisions.


> 2) I think that x64 hardware of similar specs outperforms arm64 in performance. [Personal observation]

Can you elaborate on this a bit? What devices are you comparing (on both sides of the comparison)?


about 3) whatever happened to the Server Base Boot Requirements (SBBR) and Server Base System Architecture (SBSA) specs? They were supposed to standardize stuff like that.

https://developer.arm.com/architectures/platform-design/serv...


Regarding 2), I would like to see an ARM that isn't heavily power-managed and that also has a bunch of cache, like a similar x64 would. 3) This is probably not the CPU; see the RPi 4, which runs Windows/Linux and several others.

I love ARM too, that's why I'm sad to hear that SoftBank has put it on the block for sale and especially that Nvidia is eyeing it.

People say it wouldn't get past regulators; I doubt it. Qualcomm - Broadcom was blocked because the US feared the latter might stall Qualcomm's telecom research, giving an advantage to China. But it's the other way around for the ARM Holdings - Nvidia deal; why wouldn't the US allow its own company to grab a future-proof computer architecture?

Sidenote/rant: look what you have done, WeWork. Burning VC money on a nonsense company has led to a point where the future of computing democracy is now under threat. Well, SoftBank wasn't a good place for ARM Holdings in the first place, but at least I thought they were in it for the long haul; I was wrong.

I feel RISC-V is the true future for open-source computing.


> I think Apple moving towards entirely ARM compatible Macs and Macbooks will drive the ARM SBC market forward for developers (it’s difficult to develop on architecture X for architecture Y).

I'd be very surprised if this was the case.


Not only is there limited connection between Macs and SBCs, but the opposite has been true of iOS development.

Interestingly, though, Android development was actually hampered for a long time by the slow emulator.

If you're doing Android development with the ARM emulator, it's crazy slow. However, if you use the x86 emulator, you get native speed.

Xcode/iOS's emulator is actually x86 - which is why it seems faster than Android's default emulator.


Xcode doesn’t emulate anything, it runs the entire iOS userspace as native processes on top of the macOS kernel.

It "emulates" the hardware - e.g. you can inject shake / GPS / etc

This is the same as the Android x86 "emulator".


IIRC the Android emulator uses hardware-assisted virtualization through Intel HAXM. The iOS simulator runs entirely in userspace in a separate launchd context.

(Mostly referring to the actual processor/OS itself; buttons that "vibrate" the phone are not really technically interesting to me.)


The annoying part about the default x86 emulator that installs from Android Studio is that it's incompatible with Hyper-V, requiring a reboot to disable.

There's a little bit about the history of ARM and Apple in the podcast Acquired did about Softbank and ARM.

https://www.acquired.fm/episodes/arm-softbank


I'm surprised that Apple didn't buy ARM yet.

My guess is that Apple buying ARM is a money-losing proposition. Actually licensing the ISA can't be terribly profitable for a company of Apple's size, and ARM being fabless means there's not much competitive advantage to owning (rather than licensing) the ISA. Simply owning ARM might invite antitrust concerns that Apple really doesn't want to be dealing with, and risks souring their business relationships. Actually trying to exploit the ownership, beyond collecting royalties, would guarantee both. As, perhaps, would any future efforts to put proprietary ARM extensions into their Cortex line.

So the reason Apple would buy Arm is to save itself money on the architecture license in the long run, right? But it could also save money on the architecture license if it switched to RISC-V. And you can be sure there's a tightly guarded spreadsheet somewhere inside Apple that models whether or not switching to RISC-V is a good idea.

Of course, Arm can figure all that out too, so somewhere inside Arm they've tried to replicate Apple's "should we switch to RISC-V" spreadsheet, and Arm sets the price of the architecture license so that the spreadsheet will always say "don't switch to RISC-V".

The other thing to consider: Arm's core design business is quite valuable; probably more valuable than their architecture business. But Apple does their own core design, and would not be interested in selling core IP to others, so they would get no value from that business. But if they were to buy Arm, they'd still have to pay for it.


Apple owned 43% of ARM in the early 1990s, when the company was investing heavily into Newton and associated mobile technologies during the Jobs interregnum.

Owning the instruction set and reference designs maybe isn’t terribly useful, when your primary competitors already have perpetual licenses to the technology and do their own core designs.


I think they’d struggle to do anything with it, given anti-trust concerns

Apple is one of the parties which founded ARM

Source? As far as I can see they only made a joint venture: https://www.latimes.com/archives/la-xpm-1990-11-28-fi-4993-s...

According to that article the joint venture they co-founded was actually ARM.

>The company was founded in November 1990 as Advanced RISC Machines Ltd and structured as a joint venture between Acorn Computers, Apple Computer (now Apple Inc.) and VLSI Technology.

>The new company intended to further the development of the Acorn RISC Machine processor, which was originally used in the Acorn Archimedes and had been selected by Apple for its Newton project.

https://en.wikipedia.org/wiki/Arm_Holdings#Founding


True, however the joint venture was founded to “further develop” the technology. That means the ARM architecture itself seems to have originated at Acorn, without Apple.

The point is, originally, Apple were co-owners of Arm (together with Acorn and VLSI).

Yes, that's correct.

The CPU architecture was developed long before the joint venture.

IIRC, it began as an expansion board for the BBC Micro.

This article implies that initial ARM development was undertaken between 1983 and 1985:

https://www.theregister.com/2012/05/03/unsung_heroes_of_tech...


This would be a huge headache for them and their legal department, since the acquisition would have to be approved by the government.

That requires that Softbank is willing to sell.

They are: https://www.bloomberg.com/news/articles/2020-07-22/softbank-...

> As part of the sale process, SoftBank approached Apple to gauge its interest in acquiring Arm, according to people familiar with the matter. While the two firms had preliminary discussions, Apple isn’t planning to pursue a bid, the people said.


I'm not quite sure where the assumption comes from that the Apple Silicon instruction set will even be compatible with other ARM chips. Apple does add their own instructions.

Citation?

Edit - I'm getting downvoted, but if you're going to suggest that Apple are departing from the ARMv8 instruction set - which is a really important point if true, and which I've not seen any evidence for - then you really need to supply a citation.


I'm digging through history to find an article about Apple's LLVM patches adding a custom instruction only present on A-series chips. Due to the Apple Silicon announcement it's kinda hard to find - but they did add at least one instruction to the ARM instruction set.

From this, and from the very very strong marketing push to NOT use the name "ARM" from Apple, I have a suspicion that the Apple Silicon instruction set won't be a fully standard ARM Holdings instruction set.


"Apple Silicon" is simply a marketing term for, well, Apple's silicon. They're making this major change to their computers with the promise of better performance, functionality and battery life. Apple's made major investments to have the best chips in their class. Many of the functional improvements come from SoC siblings like GPU, neural cores, hardware codecs, etc. Why would Apple want to give ARM all the credit for that?

Apple isn't shy about communicating to developers that Apple Silicon uses ARM ISA. This is about conveying to consumers their unique platform advantage.


Thanks - I guess I would draw a big distinction between adding a new instruction as part of being first to a new version of ARMv8 - which then becomes part of the standard ISA, and adding a permanently Apple only instruction. Would be very disappointed if it's the latter.

Agreed that the Apple Silicon branding was very strong but assumed that it was just to distinguish themselves with consumers from those who will also use ARM on the desktop.


ARMv8.6 exists

Apple uses V8.4

Current ARM designs use v8.2

As you might suspect, that leaves Apple stuck doing the software support. It also gives them additional instructions that aren't proprietary to them, but are currently unique to them.


> and from the very very strong marketing push to NOT use the name "ARM" from Apple,

This is puzzling to me. I assume it's simple egotism. Apple marketing and also the user community at large tends to present their technologies as exclusive and product categories as invented by them, even when it's a stretch. This has a long history.


There are the AMX machine learning on-die accelerators. I assume there are some new instructions to talk to those.

You don't need special instructions to talk to peripheral devices. The ISA supports that from the get go. As a licensee, you can't add instructions to the ISA, only ARM can do that. That's what they do, they're the custodians of the ARM architecture.

They are probably just peripherals, like the GPU. I'd be very surprised if Apple strayed from the ARMv8 specification and added new instructions for those accelerators.


That whole thread is just speculation, same as what we are doing here.

See this post: https://www.realworldtech.com/forum/?threadid=187087&curpost...


As long as it's a compatible superset of ARM64, I don't see a big problem.

Just because you don't see the problem, it doesn't mean it's not there. However you want to put it, you can't add (or remove) instructions at your leisure if you want to keep your license. It does not happen. I can get into details if you want, but I think it's beside the point.

I can see a significant problem with adding proprietary instructions. Would be interested in more detail on ARM specifics if available. Thanks.

1) ARM sells "ecosystem". They can license their IP because licensees know that, once they buy into it, they benefit from a unified ecosystem. Developing and, more than anything else, maintaining an ecosystem is extremely expensive, messy, and error prone. ARM adds value by keeping that ecosystem in order.

2) ARM instructions have a fixed length of 4 bytes. This means that there's a limited number of instructions you can add. Intel, for instance, is different: they have a very complex encoding scheme that gives them, potentially, limitless encoding space.

3) If any licensee, specially a popular one like Apple, decides to take part of that encoding space for their own instructions... do you see the implications? Now, that space is unavailable to ARM. ARM, the owner and custodian of the ISA, are put in an impossible position: accept those instructions into the ISA (if Apple allows it, because now it's their IP) or lose unity in their ecosystem (fragmentation). In any case, they've lost extremely valuable encoding space and, more importantly, control over their own ISA.

And that's why ARM not only don't allow licensees to implement their own instructions, they will never allow it (generally speaking; they have reserved some encoding space in the M profile for custom instructions, but that's controlled customization in a profile that has fewer instructions to begin with).


Thanks. Makes a lot of sense.

Apple is making a really strong push to differentiate its chips from ARM. A few of the Mac sites are claiming it's "Apple Silicon" and not "ARM". I believe the heart of it is that Apple has "followed the ARM spec" but did not use ARM's actual silicon designs.

I don't really know what the backing of this is either. I'm sure they added a few of their own instructions. And I read that they single-handedly pushed the ARM platform to 64-bit (citation needed, definitely not sure if that's really true).


"Apple Silicon" is a literal truth. The silicon design, the microarchitecture, is theirs, not ARM's. The architecture, this includes the ISA but also many other things that have to be taken into consideration in CPU design (exception model, consistency, virtual memory, virtualization,...), is ARM's. They simply don't want to tie their brand to ARM's, which is fair enough, Apple's brand is much more valuable than ARM's.

Regarding the push for a 64-bit chip, I've heard the exact opposite. AArch64 (64-bit ARM) was an ARM design without a customer, and Apple were the first to buy into it. I've heard that many said it was a premature jump, and there was a lot of scepticism within ARM that they'd be able to sell it (this happened when ARM was selling mobile designs exclusively, with no intentions of getting into servers). Still, they went ahead with it, and it was an unexpected success.


It’s easier to just tweak some system registers than to do that (and indeed this is how Apple implements their proprietary “in-silicon” features).

HN seems to dislike too much conciseness -- I guess a single-word comment, even if it's a good reply, comes across as impolite or low effort (for the lack of tone in text).

Thanks. (Please don't downvote me!)

Apple does not have a license to add instructions to the ARMv8 ISA. ARMv8 processors designed by Apple must be fully compliant with the ARMv8 instruction set architecture - such are the terms of the license.

ARM very recently started allowing licensees to add custom instructions, but only for Cortex-M (embedded).


Have you ever noticed that ARMv8-A architecture extensions are published faster than Arm-designed cores make use of them? I'm pretty sure that's because architecture licensees are a major driver of those extensions. If Apple needs a CPU that adds X and Y to the architecture and removes Z, I have no doubt that Arm will work with them to allow it, and that it will probably come in the form of an architecture extension that permits it. Arm gets as much from Apple out of the relationship as Apple gets from Arm, and Arm surely knows it. They don't benefit from bossing Apple around.

Arm very recently started allowing Core licensees to add custom instructions (on M33 and M55). But they've allowed architecture licensees to do that for a long time (see XScale).


Yeah, agreed that architectural licensees definitely influence future ARM specifications - e.g. Fujitsu was a major driver behind SVE.

IMO, there's an important difference between Apple adding new instructions that are not in the spec, and Apple convincing ARM to add new instructions to the spec and then implementing those instructions.

In the first case, those are Apple-only instructions. In the second, that's just Apple implementing new ARM instructions.

> But they've allowed architecture licensees to do that for a long time

I don't agree that developing specifications and extensions based on the needs and desires of licensees is the same as permitting licensees to add custom instructions not permitted by the specification.

> They don't benefit from bossing Apple around

ARM benefits greatly from limiting fragmentation of their platform by exerting control over licensees.


"ARM very recently started allowing licensees to add custom instructions, but only for Cortex-M (embedded)."

They've reserved a block for that in the encoding space. Quite exciting! And it makes a lot of sense in embedded world.


Apple couldn’t care less; they are supposed to support 4K pages in hardware but they do not on recent chips.

ARMv8-A supports page sizes of 4k, 16k, and 64k. The specification permits implementations to choose which of these page sizes to implement. See the register ID_AA64MMFR0_EL1.

I'm not an expert, but I believe 4K support is required? Apple doesn't do that; they just have 16K granules (which I foresee as being problematic with compatibility as they bring these over to macOS).

> I believe 4K support is required?

As I said, I don't think that's the case. ARMv8-A supports the 3 page sizes and has a register so that implementations can advertise which are supported.

> Armv8-A supports three different granule sizes: 4KB, 16KB, and 64KB.

> The granule sizes that a processor supports are IMPLEMENTATION DEFINED and are reported by ID_AA64MMFR0_EL1

https://developer.arm.com/architectures/learn-the-architectu...
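For reference, this is roughly how software would check which granules a given core advertises (EL1/bare-metal style code in C; on Linux, user-space MRS reads of the ID registers are trapped and emulated, but some fields may read as zero there, so treat this as illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Query ID_AA64MMFR0_EL1 for supported translation granules.
     * Field positions per the ARM ARM: TGran4 [31:28], TGran64 [27:24],
     * TGran16 [23:20]; 0xF means "not supported" for 4K/64K, while for
     * 16K a value of 0x0 means "not supported". AArch64 only. */
    int main(void)
    {
            uint64_t mmfr0;
            __asm__("mrs %0, ID_AA64MMFR0_EL1" : "=r"(mmfr0));

            unsigned tgran4  = (mmfr0 >> 28) & 0xF;
            unsigned tgran64 = (mmfr0 >> 24) & 0xF;
            unsigned tgran16 = (mmfr0 >> 20) & 0xF;

            printf("4K granule:  %s\n", tgran4  != 0xF ? "supported" : "not supported");
            printf("64K granule: %s\n", tgran64 != 0xF ? "supported" : "not supported");
            printf("16K granule: %s\n", tgran16 != 0x0 ? "supported" : "not supported");
            return 0;
    }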


You know I asked around recently and it seems to be that all three are now optional. Perhaps Apple has pushed for this, because I recall 4K and 64K support being a hard requirement.

If they want to keep their architectural license they have to have their designs pass certain compatibility benchmarks.

I wouldn't put it past them to have paid silly money for a bespoke license.

Every article I read about ARM64 doesn’t really have any technical, concrete reason to love it. I think we’re simply experiencing a sort of euphoria that comes with knowing there’s competition in the space and big players are mixing things up. There’s joy in anticipation.

The one big reason is that ARM64 is a reasonably clean 64-bit RISC CPU that doesn't have as much historical baggage to carry as the x86-based CPUs, which were never loved at any point in their life cycle but, due to how history went, managed to kill all the nice RISC designs that were once around: SPARC, Alpha, MIPS, just to name some.

I can give you a quick one: 5nm CPUs in Apple laptops.

Servers next? Servers first! AWS released a second generation of the Graviton before Apple announced the Mac transition. Also great stuff is coming from Ampere and Marvell (ex-Cavium).

I've been running one of my production applications[1] on a Marvell (ex-Cavium) based ARM server on Scaleway, which has had them for several years now; unfortunately they suddenly decided to pull the plug on their entire ARM lineup, citing stability issues which not many have faced[2].

[1]https://needgap.com

[2]https://news.ycombinator.com/item?id=22865925


Back in the pre-Graviton days I managed to boot FreeBSD on their ThunderX VM offering, wrote a guide on the forums, even received an email about that from staff… and a few months later they broke local boot completely and then, yeah, closed the whole offering for some reason. What a disaster. As much as I dislike the whole Bezos Empire thing, I'm glad AWS picked up the ARM VM world and pushed it to great heights.

I think the main cause of the issues with Scaleway's ARM offering could be their own custom BIOS and not the actual hardware. I wish they had come clean with the actual technical details of the stability issues instead of just mentioning them in one line in their migration guide from ARM to x86 servers.


I wish Scaleway hadn't discontinued ARM because I was thinking of migrating a project over to an ARM server just for fun but when I looked ARM was gone.

It might be coming back: https://amperecomputing.com/ampere-altra-family-of-cloud-nat...

> we’re evaluating Ampere Altra for the new breeze of our Arm64 cloud instances in late 2020. — Scaleway


nitpick : you probably mean 'citing'.

Thank you, corrected.

My reckless prediction: When Apple releases their first Apple Silicon Macs at the end of the year, one of them will be a rack-mount server.

It's been a while since they had one. https://en.m.wikipedia.org/wiki/Xserve

The Xserve was never really competitive in price or performance. An Arm based Macserve has the potential to offer higher performance per watt than x86-based offerings which might make it make more sense in the datacenter than the Xserve ever did.

Just on pure aesthetics, I had kind of wanted an Xserve as a recording computer to put in the rack next to all the synth modules. Of course, it never made financial sense to do so.

Xserves were really loud. I doubt you'd want it anywhere near an audio setup, unless it was in a separate room. You can hear it at 6:30 in this video: https://www.youtube.com/watch?v=pdMKuHVnjik

One would hope that they release some rack-mounted hardware with their ARM chips. It is actually surprising that the big users of server hardware have not pushed harder towards ARM. It would absolutely make sense for Apple to use their own silicon for their serving infrastructure as well as selling it to any interested party, as this increases the unit count of Apple Silicon, which drives down the cost for all Macs.

Happy to bet against this.

The Mac Pro is rack mountable, and I would expect it to transition to ARM at some point

Lacking things like dual power supplies and out of band management, that probably isn't the context the GP meant.

This probably isn't a game changer for you, but Apple mentioned at WWDC that Lights Out Management is coming to Mac Pros in the next release of macOS. You need a Mac Mini somewhere on the network to manage commands from the MDM to the turned off Mac Pro, so I don't know what happens if that Mac Mini is turned off, but it's a move in the right direction I guess. (I suppose it's because they don't want network to be up before the firmware on the target Macs is validated, but I'm not sure).

https://developer.apple.com/documentation/devicemanagement/l...


If the Mac mini is capable of managing several servers, that's not a bad idea.

Way too expensive to even be in the ballpark of competing with other solutions.

The Xserve existed at a time when most servers were expensive and high margin. Today they are not so "ballpark of competing" would mean low margin or subsidized pricing for Apple.

I did say reckless.

With what advantage? Performance per Watt? Probably not anything software based they currently offer.

I personally hope to see more 64-bit only processors without the burden of AArch32 support.

You see that in, say, Cortex-A76, which only supports AArch32 at EL0. Supporting AArch32 at EL0 is quite cheap compared to also supporting it at the higher exception levels - EL0 can be done entirely in the instruction decoder.

Most server chips with custom cores are like that. Cavium-Marvell ThunderX 1/2 don't support aarch32 at all. Qualcomm Falkor didn't either (RIP).

What's the reason behind companies adopting ARM instead of the open standard RISC-V? E.g. Apple designs its own chips, wouldn't it make more sense to use RISC-V instead of paying fees to ARM?

Apple helped found ARM back in 1990 and was a major shareholder for years; they have a massive investment in the architecture, including enormous experience with it. They can't just drop the massive experience, knowledge, investment and hardware and software ecosystem they have built up around ARM and switch to a completely different architecture.

Also Apple already hold a perpetual license for the ARM architecture, the major costs of their investment in it were spent years or even decades ago.

The obvious counter is the sunk cost fallacy, but what does RISC-V actually have to offer that Apple doesn't already have with ARM? Those sunk costs have bought Apple actual tangible benefits on ARM that RISC-V doesn't have, such as architectural features like the secure enclave, better device integration, better GPU options, more mature device driver support, a stronger development and testing tools ecosystem.

The opportunity for RISC-V is new use cases not already strongly addressed by ARM, or where ARMs more mature ecosystem offers weaker benefits, for companies not already strongly invested in ARM.


In this case, I think that the "sunk cost fallacy" counter is as specious as it is obvious. It completely misplaces the burden of evidence.

Apple has a platform with hundreds of millions of active users, perhaps over a billion actual devices in use. In a situation like that, conservatism is warranted. The guiding principle is simple: If it ain't broke, don't fix it. Meaning that the argument doesn't start with any sort of discussion about technical advantages that RISC-V might have over ARM, or challenging reasons to stay on ARM as if, having dismissed them, the obvious default plan of migrating to RISC-V would kick into action. It starts with trying to demonstrate that something about ARM is broken. Because, if it ain't broke, don't fix it. So. . . is it hindering Apple's strategic or operational priorities in any way?


Presumably it's because ARM is a fully-baked technology that's been around for a while, with good compilers and all that, right now.

RISC-V is promising, but it's not there yet. A small company like Pine64 isn't necessarily going to have the resources to, for example, get the Linux kernel support for RISC-V into a production-ready state.

Apple, for its part, adopted ARM years before the RISC-V project even started.


Presumably because ARM’s ISA is more advanced and costs Apple basically nothing (I’ve heard it might be as low as six or seven figures for their architectural license).

Not judging, but IMO MAGAF* can really afford to pump money into enhancing RISC-V - individually or via a consortium - and they stand to benefit together while developing the whole ecosystem too. The early days of Apple and MS were sort of along these lines anyway, which caused an explosion of growth and wealth. I can't see why this isn't happening now!

*Microsoft, Apple, Google, Amazon, FB

[Edit 1]: A quick search shows that Google, IBM, Sony etc. are already members of the RISC-V foundation.

Also, https://riscv.org/2018/08/eet-asia-article-microsoft-and-goo...


But why should they?

Cause, they ... can?

Why would a nation choose to go to the moon, or build something otherworldly?


This is corporate mindset 101. Questions like "why don't you do X" make no sense to a corporation. The question is always "why should I do X". Every decision has a cost and a benefit, and things happen when benefits outweigh the costs proportionally to the implicit risk.

Companies have been successfully working with ARM for a loooong time. It's a massive, well supported, ecosystem that works quite well (no customer complains about delayed/cancelled products related to ARM's performance). What does RISC-V offer that's worth jumping into the void?

That's startup territory, you won't see big companies betting big on RISC-V for a while, if ever.


I think a big part of the issue with these things is that porting popular applications is not a high priority in the industry. I pitched porting v8 to RISC-V to a few companies a couple years back, and it seems like they were not interested in that.

Getting ATX (or other standardized) form-factor boards with chips big enough to test applications on, in my opinion, should be a high priority. As much as people talk about "post-PC" or whatever, PC-form-factor computers are the center of development and understanding, especially for server applications.

Being able to buy or use existing power supplies, chassis, disks, NICs, and GPUs (and maybe DIMMs, if that's not a massive engineering challenge) and put together a "PC" I think is really the key to development of the architecture, and they're a portal to applications like Windows laptops (see Qualcomm AArch64 laptops) and Chromebooks (big part of what I was thinking of when I begun and abandoned porting V8).

If there were ATX form-factor AArch64 boards available for less than a thousand bucks early on, it would have hit Android and laptops possibly a year or two earlier, with significantly less specific investment from ARM Holdings.


They have a lot of experience designing ARM chips that they're not just gonna throw out. And obviously the major advantage of going ARM is that they can finally unify iOS and Mac app development. In fact, as far as I know, all iOS apps will be natively executable on Apple Silicon for Mac on day one.

I think RISC-V will eventually come to consumer facing (rather than embedded) products, but it might take another 10 years.


Apple didn't design its own cores when they started using Arm for iPhone, and RISC-V wasn't especially mature then either. There's a cost to switching, and you can bet that Arm designs its business relationship with Apple to make sure Apple continues to prefer Arm.

For the layperson, what is so special about ARM chip designs? Are they much more efficient in compute achieved per clock cycle or something?

Or as the author alludes to, have less power consuming non-useful overhead processes running in the background? Is it the equivalent of pipeline length that Apple pointed out when making the Intel transition back in the 2000s? (the "megahertz myth"?)


This is a relative comparison not an absolute one, and trying to be nontechnical about a technical topic is a minefield, but vaguely speaking, for many decades ARM generally didn't worry as much about backwards compatibility and backwards binary compatibility, and was pretty chill about every tiny little revision having tiny little improvements, such that for example V8.3-A added the "famous" FJCVTZS opcode (google it, it's funny).

Intel in comparison likes backwards compatibility. Your new motherboard probably could boot MSDOS 1.0 if not CP/M, plus or minus the need for a floppy controller LOL. Also Intel stereotypically likes major upgrade "sets", so Intel likes to release a giant pack of new features like Streaming SIMD Extensions (aka SSE) rather than how ARM will toss out literally one new opcode in a release.

Over the long term (decades) it seems the ARM strategy has been VASTLY more successful.


Thanks for that really interesting summary!

Regarding backwards compatibility, is this basically that certain companies developed new products so quickly that there was no point (or value) to supporting old hardware (for the company, or desired by users)?

And therefore delay + overheads of waiting for compatible new CPUs was worse than the value of what they did release each time? Plus the generally worse heat/power utilizing designs?


It is of course a general stereotype, in that it's useful for prediction but not as solid as, say, the law of gravity. But ARM comes from a software-as-a-project world where you write the code for a specific piece of hardware with a defined and short production life, whereas the Intel desktop PC comes from a software-as-a-product world where you expect your word processor to keep working even if you replace your motherboard or swap in a faster CPU. One has a world of permanently sealed cases and firmware that's never updated (with unfortunate security results), whereas the other at least in theory has user-serviceable parts inside a screwed-together case. Ironically that sounds a lot like Mac vs PC, so it makes sense that the Mac would go ARM.

Basically the only advantage is simplified instruction decoding, which it has in common with other RISCs. It has some other nice properties too, I think it really shows that it has taken lessons learned from other RISC architectures and struck a good balance. But the reason for the switch isn't how nice ARM is, the reason is that Intel is struggling with its manufacturing process and Apple wants to be in control of its own destiny.

Apple also want to add their own narrow-purpose chips (like media encoders and what have you).

It's more than instruction decoding, though admittedly that's a big part. There's also memory ordering (tho it's debatable whether that's problematic), the number of logical registers (particularly important for JITs) and messy dependencies like partial flag updates.

ARM doesn't sell chips, they sell IP. E.g.: their ISA (instruction set) or soft cores (designs that you can integrate in your own chip and then fabricate).

There's nothing special in the chips themselves; it's the business model that's quite different from Intel's.

Historically, ARM (and their licensees) have designed their cores targeting power efficiency over everything else, but there's nothing that prevents Apple or any other licensee from designing a power hungry ARM CPU.


From what I remember, the biggest difference is that Intel designs and produces chips, whereas ARM only designs chips and everyone can build their own chips based on the design, of course paying license fees. So Broadcom, Qualcomm and others are building ARM chips, and I would not be surprised if Samsung and Apple soon build their own.

Edit: Yes, Samsung has something called Exynos, Cortex is from Broadcom, Qualcomm has Snapdragon. Basically there is much more competition in ARM.


> Hopefully that will be in a way that will lead to there being releases/hybrids/OS projects of Apple’s collections of OSes finding themselves running on boards like the RockPro64 (inside the Pinebook Pro).

I am looking forward to seeing what they do with ARM. Probably the most interesting thing will be what kinds of custom SoCs will appear.

But don't get your hopes up for any "hackintoshes." At least, not commercial ones. You will probably be able to scare up homebrew devices (like they have nowadays, with Intel), but Apple has always been rather...punitive towards corporations that produce Apple clones.


I find it unlikely that even the homebrew community will be able to install macOS 11 on any non-Apple hardware without emulation for a good long while, because Apple is likely going to lock down their software to only run on their hardware, much like how iOS can't be run on an Android device without an emulator.

I agree. I think the T2 (T3?) chip is going to end up being embedded in the Apple Silicon SoC.

Definitely, they will consolidate their T2 functionality and also functionality traditionally left to the chipset into one SoC.

I will be the dissenting voice and say I prefer MIPS.

Like the Wintel (Windows/Intel) monoculture, ARM brings the GoogleArm monoculture. The Apple/Arm monoculture that some people seem to hope for would be no different: heavily restricted, with fewer possibilities for free software.

MIPS64 offers us a good royalty-free alternative, letting us be independent of Western (UK/US based) companies.

If Nvidia gets ARM, I fully expect the same shenanigans Intel pulled on X86.

So I hope a third party alternative (MIPS, Sparc, anything) will be there, hopefully with a major fab like Samsung behind it.


MIPS64 is no longer royalty-free.

Wave Computing Closes Its MIPS Open Initiative with Immediate Effect, Zero Warning

https://www.hackster.io/news/wave-computing-closes-its-mips-...



So we're only left with RISC-V? SPARC seems a dead end, and POWER too.

Power ISA isn't dead. Besides IBM's server offerings, you can buy workstations with a POWER9 retail right now. POWER10 is in the works and it's a royalty-free ISA.

That's in addition to all the little embedded cores that are still around.


IBM is actively investing in POWER though

Would be nice if they brought that back into the laptop realm.

There is a project to make a PowerPC laptop: https://www.powerpc-notebook.org/en/ though it does not use an IBM CPU.

They claim to be using a NXP T2080 CPU as the core of their laptop. This is doomed to failure.

The T2080 is a high-speed communications processor. It is not designed for energy efficiency. It can draw up to 20W at full power, and does not support the sorts of power-saving features that would be required for a mobile part. The datasheet doesn't even specify the processor's idle power consumption! Some of the tables can be read to imply that it might be able to idle down to roughly 7W, but that's still completely unsuitable for a laptop.


Are there other Power processors that might be better suited for a laptop?

Somewhat, but not by much. NXP has a couple of other QorIQ parts which might fit a laptop application a little better; for instance, the T1040 series (4 cores, 1.5 GHz, 7W TDP) seems like a better fit. IBM's POWER parts are aimed squarely at supercomputing and are completely unsuitable for this use case.

But, quite frankly, none of these parts are really intended for mobile use. There hasn't been any real interest in mobile Power hardware since Apple got out of the business nearly 15 years ago -- ARM cores have utterly dominated that market.


Where is MIPS used these days? I've only seen one MIPS device in the wild (a DOCSIS modem using some domain-specific Broadcom MCU) but I've heard about it plenty over the years.

Routers mostly. I think some set-top-boxes and DVRs also used it.

Shame, it had quite the history from being in all those old SGI machines.


I like that there is a real option to break away from x86/x64 now. It may not be the best for all use cases, but 10-15 years ago there was no viable personal computing option. I'm not wishing for Intel's downfall either, they seem to do better once there is real competition around. While AMD has made huge strides here, the real problem is there needs to be a high performance processor that doesn't consume power like x86/x64.

Aren’t we in fact just talking about the CPU architecture (a.k.a. the instruction set, ISA)? On the inside, all CPUs are (or can be made) the same regardless of the architecture - the differences dictated only by technical requirements such as desired performance, price, power consumption, etc.

It's done wonders for embedded systems. I still have a Netwinder in my drawer, that was when I switched from various oddball micros to ARM - never looked back. Is ARM the best? Probably not, but that doesn't matter. The only one I kinda miss is the MSP430 - but not enough to keep working with it.

I too am feeling somewhat optimistic about this direction in technology, and for approximately similar reasons, though I don't think I feel so strongly about it as to craft a blog post on the topic.

I read the whole article expecting something programming-related to come (it didn't).


