1) ARM is good for phones, tablets, and other small portable devices that run on battery, because it saves much more power.
2) I think that x64 hardware of similar specs outperforms arm64 in performance. [Personal observation.]
3) I love how I can boot different Linux distros, BSDs, HaikuOS, Windows, and other generic x64 OSes. On my arm64 phone, I can't (the OS must be specially crafted for that specific processor, then carefully tweaked for that device in order to boot). Also, there is a lot of diversity in the arm64 space, which implies that some arm64 hardware can't boot/run an OS that hasn't been crafted for that device.
For my use case, (3) is very important. I am damn sure that some manufacturers will ship hardware that can't boot other OSes (say, a situation where most ARM laptops can't run Linux and only run Windows). I don't want that to happen. Also, booting an OS on arm64 is not standardized (there is a lot of diversity in the arm64 boot process, unlike x64). And some components might never be upstreamed into the Linux kernel (so I'd need to patch my kernel every time, or my kernel would never get updated to a new major version, like many Android phones).
Speaking of other architectures, I don't think RISC-V will become a major architecture anytime soon, and it may suffer the same disadvantages of arm64 that I mentioned above. Maybe ppc64 (OpenPOWER) is a good architecture, but I haven't had a chance to deal with that hardware.
The real root issue though is that ARM devices are generally built as "finished" products, so they don't need to support pluggable hardware, meaning you can make assumptions about which pin on the SoC is used for each purpose, and exactly what it's connected to. This sometimes leads to kludges on phone handsets, where audio jack polarity is incorrect etc. and needs a board-specific driver hack. These hacks were often maintained in a board-specific kernel tree.
DTB certainly helps, but ARM devices are still (to some extent) quite hardware-specific. Moving towards UEFI bootloaders on ARM64 looks promising, and there are now "generic" ARM64 bootable images for some Linux distros.
Hopefully we'll see more use of a standard UEFI interface on ARM devices, so we can get towards having common bootable OS images, without board specifics needing to be handled by the OS!
IMHO, any locked device sold that connects to the internet should be forced by law to unlock after 6 months of no updates.
I would love to see Verizon forced to do this. It would be glorious.
That's mostly the gist of what you're saying, but I think it needs to change; a lot of embedded platforms are adopting device tree now and it's just making things more difficult.
Device Tree is IMO better than x86's implementation, because ACPI just gives you bytecode that you, as the kernel, have to trust to perform actions, whereas device tree gives you a declarative view of the system.
With ACPI the configuration is packaged with the hardware, with device tree it's not. ACPI might be worse for certain reasons but it is more usable in practice.
But there's tons of buses that aren't reconfigurable on PCs. You just don't generally see it because it's papered over with ACPI. But all the same I2C/SPI and random devices sitting off the major system management devices still exist on PCs.
> With ACPI the configuration is packaged with the hardware, with device tree it's not.
Eh, I've seen both options with both models. And ACPI being less declarative means that kernels have to ship larger patch tables to fix broken ACPI code. With device tree, a declarative approach means the drivers are free to fix issues in whatever way that kernel thinks is best.
> ACPI might be worse for certain reasons but it is more usable in practice.
I think it's easier to ship an MVP and forget about it than device tree, but device tree is easier to maintain over a long period.
I don't really disagree, but for the most part all I've ever been able to buy are ARM boards with horrible support. If people aren't going to take up DT then it might be wise to find some middle ground.
Also -- you can probe for I2C/SPI devices.
Also, how far along are UEFI bootloaders for ARM64?
I get there is more to it than just an ARM vs x86 straight comparison, but even my 2019 MBP feels less smooth when it comes to simple and comparable tasks like browsing sites while watching videos.
Jeff Atwood has Opinions about this, e.g. https://blog.codinghorror.com/the-tablet-turning-point/
The virus scanners tend to give standard PCs a massive disadvantage in perceived performance against locked-down tablets, too.
I can't stand using my iPhone or my iPad for anything other than very basic features since they're so slow and limited compared to how quickly I can get things done on one of my real computers.
Selecting text or placing the cursor on a phone is an exercise in frustration. Browsing the web too - I have to contend with every other website treating my phone like a 2nd class citizen and some sites like Reddit or Twitch try so hard to force you to use their app. On Google I have to either use desktop mode and zoom in after each search or I have to put up with shitty AMP results.
I basically associate ARM with shitty locked-down operating systems that don't have my best interests in mind.
That's just a difference in UI - a small phone touchscreen is simply not a very effective input device compared to even a plain touchpad using the exact same area. (To the point where I think all touchscreen-only devices should support "touchpad mode" with a pointer out of the box. VNC/RDP implementations on mobile devices have been doing this for ages, so it's quite physically achievable.)
Photos on my iPhone SE (2016) or 9.7" iPad Pro (2016) is significantly more performant than Photos on either of my MBPs (13" & 15", 2015).
Like massively so, to the point that I absolutely loathe having to use my Mac whenever I need to do anything with a photo or video.
Phone UIs in general get more love just because, with that tiny screen and given the normal casual use cases, users are far less tolerant of slow or poorly designed UIs on phones. The squeaky wheel gets the oil, and on mobile any UI issue is instant death. On desktop it's just an annoyance.
The GP didn't seem to believe it was possible that a phone could be "less laggy".
In line with your own remarks, there are a number of common use cases that have been optimised on phones in a way that they never were on desktop, resulting in a superior user experience on the phone vs the desktop.
With some of the mentioned trade-offs of course.
Android is more free than iOS, which means Android apps are much more free to spy on me.
If you are technical enough and conscientious enough to avoid this problem that's great, but that's not the majority of the market. For most users freedom means malware and highly invasive surveillance.
The Internet and the computing ecosystem decades ago was comparatively open and free. Then the barbarians came in the form of surveillance capitalism and industrialized malware operations. Now the fields are salted, the smaller settlements are burned to cinders, and everyone is cowering behind the high city walls of various walled gardens.
There's no system that will make everyone happy, so the interests of the many for security outweighed the interests of the few for tinkerability.
If anything I thought my comment implied the opposite, since the community is so heavily DIY that it's comparatively a Robinson Crusoe scenario.
I'm curious about this, does this mean the ios app equivalent of facebook and every other service that's been caught spying on customers, somehow doesn't do exactly the same thing on that platform?
If I use facebook on an Apple device I become immune to their tracking?
They don’t have to be; it’s just that this is the option we have right now.
How much of that is because it's using a 120hz display, vs 60hz on most computers?
I suppose that Intel would win in a competition of "maximum single-thread perf regardless of power or cost". But that does not sound like something that is intended by "similar specs".
On (1), I would point out that saving power is beneficial to everyone and will likely only become more important in the future.
(3) is a bummer but is something that we can possibly solve with technologies like the Linux Device Tree. Certainly, it is not impossible to build an open platform on ARM if the manufacturer cooperates. To what extent they will do so, remains to be seen I think.
I had a low-cost laptop with an AMD processor. It compiled faster and handled load better than a Raspberry Pi of comparable hardware (similar specs, not exact specs).
The Raspberry Pi was of course cheaper than the AMD laptop (about 0.4x the price).
I don't think the AMD laptop consumed a lot more power than the arm64 Raspberry Pi. With arm64 being lower in cost and consuming less energy, you can feel that arm64 is better than x64 in perf/dollar or perf/watt, but the reality is that there is no comparable hardware across both architectures' respective product ranges, and the difference is not that apparent when it comes to arm64. Also, my AMD laptop compiled, cross-compiled, and handled large loads much better than arm64 hardware of comparable specs.
1) Yes, power saving is a desirable feature. But I can't see arm64 workstations around, so if you need performant hardware, I think x64/ppc64 becomes inevitable.
3) Definitely a bummer, still unresolved. Let's see how it goes. x64 has the benefit on this point.
It might be memory issues; x86 machines have very wide memory buses. But I am still left a little confused, and in practice it still has an impact.
Not the person you replied to, but this could have a couple interpretations where Intel wins. Here's one option:
If you take two CPUs with similar core counts, clock speed, and transistor counts, then which one will have higher throughput, lower latency, ... for typical operations? This is open to interpretation of course (e.g. if you didn't need floating point throughput then AMD Bulldozer architectures were competitive when released), but it's not unreasonable to evaluate CPUs based on your personal workloads.
It seems to me that a simpler instruction set that incurs much less overhead than the above will simply have a higher performance ceiling. The only advantage I see in x64 is that the complex instructions are a form of 'compression' for the micro-instructions. Maaaaybe the resultant bandwidth reduction in reading instructions is worth the performance overhead of x64 instruction decoding.
More significant than that, I think, is that modern 64-bit ARM actually tends to produce denser code than modern 64-bit x86, which is really weird when you consider that ARM64 is fixed-width and x86-64 is variable-width, but backwards compatibility can do that.
More important than that, however, is that x86-64 has a very strict memory model and ARM a more relaxed one, which can cause problems for programmers who aren't careful but has huge implications for the design of the memory backend.
Do you have a citation for that? Everything I've seen says the opposite, including http://web.eece.maine.edu/~vweaver/papers/iccd09/ll_document...
The strongest wording I've seen is that aarch64 is 'competitive with' x86_64.
That being said, I do think that if the same amount of performance engineering were dumped into ARM as has been put into x64, ARM would win. The simpler instruction encoding, less cruft (but not none!), and other modern characteristics would make that optimization go further.
My concern as others have mentioned is lack of standardized interfaces and peripheral enumeration, leading to a lack of flexibility and the loss of ability to run alternative OSes and drivers. That can be fixed, but does anyone care enough to fix it?
That hasn't been the case until quite recently. Before TSMC 7nm Intel was still at least on-par with everyone else. I think it boils down to arm chips being designed from the ground up with power usage in mind (with a fair number having slower low-power secondary cores and such), whereas Intel mostly expects your CPU to always be doing something even at idle.
It's not a very good one. The x86_64 ISA encoding evolved out of the original 8086 instruction encoding, and has inherited a lot of inefficiencies from there -- a lot of one-byte opcodes are wasted on rarely used bytewise instructions (like XLAT), privileged instructions (like the IN/OUT family), or on opcodes that are simply invalid in 64-bit mode.
Most instructions in real x86_64 code are two to four bytes long, not even counting immediate values. A focused attempt could probably do much better.
> Maaaaybe the resultant bandwidth reduction in reading instructions is worth the performance overhead of x64 instruction decoding.
Access to main memory is slow, much slower than decoding circuitry, so it is very well worth it. From what I've read, decoding time mattered early on, hence (among other reasons) RISC; but as chips sped up, decoding time became less important, and then became an overhead as the instructions grew large. IIRC, and I can't back this up so take it with caution, the Alpha chips suffered because of this (but of course you could always take large instructions, decode them into smaller ones, and cache those decoded instrs).
Just my view. Ask a chip expert for a better opinion.
It would be sensible to expect that, but as it turns out the RISC-V C ("compressed instructions") extension is quite competitive with x86/amd64. Of course, ARM64 doesn't seem to have an equivalent to RISC-V C, even the old Thumb mode is gone.
Performance per watt is probably going to be the primary driver. "How cheaply can I rent a 'vCPU'?" drives decisions.
Can you elaborate on this a bit? What devices are you comparing (on both sides of the comparison)?
People say it wouldn't get past regulators; I doubt it. Qualcomm-Broadcom was blocked because the US feared the latter might stall Qualcomm's telecom research, giving an advantage to China. But it's the other way around for the ARM Holdings-Nvidia deal; why wouldn't the US allow its own company to grab a future-proof computer architecture?
Sidenote/rant: look what you have done, WeWork. Burning VC money on a nonsense company has led to a point where the future of computing democracy is now under threat. Well, SoftBank wasn't a good place for ARM Holdings in the first place, but at least I thought they were in it for the long haul; I was wrong.
I feel RISC-V is the true future for open-source computing.
I'd be very surprised if this was the case.
Xcode/iOS's emulator is actually x86 - which is why it seems faster than Android's default emulator.
This is the same as the Android x86 "emulator".
(Mostly referring to the actual processor/OS itself; buttons that "vibrate" the phone are not really technically interesting to me.)
Of course, Arm can figure all that out too, so somewhere inside Arm they've tried to replicate Apple's "should we switch to RISC-V" spreadsheet, and Arm sets the price of the architecture license so that the spreadsheet will always say "don't switch to RISC-V".
The other thing to consider: Arm's core design business is quite valuable; probably more valuable than their architecture business. But Apple does their own core design, and would not be interested in selling core IP to others, so they would get no value from that business. But if they were to buy Arm, they'd still have to pay for it.
Owning the instruction set and reference designs maybe isn’t terribly useful, when your primary competitors already have perpetual licenses to the technology and do their own core designs.
>The new company intended to further the development of the Acorn RISC Machine processor, which was originally used in the Acorn Archimedes and had been selected by Apple for its Newton project.
IIRC, it began as an expansion board for the BBC Micro.
This article implies that initial ARM development was undertaken between 1983 - 1985:
> As part of the sale process, SoftBank approached Apple to gauge its interest in acquiring Arm, according to people familiar with the matter. While the two firms had preliminary discussions, Apple isn’t planning to pursue a bid, the people said.
Edit - I'm getting downvoted, but if you're going to suggest that Apple is departing from the ARMv8 instruction set - which is a really important point if true, and which I've not seen any evidence for - then you really need to supply a citation.
From this, and from Apple's very, very strong marketing push to NOT use the name "ARM", I have a suspicion that the Apple Silicon instruction set won't be a fully standard Arm Holdings instruction set.
Apple isn't shy about communicating to developers that Apple Silicon uses ARM ISA. This is about conveying to consumers their unique platform advantage.
Agreed that the Apple Silicon branding was very strong but assumed that it was just to distinguish themselves with consumers from those who will also use ARM on the desktop.
Apple uses v8.4
Current ARM designs use v8.2
As you might suspect, that leaves Apple stuck doing the software support. It also gives them additional instructions that aren't proprietary to them, but are currently unique to them.
This is puzzling to me. I assume it's simple egotism. Apple marketing and also the user community at large tends to present their technologies as exclusive and product categories as invented by them, even when it's a stretch. This has a long history.
See this post: https://www.realworldtech.com/forum/?threadid=187087&curpost...
2) ARM instructions have a fixed length of 4 bytes. This means that there's a limited number of instructions you can add. Intel, for instance, is different: they have a very complex encoding scheme that gives them, potentially, limitless encoding space.
3) If any licensee, especially a popular one like Apple, decides to take part of that encoding space for their own instructions... do you see the implications? Now, that space is unavailable to ARM. ARM, the owner and custodian of the ISA, is put in an impossible position: accept those instructions into the ISA (if Apple allows it, because now it's their IP) or lose unity in their ecosystem (fragmentation). In any case, they've lost extremely valuable encoding space and, more importantly, control over their own ISA.
And that's why not only ARM don't allow licensees to implement their own instructions, they will never allow it (generally speaking; they have reserved some encoding space in the M profile for custom instructions, but that's controlled customization in a profile that has fewer instructions to begin with).
I don't really know what the backing of this is either. I'm sure they added a few of their own instructions. And I read that they single handedly pushed the ARM platform to 64bit (citation needed, definitely not sure if that's really true).
Regarding the push for a 64-bit chip, I've heard the exact opposite. AArch64 (64-bit ARM) was an ARM design without a customer, and Apple was the first to buy into it. I've heard that many said it was a premature jump, and there was a lot of scepticism within ARM that they'd be able to sell it (this happened when ARM was selling mobile designs exclusively, with no intentions of getting into servers). Still, they went ahead with it, and it was an unexpected success.
ARM very recently started allowing licensees to add custom instructions, but only for Cortex-M (embedded).
Arm very recently started allowing Core licensees to add custom instructions (on M33 and M55). But they've allowed architecture licensees to do that for a long time (see XScale).
IMO, there's an important difference between Apple adding new instructions that are not in the spec, and Apple convincing ARM to add new instructions to the spec and then implementing those instructions.
In the first case, those are Apple-only instructions. In the second, that's just Apple implementing new ARM instructions.
> But they've allowed architecture licensees to do that for a long time
I don't agree that developing specifications and extensions based on the needs and desires of licensees is the same as permitting licensees to add custom instructions not permitted by the specification.
> They don't benefit from bossing Apple around
ARM benefits greatly from limiting fragmentation of their platform by exerting control over licensees.
They've reserved a block for that in the encoding space. Quite exciting! And it makes a lot of sense in embedded world.
As I said, I don't think that's the case. ARMv8-A supports the 3 page sizes and has a register so that implementations can advertise which are supported.
> Armv8-A supports three different granule sizes: 4KB, 16KB, and 64KB.
> The granule sizes that a processor supports are IMPLEMENTATION DEFINED and are reported by ID_AA64MMFR0_EL1
> we’re evaluating Ampere Altra for the new breed of our Arm64 cloud instances in late 2020. — Scaleway
Also Apple already hold a perpetual license for the ARM architecture, the major costs of their investment in it were spent years or even decades ago.
The obvious counter is the sunk cost fallacy, but what does RISC-V actually have to offer that Apple doesn't already have with ARM? Those sunk costs have bought Apple actual tangible benefits on ARM that RISC-V doesn't have, such as architectural features like the secure enclave, better device integration, better GPU options, more mature device driver support, a stronger development and testing tools ecosystem.
The opportunity for RISC-V is new use cases not already strongly addressed by ARM, or where ARMs more mature ecosystem offers weaker benefits, for companies not already strongly invested in ARM.
Apple has a platform with hundreds of millions of active users, perhaps over a billion actual devices in use. In a situation like that, conservatism is warranted. The guiding principle is simple: If it ain't broke, don't fix it. Meaning that the argument doesn't start with any sort of discussion about technical advantages that RISC-V might have over ARM, or challenging reasons to stay on ARM as if, having dismissed them, the obvious default plan of migrating to RISC-V would kick into action. It starts with trying to demonstrate that something about ARM is broken. Because, if it ain't broke, don't fix it. So. . . is it hindering Apple's strategic or operational priorities in any way?
RISC-V is promising, but it's not there yet. A small company like Pine64 isn't necessarily going to have the resources to, for example, get the Linux kernel support for RISC-V into a production-ready state.
Apple, for its part, adopted ARM years before the RISC-V project even started.
*Microsoft, Apple, Google, Amazon, FB
[Edit 1]: A quick search throws up that Google, IBM, SONY etc are already members of RISC-V foundation.
Why would a nation choose to go to the moon, or build something otherworldy?
Companies have been successfully working with ARM for a loooong time. It's a massive, well supported, ecosystem that works quite well (no customer complains about delayed/cancelled products related to ARM's performance). What does RISC-V offer that's worth jumping into the void?
That's startup territory, you won't see big companies betting big on RISC-V for a while, if ever.
Getting ATX (or other standardized) form-factor boards with chips big enough to test applications on, in my opinion, should be a high priority. As much as people talk about "post-PC" or whatever, PC-form-factor computers are the center of development and understanding, especially for server applications.
Being able to buy or use existing power supplies, chassis, disks, NICs, and GPUs (and maybe DIMMs, if that's not a massive engineering challenge) and put together a "PC" is, I think, really the key to development of the architecture, and they're a portal to applications like Windows laptops (see Qualcomm AArch64 laptops) and Chromebooks (a big part of what I was thinking of when I began and abandoned porting V8).
If there were ATX form-factor AArch64 boards available for less than a thousand bucks early on, it would have hit Android and laptops possibly a year or two earlier, with significantly less specific investment from ARM Holdings.
I think RISC-V will eventually come to consumer facing (rather than embedded) products, but it might take another 10 years.
Or as the author alludes to, have less power consuming non-useful overhead processes running in the background? Is it the equivalent of pipeline length that Apple pointed out when making the Intel transition back in the 2000s? (the "megahertz myth"?)
Intel in comparison likes backwards compatibility. Your new motherboard probably could boot MSDOS 1.0 if not CP/M, plus or minus the need for a floppy controller LOL. Also Intel stereotypically likes major upgrade "sets", so Intel likes to release a giant pack of new features like Streaming SIMD Extensions (aka SSE) rather than how ARM will toss out literally one new opcode in a release.
Over the long term (decades) it seems the ARM strategy has been VASTLY more successful.
Regarding backwards compatibility, is this basically that certain companies developed new products so quickly that there was no point (or value) to supporting old hardware (for the company, or desired by users)?
And therefore delay + overheads of waiting for compatible new CPUs was worse than the value of what they did release each time? Plus the generally worse heat/power utilizing designs?
There's nothing special in the chips themselves; it's the business model that's quite different from Intel's.
Historically, ARM (and their licensees) have designed their cores targeting power efficiency over everything else, but there's nothing that prevents Apple or any other licensee from designing a power hungry ARM CPU.
Edit: Yes Samsung has something called Exynos, Cortex is from Broadcom, Qualcomm has Snapdragon. Basically there is much more competition in ARM.
I am looking forward to seeing what they do with ARM. Probably the most interesting thing will be what kinds of custom SoCs will appear.
But don't get your hopes up for any "hackintoshes." At least, not commercial ones. You will probably be able to scare up homebrew devices (like they have nowadays, with Intel), but Apple has always been rather...punitive towards corporations that produce Apple clones.
Like the Wintel (Windows/Intel) monoculture, ARM brings the GoogleArm monoculture. The Apple/Arm monoculture that some people seem to hope for would be no different: heavily restricted, with fewer possibilities for free software.
MIPS64 offers us a good royalty-free alternative, to be independent of the Western companies (UK/US based).
If Nvidia gets ARM, I fully expect the same shenanigans Intel pulled on X86.
So I hope a third party alternative (MIPS, Sparc, anything) will be there, hopefully with a major fab like Samsung behind it.
Wave Computing Closes Its MIPS Open Initiative with Immediate Effect, Zero Warning
That's in addition to all the little embedded cores that are still around.
The T2080 is a high-speed communications processor. It is not designed for energy efficiency. It can draw up to 20W at full power, and does not support the sorts of power-saving features that would be required for a mobile part. The datasheet doesn't even specify the processor's idle power consumption! Some of the tables can be read to imply that it might be able to idle down to roughly 7W, but that's still completely unsuitable for a laptop.
But, quite frankly, none of these parts are really intended for mobile use. There hasn't been any real interest in mobile Power hardware since Apple got out of the business nearly 15 years ago -- ARM cores have utterly dominated that market.
Shame, it had quite the history from being in all those old SGI machines.